Artificial Intelligence in Health Care: Revolutionizing Patient Interaction and Mental Health Support
The advent of artificial intelligence (AI) in health care has been nothing short of revolutionary. One of the most significant advancements has been the integration of AI tools into electronic health records (EHRs). The pandemic acted as a catalyst, accelerating patients’ adoption of EHR tools at institutions like NYU Langone Health. A groundbreaking study titled ‘Large Language Model–Based Responses to Patients’ In-Basket Messages,’ co-authored by researchers from NYU Stern, NYU Grossman School of Medicine, and NYU Tandon, has demonstrated that AI can draft responses to patient queries with a level of empathy comparable to that of human responses. The study underscores AI’s potential to lighten physicians’ workloads while enhancing communication with patients. Crucially, however, human providers must review these AI-generated drafts before they are sent, to ensure accuracy and appropriateness.
The surge in EHR messages has been overwhelming for many physicians. According to Paul A. Testa, Chief Medical Information Officer at NYU Langone, the number of EHR messages received daily has grown by more than 30% annually, with some physicians now receiving over 150 messages a day. Sifting through this influx consumes long hours and contributes significantly to physician burnout. To address the problem, NYU Langone has been experimenting with generative artificial intelligence (GenAI), which draws on patient data to generate human-like responses to patient questions. In 2023, NYU Langone licensed a private instance of GPT-4, the large language model behind popular chatbots, to test the technology on real patient data while adhering to strict data privacy rules.
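To make the workflow concrete, here is a minimal sketch of such a draft-then-review loop, assuming a privately hosted, GPT-4-compatible endpoint reachable through the OpenAI Python SDK. The endpoint URL, model name, and system prompt are illustrative assumptions, not NYU Langone’s actual integration.

```python
# Minimal sketch of a draft-then-review loop for patient portal messages.
# Assumes a privately hosted, GPT-4-compatible endpoint; the URL and model
# name are hypothetical, and credential handling is omitted.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.example-hospital.org/v1",  # hypothetical private deployment
    api_key="...",
)

def draft_reply(patient_message: str) -> str:
    """Ask the model for a draft reply; the draft is never sent automatically."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft empathetic, medically cautious replies to "
                    "patient portal messages for a physician to review."
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply("My blood pressure readings have been higher this week. Should I worry?")
print("--- DRAFT (requires physician sign-off before sending) ---")
print(draft)
```

The essential design choice is the gate: the model only ever produces a draft, and nothing reaches the patient until a physician has reviewed and, where necessary, edited it.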
The study, published in JAMA Network Open, had primary care physicians compare and rate draft responses generated by GPT-4 against actual human responses. The findings were promising: physicians rated the AI-generated responses as comparable to human responses in accuracy, relevance, and completeness, and in some cases the AI responses even scored higher on understandability and tone. However, the AI drafts were often longer and more complex, indicating a need for further refinement and training. The study was funded by grants from the National Science Foundation, highlighting the growing interest and investment in AI-driven health care solutions.
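The length-and-complexity finding is easy to illustrate. The toy comparison below uses two crude stand-ins, word count and average sentence length, for the validated readability instruments a study like this would actually use; the sample messages are invented.

```python
# Toy comparison of message length and complexity. Word count and average
# sentence length are crude proxies for formal readability scores.
import re

def word_count(text: str) -> int:
    return len(re.findall(r"[A-Za-z']+", text))

def avg_sentence_length(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return word_count(text) / max(len(sentences), 1)

human = "Your results look fine. Call us if the pain returns."
ai = (
    "Thank you for reaching out. Your laboratory results are within normal "
    "limits, which is reassuring. Should the discomfort recur or worsen, "
    "please contact our office so we can evaluate further."
)

for label, text in [("human", human), ("AI draft", ai)]:
    print(f"{label}: {word_count(text)} words, "
          f"{avg_sentence_length(text):.1f} words/sentence")
```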
While the potential of AI in health care is vast, it is essential to consider the ethical implications and regulatory challenges. The future of AI-powered therapy, for instance, is largely unregulated. A recent segment on NPR’s Morning Edition highlighted the case of psychologist and therapist Jessica Jackson, who discovered a mysterious website paying people to upload their therapy sessions. Jackson suspected the audio was being used to train a chatbot, raising concerns about the lack of disclosure and anonymity. The incident underscores the need for stringent regulations to ensure the safety and efficacy of AI tools in mental health.
The use of chatbots in mental health is becoming increasingly prevalent, particularly among younger demographics. The COVID-19 pandemic normalized seeking help through technology, further driving the adoption of AI in mental health services. Historically, the use of technology in mental health dates back to the 1960s and the ‘ELIZA’ computer program. Today, companies are vying to scale up the use of technology in mental health, with tech giants like Apple introducing journaling apps and incorporating ‘Apple Intelligence’ into their devices. These advancements aim to democratize access to mental health expertise, providing support to those who may not have access to traditional therapy.
However, the effectiveness and potential harms of chatbots remain a concern. While some companies have received FDA approval for their chatbots, many others label themselves as wellness apps to avoid stringent regulation. This creates a regulatory loophole, raising questions about the safety and expertise of these AI tools. Jessica Jackson believes that while AI has a role in mental health, it should be used as a tool rather than a replacement for traditional therapy. She encourages therapists to ask whether their clients have used chatbots, and she cautions against sharing recorded sessions for monetary gain.
The ability of AI to deliver compassion, especially in emotionally charged medical situations, is still under scrutiny. Physician and computer scientist Jonathan H. Chen tested an AI chatbot’s ability to handle scenarios involving ethical dilemmas and emotional complexity. In one experiment, the chatbot played the role of a clinician providing supportive counseling to a family member concerned about a patient with advancing dementia. While the AI showed promise, the experiment highlighted the limits of AI in fully understanding and conveying empathy.
Experts like Tina Hernandez-Boussard believe that AI could help meet the growing demand for mental health services. AI-driven chatbots can provide immediate assistance and support, especially when human professionals are unavailable. By analyzing clinical notes and patient communications, AI can also identify patients at risk for mental health issues such as depression and suicidal ideation. This is achieved through natural language processing, a subfield of AI that enables computers to understand and communicate in human language. Hernandez-Boussard’s team has been working on using natural language processing to detect phrases and patterns indicative of mental health concerns, aiding in early detection and intervention.
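As an illustration of the idea (though not of Hernandez-Boussard’s far more sophisticated pipeline), a screen for concerning language can start as simply as a set of patterns run over note text, with every hit routed to a human for follow-up. The categories and phrases below are invented examples.

```python
# Deliberately simple phrase-spotting over clinical note text. A real system
# would handle negation, context, and synonyms; this sketch only surfaces
# candidate notes for human review.
import re

RISK_PATTERNS = {
    "depression": re.compile(r"\b(hopeless|worthless|anhedonia)\b", re.I),
    "suicidal ideation": re.compile(r"\b(suicid\w*|self[- ]harm|end my life)\b", re.I),
}

def screen_note(note: str) -> list[str]:
    """Return the risk categories whose patterns appear in the note."""
    return [label for label, pattern in RISK_PATTERNS.items() if pattern.search(note)]

note = "Pt reports feeling hopeless for two weeks; denies self-harm plans."
print(screen_note(note))  # ['depression', 'suicidal ideation']
# Note the false positive: 'self-harm' is flagged even though the note negates
# it ('denies'). This is exactly why flags must route to a clinician rather
# than trigger automatic action.
```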
Despite these advances, human involvement and supervision remain crucial for effective healthcare delivery when AI is in the loop. Both Hernandez-Boussard and Chen advocate human-computer interaction as a training platform for clinicians to practice empathy in difficult situations. Chen hopes that AI-powered simulations can improve human interactions by letting clinicians rehearse high-stakes conversations in a low-stakes environment, as sketched below. In his essay, Chen provides a checklist of recommendations and warnings for using AI systems in patient care, emphasizing constant testing and monitoring to ensure safe, reliable, and compassionate counseling and advice for all patients.
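Here is a sketch of what such a simulation might look like in code, assuming the same kind of privately hosted, GPT-4-compatible endpoint as above; the persona text is an invented example, not Chen’s actual setup.

```python
# Minimal rehearsal loop: the model plays a distressed family member so a
# clinician can practice a difficult conversation. The endpoint, model name,
# and persona are hypothetical.
from openai import OpenAI

client = OpenAI(base_url="https://llm.example-hospital.org/v1", api_key="...")

PERSONA = (
    "Role-play as the adult child of a patient with advancing dementia. "
    "You are anxious and occasionally frustrated. Respond realistically to "
    "the clinician's counseling and do not break character."
)

def simulate_turn(history: list[dict], clinician_line: str) -> str:
    """Advance the rehearsal by one exchange and return the simulated reply."""
    history.append({"role": "user", "content": clinician_line})
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": PERSONA}] + history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(simulate_turn(history, "I know this is hard. Can we talk about what the next stage of care might look like?"))
```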
In India, AI is expected to reshape how mental healthcare is delivered. Chatbots are being explored as potential solutions; a 2019 study examined a range of chatbots and their features, classifying the systems by purpose, platform, response generation, and dialogue style, and found that most allowed user input only through writing while delivering feedback through a mix of text, audio, and imagery. Dr. Alok Kulkarni, a senior consultant and interventional psychiatrist, notes that AI tools give the public valuable resources on mental health. However, Anju Bhandari Gandhi, a professor, observes that professionals in India rarely use AI tools in day-to-day practice, turning to AI mainly for aid in reaching a final diagnosis.
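One way to picture the study’s taxonomy is as a record with one field per classification axis. The field values below are illustrative; the study’s own category labels may differ.

```python
# Encoding the 2019 study's four classification axes as a record type.
from dataclasses import dataclass

@dataclass
class ChatbotProfile:
    name: str
    purpose: str   # e.g. "therapy", "screening", "self-help training"
    platform: str  # e.g. "standalone app", "web", "messenger"
    response: str  # how replies are produced, e.g. "rule-based", "generative"
    dialogue: str  # who leads the exchange, e.g. "chatbot-led", "user-led"

example = ChatbotProfile(
    name="ExampleBot",  # hypothetical system
    purpose="self-help training",
    platform="standalone app",
    response="rule-based",
    dialogue="chatbot-led",
)
print(example)
```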
AI-powered chatbots and virtual assistants offer remote care for individuals in distress but are still in their nascent stage. Researchers are applying machine learning algorithms to predict mental health patterns; Anju Gandhi is developing an app for diagnosing and predicting depression that suggests interventions for different conditions, addressing the stress and challenges faced by individuals of all ages. AI can reach a diagnosis faster than manual approaches like in-person counseling, but it cannot account for the nuances and complexities of a person’s cognitive and emotional landscape. It is vital that AI models undergo scrutiny and be validated against long-term data to ensure efficacy and safety.
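As a sketch of the predictive-modeling idea behind such an app, though not of Gandhi’s actual design, a classifier can be trained on screening-questionnaire-style features. Everything below, from the features to the labels, is synthetic.

```python
# Bare-bones depression-risk classifier on synthetic questionnaire data.
# Features mimic nine PHQ-9-style item scores (0-3 each); the label is a
# simple threshold on the summed score, so the model has something to learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 9))   # 200 synthetic respondents
y = (X.sum(axis=1) >= 10).astype(int)   # screening-threshold label

model = LogisticRegression(max_iter=1000).fit(X, y)

new_responses = np.array([[1, 2, 0, 1, 3, 2, 1, 0, 2]])
print("predicted risk:", model.predict_proba(new_responses)[0, 1])
```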
While integrating cultural context into mental healthcare is desirable, AI tools may not do it well. AI is prone to perpetuating stereotypes and biases, which can undermine its effectiveness. Although AI can add ease and accuracy to a professional’s work, it cannot replace the competence and empathy of a clinician. AI could still be key to tackling these challenges and reaching remote areas in countries like India, where a growing educated population can access user-friendly apps for mental healthcare. The balance between AI and human involvement, however, must be carefully managed to ensure ethical and effective use.
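One concrete way to surface such bias is to compare a model’s error rates across demographic groups, as in this sketch with fabricated records; a large gap in, say, recall between groups is a signal to rebalance or retrain before deployment.

```python
# Per-group recall audit on fabricated (group, true_label, predicted_label)
# records: of the people truly at risk in each group, how many were flagged?
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

hits: dict[str, int] = defaultdict(int)
totals: dict[str, int] = defaultdict(int)
for group, truth, pred in records:
    if truth == 1:
        totals[group] += 1
        hits[group] += int(pred == 1)

for group in sorted(totals):
    print(f"{group}: recall = {hits[group] / totals[group]:.2f}")
# group_a: recall = 0.67, group_b: recall = 0.33 -- a disparity worth fixing.
```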
In conclusion, artificial intelligence holds immense potential in revolutionizing patient interaction and mental health support. From reducing physician burnout by managing EHR messages to providing immediate mental health assistance through chatbots, AI is poised to transform healthcare. However, the ethical implications, regulatory challenges, and the need for human oversight cannot be overlooked. As AI continues to evolve, it is crucial to strike a balance between leveraging its capabilities and maintaining the irreplaceable human touch in healthcare. By doing so, we can ensure that AI serves as a valuable tool in enhancing healthcare delivery while preserving the empathy and compassion that are central to patient care.