Psychological healthcare experts expressed concern over artificial intelligence's growing role in supplanting mental health care, saying the right balance must be struck between human connection and AI-supported care.
During a Wednesday hearing before a House Energy and Commerce subcommittee, C. Vaile Wright — the senior director of health care innovation at the American Psychological Association — spoke to the trends experts are seeing around AI and mental health treatment and to their hopes for safety measures.
At the top of Wright’s list: tailoring chatbots to individual users and building in age-specific guardrails.
“The APA recognizes AI’s immense potential to revolutionize health care for consumers and patients,” she testified. “It’s not an all good or an all bad thing. It’s about how we use the tool appropriately, how we safeguard it, how we test its effectiveness, to ensure that children who are going to continue to use these AI tools — because they’re not going away — will be protected.”
Wright underscored that the speed of AI development is outpacing research into its impact on clinical practice and emphasized the need for guardrails to curtail adverse outcomes in clinical settings.
She also said that a given large language model’s parameters need to be tuned closely to the age of the patient in treatment.
“When we think about the model, it’s really about representativeness and ensuring that whatever training model we’re employing has been trained on the data that’s appropriate,” Wright said. “You do end up seeing these harms when you’ve got systems that were developed for adults. And children are not just little adults. They have very different developmental trajectories. What is helpful for one child may not be helpful for somebody else, not just based on their age, but based on their temperament and how they have been raised.”
In her testimony, Wright cited several new studies documenting adolescent users’ relationships with chatbots both inside and outside clinical settings. The research points to dangerous outcomes for teenage users who do not understand what chatbots are designed for, a gap Wright says tech companies need to address.
“The direct to consumer market is flooded with unregulated chatbots making deceptive and dangerous claims,” she said. “We could encourage companies to make them less addictive in their coding tactics and make it so that user engagement isn’t the sole outcome that they are trying to achieve.”
Wright also recommended auditing and reporting requirements, asking for company transparency around user suicide rates, and said current age verification and restriction protocols fall short.
“The boundary really lies in ensuring that these types of chatbots don’t misrepresent themselves, that they’re not allowed to call themselves a licensed anything,” she added.
Wright’s concerns echo those of other advocacy groups, namely the Parents Television and Media Council, which is asking Big Tech to implement “stringent safeguards” over the types of AI products children can access.
“It is unconscionable that Big Tech has once again unleashed its products on an unsuspecting public without accountability for protecting children,” PTC Vice President Melissa Henon said in a statement.
Witnesses also discussed including clinicians and providers in conversations about regulating AI tools in healthcare. Wright said proper care depends on keeping providers in the loop so they understand both how individual AI systems are trained and how their patients relate to AI.
“It starts with helping providers, clinicians and others really know how to evaluate the tools and what they should be looking for if they’re going to incorporate these tools into any kind of setting that they’re doing,” she said. “And I think it’s about helping clinicians and providers know what questions to ask of their patients about their AI use.”
Beyond understanding how an AI model is built and whether it is suitable for mental health treatment, Wright expressed concern over AI’s tendency to alienate individuals from human connection.
“What do we need to do as a culture to help people understand that coming together and having empathy and having social connections is actually the better solution than turning to these technologies as a replacement for people?” she said. “It can be a huge tool, as long as you use them and then go back to the people that you’re trying to connect with.”