The advisory emphasizes that while technology has immense potential to help psychologists address the mental health crisis, it must not distract from the urgent need to fix the foundations of America's mental health care system.
The report offers recommendations for the public, policymakers, tech companies, researchers, clinicians, parents, caregivers and other stakeholders to help them understand their role in a rapidly changing technology landscape so that the burden of navigating untested and unregulated digital spaces does not fall solely on users. Key recommendations include:
- Do not use chatbots and wellness apps as a substitute for care from a qualified mental health professional, given the unpredictable nature of these technologies.
- Prevent unhealthy relationships or dependencies between users and these technologies.
- Establish specific safeguards for children, teens and other vulnerable populations.
“The development of AI technologies has outpaced our ability to fully understand their effects and capabilities. As a result, we are seeing reports of significant harm done to adolescents and other vulnerable populations,” Evans said. “For some, this can be life-threatening, underscoring the need for psychologists and psychological science to be involved at every stage of the development process.”
Even generative AI tools developed with high-quality psychological science and best practices lack sufficient evidence that they are effective or safe to use in mental health care, according to the advisory. Researchers must evaluate generative AI chatbots and wellness apps using randomized clinical trials and longitudinal studies that track outcomes over time. But to do so, tech companies and policymakers must commit to transparency about how these technologies are being created and used.