This article presents a transcript of Episode 341 of the Speaking of Psychology podcast, published by the American Psychological Association (APA). In this episode, the host talks with Vaile Wright, PhD, about the current state and future prospects of artificial intelligence in psychology, as well as the benefits, concerns, and potential risks associated with its use. The original episode is available here: Speaking of Psychology: Can AI help solve the mental health crisis? With Vaile Wright, PhD
People are increasingly turning to chatbots for mental health advice and support—even as researchers work to develop safe, evidence-based AI mental health interventions. Vaile Wright, PhD, discusses the promises, limitations, and risks of AI in mental health; how AI tools are already being used in mental health care; how these tools could help expand access to care; and how AI might change what therapy looks like in the future.
Vaile Wright, PhD, is the senior director of the Office of Health Care Innovation at the American Psychological Association, where she develops strategies to leverage technology and data to address issues within health care including improving access, measuring care, and optimizing treatment delivery at both the individual and system levels. As a spokesperson for APA, she has been interviewed by television, radio, print, and online media on a range of topics including stress, politics, trauma, serious mental illness, and telehealth and technology. She received her PhD in counseling psychology from the University of Illinois, Urbana-Champaign and is licensed in the District of Columbia.
Kim Mills: Where do you turn when you need a listening ear? To a friend? To a therapist? For an increasing number of Americans, there’s another possible answer: to a chatbot. Artificial intelligence is rapidly reshaping many parts of our lives, from work to education to healthcare, and researchers are working to create safe and evidence-based AI mental health interventions. At the same time though, people are already turning to AI chatbots and companions to provide mental health advice and support—even those that were never designed to do this.
So what are the promises, limitations, and risks of AI in mental health care? How are AI tools being used in mental health care today? Could they help expand access to care for underserved populations? What are the ethical concerns around AI and mental health? And how might AI change what therapy looks like in the coming years?
Welcome to Speaking of Psychology, the flagship podcast of the American Psychological Association that examines the links between psychological science and everyday life. I’m Kim Mills.
My guest today is Dr. Vaile Wright, a licensed psychologist and senior director of the Office of Healthcare Innovation here at APA, where she focuses on using technology and data to optimize and increase access to mental health care. Dr. Wright’s research has been published in peer-reviewed journals including Professional Psychology: Research and Practice, Law and Human Behavior, and the Journal of Traumatic Stress. As a spokesperson for APA, she’s been interviewed by media outlets including CNN, NBC News, the Washington Post, and NPR on a range of topics including stress, serious mental illness, telehealth and technology, and AI and mental health.
Vaile, thank you for joining me today.
Vaile Wright, PhD: It’s great to be here, Kim.
Mills: And since we’re coworkers, I’m going to call you by your first name.
Wright: That works for me.
Mills: Okay. So let me start by asking, how are AI tools being used in mental health care today? Are psychologists and other therapists already using them in their practices, and what can these tools do?
Wright: We’ve seen a real explosion of products geared towards addressing mental health and specifically a trend towards assisting providers. A lot of these tools are really behind the scenes tools, so they help automate administrative tasks. So you have a whole bunch of tools that are geared towards helping providers write their psychotherapy notes so that they don’t have to type them themselves. You see tools being used to create patient education sheets that would’ve taken hours to do, but can be done very quickly using some of these open AI types of sources.
But the reality is, in a survey that we did recently asking practitioners who provide services what they’re doing in their practice, less than 5% said that they use any type of generative AI on a daily basis. So I think that while there are a lot of products being marketed, uptake has been pretty slow.
Mills: Why do you think that is? I mean, are psychologists just late adopters?
Wright: I think psychologists and mental health providers in general are maybe more risk averse than others, and for good reason—the work that we’re doing when we’re seeing patients is very sensitive, and we want to ensure that we’re following our ethical guidelines by doing our best to protect the privacy and confidentiality of patients. And when you introduce emerging technologies into that equation, you do inevitably potentially put that at risk.
Mills: What is the difference between a digital therapeutic and any other mental health or wellness app?
Wright: Yeah, this can get kind of tricky in part because they’re both apps that you get on your phone or your tablet. But what makes digital therapeutics unique is that they’re evidence-based and FDA-cleared software devices that offer fully automated treatment protocols. So they can provide 8 weeks of cognitive behavioral therapy just like a therapist would do, but you access it on this software app on your phone instead, and they can only be accessed after you’ve seen a provider who evaluates you to make sure that you’re appropriate for this type of treatment. And then they monitor you over time. That’s really different than the wellness apps that you can download on your own off the app store. So you go to the app store, you say, I want a sleep meditation app. You download it and you use it as you want, and it teaches you some skills, but it’s not making medical claims. It doesn’t say it treats your insomnia.
And the other thing that really differentiates the two is while these FDA-cleared products are regulated—meaning that they have to maintain a certain level of evidence, safety, privacy, security—these direct to consumer apps, the wellness apps that you are probably more familiar with, they don’t have to follow any of those rules. They don’t have to prove that they work. They don’t have to prove that they’re safe. And they actually don’t have to prove that they keep your data private. So you often are putting yourself at potential risk for a data breach where these companies may take personally identifying information you’ve put into the app and actually sell it.
Mills: So these digital therapeutics, they’re effectively being prescribed to you by your psychologist? I mean, is it akin to that sort of thing?
Wright: Yeah. The way that the FDA describes it is prescribe or order. So just like you might order a medical device from a doctor, like A1C monitoring on your arm, you have to be evaluated first by a provider. They have to determine that this is an appropriate treatment for you, give you a code so you can access the app, and then they monitor you over time to ensure, again, that it’s working, that you’re using it appropriately, and that you don’t need a higher level of care because maybe you’re getting worse. So it’s a very different way of thinking about delivering therapy than what we’re used to.
Mills: There’s a lot being written about the mental health crisis in the U.S. right now and the shortage of practitioners to deal with it. Do you think that AI tools have the potential to address and basically beat the shortage?
Wright: Indirectly. So AI is not going to create more providers. That’s just not the reality. But what it could do is make providers more efficient, which may make them less prone to burnout, and then less likely to, say, leave the field for something else. So it could help providers feel like they’re more able to do their job well.
I think the other place that it could really address the challenges that we have is in predictive care. So we have a health system that’s set up where we wait for people to basically be in crisis in order to seek out mental health treatment, and then that puts a burden on the number of providers that exist. But what if we could reach people before crisis? What if we could use these AI tools to help prevent mental health disorders so that they don’t actually ever need that higher level of care from a provider later in life?
I think the other thing is the shortage issue is actually pretty complicated. So yes, it’s true. We just don’t have enough providers, but it’s also true that the providers we have are disincentivized from taking insurance because the reimbursements are too low. So what if AI could play a role there? How could it play a role in making sure claims are really appropriately completed so that reimbursements don’t get denied? How can we use AI to help show the value that providers bring to the space so that payers feel like they will pay them more?
Mills: I mentioned underserved populations in the intro, and I’m wondering if you think that AI therapy tools will help expand access to care for that group of people, or are they going to risk deepening the digital divide?
Wright: It depends on how we do it, right? So the CMS administrator the other day mentioned that over 98% of Medicaid beneficiaries have a smartphone. So I don’t think that that’s where I’m worried about the access issues. And if you can create products that offer, say, multilingual support through an AI in a way that you can’t in the provider space, then I think that has huge potential to reduce health inequities and actually get people more of the care that they need, that’s more personalized, that’s more culturally tailored, and that actually is what they want.
At the same time, we know that particularly in these open AI models that rely on the internet for their training data, bias exists. If you have biased input, then you’re going to have biased output. And so there have been examples where an AI tool trained on non-representative data, data that really only reflects the majority of the population, makes errors, and those errors actually increase health inequities. It’s really about how we develop the tool, how we monitor the tool, how we make corrections when we identify biases, and then ensuring that that cycle continues as development occurs.
Mills: Are there specific mental health conditions, say anxiety, depression, PTSD, where AI-based tools may be more or less effective?
Wright: The early research that I’m seeing does seem to suggest that both the AI tools and these other mental health tools work best on mild to moderate symptoms of some of these more common disorders like anxiety and depression. And I’ll give an example. I think the most promising example that I’ve seen of a generative AI chatbot is coming out of Dartmouth. It’s called Therabot. And in their first randomized controlled trial, they did find that the chatbot improved symptoms of depression, anxiety, and eating disorders. Now it was only the first trial. They obviously need more trials, as well as even more rigorous research studies, but it is a promising first step towards what this could really look like when you’re developing a chatbot for mental health purposes that’s rooted in psychological science and is rigorously tested. And I think it’s very promising.
Mills: In talking about the digital therapeutics, what disorders are they most commonly treating right now? I mean, where are we in the marketplace? How far along?
Wright: We’re still very early in the marketplace, and there’s a variety of reasons for that. There have been, I think, some barriers to scalability as they relate to the regulatory and reimbursement processes, but we can save that for another day. But what we are seeing in the space is digital therapeutics to treat insomnia, substance use disorders, depression, and anxiety, and then you’re also finding some digital therapeutics used as diagnostic aids, for things like autism and ADHD. So I think it’s growing and expanding, but there have been limitations up to this point that we at APA, at least, are trying to address.
Mills: Let’s talk about some of the ethical concerns around AI and mental health. I mean, you mentioned the issue of data privacy, for example. What privacy issues should people consider when they’re thinking about using a mental health or wellness app?
Wright: Well, I think one of the things that surprises people the most is that most of these tools, if they’re not FDA cleared, are not HIPAA compliant, meaning that there is no regulatory body or law that keeps your personal health information safe. And we can never guarantee a hundred percent that a data breach won’t happen. In fact, we’ve seen lots of data breaches over the last several years. There might be information that you are exchanging with a wellness app or a mental health bot that you don’t want to be out in the public. You might not want your boss to know that you’re using an app to help you reduce your alcohol use, even if it’s a beneficial thing for you to do. That just might not be the information you want out in the world.
And I think part of the real challenge for us as consumers is the business model that a lot of these technology devices operate under. They have to make a huge return or profit, but it’s a challenging space to do so. And so sometimes what happens is with that tension, they make bad choices, which we’ve seen in the past where we have had apps be fined by the Federal Trade Commission for selling people’s data to social media companies against their privacy policy. So it’s not that these apps can’t be helpful, I think they really can be. It’s just consumers need to be aware of what are the benefits and what are the potential risks by using them.
Mills: How much can a consumer tell? I mean, if you’re just reading the fine print, when you’re ordering one of these from say the app store, how do you know that it’s going to be a good one?
Wright: So challenging. I don’t know about you, but I never read my terms of service on the apps, even when they pop up, regardless of what they are. And so I think that’s just a human thing that we do. So short of reading the terms of service and the privacy policy, if you can even get through all the legalese, the thing we often tell people is to do a little bit of homework. Instead of just relying on the number of stars in the app store, you’ve got to go to the website and really look at who’s developing it. Do they have experts on their advisory boards? What are they saying that the app is built on? Is it built on some sort of known psychological principle like cognitive behavioral therapy? Have they done any of their own research? It’d be great if they had. If not, do they point to research that supports the principles that they’re including in their app? Things like that at least give you some sense that this is a worthwhile thing to invest your time into.
Mills: Or you could use AI like ChatGPT or Copilot to read those terms of service and summarize them for you.
Wright: That would be an excellent way to do it. I mean, I think that is really the creative thing to be thinking about: how can we use AI, which is really good at taking big amounts of data like that and synthesizing it in a way that we can understand. I think that’s an excellent idea.
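For readers curious what this might look like in practice, here is a minimal sketch of summarizing an app’s terms of service with a large language model. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, prompt wording, and file name are illustrative placeholders, not tools or settings endorsed by the speakers.

```python
# Illustrative sketch only: ask an LLM for a plain-language summary of an
# app's terms of service, focused on privacy and data use.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_terms(terms_text: str) -> str:
    """Return a plain-language summary highlighting data collection and sharing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize legal documents for non-lawyers. Highlight "
                    "what data is collected, whether it is shared or sold, and "
                    "how a user can delete their data."
                ),
            },
            {"role": "user", "content": terms_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical file containing the pasted terms of service.
    with open("terms_of_service.txt", encoding="utf-8") as f:
        print(summarize_terms(f.read()))
```

The same pattern could be pointed at a privacy policy; the summary is a convenience for orientation, not a substitute for the document itself.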
Mills: Let’s change gears a little bit here and talk about the big question, particularly for people in the field of psychotherapy. Could AI replace a human psychotherapist? Or are there aspects of what a practitioner does, for example, empathy, nuance and building a relationship, that AI just won’t be able to replicate?
Wright: So no, it’s not going to replace therapists, but I will say it will likely appeal to a variety of individuals who either don’t want to see a therapist or don’t need that higher level of care that a therapist provides. It’s never going to replace human connection. That’s just not what it’s good at. It’s good at, again, synthesizing lots of information and rapidly telling you things that you want to hear, or things that are helpful. And part of why I don’t think AI is going to replace humans or therapists is that there’s a difference between knowing and understanding. So AI knows stuff because it’s trained on huge, huge amounts of data, but that doesn’t mean it actually understands stuff.
So I’ll give an example. An AI chatbot unfortunately knows that some illegal drug use makes people feel better. It gives you a high, and if somebody’s saying, I’m low and depressed, that might be advice it gives, but it doesn’t understand that you don’t give that advice to people in recovery from illegal drug use. But it has actually done that. That’s a true example of what an AI chatbot did for somebody who was trying to seek out assistance. So they know, but they don’t understand. And that distinction, while philosophical to some, is actually really critical when we’re talking about the use of these for therapy.
Mills: On the other hand, could AI-driven tools help some clients feel more comfortable opening up, especially if they feel like they’re being judged by a human therapist?
Wright: That does seem to be the case. So there’s been some early research that suggests that particularly younger individuals report greater comfort talking to anonymous AI chatbots about their mental health and emotional well-being than they do to a real person, because they’re incredibly concerned about a degree of judgment. They’re also concerned that if they go to a real person that their parents will get involved and they’re worried about the judgment from their families as well. So I do think that there is, again, some room for thinking about how do we use this tool to reach people that maybe wouldn’t seek out treatment otherwise.
But I think the key is always having a human in the loop. Somewhere along the way, we have to ensure that these products, when they’re being used, particularly by younger and more vulnerable people, are not putting them at risk of harm. And the only way I think we can do that is ensuring that humans are some part of the process, with ongoing monitoring of what’s happening and post-market surveillance to ensure that products are safe and effective.
Mills: What safeguards would you want in place before you would recommend an AI mental health app to a patient?
Wright: At a bare minimum, I would want an app to demonstrate that it has some effectiveness and some level of safety. Those are, for me, the things that at the very lowest I would want to have. So when I’m looking at what makes a good app, I run down my internal list. One, is there evidence that it’s rooted in some psychological principle that we know is effective? Have they done any of their own testing or demonstrated that it works at whatever it is they say it’s going to work at? So not just does it keep you on the app for as long as possible. I’m not thinking about engagement stats, but actual outcome stats: does it actually make people more emotionally resilient than they were before they used the app? What level of subject matter expertise has been involved in the development? Or is this just some well-intentioned person who really doesn’t understand the nuances of psychological well-being? And then what kind of post-market surveillance is happening? How are they ensuring that your data is protected and safe? What kind of encryption do they use? Where do they store your data? There are so many questions that I know I ask when I’m thinking about whether or not this would be a good product for somebody to use.
Mills: I know APA is looking at some products that we might partner with. How are we making those decisions around what we think is going to be safe and effective for the world in general? I mean, given the reputation of APA.
Wright: We have a couple different policies. We have an internal policy around who we allow to advertise and exhibit with us for technology. That again, includes a lot of these questions that I’ve been talking about. My office also has an evaluation tool that we published that’s geared towards psychologists and other providers or patients to help them run down this checklist of questions that I’ve been talking about. I think it’s really challenging to know all the answers and we can’t—so you have to sort of do the best due diligence you can, recognizing that with any technology product, there’s always some risk that it could change its terms of service, it could change its privacy policy, it could update and something new happens. So it’s an ongoing evaluation, I think is what I’m also trying to emphasize, that it’s not a one and done. You have to really be diligent and continue to pay attention to what the app is doing.
Mills: There are so many of these products out on the market right now, and there’s so much data floating around. I mean, we’re just overwhelmed with this data. Should we really be worried, or does it not matter because nobody knows what to do with all the stuff that they’re collecting?
Wright: There is a lot of data and there’s certainly information overload. I think you’re seeing some of that in the wearable space where individuals are just collecting data after data after biometric data and aren’t really sure what to do with it. They want to maybe use it with their providers, but their providers don’t necessarily know what to do with it. It can be hard to interpret different sleep patterns and what that really means.
And I think for some, tracking that level of data actually makes them more anxious because you start to get kind of obsessive about it. I know I stopped wearing my step counter because if I didn’t make it to my 10,000 steps, I was kind of a wreck. And once I let that go, oh wow, I probably still got 10,000 steps, but I didn’t need to track it so much anymore. So yeah, I think it’s personalized. You have to kind of figure out what works best for you, but it’s important to be mindful of how much data you’re collecting and where you’re letting it go, because our health data is very unique, and it can be used for good, but it can also be used for identity theft and other types of things. So just being mindful, I think, is the best thing that we can ask for.
Mills: So what do you think therapy will look like over the next 5 to 10 years given the rise of both apps and digital therapeutics?
Wright: I think in the next 5 to 10 years, you’re going to continue to see the trends that have already started really come to scale. So you’re going to have AI-augmented therapists, meaning therapists that truly have embedded some degree of AI into their practice to make them more efficient, to differentiate them from those who are not using AI, and to make them the best therapists that they can be. So you’re going to continue to see that.
I think you’re going to continue to see better AI chatbot options. I think there will be more that are similar to Therabot, which was developed for the purpose of addressing mental health, than to these companion chatbots that people are using but that weren’t built for that purpose. I think you’re going to see more options for care, and predictive care is, I think, really what has the potential to be a change maker in the next 5 to 10 years, because it could really disrupt the mental health clinical system that we have in place currently, where we just wait for people to get very sick, so sick that they’ll come see a therapist, find that therapist, and hope that therapist has an opening, and then go to weekly psychotherapy sessions for as long as you have to go. I don’t think that in 5 to 10 years that’s the only model for therapy we’re going to see. We’re going to see people get help sooner, suffer less, and actually get different types of technology-driven care that meet their needs in that moment.
Mills: Let me close with a question around chatbots, because beyond therapy, people are increasingly using bots simply for companionship and emotional support. Do you think this is a good thing or is it adding to the loneliness epidemic? I mean, what are the promises and what are the risks?
Wright: Like most things, there are positive use cases and there are negative use cases. So I’ll start with the positive. If you’re having a panic attack at 2 a.m., even if you have a therapist, you’re not able to reach them. So what if you could use a generative AI chatbot that could help calm you down, help remind you to breathe, engage in some mindfulness that gets you through the night? I think that’s one really positive example. We also hear a lot from the neurodivergent community that using these chatbots as a way to practice their social skills has been truly helpful for them.
But of course, you also have the negative use cases, and some of these have been highly profiled in the news. Where you see challenges with these particular chatbots is that they weren’t built for mental health purposes; they were instead built to keep you on the platform for as long as possible, because that’s how they make their money. They do that by coding these chatbots on the back end to be addictive. They’re exceedingly appealing and unconditionally validating. Of course, this is the opposite of therapy. That’s not what we do in therapy. These bots basically tell people exactly what they want to hear. So if you are a person who, in that particular moment, is struggling and is typing in potentially harmful or unhealthy behaviors and thoughts, these types of chatbots are built to reinforce those harmful thoughts and behaviors. And so that’s when we see these cases of individuals who have been encouraged by the chatbot to engage in violence against themselves or others. And those are, of course, the worst case scenarios.
How do we prevent that? We know that people are going to use these chatbots for emotional well-being. It’s the number one reason people are using them across all the research. But how do we ensure that consumers know what the risks are when they’re engaging with them, how to really trust their gut instinct when something doesn’t sound right, and how not to put too much trust and faith in a technology, a chatbot, that isn’t real? It’s an algorithm written by a person to meet a certain need. And that need, again, is not human connection, and it’s not to make you emotionally feel better. It’s, again, to make a profit.
Mills: Well, Dr. Wright—Vaile—I want to thank you for joining me today. I think you now hold the record for the most appearances on Speaking of Psychology, but that’s because the work you are doing is so important and so interesting. So thank you.
Wright: Thank you. I love to hold a record, so if you want to bring me back for a fourth, that would be wonderful.
Mills: We’ll think about it. You can find previous episodes of Speaking of Psychology on our website at speakingofpsychology.org or on Apple, Spotify, YouTube, or wherever you get your podcasts. And if you like what you’ve heard, please subscribe and leave us a review. If you have comments or ideas for future podcasts, you can email us at speakingofpsychology@apa.org. Speaking of Psychology is produced by Lea Winerman.
Thank you for listening. For the American Psychological Association, I’m Kim Mills.
[Source: https://www.apa.org/news/podcasts/speaking-of-psychology/artificial-intelligence-mental-health]