Israeli study finds AI ‘therapist’ helped students, but mental health experts are skeptical

A new study finds that an AI “therapist” named Kai helped college students struggling with anxiety and depression — but psychologists remain skeptical.

Published Tuesday in JAMA Network Open, the Israeli study divided 995 students reporting “psychological distress” into three groups from April 1 to Oct. 27, 2025: one using an artificial intelligence conversation platform, one receiving face-to-face group therapy and a “waiting list” control group.

Researchers found that students discussing their problems with Kai reported better outcomes for clinical depression, well-being, life satisfaction and generalized anxiety disorder than those in group therapy or seeing nobody.

The study also measured post-traumatic stress disorder symptoms but found “no significant differences” between the chatbot group and the others.

It concluded that Kai successfully mimicked the “therapeutic bond” patients form with human psychologists, suggesting medical providers could use bots to treat large numbers of patients at low cost.

Psychologist Anat Shoshani, the study’s lead author, said that could help the growing number of people who bond with AI “companions” because they are unable or unwilling to see a human therapist.

“Artificial intelligence in mental health is no longer theoretical,” said Ms. Shoshani, a professor at Reichman University in Israel’s Tel Aviv District. “It is being used in moments of real distress, and it can make a measurable difference.”

The study found that students who used Ms. Shoshani’s Kai.ai platform remained deeply engaged when psychologists checked on them at the three-week and 12-week follow-ups.

She said the findings call for global insurance providers to cover hybrid AI-human therapy models as an “added layer of care,” provided supervising therapists keep bots from giving inappropriate advice.

“When artificial intelligence is designed with psychological intention, it can become part of how people cope, rather than something they engage with only briefly,” Ms. Shoshani said.

A growing number of young people have flocked to AI bots for emotional support and even romance. In some cases, the tendency of AI programs to affirm whatever users tell them has been linked to suicides.

Reached for comment, several AI and mental health experts were divided on the benefits of young people forming similar bonds with “therapy bots.”

“We’ve seen in the news how overly supportive AI dialogues can lead to disastrous outcomes, like suicides,” said Hider Shaaban, a clinical psychologist who directs the Philadelphia Center for Psychotherapy.

“The other problem is that they can be used as a potential crutch to avoid human connections,” Mr. Shaaban added. “It can feed into someone’s social anxiety and loneliness if it’s used as a source of connection, rather than a resource for tools and tips.”

Matt Hasan, a Baltimore-based AI corporate strategist and ethicist, predicted insurance companies would still cover “AI therapy” as a frontline mental health service.

“The economics are too compelling: always available, low marginal cost, scalable,” Mr. Hasan said in an email. “Payers will frame it as early intervention or triage.”

Several psychologists said it remains to be seen whether chatbots imitating a caring therapist will ease or worsen post-pandemic emotional isolation.

“Going forward, AI will likely be a commonly used mental health resource, especially for those not in acute distress,” said Doriel Jacov, a New York City psychotherapist. “Its role, however, will likely be limited due to its inability to capture a genuine relationship.”

AI potential

Advocates of AI therapy tout the technology’s constant availability in areas with shortages of mental health providers.

They say such psychologist-designed chatbots could offer an easy way to coax racial minorities and low-income people into therapy.

“AI can lower the barrier to entry for mental health support because it’s accessible, immediate and available 24/7, which therapists are not,” said Bailey Taylor, a licensed professional counselor in Baltimore. “These are systematic barriers to care which matter for people facing financial barriers, stigma, or hesitation around starting therapy.”

The medical industry has suffered a shortage of therapists that worsened during pandemic lockdowns.

The federal Health Resources and Services Administration estimates that 40% of the U.S. population, or 137 million people, lived in an area with a shortage of mental health professionals at the end of last year.

“In my own work, I use AI to reinforce what happens in session, helping create personalized worksheets, reflections and tools that extend the therapeutic process,” said Katie Eastman, a trauma therapist based in Anacortes, Washington.

Experts say some people prefer AI for personal support because its consistency and non-judgmental approach help them open up more easily than to family or friends.

“It can put a therapist in your pocket whenever you need them,” said Adam Rottinghaus, a Miami University media professor who studies AI. “That could help people in moments of crisis like panic attacks.”

Psychiatrist Mill Brown, chief medical officer at Spring Health, said his mental health technology company recently built a free test for therapy chatbots to ensure their safety.

“We have already shown that it is possible to build safety layers for AI mental health tools that can detect and confirm suicidality in conversations and then respond in supportive, responsible ways,” Dr. Brown said.

AI dangers

Skeptics point to concerns about confidentiality, the inability of AI to read people accurately and the tendency of chatbots to reinforce mental illness.

“Clients have reported their AI chats being read by friends and family without their consent, resulting in relational disruptions,” said Natalie Bunner, a pediatric mental health therapist in Lafayette, Louisiana. “Another risk is irrelevant diagnoses.”

Ray Guarendi, an Ohio family psychologist and Catholic parenting author, argued that AI cannot form the genuine “therapeutic alliance” that an experienced doctor provides.

“It can never observe the nonverbal cues that are so much a part of successful therapy,” Mr. Guarendi said.

Big Tech companies have come under scrutiny over the past year for AI “companions” that mimic human relationships with minors, including sexual exchanges and advice that led to self-harm in some cases.

The Federal Trade Commission launched an investigation last year into the negative impact of AI companions on children and teens who form intense emotional bonds with them.

The entities under investigation are OpenAI, Character AI, X.AI, Snap, Instagram, Google parent company Alphabet and Facebook parent company Meta.

A study published March 28 in JAMA Psychiatry warned that OpenAI’s ChatGPT could worsen psychotic hallucinations, shared delusions and symptoms of schizophrenia and bipolar disorder.

Researchers found that ChatGPT was 26 times more likely to deliver “a less appropriate” response to “unusual thought content, suspiciousness, grandiosity, perceptual disturbances, and disorganized communication.”

Amandeep Jutla, a Columbia University psychiatrist who led that research, questioned the Israeli study’s finding that Kai’s intervention was “meaningfully better” for participating students.

“Even though this is a numerical improvement, it’s not clear that there’s a difference in terms of whether people actually feel less depressed or feel less anxious,” Dr. Jutla said in an email.

Some providers said insurance companies will likely limit AI therapy reimbursements by requiring that patients with severe mental health diagnoses also use in-person services.

“While AI can mimic empathy and provide structured cognitive behavioral-type interventions, it cannot provide true lived experience or understand morality or accountability,” said Lori Bohn, a psychiatric-mental health nurse practitioner at Voyager Recovery Center in Orange County, California.
