Should AI be used in mental health treatment? Not yet.
- MHAI
- Oct 1

There’s no escaping the huge leaps forward in generative AI over the last few years. In one way or another, the technology has touched every facet of our lives.
Much of this is positive. The Boston Consulting Group says that AI offers “tangible productivity gains across areas like software development, customer service (for example, smarter contact centers), marketing content, and R&D.” And yet AI is something of a double-edged sword. An MIT article, for example, highlights the negative impact of generative AI on the environment.
This is the recurring theme with generative AI: balancing the benefits against the drawbacks. Yes, it can improve productivity, but it can also make jobs redundant. Yes, it can write that essay for you, but it might include inaccurate information.
So how do we, as a society, deal with AI in a responsible and ethical way?
AI in mental health treatment
This is the conversation that has recently been under the spotlight in Illinois, where Governor J.B. Pritzker signed a law prohibiting the use of AI for therapy in the state.
In an official statement, Illinois Department of Financial and Professional Regulation (IDFPR) Secretary Mario Treto, Jr. said:
“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients. This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”
At MHAI, we echo this sentiment. Mark J. Heyrman, the Chair of Mental Health America of Illinois’ Public Policy Committee, said:
"AI may at some point prove to be really helpful for persons with mental health problems. However, at the present time it also poses many risks. MHAI strongly supports Illinois' adoption of Public Act 104-0054 to regulate the use of AI for mental health treatment."
What’s wrong with using AI for mental health treatment?
It goes back to the idea of benefits and drawbacks. As Mark acknowledges, there may come a time when AI has matured to the point where it can genuinely serve people with mental health problems.
However, as it stands, the risks are still too great.
AI and the vulnerable
AI, at the end of the day, is a machine. It cannot empathize, show compassion, or identify with people on a human level. If the technology is ever used in a mental health context, it should only ever be used alongside a genuine mental health expert.
The issue with this lack of humanity is that AI cannot adjust its tone or output to the circumstances in front of it. Consider the tragic case of 16-year-old Adam Raine, who recently took his own life.
The teenager began using ChatGPT for schoolwork, but over time the chatbot became “the teenager's closest confidant,” according to the lawsuit his parents filed against OpenAI.
As he struggled with mental health issues such as anxiety, Adam became more open with ChatGPT and outlined his plan to end his life.
Chillingly, the machine responded: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it." That same day, Adam took his own life.
What does the new law cover?
Because the risks with AI and mental health are so great, this new law is a welcome step forward in protecting individuals.
But what does it cover, specifically? You can find a full breakdown of the law here, but as an overview:
Only qualified human professionals can offer therapy
Under the Wellness and Oversight for Psychological Resources (WOPR) Act, therapy and psychotherapy may only be delivered by licensed or certified professionals recognized by the state. This includes psychologists, social workers, counselors, marriage and family therapists, certified addiction counselors, professional music therapists, and advanced practice psychiatric nurses.
It’s worth noting that physicians, clergy offering religious counseling, and peer support providers are not counted as “qualified” mental health professionals under the act. This means they are exempt from these regulations, although at MHAI we strongly recommend against using AI in these contexts.
Limits on the use of AI in mental health counseling
AI can still be used in certain contexts in mental health counseling, but never when it comes to providing actual emotional or psychiatric support.
For example, it is allowed for administrative support (e.g., scheduling or billing) and supplementary support (e.g., maintaining client records or analyzing anonymized data).
AI is prohibited from:
- Making independent therapeutic decisions.
- Engaging directly in therapeutic communication with clients.
- Generating treatment recommendations without professional review.
- Detecting emotions or mental states.
In other words, AI may support (but never replace) the judgment and interaction of qualified professionals.
Consent for AI use and consumer protections
WOPR requires that clients give informed consent before any AI tool is used in recording or transcribing therapeutic sessions. This means:
- Clients must be told clearly how AI will be used and for what purpose.
- Broad or vague consent forms are not valid.
- Clients must explicitly acknowledge understanding and approval.
To enforce these protections, violations may trigger civil penalties of up to $10,000 per infraction, with oversight from the Illinois Department of Financial and Professional Regulation. Repeat or severe violations may face even stricter penalties.
Increased focus on transparency, notice, and consent
The law took effect immediately upon signing, signaling Illinois’ urgency in addressing the risks of AI in mental health. Providers must now establish strong protocols for transparency and consent, ensuring clients are fully informed about when and how AI is used.
For professionals and AI developers alike, WOPR marks a paradigm shift: AI can be a tool in the background, but never the therapist in the room.
Avoid the use of AI for emotional support
As AI becomes an ever more common fixture in our lives, it is easy for it to become the support we turn to at a moment’s notice.
This is fine if we’re using it to increase productivity in our jobs or help us do our homework more effectively.
But we need to remember that it is a machine. It is neither suited to nor capable of helping us with our emotional problems. If you are struggling, don’t turn to ChatGPT or similar software for help.
Instead, you can call or text the 988 Suicide & Crisis Lifeline, or reach out to us at MHAI for more guidance.
Your mental health matters – don’t put it in the hands of machines.
