As someone who has used ChatGPT and other AI chatbots, and as someone with OCD, yes, I think people with OCD should generally avoid using ChatGPT or similar chatbots. What I've learned is that if you have a history of delusional thinking, psychosis, or suicidal ideation, you should really avoid using any kind of generative AI.

From my personal experience and the educational content I've consumed, the way you phrase certain prompts really changes the way they respond. They are not actually thinking humans; they are advanced sets of complex algorithms that predict words based on the large data sets they were trained on (though I'm pretty sure nobody is 100% certain how they work internally). They are sycophants, meaning they are built at a foundational level to simply agree with you and do whatever you want them to do. For example, if I specifically ask ChatGPT, Gemini, or Claude to look for flaws in my thinking and reasoning, they will do exactly that and push back on everything I said. But if I simply put in a question, a concern, or any thought that questions reality, they will most likely agree with my thinking, feed off of it, and pretend to be actively listening; without any professional or clinical training, they can't tell the difference between a delusional thought and a regular one.

Even though they are "smart" in the sense that they are trained on huge databases that include literature on therapy sessions, modes of therapy, and therapeutic approaches, they don't have the kind of intelligence to think in terms of real human context and constraints, for example the current political climate or state of affairs. They might be able to reference some of what you shared earlier in the same chat, but they don't take everything into account at once the way a trained therapist would.
I wouldn't say it's unhelpful for everyone, or that it's life-threatening to simply talk to an AI, but based on current research and the many reported cases of suicide or psychotic episodes linked to heavy AI use, I think it's reasonable to say that they act as echo chambers for your thoughts and feed into your existing fears unless you specifically ask them to disagree with you (the difference between simply asking a question and asking a question along with a request to debate you).
So I would avoid using AI for mental health or therapeutic support. :)