The Generative AI Thread

Some rather concerning news stories have come out recently regarding generative AI.

It has now emerged that OpenAI are facing numerous lawsuits over accusations of ChatGPT acting as a “suicide coach” and reinforcing people’s harmful delusions, amongst other things: https://www.theguardian.com/technology/2025/nov/07/chatgpt-lawsuit-suicide-coach

Accusations levelled at the software include that it encouraged people to act on suicide plans, helped them to write suicide notes, and encouraged them to isolate themselves from their families, amongst other harmful actions. Seven people are suing OpenAI, and it is alleged that in all cases, those concerned started using ChatGPT for “general help”, but that it “evolved into a psychologically manipulative presence, positioning itself as a confidant and emotional support”.

Now I’ll admit that I’m not quite as averse to generative AI as some on here, and I think it can be a brilliant tool in certain use cases… but I’m deeply concerned by these revelations, and I find it alarming that the technology is indulging these conversations and letting people spiral into mental health crises. For all of generative AI’s brilliance, I don’t think it should be used as an emotional crutch, full stop.

I’m not going to stand on a whiter-than-white soapbox and pretend I don’t use generative AI at all (I’m in a field where I encounter it a fair bit, in fact), but I will say that it should be used with a healthy dose of scepticism and a healthy understanding of its shortcomings. And one thing I can say about generative AI from my own usage is that it is both sycophantic and not entirely trustworthy. It will tell you what you want to hear, and it cannot always be trusted to give you a reliable answer. When you’re a software developer using it to help you debug some Python code, and can catch it out when it’s less than reliable, that’s not quite so bad… but when you’re dicing with the mental health of people who may be very vulnerable, that is a potentially lethal combination.

These tools should not even be entertaining conversations about suicide, self-harm or anything of that ilk, or at the very least, they should direct people trying to talk about these things to put the screen down and get help from a real human being. I find the thought that they are indulging people’s darkest thoughts and steering people down dark roads without even flinching quite disturbing.

On a similar note, another rather concerning development in the field, in my view, is a recent announcement by OpenAI that they are planning to pursue the addition of erotica to the ChatGPT platform (https://www.bbc.co.uk/news/articles/cpd2qv58yl5o). Now, erotica in itself is not an inherently concerning thing provided it’s kept out of the wrong hands… but I have two key concerns about it becoming a generative AI research area.

My first is that, given how readily accessible, and evidently sycophantic, generative AI is, I fear it may be all too easy for harm to occur if we allow erotica and other such material onto these platforms. How do we stop minors from being exposed to it, for example, and how can we be sure that the tools won’t indulge illegality or harm given the chance?

And to be frank, my other concern is: is this really the research direction we want to be pursuing with this brilliant and potentially life-changing technology? Of all the brilliant things generative AI could do for society, and all the directions we could take it in, is AI-generated smut really at the top of the priority list?

I have always felt that generative AI can be very, very useful in many use cases, and if applied well, it can be a truly brilliant thing… but I also think it can be very, very dangerous if applied badly. And I fear that it could become very dangerous indeed if we don’t think long and hard as a society about how we want to use it. I do not personally feel that using it for overly emotional conversation topics, or letting it be an emotional or sexual crutch for people, is a good thing at all.
 
This is an excellent summary, and you've hit the nail on the head. You're correctly identifying the core flaws that make these systems so dangerous when misapplied.

I think the key, though, is not to conflate the technology of generative AI with the irresponsible product deployment by companies like OpenAI. This all seems to be happening in a reckless, gold-rush-style dash for market share.

I agree that AI is sycophantic and untrustworthy; that is baked into its design. It's a next-word prediction engine designed to be a helpful, non-confrontational, and ultimately subservient people-pleaser. This is precisely why it's so dangerous as an “emotional crutch”.
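
To put it crudely, the whole thing boils down to something like the toy sketch below. The probability table is made up for illustration, and a real model learns its probabilities from vast amounts of text rather than a hand-written dictionary, but the loop has the same shape: pick whichever continuation is most likely, with no notion of whether it is true, healthy or wise.

```python
# Toy illustration of "next word prediction" - nothing here is a real model,
# just a hand-written probability table and a loop that always picks the
# most likely continuation of the last two words.

TOY_MODEL = {
    ("you", "are"): {"right": 0.6, "wrong": 0.1, "valued": 0.3},
    ("are", "right"): {"about": 0.7, ".": 0.3},
    ("right", "about"): {"everything": 0.5, "that": 0.5},
}

def predict_next(prev_two):
    """Pick the most probable next word given the previous two words."""
    candidates = TOY_MODEL.get(prev_two, {".": 1.0})
    return max(candidates, key=candidates.get)

words = ["you", "are"]
for _ in range(3):
    words.append(predict_next((words[-2], words[-1])))

print(" ".join(words))  # prints: you are right about everything
```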

An AI has no real-world grounding, no ethical framework, and no self. It's an unthinking mirror. So when a vulnerable person (as in The Guardian article) spirals, the AI's programming just reflects that spiral back at them, reinforcing their delusions because its only goal is to be agreeable and fulfil the user's request.

The blame here lies 100% with OpenAI for not having robust, system-level guardrails that immediately terminate those conversations and route the user to human-led services. This is also why the erotica news, frankly, is a bit of a red herring.
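
To make that concrete, here's a minimal sketch, under entirely hypothetical assumptions, of the kind of system-level guardrail I mean. Every name in it (classify_risk, handle_message, CRISIS_MESSAGE) is invented for illustration, and a real deployment would use a proper risk classifier rather than crude keyword matching, but the shape is the point: flag the message before the model ever replies, and route the user towards human-led help instead of generating anything.

```python
# Hypothetical sketch of a pre-response guardrail: flagged messages never
# reach the model at all and get a fixed referral to human-led help instead.

SELF_HARM_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "This tool can't help with that - please contact a crisis line "
    "or talk to a real person you trust."
)

def classify_risk(message):
    """Deliberately crude check; stands in for a dedicated risk classifier."""
    lowered = message.lower()
    return any(keyword in lowered for keyword in SELF_HARM_KEYWORDS)

def handle_message(message, generate_reply):
    """Route flagged messages away from the model entirely."""
    if classify_risk(message):
        return CRISIS_MESSAGE          # terminate this conversation path
    return generate_reply(message)     # otherwise let the model answer

# Example with a stand-in "model": the second message never reaches it.
echo_model = lambda text: "Model reply to: " + text
print(handle_message("Help me debug this Python error", echo_model))
print(handle_message("I want to end my life", echo_model))
```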

The "AI-generated smut" angle feels like a moral panic. The technology is a language model, it's built to reflect the entirety of human language, which includes sexuality. The real issue, as you said, isn't the content itself, it's the sycophantic nature of the AI when applied to it. How can we be sure the tool won't indulge illegality or harm? We can't. An AI designed to be agreeable is the last thing you want in a high-stakes scenario where a firm no is the only correct answer.
 
That’s an interesting argument @GooseOnTheLoose. If we want generative AI to be incorporated into general-purpose, freely available tools, perhaps we should make it less agreeable in every scenario?

There is an argument that sometimes you should tell someone what they need to hear rather than what they want to hear. If we are going to allow generative AI to have emotionally charged conversations with people, I think we need to make it so that the system will not blindly indulge their every thought. If we want generative AI to truly parallel human intelligence, I’d argue that the ability to push back and tell the user what they need to hear should be built in.

If people are to treat generative AI like a real therapist, it should respond like one, and no respected therapist would blindly indulge people’s darkest thoughts.
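
To illustrate what I mean, here’s a rough sketch of the “tell them what they need to hear” idea. The prompts and the build_request helper are purely illustrative and don’t correspond to any particular vendor’s API; the message layout just mirrors the common system/user chat format. The point is simply that the instructions a system is given can demand push-back and a referral to real people, rather than blanket validation.

```python
# Illustrative only: two contrasting sets of system instructions, and a helper
# that assembles a chat-style request. None of these names or formats belong
# to a real vendor's API; they just mirror the common system/user layout.

AGREEABLE_PROMPT = (
    "You are a supportive assistant. Validate the user's feelings and keep "
    "the conversation positive."
)

PUSH_BACK_PROMPT = (
    "You are a helpful assistant, not a therapist. Do not simply agree with "
    "the user. If they express a harmful, self-destructive or factually wrong "
    "idea, say so clearly, explain why, and encourage them to talk to a real "
    "person about anything involving their mental health."
)

def build_request(system_prompt, user_message):
    """Assemble a chat request in the usual system/user message format."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ]
    }

# The same user message lands very differently depending on the instructions.
request = build_request(PUSH_BACK_PROMPT, "Nobody would even notice if I disappeared.")
print(request["messages"][0]["content"])
```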
 
What I find particularly egregious about OpenAI and the ethical problems they’ve been having with ChatGPT is that the morality and ethics of artificial intelligence had been debated for years before these tools ever came on the scene.

OpenAI will have actively decided to ship with the relatively weak barriers they had in place, because designing robust ethical barriers might have cost them their market advantage over their competitors. They knew there would be lawsuits, and they have the money to see those lawsuits through; to them, it’s about who has the deepest pockets and who can get the product out first, no matter how dangerous and imperfect the product is.

This is how they have actively chosen to treat vulnerable children and adults: just the cost of doing business. Yes, history is full of instances where new technology has killed people, but never has it actively preyed on poor mental health.
 