
The Generative AI Thread

Matt N

TS Member
Hi guys. With the technology being increasingly prevalent, and an increasingly large part of most fields, I figure it’s about time we had a thread to discuss large language models (LLMs), or so-called “generative AI”. Whether ChatGPT, Gemini, Claude, DeepSeek or another model entirely is your weapon of choice, I’m sure most people have come across generative AI at some point within the last 2 years or so, so I figured it might be fun to have a wider discussion topic about generative AI (particularly seeing as another Alton Towers thread was derailed by GenAI talk earlier).

In here, I’m sure we could discuss anything about it. From where you think it might go to ethical dilemmas to fun things you’ve done with generative AI, I’d love to hear your opinions!

I’ll get the ball rolling with a news article I saw today in The Guardian.

Since generative AI first came about, there has been a lot of debate about whether LLMs, and their more recent sibling LRMs (Large Reasoning Models), have the ability to reason. Within the last couple of days, Apple have released a research paper all but indicating that these models cannot reliably reason: https://www.theguardian.com/comment...ar-ai-puzzle-break-down?CMP=oth_b-aplnews_d-5

To summarise Apple’s findings in a little more detail: they found that generative AI models are very good at pattern recognition, but often fail to generalise when presented with scenarios too far removed from their training data, despite in some cases being explicitly designed for reasoning problems.

As an example, generative AI models were tested on the classic Towers of Hanoi problem, a puzzle with three pegs and a number of discs, in which all of the discs must be moved from the left peg to the right one, one at a time, without ever placing a larger disc on top of a smaller one. It was found that generative AI models could only just do it when 7 discs were present, attaining 80% accuracy, and that they could hardly do it at all when 8 discs were present. For some perspective, this is a problem that classical AI techniques have been able to solve since as early as 1957, and, as the author of the Guardian article puts it, “is solvable by a bright and patient 7 year old”. It was also found that even when the generative AI models were told the solution algorithm, they still could not reliably solve the Towers of Hanoi problem, which would suggest that their thought process is not logical and intelligent like that of a human.
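
For a sense of how compact the solution algorithm being referred to actually is, here is a minimal recursive sketch in Python (my own illustration for this thread, not code from Apple’s paper); it produces the optimal sequence of 2^n - 1 moves for n discs:

    def hanoi(n, source, target, spare, moves):
        """Append the optimal move sequence for n discs to the `moves` list."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)  # park the n-1 smaller discs
        moves.append((source, target))              # move the largest disc across
        hanoi(n - 1, spare, target, source, moves)  # stack the smaller discs back on top

    moves = []
    hanoi(8, "left", "right", "middle", moves)
    print(len(moves))  # 255, i.e. 2**8 - 1 moves for the 8-disc case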

Now of course, it’s worth noting that many humans would have issues solving a puzzle like Hanoi, particularly with 8 discs. But as the author of the Guardian article points out, it is a definite setback for LLMs that would suggest that the current iteration of generative AI is not the technology that will bring about the sort of artificial general intelligence (AGI), or “the singularity”, that is capable of superseding human intelligence and solving any complex problem. These findings would suggest that the outputs of generative AI are too hit and miss to fully trust, and that it can’t be expected to solve any complex problem on its own with any degree of reliability.

These findings would confirm a long-held suspicion of mine about generative AI. It is undeniably very clever, but I’ve long been sceptical of talk of it being able to reason and the like. What these models effectively produce, despite the anthropomorphic-seeming quality of the “speech” output, is something that looks like a plausible solution to the given question based on the data they have been trained on. These models are trained with language data that effectively says “x word is most commonly followed by y word” and such. While ChatGPT and the like are probably underpinned by mind-blowing model architectures trained on equally mind-blowing training datasets, they suffer from the same flaw as any mathematical model: they can only generalise reliably within the distribution of their training data. If presented with a complicated problem that’s too far removed from that training data, they lack the means to provide a reliable answer. I myself have had instances with ChatGPT where it has generated outputs that look plausible at first glance, but do not hold up to any scrutiny whatsoever when inspected more closely.
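
For anyone curious, the “x word is most commonly followed by y word” idea can be illustrated with a deliberately crude toy in Python. This is nothing like a real transformer-based LLM, just a sketch of the statistical next-word intuition described above:

    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count how often each word follows each other word (a bigram table).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    # Generate text by repeatedly sampling a statistically likely next word.
    word, output = "the", ["the"]
    for _ in range(8):
        options = following[word]
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)

    print(" ".join(output))  # plausible-looking, but nothing is being "reasoned"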

But I’d be keen to know: what are your thoughts on Apple’s findings, and generative AI in general? I’d love to hear any thoughts about the wider topic of generative AI!
 
It seems to me that the main strength of A.I. at the moment is in making connections that are difficult for us to see as humans. I find it quite hard to believe that Generative A.I. would struggle with the Tower of Hanoi problem, as any computer can easily beat us at Chess or Scrabble, for instance, weighing up all the possibilities, but maybe I'm just not understanding the different types.

From what I've seen of Generative A.I., it's incredibly clever, drawing on a vast database to create anything you want. It's not for me to say whether this is truly "thinking", but just from using search engines, the A.I. assistant does seem to understand what I mean much better than the old ones did. It's like having a built-in thesaurus, and seeing all the ways people have connected different things.

One possible use is in health. YouTube is full of videos saying "Berries cure this, Apples cure that" etc... Well, in theory, A.I. could look at the cells of people who have had certain illnesses, and see if there's anything in common with the people who have recovered. Do certain nutrients cure certain things? It's not that it couldn't be done before, but A.I. could sift through a vast amount of data.

But as you suggest, it does need the data in the first place. As Generative A.I. is connected to the internet, though, it is not unreasonable to think it has "read" the internet. I have heard anecdotal evidence that different countries deliberately tell their own A.I. to cover up their less proud moments. But it does know, that's the point, and if you're tactical enough you can get it to tell you what it's been told to censor.

In Avengers: Age of Ultron, Tony gives Ultron an instruction something like "protect the Earth". Ultron decides that protecting the Earth means eliminating humans. Hopefully, A.I. won't come to that conclusion! However, it will know what most of us consider "harm", and it will be able to work out who is causing the most harm in the world. It won't be the ordinary people, that's for sure.

So whether A.I. is truly "aware" or not, it will definitely be able to give us information. But to be honest, a lot of that information is already out there, so it may come down to that old chestnut - Do we really want to know?
 
Apple have released a research paper all but indicating that these models cannot reliably reason: https://www.theguardian.com/comment...ar-ai-puzzle-break-down?CMP=oth_b-aplnews_d-5
It is very much in the interests of a company that doesn't have a player in the LLM / LRM game to do everything it can to discredit and disprove LLMs and LRMs.

Apple dropped the ball with this one, and rather than pouring money into their own research and development of alternative AI routes, they're funding research to discredit and disprove a field in which they don't have a reasonable stake.

Apple have failed significantly to deliver on the relatively modest AI tools which they promoted at last year's WWDC. They lied to us, and they lied to their investors. AI, or Apple Intelligence, almost went without a mention at this week's WWDC, the same week in which Apple published this paper.

Apple brought in ex-Googler John Giannandrea to run their AI division in 2018. Giannandrea has been on record, more or less from the start, as not liking, not believing in, and having no interest in LLMs and LRMs. It is not coincidental that his research lab published a paper confirming his views.

Apple is not generally a company which is known for its research, development or publishing of technical and scientific papers. They're extremely secretive about the research they conduct, and extremely protective over any discoveries. Publishing this paper is unusual for them, but it's one which happens to align with a narrative that they will be pushing hard to Wall Street. This is a defence for missing the largest shift in technology since the web.
 

I am going to counter this article in the best way I can, with the best intent.

The number of elephants in the room in that article could populate a continent. Let’s start with the basics and some huge ironies…

To paraphrase, the article quite proudly announces that a billion-dollar AI cannot do what a human child can do—so it essentially concludes and insinuates that AI is rubbish based on that fact.

The most amazingly ironic thing about this is the sheer number of flaws and inaccuracies in the article itself. To rephrase the article’s own words: “It is truly embarrassing that humans cannot reliably fact-check information.”

Apple sat on their backsides and did absolutely nothing with AI, while the big tech companies overtook them, excelled, and then steamrolled them—and continue to do so. Is it any surprise, then, that they would release a completely biased article (which was never peer-reviewed) into the wild? Even their half-baked and cumbersome “Apple Intelligence” feature is powered by their competitors and still doesn’t come close to what is offered on other platforms.

The specific problem with the, quite frankly, nonsense Apple used to justify their take is the fact that the “researchers” who were playing Hanoi were forcing the mathematical models not to solve the puzzles algorithmically, but instead to spit out long, perfect sequences of moves manually. That’s not reasoning or thinking—it’s testing repetitive busywork. Neither humans nor LLMs are great at that, and they never have been. It is, however, a convincing argument to push to the masses from a company that was completely left behind in what is probably one of the most important revolutions in human history.

If anyone can write a script that works out the perfect sequence of 1,023 moves when playing a 10-disc Tower of Hanoi without messing it up, they could walk into a six-figure job tomorrow.

If you pulled it off, would you really consider that you had demonstrated more reasoning or intelligence than solving a four-disc Tower of Hanoi problem?

The algorithm is exactly the same regardless of the number of discs; only the length of the output changes, since n discs need 2^n - 1 moves (15 moves for 4 discs, 1,023 for 10). Yet Apple’s researchers claimed that if you started making errors in the longer sequence, it demonstrated a collapse of reasoning—which makes no sense.

Confusing? It confused me, big time. But going off the theme of that article, wouldn’t it be more embarrassing for people to repost and believe things that were never fact-checked in the first place? Let alone reposting without understanding bias, motivations, and the specific technology. That’s far more embarrassing than an AI failing at a contrived mathematical problem that is not only flawed in execution but also objectively harder for humans to solve.

Based on that logic, surely I can conclude that humans are all rubbish? Same principle, after all.

I mean all this tongue-in-cheek, of course, but written in a way to make you think.

Love it or loathe it. We are at the start of the biggest revolution ever to face humanity, in the form of AI. That is not saying it is a good thing, or a bad thing. Just a statement of where we are....

And yes, I did grammar check this whole post with AI. I mean why not? Work smart, not hard. 😀
 
What's going to happen some years down the line when unemployment benefits far outweigh the tax that governments can bring in, helped by the fact that many computer-based and other jobs stop being available due to AI? Then the scales really tilt, and fewer jobs are created elsewhere because fewer people have jobs and therefore disposable income. Will we just play an endless charade of printing more money and pretending it'll be paid back by generations to come? The future was already looking a bit bleak, but I think this AI thing is the final nail in the coffin.
 
But that is the way we have gone for the last few generations, continuing a fine "faith in the system or it collapses" tradition.
And borrow more.
Perhaps this should be the Degenerative AI thread...
 
My view is that the threats of mass job losses as a result of generative AI are exaggerated and alarmist.

In its current guise at least, I don’t think generative AI will replace large amounts of people. I think it will supplement them, and function as a tool to aid them in completing tasks more efficiently.

Take a software developer, for example.

Will a software developer be able to use AI in their work? Almost certainly, yes. They can use generative AI to automate some of the more boilerplate, routine parts of their work, like making small constituent functions, or generating tests, or doing anything like that. For that sort of thing, generative AI is very good and can save a lot of time.
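
As a concrete illustration of what that boilerplate looks like, here is a hypothetical example (the names and logic are invented for this post, not taken from any real codebase) of the kind of small helper-plus-test pairing a developer might ask a generative AI assistant to draft and would then review themselves:

    import unittest

    def normalise_postcode(raw: str) -> str:
        """Strip whitespace and upper-case a postcode string."""
        return "".join(raw.split()).upper()

    class TestNormalisePostcode(unittest.TestCase):
        def test_strips_spaces_and_uppercases(self):
            self.assertEqual(normalise_postcode(" st10 4db "), "ST104DB")

        def test_leaves_clean_input_unchanged(self):
            self.assertEqual(normalise_postcode("ST104DB"), "ST104DB")

    if __name__ == "__main__":
        unittest.main()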

But despite that, can AI entirely replace the software developer? I don’t believe it can. As much as AI can automate parts of their job, you still need skilled people to ask for the right prompts and validate generative AI’s output, and that’s where humans come in. We are a long, long way off the point where non-technical business-minded folk can come in, talk to ChatGPT in non-technical business speak, and have it churn out a perfect end-to-end software solution, if that is ever even possible. You still need skilled people to translate non-technical business needs into workable technical solutions, and that is where the human software developer comes in.

With this in mind, my belief is that generative AI will not replace people en masse as many believe it will, but make them more efficient and give them more time to focus on what really matters, which should only make them more powerful. It’s in the same vein as how people said the calculator would render mathematics and mathematicians redundant, but it only made mathematicians more efficient and more powerful.

Not to mention, generative AI will create many new jobs. Fields that don’t currently exist or are in their infancy will explode as generative AI becomes more and more prevalent. One such field I know of is prompt engineering. That entire profession wouldn’t have existed a few years ago, but there are now companies specifically hiring prompt engineers to figure out how to best query LLMs for the most effective output (I’ve looked into it a little, and LLM prompting is a surprisingly exact science!).

So on balance, I don’t think generative AI will create the absolute jobs apocalypse that some predict it will. There will always be a need for skilled people in these professions.
 
The problem I see here is that if you're getting the same work done in half the time, then when it comes to filling that job, the employer can afford to hire someone on half the hours to get the job done, or hire one person where previously it was two.
 
When we discuss the output of generative AI models, it baffles me when people get caught up on the perceived fakeness of early models. I consider this to be a little short-sighted, as history has demonstrated time and time again that it absolutely will get better. By obsessing over whether we can tell if something is AI slop, we lose sight of the other achievements that the tech is making in the background, the more impressive and terrifying leaps.

I shared the following video with my partner earlier, and it floored me. They were significantly less impressed. Their first response was "unconvincing", followed by "I could tell it was a fake video", "it looks AI" and that visually it "wasn't that impressive".

This entire video, from the script to the final output, is entirely AI generated:

From: https://www.youtube.com/watch?v=3sVO_j4czYs


Yes it is bland, yes it is excruciating, yes it's uncanny and it isn't by any means perfect... but it's good. It's passable. It looks, almost, like any other corporate fun promotional video that could be commissioned and uploaded to YouTube for promotion, and it barely involved any humans.

There is a story with a clear and understandable narrative arc. There are convincing and emotive performances. It looks and sounds like it was filmed to a high professional standard.

None of this was possible three years ago.

To create and shoot a video like the above the old-fashioned way is time- and resource-intensive. It's an entire industry:
  1. Draft, write, edit and revise the script.
  2. Shop it around to potential funding sources, providing that you don't have the financial backing already.
  3. Hire a production co-ordinator, director, camera operator, sound recordist, lighting engineer, costume, props and makeup.
  4. Storyboard and create a shot list.
  5. Cast for each and every single actor and background artist.
  6. Scout for and secure locations.
  7. Draw up contracts for rights releases, temporary employment / freelancers.
  8. Complete health and safety assessments for the filming.
  9. Source any props and wardrobe.
  10. Hire camera, lighting and sound equipment for the shoot.
  11. Rehearse with the actors.
  12. Set up the sets / locations.
  13. Film over several days.
  14. Teardown.
  15. Ingest the footage, create an assembly edit, refine the cut.
  16. Colour correction.
  17. Sound design and mixing.
  18. Graphics and visual effects.
  19. Review and export.
The above would probably take a few weeks to produce, and involve a cast and crew of 15 - 20 people. The AI video was created by a single person in a few days. It is only going to get easier.

I share this not as a "OMG this is the coolest thing ever". I'm not sharing this because I find it a great piece of art, or the most convincing thing I've ever seen. I'm sharing this because it is the current state of the technology. It IS only going to improve and get better, and I'm really not sure what we can do next as a world and a society.

On the one wing, I am in absolute awe of the state of generative AI now and am incredibly excited and optimistic for its potential uses. On the other wing, I am incredibly afraid of the destruction it is going to bring to various different industries, especially if it goes unchecked.

I've always called for nuance within AI discussions. I don't believe it's evil and bad, I don't believe that it should never be used, I also do not believe that it should be blindly adopted everywhere all at once.

The reaction of the creative industries toward AI is, understandably, overwhelmingly scared and negative. I would argue that this is likely because, for all intents and purposes, they are the last remaining industries to have their workforce threatened with mass redundancies (for the first time) because of changes in technology.

So, please, consider "AI Commissioner". It is no worse, and is arguably much better, than many first year film student projects. It is something that I, even with my technologist hat on, never thought possible or saw coming to fruition. I thought the threat to camera ops was going to be from automated pocket drones, able to track and film you wherever you needed. I always thought that in order to film something, you would always need someone to film. I never dreamed that you would be able to use a few text prompts to create something so human and time intensive.

It is scary. It is exciting. It is depressing. It is exhilarating.

What happens next?
 
I don't think I have any positive thoughts at all about the advance of AI in the last couple of years and what is coming in future. In fact, if I had a choice, I'd wish the internet had never become a thing in the first place. Ironic, as I'm using it to send this message, but that's only because it's there and we have to be online to do a lot of important things nowadays anyway (if you don't want to make your life more difficult and costly than it has to be).

I mean, if the time/money saved by AI in any given company was going to be a way of rewarding existing staff with better pay or lightening their workload, then I would welcome that, but everyone knows that it'll just be a way of the company reporting higher profits year on year and congratulating themselves at the top with bigger bonuses and salaries. It's just all depressing really.
 
What happens next?

For the most part: nothing good. I agree that "AI" as we understand this current wave of it has some useful aspects and should not be written off entirely, but as far as fully generative video goes, I think the harm will so vastly outweigh the benefits that it won't be a close comparison whatsoever. In about 5 years (maybe less!) you will not be able to trust that any given video or audio is real, which seems like a pretty bad thing for society at large.

For creative purposes I also can't see it being a good thing at all. Individual artists might use it for actually meaningful art, but on the broader scale, look at Netflix's catalog today: almost totally "slop", to the point that it's shocking to me whenever I watch something of quality with the Netflix logo on it, and that's all from the before times when you needed humans to staff the production. The reels/shorts/TikToks have a similar problem, where a computer generates an infinite number of short videos on an infinite number of topics, which are then served to people on an individual basis based on which ones the algorithm thinks will tickle your lizard brain long enough to register as a single view to the associated advertising engine. Talk about bleak.

Personally I just don't care for any of it. The internet I grew up with, the one that seemed to have so much promise, has now been almost entirely ruined, and these advancements in generative AI are only going to make that much worse. On the positive side I'm grateful that there are enough quality made-by-human-being films I haven't seen to last me, more or less, for the rest of my life.
 
The fascinating thing here, for me, is reading the arguments put forward for and against AI. Parallels can be drawn between these arguments and those at the very start of the Industrial Revolution—specifically around steam engines. These machines were the literal driving force and the technology that enabled us to completely transform the world we live in, on a magnitude, level, and scale never before seen.

Of course, they were very controversial in the beginning. The Industrial Revolution as a whole—enabled by the steam engine—totally changed people's worlds as they knew them. People do not like change; it's a scary thing for many, even if they don't consciously realize it.

The comparison to AI is relevant because AI is the technology that is going to completely disrupt and change everything—on a scale much larger than, and not seen since, the Industrial Revolution.

The first steam engines were around 0.05% efficient. That is to say, only a tiny fraction of the energy in the coal put into these machines was ever converted into useful work. Ridiculously inefficient, especially considering the massively damaging environmental impact of coal. But this went on and on. If we hadn't persisted, we would never have gotten good at it, improved the machines, and progressed. We'd still be living in caves, hitting sticks together.

The arguments made back then about those monumentally inefficient steam engines sound a little similar to the argument today that generative AI is damaging the planet with its huge power consumption. It's a valid point, for sure, and not one to be ignored.

Just like with steam engines, as we use AI more and build better silicon to run the models on, it will become more efficient over time. We didn’t just wake up one day and have LED light bulbs. That technology is the result of many decades—centuries—of advancement.

Also, I don’t buy the arguments that “generative AI is going to ruin everything,” or that “we won’t be able to tell real videos from AI,” or that “AI slop is going to ruin everything.” The list goes on.

Not a single argument I’ve seen put forward—across many platforms—has ever considered the situation that’s (slowly) arising now, where AI vets AI. Fighting fire with fire, if you will.

Love it or hate it, subjectively speaking, AI is bringing the biggest change to everything as we know it. The biggest technological leap humans have ever seen. For me it's exciting, but unfortunately, by the time all of us have died and are long gone, we will only have scratched the surface; we will never see the full extent of what this will bring. Nonetheless, I feel privileged as a human to be alive at the very start of a technological leap, similar to the industrial revolution, that will change everything. Not many humans can make that claim. We can, though.

Disclaimer: I was pretty drunk when writing this. Also, GPT did grammar correct this.
 