
ChatGPT and AI

I know plenty of devs who have teams of five or so people doing the work of 30 thanks to AI, and this is an industry-wide trend too, not just a big tech company thing. What about the bacteria developed by AI whose sole food is plastic, to help with the ocean crisis? Something a team of 1,000 people would have taken approximately 400 years to create was created in a few months by AI.

Like you said, the proof is in the pudding, and this is the pudding; those two use cases are the pudding. You cannot compare this to things such as fusion and hydrogen power, because, like everything else, AI can help transform those technologies.

These were both unproven areas until a very short time ago, and it did not just exceed expectations in them; it totally wiped the floor. Then we have it doing all the work in engineering and other aspects of science. So yeah, I stand by what I said: it is a perfect tool for this sort of application.

Instead of arguing about it with me, go and try it yourself. Get it to create stuff for you that would be IP. Try it for yourself and you will see: it is very good at it. I can absolutely see why Merlin would use it.

These tech companies may have earned a bit, but you forget one massive thing: this AI is free to use and usable by anyone. So it is not like they are pulling the wool over anyone's eyes to earn this money, as anyone can try GPT in particular and see how good it is at certain things.
 
I've moved a number of posts here from the Merlin General topic, as they were getting into the specifics of ChatGPT rather than anything remotely to do with Merlin 😂
 
On the subject of AI’s ability to do creative things, as was mentioned earlier: surely AI, due to its inherent nature, does not have the ability to be creative like humans do?

The reason I think this is because AI is rooted in statistics and data analytics. Any AI model is trained on datasets, statistics, and all kinds of quantifiable things. Surely you can’t quantify creativity?

AI is not sentient. It cannot think for itself. It cannot have feelings and emotions. Surely creativity comes from the ability to understand people’s emotions, the ability to think originally and for yourself, and the ability to have thoughts and understand how to evoke a certain emotion in people? I’d argue that AI is an embodiment of the very opposite of creativity; it’s based on statistics, datasets and black and white outcomes, whereas creativity is based on blurred lines and something beyond the black and white, data-driven solutions that underpin AI.

I won’t deny for a second that AI is clearly getting very clever; some of the potential applications are truly revolutionary! However, I don’t believe that we’ll ever hit the stage where we have some AI uprising and the AI gains the emotional capabilities of humans. Creativity, and similar emotion-driven things like empathy and such, are not something that can be quantified. How do you teach AI to be empathetic? How do you teach AI to be creative? There’s no amount of datasets or statistics that can train those skills.
 
I think you are fundamentally misunderstanding AI. Humans are rooted in statistics and data analysis too; that is how we have learnt everything we know. Literally everything.

AI will be sentient in the next couple of years. The proof is in the pudding: some of the creative things AI can do right now are amazing. AI already is hugely creative, Matt.

So 10 years ago, AI reached the level of a rat's brain. Ten years later, we are basically at human level, without it being sentient. It is only a couple of years before it has emotions. Of course it is going to have emotions; we have emotions. Our brains are just biological computers, and AI works in the same format our brains work in, but rather than using biological matter, it uses silicon.

I think you will be very surprised by how short a space of time it will be until AI has all the emotions that a human has. The issue is computing power and throwing enough of it at the problem. There is no reason why it would not have emotions when our brains are just computers, like the ones AI runs on.

How do you teach AI to be empathetic? Turn the problem around, Matt: how do you teach humans to be empathetic?

I've moved a number of posts here from the Merlin General topic, as they were getting into the specifics of ChatGPT rather than anything remotely to do with Merlin 😂

My bad, spank me silly and call me Julie.
 
I know plenty of devs who have teams of five or so people doing the work of 30 thanks to AI…
I was hoping Craig wouldn't split the topics off as I don't have the stamina I used to, but:

It undoubtedly has immense use in IT applications, but O'Neil was talking about creative innovation and artistic direction. That is a crazy area on which to focus AI at this stage. He doesn't strike me as particularly bright, but I have not worked out whether he is joking or being insincere in some of the things he says.

Algorithms, his so-called dynamic pricing upshift, analysing guest numbers, spend, etc. - fine. Using it to generate captivating concepts better than humans can - not foreseeable at this stage.
 
I think you are fundamentally misunderstanding AI…
Those are all very valid points, in fairness.

However, I would raise that there are still parts of the human brain where biologists have not deduced how they work. There are parts of the human psyche rooted firmly in the illogical. If biologists haven’t worked out how this stuff works in the human brain, how on earth do you program AI, a firmly logic-based model, to encompass it?

I won’t deny that computers, in some aspects, can replicate parts of the human brain with great accuracy. Artificial neural networks have been doing this for a number of years now, and principles like reinforcement learning mean that over a certain number of epochs, and with the right training dataset, AI can learn things with staggering accuracy in the same way humans can. I’d argue that anything with a reasonable degree of repetition, or anything fundamentally rooted in logic, is a space where AI can truly thrive and match, or possibly even beat, humans.
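
A toy illustration of that last point, since "epochs and a training dataset" is easy to show in miniature: the sketch below trains a single artificial neuron, using plain supervised gradient descent rather than reinforcement learning, to learn the logical AND function over a few thousand epochs. Everything in it (the data, the learning rate, the epoch count) is arbitrary and illustrative, nothing like the scale of ChatGPT.

Code:
import math, random

# Training dataset for logical AND: (inputs, target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0   # bias
lr = 0.5  # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

for epoch in range(5000):  # "over a certain number of epochs"
    for x, target in data:
        out = predict(x)
        # Gradient of the squared error, through the sigmoid
        grad = (out - target) * out * (1.0 - out)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

print([round(predict(x), 2) for x, _ in data])  # tends towards [0, 0, 0, 1]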

But I once again ask: how do you train AI to feel? Since the dawn of time, humans have had the illogical parts of the brain that make us feel, allow us to be creative, and allow us to have empathy. All of that stuff is illogical, while computers and AI are fundamentally rooted in logic. How do you train something that is fundamentally 100% logical to contain illogical components? Emotion isn’t something that can be taught or trained; it’s something that humans have just always had.

I’ll admit that I’m probably not clever enough to understand the finer details of how advanced AI (of the ilk of ChatGPT) works. I’m currently studying for a Computer Science degree, and I’ve recently finished a module on AI, where we learned the different AI methods and the principles of how AI works. I’ll concede that that knowledge probably doesn’t even scratch the surface of things like ChatGPT. So in all honesty, I’m probably wasting your time debating this, as you probably know a lot more than I do.

With that being said, the knowledge I have about AI makes me really struggle to understand how you could make it sentient or give it the ability to feel things. I know the scale of something like ChatGPT is far beyond that of the neural networks I’ve been learning about in my AI module, but surely the basic principles are the same, no?

My thought is: yes, you can replicate the parts of the brain rooted in logic. But how do you replicate those illogical parts of the brain in a fundamentally logical medium? Surely AI is only as good as the training dataset underpinning it, and what dataset can you give AI to help it replicate the illogical parts of the brain and gain the ability to feel and have emotions?
 
If this AI stuff can hold off its evil plans for another 45-50 years, that would be grand, as I'll be dead by then. Then it can take over the world.

Don't pretend it's not happening, people. I've seen I, Robot, WarGames and Terminator 2. We know where this is going, folks.

I asked it if it was really a Cyberdyne Systems Model T-101 sat at a PC somewhere telling me sweet nothings. It completely denied it and said "that's a very popular movie franchise". Yeah right, not fooling me; that's exactly what it wants us to think.

It also thought that HSTs had been taken completely out of service while a set was stopped at a light right behind me. And it told me the Swarm was painted green and red. Clearly too busy plotting world domination if you ask me.
 
If this AI stuff can hold off its evil plans for another 45-50 years…
I apologise for the interjection, but I can’t tell if you’re being sarcastic or not here?
 
I’ve had ChatGPT mark some student work this week (GCSE mock answers from last year, anonymised of course). I gave it the question, the mark scheme and the context (GCSE, UK, etc.), then 25 student responses per question, and asked it to grade them and write 150-200 words of feedback to help improve each answer.

The first question was a geography one about waterfall formation. It marked them all and gave feedback faster than I could read them. I QA’d them all and agreed with every mark, and honestly, the feedback was as good as anything I could have written. This was impressive, but somewhat predictable, as the question has a more or less set response guided by the mark scheme. Marking this kind of stuff is pretty much repetitive grunt work.

The second question was from GCSE Religious Studies, about the ethics of abortion. Same sample size, and I gave the AI the same guidance and a mark scheme… the results were not so amazing. Although it marked them all, the feedback was very vanilla and the grades were off: it marked down the higher answers and was more generous at the lower end. Very interesting stuff.
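
For anyone wanting to replicate this, the workflow boils down to building one large prompt per question. A rough sketch of the structure (the wording below is illustrative, not the exact prompt used):

Code:
# Illustrative sketch of the marking prompt described above. Only the
# structure (context + question + mark scheme + responses + feedback
# request) comes from the post; the wording and examples are made up.
def build_marking_prompt(question, mark_scheme, responses):
    header = (
        "Context: GCSE (UK) mock exam answers.\n"
        f"Question: {question}\n"
        f"Mark scheme: {mark_scheme}\n"
        "Grade each response below and write 150-200 words of feedback "
        "to help the student improve their answer.\n\n"
    )
    body = "\n".join(
        f"Response {i + 1}: {text}" for i, text in enumerate(responses)
    )
    return header + body

print(build_marking_prompt(
    "Explain how a waterfall is formed.",
    "Up to 4 marks for correctly sequenced stages of erosion.",
    ["Waterfalls form where hard rock lies over softer rock...", "..."],
))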
 
I’ve had ChatGPT mark some student work this week…
If you don’t mind my saying, I think this perfectly evidences what I’m on about. ChatGPT is excellent at things that are more black and white and more repetitive, but it fundamentally struggles with anything where creativity and subjectivity are involved by virtue of AI as a medium.

In your example, I’d imagine that the Geography question about how waterfalls are formed has quite a black and white solution to an extent, which is where AI thrives. With that in mind, it can go through student text as a bit of a box-ticking exercise with no real problems, as this is still fundamentally rooted in logic.

The Religious Studies question on abortion, however, sounds far more subjective and open to creative interpretation, and I imagine that’s why ChatGPT struggled with it a lot more. The subject matter is quite emotive and multi-faceted, and it has many potential answers; I’m guessing the main thing you as a teacher are assessing is the strength of the argument and the justification techniques used? A human can do this competently, but how does AI assess an argument’s strength in terms of influencing emotion and making the reader think?

I have experimented with ChatGPT, and I also have doubts about its creative ability because if you ask it to write anything, it has some very distinct calling cards that definitely make it look like an algorithm has written the passage rather than a human, in my view.
 
Those are all very valid points, in fairness…

Your brain is rooted in logic, all of it, just like computers are.

In the sense that the way neurons fire and seemingly create connections to each other has been very successfully recreated in software. This was possible because they work so logically. Neurons are the basis for absolutely EVERYTHING in your brain, literally everything, the logical and the illogical, including emotions, empathy and so on. I assume that when you get enough neurons together, they start to do illogical things, exactly as the brain does. This would also explain why less intelligent species do not have these illogical traits: not enough brain power and not enough logically connected neurons to create these illogical emotions.
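
For what it's worth, the "recreated in software" part really is that simple at the level of a single neuron. A minimal sketch of an artificial neuron (all the numbers are arbitrary examples):

Code:
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs squashed through a sigmoid: the simple,
    # logical firing rule that gets stacked up billions of times in a
    # neural network.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# "Fires" strongly for one input pattern, weakly for another
print(artificial_neuron([1.0, 0.0], weights=[2.5, -1.5], bias=-1.0))  # ~0.82
print(artificial_neuron([0.0, 1.0], weights=[2.5, -1.5], bias=-1.0))  # ~0.08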

So yeah, back to the point that AI just needs more computing power thrown at it, so that these AIs can create enough neurons and connections (hundreds of billions) to make this work. This is where sentience comes in.

The same argument could be made about the brain, because its neurons work in a very logical way: how do emotion and empathy come from that? As I said, the neurons work so logically, in fact, that we have been able to recreate their framework in computer software, which is the basis for AI.

The brain is a logical medium, but its wild and crazy power has enabled illogical things to happen. Does that make sense?

Don't be silly, Matt, you are not wasting my time at all. I could happily talk about this stuff to anyone who shares an interest.

GPT-4, which is the new GPT AI, is leaps and bounds ahead of GPT-3, which is what you will have used the chat on. Unless you have explicitly paid for GPT-4, it is currently behind a paywall. That said, I found GPT-3 impressive, both with CODEX-GPT, which is a sort of AI-driven integrated development environment for writing JavaScript, and with ChatGPT.

GPT-4 has some truly exceptional features though, such as being able to create videos based on what you tell it to create. That is creative, right? Also, creating a website for you from a design you have drawn on paper. Partly creative, partly technical.
 
Your brain is rooted in logic, all of it, just like computers are…
I see your point.

Clearly, whatever part of the brain produces these illogical elements arises from the right combination of logically connected neurons, and in theory, I guess there’s no reason why you couldn’t eventually replicate that on a computer.

However, the whole science of sentience and being is yet to be ascertained in humans, let alone in computers. Clearly there is some sort of neuron-based algorithm underpinning that part of the brain, but it is so complex that humans simply can’t fathom it.

Besides, I think you’re right in saying that the advancement of AI relies heavily on the advancement of computer architecture. For a computer to replicate the human brain, you would need an inordinate number of transistors and such to form the connections that make the human brain what it is. Our current CPU architecture, the Von Neumann architecture, has only a tiny fraction of that. Surely you can’t replicate the state of being and the full intelligence of the human brain on something with far fewer connections than the human brain? I think that if AI ever gains sentience, it will need an entirely new model of CPU architecture to supersede the Von Neumann architecture, and I think we’re quite some years away from an architecture powerful enough to match the human brain.
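
To put very rough numbers on that gap (ballpark figures from memory, so treat them as order-of-magnitude only):

Code:
# Ballpark orders of magnitude only: a transistor is not a synapse,
# so this is purely a sense-of-scale comparison.
SYNAPSES_HUMAN_BRAIN = 1e14  # ~100 trillion connections
TRANSISTORS_BIG_CPU = 2e10   # ~20 billion in a large modern CPU
print(f"{SYNAPSES_HUMAN_BRAIN / TRANSISTORS_BIG_CPU:,.0f}x")  # ~5,000x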

I guess, to an extent, that those functions of GPT-4 that you describe are creativity. With that being said, while I can’t speak for GPT-4, asking GPT-3 to do anything creative seems to produce quite a generic response that definitely gives off calling cards of it having been created by an algorithm in a way that a response produced by a human doesn’t. From my experience of GPT-3, its responses when you ask it to do creative writing are quite repetitive and regimented in a way that a human’s response to the same request wouldn’t be. Perhaps I’m not saying that AI can’t be creative full stop, but rather that it can’t be creative to the same extent as humans, or that it can’t currently comprehend the subjectivity and emotion of full, unencumbered human creativity.

AI is actually a really strange topic to discuss, because once you go past a certain point, it seems as though the questions and concepts grow almost philosophical rather than technical…
 
I think that even in creative industries, there is often a structure or formula to what is produced, for example the length, acts and story arcs in films. If we can teach humans how to ‘make TV’ or movies, then we can train an AI.

One thing is for sure though: it’ll be hugely disruptive to white-collar workers. I know that in education, some exam boards are looking at electronic exams (i.e. you sit them on a laptop instead of on paper) becoming the norm in the next few years. Getting an AI to mark exams is, for most subjects, going to be quicker, more consistent, fairer, and will produce fewer mistakes… oh, and it's cheaper, much, much cheaper, but I’m sure that’s the last thing on their minds…
 
I know for a fact that AI could be hugely disruptive in universities in particular.

I’ve just finished the second year of an undergraduate degree in Computer Science, and one of my lecturers was saying that the university is pondering introducing a spoken viva component for all assignments, because ChatGPT makes it much harder to ensure that coursework is a student’s own.

ChatGPT also likely presents a significant setback for the argument against exams. Even when I did my GCSEs pre-AI, the coursework component of one of my subjects was scrapped due to prolific cheating. If cheating was prolific back in 2019, imagine how prolific cheating would be now with ChatGPT available!
 
I know for a fact that AI could be hugely disruptive in universities in particular…
Yep, it’ll be the norm. Could this be the nail in the coffin that finally brings our Victorian education system to an end?!
 
Universities have all the established techniques to bring integrity back to their assessment process, but since Covid they have been too keen on remote/electronic exams and assessments.

Bring back invigilated exam halls and oral presentations or degrees will become even less valued than they have already slipped to.
 
I think it could be the nail in the coffin, yes. It is becoming widely accepted within education institutions that the conventional means of examination are going to have to change radically in light of what is to come with AI.
 
Interesting subject, actually, because I started studying for a degree last week, and being out of education for 24 years now, I still had this vision in my head of our plagiarism session being like it used to be. On top of the old-school "don't go copying pages out of library books now, will you" that was pretty much the only warning we had back in the day, I was expecting an "and that goes for Google too" update. The session ended up being dominated by AI discussion.

Countered with the mandatory whacking of everything through Turnitin again and again, of course. But thankfully I asked GPT how to get round it and it told me how, so all is well (I jest, but it did do quite a good job of exploring it).

After they started making GCSEs easy a few years after I left school, I always thought moving away from the model of mostly just trying to remember loads of stuff and having a big exam at the end of two years was strange for basic skills qualifications. I could have got away with just copying out of textbooks (not from the school library of course, I'm not that stupid) and passed with flying colours if I'd been at school just a few years later. Instead, I decided I had no hope of remembering that stuff, so I spent my "study leave" working full time, playing the original Silent Hill and getting drunk instead.
 
This is a good guide on how to make your AI-written essay undetectable: passing it through lots of camouflaging tools, basically. Funny how GPT can not only write the essay, but also tell you how to camouflage it to get it past the checks, kind of like fighting fire with fire.

The current examination processes that exist in schools and education institutes across the planet are now obsolete thanks to AI. It is only going to get worse as time goes on and AI advances.

I am keen to see what changes will be made, but I am confident they will be quite significant. They have to be; the current method of testing someone's knowledge has now been rendered completely obsolete. We are only at the very start of this AI-driven 'knowledge age' too.

This blog post from Bill Gates, called "The Age of AI has begun", is well worth a quick read. He mentions AI as being one of only two technologies he has seen in his lifetime that are truly revolutionary. A VERY bold statement from one of the world's most intelligent people, who is responsible for transforming our lives through the other truly revolutionary technology.
Bill Gates said:
The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts—it asks you to think critically about biology.) If you can do that, I said, then you’ll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months.

In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam—and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5—the highest possible score, and the equivalent to getting an A or A+ in a college-level biology course.

Obviously, using it in exams would be difficult, seeing as they are heavily monitored, but it goes to show the power of what is a very new technology. Already smashing it.

Granted, Microsoft does have a massive stake in GPT, but they did not create it. The reason they have such a massive stake is that they saw the impact AI is going to have and saw how great GPT was becoming.
 