
Should children under 16 be banned from using social media and/or smartphones?

Total voters: 43

Matt N

Hi guys. In recent days, the murderers of Brianna Ghey, a 16-year-old girl from Cheshire, were given life sentences. With Ghey’s murder back in the limelight, a debate has ensued regarding smartphone and social media use among children.

As it has emerged that the perpetrators had been viewing online content relating to torture and murder, Ghey’s mother is now calling for social media apps to be banned for under-16s, and for child-specific smartphones to be made available for those under 16 that do not have social media apps on them: https://www.independent.co.uk/news/...t-jenkinson-murder-social-media-b2490172.html

Tory MP Miriam Cates has also called for a ban on social media and smartphones for children under 16 following a case of a 14-year-old girl committing suicide due to Snapchat cyberbullying, and in a recent survey, it was found that as many as 44% of parents would support a ban for under-16s: https://www.mirror.co.uk/news/politics/should-under-16s-banned-using-31963254

Some campaigners, such as “Britain’s strictest headteacher” Katherine Birbalsingh, are even calling for a ban on phones and tablets for under-16s, along with “tobacco-style health warnings” on their packaging, as they believe that technology and social media “break children’s brains”: https://www.dailymail.co.uk/health/article-11956063/Ban-phones-16s-campaigners-demand.html

An ethical debate has certainly opened up regarding social media and smartphone use among children, with many smartphone sceptics now arguing that social media and smartphones damage children’s attention spans and mental health and expose them to adult content too early. These campaigners feel that smartphones are an active threat to public health in the same manner as smoking. With this in mind, I’d be interested to know: do you feel that children under 16 should be banned from using social media and/or smartphones?

Personally, I’m possibly leaning towards no, but with a few caveats.

On one hand, I can definitely see the threat that smartphones and social media could potentially pose to children and adolescents.

At present, it is all too easy for children to access adult or dangerous content on social media. Whether it’s pornography, murder content, pro-eating disorder content or some other form of dangerous content, it can be just a few clicks away on Facebook, Twitter, Instagram, Snapchat, TikTok and the like with very little to stop that sort of content from being proliferated. Once a child has clicked on one of these dangerous posts, the algorithms fuelling social media apps mean that it is all too easy to be sucked into an ever-deepening online rabbit hole filled with this sort of content, and that can prove very dangerous, possibly even fatal in some cases.

Social media also means that for children being bullied, there is no escape from it. At least in the days prior to the advent of smartphones and social media, children who were being bullied could get an escape from the bullying when they went home at the end of the school day. But now that cyberbullying has taken the bullying online, there is very little way for children who are being bullied to escape it.

I absolutely acknowledge that smartphones and social media present these dangers to children, and I acknowledge that the status quo is not sufficient to protect children from this sort of content. There needs to be something more.

On the other hand, though, I don’t think that smartphones and social media are entirely bad for children and adolescents, and I do think that a full ban is perhaps disproportionate. A lot of socialising happens online these days, and social media allows children to socialise with their friends more easily and whenever they would like to. I also believe that smartphones and social media can be very valuable for giving a voice to the voiceless and allowing children and adolescents who might feel like they don’t fit in, or children and adolescents who belong to minority groups (such as neurodiverse or LGBT+ children), to find online communities that make them feel like they belong and form invaluable connections with like-minded people to make them feel less isolated. Smartphones and social media also allow parents to keep in easier contact with their older children as they start to become more independent.

Furthermore, I do think that social media is too commonly blamed for a lot of things; it’s a scapegoat that a lot of people seem to blame modern society’s ills on. As much as it does arguably proliferate certain things, and I fully accept that it has flaws, I believe that many of the ills it is blamed for would exist to some degree even without it. For example, I think the murderers of Brianna Ghey would probably have found a way to commit the murder and been introduced to those concepts even without social media; where there’s a will, there’s a way. There were child murderers in the days before social media.

So personally, I don’t believe that a complete ban on smartphones and/or social media for under-16s is necessarily the way forward. However, I do believe that there should be stronger moderation on social media than there is at present to protect children from dangerous and adult content; I don’t believe that the status quo is working in this regard.

But what are your thoughts on this topic? Do you agree with me? Or do you feel that under-16s should be banned from social media and/or smartphones?
 
As the parent of a nearly 13 year old, I don’t think a ban would ever work. Some parents will allow their kids to access stuff they shouldn’t (the number of kids with WhatsApp and Facebook in chimp’s year proves that) and I know from personal experience what a constant battle it is to keep ahead of changes in popular apps etc.
Part of the answer is for parents to keep an eye on what their kids are up to. Another part is for schools to do their cyber-safety training regularly and from a young age. And a big part is the phone and app manufacturers making parental controls simple to use and making sure they actually work. The number of times chimp has been able to circumvent her parental controls by “forgetting” to update her software is irritating. Luckily all that happened was nightmares after seeing scary videos on YouTube. Could’ve been a lot worse.
The problem with cyber bullying is huge. Chimp has suffered being constantly called and messaged by people who just want to be mean to her - the only thing that stops them is turning her phone off so they get bored and move onto the next victim.
 
I’m not really clued up in terms of safety protocol involving social media, but it seems companies have failed miserably at making it a safe space for younger users. So until they decide to get their arse in gear I wouldn’t be against a blanket ban on smartphones for under 16s. But unfortunately the horse has bolted from the stable a long time ago and it would be almost impossible to enforce without age verification systems being in place from manufacturers.
 
And even so, some parents would just buy it for their kids anyway. They already allow them to download age restricted apps “because everyone else has got it”. I feel for chimpy sometimes as I seem to be one of the only year 8 parents who continues to say no.
 
I have 2 teenagers and a pre-teen and the internet terrifies me. It terrifies me because I'm not in control. I do have control over what time they come home and where they go to a certain extent. But it's hard to know what they are exposed to online and you only get a glimpse now and then when they come up with something that you know they didn't get from school.

What terrifies me most of all is that I'm a couple of years too old to understand social media myself. I got on the Facebook bandwagon when it was new, but I don't understand pretty much anything after that. Facebook seemed like a free version of Friends Reunited, I find Twitter odd, and for the life of me I can't understand what Instant Grams and Tik Toks are all about.

I think we're too lenient on social media companies full stop. They are 100% to blame. They've created websites, run for profit, that allow people to say and post what they want. Therefore they are publishers, pure and simple. If you text something racist to a radio station, the presenter wouldn't read it out. If you wrote a letter to a newspaper advocating rape, the editor wouldn't print it. If you emailed a TV network belittling someone you go to school with, they wouldn't broadcast it. What's the difference? The fact that you can just log on and post away and they can't keep up with it is their problem. If they want to host these sites and make money from them, they must accept responsibility for what is posted. If disgusting things are posted via your server, you are to blame.

I think that successive governments have failed to protect our children in this regard. As a parent, I have few powers to protect my children in the absence of legislation. My approach is to speak to my children, warn them constantly that there are evil people in the world who seek to do them harm, and warn them of the dangers of believing what they read on the internet. I have good kids who stand against everything wrong, and who speak to us when they are concerned about something. But hand on heart, I don't know fully what they are exposed to like my parents did with me; I just have to trust them to make the right choices.
 
Agree with a lot of this wholeheartedly. I used to say to chimp, “I wouldn’t ever drop you off in the centre of Bristol at night on your own when you’re 10, no way am I letting you loose on the internet without any controls, as it’s just as dangerous”. She grumbles, but she knows to come and talk to me if she ever finds something upsetting. As she gets older I do loosen the controls, but I am a bit concerned about when she hits 13, as I’m already getting emails from the likes of Microsoft telling me family safety deactivates at that age. The balance between letting her have a bit of freedom and protecting her is a fine line to tread.
 
OK, so if just banning children is a little unreasonable, what about just banning social media?
Bring back individual emails, texting, and good old teletext...
It was a cheap way of finding last-minute holidays.
This is the nearest I come to any social media, and things over the last few days show this forum to be very social and supportive.
Out in the "real" social media, I only hear of upset and argument.
It's easier to stop if you never started though, apparently.
Edit, and further off topic, you lucky sods, I would just love a night out in Bristol!
 
I'm past that stage already with 2 of mine. They're both quite different kids but both have similar values. That's all you can do really, try to educate them in right and wrong and have a trusting relationship with them. We're always trying to balance out being the bad guys for saying no, with them trusting us to make the best decisions for them.

I suppose this is all you can do with little chimpy. Just hope that she trusts you and is respectful enough to know that you care for her and are only doing this to protect her.

Before I had kids, I thought I could just impart the knowledge of my own youth and protect them from the local gang members, drug dealers and other wrongdoers. I never thought I would be powerless to protect them from evil, far-right American billionaires like Elon Musk within their own home.

Sadly, the genie is out of the bottle. Our politicians have failed to keep pace with this and now it's already out there, the kids know about it, and people are already making completely contradictory "freedom" arguments in favour of going onto the internet to publish whatever hatred they like. Things that many wouldn't dare say to anyone else in person. It's the safety of a touch screen and keyboard that needs taking away. And only legislators can do that by going after the odious people that profit from these sites, those that get rich from the suffering of others.
 
As much as this is an interesting analogy, I’m not sure the social media companies would see it that way.

The radio station, the newspaper, the TV network and the social network are all providing services. However, I think the key difference lies in what service is being provided by the company.

In the cases you mention of radio stations, newspapers and TV networks, the content is the service being provided. However, it could be argued that in the context of a social network, the service is not the content itself, but merely the ability for you to publish your own content and view the content of others. As such, the social media companies might argue that, as they are only providing an avenue for users to publish and view content, the responsibility for the content belongs to those who post it rather than to them. Their mentality might be one of “don’t shoot the messenger”: “we are only providing an avenue for people to distribute content, so we don’t hold responsibility for what that content is”. That’s slightly different to a radio show, a newspaper or a TV show, where the content is the service being provided.

For clarity, I’m not saying that it’s right or that I agree with it as an entirely valid excuse for lax moderation. I absolutely agree that there should be some degree of stronger moderation or protection on the part of the social media companies. However, I do think that rightly or wrongly, that is a distinction that the social media companies may make from the examples you mention.

Not to mention that in that sense, you also have the debate about censorship and people’s freedom of speech being curtailed. Rightly or wrongly, the people who post content on these sites may well kick up a fuss about censorship and their right to freedom of speech. Freedom of speech does not equal freedom from consequences and I completely understand that, but the people who post content may not see it that way, which could cause problems for the government and social media companies.
 
A lot of the time it is the parents who should be banned from social media. If they looked up from their phones occasionally they might notice they have a child, could consider doing some parenting, and as a result might not only instil some standards, values and sense into their offspring, but also notice what their own child is up to and guide them to safety.

You can't stop kids looking at filth on the internet any more than you could stop 13-year-old me finding hard-core porn in a bush. It's just easier now. Although I do agree there is no excuse for the social media companies being so atrocious at challenging and removing harmful content in general, regardless of the recipient's age. With the money they make they could certainly do it properly, but they leave it to an algorithm that allows all sorts of fraud and defamation but can't understand Scunthorpe exists and bans you.
 
I fully understand your point. The bleeding heart excuses that social media companies use are exactly as you described.

I don't really give a toss how social media companies describe themselves. It's up to legislators to define what they are. It's just my view that they are publishers/broadcasters, and seeing how long they've got away with it and how ingrained this toxic nonsense now is in our culture, I've lost the argument.

To pretend that they are not broadcasters or publishers means that they must therefore be considered a method of communication. I don't get that argument either. You can get hate mail in the post, you can have threatening phone calls, and you can get porn images sent by text. But when's the last time anyone received a court summons via Tick Tock, your bank sent you a new PIN via Twitter, or your mortgage statement came through on Instant Gram? It's not called social communication, is it? The key word is media. And the rest of the media is regulated. The fact they've designed a business model that provides a platform for people to do whatever they like should be their problem and no one else's. No one forced them to host servers for profit that allow people to say whatever they want, so it really shouldn't be anyone else's concern. If some heat they can't stand was put on them, I'm sure they'd be forced to get out of the kitchen.

That's why the "freedom of speech" argument is so ridiculous. If it was wiped off the face of the earth tomorrow, no one would be less "free". I felt more free before it than I do now. I could make all the false complaints I like to OFSTED about the teachers who look after my children, and could ruin a teacher's life in the process, yet I'm free to create a fake profile and manipulate people into watching videos that incite child exploitation, knowing that the worst thing that will ever happen to me is that my posts will be deleted - eventually, anyway. If someone roughs up my children in a pub and they feel threatened, I can legally give the guy a good old-fashioned smack in the chops. If they use Zuckerberg's and Musk's servers to threaten them in their own home, then that's all cool, apparently.

It has nothing to do with freedom of speech. Social media platforms are exactly that: media platforms that provide people with a platform to publish their own media. As strange as it may sound to youngsters who have never lived without it (which I do find genuinely quite sad), there was a world before Mark Zuckerberg, Elon Musk and the Chinese government commercialised the way we talked to each other.
 
I agree with some of the points you make here, and having read it, I am coming around more to your point of view. On the platforms where people share things in a relatively untargeted way, such as Facebook, Twitter, Instagram, TikTok and the like, I fully agree that the companies themselves should take more responsibility for the safeguarding of their users. Having read your post, I've definitely come to agree with your reasoning around why social media is most certainly media and should be regulated in a similar way to broadcast media.

However, I think that with certain forms of social media such as WhatsApp and Discord, they do fall more under the umbrella of "communication", so I think you're entering a very grey area ethically by trying to police those. Many people would not take kindly to their private conversations being heavily policed.

And I think that is precisely the problem with social media governance. While people like you and me coalesce around the view that social media should be more heavily regulated due to its level of influence as a source of media, some argue that social media is not the same as regular media and that stronger controls would constitute censorship, curtailing people's right to freedom of speech and the like. People make arguments as to why social media shouldn't be regulated, and it becomes viewed as one of those grey areas that we don't really touch.

I feel that if we're not careful, we may be sleepwalking into a similar issue with AI in the years to come. AI, similar to social media, is a brilliant thing in many ways, but most certainly comes with a raft of negatives and ethical questions, and could be very dangerous if misused. If we don't introduce stronger governance around AI, I fear that we may have scenarios similar to those currently being grappled with on social media, but on an even grander scale. Things like deepfakes are becoming ever more convincing, so in the years to come, AI could generate all kinds of new, dangerous content. The likes of OpenAI could also evade accountability in a very similar manner; if someone created something dangerous with ChatGPT, for example, OpenAI could say "It's not our fault and it's none of our business. We only created the AI tool; what people do with it is entirely their responsibility". For this reason, I believe that AI needs to have strong governance around its use in a way that social media never has. I think we're possibly a bit late for that with social media, but we aren't with AI.
 
I must admit that I dreaded the likes of WhatsApp being mentioned, as at its core it is a form of communication. I think freedom of speech arguments about using platforms to broadcast dangerous content are nonsense; that is quite clear cut. But social media companies broadening into messaging apps does indeed blur the lines. I suppose one line that can be drawn is that to access someone via these methods requires some form of profile, phone number or other sign-up information. That shouldn't be allowed by the platforms for anyone under the age of at least 16 in my view, and it should be the host's problem to enforce it. But by and large you can't, and shouldn't, police what people discuss with each other in private. It's a tricky situation.

I share your fear with AI. Social media is still a relatively new thing and I remember watching it take hold like a car crash in slow motion. It's only in recent years that politicians have started to understand it, and by now it's too late. I fear we're going the same way with AI. It's out there and is changing the world as we speak. It's happening very fast and, like social media, we're basically relying on private gatekeepers living in foreign jurisdictions not to do bad stuff with it. I asked an AI chatbot recently if it was possible for it to fall into the wrong hands and what would happen if it did. After giving the usual assurances that its creators are a bunch of good guys, it admitted that an evil despot could easily use it to do bad things.

A very old-fashioned way of looking at it, but I see AI like nuclear bombs. It's all very well claiming that the atom bombs dropped on Nagasaki and Hiroshima ended the Second World War and have prevented warfare on that scale ever since. But evil dudes also now have this technology. The Americans are currently bombing Iran, a country that has a nuclear programme, and the Ukrainians are being hung out to dry simply because their invaders have a large arsenal of nukes - the very same guys who have used social media to interfere with Western democratic processes.

It's all got out of hand now and we're stuck with this destructive technology out in the wilderness, technology that genuinely has fallen into the hands of people we don't trust - all before we had a chance to debate the ethics and create laws to control it.

We can't have another situation where our elected governments are caught asleep at the wheel. It's over 80 years too late to do anything about nukes, 20 years too late to do much about social media. As AI changes the economy, the way we communicate and the way we live, is it already too late?
 
I think what is demonstrated here is the gulf in understanding of technology between adults and children, and that is the most concerning thing. I don't think technology should be restricted, but parents do have a responsibility to monitor what their children are doing and sending... though because of that gulf in understanding, this is, understandably, challenging. It would not be difficult for tech firms to get their heads together, lock down operating systems, and make it so that you can only get a Play/Apple/MS account if you are 16+, or have an account which needs parental approval and provides an overview of what said child is doing.

Ultimately parents are responsible for their children's behaviour be it online or in the 'real' world. I just feel that tech companies should be making it easier.
 
Don't think it should be banned at all.

Plus I'd be massively worried if they did try something like that, as they'd then want everyone 16+ to upload documents proving they were over the age of 16, which would probably end up turning into a massive data breach further down the line.
 
There's no doubt that smartphones are damaging society as a whole. We are all guilty of being glued to the things rather than concentrating on the world around us (except @rob666). Only the other day I was talking to a gentleman about escape lines at Eden Camp, and in the back of my mind all I could think was "I wonder what the LFC score is; would it be rude to check now?"

I have an autistic step-nephew, I will say. He is absolutely obsessed with his phone. His mum bangs on about it to him, but she's just as bad. The other week I was out and about with him, and he kept his phone away, right until she phoned him to nag him about something.


I'm all for kids having a phone, for security and that. Maybe get Apple and Android to offer basic features only: remove the app store, and put blocks on all social media. Can't be hard to legislate.

The biggest thing: parents need to put the phone down when around kids. Don't let them use them until at least the age of 8. I see toddlers with iPads in shops. Engage with them.
 