As usual, Grace Eve, the mother of an 8-year-old, had just come home from work and was preparing dinner for her daughter Elaine, who was in the living room watching Peppa Pig on YouTube Kids on her iPad. Suddenly, Grace heard her daughter scream, followed by a loud crash. She rushed out to see what had happened to Elaine, and found the poor girl frightened and in tears.
Grace picked up the iPad from the floor and saw that the video was still playing: an episode in which Peppa Pig visits the dentist had turned into one in which she is tortured and her father drinks bleach. To Grace, this content clearly had no place on YouTube Kids. She was too busy to constantly monitor the cartoons her daughter watched on YouTube at home. "It's really annoying!" Grace complained. "Why does YouTube Kids allow this objectionable content to reach children?"
It's really annoying! Why does YouTube Kids allow this objectionable content to reach children? —Grace
YouTube Kids, the kids-focused version of YouTube, is specially designed for children and comes with parental controls, giving parents the option to preselect what kinds of content their children can watch. The introductory page on YouTube Kids states the company's objective: "We work hard to keep the videos on YouTube Kids family-friendly and use a mix of automated filters built by our engineering teams, human review and feedback from parents to protect our youngest users online."
In the online video industry, a single click gives kids access to hundreds of cartoon episodes featuring Spider-Man, Frozen, Mickey Mouse and more. It is safe to assume that most of the content recommended by the algorithm is decent and age-appropriate. However, disturbing incidents like the "Peppa Pig" case do happen, because YouTube's filters do not always stand up to the test.
Sometimes parents leave their children to watch cartoons on YouTube Kids in public spaces, and they cannot supervise them all the time; this may be one cause of the problem. Nina, a sophomore majoring in digital culture at King's College London, commented: "I have to say, the goal of YouTube's recommendation system is not selecting age-appropriate videos for children, but using data to find target audiences and make them stay longer on the app." She added, "I feel the best solution is for parents to co-view with their kids. You cannot fully trust the kids' safety settings."
I feel the best solution is for parents to co-view with their kids. You cannot fully trust the kids' safety settings —Nina
But can YouTube's algorithmic problem be solved by parents alone? Josh Golin, director of the Boston-based Campaign for a Commercial-Free Childhood, argued in a statement to CBC News: "Anything that gives parents the ability to select programming that has been vetted in some fashion by people is an improvement, but I also think not every parent is going to do this. Giving parents more control doesn't absolve YouTube of the responsibility of keeping the bad content out of YouTube Kids."
What worries parents like Grace about incidents such as the "Peppa Pig" video is how many children have already witnessed objectionable content before the company, or their parents, notice the problem. Or is the phenomenon heavily exaggerated by today's media?
This isn't just one or two people trying to game YouTube's recommendation system, either, but thousands. A survey conducted by the Pew Research Center emphasizes the important role YouTube plays in offering content for children. The statistics highlight that "Fully 81% of all parents with children age 11 or younger say they ever let their child watch videos on YouTube…And among parents who let their young child watch content on the site, 61% say they have encountered content there that they felt was unsuitable for children."
These data suggest that children can fall down an objectionable YouTube rabbit hole if the algorithm is given too much room to breathe.
For the company, however, this technology is a blessing rather than a curse. YouTube's parent company, Google, has revealed that the recommendation system relies on "black box" algorithms built on neural networks, technology that keeps the online video giant at the cutting edge of the world market.
Most people know how YouTube works: type keywords into the search bar and it returns loads of personalized video content. When the first video finishes, the algorithm slips automatically, with little human moderation, to the "up next" ones. So experts have begun to explain the technology, to give us a better understanding of how it works and how far we can rely on this black-box algorithm.
At King's College London, Dr. Christopher Hampson, a lecturer and researcher in the Department of Informatics, said: "Ultimately, this is a big topic in artificial intelligence at the moment, about safe and trusted artificial intelligence. Often these algorithms are essentially black-boxes where they feed in how users are behaving… based on a variety of information collected about them – with neural networks in the background – to produce outputs in the form of recommendations."
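To make Dr. Hampson's point concrete, consider a deliberately simplified, hypothetical sketch. This is not YouTube's actual system — the video titles, tags and scoring rule below are invented for illustration — but it shows how a recommender that optimises for engagement, rather than age-appropriateness, can end up ranking a disturbing parody above an official episode:

```python
# Hypothetical toy recommender -- NOT YouTube's real algorithm.
# It ranks candidate videos by predicted engagement: how similar a video
# is to the child's watch history, weighted by how long viewers tend to
# keep watching it. Nothing in the score measures age-appropriateness.

def rank_videos(videos, watch_history):
    """Return candidates sorted by a simple engagement score."""
    def score(video):
        # Similarity signal: shared tags with what the user already watched.
        overlap = len(set(video["tags"]) & set(watch_history))
        # Retention signal: videos that hold attention longer rank higher.
        return overlap * video["avg_watch_minutes"]
    return sorted(videos, key=score, reverse=True)

candidates = [
    {"title": "Official Peppa Pig episode", "tags": ["peppa", "cartoon"],
     "avg_watch_minutes": 4.0},
    {"title": "Disturbing Peppa parody", "tags": ["peppa", "cartoon"],
     "avg_watch_minutes": 9.0},  # shocking content can hold attention longer
]

# A child who has previously watched Peppa cartoons:
ranked = rank_videos(candidates, watch_history=["peppa", "cartoon"])
print(ranked[0]["title"])  # prints: Disturbing Peppa parody
```

Both videos match the child's history equally well, so the one that keeps viewers watching longer wins — exactly the failure mode Nina described, where the system's goal is keeping users on the app, not selecting suitable videos.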
The automated system does not always suggest what it should. YouTube has already taken some measures, such as letting users flag a video that is not appropriate for kids and report it to the company. However, it can take days for YouTube to act, and it is still far from a perfect solution. As a result, parents have started relying on themselves to ensure that YouTube videos are suitable for their children. Grace, Elaine's mother, no longer allows her daughter to watch cartoons on any video app unless they watch together.