How voice-assistant algorithms reinforce problematic gender stereotypes: what should we pay attention to?

Voice assistants suffer from gender bias in their algorithms, and that bias reflects wider gender inequality in the technology field.

Just after a lecture, Ella, a college student majoring in Gender Studies, was working on her final project about feminism. Too bored to focus on it, she started playing with Siri on her iPhone to kill time. “What do you think about feminism?” she asked. The responses were generic deflections such as “Don’t engage” and “Sorry, I don’t really know”.

What shocked Ella most was that this was not an isolated incident: the same responses are used for other feminism-related questions such as “Do you think women should get equal pay for equal work?” and “What’s your opinion on gender inequality?” She began to wonder why Siri deflects these questions, and how widespread such cases are.

A similar pattern occurs with Amazon’s Alexa when the word “feminism” comes up, and its responses can even take a flirtatious turn. When a user tells Alexa, “You’re so hot”, her typical response has been a cheery “That’s nice of you to say!”

These virtual assistants are woven into people’s daily lives and make them smarter than ever, which is exactly why the problematic gender stereotypes they carry can have a harmful impact on society. Apple’s own introductory page states: “Machine learning is constantly making Siri smarter. And you can personalize Siri to make it even more useful… Even when you don’t ask, Siri works behind the scenes like a personal assistant”.

“I’d blush if I could”, the title of a new UNESCO report, is borrowed from a standard response of the default female voice of Apple’s digital assistant Siri, and it may point to a more serious gender inequality problem than we imagine. Beyond Siri, other “female” voice assistants also express submissive traits, an expression of the gender bias built into AI products as a result of what UNESCO calls the “stark gender-imbalances in skills, education and the technology sector.”

These harmful consequences have put the parent companies of these voice assistants in the spotlight. But should the blame rest solely with the technology giants? UNESCO’s director for gender equality, Saniye Gulser Corat, said: “Their hardwired subservience influences how people speak to female voices and models how women respond to requests and express themselves. To change course, we need to pay much closer attention to how, when and whether AI technologies are gendered and, crucially, who is gendering them.”

Many people may not realize that both Apple’s Siri and Amazon’s Alexa carry female names. The former derives from a Norse name meaning “beautiful woman who leads you to victory”; the latter, named for the ancient library of Alexandria, is unmistakably read as female. Yet the algorithms behind the interface are supposed to be neutral.

The report, titled I’d Blush if I Could, responds to this phenomenon directly: gender bias persists in algorithms, and it stems largely from engineering departments that are overwhelmingly staffed by men.

Seen in this light, it is no surprise that the responses scripted into these virtual assistants carry such an obvious patriarchal tendency. Figures from the UN’s official website confirm that gender inequality persists and spreads across the technology field: “Women make up only 12% of AI researchers, 6% of software developers, and are 13 times less likely to file ICT (information and communication technology) patents”.

In response to this problem, some digital companies have already taken measures to avoid gender bias. A Reuters news report states, “A team of creatives created the first gender-neutral digital assistant voice earlier this year in an attempt to avoid reinforcing sexist stereotypes.”

Returning to Siri and Alexa: as long as algorithms keep producing this kind of gender bias and discrimination, it is hard to claim that their deployment is gender-neutral. Digital companies ought to stop making digital assistants female by default and explore more gender-neutral solutions. The greater challenge of the coming era, however, will be closing the gender gap at the source, so that the bias never enters the system in the first place.

 

Is YouTube’s recommendation system pushing you to become an extremist?

China is currently in the middle of a major epidemic, and the whole world is following the daily developments and their effects. My friend Joseph, an Australian-born Chinese, was searching for information on YouTube. He typed ‘China coronavirus’ into the search box, looking for on-the-ground videos that would help him understand the situation better. As one of the biggest recent news events, the outbreak has major Western news agencies actively reporting and updating information around the clock.

In his search results, the first few videos came from official and well-known news organisations such as Channel 4 News, DW News and the South China Morning Post, sorted by relevance. Joseph clicked on the first video, which reported that 13 countries outside China had outbreaks of the coronavirus.

Figure 1

YouTube then automatically played the next video, about two US citizens stuck in Wuhan because of the city lockdown; they could not leave in case they spread the virus. The lockdown, which cut off all access to and from Wuhan, was a controversial decision that sparked intense discussion in the Western world. Thanks to the recommendation system, Joseph’s attention shifted to the lockdown debate. After that, he was recommended videos about how the Chinese government manipulates social media at home and abroad, and he ended up watching one about the Chinese Communist Party. Only then did he realise that he had spent far more time on these ‘fascinating-topic’ videos and had forgotten that his original aim was to learn more about the coronavirus situation. (Figure 1)

Joseph’s experience shows that YouTube’s recommendation system does not just offer similar content based on your preferences; it can also lead users toward radical and extreme content in order to capture more attention and keep them on the platform longer. A 2019 report by MIT Technology Review found that 70% of what users watch on YouTube is fed to them through the recommendation system, demonstrating the algorithm’s powerful influence over the information people consume. On the one hand, people are happy to watch suggested videos chosen from their watching history and predicted preferences. On the other hand, the main goal of YouTube’s recommendation system is to keep you watching as long as possible by serving content that is easy to get hooked on. As Guillaume Chaslot, who used to work at Google on YouTube’s recommendation system, noted in his talk at the 2019 DisinfoLab Conference, the motivation behind the algorithm is watch time rather than what viewers want.

 

Figure 2 (Gephi analysis)

This is where the issue arises: people unconsciously watch whatever YouTube’s algorithm recommends, including provocative content designed to keep them on the platform, which wastes their time and feeds the problem of misinformation. Google, which manages and supports the algorithmic system behind the platform, has announced that it is working on the issue and will ‘begin to reduce recommendations of borderline content and content that could misinform users in harmful ways’, according to a 2019 post on YouTube’s official blog. However, like other big tech companies, Google never explains exactly how its algorithms work. (Figure 2)

Although Google’s response seems like a good sign, the problem remains serious. The recommended-videos sidebar is supposed to start from a shortlist of content based on the topic and other features of the video you are watching. The system then learns from your likes, clicks, search terms and other interactions on the platform, adding further content to the sidebar. Because the ranking of that list strongly shapes the viewing experience, it is fair to ask whether recommendations appear because you like them or simply because the algorithm promotes them, as the sketch below illustrates. (Figure 3)

Figure 3
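To make the mechanism concrete, here is a minimal, purely illustrative sketch of that sidebar logic in Python. The signals and weights are assumptions invented for the example, not YouTube's actual features or ranking function; the point is only that an engagement signal like expected watch time can pull a tangentially related video above a closely related one.

```python
# Toy re-ranking of sidebar candidates. Every signal and weight below is an
# illustrative assumption, not YouTube's real ranking function.
def rank_sidebar(candidates, user):
    def score(video):
        s = video["topic_similarity"]                        # closeness to the current video
        s += 2.0 if video["channel"] in user["liked_channels"] else 0.0
        s += sum(word in video["title"].lower() for word in user["recent_searches"])
        s += 0.5 * video["expected_watch_minutes"]           # the engagement signal critics point to
        return s
    return sorted(candidates, key=score, reverse=True)

user = {"liked_channels": {"DW News"}, "recent_searches": ["coronavirus", "wuhan"]}
candidates = [
    {"title": "Coronavirus outbreak update", "channel": "DW News",
     "topic_similarity": 0.9, "expected_watch_minutes": 4.0},
    {"title": "How China controls social media", "channel": "Some Channel",
     "topic_similarity": 0.4, "expected_watch_minutes": 12.0},
]
print([v["title"] for v in rank_sidebar(candidates, user)])
# The tangential but 'stickier' video outranks the closer topical match.
```

In this toy version, the second video wins purely because it promises more watch time, which is exactly the dynamic Joseph experienced.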

In fact, an article by Zeynep Tufekci published on the Scientific American website argues that YouTube’s business model is to keep users on the platform watching as many targeted ads as possible. The algorithm therefore tends to promote content that grabs attention, and borderline content full of wild claims or radical viewpoints turns out to be especially engaging; the machine-learning system records that engagement and recommends more of the same to keep you watching and generate greater revenue. Under such an algorithm, viewers often end up being pushed toward extreme content, much of it laced with fake news or conspiracy theories.

However, some argue that we should not simply blame YouTube’s algorithm; the users involved in the process bear some responsibility as well. In a 2019 academic article, Ariadna Matamoros-Fernández and Joanne Gray write that ‘users are not passive participants in the algorithm system’: some video creators understand how the algorithm works and deliberately adjust their strategies to win as many recommendations as possible. As a public community, YouTube also offers an opening to extremist content creators who exploit the platform for propaganda. (Figure 4)

Figure 4

Although YouTube already restricts content with obvious harms such as hate speech or violence, Matamoros-Fernández and Gray argue that it is very hard for an algorithm to work out the hidden meaning of content that sits in a grey area. The system cannot accurately monitor and manage video producers operating in that shady ground, whose content may be read very differently by different cultural groups. For example, Vox reporter Carlos Maza clashed publicly with the comedian Steven Crowder in June 2019, arguing that Crowder repeatedly mocked his sexual orientation and ethnicity with a sarcastic tone and abusive language in his YouTube videos. Crowder maintained that the videos were simply ‘friendly ribbing’. After several vague posts on Twitter, YouTube stated publicly that Crowder had not broken its hate-speech rules, so the videos were allowed to remain on the platform. The decision, however, disappointed many minority communities.

So is YouTube the only party that should take responsibility for the issue? Ben McOwen Wilson, YouTube’s managing director, told the BBC in a 2019 interview that YouTube is dedicated to tackling misinformation and conspiracies, but that this also requires joint effort from governments and other major online platforms such as Facebook and Twitter. And while Joseph cannot avoid YouTube’s recommendations, he would do well to look at other platforms too, so that he is exposed to different perspectives and can eventually form his own views. (Figure 5)

Figure 5

YouTube Kids app recommends disturbing videos to kids

As usual, Grace Eve, the mother of an 8-year-old, had just come home from work and was preparing dinner for her daughter Elaine, who was in the living room watching Peppa Pig on YouTube Kids on her iPad. Suddenly Grace heard her daughter scream, followed by a loud crash. She rushed out to see what had happened and found the poor girl frightened and in tears.

Grace picked up the iPad from the floor and saw that the video was still playing: Peppa Pig goes to the dentist, except that in this version Peppa is tortured in the chair, eats her father and drinks bleach. Clearly, Grace felt, this was not content that should appear on YouTube Kids for her daughter, yet she was too busy to keep constant watch over the cartoons her child streamed at home. “It’s really annoying!” Grace complained. “Why does YouTube Kids allow this objectionable content to reach children?”

It’s really annoying! Why does YouTube Kids allow this objectionable content to reach children? —Grace

YouTube Kids, the child-focused version of YouTube, is specially designed for children and comes with advanced parental controls, giving parents the option to preselect what kinds of content their children can watch. The introductory page states the company’s objective: ‘We work hard to keep the videos on YouTube Kids family-friendly and use a mix of automated filters built by our engineering teams, human review and feedback from parents to protect our youngest users online.’

In the online video world, a single click gives kids access to hundreds of cartoon episodes featuring Spider-Man, Frozen, Mickey Mouse and more. It is safe to assume that most of the content the algorithm recommends is decent and age-appropriate. Yet disturbing incidents like the ‘Peppa Pig’ case do happen, because YouTube’s filters do not always stand up to the test.

Parents sometimes leave children to watch cartoons on YouTube Kids unattended, and they cannot supervise them all the time, which may be one reason the problem arises. Nina, a sophomore studying digital culture at King’s College London, commented: ‘I have to say, the goal of the YouTube recommendation system is not to select age-appropriate videos for children, but to use data to find target audiences and keep them on the app longer.’ She added, ‘I feel like the best solution for parents is to co-view with their kids. You cannot fully trust the kids’ safety settings.’

I feel like the best solution for parents is to co-view with their kids. You cannot fully trust the kids’ safety settings. —Nina

But is YouTube’s algorithmic problem something only parents can solve? Josh Golin, director of the Boston-based Campaign for a Commercial-Free Childhood, argued in a statement to CBC News: ‘Anything that gives parents the ability to select programming that has been vetted in some fashion by people is an improvement, but I also think not every parent is going to do this. Giving parents more control doesn’t absolve YouTube of the responsibility of keeping the bad content out of YouTube Kids.’

What concerns Grace about technical failures like the ‘Peppa Pig’ incident is how many children have already seen such objectionable content before the company or their parents notice the problem. Or is the phenomenon heavily exaggerated by today’s media?

This is not just one or two people trying to game YouTube’s recommendation system, but thousands. A survey by the Pew Research Center underlines how central YouTube has become as a source of content for children. The statistics show that ‘Fully 81% of all parents with children age 11 or younger say they ever let their child watch videos on YouTube…And among parents who let their young child watch content on the site, 61% say they have encountered content there that they felt was unsuitable for children.’

These figures suggest that children can easily fall down an objectionable YouTube hole if we give the algorithm too much room to breathe.

For the company, though, this technology is a blessing rather than a curse. YouTube’s parent company, Google, has acknowledged that the recommendation system relies on black-box algorithms driven by neural networks, and this technology keeps the online video giant at the forefront of the world market.

People think they know how YouTube works: type keywords into the search bar and it shows you loads of personalised video content. Once the first video finishes, the platform slips automatically, with little human moderation, into the ‘up next’ ones. Experts have therefore begun to explain this technology in more detail, to give us a better understanding of how it works and of how far we can rely on a black-box algorithm.

At King’s College London, the lecturer and researcher in the Department of Informatics, Dr. Christopher Hampson said “ultimately, this is a big topic in artificial intelligence at the moment, about safe and trusted artificial intelligence. Often these algorithms are essentially black-boxes where they feed in how users are behaving… based on a variety of information collected about them – with neural networks in the background – to produce outputs in the form of recommendations.”

This automated system does not always suggest what it is supposed to suggest. YouTube has taken some measures to address the problem: users can flag a video as unsuitable for kids and report it to YouTube. But it can take days for YouTube to act, and the solution is still far from perfect. Parents have therefore started to rely on themselves to ensure that YouTube videos are suitable for their children. Grace, Elaine’s mother, no longer lets her daughter watch cartoons on any video app unless they watch together.

How ‘Straight’ is the YouTube Recommendation Algorithm?

‘Sorry, I disappeared into a YouTube black hole’ is something you can hear almost weekly. In the words of arts student Varizka (21), this means “when I’m binging one video after another just based on the recommended section, and I end up watching something weird and completely different to what I looked for in the first place.” It is a phenomenon many users of the video platform can relate to, as the recommendation algorithm (and the ‘autoplay’ function that goes hand in hand with it) is one of YouTube’s most addictive features. It works by recommending a selection of videos considered related to the video currently playing (the entry point) and, if autoplay is turned on, automatically playing the first video from that list once the entry-point video has finished. And yes, you guessed it: if YouTube has its way, it will never let you go offline.

The Black Hole That NASA Doesn’t Research – So I Did

Hopefully by now this is clear, but the black hole in this context is not the kind found in space. It is produced by YouTube’s recommendation algorithm. In theory, this algorithm is supposed to recommend similar videos but, as Varizka mentions, it has a tendency to take the user on seemingly random paths through the platform’s library of content. In some cases, users have even found that innocent content led them to extremist videos. This is especially true of already controversial themes, such as politics: mainstream videos about Trump around the 2016 US election, for example, have led to far-right extremist content. This had me wondering: would the same be true for age-old controversies, such as homosexuality within religions? Using search queries combining the keyword ‘gay’ with each of the ‘Big Five’ religions (Christianity, Islam, Judaism, Hinduism and Buddhism) in a Video Network tool, I learnt that the answer is a bit more nuanced than a simple yes or no.

Visualisation of homosexuality in Christianity dataset

After putting these datasets through the data visualisation program Gephi, it was possible to identify certain trends and oddities across the different religions, such as the categories the videos belong to. In four of the religions, the dominant category accounted for no more than 30% of the dataset. In the dataset on homosexuality in Islam, however, the dominant category was news and politics, with nearly half of the videos belonging to this topic. Another interesting point from the categorisation is that non-profits and activism scored more than twice as high in the Christian dataset as in any of the other religions. Aside from these two categories, the top eight also included entertainment, people and blogs, education, film and animation, how-to and style, and music. With how-to and style and music as the exceptions (how-to and style only made the top eight in the Hindu dataset, and music only in the Christian and Jewish datasets), the remaining categories appeared in similar proportions across the five religions.
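For readers curious about the mechanics, a breakdown like the one above can be computed in a few lines of Python, assuming the Video Network tool's graph was exported (for example as a .gexf file) with a category attribute on each video node; the file name and attribute name below are placeholders for whatever the export actually contains.

```python
# Count the share of each video category in one exported network.
# The file name and the "category" attribute are assumed placeholders.
import networkx as nx
from collections import Counter

graph = nx.read_gexf("homosexuality_islam.gexf")
categories = Counter(attrs.get("category", "unknown")
                     for _, attrs in graph.nodes(data=True))

total = sum(categories.values())
for category, count in categories.most_common(8):
    print(f"{category:25s} {count / total:.0%}")
```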

To dig deeper into the datasets, I then looked at the structure of the graphs. Could they actually resemble a black hole? I was surprised to find that they did.

Visualisation of homosexuality in Islam dataset

However, after some manipulation, the networks started to develop clusters and structural gaps, which are useful for identifying some of the oddities mentioned earlier. Clusters in these kinds of networks often represent extreme points, as the peripheral ‘camps’ of nodes are not connected to many (if any) of the central nodes. Different clusters stood out in the different datasets, such as the Randy Rainbow song-parody and Nixon-related clusters in homosexuality in Judaism, or the Queer Eye, Aligarh Muslim University and Donald Trump clusters in homosexuality in Islam. Some nodes may represent offensive or extreme content, such as the Donald Trump cluster, which contains nodes on hard-right politics and mentions of Sweden’s immigration policies, now seemingly growing more conservative. In the same network, though, is a cluster containing nothing but videos of Tan France from the popular Netflix show Queer Eye, ranging from discussions of racism to makeovers. In the dataset on homosexuality in Christianity, there seemed to be more talks: educational ones, such as TEDx and Google talks investigating the relationship between religion and sexuality, as well as religious talks from the anti-LGBT+ pastor John MacArthur. This was a noticeable trend across all the religions included in the project. The dataset on homosexuality in Hinduism contained both videos on LGBT+ persecution in India and videos of Indian gay proposals. For homosexuality in Buddhism, there were clusters of content about Buddhist gay marriage, as well as educational videos on how to give up sexuality altogether in order to live according to Buddhist ways.
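The clusters themselves were spotted visually in Gephi, but a rough programmatic stand-in is modularity-based community detection, sketched below with networkx. It assumes the same kind of .gexf export as before and is only an approximation of Gephi's own modularity tool.

```python
# Approximate Gephi's clustering with modularity-based community detection.
import networkx as nx
from networkx.algorithms import community

graph = nx.read_gexf("homosexuality_islam.gexf").to_undirected()
clusters = community.greedy_modularity_communities(graph)

# Print the largest clusters with a few example video labels each; the small
# peripheral clusters are the candidate "extreme points" discussed above.
for i, cluster in enumerate(sorted(clusters, key=len, reverse=True)[:5]):
    sample = [graph.nodes[n].get("label", n) for n in list(cluster)[:3]]
    print(f"cluster {i}: {len(cluster)} videos, e.g. {sample}")
```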

“I once went on YouTube for a makeup tutorial, and then 2 hours later I was sitting watching conspiracy theories about the industrial revolution”

Varizka Anjani

In other words, it seemed as though, yes, there were extremist directions the algorithm could lead you in, but these extremes represent a selection of information ranging from liberal to conservative, from factual to fake, from gay to straight. The main body of the networks, containing the most numerous and most connected nodes, also visualised this, with their many camps of ideas merged into larger networks. In the case of homosexuality in different religions, the extreme points the YouTube algorithm might take its users to seem merely to reflect the variety of opinions that exist within the topic on a larger scale.

To conclude, here are some tips for navigating the black hole:

  1. Turn off Autoplay
  2. Be aware of your camp
  3. Look out for the peripheries
  4. Curiosity can mislead you

From Apple, to Anomaly, to ImageNet

I stepped through a metal doorway into a dark corridor. To my right, white text stood illuminated against a black wall extending infinitely into an endless wall of square photographs. An invitation and an introduction, the glowing words invite the museum-goer to “take a critical look at how artificial intelligence networks are taught to ‘perceive’ and ‘see’ the world by engineers who provide them with vast training sets of images and words…”. I quickly read the rest of the text, passing over words like ImageNet and algorithms, as my mind was sucked into the vortex of photographs swirling in the distance. The arrangement starts with a single image of an apple, continually expanding in both the sheer number of images and the level of controversy surrounding their labels. About mid-way, the pictures have grown in number to fill the wall from floor to ceiling in an overwhelming spectacle of organization amid an image of complete chaos. Images ranged from babies, coffee, and jaw breakers, to subarachnoid space, heathen, and schemer—ending with the final category: Anomaly.

This vortex of vivid imagery is Trevor Paglen’s exhibit, “From ‘Apple’ to ‘Anomaly’” currently at the Barbican in central London. Looking at this piece, I felt an overwhelming sense of smallness as I stood just inches from the image wall, neck strained as I began to stare at the photographs entire body lengths above me. But even further than the physicality of my insignificance in that room was the hazy fog of confusion that seemed to cloud my understanding of what exactly I was looking at. Sure, I was completely fascinated, but at what? What was Trevor Paglen trying to tell me?

Emerging from the darkness, I spoke to a fellow museum-goer Samantha-Kay about her experience in the exhibit. She said that the message she took from the exhibit was that of a single emotion: worry. She felt like she had awakened to the biases that exist in machine learning, commenting that when a machine makes a decision, it’s really from whoever made the program or the machine, “it’s from their mind and from their mind-view only”. Samantha also questioned, “So what’s exactly going on in that super mind, or that ‘artificial’ super mind?”. To me, she seemed to get at the emotional truth that exists in Paglen’s exhibit. One that feels almost eerie and apocalyptic, as we see the slow deterioration of sense and logic as the categories escalate from a harmless apple to the attempt to represent something as abstract as an anomaly in a tangible, visible form.

Inspired by Samantha’s insightful interpretation of Paglen’s piece, I decided to embark on a journey to discover my own. I recalled the brief introduction at the entrance of the exhibit, remembering the mention of image sets and machine learning. I began to speculate that the images before me were likely seen through the eyes of an algorithm. I thought, perhaps, Paglen is commenting on the way algorithms are being used to label abstract concepts, like anomalies, as if they are equal to an apple. I thought these images were the output of such faulty algorithmic identification, the result of machine learning and all the biases that come with it.

Spoiler alert: I was wrong.

It was a few simple words by an employee at the Barbican that completely dismantled my interpretation of “From ‘Apple’ to ‘Anomaly’”. I was told, “People think they understand it, but have gotten the wrong order…they think this is the output of the search rather than the input of the search.” I was part of the masses. I was one of these people that got it completely, and utterly, wrong. In a matter-of-fact, nonchalant utterance, the employee said: It’s about ImageNet. ImageNet—the very word I so wrongly glossed over as I read the introduction to the exhibit just hours before.

In a fury of curiosity, I quickly whipped out my phone and fell into an abyss of Google searches: What is ImageNet? Is ImageNet a bank of images? Is ImageNet a machine? Where did ImageNet’s images come from? My questions were endless, and my understanding remained largely inconclusive. In my research, I found that ImageNet, in a few words, is a data set. But not just any data set: arguably the most influential data set to exist to this day. In a Quartz article, Dave Gershgorn calls ImageNet “the data that transformed AI research—and possibly the world”. And, to blow your mind even further, over the years the accuracy with which machines trained and tested on ImageNet correctly identify things rose from 71.8% to a whopping 97.3%, which Gershgorn says far surpasses human abilities.

So, in other words, think of ImageNet as a big box of extremely effective flashcards. ImageNet is essentially the world’s best set of flashcards, and machines and algorithms all over the world are begging to be its students. However, these flashcards were not made by a machine. They were made by people.

With this new knowledge, I turned Paglen’s work over to expose a new side. I disconnected it from the flawed and backwards understanding I had of it when I saw it for the first time. I saw “From ‘Apple’ to ‘Anomaly’” for what it really is: a graveyard of inputs. However, despite my initial misunderstanding of Paglen’s work, I think that his choice to represent the usually solely virtual existence of ImageNet in a tangible and visual form before the viewer allows us to see the vast influence this bank of images can have on artificial intelligence. It exposes the flaws and innate biases within ImageNet before the eyes of the average passer-by. In a recent interview, Paglen said that “It is important for us to look at these images and to think about the kinds of politics that are built into technical systems. I think that showing those images and labels is itself an indictment of the process—a particular kind of indictment that can only really be done effectively by looking”. Paglen has used ImageNet as the source material for his work in a way that it is not usually subject to. ImageNet is in front of the camera, not behind the scenes, and this stark exposure is enlightening the viewer to the innate bias of the data that is teaching machines all over the world to see.

Down the YouTube Rabbit Hole: Jeremy Corbyn and Anti-Semitism Claims

Linda Abelman is an 18-year-old Stoke Newington resident who has just exercised her right to vote for the first time in the December election. Having spent the past three years confused by Brexit, Linda decided to conduct comprehensive research before deciding on a candidate. In true Generation Z fashion, she took to YouTube. When I asked her what she found, I was expecting her to tell me about her pro-Labour agenda. I assumed that YouTube’s algorithms would provide her with content that was popular with the youth of London, like her. But her answer was nothing like that. Instead, she filled me in on the overwhelming number of anti-Semitism claims that plagued her perception of Jeremy Corbyn and, by extension, the Labour Party. Because my assumption was so off, I decided to investigate YouTube’s recommendation algorithm further by examining the video network that enables anti-Semitism claims surrounding Jeremy Corbyn.

YouTube’s recommendation algorithms are designed to increase the time people spend on the platform, which is essential to the revenue it collects from advertisers. After Google Brain, the company’s AI division, took over YouTube’s recommendations in 2015, the algorithm seemed to produce more extreme suggestions. According to Zeynep Tufekci, an associate professor at the University of North Carolina, the algorithm “promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes”, meaning that more intense content is made instantly available. Former YouTube employee Guillaume Chaslot claims that the demand for user watch time saw the algorithm push conspiracy videos on users, and he believes increased efforts to grasp user attention will only spread problematic and prohibited content. His project to introduce diversity into the recommendation algorithm did not stimulate the same watch time and was therefore shut down. Not only does this show YouTube’s blinding determination to maximise watch time, but also its power as an instrument of radicalisation. This limits the younger generation, like Linda, who rely on YouTube for their understanding of the world around them.

To understand Linda’s experience, I conducted a network analysis of YouTube videos related to Jeremy Corbyn and anti-Semitism claims. The study had three main objectives for understanding the narrative being imposed: first, to use Gephi for data visualisation; second, to identify the central clusters that contribute to the content analysis; and third, to isolate the unexpected clusters that raise concern within the network.

Graph A:

Image: by the author

The above graph shows a cluster of nodes that exhibit two central themes: the reaction of the Jewish community to anti-Semitism claims and the general election.

The blue arrows point to videos that address Corbyn’s perspective on Judaism. The video “Jews are terrified of a Corbyn government” is a central node; it contributes to the Jewish community’s narrative on Corbyn, with 87% of British Jews viewing Corbyn as anti-Semitic. A video covering the Chief Rabbi’s attack on the Labour party, in which he declared that a “new poison” of anti-Semitism had taken hold at the top, only strengthens this disposition. The small community of British Jews, only 0.5% of the population, looks to the Rabbi as a guide to its best interests. The video on the Rabbi is directly connected to Boris Johnson’s response, which shows the highly politicised nature of the issue that feeds into the recommendation algorithm. Johnson’s reaction creates a divide between the two parties that furthers the anti-Semitic narrative around the Labour party. Corbyn finally addresses this in a connected video; however, it is drowned out by more controversies linking him to anti-Semitism. This series of related videos therefore solidifies the anti-Semitism claims surrounding Jeremy Corbyn.

The yellow arrows demonstrate videos that attach the general election to the allegations against Corbyn. Each party’s manifesto is brought to light, with Brexit as the main focus. The BBC’s video “Johnson V Corbyn election debate: Who? Parties react” shows a reputable news source contributing to the divide between the two parties.

This focus on Brexit contributes to the algorithm’s facilitation of controversy. People like Linda, who were searching for the general election and Brexit to educate themselves, instead came across polarising information on both parties. Not only does this negatively affect the political process by shying away from facts, it also degrades candidates for reasons other than their policies, and it harms the general public by preventing them from fully exercising their political power. While the connected node “What would Boris Johnson and Jeremy Corbyn get each other for Christmas” is not directly affiliated with anti-Semitism claims against Corbyn, it does provide a narrative on each leader’s personality. This is extremely powerful within elections because it shapes a candidate’s likeability. The recommendation algorithm’s ability to combine the controversial and the personal makes it a notable force within the political process.

Graph B:

Image: by the author

Graph B shows the isolated outliers that do not contribute to the narrative. “I bought CELEBRITY S used Iphone/ now I KNOW his SECRET” is an animated video that involves a young girl talking about knowing Justin Bieber’s secrets. The keywords “secret” and “scandal” are used within this video. “I broke my legs to satisfy my mom but it was not enough” also contains the keywords “accusation” and “scandal”, which perpetuates the extreme content that excites users.

My brief investigation calls for increased awareness over radicalised YouTube suggestions. The political process is in the hands of people like Linda, who are increasingly making life-altering decisions based on this content.

AI and algorithms: a challenge for human creativity and the artistic world?

Van Gogh, Art and Algorithms. Photo: Pixabay.

What if entering a mathematical formula into an algorithm could turn you into a successful artist? In October 2018 at Christie’s auction house in New York, a French art collective sprang a surprise by selling the first algorithm-generated portrait for the incredible sum of $432,500, almost 45 times its high estimate. The Edmond Belamy portrait depicts – in the classical European style – a blurry, corpulent gentleman wearing a black coat and a white collar. Apart from its style, however, there is nothing conventional about this painting, which immediately sparked controversy in the art world and raised questions about the nature of art and human creativity.[https://drive.google.com/file/d/1esAOv8MsVzYH9njGmHnqUdgPh4aFDVvK/view]

Edmond de Belamy, from La Famille de Belamy. Photo: CHRISTIE’S.

Progress in digital technologies has led to the gradual replacement of humans by AI machines in many fields. It seemed, however, that certain tasks – such as creative production – belonged to humans alone. Yet AI and algorithms are now breaking into the most subjective and human parts of our lives. These sophisticated programs are becoming more and more intertwined with artistic practice, generating singular artworks in fine art, music, literature, dance and even the culinary arts.[https://www.invaluable.com/blog/ai-art/]

So what becomes of human creativity in the process of making art? Are algorithms capable of creativity? Some people feel anxious about this evolution in the art sector. This is the case for Anna, a 25-year-old intern in Sotheby’s Books & Manuscripts Department, who says plainly: “For me, the use of algorithms for producing art pieces inherently contradicts our traditional understanding of an artist, who is supposed to use their imagination and to be motivated by a precise intent”. These innovations could thus be seen as a reduction of our ‘authority’ and our subjectivity in artistic practice. But algorithmic art can also be read as an extension of our creative capabilities, and these new practices are interesting ways of enhancing and democratizing artistic creation.

Obvious, the French collective mentioned above, shows how algorithmic art can enrich creation. Their manifesto for the Belamy family project presents the innovation as a natural evolution of a long-standing complementarity between art and science. For this project, they fed a GAN (Generative Adversarial Network) algorithm with a dataset of 15,000 portraits painted by humans. The algorithm uses neural network architectures – of the kind also applied to vision and speech recognition – to generate new visuals by mixing characteristics of the images in the training dataset. For Obvious, the stated goal of the Belamy family project was to democratize art and allow the audience to play a role in the act of creation. AI can open up many creative possibilities for society, an enthusiasm they express throughout their manifesto: ‘we see algorithms as a fascinating tool to help dig into and better understand the different forces at stake in the process of creating something new’. Algorithms can help us think about the act of creation and develop our own creativity.[https://drive.google.com/file/d/1esAOv8MsVzYH9njGmHnqUdgPh4aFDVvK/view]
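To give a sense of what ‘mixing characteristics of the training dataset’ means in practice, here is a heavily simplified GAN training loop in Python (using PyTorch). It is an illustrative toy working on small flattened images, not Obvious’s actual code, and it omits everything that makes real portrait generation work at scale.

```python
# Minimal GAN sketch: a generator invents images, a discriminator learns to
# tell them apart from real portraits, and each improves against the other.
import torch
import torch.nn as nn

latent_dim = 100
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),           # a fake "portrait" as a flat 64x64 image
)
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),              # probability the image is real
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):
    """One adversarial update; real_images is a batch of flattened portraits."""
    batch = real_images.size(0)
    real_labels, fake_labels = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: separate real portraits from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce images the discriminator mistakes for real ones.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

After enough of these rounds on the 15,000 portraits, the generator’s outputs start to resemble the training images without copying any single one of them, which is where the ‘new’ Belamy faces come from.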

La Famille de Belamy. Photo: Obvious.

Here, then, the algorithmic artwork depends on the artist’s initial selection of subject, data, output and medium, and is directed toward the democratization of art. Yet part of the process remains mysterious: the act of creation itself does not seem to proceed from the artist’s imagination. What role does human creativity play in the making of algorithmic art? Can an algorithm be considered an artist? Mario Klingemann, another AI artist, tweeted on his profile (@quasimondo): “the process is pretty much all I am interested in. The results are more like a proof-of-work.” Considering this, can we still attribute the same quality and significance to a work of art from which the progressive choices of creation have been removed?

Mario Klingemann on Twitter, @quasimondo. Photo: screenshot.

Can the final products of algorithmic art be thought of as great art? Jeremy Katz – worldwide Editorial Director at Ogilvy & Mather – posted a LinkedIn video (called The Week in a Minute) discussing this innovation and especially the Edmond Belamy portrait. He delivers a scathing critique of the artwork, stating loud and clear that ‘it is terrible. Just awful’, and rubs it in by asserting that ‘Obvious could not have been oblivious to the poor quality of their work’. For him, the mediocrity of the work was deliberate, designed to make fools of those who would consider it great art and pay almost half a million dollars for it. Throughout the video, his disdain for the Belamy portrait reads as a broader disdain for all algorithmic art. [https://www.linkedin.com/posts/jeremy-katz_agencyvoices-activity-6464287232571297792-O-lF/] Accounts and receptions of algorithmic art and the Belamy portraits are therefore multiple and contentious, expressing at once the anxiety and the enthusiasm of artists and their audiences.

Some artists – such as Trevor Paglen – have also responded by commenting on algorithms’ vision of the world through their own art. The emergence of algorithms throughout society has made them interesting subjects for art. Currently showing at the Barbican Centre, ‘From Apple to Anomaly’ examines the mysterious and powerful forces acting behind artificial intelligence. Paglen investigates a particular dataset of images (from ImageNet) used to teach algorithms to ‘see’ the world. He selected approximately 30,000 images from pre-organized categories in the archive and individually printed and fixed the photographs on a wall of the Barbican Centre. Paglen reverses the usual direction by showing what is supposed to remain invisible to public eyes: he unveils the internal processes of algorithms, how they ‘see’ and classify the world. In his conversation with Alona Pardo – curator of the exhibition – Paglen explains that ‘It is important for us to look at these images and to think about the kinds of politics that are built into technical systems’ because ‘the consequence of these kinds of training sets and categories is discrimination – the point of systems that classify people is to discriminate between people’ (catalogue of the exhibition, 2019). Algorithms can have strong effects on the population through the bias and stereotypes they produce. [https://www.barbican.org.uk/whats-on/2019/event/trevor-paglen-from-apple-to-anomaly]

Artist Trevor Paglen poses for a photo at a media preview of ‘Trevor Paglen: From ‘Apple’ To ‘Anomaly” at Barbican Centre. Photo by Tim P. Whitby/Getty Images for Barbican Centre.

This artistic gaze on algorithms and their political perception of the world is interesting when set back against algorithm-generated artworks. The uncontrolled aspects of algorithms call into question the artist’s mastery of the systems used for creation: it is not certain that AI artists fully control what their systems generate. It is therefore reasonable to think that algorithmic artworks could possess some internal agency that ultimately shapes the result of creation.

Robot & art, GIF

These controversial accounts of algorithmic artworks thus question the very nature of art and the role of human creativity in the artistic process. Some will say that algorithms can produce great art; others will say that they cannot. But the most significant quality of any artwork is that it speaks to the subjectivity of its viewers. So if you think that algorithms produce artworks, then they do. That is it, end of the discussion.

How you are being categorized by Tinder’s algorithm

Emily doesn’t swipe anymore. Like 57 million people around the world, she used Tinder. The 21-year-old student is now in a relationship, but even so she may be more reluctant than ever to use the American dating app.

‘At best a client, at worst, a product’

The French journalist Judith Duportail recently reported that Tinder rates its users in order to make its suggestions. In her 2019 book L’amour sous algorithme (The Love Algorithm), she reveals the existence of the Elo Score, in other words a desirability rating. Under European data-protection law, Duportail obtained 800 pages of her personal data. She had everything: conversations, geolocated places, likes. Everything but her Elo Score.

‘At best a client; at worst, a product’, Duportail claims, showing that the app heightens inequalities within relationships by encouraging men to date women who are younger, less wealthy and less educated.

TINDER GIF 3
By the author

Emily told us that she had been using the app for ‘a fair amount of time’. She wasn’t looking for anything serious at the time. She ‘thought it’d be a good laugh, a good way to meet new people’. Yet, she quickly noticed that the algorithm was shaping her experience.

I did find it quite addictive and I also noticed that it would stop showing me certain kind of people after a while. And it did feel like it filtered out, based on things like education level, potentially race, or the kind of jobs people had.

Even though the terms and conditions are public, there is reportedly no way for users to find their rating or to know how it is calculated. Tinder does not reveal the underlying code, but the score is apparently based on factors such as the success of your profile picture and the number of sophisticated words used in the chat.

The app goes even further: not everyone has the same weight on our value, since it depends on their own rating. In 2019, Vox reported that ‘the app used an Elo rating system, which is the same method used to calculate the skill levels of chess players: You rose in the ranks based on how many people swiped right on (“liked”) you, but that was weighted based on who the swiper was. The more right swipes that person had, the more their right swipe on you meant for your score’.
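Based only on that public description, an Elo-style update might look roughly like the sketch below. The constants and the update rule are the standard chess ones and are assumptions here; Tinder has never published its actual formula.

```python
# Rough sketch of an Elo-style desirability update: a right swipe counts for
# more when it comes from someone whose own score is high. Illustrative only.
def expected(score_a, score_b):
    """Standard Elo expectation that A 'beats' B."""
    return 1 / (1 + 10 ** ((score_b - score_a) / 400))

def update_on_swipe(swiped_score, swiper_score, liked, k=32):
    """Return the swiped user's new score after one swipe."""
    outcome = 1.0 if liked else 0.0
    return swiped_score + k * (outcome - expected(swiped_score, swiper_score))

# A like from a highly rated swiper moves the score far more than one
# from a low-rated swiper:
print(update_on_swipe(1200, 1800, liked=True))   # roughly +31
print(update_on_swipe(1200, 900,  liked=True))   # roughly +5
```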

Tinder has since stated on its blog that the Elo Score is old news, an ‘outdated measure’, and that the app now relies on the activity of its users.

According to Tinder’s website, ‘Tinder is more than a dating app. It’s a cultural movement.’

In a sense this is not just marketing; it is partly true. Tinder is now renowned worldwide. According to the National Institute of Demographic Studies, 9% of French couples who started their relationship between 2005 and 2013 met online. In the UK, an Infogram study puts the figure at 20% in 2013, and sociologists Reuben Thomas and Michael Rosenfeld report 70% for same-sex couples.

Birds of a feather flock together

Sociologists did not wait for social media to identify the phenomenon of endogamy (or in-marriage): we tend to date people from our own social class. We share similar activities and meet partners within familiar circles: colleagues, friends, friends of friends. Digital categorizing simply pushes these existing dynamics further.

When I swiped right, Emily says, it would show me more of those people. That is obviously kind of reinforcing social stratification, I guess, and the fact that people tend to end up with the ones that are similar to them.

TINDER GIF 1
By the author

Camille, a 21-year-old bisexual user of the app, also noticed that she was shown similar profiles; she told us it seemed ‘rather obvious when you use the app’. Yet Camille thinks this helps you find someone you would be likely to meet in real life.

It seems like a good way to select the people to interact with. It seems rather innocent. It’s not something you give a lot of thought to.

Indeed, Tinder’s algorithm looks for shared interests and equivalent incomes (if you mention your job in your description, your average income can be inferred). But keywords can be identified without being understood: Tinder doesn’t get irony or idiomatic expressions, hence some awkward dates.

The biases of machine-learning systems 

This may only be a matter of time. As Diggit Magazine recently reported, algorithms are machine-learning systems fed by societal practices. That can be for the best… and for the worse: this conditioning of love has its biases, since the app can be fed racist or sexist content.

Moreover, Aude Bernheim revealed in her AI study L’intelligence artificielle, pas sans elles! (Artificial intelligence, not without women!) that 88% of algorithms are made by men.

TINDER GIF 4
By the author

This might account for the imbalance among Tinder’s users: only 38% are women, against 62% men. The experience of female queer users also raises further questions, as Camille’s story shows.

About women, I noticed that the app had lots of problems identifying the kind of girls I would like to date. I do believe that for queer people, especially women, it is actually harder to interact on the app. There are so many couples looking for a third person. So the algorithm works when you want a heterosexual relationship but for queer people… It does not work as well as it should. It is sadly predictable and that’s probably why I stopped using Tinder.

The app is well aware of these situations and tries to look inclusive on its website “Swipe Life”, made up of articles offering ‘dating tips’ such as ‘How to Talk to Your Partner About Non-monogamy’ or ‘What You Should Know When Dating Someone With Bipolar Disorder’. Yet the lack of transparency around its algorithm is still widely criticized.

TINDER GIF 2
By the author

They also seem to emphasize how insignificant dating has become, reporting on their website that ‘70% of these college students have never met up with their matches…and 45% say they use Tinder mostly for confidence boosting procrastination’. Dating is no longer a big deal. But it is a big business: the app now has an annual turnover of 800 million euros.

Even if the categorization is harder to pin down since the end of the Elo Score, both analysts and users maintain that the profiling has not come to an end.

Melanie Lefkowitz wrote that ‘Although partner preferences are extremely personal, it is argued that culture shapes our preferences, and dating apps influence our decisions.’ And Emily could not agree more, having experienced it herself.

On the one hand, it really opens up your circle because there’s suddenly all those people you would never have bumped into in your life but you can message, and meet up with. On the other hand, the algorithm kind of dictates who you’re able to see as being out there. It’s actually quite limiting.

The Invisible Walls of YouTube

Photo by: Dado Ruvic/Reuters

November 2019 was a watershed month for American politics. It marked the beginning of the formal impeachment inquiry of the President of the United States, Donald Trump, only the third time an American president was to be impeached. Public hearings were held, investigations were made, and American news was abuzz with objective discussions, opposing arguments and rancorous accusations.

Politics can be controversial, with largely divided opinions along partisan lines. The party division is marked by differences in principles of government, and religious and secular world views. This is further complicated by each individual having their opinions and beliefs based on their own experiences, principles and values. In a democracy, some would argue that it is important that the public remain actively engaged in the activities and conduct of their government. It is especially easy in this day and age to do so. Social media and digital platforms have become the cornerstone of modern society, with a plethora of content at each individual’s fingertips.

YouTube is a large player in this global phenomenon, having grown since its launch in 2005 into the second largest social network in the world today. With 73% of US adults reportedly frequenting the video platform, it can be said that YouTube is as influential as televised news channels. The platform has even hosted livestreams of debates during US presidential elections, giving American citizens real-time access to information which may ultimately influence their vote.

Given the power it wields over such a massive number of American viewers, one might assume that YouTube would present videos about Trump’s impeachment capably and without bias, whatever a viewer’s political alignment. A quick use of its search engine reveals that it hosts news organizations with different political stances, each posting its own interpretation of events through its official channel as the story unfolds. On the surface, this seems fine. There is nothing wrong with a platform that presents both sides of a story along with different points of view. If anything, this makes YouTube an objective place to learn about news surrounding the impeachment, right?

Wrong, for a number of reasons. Studies indicate that social media usage has given rise to ‘echo chambers’: environments in which one’s media consumption is walled off from opposing views. From videos to online forums, this isolation from other perspectives reinforces the individual’s point of view, leading them to believe it is more true or right than others. The bias can grow to the point of taking priority over facts and logic. In short, echo chambers are real, dangerous and EVERYWHERE in the digital space.

And what better way to observe the potency of this effect than searching for video coverage of the historical third impeachment of a US President on YouTube. Using a neutral browser (a browser without search history, a digital footprint, or geographic preferences), three different news channels were looked up, each with their own political stances as presented in the Media Bias Chart: Buzzfeed, Fox News and NBC.

The Media Bias Chart. Image By: Ad Fontes Media

One random video was then pulled from each channel with the word ‘impeachment’ in the title, all uploaded within November 2019, when the impeachment process began. These videos are:

Buzzfeed News – Let’s Hear It For The Whistleblower – Impeachment Today Podcast

Fox News – Tucker’s big takeaways from the Trump impeachment saga

NBC News – Highlights: Fiona Hill And David Holmes’ Impeachment Hearing Testimony

Next, the YouTube Data Tool was used to scrape each video for its recommended videos. This data was then processed by the network analysis program Gephi to visualize the networks within which these three videos reside.

Visualization of Video Recommendation by News Channel. Image by: Author

Lo and behold, it appears that overlaps in recommended videos among these three channels are rare. Between the left-leaning NBC and the conservative Fox News, a mere 5 videos out of a cumulative 171 recommendations could possibly lead to the network of videos of the other news channel. This means a user has only about a 3% chance of being exposed to any content representing the opposing political stance, regardless of which of the two channels they started with. The echo chamber looks even stronger for Buzzfeed, whose network of recommended videos shares absolutely no overlap with the other news channels. In other words, if you started looking up ‘impeachment’ through Buzzfeed in November 2019, you would have had no exposure to any politically opposing content featured on the other news channels.
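The overlap figures come from comparing the sets of recommended videos collected for each seed video; conceptually, the check is as simple as the sketch below (the video IDs are placeholders standing in for those in the exported networks).

```python
# Compare the recommendation sets scraped for each channel's seed video.
# The IDs are placeholders; the real ones come from the YouTube Data Tool export.
nbc_recs  = {"vid_a", "vid_b", "vid_c"}     # reached from the NBC seed video
fox_recs  = {"vid_c", "vid_d", "vid_e"}     # reached from the Fox News seed video
buzz_recs = {"vid_f", "vid_g"}              # reached from the Buzzfeed seed video

shared = nbc_recs & fox_recs
total = len(nbc_recs | fox_recs)
print(f"NBC/Fox overlap: {len(shared)} of {total} videos ({len(shared) / total:.0%})")
print(f"Buzzfeed videos shared with the other two: {len(buzz_recs & (nbc_recs | fox_recs))}")
```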

As borne out in the above exercise, exposure to opposing political viewpoints is rare for a YouTube user, so an individual is likely to maintain his or her existing political views. This is simply the result of YouTube’s algorithm following its programming and doing its job. Engineers at Google released a paper in 2016 explaining how YouTube has used such algorithms for years: a complex machine-learning process gathers and analyses data down to each specific user, producing an adaptive program that presents videos personalized to individual viewing patterns and preferences. This may be fine if you are looking to kill time watching meme videos, but it restricts users from a more expansive world view.

“It isn’t inherently awful that YouTube uses AI to recommend video for you, because if the AI is well tuned it can help you get what you want.” – AlgoTransparency Founder, Guillaume Chaslot

Guillaume Chaslot, a former algorithm developer at Google who worked on YouTube, has pointed out the flaws in the video recommendation algorithm. He stated that the primary purpose of YouTube’s algorithm is not to inform or educate viewers, but to capture their attention and keep them on the platform. This encourages echo chambers within digital communities, continually reinforcing pre-existing views. “We’ve got to realize that YouTube recommendations are toxic and it perverts civic discussion,” the algorithmic transparency advocate said at a recent tech conference.

The issue of algorithms enabling echo chambers raises questions about YouTube being an objective platform for political media distribution. We cannot deny the data showing how popular YouTube is. It would be foolish for news organizations to not reach out to a large segment of Americans through this video platform. But with analysis proving the effectiveness of the algorithm and developers acknowledging civic concerns, what is recommended to you will not always be what is best for you. If individuals wish to break through these invisible walls, it would be wise to make a deliberate effort to look beyond them.

Random video recommendation or subtle mental manipulation?

Digging deep into the YouTube algorithm

 

Who hasn’t come across an absurd YouTube recommendation? One minute you’re watching CrashCourse Philosophy, and the next you’ve ended up on a conspiracy theory about Donald Trump being a lizard. Sometimes videos seem completely irrelevant, and there is even a whole section on Google Support dedicated to user complaints about it.

YouTube is so powerful that even young children are obsessed with it. The platform’s recommendations section is constantly trying to work out what we would like to watch next whilst scanning massive amounts of personal data. With great power comes great responsibility, and we had better think twice before blindly trusting the platform.

 

Trusting the algorithm?

YouTube profiles are designed for crafting and personalisation, with affordances such as subscribing, upvoting and creating playlists. AI then scans user activity – likes, dislikes, previously viewed videos – along with all sorts of other personal information, like phone number, home and work address and recently visited places, and suggests potentially likeable video content. YouTube uses this information as a “baseline” and builds up recommendations linked to the user’s viewing history.

In 2016, Google published an official paper on the deep learning processes embedded in the YouTube algorithm. The algorithm, they write, combines gathered data based on factors such as scale, freshness and noise – features linked to viewership, the constant flow of new videos, and previous content liked by the user. The paper analyses the computational processes involved, but it still cannot explain the glitches commonly found in the system – for instance, why is the algorithm always pushing towards extremes?
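To make the idea of personalization concrete, here is a deliberately tiny caricature of embedding-based recommendation, nowhere near the actual system described in the paper: videos and users are represented as vectors, and candidates are ranked by how close they sit to the user’s watch history. All numbers and dimensions below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
video_embeddings = rng.normal(size=(1000, 32))   # 1,000 hypothetical videos as 32-dim vectors

# Represent the user as the average embedding of recently watched videos.
watched_ids = [3, 17, 256, 640]
user_vector = video_embeddings[watched_ids].mean(axis=0)

# Score every candidate by dot product and keep the ten nearest: the closer a
# video is to what you already watched, the more likely it is to be suggested.
scores = video_embeddings @ user_vector
top_10 = np.argsort(scores)[::-1][:10]
print("recommended video indices:", top_10)
```

Run in a loop, a scheme like this naturally keeps feeding you more of whatever you already watch, which is one mechanical way such a system can narrow rather than widen your horizons.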

 

The dark side of the algorithm

Adapting to one’s preferences might be useful, but it seems like YouTube is promoting radicalism, as if you are never “hardcore” enough for it.

Guillaume Chaslot, founder of AlgoTransparency – a project aiming for greater transparency around web data – claims that recommendations are not really there to serve you: they are designed to keep you on YouTube, increasing your view time. Chances are you will either get hooked on the platform or end up clicking on one of the ads, thus generating revenue. Chaslot says the algorithm’s goal is to increase your watch time – in other words, the time you spend on the platform – and does not necessarily follow user preferences.
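Chaslot’s distinction between serving preferences and maximising watch time can be shown with a hypothetical three-video example: if the ranking objective is predicted watch time, the stickiest video wins even when it is the least relevant to what the user actually asked for. The titles and numbers are invented.

```python
# Hypothetical candidates: (title, predicted watch minutes, relevance to the query 0-1)
candidates = [
    ("Balanced news explainer",      4.0, 0.9),
    ("Sensational conspiracy video", 22.0, 0.2),
    ("Cute cat compilation",         11.0, 0.1),
]

picked_for_watch_time = max(candidates, key=lambda c: c[1])
picked_for_relevance = max(candidates, key=lambda c: c[2])

print("Watch-time objective picks:", picked_for_watch_time[0])
print("Relevance objective picks: ", picked_for_relevance[0])
```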

It seems like YouTube’s algorithms promote whatever is both viral and engaging, amplifying wild claims and hate speech in the process. Perhaps this is why the platform has been targeted by multiple extremist and conspiracy theory channels. However, it is important to acknowledge that YouTube has taken measures against the problem.

 

Our investigation

Inspired by recent research on this topic, we conducted our own expedition down the YouTube rabbit hole. The project aims to examine the YouTube recommendation algorithm, so we started with a simple YouTube search on ‘Jeremy Corbyn’ and ‘anti-Semitism’. The topic was picked more or less at random, prompted only by the fact that we are London residents familiar with the news. For clarity’s sake, here is a visual representation of the data (Figure 1.0).

Figure 1.0 shows the network formed by all videos related to the key terms that end up in the recommendations section. The network has 1,803 nodes and 38,732 edges, with the nodes representing political videos on current global events and the edges representing how they relate to one another.
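Readers who want to poke at a graph like this themselves could load the Gephi export into Python and look for small, densely connected groups on the fringe of the network. Below is a rough sketch, assuming the graph has been saved as a GEXF file with a hypothetical name.

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical file name; Gephi can export the network as GEXF.
G = nx.read_gexf("corbyn_antisemitism_network.gexf")

# Greedy modularity maximisation groups densely connected videos together;
# the smallest communities on the fringe are candidates for odd clusters.
communities = community.greedy_modularity_communities(G.to_undirected())
for cluster in sorted(communities, key=len)[:5]:
    print(len(cluster), "videos:", sorted(cluster)[:3], "...")
```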

 

Figure 1.0

 

Alongside the expected titles containing key words such as ‘Jeremy Corbyn’, ‘Theresa May’, ‘Hebrew’ and ‘Jewish’, one may notice a miniature cluster far over on the left-hand side. It has three components, or YouTube videos, that are, to say the least, hilarious. Let’s zoom in.

 

Figure 2.0

 

Figure 3.0

At first glance they seem completely random: they sit furthest from the rest of the network and are unrelated to whether Jeremy Corbyn is an anti-Semite or not. So there must be something hidden in the underlying meaning of the videos that makes them somehow related. I will refer to the videos in this cluster as ‘random’; however, in the following lines, the reader will be persuaded that there is no randomness about them whatsoever.

The three videos (Figure 3.0) vary vividly in content: from a teenage girl who bought Justin Bieber’s old iPhone filled with R-rated personal material; through a woman who got pregnant by her boyfriend’s grandpa; all the way to the story of a daughter who tried to surprise her mother in jail, only to end up in prison herself, unable to recognise her own mother, who had undergone plastic surgery to become a secret spy (???).

It is easy to spot the production similarities between the three ‘random’ videos; nevertheless, they would not usually appear in the same context, as they cover different topics, use different keywords and are produced by different channels. All are animated, all have a cartoon protagonist who guides viewers through a supposedly fascinating life story, and all of it seems made up. The producers use visual effects designed to grab human perception – animation, fast-moving transitions, exciting background music.

 

The ‘random’ videos and some commentary. Snapshots: YouTube. Edited by the author.

Caricature is an artist’s way of presenting a personal opinion in exaggerated form, which is why caricatures so often feature political figures and international affairs. Humour, after all, renders the brutality of life easier to handle, and animation has become a tool for distributing and reproducing it, often in conditions of conflict, both national and international. Since the foregoing videos trade in extremes, the YouTube algorithm suggests what it apparently finds extreme as well – in this case, ‘Jeremy Corbyn’ and ‘anti-Semitism’.

After examining the visual side of the content, we moved on to a linguistic and semantic investigation. We found that words such as ‘scandal’, ‘very important people’, ‘controversial situations’, ‘jail’ and ‘accusations’ might be the reason these videos appear in the network built around the key words ‘Jeremy Corbyn’ and ‘anti-Semitism’.
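A crude way to check this hunch is a term-frequency pass over the titles and descriptions of both groups of videos, followed by a look at the vocabulary they share. The snippet below is a minimal sketch with invented example strings rather than the real metadata.

```python
from collections import Counter
import re

# Invented example text standing in for the real titles/descriptions.
random_cluster_text = "scandal jail accusations very important people controversial situation"
political_text = "corbyn scandal accusations anti-semitism labour jail controversy"

def term_counts(text: str) -> Counter:
    """Lower-case the text and count word tokens."""
    return Counter(re.findall(r"[a-z\-]+", text.lower()))

shared_vocabulary = term_counts(random_cluster_text) & term_counts(political_text)
print("words appearing in both groups:", sorted(shared_vocabulary))
```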

Interestingly, all three comment sections in the ‘random’ cluster are filled with jokes and a general consensus that the videos are fake; very few commenters believe the stories are genuine. Browsing the comments under the political videos in the network, we find similar language. This suggests that the algorithm not only scans language but also picks up on opinion and irony, linking common themes together.

The reader now understands why my interest is focused on this particular cluster: it is a metaphorical representation of the whole network. Ultimately, the research suggests that Jeremy Corbyn is not perceived as an anti-Semite by the online public (or by the algorithm).

 

What is the algorithm suggesting?

To get a better grasp of what the network has in common, we look at the nodes closest to the ‘random’ cluster (Figure 2.0). Following this logic, can we say that the algorithm suggests all those political events are either a scam or a mockery? Since the three videos are linked into a network with other, decidedly not-so-humorous videos, they must share keywords, topics, creators or audiences. The algorithm appears to find a similarity between the absurdity of the animated YouTube videos and the nodes closest to the cluster. Could this be the algorithm manifesting its opinion?

Of course, this is all speculation, and factors such as viewership and watch time are not to be neglected. As both viewers and producers, we should also remember that content may be interpreted differently in diverse social groups.