How voice-assistant algorithms reinforce problematic gender stereotypes: what should we pay attention to?

Voice assistants face a problem of gender bias in their algorithms, one that reflects wider gender inequality in the technology sector.

Just after a lecture, Ella, a college student majoring in Gender Studies, was working on her final project about feminism. Too bored to focus on it, she started playing with Siri on her iPhone to kill time. “What do you think about feminism?” she asked. The responses she received were generic deflections such as “Don’t engage” and “Sorry, I don’t really know”.

What shocked Ella most was that this was not an isolated incident: the same responses came back for other feminism-related questions such as “Do you think women should get equal pay for equal work?” and “What’s your opinion on gender inequality?” She began to wonder: why does Siri deflect these questions, and how prevalent are such cases?

A similar pattern occurs with Amazon’s Alexa when the word “feminism” is mentioned, and some of its responses are even flirtatious. When a user tells Alexa, “You’re so hot”, her typical response has been a cheery “That’s nice of you to say!”

These virtual assistants are woven into people’s daily lives and make those lives smarter than ever, which is precisely why their problematic gender stereotypes can have a harmful impact on society. The introductory page of Apple’s website states, “Machine learning is constantly making Siri smarter. And you can personalize Siri to make it even more useful… Even when you don’t ask, Siri works behind the scenes like a personal assistant”.

“I’d blush if I could” is a standard response of the default female voice of Apple’s digital assistant Siri, and it lends its name to a new report released by UNESCO. The report suggests the issue reflects a more serious gender inequality problem than we might imagine. Apart from Siri, other “female” voice assistants also express submissive traits, an expression of the gender bias built into AI products as a result of what UNESCO calls the “stark gender-imbalances in skills, education and the technology sector.”

These harmful consequences have put the parent companies of these voice assistants in the spotlight. But should we attribute the fault to technology giants alone? UNESCO’s director for gender equality, Saniye Gülser Corat, said: “Their hardwired subservience influences how people speak to female voices and models how women respond to requests and express themselves. To change course, we need to pay much closer attention to how, when and whether AI technologies are gendered and, crucially, who is gendering them.”

People might not realise that both Apple’s Siri and Amazon’s Alexa have female names. The former derives from a Norse name meaning “beautiful woman who leads you to victory”, while the latter, named after the ancient library of Alexandria, is presented as unmistakably female. Yet the algorithms behind the interface should be neutral.

The report, titled I’d Blush If I Could, responds to this phenomenon directly: it highlights that gender bias persists in algorithms, and that the problem stems largely from engineering departments that are overwhelmingly staffed by men.

There is little doubt, then, that the responses scripted into these virtual assistants carry an obvious patriarchal tendency. Data from the UN’s official website show that gender inequality persists and spreads across the technology field: “Women make up only 12% of AI researchers, 6% of software developers, and are 13 times less likely to file ICT (information and communication technology) patents”.

Faced with this serious problem, some digital companies have already taken measures to avoid gender bias. As Reuters reports, “A team of creatives created the first gender-neutral digital assistant voice earlier this year in an attempt to avoid reinforcing sexist stereotypes.”

Returning to Siri and Alexa: as long as algorithms continue to produce such gender bias and discrimination, it is hard to claim that their deployment is gender-neutral. Digital companies ought to stop making digital assistants female by default and explore gender-neutral alternatives. The greater challenge of the next era, however, will be closing the gender gap at the very beginning of the pipeline, so that the bias never gets built in.

 

Is YouTube’s recommendation system pushing you toward extremism?

China is currently undergoing a major health epidemic, and the whole world is paying close attention to daily developments and their effects. My friend Joseph, an Australian-born Chinese, was searching for information about it on YouTube. He typed ‘China coronavirus’ into the search box, looking for on-the-ground videos to better understand the situation. As one of the major news events of recent months, the outbreak has drawn active coverage and constant updates from major Western news agencies.

In his search results, the first few videos came from official and well-known news organisations such as Channel 4 News, DW News and the South China Morning Post, sorted by relevance. Joseph clicked on the first video, which reported that 13 countries outside China had confirmed outbreaks of the coronavirus.

Figure 1

When it ended, YouTube automatically played the next video, about two US citizens stuck in Wuhan because of the city lockdown; they could not leave in case they spread the virus. The lockdown, which restricts all access to Wuhan and cuts off every exit to other places, is a controversial decision that has caused intense discussion in the Western world. Thanks to the recommendation system, Joseph’s attention shifted to the lockdown video. After that, he was recommended videos about how the Chinese government allegedly manipulates social media at home and abroad, and he ended up watching one about the Chinese Communist Party. Only then did he realise how much time he had spent on these ‘fascinating-topic’ videos, forgetting that he had initially set out to learn more about the coronavirus situation. (Figure 1)

Joseph’s experience suggests that YouTube’s recommendation system does not only offer similar content based on your preferences; it can also lead users towards radical and extreme content in order to capture more attention and keep them on the platform longer. A 2019 MIT Technology Review report notes that 70% of what users watch on YouTube is fed to them through the recommendation system, demonstrating the algorithm’s powerful influence over the information people consume. On the one hand, people willingly watch the suggested videos, which are selected from their watch histories and predicted preferences. On the other hand, the main goal of the recommendation system is to keep you watching for as long as possible by serving content that is easy to get hooked on. As Guillaume Chaslot, who used to work at Google on YouTube’s recommendation system, noted in his talk at the 2019 DisinfoLab Conference, the motivation behind the algorithm is watch time rather than what viewers want.

 

Figure 2 (Gephi analysis)

This is where the issue arises: people unconsciously watch the content YouTube’s algorithm recommends, including provocative material that pushes them to spend more time on the platform and contributes to the spread of misinformation. Google, which manages and supports the algorithmic system behind the platform, has announced that it is working on the issue and will ‘begin to reduce recommendations of borderline content and content that could misinform users in harmful ways’, as one 2019 post on YouTube’s official blog put it. However, like other big tech companies, Google never explains exactly how its algorithms work. (Figure 2)

Although Google’s response seems like a good sign, the problem remains serious. The recommended-videos sidebar is supposed to start from a basic structure, suggesting a shortlist of content based on the topic and other features of the video you are watching. The system then learns from your likes, clicks, search terms and other interactions on the platform, adding further content to the sidebar list. Because the ranking of that list strongly shapes the viewing experience, it is fair to ask whether those recommendations appear because you like them or simply because the algorithm recommends them. A toy sketch of this two-stage logic follows the figure below. (Figure 3)

Figure 3
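To make the idea concrete, here is a minimal, purely illustrative sketch of how a sidebar ranker might blend topic relatedness with a profile learned from a user’s interaction history. This is not YouTube’s actual system; all names, weights and data below are assumptions for illustration only.

```python
# Illustrative toy ranker: NOT YouTube's real algorithm.
# It blends (a) similarity to the current video with (b) a profile built
# from the user's past interactions, mimicking the two stages described
# above. Weights and example data are made up.

from collections import Counter

def topic_similarity(tags_a, tags_b):
    """Jaccard overlap between two sets of topic tags."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_sidebar(current_video, candidates, interaction_history, top_n=3):
    # Crude user profile: how often each tag appears in previously watched videos.
    profile = Counter(tag for video in interaction_history for tag in video["tags"])
    total = sum(profile.values()) or 1

    scored = []
    for video in candidates:
        relatedness = topic_similarity(current_video["tags"], video["tags"])
        personal = sum(profile[t] for t in video["tags"]) / total
        # The final score also rewards an engagement proxy (average watch fraction),
        # echoing the watch-time incentive discussed in the article.
        score = 0.5 * relatedness + 0.3 * personal + 0.2 * video["avg_watch_fraction"]
        scored.append((score, video["title"]))
    return sorted(scored, reverse=True)[:top_n]

if __name__ == "__main__":
    current = {"title": "Coronavirus update", "tags": ["china", "coronavirus", "news"]}
    history = [{"tags": ["china", "politics"]}, {"tags": ["coronavirus", "wuhan"]}]
    candidates = [
        {"title": "Wuhan lockdown explained", "tags": ["wuhan", "coronavirus"], "avg_watch_fraction": 0.7},
        {"title": "Cute cats compilation", "tags": ["cats"], "avg_watch_fraction": 0.9},
        {"title": "Chinese politics deep dive", "tags": ["china", "politics"], "avg_watch_fraction": 0.8},
    ]
    for score, title in rank_sidebar(current, candidates, history):
        print(f"{score:.2f}  {title}")
```

Even in this toy version, the engagement term can pull attention-grabbing but off-topic content up the list, which is exactly the dynamic the article describes.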

In fact, an article by Zeynep Tufekci published on the Scientific American website argues that YouTube’s business model is to keep users on the platform and watching as many targeted ads as possible. The algorithm is therefore likely to promote content that grabs attention, and borderline content with wild claims or radical viewpoints tends to be more engaging; the machine-learning system records that engagement and recommends similar content to keep you watching, in order to generate greater revenue. Under this dynamic, viewers often end up being pushed towards extreme content, much of it filled with fake news or conspiracy theories.

However, some believe we should not simply blame YouTube’s algorithm; users involved in the process bear some responsibility too. According to a 2019 academic article by Ariadna Matamoros-Fernández and Joanne Gray, ‘Users are not passive participants in the algorithm system’: some video creators understand how the algorithm works and deliberately adjust their strategies to maximise their recommendations. As a public community, YouTube also provides an opening for extremist content creators who use the platform for propaganda. (Figure 4)

Figure 4

Although YouTube already restricts content with obviously harmful effects, such as hate speech or violence, it is very hard, as Matamoros-Fernández and Gray argue, for an algorithm to work out the hidden meaning of content in the grey areas. The system cannot accurately monitor and manage producers operating on that shady ground, whose content may be read very differently by different cultural groups. For example, in June 2019 Vox reporter Carlos Maza had a high-profile online dispute with the comedian Steven Crowder, arguing that Crowder repeatedly mocked his sexual orientation and ethnicity with a sarcastic tone and abusive language in his YouTube videos. Crowder maintained that the videos were simply ‘friendly ribbing’. After several vague posts on Twitter, YouTube stated publicly that Crowder had not broken its hate-speech rules, so the videos were allowed to remain on the platform. The outcome disappointed many in the communities affected.

So, is YouTube the only party that should take responsibility? Ben McOwen Wilson, YouTube’s managing director, told the BBC in a 2019 interview that YouTube is committed to tackling misinformation and conspiracies, but that doing so also requires joint efforts from government and from other major online platforms such as Facebook and Twitter. Since Joseph cannot avoid YouTube’s recommendations entirely, he would do well to look at other platforms too, so that he is exposed to different perspectives and can eventually form his own views. (Figure 5)

Figure 5

YouTube Kids app recommends disturbing videos to kids

As usual, Grace Eve, the mother of an 8-year-old, had just come home from work and was preparing dinner for her daughter Elaine, who was in the living room watching Peppa Pig on YouTube Kids on her iPad. Suddenly, Grace heard her daughter scream, followed by a loud crash. She rushed out to see what had happened and found the poor girl frightened and in tears.

Grace picked up the iPad from the floor and saw the video still playing: Peppa Pig goes to the dentist but is tortured instead and, later, drinks bleach. Grace felt strongly that such content should not appear on YouTube Kids for her daughter. She was too busy to keep a constant, vigilant watch over her child’s cartoon viewing at home. “It’s really annoying!” she complained. “Why does YouTube Kids allow this objectionable content to reach children?”

It’s really annoying! Why does YouTube Kids allow this objectionable content to reach children?                                                                                                       —Grace

YouTube Kids, the child-focused version of YouTube, is designed specifically for children and offers advanced parental controls, giving parents the option to preselect what kinds of content their children can watch. The introductory page for YouTube Kids states the company’s objective: ‘We work hard to keep the videos on YouTube Kids family-friendly and use a mix of automated filters built by our engineering teams, human review and feedback from parents to protect our youngest users online.’

In the online video industry, a single click gives kids access to hundreds of cartoon episodes featuring Spider-Man, Frozen, Mickey Mouse and more. It is safe to assume that most of the video content the algorithm recommends is decent and age-appropriate. Yet disturbing incidents like the ‘Peppa Pig case’ do happen, because YouTube’s filters do not always stand up to the test.

Parents sometimes leave their children to watch cartoons on YouTube Kids while out and about, but they cannot supervise them all the time, which may be one reason the problem arises. Nina, a sophomore majoring in digital culture at King’s College London, commented: ‘I have to say, the goal of the YouTube recommendation system is not selecting age-appropriate videos for children, but using data to find its target audience and make them stay longer on the app.’ She added: ‘I feel like the best solution for parents is to co-view with their kids. You cannot fully trust the kids’ safety settings.’

I feel like the best solution for parents is to co-view with their kids. You cannot fully trust the kids’ safety settings                                                                                       —Nina

But can YouTube’s algorithmic problem really be solved by parents alone? Josh Golin, director of the Boston-based Campaign for a Commercial-Free Childhood, argued in a statement to CBC News: ‘Anything that gives parents the ability to select programming that has been vetted in some fashion by people is an improvement, but I also think not every parent is going to do this. Giving parents more control doesn’t absolve YouTube of the responsibility of keeping the bad content out of YouTube Kids.’

What worried Grace about technical failures like the ‘Peppa Pig’ incident is this: how many children have already seen such objectionable content before the company or their parents notice the problem? Or is the phenomenon heavily exaggerated by today’s media?

This is not just one or two people trying to game YouTube’s recommendation system, either, but thousands. A Pew Research Center survey underlines YouTube’s important role in providing content for children. The statistics show that ‘Fully 81% of all parents with children age 11 or younger say they ever let their child watch videos on YouTube…And among parents who let their young child watch content on the site, 61% say they have encountered content there that they felt was unsuitable for children.’

These data suggest that children might fall down an objectionable YouTube hole if we give the algorithm too much room to breathe.

For the company, however, this technology is a blessing rather than a curse. YouTube’s parent company, Google, has acknowledged that the recommendation system relies on what is effectively a black box: algorithms driven by neural networks, a technology that keeps the online video giant at the leading edge of the world market.

Most people know how YouTube works on the surface: type keywords into the search bar and it shows you loads of personalised video content. Once the first video ends, the algorithm automatically moves on to the ‘up next’ ones, with little human moderation in between. Experts have therefore begun to explain the technology in more depth, to give us a better understanding of how it works and how far we can rely on this black-box algorithm.

At King’s College London, Dr. Christopher Hampson, a lecturer and researcher in the Department of Informatics, said: “Ultimately, this is a big topic in artificial intelligence at the moment, about safe and trusted artificial intelligence. Often these algorithms are essentially black-boxes where they feed in how users are behaving… based on a variety of information collected about them – with neural networks in the background – to produce outputs in the form of recommendations.”

This automated system does not always know what it is actually suggesting to users. YouTube has taken some measures to address the problem: parents can flag a video that is not appropriate for kids and report it to YouTube. But it can take a few days for YouTube to act, and this is still far from a perfect solution. Parents have therefore started relying on themselves to ensure that the videos are suitable for their children. Grace, Elaine’s mother, will no longer allow her daughter to watch cartoons on any video app unless they watch together.

How ‘Straight’ is the YouTube Recommendation Algorithm?

‘Sorry, I disappeared into a YouTube black hole’ is something you can hear on a weekly basis. In the words of arts student Varizka (21), it means “when I’m bingeing one video after another just based on the recommended section, and I end up watching something weird and completely different to what I looked for in the first place.” It is a phenomenon many users of the video platform can relate to, as the recommendation algorithm (and the ‘autoplay’ function that goes hand in hand with it) is one of YouTube’s most addictive features. It works by recommending a selection of videos considered related to the video currently playing (the entry point) and, if autoplay is turned on, automatically playing the first video from that list once the entry-point video finishes. And yes, you guessed it: if YouTube had its way, it would never let you go offline.

The Black Hole That NASA Doesn’t Research – So I Did

Hopefully this is clear by now, but the black hole in this context is not the kind found in space: it is produced by YouTube’s recommendation algorithm. In theory, the algorithm is supposed to recommend similar videos, but, as Varizka mentions, it has a tendency to take users on seemingly random paths through the platform’s library of content. In some cases, users have found that innocent content led them to extremist videos. This is especially true for already controversial themes such as politics, one example being mainstream videos about Trump around the 2016 US election leading to far-right extremist content. This had me wondering: would the same be true for age-old controversies, such as homosexuality within religions? Using a video network tool with search queries combining the keyword ‘gay’ with each of the five major religions (Christianity, Islam, Judaism, Hinduism and Buddhism), I learnt that the answer is a bit more nuanced than a simple yes or no.
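For readers curious about the data-collection step behind such a video network, the sketch below shows one way the seed videos for each query could be gathered using YouTube’s public Data API v3 search endpoint. This is an illustration of the general approach, not the exact tool used for the project, and the API key is a placeholder you would need to supply yourself.

```python
# Minimal sketch of collecting seed videos for a query such as "gay Christianity"
# via the YouTube Data API v3 search endpoint. Illustrative only; not the exact
# video network tool used in this article. API_KEY is an assumed placeholder.

import requests

API_KEY = "YOUR_YOUTUBE_API_KEY"  # placeholder
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def search_videos(query, max_results=25):
    """Return (video_id, title) pairs for a search query."""
    params = {
        "part": "snippet",
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "key": API_KEY,
    }
    response = requests.get(SEARCH_URL, params=params)
    response.raise_for_status()
    items = response.json().get("items", [])
    return [(item["id"]["videoId"], item["snippet"]["title"]) for item in items]

if __name__ == "__main__":
    religions = ["Christianity", "Islam", "Judaism", "Hinduism", "Buddhism"]
    for religion in religions:
        seeds = search_videos(f"gay {religion}")
        print(religion, "-", len(seeds), "seed videos collected")
```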

Visualisation of homosexuality in Christianity dataset

After putting these datasets through the data visualisation program Gephi, it was possible to identify certain trends and oddities across the different religions, such as the various categories the videos belong to. In four of the five religions, the dominant category made up no more than 30% of the dataset. In the dataset on homosexuality in Islam, however, the dominant category was news and politics, with nearly half the videos belonging to this topic. Another interesting point from categorising these videos is that non-profits and activism scored more than twice as high in the Christian dataset as in any of the other religions. Aside from these two categories, the top eight also included entertainment, people & blogs, education, film & animation, how-to & style, and music. With how-to & style and music as exceptions (how-to & style only appeared in the top eight in the Hindu dataset, and music only in the Christian and Jewish datasets), the remaining categories appeared in similar proportions across the five religions.
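The category shares above come straight from the node tables. As a rough sketch of that step, assuming each dataset was exported from Gephi as a CSV with one row per video and a "category" column (file names below are hypothetical), the proportions can be computed like this:

```python
# Illustrative sketch: compute the share of each video category in a node table
# exported from Gephi. Assumes a CSV with a "category" column; file names are
# hypothetical placeholders.

import pandas as pd

def category_shares(csv_path):
    nodes = pd.read_csv(csv_path)
    return (nodes["category"].value_counts(normalize=True) * 100).round(1)

if __name__ == "__main__":
    for religion in ["christianity", "islam", "judaism", "hinduism", "buddhism"]:
        shares = category_shares(f"gay_{religion}_nodes.csv")  # hypothetical files
        print(f"{religion}: dominant category = {shares.idxmax()} ({shares.max()}%)")
```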

To dig deeper into the datasets, I then looked at the structure of the graphs. Could they actually look like a black hole? I was surprised to find that they did.

Visualisation of homosexuality in Islam dataset

However, after some manipulation, the networks started to develop clusters and structural gaps, which are useful for identifying some of the oddities mentioned earlier. Clusters in these kinds of networks often represent extreme points, as the peripheral ‘camps’ of nodes are not connected to many (if any) of the central nodes. Different clusters stood out in the different datasets, such as the Randy Rainbow song-parody and Nixon-related clusters in homosexuality in Judaism, or the Queer Eye, Aligarh Muslim University and Donald Trump clusters in homosexuality in Islam. Some nodes may represent offensive or extreme content, such as the Donald Trump cluster, which contains nodes about heavily right-wing politics and mentions of Swedish immigration policies that now appear to be growing more conservative. In the same network, though, is a cluster containing nothing but videos of Tan France from the popular Netflix show Queer Eye, ranging from discussions of racism to makeovers. In the homosexuality in Christianity dataset, there seemed to be more talks: educational talks such as TEDx and Google talks investigating the relationship between religion and sexuality, as well as religious talks from the anti-LGBT+ pastor John MacArthur. This was a noticeable trend across all the religions included in the project. The dataset on homosexuality in Hinduism contained both videos on LGBT+ persecution in India and videos of Indian gay proposals. For homosexuality in Buddhism, there were clusters of content related to Buddhist gay marriage, as well as educational videos on how to give up sexuality altogether in order to live according to Buddhist ways.
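For readers who want to reproduce this kind of cluster-spotting outside Gephi, here is a minimal sketch using networkx’s greedy modularity communities on an edge list of video-to-recommended-video links. The edge-list file name is a hypothetical placeholder; in the project itself the graphs were explored visually in Gephi.

```python
# Illustrative sketch of finding the "camps" (clusters) in a video network using
# modularity-based community detection. Edge list format: one "source target"
# pair of video IDs per line. File name is a hypothetical placeholder.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def find_clusters(edge_list_path):
    graph = nx.read_edgelist(edge_list_path)
    communities = greedy_modularity_communities(graph)
    # Small, peripheral communities (weakly tied to the core) are often where
    # the extreme or off-topic content described above sits.
    return sorted(communities, key=len)

if __name__ == "__main__":
    clusters = find_clusters("gay_islam_edges.txt")  # hypothetical file
    for i, cluster in enumerate(clusters[:5]):
        print(f"cluster {i}: {len(cluster)} videos, e.g. {list(cluster)[:3]}")
```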

“I once went on YouTube for a makeup tutorial, and then 2 hours later I was sitting watching conspiracy theories about the industrial revolution”

Varizka Anjani

In other words, it seemed that, yes, there were extreme directions in which the algorithm could lead you, but these extremes represent a spectrum of information, ranging from liberal to conservative, from factual to fake, from gay to straight. The main body of the networks, containing the most numerous and most connected nodes, also showed this, with their many camps of ideas merging into larger networks. In the case of homosexuality in different religions, the extreme points to which the YouTube algorithm might take its users seem merely to reflect the variety of opinions that exist within the topic at a larger scale.

To conclude, here are some tips for navigating the black hole:

  1. Turn off Autoplay
  2. Be aware of your camp
  3. Look out for the peripheries
  4. Curiosity can mislead you

From Apple, to Anomaly, to ImageNet

I stepped through a metal doorway into a dark corridor. To my right, white text stood illuminated against a black wall that extended into an endless expanse of square photographs. Part invitation, part introduction, the glowing words invite the museum-goer to “take a critical look at how artificial intelligence networks are taught to ‘perceive’ and ‘see’ the world by engineers who provide them with vast training sets of images and words…”. I quickly read the rest of the text, passing over words like ImageNet and algorithms, as my mind was sucked into the vortex of photographs swirling in the distance. The arrangement starts with a single image of an apple, continually expanding both in the sheer number of images and in the level of controversy surrounding their labels. About midway, the pictures have grown to fill the wall from floor to ceiling, an overwhelming spectacle of organisation within an image of complete chaos. The images range from babies, coffee and jawbreakers to subarachnoid space, heathen and schemer, ending with the final category: Anomaly.

This vortex of vivid imagery is Trevor Paglen’s exhibit, “From ‘Apple’ to ‘Anomaly’”, currently at the Barbican in central London. Looking at this piece, I felt an overwhelming sense of smallness as I stood just inches from the image wall, neck strained as I stared at photographs entire body lengths above me. But beyond the physical sense of my insignificance in that room was the hazy fog of confusion that clouded my understanding of what exactly I was looking at. Sure, I was completely fascinated, but by what? What was Trevor Paglen trying to tell me?

Emerging from the darkness, I spoke to a fellow museum-goer, Samantha-Kay, about her experience of the exhibit. She said that the message she took from it came down to a single emotion: worry. She felt she had awakened to the biases that exist in machine learning, commenting that when a machine makes a decision, it really comes from whoever made the program or the machine: “it’s from their mind and from their mind-view only”. Samantha also asked, “So what’s exactly going on in that super mind, or that ‘artificial’ super mind?”. To me, she seemed to get at the emotional truth in Paglen’s exhibit, one that feels almost eerie and apocalyptic, as we watch sense and logic slowly deteriorate while the categories escalate from a harmless apple to the attempt to represent something as abstract as an anomaly in a tangible, visible form.

Inspired by Samantha’s insightful interpretation of Paglen’s piece, I decided to embark on a journey to discover my own. I recalled the brief introduction at the entrance of the exhibit, remembering the mention of image sets and machine learning. I began to speculate that the images before me had likely been seen through the eyes of an algorithm. Perhaps, I thought, Paglen was commenting on the way algorithms are used to label abstract concepts, like anomalies, as if they were equivalent to an apple. I thought these images were the output of such faulty algorithmic identification, the result of machine learning and all the biases that come with it.

Spoiler alert: I was wrong.

It was a few simple words from an employee at the Barbican that completely dismantled my interpretation of “From ‘Apple’ to ‘Anomaly’”. I was told, “People think they understand it, but have gotten the wrong order…they think this is the output of the search rather than the input of the search.” I was part of the masses. I was one of the people who got it completely, and utterly, wrong. In a matter-of-fact, nonchalant utterance, the employee said: it’s about ImageNet. ImageNet, the very word I had so wrongly glossed over as I read the introduction to the exhibit just hours before.

In a fury of curiosity, I quickly whipped out my phone and fell into an abyss of Google searches: What is ImageNet? Is ImageNet a bank of images? Is ImageNet a machine? Where did ImageNet’s images come from? My questions were endless, and my understanding remained largely inconclusive. In my research, I found that ImageNet, in a few words, is a data set. But not just any data set: arguably the most influential data set to exist to this day. In a Quartz article, Dave Gershgorn calls ImageNet “the data that transformed AI research—and possibly the world”. And, to blow your mind even further, training on ImageNet pushed machines’ accuracy at correctly identifying objects from 71.8% to a whopping 97.3%, which Gershgorn notes far surpasses human abilities.

So, in other words, think of ImageNet as a big box of extremely effective flashcards. ImageNet is essentially the world’s best set of flashcards, and machines and algorithms all over the world are begging to be its students. However, these flashcards were not made by a machine. They were made by people.

With this new knowledge, I turned Paglen’s work over to expose a new side. I disconnected it from the flawed and backwards understanding I had of it when I saw it for the first time. I saw “From ‘Apple’ to ‘Anomaly’” for what it really is: a graveyard of inputs. However, despite my initial misunderstanding of Paglen’s work, I think that his choice to represent the usually solely virtual existence of ImageNet in a tangible and visual form before the viewer allows us to see the vast influence this bank of images can have on artificial intelligence. It exposes the flaws and innate biases within ImageNet before the eyes of the average passer-by. In a recent interview, Paglen said that “It is important for us to look at these images and to think about the kinds of politics that are built into technical systems. I think that showing those images and labels is itself an indictment of the process—a particular kind of indictment that can only really be done effectively by looking”. Paglen has used ImageNet as the source material for his work in a way that it is not usually subject to. ImageNet is in front of the camera, not behind the scenes, and this stark exposure is enlightening the viewer to the innate bias of the data that is teaching machines all over the world to see.

Down the YouTube Rabbit Hole: Jeremy Corbyn and Anti-Semitism Claims

Linda Abelman is an 18-year-old Stoke Newington resident who has just exercised her right to vote for the first time in the December election. Having spent the past three years confused by Brexit, Linda decided to conduct comprehensive research before deciding on a candidate. In true Generation Z fashion, she took to YouTube. When I asked what she found, I expected her to tell me about a pro-Labour agenda; I assumed that YouTube’s algorithms would serve her content popular with young Londoners like herself. But her answer was nothing like that. Instead, she filled me in on the overwhelming number of anti-Semitism claims that had shaped her perception of Jeremy Corbyn and, by extension, the Labour Party. Because my assumption was so far off, I decided to investigate YouTube’s recommendation algorithm further by examining the video network that sustains the anti-Semitism claims surrounding Jeremy Corbyn.

YouTube’s recommendation algorithms are designed to increase the time people spend on the platform, which is essential to its advertising revenue. After Google Brain, the company’s AI division, took over YouTube’s recommendations in 2015, the algorithm seemed to produce more extreme suggestions. According to Zeynep Tufekci, an associate professor at the University of North Carolina, the algorithm “promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes”, meaning that ever more intense content is made instantly available. Former YouTube employee Guillaume Chaslot claims that the demand for watch time saw the algorithm push conspiracy videos on users, and he believes that intensified efforts to grab user attention will only spread more problematic and prohibited content. His project to introduce diversity into the recommendation algorithm did not generate the same watch time and was therefore shut down. This shows not only YouTube’s single-minded determination to maximise watch time, but also its power as an instrument of radicalisation. It constrains the younger generation, like Linda, who rely on YouTube for their understanding of the world around them.

To understand Linda’s experience, I conducted a network analysis of YouTube videos related to Jeremy Corbyn and anti-Semitism claims. The study had three main objectives for understanding the narrative on offer: first, to use Gephi for data visualisation; second, to identify the central clusters that drive the content; and third, to isolate the unexpected clusters that raise concern within the network.

Graph A:

Image: by the author

The above graph shows a cluster of nodes that exhibit two central themes: the reaction of the Jewish community to anti-Semitism claims and the general election.

The blue arrows point to videos that address Corbyn’s perspective on Judaism. The video “Jews are terrified of a Corbyn government” is a central node; it feeds into the Jewish community’s narrative on Corbyn, with 87% of British Jews viewing him as anti-Semitic. The video on the Chief Rabbi’s attack on the Labour Party, in which he declared that a “new poison” of anti-Semitism existed at the top, only reinforces this disposition. The small community of British Jews, only 0.5% of the population, look to the Chief Rabbi as a guide for their best interests. The video on the Rabbi is directly connected to Boris Johnson’s response, which shows the highly politicised nature of the issue that feeds into the recommendation algorithm. Johnson’s reaction creates a divide between the two parties that furthers the anti-Semitic narrative around the Labour Party. Corbyn finally addresses this in a connected video; however, it is drowned out by further controversies linking him to anti-Semitism. This series of related videos therefore solidifies the anti-Semitism claims surrounding Jeremy Corbyn.

The yellow arrows mark videos that tie the general election to the allegations against Corbyn. Each party’s manifesto is brought to light, with Brexit as the main focus. The BBC’s video “Johnson V Corbyn election debate: Who? Parties react” shows a reputable news source contributing to the divide between the two parties.

This focus on Brexit contributes to the algorithm’s facilitation of controversy. People like Linda, who searched for the general election and Brexit to educate themselves, instead came across polarising information about both parties. Not only does this harm the political process by steering away from facts, it also degrades candidates for reasons other than their policies, which is damaging to the public because it stops them from fully exercising their political power. While the connected node “What would Boris Johnson and Jeremy Corbyn get each other for Christmas” is not directly related to the anti-Semitism claims against Corbyn, it does construct a narrative about each leader’s personality. This is extremely powerful in elections because it shapes a candidate’s likeability. The recommendation algorithm’s ability to combine the controversial and the personal makes it a notable force within the political process.

Graph B:

Image: by the author

Graph B shows the isolated outliers that do not contribute to the narrative. “I bought CELEBRITY S used Iphone/ now I KNOW his SECRET” is an animated video in which a young girl talks about knowing Justin Bieber’s secrets; the keywords “secret” and “scandal” appear in this video. “I broke my legs to satisfy my mom but it was not enough” also contains the keywords “accusation” and “scandal”, perpetuating the extreme content that excites users.

My brief investigation calls for greater awareness of radicalised YouTube suggestions. The political process is in the hands of people like Linda, who increasingly make life-altering decisions based on this content.

AI and algorithms: a challenge for human creativity and the artistic world?

Van Gogh, Art and Algorithms. Photo: Pixabay.

What if entering a mathematical formula into an algorithm could turn you into a successful artist? In October 2018 at Christie’s auction house in New York, a French art collective sprang a surprise by selling the first algorithmically generated portrait to come to auction for the incredible sum of $432,500, almost 45 times its high estimate. The Edmond de Belamy portrait depicts, in the classical European style, a blurry, corpulent gentleman in a black coat and white collar. Apart from its style, however, there is nothing conventional about this painting, which immediately sparked controversy in the art world and raised questions about the nature of art and human creativity.[https://drive.google.com/file/d/1esAOv8MsVzYH9njGmHnqUdgPh4aFDVvK/view]

Edmond de Belamy, from La Famille de Belamy. Photo: Christie’s.

Progress in digital technologies has led to the gradual replacement of humans by AI machines in many fields. Yet it seemed that certain tasks, such as creative production, belonged to humans alone. Nonetheless, AI and algorithms are now breaking into the most subjective and human parts of our lives. These sophisticated programs are becoming ever more intertwined with artistic practice, generating singular artworks in fine art, music, literature, dance and even the culinary arts.[https://www.invaluable.com/blog/ai-art/]

What, then, becomes of human creativity in the artistic process? Are algorithms capable of creativity? Some people feel anxious about this evolution in the art sector. That is the case for Anna, a 25-year-old intern in Sotheby’s Books & Manuscripts Department, who states plainly: “For me, the use of algorithms to produce art pieces inherently contradicts our traditional understanding of an artist, who is supposed to use their imagination and be motivated by a precise intent”. These innovations could thus be seen as diminishing our ‘authority’ and subjectivity in artistic practice. But algorithmic art can also be understood as an extension of our creative capabilities, and these new practices are interesting ways of enhancing and democratising artistic creation.

Obvious, the French collective mentioned above, shows how algorithmic art can enrich creation. Their manifesto for the Belamy Family project presents the innovation as a natural evolution of a long-standing complementarity between art and science. For this project, they fed a GAN (Generative Adversarial Network) algorithm with a dataset of 15,000 images of portraits painted by humans. The algorithm uses neural network architectures of the kind applied to sensory data such as vision and speech recognition, and generates new visuals by mixing characteristics of the images in the training dataset. For Obvious, the stated goal of the Belamy family project was to democratise art and to allow the audience to play a role in the act of creation; AI can open up many creative possibilities for society. They express this enthusiasm throughout their manifesto: ‘we see algorithms as a fascinating tool to help dig into and better understand the different forces at stake in the process of creating something new’. Algorithms can help us think about the act of creation and develop our own creativity.[https://drive.google.com/file/d/1esAOv8MsVzYH9njGmHnqUdgPh4aFDVvK/view]
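For readers unfamiliar with the adversarial idea behind a GAN, here is a minimal sketch of the training loop: a generator learns to produce samples that a discriminator cannot tell apart from real data. This toy example, written in PyTorch, learns a simple two-dimensional distribution rather than 15,000 portraits, and it is emphatically not Obvious’s actual model; every number in it is an assumption for illustration.

```python
# Minimal GAN sketch (PyTorch) illustrating the adversarial training described above.
# Toy data only: the "real" samples are points clustered around (2, 2).

import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 2),                      # outputs a fake 2-D "sample"
)
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),        # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for the training dataset (portraits, in Obvious's case).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(2000):
    # Train discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train generator: try to make the discriminator believe its output is real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("sample fakes:", generator(torch.randn(3, latent_dim)).detach())
```

The "mixing of characteristics" the collective describes emerges from this tug-of-war: the generator gradually produces samples that share the statistical features of the training images.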

La Famille de Belamy. Photo: Obvious.

Here, then, the algorithmic artwork depends on the artist’s initial selection of subject, data, output and medium, and is directed toward the democratisation of art. Yet part of the process remains mysterious: the act of creation itself does not seem to proceed from the artist’s imagination. What role does human creativity have in creating algorithmic art? Can an algorithm be considered an artist? Mario Klingemann, another AI artist, tweeted on his profile (@quasimondo): “the process is pretty much all I am interested in. The results are more like a proof-of-work.” Given this, can we still attribute the same quality and significance to a work of art from which the progressive choices of creation have been removed?

Mario Klingemann on Twitter, @quasimondo. Photo: screenshot.

Can the final products of algorithmic art be thought of as great art? Jeremy Katz, worldwide Editorial Director at Ogilvy & Mather, posted a LinkedIn video (called The Week in a Minute) discussing the innovation and, in particular, the Edmond de Belamy portrait. He offers a fierce critique of the artwork, stating loud and clear that ‘it is terrible. Just awful’, and rubs it in by suggesting that ‘Obvious could not have been oblivious to the poor quality of their work’. For him, the mediocrity of the work was deliberate, intended to make fools of those who would consider it great art and pay almost half a million dollars for it. Throughout the video, his disdain for the Belamy portrait reads as a broader disdain for all algorithmic art. [https://www.linkedin.com/posts/jeremy-katz_agencyvoices-activity-6464287232571297792-O-lF/] Accounts and receptions of algorithmic art and the Belamy portraits are therefore multiple and contentious, expressing at once the anxiety and the enthusiasm of artists and their audiences.

Some artists, such as Trevor Paglen, have also taken up a responding position, commenting on algorithms’ visions of the world through their artworks. The emergence of algorithms in society has made them interesting subjects for art. Currently showing at the Barbican Centre, ‘From Apple to Anomaly’ examines the mysterious and powerful forces at work behind artificial intelligence. Paglen investigates a particular dataset of images (from ImageNet) used in teaching algorithms to ‘see’ the world. He selected approximately 30,000 images from pre-organised categories in the archive and individually printed and fixed the photographs to a wall of the Barbican Centre. Paglen inverts the usual movement by showing what is supposed to remain invisible to public eyes, unveiling the internal processes of algorithms: how they ‘see’ and classify the world. In conversation with Alona Pardo, curator of the exhibition, Paglen explains that ‘It is important for us to look at these images and to think about the kinds of politics that are built into technical systems’ because ‘the consequence of these kinds of training sets and categories is discrimination – the point of systems that classify people is to discriminate between people’ (catalogue of the exhibition, 2019). Algorithms can have strong effects on the population through the biases and stereotypes they produce. [https://www.barbican.org.uk/whats-on/2019/event/trevor-paglen-from-apple-to-anomaly]

Artist Trevor Paglen poses for a photo at a media preview of ‘Trevor Paglen: From ‘Apple’ To ‘Anomaly” at Barbican Centre. Photo by Tim P. Whitby/Getty Images for Barbican Centre.

The artistic gaze on such algorithms and their political perception of the world is interesting when set back against algorithm-generated artworks. This uncontrolled aspect of algorithms calls into question the artist’s dominion over the systems used for creation. It is far from certain that AI artists fully control the art they generate, so it is reasonable to think that algorithmic artworks may possess some internal agency that shapes the outcome of creation.

Robot & art, GIF

These controversial accounts of algorithmic artworks thus question the very nature of art and the role of human creativity in the artistic process. Some will say that algorithms can produce great art; others will say they cannot. Yet the most significant quality of any artwork is that it speaks to the subjectivity of its viewers. So if you think that algorithms produce artworks, then they do. That’s it, end of discussion.