From Apple, to Anomaly, to ImageNet

I stepped through a metal doorway into a dark corridor. To my right, white text stood illuminated against a black wall that stretched into a seemingly endless expanse of square photographs. Part invitation, part introduction, the glowing words urge the museum-goer to “take a critical look at how artificial intelligence networks are taught to ‘perceive’ and ‘see’ the world by engineers who provide them with vast training sets of images and words…”. I quickly read the rest of the text, passing over words like ImageNet and algorithms, as my mind was sucked into the vortex of photographs swirling in the distance. The arrangement starts with a single image of an apple and continually expands, both in the sheer number of images and in the level of controversy surrounding their labels. About midway, the pictures have grown to fill the wall from floor to ceiling, an overwhelming spectacle of organization amid an image of complete chaos. The categories range from babies, coffee, and jawbreakers to subarachnoid space, heathen, and schemer, ending with the final category: Anomaly.

This vortex of vivid imagery is Trevor Paglen’s exhibit, “From ‘Apple’ to ‘Anomaly’”, currently at the Barbican in central London. Looking at this piece, I felt an overwhelming sense of smallness as I stood just inches from the image wall, neck strained as I stared at photographs hanging entire body lengths above me. But beyond the physical fact of my insignificance in that room was a hazy fog of confusion clouding my understanding of what exactly I was looking at. Sure, I was completely fascinated, but by what? What was Trevor Paglen trying to tell me?

Emerging from the darkness, I spoke to a fellow museum-goer, Samantha-Kay, about her experience of the exhibit. She said the message she took from it came down to a single emotion: worry. She felt she had awakened to the biases that exist in machine learning, commenting that when a machine makes a decision, it really comes from whoever made the program or the machine: “it’s from their mind and from their mind-view only”. Samantha also asked, “So what’s exactly going on in that super mind, or that ‘artificial’ super mind?”. To me, she seemed to get at the emotional truth of Paglen’s exhibit. One that feels almost eerie and apocalyptic, as sense and logic slowly deteriorate while the categories escalate from a harmless apple to the attempt to represent something as abstract as an anomaly in a tangible, visible form.

Inspired by Samantha’s insightful interpretation of Paglen’s piece, I decided to embark on a journey to discover my own. I recalled the brief introduction at the entrance of the exhibit, remembering the mention of image sets and machine learning. I began to speculate that the images before me had been seen through the eyes of an algorithm. Perhaps, I thought, Paglen is commenting on the way algorithms are used to label abstract concepts, like anomalies, as if they were as concrete as an apple. I took these images to be the output of such faulty algorithmic identification, the result of machine learning and all the biases that come with it.

Spoiler alert: I was wrong.

It was a few simple words from an employee at the Barbican that completely dismantled my interpretation of “From ‘Apple’ to ‘Anomaly’”. I was told, “People think they understand it, but have gotten the wrong order…they think this is the output of the search rather than the input of the search.” I was part of the masses. I was one of those people who got it completely, and utterly, wrong. In a matter-of-fact, nonchalant utterance, the employee said: It’s about ImageNet. ImageNet, the very word I had so wrongly glossed over as I read the introduction to the exhibit just hours before.

In a fury of curiosity, I quickly whipped out my phone and fell into an abyss of Google searches: What is ImageNet? Is ImageNet a bank of images? Is ImageNet a machine? Where did ImageNet’s images come from? My questions were endless, and my understanding remained largely inconclusive. In my research, I found that ImageNet, in a few words, is a data set. But not just any data set: arguably the most influential data set in AI research to this day. In a Quartz article, Dave Gershgorn calls ImageNet “the data that transformed AI research—and possibly the world”. And, to blow your mind even further: over the lifespan of the annual ImageNet competition, the accuracy of the winning algorithms at correctly identifying images rose from 71.8% to a whopping 97.3%, a level Gershgorn says far surpasses human abilities.

So, in other words, think of ImageNet as a big box of extremely effective flashcards. ImageNet is essentially the world’s best set of flashcards, and machines and algorithms all over the world are begging to be its students. However, these flashcards were not made by a machine. They were made by people.
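To make the flashcard metaphor concrete, here is a minimal sketch in Python of one such “student” at work. It assumes the torchvision library and a placeholder photo named apple.jpg:

```python
# A minimal sketch of an ImageNet "student": a network whose weights were
# learned from ImageNet's labelled images, naming whatever photo it is shown.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT      # weights trained on ImageNet
model = models.resnet50(weights=weights)
model.eval()

# Apply the exact preprocessing the model was trained with.
image = Image.open("apple.jpg")                # placeholder image file
batch = weights.transforms()(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Every label the model can print is a category humans wrote for ImageNet.
top_prob, top_idx = probs.topk(3)
for p, i in zip(top_prob[0].tolist(), top_idx[0].tolist()):
    print(f"{weights.meta['categories'][i]}: {p:.1%}")
```

Every label the machine can utter was written by a person; it is only reciting its flashcards.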

With this new knowledge, I turned Paglen’s work over to expose a new side. I disconnected it from the flawed, backwards understanding I had when I saw it for the first time. I saw “From ‘Apple’ to ‘Anomaly’” for what it really is: a graveyard of inputs. Despite my initial misunderstanding, I think Paglen’s choice to give ImageNet’s usually virtual existence a tangible, visual form lets us see the vast influence this bank of images has on artificial intelligence. It exposes the flaws and innate biases within ImageNet to the eyes of the average passer-by. In a recent interview, Paglen said that “It is important for us to look at these images and to think about the kinds of politics that are built into technical systems. I think that showing those images and labels is itself an indictment of the process—a particular kind of indictment that can only really be done effectively by looking”. Paglen has used ImageNet as source material in a way it is not usually subjected to. ImageNet is in front of the camera, not behind the scenes, and this stark exposure awakens the viewer to the innate bias of the data that is teaching machines all over the world to see.

Down the YouTube Rabbit Hole: Jeremy Corbyn and Anti-Semitism Claims

Linda Abelman is an 18-year-old Stoke Newington resident who has just exercised her right to vote for the first time, in the December election. Having spent the past three years confused by Brexit, Linda decided to conduct thorough research before settling on a candidate. In true Generation Z fashion, she took to YouTube. When I asked her what she found, I was expecting her to tell me about her pro-Labour agenda. I assumed that YouTube’s algorithms would serve her content popular with young Londoners like herself. But her answer was nothing like that. Instead, she told me about the overwhelming number of anti-Semitism claims that plagued her perception of Jeremy Corbyn and, by extension, the Labour Party. Because my assumption was so far off, I decided to investigate YouTube’s recommendation algorithm further by examining the video network that sustains the anti-Semitism claims surrounding Jeremy Corbyn.

YouTube’s recommendation algorithms are designed to increase the time people spend on the platform, which is essential to its advertising revenue. After Google Brain, the company’s AI division, took over YouTube’s recommendations in 2015, the algorithm seemed to produce more extreme suggestions. According to Zeynep Tufekci, an associate professor at the University of North Carolina, the algorithm “promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes.” This means that more intense content is made instantly available. Former YouTube employee Guillaume Chaslot claims that the demand for user watch time led the algorithm to push conspiracy videos on users. He believes that intensified efforts to grasp user attention will only spread problematic and prohibited content. His project to introduce diversity into the recommendation algorithm did not generate the same watch time and was therefore shut down. Not only does this show YouTube’s blinding determination to maximise watch time, but also its power as an instrument of radicalisation. This harms members of the younger generation, like Linda, who rely on YouTube for their understanding of the world around them.

To understand Linda’s experience, I conducted a network analysis of YouTube videos related to Jeremy Corbyn and anti-Semitism claims. The study pursued three main objectives in order to understand the narrative being imposed: first, to use Gephi for data visualisation; second, to identify the central clusters that drive the content analysis; and third, to isolate the unexpected clusters that raise concern within the network.
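For readers curious how such a network can be gathered, here is a minimal sketch in Python. It assumes a YouTube Data API key (YOUR_API_KEY is a placeholder) and the networkx library; the relatedToVideoId parameter it relies on was available at the time of this research, though Google has since retired it. The resulting file opens directly in Gephi.

```python
# A sketch of collecting a recommendation network around a search query,
# then exporting it in a format Gephi can visualise.
import networkx as nx
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")
graph = nx.DiGraph()

# Seed the network with videos matching the search terms.
seeds = youtube.search().list(
    q="Jeremy Corbyn anti-Semitism", part="id", type="video", maxResults=10
).execute()

for item in seeds["items"]:
    seed_id = item["id"]["videoId"]
    # For each seed, pull the videos YouTube lists as related to it.
    related = youtube.search().list(
        relatedToVideoId=seed_id, part="id", type="video", maxResults=25
    ).execute()
    for rel in related["items"]:
        graph.add_edge(seed_id, rel["id"]["videoId"])

nx.write_gexf(graph, "corbyn_network.gexf")  # open this file in Gephi
```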

Graph A:

Image: by the author

The above graph shows a cluster of nodes that exhibit two central themes: the reaction of the Jewish community to anti-Semitism claims and the general election.

The blue arrows point to videos that address Corbyn’s perspective on Judaism. The video “Jews are terrified of a Corbyn government” is a central node. It contributes to the Jewish community’s narrative on Corbyn, with 87% of British Jews viewing Corbyn as anti-Semitic. The video confirming the Chief Rabbi’s attack on the Labour Party, in which he declared that a “new poison” of anti-Semitism existed at the top, only furthers this disposition. The small community of British Jews, only 0.5% of the population, looks to the Chief Rabbi as a guide for its best interests. The video on the Rabbi is directly connected to Boris Johnson’s response, showing the highly politicised nature of the issue that feeds into the recommendation algorithm. Johnson’s reaction creates a divide between the two parties that furthers the anti-Semitic narrative around the Labour Party. Corbyn finally addresses this in a connected video; however, it is drowned out by further controversies linking him to anti-Semitism. This series of related videos therefore solidifies the anti-Semitism claims surrounding Jeremy Corbyn.

The yellow arrows mark videos that attach the general election to the allegations against Corbyn. Each party’s manifesto is brought to light, with Brexit as the main concentration. The BBC’s video “Johnson V Corbyn election debate: Who? Parties react” shows a reputable news source contributing to the divide between the two parties.

This focus on Brexit contributes to the algorithm’s facilitation of controversy. People like Linda, who searched for the general election and Brexit to educate themselves, instead came across polarising information on both parties. Not only does this harm the political process by shying away from facts, it also degrades candidates for reasons other than their policies. This hurts the general public because it prevents them from fully exercising their political power. While the connected node “What would Boris Johnson and Jeremy Corbyn get each other for Christmas” is not directly affiliated with the anti-Semitism claims against Corbyn, it does provide a narrative on each leader’s personality. This is extremely powerful within elections because it shapes a candidate’s likeability. The YouTube recommendation algorithm’s ability to combine the controversial and the personal makes it a notable force within the political process.

Graph B:

Image: by the author

Graph B shows the isolated outliers that do not contribute to the narrative. “I bought CELEBRITY S used Iphone/ now I KNOW his SECRET” is an animated video in which a young girl talks about knowing Justin Bieber’s secrets. The keywords “secret” and “scandal” are used within this video. “I broke my legs to satisfy my mom but it was not enough” also contains the keywords “accusation” and “scandal”, perpetuating the extreme content that excites users.

My brief investigation calls for increased awareness of radicalised YouTube suggestions. The political process is in the hands of people like Linda, who are increasingly making life-altering decisions based on this content.

AI and algorithms: a challenge for human creativity and the artistic world?

Van Gogh, Art and Algorithms. Photo: Pixabay.

What if entering a mathematical formula into an algorithm could turn you into a successful artist? In October 2018, at Christie’s auction house in New York, a French art collective sprang a surprise by selling the first algorithm-generated portrait to be auctioned there for the incredible sum of $432,500, almost 45 times its high estimate. The Edmond de Belamy portrait depicts – in the classical European style – a blurry, corpulent gentleman wearing a black coat and a white collar. Apart from its style, however, there is nothing conventional about this painting, which immediately sparked controversy in the art world and raised questions about the nature of art and human creativity.[https://drive.google.com/file/d/1esAOv8MsVzYH9njGmHnqUdgPh4aFDVvK/view]

Edmond de Belamy, from La Famille de Belamy. Photo: Christie’s.

Indeed, progress in digital technologies has led to the gradual replacement of humans by AI machines in many fields. It seemed, however, that certain tasks – such as creative production – belonged to humans alone. Nonetheless, AI and algorithms are now breaking into the most subjective and human parts of our lives. These sophisticated programs are becoming more and more intertwined with artistic practices, generating singular artworks in the realms of fine art, music, literature, dance and even culinary arts.[https://www.invaluable.com/blog/ai-art/]

What, then, becomes of the role of human creativity in the artistic process? Are algorithms capable of creativity? Some people feel anxious about this evolution in the art sector. This is the case for Anna, a 25-year-old intern in Sotheby’s Books & Manuscripts Department, who states plainly: “For me, the use of algorithms to produce art pieces inherently contradicts our traditional understanding of an artist, who is supposed to use their imagination and to be motivated by a precise intent”. These innovations could thus be seen as a reduction of our ‘authority’ and our subjectivity in artistic practices. However, algorithmic art can also be understood as an extension of our creative capabilities. These new practices are interesting ways of enhancing and democratizing artistic creation.

Obvious, the French collective mentioned above, shows how algorithmic art can enrich creation. Their manifesto for the Belamy Family project presents this innovation as a natural evolution of a traditional complementarity between art and science. For the project, they fed a GAN (Generative Adversarial Network) a dataset of 15,000 portraits painted by humans. A GAN pits two neural networks against each other: a generator produces new visuals by mixing characteristics of the training images, while a discriminator judges whether each result could pass for a portrait from the dataset. For Obvious, the stated goal of the Belamy Family project was to democratize art and to allow the audience to play a role in the act of creation. AI can reveal many creative possibilities for society, and they express this enthusiasm in their manifesto: ‘we see algorithms as a fascinating tool to help dig into and better understand the different forces at stake in the process of creating something new’. Algorithms can help us think about the act of creation and develop our own creativity.[https://drive.google.com/file/d/1esAOv8MsVzYH9njGmHnqUdgPh4aFDVvK/view]

La Famille de Belamy. Photo: Obvious.
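To make the adversarial setup concrete, here is a minimal sketch of a single GAN training step, written in Python with PyTorch. It is illustrative only, not Obvious’s actual code: the tiny network sizes, the flattened 64×64 images, and the real_batch of portrait tensors are placeholder assumptions.

```python
# A minimal sketch of the two-player GAN game: a generator learns to paint,
# a discriminator learns to spot fakes, and each improves against the other.
import torch
import torch.nn as nn

LATENT = 100      # size of the random noise vector fed to the generator
IMG = 64 * 64     # a flattened greyscale "portrait", kept small for clarity

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),        # outputs a fake portrait
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),       # probability the input is real
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    """One round of the game on a batch of real (human-painted) portraits."""
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # 1. Teach the discriminator to tell real portraits from generated ones.
    fakes = generator(torch.randn(n, LATENT))
    d_loss = (loss(discriminator(real_batch), real_labels)
              + loss(discriminator(fakes.detach()), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Teach the generator to fool the updated discriminator.
    g_loss = loss(discriminator(fakes), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After many such rounds over the 15,000 portraits, the generator’s output begins to pass for a member of the training set: hence the blurry but recognisably classical Belamy face.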

Here, then, the algorithmic artwork depends on the artist’s initial selection of subject, data, output and medium, and is directed toward the democratization of art. Yet part of the process remains mysterious: the act of creation itself does not seem to proceed from the artist’s imagination. What role does human creativity play in the making of algorithmic art? Can an algorithm be considered an artist? Mario Klingemann, another AI artist, tweeted from his profile (@quasimondo): “the process is pretty much all I am interested in. The results are more like a proof-of-work.” Considering this, can we still attribute the same quality and significance to a work of art from which the progressive choices of creation have been removed?

Tweet by Mario Klingemann, @quasimondo. Photo: screenshot.

Can the final products of algorithmic art be thought of as great art? Jeremy Katz – the worldwide Editorial Director at Ogilvy & Mather – posted a LinkedIn video (called The Week in a Minute) to talk about this innovation, and especially about the Edmond de Belamy portrait. He delivers a scathing critique of the artwork, stating loud and clear that ‘it is terrible. Just awful’, and rubs it in by suggesting that ‘Obvious could not have been oblivious to the poor quality of their work’. For him, the mediocrity of the work was deliberate, meant to make fools of those who would consider it great art and pay almost half a million dollars for it. Throughout the video, his disdainful attitude toward the Belamy portrait reads as a more global disdain for all algorithmic art. [https://www.linkedin.com/posts/jeremy-katz_agencyvoices-activity-6464287232571297792-O-lF/] Accounts and receptions of algorithmic art, and of the Belamy portraits, are therefore multiple and contentious, expressing at once the anxiety and the enthusiasm of artists and their audiences.

Meanwhile, some artists – such as Trevor Paglen – have taken the opposite position, commenting on algorithms’ visions of the world through their own art. The emergence of algorithms in society has made them interesting subjects for art. Currently exhibited at the Barbican Centre, ‘From Apple to Anomaly’ examines the mysterious and powerful forces acting behind artificial intelligence. Paglen investigates a particular dataset of images (from ImageNet) used to teach algorithms to ‘see’ the world. He selected approximately 30,000 images from pre-organized categories in the archive and individually printed and fixed the photographs to a wall of the Barbican Centre. Paglen inverts the usual movement by showing what is supposed to remain invisible to public eyes. He unveils the internal processes of algorithms: how they ‘see’ and classify the world. In his conversation with Alona Pardo – curator of the exhibition – Paglen explains that ‘It is important for us to look at these images and to think about the kinds of politics that are built into technical systems’ because ‘the consequence of these kinds of training sets and categories is discrimination – the point of systems that classify people is to discriminate between people’ (catalogue of the exhibit, 2019). Algorithms can have strong effects on the population through the biases and stereotypes they produce. [https://www.barbican.org.uk/whats-on/2019/event/trevor-paglen-from-apple-to-anomaly]

Artist Trevor Paglen poses for a photo at a media preview of ‘Trevor Paglen: From “Apple” to “Anomaly”’ at the Barbican Centre. Photo by Tim P. Whitby/Getty Images for Barbican Centre.

The artistic gaze on such algorithms, and on their political perception of the world, is interesting when set back in the scope of algorithm-generated artworks. This uncontrolled aspect of algorithms calls into question the artist’s command of the systems they use for creation. It is not certain that AI artists fully control what their systems generate. It is thus reasonable to think that algorithmic artworks possess some internal agency, one that shapes the result of creation.

Robot & art, GIF

Thus, these controversial accounts of algorithmic artworks question the very nature of art and the role of human creativity in the artistic process. Some will say that algorithms can produce great art; others will say that they cannot. However, the most significant aspect of any artwork is that it speaks to the subjectivity of its viewers. Therefore, if you think that algorithms produce artworks, then they do. That’s it, end of discussion.

How you are being categorized by Tinder’s algorithm

Emily doesn’t swipe anymore. Like 57 million people around the world, she was a Tinder user. The 21-year-old student is now in a relationship. But even if she weren’t, she might be reluctant to use the American dating app again. More than ever.

‘At best a client, at worst, a product’

Indeed, the French journalist Judith Duportail recently revealed that Tinder was rating people in order to make accurate suggestions. In her 2019 book L’amour sous algorithme (Love Under Algorithm), she unveils the existence of the Elo Score, that is to say, a desirability rating. Judith Duportail received 800 pages of personal data, as provided for by European data-protection law. She had everything: conversations, geolocated places, likes. Everything but her Elo Score.

‘At best a client; at worst, a product’, Duportail claims, showing that the app heightens inequalities within relationships by encouraging men to date women who are younger, less wealthy and less educated.

TINDER GIF 3
By the author

Emily told us that she had been using the app for ‘a fair amount of time’. She wasn’t looking for anything serious at the time. She ‘thought it’d be a good laugh, a good way to meet new people’. Yet, she quickly noticed that the algorithm was shaping her experience.

I did find it quite addictive and I also noticed that it would stop showing me certain kind of people after a while. And it did feel like it filtered out, based on things like education level, potentially race, or the kind of jobs people had.

Even though the terms and conditions are public, there is allegedly no way for users to find their score, nor to learn how it is calculated. Tinder does not reveal the underlying code, but the score would apparently be based on factors such as the success of your profile picture and the number of complicated words used in the chat.

The app goes even further: not every swipe carries the same weight on your value, since it depends on the swiper’s own rating. In 2019, Vox reported that ‘the app used an Elo rating system, which is the same method used to calculate the skill levels of chess players: You rose in the ranks based on how many people swiped right on (“liked”) you, but that was weighted based on who the swiper was. The more right swipes that person had, the more their right swipe on you meant for your score’.
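Tinder has never published its exact formula, but the chess-style mechanism Vox describes is easy to sketch in Python; the K factor and the sample scores below are illustrative assumptions, not Tinder’s values.

```python
# A sketch of a chess-style Elo update, treating a right swipe as a "win"
# for the person being swiped on. K and the sample scores are illustrative.
K = 32

def expected_score(rating_a: float, rating_b: float) -> float:
    """Elo's predicted probability that A 'beats' B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_after_swipe(swiped: float, swiper: float, liked: bool) -> float:
    """Return the swiped user's new rating after one swipe."""
    outcome = 1.0 if liked else 0.0
    return swiped + K * (outcome - expected_score(swiped, swiper))

# The same right swipe is worth far more coming from a highly rated user:
print(update_after_swipe(1400, swiper=1800, liked=True))  # ~1429
print(update_after_swipe(1400, swiper=1000, liked=True))  # ~1403
```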

Tinder stated on its blog that the Elo Score was old news, an ‘outdated measure’, and that the app now relies on the activity of its users.

According to Tinder’s website, ‘Tinder is more than a dating app. It’s a cultural movement.’

In a sense, this is not only marketing; it is partly true. Tinder is now renowned worldwide. According to the National Institute of Demographic Studies, 9% of French couples who started their relationship between 2005 and 2013 met online. In the UK, based on an Infogram study, this figure reached 20% in 2013, and 70% for same-sex couples, according to sociologists Reuben Thomas and Michael Rosenfeld.

Birds of a feather flock together

Sociologists did not wait for social media to identify the phenomenon of endogamy (or in-marriage): we tend to date people from the same social class. We share similar activities and meet partners within familiar groups: colleagues, friends, friends of friends. Digital categorizing thus pushes existing dynamics further.

When I swiped right, Emily says, it would show me more of those people. That is obviously kind of reinforcing social stratification, I guess, and the fact that people tend to end up with the ones that are similar to them.

TINDER GIF 1
By the author

Camille, a 21-year-old bisexual user of the app, also noticed she was being shown similar profiles. She told us it seemed ‘rather obvious when you use the app’. Yet Camille thinks it helps you find someone you are likely to meet in real life.

It seems like a good way to select the people to interact with. It seems rather innocent. It’s not something you give a lot of thought to.

Indeed, Tinder’s algorithm aims to match shared interests and equivalent wages (if you state your job in your description, the average income can be looked up). But keywords can be identified without being understood. Surprisingly enough, Tinder doesn’t get irony or idiomatic expressions, hence some awkward date experiences.

The biases of machine-learning systems 

This might be a matter of time. As Diggit Magazine recently reported, algorithms are machine-learning systems, fed by societal practices. This can be for the best… and for the worst. That is to say, this love conditioning has its biases, as the app can be supplied with racist or sexist content.

Moreover, in her AI study L’intelligence artificielle, pas sans elles ! (Artificial Intelligence, Not Without Women!), Aude Bernheim revealed that 88% of algorithms are made by men.

TINDER GIF 4
By the author

This might account for the imbalance among Tinder’s users: only 38% of Tinder users are female, against 62% male. The experience of queer female users also raises questions, as Camille’s story shows.

About women, I noticed that the app had lots of problems identifying the kind of girls I would like to date. I do believe that for queer people, especially women, it is actually harder to interact on the app. There are so many couples looking for a third person. So the algorithm works when you want a heterosexual relationship but for queer people… It does not work as well as it should. It is sadly predictable and that’s probably why I stopped using Tinder.

The app is well aware of these situations and tries to look inclusive on its website ‘Swipe Life’, made up of articles offering ‘dating tips’ such as ‘How to Talk to Your Partner About Non-monogamy’ or ‘What You Should Know When Dating Someone With Bipolar Disorder’. Yet the lack of transparency around its algorithm is still widely criticized.

TINDER GIF 2
By the author

They also seem to put the emphasis on how insignificant dating has become. They report on their website that ‘70% of these college students have never met up with their matches…and 45% say they use Tinder mostly for confidence boosting procrastination’. Dating is no longer a big deal. But it is big business: the app now has an annual turnover of 800 million euros.

While the categorization has become harder to understand since the end of the Elo Score, both analysts and users maintain that the profiling hasn’t come to an end.

Melanie Lefkowitz wrote that ‘Although partner preferences are extremely personal, it is argued that culture shapes our preferences, and dating apps influence our decisions.’ And Emily couldn’t agree more, having experienced it herself.

On the one hand, it really opens up your circle because there’s suddenly all those people you would never have bumped into in your life but you can message, and meet up with. On the other hand, the algorithm kind of dictates who you’re able to see as being out there. It’s actually quite limiting.

The Invisible Walls of YouTube

Photo by: Dado Ruvic/Reuters

November 2019 was a watershed month for American politics. It marked the beginning of the formal impeachment inquiry of the President of the United States, Donald Trump, only the third time an American president was to be impeached. Public hearings were held, investigations were made, and American news was abuzz with objective discussions, opposing arguments and rancorous accusations.

Politics can be controversial, with opinions sharply divided along partisan lines. The party division is marked by differences in principles of government and in religious and secular world views. This is further complicated by each individual holding opinions and beliefs grounded in their own experiences, principles and values. In a democracy, some would argue, it is important that the public remain actively engaged in the activities and conduct of their government. It is especially easy in this day and age to do so: social media and digital platforms have become the cornerstone of modern society, with a plethora of content at each individual’s fingertips.

YouTube is a large player in this global phenomenon, having grown since its launch in 2005 into the second largest social network in the world today. With 73% of US adults reportedly frequenting the video platform, YouTube is arguably as influential as televised news channels. The platform has even hosted livestreams of debates during US presidential elections, giving American citizens real-time access to information which may ultimately influence their vote.

Given the power it wields over such a massive number of American viewers, one might assume YouTube would capably present videos about Trump’s impeachment without bias, whatever a user’s political alignment. A quick use of its search engine reveals that it hosts news organizations of different political stances, each posting its own interpretation of events through its official channel as the story unfolds. On the surface, this seems fine. There is nothing wrong with a platform that presents both sides of a story along with different points of view. If anything, this makes YouTube an objective place to learn about news surrounding the impeachment, right?

Wrong, for a number of reasons. Studies indicate that social media usage has given rise to ‘echo chambers’: the pattern of insulating one’s media consumption from opposing views. From videos to online forums, this isolation from other perspectives on an issue reinforces the individual’s existing point of view, which comes to seem more true or right than the alternatives. This bias can grow to the point of taking priority over facts and logic. In short, echo chambers are real, dangerous and EVERYWHERE in the digital space.

And what better way to observe the potency of this effect than by searching YouTube for video coverage of the historic third impeachment of a US President. Using a neutral browser (one without search history, a digital footprint, or geographic preferences), three different news channels were looked up, each with its own political stance as presented in the Media Bias Chart: Buzzfeed, Fox News and NBC.

The Media Bias Chart. Image By: Ad Fontes Media

One random video was then pulled from each channel with the word ‘impeachment’ in the title, all uploaded within November 2019, when the impeachment process began. These videos are:

Buzzfeed News – Let’s Hear It For The Whistleblower – Impeachment Today Podcast

Fox News – Tucker’s big takeaways from the Trump impeachment saga

NBC News – Highlights: Fiona Hill And David Holmes’ Impeachment Hearing Testimony

Next, the YouTube Data Tool was used to scrape each video for its recommended videos. This data was then processed with the network analysis program Gephi to visualize the networks within which these three videos reside.

Visualization of Video Recommendation by News Channel. Image by: Author

Lo and behold, it turns out that overlaps in recommended videos among these three channels are rare. Between the left-leaning NBC and the conservative Fox News, a mere 5 videos out of a cumulative 171 recommendations could possibly lead into the other channel’s network. This means that a user has only about a 3% chance of being exposed to content representing the opposing political stance, regardless of which channel they started with. The echo chamber looks even stronger for Buzzfeed, whose network of recommended videos shares absolutely no overlap with the other news channels. If you had started looking up ‘impeachment’ through Buzzfeed in November 2019, you would have had no exposure to any politically opposing content featured on the other news channels.
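The arithmetic behind that 3% figure is a simple set overlap. Here is a sketch in Python, with placeholder video IDs standing in for the scraped recommendation lists:

```python
# Placeholder IDs stand in for the recommended videos scraped from each
# channel's network; in the actual exercise there were 171 in total.
nbc_recs = {"vid01", "vid02", "vid03", "vid04"}
fox_recs = {"vid03", "vid05", "vid06", "vid07"}

# Videos present in both networks are the only bridges across the divide.
bridges = nbc_recs & fox_recs
total = len(nbc_recs) + len(fox_recs)
print(f"{len(bridges)} bridging videos out of {total} recommendations "
      f"({len(bridges) / total:.0%} chance of crossing over)")
```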

As borne out in the above exercise, exposure to opposing political viewpoints is rare for a YouTube user, so an individual is likely to maintain his or her existing political viewpoints. This is simply the result of YouTube’s algorithm following its programming and doing its job. Engineers at Google released a paper in 2016 explaining the recommendation system YouTube has refined for years: a complex machine learning process that gathers and analyses data down to each specific user. The result is an adaptive program that presents videos personalized to individual viewing patterns and preferences. This may be fine if you are looking to kill time watching meme videos, but it walls users off from a more expansive world view.
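To see why such personalization narrows rather than widens what you watch, consider a toy version of the ranking idea described in that paper. Real systems learn video representations with deep networks; the three-number vectors and video names below are invented for illustration.

```python
# A toy recommender: score candidates by similarity to what was watched.
import numpy as np

video_embeddings = {
    "impeachment_hearing": np.array([0.9, 0.1, 0.0]),
    "tucker_monologue":    np.array([0.8, 0.3, 0.1]),
    "cat_compilation":     np.array([0.0, 0.1, 0.9]),
}

# The user is represented by the average of their watch history.
watch_history = ["impeachment_hearing"]
user_vector = np.mean([video_embeddings[v] for v in watch_history], axis=0)

# Rank unseen videos by dot-product similarity to the user vector.
scores = {name: float(emb @ user_vector)
          for name, emb in video_embeddings.items()
          if name not in watch_history}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```

Because the user vector is built entirely from past viewing, the top suggestion is always the candidate most like what was already watched; content from across the aisle starts at a structural disadvantage.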

“It isn’t inherently awful that YouTube uses AI to recommend video for you, because if the AI is well tuned it can help you get what you want.” – AlgoTransparency Founder, Guillaume Chaslot

The former algorithm developer for Google and YouTube, Guillaume Chaslot, has pointed out the flaws in the video recommendation algorithm. He states that the primary purpose of YouTube’s algorithm is not to inform or educate viewers, but to capture their attention and keep them on the platform. This encourages the existence of echo chambers within digital communities, continually reinforcing pre-existing views. “We’ve got to realize that YouTube recommendations are toxic and it perverts civic discussion,” said the algorithmic transparency advocate at a recent tech conference.

The issue of algorithms enabling echo chambers raises questions about YouTube being an objective platform for political media distribution. We cannot deny the data showing how popular YouTube is. It would be foolish for news organizations to not reach out to a large segment of Americans through this video platform. But with analysis proving the effectiveness of the algorithm and developers acknowledging civic concerns, what is recommended to you will not always be what is best for you. If individuals wish to break through these invisible walls, it would be wise to make a deliberate effort to look beyond them.

Random video recommendation or subtle mental manipulation?

Digging deep into the YouTube algorithm


Who hasn’t come across an absurd YouTube recommendation? You’re watching CrashCourse Philosophy and you end up on a conspiracy theory about Donald Trump being a lizard. Sometimes videos seem completely irrelevant, and there is a whole section on Google Support for user problems.

YouTube is so powerful that even young children are obsessed with it. The platform’s recommendations section is constantly trying to find what we would like to watch next whilst scanning massive amounts of personal data. With great power comes great responsibility, and we had better think twice before blindly trusting the platform.


Trusting the algorithm?

YouTube profiles are designed for crafting and personalisation, using affordances such as subscribing, upvoting and creating lists. AI then scans user activity, likes, dislikes, previously viewed videos… and all other sorts of personal information, like phone number, home and work address, and recently visited places, and suggests potentially likeable video content. YouTube uses this information as a “baseline” and builds up recommendations linked to users’ viewing history.

In 2016, Google published an official paper on the deep learning processes embedded in the YouTube algorithm. The algorithm, they write, combines gathered data based on factors such as scale, freshness and noise: features linked to viewership, the constant flow of new videos, and previous content liked by the user. They provide an analysis of the computation processes, but they still cannot explain the glitches commonly found in the system. For instance, why is the algorithm always pushing towards extremes?
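As a crude illustration of one of those factors, the sketch below shows how a “freshness” signal could tilt a ranking score. In the paper itself, video age is fed into the model as a feature rather than applied afterwards, so the exponential decay and seven-day half-life here are stand-in assumptions.

```python
# A stand-in for a "freshness" factor: decay a relevance score with age,
# so that newer uploads can outrank older but more relevant ones.
import math

def freshness_weighted(base_score: float, age_days: float,
                       half_life_days: float = 7.0) -> float:
    """Halve a video's score every `half_life_days` since upload."""
    return base_score * math.exp(-math.log(2) * age_days / half_life_days)

print(freshness_weighted(0.9, age_days=30))  # strong match, but a month old
print(freshness_weighted(0.7, age_days=1))   # weaker match, freshly uploaded
```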


The dark side of the algorithm

Adapting to one’s preferences might be useful, but it seems like YouTube is prompting radicalism, as if you are never “hard core” enough for it.

Guillaume Chaslot, founder of AlgoTransparency – a project aiming at transparency of web data – claims that recommendations are in fact pointless: they are designed to waste your time on YouTube, increasing your view time. Chances are you will either get hooked on the platform or end up clicking on one of the ads, thus generating revenue. Chaslot says that the algorithm’s goal is to increase your watch time, or in other words the time spent on the platform, and that it doesn’t necessarily follow user preferences.

It seems that YouTube’s algorithms promote whatever is both viral and engaging, exploiting wild claims and hate speech in the process. Perhaps this is why the platform has been targeted by multiple extremist and conspiracy theory channels. However, it is important to acknowledge that YouTube has taken measures against this problem.


Our investigation

Inspired by recent research on this topic, we conducted our own expedition down the YouTube rabbit hole. The project aims to examine the YouTube recommendation algorithm, so we started with a simple YouTube search on ‘Jeremy Corbyn’ and ‘anti-Semitism’. The topic choice is essentially arbitrary, prompted only by the fact that we are London residents familiar with the news. For clarity’s sake, here is a visual representation of the data (Figure 1.0).

In Figure 1.0, we can see the network formed by all videos related to the key terms which end up in the recommendations section. The network has 1,803 nodes and 38,732 edges: each node represents a political video on current global events, and each edge represents how two videos relate to one another.


Figure 1.0


Alongside the expected titles including key words such as ‘Jeremy Corbyn’, ‘Theresa May’, ‘Hebrew’ and ‘Jewish’, one may notice a miniature cluster far on the left-hand side. It has three components, or YouTube videos, that are, to say the least, hilarious. Let’s zoom in.


Figure 2.0


Figure 3.0

At first glance, they seem completely random: they are positioned furthest from the network and are unrelated to whether Jeremy Corbyn is an anti-Semite or not. So there must be something hidden in the underlying meaning of the videos which makes them somehow related. I will refer to the videos in this cluster as ‘random’; however, in the following lines, the reader will be persuaded that there is no randomness whatsoever.

The three videos (Figure 3.0) vary vividly in content: from a teenage girl who bought Justin Bieber’s old iPhone filled with R-rated personal material; through a woman who got pregnant by her boyfriend’s grandpa; all the way to the story of a daughter who tried to surprise her mother in jail, only to end up in prison herself, unable to recognise her own mother, who had gone through plastic surgery to become a secret spy (???).

It is easy to spot the production similarities between the three ‘random’ videos; nevertheless, they would usually not appear in the same context, as they have different topics and keywords and are produced by different channels. All the videos are animated and have a cartoon protagonist who guides viewers through a supposedly fascinating life story, and it all seems made up. The creators used visual effects that play on human perception: animation, fast-moving transitions, exciting background music.


The ‘random’ videos and some commentary. Snapshots: YouTube. Edit is done by the author.

Caricature is the artist’s way of presenting a personal opinion on a radical case, which is why caricatures often include political figures and international affairs. Further, humour renders the brutality of life easier to handle. Animation has become a tool for distribution and reproduction and is associated with conditions of conflict, both national and international. Since the foregoing videos are associated with extremes, the YouTube algorithm suggests what it finds extreme: apparently, ‘Jeremy Corbyn’ and ‘anti-Semitism’.

After observing the visual side of the content, we moved on to a linguistic and semantic investigation. Words such as ‘scandal’, ‘very important people’, ‘controversial situations’, ‘jail’ and ‘accusations’ might be the reason those videos appear in the network built around the key words ‘Jeremy Corbyn’ and ‘anti-Semitism’.
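As a toy sketch of that idea: if each video is reduced to a bag of keywords, a simple overlap score can tie together clips whose subjects have nothing in common. The tag sets below are invented placeholders, not scraped data.

```python
# Invented tag sets: one for a political video from the main network,
# one for an animated video from the "random" outlier cluster.
political_tags = {"scandal", "accusations", "election", "labour", "jail"}
animated_tags = {"scandal", "accusations", "jail", "secret", "iphone"}

# Shared tags and a Jaccard score: enough apparent kinship for a
# recommender to link an election story to a cartoon confession.
shared = political_tags & animated_tags
jaccard = len(shared) / len(political_tags | animated_tags)
print(sorted(shared), f"similarity: {jaccard:.2f}")
```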

Interestingly, all three comment sections in the ‘random’ cluster are filled with jokes and a general opinion that the videos are fake. Very few viewers believe in the validity of the stories. If we browse comments on the nodes with political videos, we can find similar language. This suggests that the system not only scans language but picks up on opinion and irony, linking common themes together.

The reader now understands why my interest focuses on this particular cluster: it is a metaphorical representation of the whole network. Ultimately, the research suggested that Jeremy Corbyn is not perceived as an anti-Semite by the online public (or by the algorithm).


What is the algorithm suggesting?

To get a better grasp of the common features of the network, we observe the nodes that are closest to the ‘random’ cluster (Figure 2.0). Following this logic, can we say that the algorithm suggests all those political events are either a scam or a mockery? As the three videos are linked in a network with other, definitely not-so-humorous videos, they must share keywords, topics, creators or audiences. The algorithm appears to find a similarity between the absurdity of the animated YouTube videos and the nodes closest to the cluster. Could this be the algorithm manifesting its opinion?

Of course, these are all speculations, and factors such as viewership and watch time are not to be neglected. As both viewers and producers, we should also remember that content may be interpreted differently by diverse social groups.

YouTube and the Echo Chamber of Secrets

There has been much talk about how big an impact social media networks have had on elections in recent history. Most of the focus has been on Facebook and Twitter and the role they played, especially when it comes to ‘filter bubbles’ and ‘echo chambers’ on those websites. However, many of those discussions forget one big player in online life: YouTube.

The Google Home Page - Photo by: Caio Resende
The YouTube app on a smartphone – Photo taken from freestocks.org

Social media networks have become a huge part of how people find their news daily. For instance, research by the Pew Research Centre shows that around 65% of adult citizens in the USA get some of their news via Facebook, which many people are not happy about. Beyond this discontent, people like Eli Pariser are actively warning of the effects this development might have on politics and society in general. In one of his TED Talks, he says that people getting their news from social media will become a big problem because, at this moment in time, the Internet shows us what it thinks we want to see and not what we need to see. It is thus creating a personal bubble for each person separately, one which, in his opinion, will be hard for people to escape.

“I’m already nostalgic for the days that social media was just a fun diversion.”

Andrew Wallenstein, Co-Editor-In-Chief of Variety Magazine


An example of the rise of personal bubbles online can be found in the growing number of news organizations currently working on their own personalized news services, among them the New York Times and the Washington Post. However, there is already one platform that can do something similar: YouTube. The platform achieves this by using the data it has gathered on its users, while also employing a great team of data analysts who are able to make the most of that data. One instance of this is the platform’s recommended videos feature, which makes use of that data.

A variety of people believe this data usage might also create problems. Kenneth Boyd, for instance, argues that there is a risk that people could end up watching the same content repeatedly on YouTube and thus live in their own echo chamber on the platform. He adds that this is not a problem when it comes to cute cat videos, but that it might be an immense problem when it comes to the spreading of misinformation and fake news on the platform.

To investigate the claims made by Boyd and Pariser concerning the risk of people ending up in their own echo chambers on platforms like YouTube, this article looks at the networks videos create through their recommended videos, which can be explored with the help of tools from the Digital Methods Initiative. To do that, three media sources from the USA with different political stances, BuzzFeed News, NBC News and Fox News, were used, all of which released videos around the same topic: the impeachment hearings in 2019.

The network graph of the recommended videos for the Buzzfeed News video “Let’s Hear It For The Whistleblower – Impeachment Today Podcast – https://www.youtube.com/watch?v=mYBUtaPpqFc”

First of all, the above graph shows the various categories of recommended videos YouTube gives its users when they look up a video on the impeachment hearings by BuzzFeed News. The graph shows that most of the recommended videos deal with the topic of news and politics. However, when one combines this network with the networks of FOX News and NBC News, as seen in the graph below, one can see that the BuzzFeed News videos have no connection to the other outlets. Looking at the videos of the other two media outlets, by contrast, one can see that there is an overlap between them: four videos, three on the side of FOX News and one on the side of NBC News, played prominent roles in the network of the other media outlet.

The combined graph of the recommended videos networks for the FOX News video “Tucker’s big takeaways from the Trump impeachment saga – https://www.youtube.com/watch?v=YSlX9m1iZ6M” and the NBC News video “Highlights: Fiona Hill And David Holmes’ Impeachment Hearing Testimony | NBC News – https://www.youtube.com/watch?v=Kvx2cZefnUg”

As a consequence, both graphs reveal interesting aspects of how YouTube makes use of its recommended videos feature. Firstly, they show that when people look for news, they tend to get news, as seen in the case of BuzzFeed News. However, they also show that there is a strong chance that people who only watch BuzzFeed News will stay in their own bubble, because the YouTube algorithm does not recommend them videos from outlets like NBC News or FOX News. Moreover, it is striking that this might also be the case for FOX News viewers: even though the graph shows ties between FOX News and NBC News, the videos that tie them together are all videos from FOX News itself and from no other sources.

“… good idea to consider what is not being shown to you.”

Kenneth Boyd

Because of the bubbles seen with BuzzFeed News and FOX News, Kenneth Boyd warns about the risks of relying too heavily on platforms like YouTube for gathering news. In his opinion, it is important to hear a political argument from all sides of the aisle, and it is clear that YouTube’s recommended videos feature does not offer that.

Nevertheless, it has to be said that, in the end, people cannot simply blame platforms like YouTube for not showing them a wider range of videos and for creating personalized bubbles and echo chambers. Rather, it is the job of each citizen to seek out as many diverse sources as possible on political issues, and not to rely on YouTube or any other platform to do it for them.