YouTube and the Echo Chamber of Secrets

There has been plenty of talk about how much impact social media networks have had on recent elections. Most of the focus has been on Facebook and Twitter and the role they played, especially when it comes to ‘filter bubbles’ and ‘echo chambers’ on those websites. However, many of those discussions overlooked one big player in online life: YouTube.

The Google Home Page - Photo by: Caio Resende
The YouTube App on a smartphone- Photo taken from freestocks.org

Social media networks have become a major part of how people find their news every day. For instance, research by the Pew Research Center shows that around 65% of adults in the USA get some of their news via Facebook, which many people are not happy about. Beyond the discomfort of feeds filling up with news stories, people like Eli Pariser are actively warning of the effects this development might have on politics and society in general. In one of his TED Talks, he argues that getting news from social media will become a big problem because the Internet currently shows us what it thinks we want to see rather than what we need to see. It thereby creates a personal bubble for each person, which in his view will be hard to escape.

“I’m already nostalgic for the days that social media was just a fun diversion.”

Andrew Wallenstein, Co-Editor-In-Chief of Variety Magazine

 

An example of the rise of personal bubbles online can be found in the growing number of news organizations building their own personalized news services. The New York Times and the Washington Post, for instance, are both currently working on their own versions of personalized news for their customers. However, one platform can already do something similar: YouTube. It manages this by using the data it has gathered on its users and by employing a strong data analytics team that can make the most of that data. One instance of this is the platform's recommended videos feature, which makes use of that data.

A variety of people believe that this data usage might also create problems, however. Kenneth Boyd, for instance, argues that it risks people watching the same kind of content repeatedly on YouTube and thus living in their own echo chamber on the platform. He adds that this is not a problem when it comes to cute cat videos, but that it might be an immense problem when it comes to the spread of misinformation and fake news on the platform.

To investigate the claims made by Boyd and Pariser concerning the risk of people ending up in their own echo chamber on platforms like YouTube, this article looks at the networks that videos create through their recommended videos, which can be mapped with the help of tools from the Digital Methods Institute. To do that, three US media sources with different political stances, BuzzFeed News, NBC News, and Fox News, were used, all of which released videos around the same topic: the impeachment hearings in 2019.
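For readers who want a sense of how such a graph can be assembled, the sketch below builds a directed network from a list of (video, recommended video) pairs and exports it for Gephi. The file name and column layout are assumptions; the pairs themselves would come from a crawler such as the YouTube Data Tools, not from this script.

```python
# A minimal sketch of how a recommendation network can be assembled, assuming
# "edges.csv" holds (source_video, recommended_video) pairs exported from a
# crawling tool (the file name and two-column layout are hypothetical).
import csv
import networkx as nx

graph = nx.DiGraph()

with open("edges.csv", newline="", encoding="utf-8") as f:
    for source, recommended in csv.reader(f):
        # Each row becomes a directed edge: "watching A recommends B".
        graph.add_edge(source, recommended)

# Save in GEXF format so the network can be explored further in Gephi.
nx.write_gexf(graph, "recommendation_network.gexf")
print(graph.number_of_nodes(), "videos,", graph.number_of_edges(), "recommendations")
```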

The network graph of the recommended videos for the BuzzFeed News video “Let’s Hear It For The Whistleblower – Impeachment Today Podcast – https://www.youtube.com/watch?v=mYBUtaPpqFc”

First of all, the above graph shows the various categories of recommended videos YouTube gives its users when they look up a video on the impeachment hearings from BuzzFeed News. The graph shows that most of the recommended videos deal with news and politics. However, when one combines this network with the networks for FOX News and NBC News, as seen in the graph below, one can see that the BuzzFeed News videos have no connection to the other outlets. Looking at the videos of the other two media outlets, by contrast, there is an overlap between them: four videos, three on the side of FOX News and one on the side of NBC News, played prominent roles in the network of the other outlet.

The combined graph of the recommended videos networks for the FOX News video “Tucker’s big takeaways from the Trump impeachment saga – https://www.youtube.com/watch?v=YSlX9m1iZ6M” and the NBC News video “Highlights: Fiona Hill And David Holmes’ Impeachment Hearing Testimony | NBC News – https://www.youtube.com/watch?v=Kvx2cZefnUg”

Both graphs thus show interesting aspects of how YouTube uses its recommended video feature. First, when people look for news they tend to get recommended more news, as seen in the case of BuzzFeed News. However, the graphs also show a strong chance that people who only watch BuzzFeed News will stay in their own bubble, because the YouTube algorithm does not recommend them videos from outlets like NBC News or FOX News. It is also striking that the same might be true for FOX News viewers: even though the graph shows ties between FOX News and NBC News, the videos that tie them together are all FOX News videos and come from no other source.

“… good idea to consider what is not being shown to you.”

Kenneth Boyd

Because of the bubbles seen with BuzzFeed News and FOX News, Kenneth Boyd warns about the risks of relying too heavily on platforms like YouTube for gathering news. In his opinion, it is important to hear a political argument from all sides of the aisle, and it is clear from its recommended video feature that YouTube does not offer that.

Nevertheless, it has to be said that, in the end, people cannot simply blame platforms like YouTube for not showing a wider range of videos and for creating personalized bubbles and echo chambers. It is rather the job of each citizen to seek out as many diverse sources as possible on political issues, and not to rely on YouTube or other platforms to do that for them.

WHERE ARE YOUTUBE’S RECOMMENDATIONS TAKING US?

Corbyn’s leadership of the Labour party since 2015 has turned it into a “welcoming refuge for anti-semitism”. Since this statement was made by the Jewish Labour Movement, the party’s responses to such accusations have been characterised by denial.

Corbyn has been closely associated with accusations of antisemitic behaviour. With more than 5,000 stories about Corbyn, anti-semitism and Labour since 2015 documented in the book Bad News for Labour: Antisemitism, the Party and Public Belief, Jewish concerns have grown. A recent poll by The Jewish Chronicle found that 47% of British Jews would consider emigrating should he win, which would represent the largest Jewish exodus from a Western country since the 1930s.

Overall, Corbyn’s media coverage has been hostile, and the influence of social media platforms over newsrooms has helped associate him with antisemitism. The openness of these platforms has empowered the public to shape information content as much as it shapes our perception of things. Increasingly, websites are powered by algorithms that Gillespie describes as influencing our choices, ideas and opportunities.

Algorithms are used to build recommendation systems, whose data collection and filtering promote certain pieces of content over others. Although they are promoted as optimisers of the user’s experience, the founder of AlgoTransparency, Guillaume Chaslot, believes that the motivations behind these recommendation systems are flawed and often unrelated to what the viewer wants.

The YouTube App on a smartphone – Photo by: freestocks.org

YouTube monetises the relationships between users and advertisers, creating value out of watch time. After working for several months on YouTube’s recommendation system, Chaslot concluded that “the problem is that the AI isn’t built to help you get what you want, it’s built to get you addicted to YouTube”. The platform’s interest comes first, and as YouTube becomes more central in people’s news consumption, such use of algorithms may threaten our ideas and perceptions.

This technology therefore calls for questioning the system’s reliability and its consequences for the general public’s best interest.

To understand YouTube’s recommendation system, we explored the video networks created by repurposing data from YouTube’s ‘related videos’ feature, using the controversial query ‘Jeremy Corbyn’ and ‘anti-semitism’. We established a network of the videos and their associations through that feature. Our results showed that, of around 1,800 different videos recommended, less than 15% were directly related to our query.
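A rough sketch of how the ‘directly related’ share can be estimated is shown below: it simply checks whether each recommended video’s title mentions the query terms. The title list is toy data standing in for the roughly 1,800 collected titles, and keyword matching is only a crude proxy for relatedness.

```python
# A rough sketch of the relatedness check; "titles" stands in for the ~1,800
# recommended-video titles collected separately (the values here are toy data).
query_terms = ["corbyn", "anti-semitism", "antisemitism"]

titles = [
    "Jeremy Corbyn questioned over antisemitism row",
    "Brexit: Nigel Farage on the Labour position",
    "Theresa May's final PMQs",
]

related = [t for t in titles if any(term in t.lower() for term in query_terms)]
share = len(related) / len(titles)
print(f"{share:.0%} of recommended videos mention the query terms")
```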

Gephi Graph cluster focused on Brexit and conservative position – Photo by: Emma Neveux

With more than 75% of recommended videos unrelated to our query, we captured one pattern that seemed to occupy a significant share of the recommendation system’s interest: 24% of the recommended videos revolved around the Labour party and Corbyn, but more specifically around the heated Brexit issue. Stepping away from the relationship between Corbyn and antisemitism, YouTube promotes a different political debate, in its own interest, to capture people’s attention.

These recommendations displayed prominent right-leaning names such as Nigel Farage and Theresa May, always in the context of discrediting Corbyn’s political acts and party through his position on the Brexit issue. The promoted videos mainly came from reliable sources such as Guardian News or BBC Newsnight, adding credibility to the videos and shaping users’ perceptions. The videos downgrade Corbyn’s integrity and reliability through titles such as “Theresa May tells Corbyn to quit as Labour leader in final exchange” or through discrediting content.

Corbyn’s unclear stance on Brexit provoked criticism and misunderstanding. After being critical of the European Union, he supported remaining in the 2016 referendum, and by 2019 he endorsed a referendum on any Brexit withdrawal agreement while remaining personally neutral. This grey area in the public’s eye offered his rivals an opportunity to focus their energy on discrediting him through accusations of indecision. YouTube’s recommendation system embraces this by replacing our initial controversial issue with another one to secure viewers’ watch time. Chaslot told The Guardian that “YouTube is something that looks like reality, but it is distorted to make you spend more time online” and that it is not “optimising for what is balanced or healthy for democracy”.

We are only one click away from being distracted for hours by a recommendation system that puts forward unrelated topics linked to Corbyn’s unpopularity.

Almost 10% of the recommended videos brought up issues revolving around the Israeli-Palestinian conflict or islamophobia, which have no direct relationship with Corbyn and the antisemitism allegations against him. Ali declared in an article for Digital Information World that YouTube is “becoming a network for extremist content”. The mix of issues found in YouTube’s recommendation system, and the conflations it creates, make us worry, like Chaslot, that the recommendations will drive people further to extremes because it is in YouTube’s interest to keep us watching for as long as possible.

Screenshots of YouTube comments – Photo by: Emma Neveux

The recommended videos all put forward controversial names like Trump and Boris Johnson in relation to race and religion, as well as controversial topics like Palestine, anti-semitism and islamophobia. The most closely related video mentioning Corbyn was one from Sky News, “We should be PROUD of Corbyn’s record on Palestine”, in which Michael Walker heavily emphasised Corbyn’s position towards Palestine, support which was then used by others to discredit him. The comments that sprang up under such videos tended to be extreme and to project radical positions.

The recommendation system is flawed: it focuses on watch time and endangers our perception of things. Borderline content is more engaging, and taken out of context it can lead to the radicalisation of ideas.

The lack of transparency of algorithms and the weight of recommendation systems call for public scrutiny, so that people get a better overview of what is actually being recommended on YouTube. The platform ‘innocently’ facilitates our navigation and distracts us from the real issues while promoting radical positions.

Chancellor Merkel called the issue a “challenge not just for political parties but for society as a whole”, flagging the dangers of these systems for a free and democratic society.

Falling down YouTube’s Rabbit Hole: How Does Its Recommendation Algorithm Work?

Have you ever started watching a video on YouTube, only to realise hours later that you have fallen into a crazy YouTube rabbit hole? We’ve all been there. But why is it that YouTube can drift us so far away from the video we started at, and who decides which videos are recommended?

Image: by Pixabay

That would be YouTube’s recommendation algorithm. The software is responsible for which videos appear in the “Up Next” column and on users’ homepages. Over 70% of all time spent on YouTube is spent watching videos recommended by the platform’s algorithm. With that much power over what we watch, we should find out how it works.

There is no public information on the exact specifics of the algorithm. However, according to Google, whose artificial intelligence researchers built the current version for YouTube, it works primarily in two steps:

1) Personalisation: the algorithm selects videos for a particular user based on videos watched by other users who watch similar videos and are demographically similar.

2) Ranking: the selected videos are then put in order of how likely the user is to watch them, using data such as how many videos they have already watched on that channel and their search queries.
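To make the two steps concrete, here is a toy model (not YouTube’s actual code) of personalisation followed by ranking: candidates are pooled from users with overlapping watch histories, then ordered by a simple score. The user names and video ids are invented.

```python
# An illustrative sketch of the two steps described above; it is not YouTube's
# code, just a toy model of "personalisation" followed by "ranking".
from collections import Counter

# Toy watch histories: user -> set of watched video ids.
histories = {
    "nick":  {"labour_rally", "election_debate"},
    "user2": {"labour_rally", "election_debate", "brexit_explained"},
    "user3": {"election_debate", "maroon5_sugar"},
}

def candidates(user, histories):
    """Step 1 (personalisation): pool videos watched by users with overlapping histories."""
    seen = histories[user]
    pool = Counter()
    for other, watched in histories.items():
        if other != user and seen & watched:
            pool.update(watched - seen)
    return pool

def rank(pool):
    """Step 2 (ranking): order candidates by a simple popularity-style score."""
    return [video for video, score in pool.most_common()]

print(rank(candidates("nick", histories)))
# ['brexit_explained', 'maroon5_sugar'] -- even loosely similar users can pull
# in unrelated videos, which is how a music video can reach a politics viewer.
```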

If the algorithm worked perfectly, every video YouTube recommended would be incredibly interesting to us. That was not the case for Nick Brown. Nick is a teenager who spends a lot of time on YouTube, primarily watching videos about UK politics. As a strong Labour supporter, Nick spends most of his time watching videos about the party’s news. However, one Tuesday evening in December 2019, just two days before the general election, the teen was an hour deep into a YouTube binge session, gathering as much information as he could about the upcoming vote, when he was led down a strange path. “On my up next was a Maroon 5 music video,” Nick stated. “I’ve never listened to Maroon 5 in my life.”

The music video is very popular on YouTube, with over 2.5 billion views. Despite the video being an obvious hit, Nick was not impressed. “I don’t get it,” he muttered angrily, “what does the Labour party have to do with Maroon 5?”

This is a question I have been asking myself since speaking to the young teen. Why would YouTube recommend such an unrelated video? With 500 hours of video being uploaded every minute, I understand that YouTube has a lot of footage to deal with. However, with YouTube being one of the largest and most powerful online platforms, it seems fitting to investigate what is really going on with YouTube’s recommendation algorithm.

In order to uncover the truth behind how the recommendation algorithm works, I embarked on an investigation.

Fitting with the recent UK general elections in December 2019, I decided to look into what YouTube videos would be recommended when I searched ‘Jeremy Corbyn, Anti-Semitism’.

Image: by Pixabay

On December 12th 2019, the UK general election took place, resulting in the Conservatives winning a landslide majority. The reasons for the scale of the 2019 result have been the subject of speculation ever since, with talk of Brexit and the NHS. However, one factor that may have contributed to Jeremy Corbyn’s heavy defeat is the number of allegations of anti-Semitic behaviour that have surrounded the Labour party recently, which led nine members of the party to resign in protest.

I used a digital tool that scraped all the videos that YouTube’s algorithm would recommend from my search. 1,803 videos were recommended for the search ‘Jeremy Corbyn, Anti-Semitism’, and I visualised the network of videos in the graph below.

 

 

Image: by the author

The different colours symbolise clusters of videos that are similar to each other. As I zoomed in on the blue cluster, I was equally shocked and amused by the video titles that appeared. Extreme titles such as “I broke my legs to satisfy my mom but it was not enough” or “I Like Older Men So I Got Pregnant By A Grandpa” were present. Despite the absurdity of the video titles, I was intrigued to click on them for that very reason, and I was not alone: the videos had millions of views, along with thousands of outraged comments and dislikes.
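Gephi’s colouring typically comes from a modularity-based community detection step; a minimal sketch of the same idea in Python is below, assuming the recommendation network has been exported to a GEXF file (the filename is hypothetical).

```python
# A hedged sketch of how clusters like these can be detected, assuming the
# recommendation network is available as a GEXF export (hypothetical filename).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

graph = nx.read_gexf("recommendation_network.gexf").to_undirected()

# Group videos into communities that are densely linked by recommendations.
communities = greedy_modularity_communities(graph)
for i, community in enumerate(communities[:5]):
    print(f"Cluster {i}: {len(community)} videos, e.g. {sorted(community)[:3]}")
```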

Image: by the author

Despite the complete irrelevance to Jeremy Corbyn or Anti-Semitism, YouTube knows that shocking video titles are more likely to get clicked on.

This section of the graph suggests that the algorithm surfaces extreme video titles rather than finding genuinely relevant and personalised videos for the user. Despite the irrelevance of these videos, YouTube is obviously still doing something right, as 70% of all time spent on YouTube is occupied watching recommended videos. Guillaume Chaslot, founder of AlgoTransparency, has stated that the recommendation algorithm is often unrelated to what the individual wants, and focuses on what is likely to get clicked on.

The investigation taught me that the algorithm largely assumes that popular, frequently clicked videos will satisfy everyone. The Pew Research Center found in a study that the 50 videos recommended most often by the algorithm had each been viewed over 456 million times.

My brief investigation suggests that the algorithm isn’t really as clever as one might assume, and mainly focuses on what will get the masses to click rather than what will please the individual.

Did YouTube algorithms convey to voters that there was anti-semitism in the Labour party?

On the 12th of December, the Conservatives won the general election with an 11.2% lead over the Labour party. There has been speculation about what happened leading up to the election. An overarching theory put forward has been that Jeremy Corbyn did not represent the views of Labour voters and that the majority opinion was suppressed by the loud Corbynite minority who advocated for Corbyn’s vision of a socialist Britain.

One way in which Corbyn’s failed leadership is said to have manifested itself is in his handling of allegations of anti-Semitism within the party. While accusations of islamophobia did not prove damning for the Conservatives, Corbyn appears to have been much slower to respond to, and express his disdain for, the behaviour than his Conservative counterparts. It was only when the leader was put under pressure that he released a public statement.

Many party members and representatives felt that their leader did not respond properly to what is a serious issue. The sentiment became so strong that in February 2019, nine MPs resigned from the party in protest. This feeling was not confined to party representatives but extended to voters. Labour’s former whip, Graham Jones, said that while on the campaign trail he encountered voters stating that the antisemitic sentiment in the party was one of the reasons they would not be voting Labour in the 2019 general election. For the voters to whom this mattered, it would undoubtedly have made them feel isolated within their party.


Photo by Elliott Stallion on Unsplash

Social media was a factor of particular weight in this election campaign. Gephi – a network analysis tool that can show which videos are recommended and the links between them – can help us delve further into this point. Using YouTube as a platform, due to its great popularity, we can attempt to look into the sort of material voters are exposed to by way of personalised recommendations. YouTube uses the user’s history in its algorithm in order to give personalised recommendations. The aim of the investigation is to see how these videos are linked and the potential impact this has on voters. To specify the search, I used the keywords ‘Jeremy Corbyn’ and ‘Anti-Semitism’.

 

What does the graph show?  

Within the network, the focus was on the cluster shown in the graph below, which contained the highest volume of recommended videos related to the theme of anti-Semitism. Alongside Jeremy Corbyn, the names Theresa May and Nigel Farage appeared several times, which can be attributed to the fact that they were prominent opponents of Corbyn. The themes that showed up were anti-Semitism, UKIP, and the Israeli-Palestinian conflict.


Image: by the author

From this evidence we can infer – although it does not make a conclusive case that the videos impacted voters – that the recommendations were centred around the topics above. Further to this, YouTube’s recommendations go beyond the title and are also based on the content of the videos. For example, in the BBC Newsnight interview there were discussions of the Labour party’s stance on the Israeli-Palestinian conflict. This video was one of the main nodes within the network, yet the words ‘Israel’ or ‘Palestine’ did not appear in its title. From here, the algorithm then recommends a video with ‘Palestine’ in the title, suggesting that the algorithm considers the content as well as the title.

The central node in this cluster is The Nigel Farage Show, in which he discusses his stance on Jeremy Corbyn. When the tabloids began running the anti-Semitism story, it gave Farage ample opportunity to attack his opponent. Several videos within the cluster are opponents publicly speaking out against the candidate. From a voter’s perspective, this exacerbates the story: if there was any doubt about the candidate’s stance on the issue, the recommendation algorithm deepens that doubt by presenting strong sentiment from other leaders. Thus, the negative perception fostered by the initial video, which was likely directly about the issue, is further developed by recommendations of videos in which Corbyn’s opponents berate him for it.
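How such ‘main nodes’ can be identified is sketched below: degree centrality scores each video by how connected it is within the recommendation network. The GEXF filename is a placeholder for however the network was exported.

```python
# A small sketch of how "main nodes" can be identified, assuming the
# recommendation network is available as a GEXF file (hypothetical filename).
import networkx as nx

graph = nx.read_gexf("corbyn_antisemitism_network.gexf")

# Degree centrality: the share of other videos each video is linked to by
# recommendations; highly connected videos sit at the centre of a cluster.
centrality = nx.degree_centrality(graph)
for video, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{score:.3f}  {video}")
```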

 

What does the network show?  

This network demonstrates the power that social media algorithms can have over voters. Digital campaigning has become an integral part of political campaigning globally, and was heavily utilised during this general election. While analysing the recommendation algorithm, one must consider to what extent these platforms are working against the politicians. In the case of the cluster examined, the topics recommended evidently served to hurt Corbyn’s public perception. Any voter with doubts about the magnitude of the anti-semitism allegations would subsequently have been pushed further videos discussing it, likely with opposition politicians giving their opinions.

The existence or extent of anti-semitism within the party is not the question here. The point of this investigation is to analyse to what extent YouTube’s algorithm can influence voters’ decisions. Platforms like YouTube are undoubtedly a large content provider for voters in the modern age. From the information gathered, it appears that the recommendations are indeed based on similar themes in either the title or the content; thus, where a video with an opinion is viewed or searched for, this might produce recommendations that serve to reinforce that opinion.

However, it undoubtedly also works the other way: positive Jeremy Corbyn videos would be recommended if one searched for those. Ultimately, in tense political climates, negative sentiment is repeated and highlighted, and in this case YouTube’s algorithms facilitated the promotion of content that pushed the discussion of anti-semitism within the Labour Party onto voters.

Hey Google, do you think I’m beautiful?

Screenshot of Google search engine: By Author

 

Beauty is in the eye of the beholder. However, it seems Google’s image search engine has its own ideas of what is “beautiful”. “This is something we should care about,” urges Safiya Umoja Noble, a professor of Information Studies and African American Studies, regarding the racial biases evident in some of our most trusted search engines. In a TED talk at the University of Illinois, she explains: “Search engines are an amazing tool for us, but what we don’t see is the ethical biases that are inherently built into them”. Umoja Noble, who is African American, recounts an anecdote based on her friend’s experience with Google Image search: “When she did a search for ‘beauty’: this is what came up”. The screen fills with images of young, white women. Looking concerned, she explains that this reinforces the social biases of society: “they get replicated in our search engines”. How are we, as a society, to combat racial inequality if the tools that we depend on the most inherently reinforce these barriers? Is this an algorithmic issue, or simply a reflection of the views of Google as a company?

Screenshot of video recording of Safiya Umoja Noble’s TED Talk available: https://www.youtube.com/watch?v=UXuJ8yQf6dI&t=757s : By Author

Unsurprisingly, this is not the first time Google’s search algorithms have come under fire. In a piece by Mind Matters, a former senior Google search engineer, Gregory Coppola, contacted a watchdog group (Project Veritas) to warn users about political biases in Google’s search engine results. He states: “No private company should have either the right or power to manipulate large populations without their knowledge”. I reached out to Coppola and asked him what his approach would be to identifying biases in a search engine such as Google. In an exchange on the issue, his response was to the point – all of it is biased.

Screenshot of personal correspondence with Gregory Coppola : By Author

In his blog, Coppola proposes “Coppola’s Law”, which states that “the social bias in a software product is the social bias of the organisation that produced it”. And Coppola isn’t the only one with concerns. Gabriel Weinberg, CEO of rival DuckDuckGo and a critic of Google’s search engine, was quoted in The Observer: “This filtering and censoring of search engines and news results in putting users in a bubble of information that mirrors and exacerbates ideological divides”.

Google’s search engine has a lot of critics, but if it is so bad, why is it so popular? According to internetlivestatistics.com, Google processes 1.2 trillion searches per year worldwide – that’s 1.2 trillion opportunities for reinforcing (or addressing) social divides. Surely something we are so dependent on cannot be as big a social evil as it seems? So says Google’s CEO Sundar Pichai. According to CNBC, in 2018 he appeared in a hearing before the US Congress to explain how Google’s algorithm worked, including the results it favoured, and he was vigorous in his defence of the search tool. “Getting access to information is an important human right,” explained Pichai. But core principles of human rights are equality, non-discrimination and respect for the worth of every human, irrespective of race and culture. Search results that suggest beauty is confined to Caucasians simply do not bear that out.

We decided to see the racial biases in Google’s image search engine for ourselves and carried out a small practical experiment. We typed the search term “beauty ideal” in 6 different languages (English, Irish, Arabic, Indonesian, Japanese and Korean) into the Google Image search tool that most digital natives use for just about anything. The aim was to identify the different representations of racial groups that each translation of the term produced through the search engine’s ranking system, using the Google Image Scraper tool.
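The tallying step of such an experiment can be sketched as follows, assuming the first 20 results per language have been manually coded with the racial group each image appears to depict. The counts below are invented toy data, not the project’s actual figures.

```python
# A small sketch of the tallying step, assuming each of the first 20 results per
# language was manually coded with the racial group it depicts (toy data below,
# purely illustrative and not the experiment's real counts).
from collections import Counter

coded_results = {
    "English":    ["white"] * 16 + ["east_asian"] * 2 + ["black"] * 2,
    "Japanese":   ["east_asian"] * 9 + ["white"] * 11,
    "Indonesian": ["white"] * 12 + ["southeast_asian"] * 8,
}

for language, labels in coded_results.items():
    counts = Counter(labels)
    shares = {group: f"{n / len(labels):.0%}" for group, n in counts.items()}
    print(language, shares)
```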

 

These are some of the top search results from each language:

Screenshot of a sample of the search results used in the project : By Author

At first glance, the top results are heavily dominated by young, thin, white women, which is in many ways a backwards step for beauty standards. The graph below shows the representation of racial groups in the first 20 results of each query:

Graph showing representation of racial groups in the search results: By Author

According to Google, being white seems to be a “beauty ideal” across language and culture. What is more disturbing is that some races are not represented at all in some of the top search results. This does not reflect the diversity of society’s actual views, as “beauty ideals” are unique not just to individual opinion but to cultures and ethnicities. How challenging will it be to secure equality in society when one of our most used tools reinforces some of the very barriers that we are trying to break down?

Following Coppola’s approach, can we assume that Google, a worldwide multi-billion dollar tech giant, is racially biased against certain groups? Or would these results change if I weren’t a young, white woman myself, and are they simply what Google thinks I want to see based on the data it has on me? The mystery of how Google’s search algorithms work makes the causes of this misrepresentation of certain racial groups difficult to uncover. But as far as I can see, one thing is for sure – there is a lack of diversity in the image representations of the “beauty ideal”.

Does this make Google racist, or is it just a user-pleaser? What about representations of gender, sexual identity or age? There are so many areas for problems to occur in search results, especially in something as subjective (and arguably vague) as a “beauty ideal”. Search engines like Google cannot please everyone. Something they can work on, however, is keeping up with society’s standards of inclusion and equality.

Although we don’t need Google to tell us if we meet the standards of beauty, there is something to be said about the lack of representation in digital media’s depiction of beauty. Google is used by everyone, so everyone should feel represented. This may seem like a challenge, but Google is one of the greatest innovators – this one should be a breeze. Come on Google, aren’t we all beautiful?

 

 

Alice, Alice… I’m falling down YouTube’s algorithm hole

                                   Photo taken by researcher

On a Wednesday afternoon, 20-year-old Psychology student Hannah was sitting in University College’s library when she typed the words ‘suicide prevention’ into YouTube’s search engine. The majority of the suggested videos were an array of music videos, and not much informative content on suicide was being recommended. Moreover, as Hannah scrolled down, to her surprise the platform suggested that she watch a video of Budd Dwyer committing suicide live on camera, a video which has over 500,000 views. Hannah was shocked. How could YouTube’s algorithm recommend such a revolting, extreme video when that was not part of her search query? She thought to herself: what would the implications be if a child had come across such a horrifying video?

Introduced in 2005, YouTube is the second most used search engine after Google. The platform hosts user-generated videos, making it easy for individuals to create, discover and share video content. Over 2 billion users access the platform every month, with Americans as the largest audience (https://www.businessofapps.com/data/youtube-statistics/). According to Pew Research, the platform provides content for children, serves as a pastime for 28% of users and provides 19% of viewers with updates on current affairs (https://www.pewresearch.org/internet/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/). It has therefore become increasingly clear that YouTube is replacing traditional sources of media such as television, with all age demographics actively using the platform on a daily basis.

So how does it work?

YouTube’s recommendation algorithm uses machine learning to suggest videos. The platform is built on deep-learning neural networks that process users’ personal data. The algorithm tracks each user’s ‘watch time’, that is, how long they have spent watching a video. By analysing such metrics, it suggests similar videos to maximise the viewer’s time on the platform. Personalizing content, it recommends videos that suit and target users’ individual tastes. The suggested videos then appear in the ‘up next’ list or play automatically. The platform tends to push popular videos with ‘clickbait’ titles, headlines meant to entice users into clicking on a video link, which in turn makes the company more profit. However, there have been concerns about the platform’s capacity to steer individuals towards extremist content.
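As a purely illustrative toy model of this logic (not YouTube’s actual formula), a watch-time-driven ranker might score each candidate by predicted click probability multiplied by expected minutes watched, so that long, clickable entertainment videos outrank shorter informational ones. The titles and numbers below are invented.

```python
# An illustrative toy model of watch-time-driven ranking (not YouTube's actual
# formula): score = predicted click probability * expected minutes watched.
candidates = [
    {"title": "Suicide prevention: where to find help", "p_click": 0.04, "exp_minutes": 6.0},
    {"title": "Top 40 music video compilation",          "p_click": 0.12, "exp_minutes": 25.0},
    {"title": "Shocking on-camera footage (clickbait)",  "p_click": 0.20, "exp_minutes": 9.0},
]

def expected_watch_time(video):
    return video["p_click"] * video["exp_minutes"]

# Longer, clickier videos rise to the top; informative content loses out.
ranked = sorted(candidates, key=expected_watch_time, reverse=True)
for video in ranked:
    print(f"{expected_watch_time(video):5.2f}  {video['title']}")
```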

According to former YouTube employee Guillaume Chaslot, “AI isn’t built to help you get what you want – it’s built to get you addicted to YouTube. Recommendations were designed to waste your time” (https://thenextweb.com/google/2019/06/14/youtube-recommendations-toxic-algorithm-google-ai/). Chaslot has underlined how the platform pushes clusters of entertainment videos as well as mildly extremist content, from conspiracy theories to political propaganda to fake news, because slightly controversial topics lead people to click more out of natural curiosity for the unknown.

YouTube’s algorithm is designed to maintain user interest and increase watch time for as long as possible. Google, which purchased YouTube in 2006 for $1.65 billion, has said that 400 hours of content are uploaded to YouTube every minute (https://www.nytimes.com/2017/11/04/business/media/youtube-kids-paw-patrol.html). By pushing content further towards extremes, the algorithm steers viewers to videos that increase engagement on the platform, with users sharing their opinions in the comment section, re-playing videos and re-posting.

Because of the low barriers to entry, over 70% of the videos the platform hosts are controversial, even though the majority of users engage less with such content (https://www.techspot.com/news/73178-youtube-recommended-videos-algorithm-keeps-surfacing-controversial-content.html). Falling down YouTube’s algorithmic rabbit hole can be easy, however, and children, teens and vulnerable individuals have become ever more exposed to extreme content that may not be appropriate for their age and can be triggering.

Should we be concerned?

YouTube has created an extension of the platform specifically for children, called ‘YouTube Kids’, which allows parents to add filters and select appropriate video content. However, even this version of the platform contains dark corners. Videos such as “PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized” have appeared on it and have given some children nightmares. Staci Burns, mum of 3-year-old Isaac, told The New York Times that “there are these horrible people out there that just get their kicks off of making stuff like this to torment children”, having found ways to trick the YouTube Kids algorithm.

Although YouTube has appeared to obscure information regarding its recommendation algorithm, it has been working towards filtering and moderating extremist content and creating a user-friendly platform for all.  

A YouTube Kids spokesperson has said that the company aims to prevent situations like Staci’s, in which inappropriate content is recommended to children. Moreover, the platform has been relatively transparent about removing over 8 million videos since 2017. The removed videos did not respect the community guidelines, as they included violent, hateful, unethical and graphic content (https://transparencyreport.google.com/youtube-policy/featured-policies/violent-extremism?hl=en_GB).

In early December 2019, two third-year students at King’s College London conducted independent research, following their interview with student Hannah, to understand what YouTube’s recommendation algorithm would suggest for the sensitive topic of suicide. Using the YouTube Data Tools and Gephi, the students were able to map the different recommendation clusters in response to the keyword ‘suicide’. They noticed that the most dominant cluster of videos revolved around entertainment, with films and music videos appearing to be more popular than suicide prevention documentaries. From this, the researchers inferred that YouTube tends to suggest entertainment videos to its viewers, as these are more likely to increase watch time on the platform, which in turn makes the company more profit.

Overall, YouTube remains one of the primary sources of video content. The platform has more than a billion users to date and has launched in 91 countries, making it one of the most visited websites worldwide. By 2025, the platform is expected to dominate entertainment viewing. We can expect the recommendation algorithm to continue evolving and, hopefully, to improve in order to create a safer environment for all.

Creating Deepfakes to Fight Deepfakes: Will Algorithmic Detection Be Enough?

Source: Casey Chin, WIRED

A video of President Donald Trump circulated on Twitter in 2018 stating, “As you know I had the balls to withdraw from the Paris climate agreement. And so should you.” Commenters expressed shock and confusion at the president’s rhetoric. This rudimentary deepfake video was doctored by the Flemish Socialist Party to urge Belgium to follow in America’s footsteps and withdraw from the climate agreement. While their intention was not to spread disinformation, the damage was done.

Deepfake videos show real people saying and doing things that they never actually said or did. This face-replacement process uses deep learning built on generative adversarial networks (GANs), in which two neural networks are trained against each other on data sets of photos and videos of an individual. These deep machine-learning algorithms analyze one face and transpose it onto another human face.
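For readers curious what “two neural networks trained against each other” looks like in code, here is a minimal GAN sketch in PyTorch on one-dimensional toy data. Real deepfake pipelines operate on face images and are vastly larger; this only shows the generator/discriminator loop.

```python
# A minimal GAN sketch, only to illustrate the two competing networks described
# above (generator vs. discriminator); the "data" here are just 1-D samples.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" data: samples around 2.0
    fake = generator(torch.randn(64, 8))        # the generator's forgeries

    # Discriminator learns to tell real from fake.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, fake samples should drift towards the real data's mean.
print("fake sample mean:", generator(torch.randn(256, 8)).mean().item())
```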

Manipulated videos are most commonly seen in the pornography industry, which accounts for 96% of deepfakes on the internet according to research by Deeptrace. Publicly available software, including the popular FakeApp, has been used by amateurs to create deepfake porn with celebrity faces placed over porn actresses’ bodies.

The weaponization of deepfakes extends beyond the porn industry into politics, with world leaders targeted by deceptive videos that have major implications for the fake news/post-truth epidemic plaguing modern politics. These videos can sabotage reputations and spread disinformation. The technology to produce deepfakes is also rapidly evolving as deep learning algorithms become more advanced and fall into the hands of anyone motivated to create content intended to spread falsehoods. If we cannot trust the content we see with our own eyes, democratic systems destabilize.

A “shallowfake” of Speaker of the House Nancy Pelosi received millions of views on Facebook in 2019, showing the leader slurring her words during a speech; it was made with simple video editing software that slowed down the original clip. Donald Trump’s personal attorney Rudy Giuliani tweeted the video, prompting widespread mockery of Pelosi, who was accused of being inebriated. This example shows how such videos are defamatory, placing words into someone else’s mouth, and how the issue of lack of consent grows alongside increasing media distrust.

Big tech companies are developing large data sets of deepfakes to train algorithms to detect them. It appears that their best solution for removing deepfakes is to first create thousands of them and release them onto the internet, like a virus in need of antibodies.

In September 2019 Facebook announced a partnership with Microsoft and researchers at several universities including the University of Oxford to form the Deepfake Detection Challenge (DFDC). Facebook dedicated $10 million to this challenge and commissioned a dataset using paid actors for the AI community to use in order to collectively find algorithms that can detect doctored videos. 

Facebook’s Deepfake Detection Challenge

“This is a fundamental threat to freedom. Manipulated media being put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance. I believe we urgently need new tools to detect and characterize this misinformation.”

Professor Philip H. S. Torr at the University of Oxford who is working with Facebook for the DFDC.

Google is also prepared to combat deepfakes trickling into its search engine and in 2019 announced that it had created a database with over three thousand deepfakes. According to Google’s AI blog, this dataset was made by recording videos of consenting actors and creating deepfakes with those faces transposed onto the bodies of actors from television and film. The dataset is now freely available for the research community to use, in the hope of finding algorithms that can spot deepfakes and distinguish them from their original forms.

The efforts of tech platforms to find algorithmic solutions to deepfakes are argued by some to be unambitious. Brooke Binkowski, a former managing editor of fact-checking at Facebook, stated in an interview with Politico that Facebook lacks incentive to regulate deepfakes because it is more concerned with user engagement: deepfake videos, and fake news in general, garner high attention, which earns the platform revenue.

A recent study found that fake news on Twitter spreads ten times faster than true news stories. According to the study, fake stories are retweeted 70% more often because sensationalized content targets our emotions and gathers higher interest. A deepfake of a politician saying something offensive can therefore spread virally on these sites and damage that candidate’s reputation if viewers are unaware that the video has been manipulated.

Researchers at the University of California, Berkeley have designed a deepfake detector capable of identifying AI deepfakes of politicians using footage that already exists, without having to create new deepfakes. The algorithm picks up the spatio-temporal glitches that often occur when AI software struggles to match the replaced face with the unique facial movements, called “soft biometrics”, of the other individual.

Soft biometrics include subtle features like Donald Trump’s pursed lips or raised eyebrows. The algorithm learns to catch these glitches and so far has 92% accuracy. However, this achievement may one day become useless because, according to the report, “deepfake technology is developing with a virus / anti-virus dynamic.”
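The detection idea can be sketched as an ordinary supervised classifier over per-video facial-movement features. In the sketch below the features are random stand-ins for real “soft biometric” measurements, so it only illustrates the pipeline, not the Berkeley team’s method or its 92% figure.

```python
# A hedged sketch of the detection idea: a classifier trained on per-video
# facial-movement features ("soft biometrics"). The features are synthetic
# stand-ins for real measurements, purely to show the pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 videos x 16 facial-movement features; the "fakes" drift slightly off-profile.
real = rng.normal(0.0, 1.0, size=(100, 16))
fake = rng.normal(0.4, 1.2, size=(100, 16))
X = np.vstack([real, fake])
y = np.array([0] * 100 + [1] * 100)   # 0 = authentic, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```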

Deepfake Cosmogram. Source: By Author

Will these efforts to use algorithms to detect and filter deepfakes from the internet be enough to combat disinformation before the sociopolitical damage is done? According to the recommendations of an NYU report on deepfakes and the 2020 US presidential election, tech platforms need not only to quickly develop a successful detector but also to outright ban content that spreads disinformation, in the same stern tone with which they remove hate speech. The report also suggests raising awareness of the issue as the second frontier in protecting the public.

In the future, the software used to produce deepfakes may become sophisticated enough to outsmart algorithmic detectors and be exploited. Mark Anderson of IEEE warns that “an overly trusted detection algorithm that can be tricked could be weaponized by those seeking to spread false information.” Deepfake detectors can therefore serve as a double-edged sword in the battle against disinformation.

The governmental battle against deepfakes has already begun in the US. In October 2019, California passed law AB 730, criminalizing video and audio that superimposes a political candidate’s image onto another body within sixty days of an election, in order to protect against election tampering. The law aims to protect the speech and image of politicians from the malice of deepfake videos.

However, regulating deepfakes is a slippery slope, as any governmental intervention in online content can be argued to violate free speech and be seen as an overreach into personal freedom. Despite this pushback, a recent survey of US legislation found that twelve bills have been placed before Congress, and two states apart from California have passed legislation. In Virginia, deepfake pornography is now considered a cybercrime, and in Texas, deepfakes that are defamatory to political candidates and interfere with elections have been criminalized.

Despite these collaborative efforts between tech platforms and government to curtail deepfakes, once content is uploaded to the internet it never truly goes away, and these videos will persist. However, the effort being placed into deepfake detection is a hopeful sign that global alarms have been raised.

5 Ways Journalists are Reporting on the Netflix Algorithm

The New York Times, Medium, WIRED, The Guardian and Marie Claire covered the Netflix algorithm for its readers. Photo: by the author.

Netflix and Chill? More like Netflix and a possible case of racial bias. Imagine one night, or every night in most of our cases, you are browsing Netflix for the latest Zac Efron movie. But when you come across the movie title, the artwork features a side character who only had two lines. Well if you’re an avid user, you may have noticed some coincidences where movies are marketed to you featuring a character of the same race, but minor acting creds.

Netflix’s recommendation algorithm has been making headlines for years concerning the uncanny racial bias, and journalists have been there every step of the way to uncover its true intentions when it comes to your browsing experience.

Here are 5 ways journalists have reported on the Netflix algorithm.

1. The OG Interview

Xavier Amatriain (left) and Carlos Gomez-Uribe (right). Photo from WIRED.

So, how did we get here? In 2013, when the recommendation algorithm originally came out, WIRED was one of the first outlets to cover it. Reporter Tom Vanderbilt interviewed the Netflix duo behind the algorithm: VP of personalisation algorithms Carlos Gomez-Uribe and engineering director Xavier Amatriain. Amatriain stressed that the algorithm is not arbitrary: “All of our analysts are TV and film buffs, and many have some experience working in the entertainment industry. They obviously have personal tastes, but their job as an analyst is to be objective, and we train them to work that way.” When asked point blank whether they track views, he acknowledged that they not only know exactly what you watch and search for, and the time and day you do it, but even your scrolling behaviour. However, in the 2013 interview there was no mention of expert personal data “buffs.” WIRED focused on gaining insight into the operations by going straight to the source.

2. A Personal Story from a User

April Joyner’s article. Photo from Marie Claire.

In 2016, Marie Claire was one of the first to report on the racial bias of the algorithm, with a personal essay titled “Blackflix.” Writer April Joyner shared examples of how she noticed the bias in her normal use of Netflix: “Maybe it was the Scandal binge, or the fact that I watched a couple of films by black female directors, but suddenly a good third of my new movie recommendations feature black actors in leading roles.” This included an entirely new category, “African American Movies,” popping up on her home screen as well. At first, she did not mind the algorithm because it was doing exactly what it was supposed to: emphasizing what she wanted to see. However, she had an issue with what viewers were not being shown. She was upset that if a viewer does not express interest in African American content, it is hidden entirely from their Netflix experience. If films with casts of color are only emphasized to her because she has shown an interest in this content, are they invisible to those who have not? Marie Claire poses questions like this about the bias through a user its readers can easily identify with, sharing their frustrations.

3. The Art Personalisation Experimentation

Netflix’s example featuring Good Will Hunting. Photo from Netflix Technology Blog.

In 2017, the Netflix Tech Blog on Medium responded with real examples of its algorithm in action. If you have been curled up in bed reminiscing about a bad breakup, and to fill that void you have been watching an absurd number of rom-coms, then artwork with Matt Damon and Minnie Driver will likely appear for Good Will Hunting. However, if you have been feeling a bit better than that and bingeing on comedies, then artwork of Robin Williams will grace your screen. Netflix focused on behind-the-scenes examples that give its users an inside look into how it attempts to highlight each individual’s preferences. Nonetheless, this sounds like a big task to take on, and it is. The blog discusses the challenges the team faces in choosing which artwork to show when there is not enough watch-history data. It even gives examples of A/B testing of the algorithm, and how the team identifies which artwork is performing best overall. What is somehow left out of the blog post is any mention of personal data affecting the algorithmic pool of images. In the face of this scandal, all the challenges overcome for the sake of nine images may be more stress than it is worth.
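A toy way to picture artwork A/B testing is a simple bandit loop: show candidate images, track click-through, and gradually favour the best performer. The image names and click probabilities below are invented for illustration and have nothing to do with Netflix’s real system.

```python
# A toy bandit sketch of artwork A/B testing; names and click rates are invented.
import random

true_ctr = {"damon_driver.jpg": 0.10, "robin_williams.jpg": 0.14}  # unknown in reality
shows = {art: 0 for art in true_ctr}
clicks = {art: 0 for art in true_ctr}

def choose(epsilon=0.1):
    # Explore occasionally (and until every artwork has been shown at least once),
    # otherwise exploit the artwork with the best observed click-through rate.
    if random.random() < epsilon or min(shows.values()) == 0:
        return random.choice(list(true_ctr))
    return max(shows, key=lambda art: clicks[art] / shows[art])

for _ in range(10_000):
    art = choose()
    shows[art] += 1
    clicks[art] += random.random() < true_ctr[art]   # simulate a user's click

for art in true_ctr:
    print(art, f"shown {shows[art]} times, observed CTR {clicks[art] / shows[art]:.3f}")
```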

4. The Twitter Thread

Stacia L. Brown (@slb79) original tweet. Photo from The New York Times.
Follow-up tweet from Stacia L. Brown (@slb79). Photo from The New York Times.

In 2018, the algorithm made waves again in The New York Times. Journalist Lara Zarum focused on the tweet that brought on all the speculation. Stacia L. Brown (@slb79) tweeted out to ask her followers whether they had noticed the same race marketing tactic. The New York Times hyperlinked more examples of responses from her followers who had also noticed. The article focused on the user experience, and the bigger societal issue at hand here. In today’s world, there is a great deal of oppressive behaviour towards specific groups of people and the fight for equal rights, justice and safety for those groups. Any sense of catering to a specific race is completely unacceptable, and Zarum validates that through her inclusion of the many voices on Twitter who spoke out about their experiences. People do not want to be catered to on the basis of their race when it comes to watching a movie in the privacy of their own home. Even if Netflix had no intention of this, the journalist holds the algorithm accountable for making its users feel this way by sharing their words online. The usage of screenshots showed that if people feel upset enough to speak out against the algorithm, then the algorithm has done something wrong.

Response (@realshannon1) to Stacia L. Brown’s tweet. Photo from The New York Times.
Shannon’s profile, artwork for Black Panther. Photo from The New York Times.
Shannon’s daughter’s profile, artwork for Black Panther. Photo from The New York Times.

5. The Multiple Expert Perspectives

Quote from Tobi Aremu. Photo from The Guardian.

In the same week, The Guardian’s Nosheen Iqbal reached out for comments on both sides of the algorithm. She spoke to two users in the entertainment industry with a good understanding of what is right when it comes to marketing a film or TV series. Film-maker Tobi Aremu said he could not imagine how filmmakers with content on Netflix must feel; misrepresenting their work for the sake of more views did not sit right with him, and the same went for Tolani Shoneye, host of The Receipts Podcast. Speaking to The Guardian, she recalled a recent encounter with the algorithm: “There was 30 minutes of a romcom I ended up watching last week because I thought it was about the black couple I was shown on the poster.” Iqbal also got a quote from the party in question, Netflix, which claimed this cannot be possible because it does not even hold that information: “We don’t ask members for their race, gender or ethnicity so we cannot use this information to personalise their experience.” The Guardian presented perspectives from both sides of the industry to weigh in on the algorithm’s most recent news, and then left it up to its readers to decide how to feel about it.

Maybe if Netflix traded spot-on recommendations for privacy, it would not be consistently under fire for its algorithm’s artwork. But it’s almost 2020, so who are we kidding.

We’re Here, We’re Queer and We’re Demonetised

Excerpt from Tyler Oakley’s YouTube channel. (Photo: Tyler Oakley)

“YouTube is a platform where we can express ourselves and collaborate with each other,” says Ethan (20) about the social networking site. Being the student that he is, he is sat at a coffee shop with his laptop open in front of him. While telling me of his experience using YouTube, he is quickly scrolling through his subscriptions box, where there are videos ranging from gaming videos to makeup tutorials. “I don’t really use YouTube to upload videos myself, but I think it’s just a really good place to explore whatever you want to,” he comments on his account. However, when asked about the algorithms going into the process of demonetising videos, the economics student is out of his element. “I have no idea how it works because it doesn’t really affect me, but it seems like it’s a lucrative business to be in right now.”

While some ‘YouTubers’ do indeed make a lucrative living, a question arises: Is this something that can be achieved by anyone?

According to the creators behind the Rainbow Coalition, the answer is no. In a video titled “WE’RE SUING GOOGLE/YOUTUBE – And here’s why…”, posted 14 August 2019, eight LGBT+ YouTube creators announce that they are suing the companies over biased algorithms demonetising queer content. Their concerns about this bias involve not only their livelihoods being threatened, but also their content being categorised in a way that in some cases restricts it and makes it difficult for the creators to reach their audience. This is a serious claim to make, as it accuses YouTube of enabling a potentially homophobic algorithm. Given that the platform is one of the biggest social media networks on the market, with over a billion users, the accusation is something the company should be taking immediate action to fix, especially in today’s political climate. Yet these claims are far from new.

@TylerOakley, a highly influential LGBT+ YouTuber, tweeted in March 2017 that “one of my recent videos “8 Black LGBTQ+ Trailblazers Who Inspired Me” is blocked …”, meaning that it would not reach audiences browsing in Restricted mode. This raises the question of why. As explained in the YouTube Help Centre, Restricted mode is a function that, when switched on, aims to hide content concerned with drugs and alcohol, sexual situations (referring to “overly detailed conversations or depictions of sex or sexual activity”), violence, mature subjects (“relating to terrorism, war, crime and political conflicts that resulted in death or serious injury…”), profane and mature language, and incendiary and demeaning content. Where Oakley could have broken these guidelines, however, remains unclear.

YouTube, for its part, has been struggling to keep up with these issues. In an interview with YouTube creator Alfie Deyes, Susan Wojcicki, CEO of YouTube, states that there are no policies in place to demonetise queer content based on queer terminology in the title. That claim has since been tested by a group of YouTube users through reverse engineering. In short, reverse engineering means replicating a process, in this case the algorithm in question; done successfully, it can serve as a research method for understanding how a system behaves. When these YouTubers, including data researcher Sealow, conducted their research, they found that 33% of the queer titles tested were automatically demonetised. Even more striking, when the LGBT+ terminology was swapped for the words ‘happy’ or ‘friend’, all of the demonetised content was instantly monetised again. This contradicts both Wojcicki’s statement and YouTube’s guidelines on monetisation.

Visualisation of the queer titles tested. Left: 33% of the 100 titles tested were demonetised. Right: 100% of the titles were monetised after the queer terminology was replaced with “friend” or “happy”.
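In outline, the word-swap test works something like the sketch below. The titles, the list of flagged terms and the check_monetisation stub are all hypothetical stand-ins of our own: the researchers uploaded real videos and read the monetisation status back from YouTube itself, which a short script cannot reproduce. What the sketch does show is the comparison logic at the heart of the experiment.

```python
import re

# Hypothetical stand-ins: the real experiment uploaded actual videos and read
# their monetisation status from YouTube Studio. This stub only simulates a
# title-based filter so the word-swap comparison is easy to follow.
FLAGGED_TERMS = {"gay", "lesbian", "transgender", "queer", "lgbt"}

def check_monetisation(title: str) -> bool:
    """Return True if a video with this title stays monetised (simulated)."""
    return not any(term in title.lower() for term in FLAGGED_TERMS)

def swap_terminology(title: str, replacement: str = "friend") -> str:
    """Replace queer terminology with a neutral word, mirroring the test."""
    pattern = re.compile("|".join(FLAGGED_TERMS), re.IGNORECASE)
    return pattern.sub(replacement, title)

# Illustrative titles only, not the 100 titles used in the actual study.
queer_titles = [
    "My gay wedding vlog",
    "Coming out as transgender",
    "Queer book recommendations",
]

for title in queer_titles:
    before = check_monetisation(title)
    after = check_monetisation(swap_terminology(title))
    print(f"{title!r}: monetised before swap={before}, after swap={after}")
```

Run on these made-up titles, the swapped versions all come back monetised while the originals do not, which is the pattern Sealow’s group reported at a much larger scale.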

Madeleine, a 19-year-old student in London, helped shine some light on public knowledge and opinion on the subject. When informed about the issue, she put her history book back on the table and looked perplexed. “I had no idea that this was going on,” she admitted. When given the research on the subject, as well as a basic introduction to algorithms, she could not help but look surprised. “I’ve actually always thought of algorithms as just numbers and math, and therefore neutral.” She is not alone in thinking this. As our technology advances, it becomes harder for the average person to keep up with the processes going on underneath the surface. This can make us forget a very important fact: algorithms are programmed by humans.

In an ideal world, algorithms would indeed be neutral and so represent equality. Until then, though, algorithms are programmed by humans, which means that humans may, intentionally or unintentionally, perpetuate biases found outside the computer science domain. This has been seen time and again, perhaps most clearly in facial recognition software. The fact that facial recognition still fails to identify people of colour as reliably as it identifies white people goes to show that there are indeed biases embedded in algorithms, as is also evident in the case of queer YouTube creators and their content.

As the conflict between YouTube and its queer creators continues, it is important to remember that a perfect algorithm does not yet exist. That is not to say the LGBT+ community should take the issue lightly, but there may be room for more communication from both sides. While working towards a future with unbiased algorithms, it may prove more effective to be transparent about algorithmic issues, so that users can learn about the processes that go into them and understand the difficulty of engineering a neutral algorithm. Neither Ethan nor Madeleine had encountered the issue before, which supports this point. As Ethan notes, “I’ve literally never come across this before. As a gay person myself I would want to know more about how these problems are tackled before I blindly support a homophobic algorithm. But like, I’m not that surprised either, because it only reflects the inequality in the real world, doesn’t it?”

“Apple Hates Twins!”

Screenshot from: https://www.youtube.com/watch?v=GFtOaupYxq4&t=14s by Ethan and Grayson Dolan

Ethan and Grayson Dolan are 19-year-old identical twin YouTubers who spend their time creating comedy videos for their 10 million subscribers. Everyone other than their mom (and die-hard fans) struggles to tell them apart. So, apparently, does Apple. “According to Apple, Ethan and I are the same person,” says Grayson, looking bemused, as his twin brother has just demonstrated that he can unlock his iPhone using Face ID. “If someone out there kind of looks like you, they might be able to get into your iPhone X,” says Ethan. The twins posted a video on their YouTube channel in 2017, after the release of the iPhone X, showing that Ethan could unlock Grayson’s phone with Face ID because the technology could not distinguish between them. “If you’re a twin out there, make a stand! Apple hates twins! Facial recognition hates twins!” Ethan jokes. The video goes on to show Ethan unlocking Grayson’s phone when he is not around and tweeting something embarrassing from his account. “Thumbs up for twin recognition – because Apple need to change things and make sure that facial recognition gets better so that it’s twin-proof,” Ethan concludes. Apple’s Face ID technology is apparently not as secure as it seems. Be warned: if you have a doppelgänger, you aren’t the only person who can unlock your phone or iPad.

Apple released its plans to introduce facial recognition as a security measure for the new iPhone models to much excitement, but also a fair amount of controversy. The big issue for most iPhone users was accountability and privacy – who would get the information and how they might use it. There was less concern about accuracy. According to Apple’s support site, the chances of another person being able to use Face ID to unlock a user’s iPhone are 1 in 1,000,000. Those odds worsen in the case of twins, siblings who look alike, and children under the age of 13, whose features have not fully developed. Does this mean that Apple is aware of a potential weakness in its security system but unable to do anything about it? Or is it only an issue if you have a twin, a sibling who looks like you, or a doppelgänger? What about if you look like your mum or dad? Most teenagers’ idea of hell is that their parents can bypass their settings.
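Taken at face value, Apple’s 1-in-1,000,000 figure can be turned into a rough back-of-the-envelope estimate of how false matches add up across many strangers. The sketch below is ours, not Apple’s: it treats every attempt as an independent coin flip at that rate, which real faces, and especially twins, are not.

```python
# Back-of-the-envelope estimate based on Apple's published 1-in-1,000,000
# false-match figure. Assumes each stranger is an independent trial, which
# is an illustrative simplification rather than how Face ID actually behaves.
FALSE_MATCH_RATE = 1 / 1_000_000

def chance_of_at_least_one_match(strangers: int, rate: float = FALSE_MATCH_RATE) -> float:
    """Probability that at least one of `strangers` random people gets a false match."""
    return 1 - (1 - rate) ** strangers

for strangers in (1, 1_000, 100_000, 1_000_000):
    print(f"{strangers:>9,} strangers -> {chance_of_at_least_one_match(strangers):.4%}")
```

Even on those generous assumptions, it takes on the order of a million strangers before a false match becomes more likely than not; the Dolan twins’ experience shows why Apple carves out twins and look-alike siblings as a separate, much riskier case.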

iPhone XR requesting Face ID verification (Photo: By Author)

According to Statista, since 2017 – the release of the first model to integrate the Face ID software, the iPhone X – over half a billion iPhones with this technology have been sold. “Face ID is the future of how we will unlock our smartphones and protect our sensitive information,” Phil Schiller, Senior Vice President of Worldwide Marketing at Apple, explained at Apple’s 2017 special event where Face ID was announced. However, John Koetsier, a consumer technology journalist for Forbes, called facial recognition on the iPhone X “a step backwards from Touch ID”, though he commended the improvements to the technology in later versions such as the iPhone XR, iPhone XS and now the iPhone 11.

To investigate the potential shortcomings of this algorithmic technology in practice, we reached out to our network on Instagram and asked if anyone had successfully unlocked someone else’s iPhone using Face ID. We were surprised at the number of replies. Tess, a 20-year-old student from London, responded: “I can unlock my sister’s iPhone XS, it doesn’t always work but if I needed to unlock it, I think I definitely could.” When asked whether she thought this was a security threat she responded: “For me and my sister? No, because we’re so close and I already know her passcode it wouldn’t be an issue for us. However, I can definitely see how there is an issue and Face ID isn’t actually that secure,” she laughed.

Gabi, 22, a student from Columbia, responded that she was able to unlock her boyfriend’s (a 25-year-old French man) iPhone using Face ID: “It only worked once, but it just opened, and we were both quite shocked. But we tried again, and it didn’t. Maybe it was just a one-off or accidental, but I guess that means that anyone could open someone’s iPhone by chance.” This example is interesting, as the respondent and her boyfriend are not only different genders but also different races. Does it mean anyone could potentially unlock another person’s iPhone by pure chance if the security momentarily lapses, or was this a bizarre one-off?

iPhone XS users trying to unlock each other’s Face ID (Photo: By Author)

We reached out to other iPhone users who had not had an experience like this with Face ID and asked for their thoughts on the technology. James, 21, a student from Norway, replied: “I love it. It’s way faster than the passcode and also futuristic, and I feel very cool when I unlock my phone.” We then told them about the potential security issues other users had experienced and asked for their thoughts. “I suppose that it’s problematic, but I don’t have a twin or a sibling that looks like me. I’ve never had an issue with it before, so I can’t see myself having one now,” responded Anna, 22, a student from Belfast.

Similarly, we asked those who had been affected by the algorithmic issue whether they still trusted the software. Gabi responded, “Yeah of course we still use it – it’s the easiest way to unlock your phone.” Tess replied in much the same vein: “Yeah I can’t see either of us not using it, passcode seems outdated at this point and even though it’s a security problem I guess, it’s not really a problem between me and my sister.”

Despite the different levels of concern, Apple’s Face ID technology has a problem: it cannot reliably distinguish between twins, people who look alike, or, in certain circumstances, even people who look nothing alike. Is this an underlying security threat to Apple and its users, or just a teething problem of software development? Our informal poll suggests it will not stop consumers buying Apple products or using the technology. Apple will no doubt keep refining the software to iron out these problems and prove that it does not in fact “hate twins” – like other people, it just can’t always tell them apart.