5 Ways Journalists are Reporting on the Netflix Algorithm

The New York Times, Medium, WIRED, The Guardian, and Marie Claire have all covered the Netflix algorithm for their readers. Photo by the author.

Netflix and Chill? More like Netflix and a possible case of racial bias. Imagine one night (or every night, in most of our cases) you are browsing Netflix for the latest Zac Efron movie. But when you come across the title, the artwork features a side character who only had two lines. If you’re an avid user, you may have noticed this pattern: movies marketed to you with artwork featuring a character who shares your race but has only minor acting credits.

Netflix’s recommendation algorithm has been making headlines for years over its apparent racial bias, and journalists have been there every step of the way to uncover what it is really doing to your browsing experience.

Here are 5 ways journalists have reported on the Netflix algorithm.

1. The OG Interview
Xavier Amatriain (left) and Carlos Gomez-Uribe (right). Photo from WIRED.

So, how did we get here? In 2013, WIRED was one of the first outlets to cover the recommendation algorithm in depth. Reporter Tom Vanderbilt interviewed the duo behind it: Carlos Gomez-Uribe, VP of personalisation algorithms, and Xavier Amatriain, engineering director. Amatriain stressed that the algorithm is not arbitrary: “All of our analysts are TV and film buffs, and many have some experience working in the entertainment industry. They obviously have personal tastes, but their job as an analyst is to be objective, and we train them to work that way.” Asked point-blank whether Netflix tracks views, he confirmed that they know not only exactly what you watch and search for, and the time and day you do it, but even your scrolling behaviour. The 2013 interview, however, made no mention of expert personal-data “buffs.” WIRED focused on gaining insight into the operation by going straight to the source.
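To make the scale of that tracking concrete, here is a minimal sketch of what a single record in such a viewing log might look like. The field names and structure are my own assumptions for illustration; Netflix has never published its actual logging schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ViewingEvent:
    """One hypothetical row in a viewing log, covering the signals the
    WIRED interview describes: what you watch and search for, the time
    and day you do it, and your scrolling behaviour."""
    user_id: str
    title_id: str                # what you watched
    search_query: Optional[str]  # what you searched for, if anything
    timestamp: datetime          # the time and day of the activity
    rows_scrolled: int           # scrolling behaviour on the browse page
    seconds_hovered: float       # how long you lingered on the title

# Example: one late-night browse session, logged in full.
event = ViewingEvent(
    user_id="u123",
    title_id="latest_zac_efron_movie",  # hypothetical title ID
    search_query="zac efron",
    timestamp=datetime(2019, 11, 2, 1, 30),
    rows_scrolled=14,
    seconds_hovered=3.2,
)
print(event.timestamp.strftime("%A, %H:%M"))  # the day and time they know
```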

2. A Personal Story from a User

April Joyner’s article. Photo from Marie Claire.

In 2016, Marie Claire was one of the first to report on the racial bias of the algorithm with “Blackflix,” a personal essay on the user experience. Writer April Joyner shared examples of how she noticed the bias in her everyday use of Netflix: “Maybe it was the Scandal binge, or the fact that I watched a couple of films by black female directors, but suddenly a good third of my new movie recommendations feature black actors in leading roles.” That included an entirely new category, “African American Movies,” popping up on her home screen. At first, she did not mind, because the algorithm was doing exactly what it was supposed to: emphasizing what she wanted to see. Her issue was with what viewers were not being shown. If a viewer does not express interest in African American content, it is hidden entirely from their Netflix experience. If films with casts of color are only emphasized to her because she has shown an interest in them, are they invisible to everyone who hasn’t? Marie Claire raises questions like these through a user its readers can easily identify with, sharing the same frustrations.

3. The Artwork Personalisation Experiments

Netflix’s example featuring Good Will Hunting. Photo from Netflix Technology Blog.

In 2017, the Netflix Tech Blog on Medium responded with real examples of the algorithm in action. If you’ve been curled up in bed reminiscing about a bad breakup, and to fill that void you have been watching an absurd number of rom-coms, then artwork with Matt Damon and Minnie Driver will likely appear for Good Will Hunting. If instead you’ve been feeling a bit better and bingeing comedies, artwork of Robin Williams will grace your screen. Netflix focused on behind-the-scenes examples that give users an inside look at how it tries to highlight each individual’s preferences. This sounds like a big task to take on, and it is. The blog discusses the challenge of choosing which artwork to show when there isn’t enough watch-history data, and even walks through the A/B testing used to identify which artwork performs best overall. What is somehow left out of the post is any mention of personal data shaping the algorithmic pool of images. In the face of this scandal, all the challenges they overcome for the sake of nine images may be more stress than it’s worth.
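As a rough illustration of the contextual-artwork idea the blog describes, here is a toy sketch: pick the candidate image whose tag best matches the viewer’s dominant recent genre, and fall back to a default when the watch history is too thin. Everything here, from the function name to the genre tags and threshold, is my own simplification, not Netflix’s implementation.

```python
from collections import Counter

# Hypothetical candidate artwork for Good Will Hunting, tagged by the
# theme each image emphasises (a simplification for illustration).
ARTWORK = {
    "romance": "damon_and_driver.jpg",   # for the rom-com binger
    "comedy": "robin_williams.jpg",      # for the comedy binger
    "default": "theatrical_poster.jpg",  # cold-start fallback
}

MIN_HISTORY = 5  # too little watch history -> don't personalise

def pick_artwork(recent_genres: list[str]) -> str:
    """Return the artwork whose tag matches the viewer's dominant
    recent genre, or the default when there isn't enough data."""
    if len(recent_genres) < MIN_HISTORY:
        return ARTWORK["default"]
    top_genre, _ = Counter(recent_genres).most_common(1)[0]
    return ARTWORK.get(top_genre, ARTWORK["default"])

# A week of rom-coms steers the thumbnail toward the romance image.
print(pick_artwork(["romance"] * 6 + ["comedy"] * 2))  # damon_and_driver.jpg
```

The system the post actually describes is far more sophisticated than this, but the sketch captures the trade-off the article raises: personalisation of this kind only works once the service knows a great deal about what you watch.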

4. The Twitter Thread

Stacia L. Brown’s (@slb79) original tweet. Photo from The New York Times.
Follow-up tweet from Stacia L. Brown (@slb79). Photo from The New York Times.

In 2018, the algorithm made waves again in The New York Times. Journalist Lara Zarum focused on the tweet that set off the speculation: Stacia L. Brown (@slb79) asked her followers whether they had noticed the same-race marketing tactic. The Times hyperlinked further examples from followers who had. The article focused on the user experience and the bigger societal issue at hand. At a time when specific groups of people face oppression and are fighting for equal rights, justice, and safety, any sense of being catered to by race is completely unacceptable, and Zarum underscores that by including the many voices on Twitter who spoke out about their experiences. People do not want to be targeted on the basis of their race when they watch a movie in the privacy of their own home. Even if Netflix never intended this, Zarum holds the algorithm accountable for making users feel this way by sharing their words online. The screenshots make the point: if people feel upset enough to speak out against the algorithm, the algorithm has done something wrong.

Response (@realshannon1) to Stacia L. Brown’s tweet. Photo from The New York Times.
Shannon’s profile, artwork for Black Panther. Photo from The New York Times.
Shannon’s daughter’s profile, artwork for Black Panther. Photo from The New York Times.

5. The Multiple Expert Perspectives

Quote from Tobi Aremu. Photo from The Guardian.

In the same week, The Guardian’s Nosheen Iqbal gathered comments from both sides of the debate. She spoke to two users who work in the entertainment industry and understand how a film or TV series should be marketed. Film-maker Tobi Aremu said he could not imagine how the filmmakers with content on Netflix must feel; misrepresenting their work for the sake of more views did not sit right with him. The same went for Tolani Shoneye, host of The Receipts Podcast, who recalled a recent encounter with the algorithm: “There was 30 minutes of a romcom I ended up watching last week because I thought it was about the black couple I was shown on the poster.” Iqbal also got a quote from the party in question. Netflix claimed this can’t be happening because it doesn’t hold that information in the first place: “We don’t ask members for their race, gender or ethnicity so we cannot use this information to personalise their experience.” The Guardian presented perspectives from across the industry and then left it to readers to decide how to feel about the algorithm’s latest news.

Maybe if Netflix traded spot-on recommendations for a little privacy, it would not be consistently under fire for its algorithmic artwork. But it’s almost 2020, so who are we kidding?