We’re Here, We’re Queer and We’re Demonetised

Excerpt from Tyler Oakley’s YouTube channel. (Photo: Tyler Oakley)

“YouTube is a platform where we can express ourselves and collaborate with each other,” says Ethan (20) about the social networking site. Every inch the student, he is sitting in a coffee shop with his laptop open in front of him. While telling me about his experience of YouTube, he scrolls quickly through his subscriptions feed, where the videos range from gaming to makeup tutorials. “I don’t really use YouTube to upload videos myself, but I think it’s just a really good place to explore whatever you want to,” he says of his own account. When asked about the algorithms behind video demonetisation, however, the economics student is out of his element. “I have no idea how it works because it doesn’t really affect me, but it seems like it’s a lucrative business to be in right now.”

While some ‘YouTubers’ do indeed make a lucrative living, a question arises: Is this something that can be achieved by anyone?

According to the creators behind the Rainbow Coalition, the answer is no. In a video titled “WE’RE SUING GOOGLE/YOUTUBE – And here’s why…”, posted on 14 August 2019, eight LGBT+ YouTube creators announce that they are suing the companies over biased algorithms demonetising queer content. Their concerns about this bias involve not only their livelihoods being threatened, but also their content being categorised in a way that in some cases restricts it and makes it difficult for them to reach their audience. This is a serious claim to make, as it accuses YouTube of enabling a potentially homophobic algorithm. Given that the platform is one of the biggest social media networks on the market, with over a billion users, the accusation is something the company should be taking immediate action on, especially in today’s political climate. Yet these claims are far from new.

@TylerOakley, a highly influential LGBT+ YouTuber, tweeted in March 2017 that “one of my recent videos “8 Black LGBTQ+ Trailblazers Who Inspired Me” is blocked …”, meaning that it would not reach audiences browsing in Restricted Mode. This raises the question of why. As explained in the YouTube Help Centre, Restricted Mode is a function that, when switched on, aims to hide content concerned with drugs and alcohol; sexual situations (“overly detailed conversations or depictions of sex or sexual activity”); violence; mature subjects (“relating to terrorism, war, crime and political conflicts that resulted in death or serious injury…”); profane and mature language; and incendiary and demeaning content. Where Oakley could have broken these guidelines, however, remains unclear.

YouTube, for its part, has been struggling to keep up with these issues. In an interview with YouTube creator Alfie Deyes, the CEO of YouTube, Susan Wojcicki, states that there are no policies in place to demonetise queer content based on queer terminology in the title. That claim has since been tested by a group of YouTube users through reverse engineering. In short, reverse engineering means reconstructing how a process works, in this case the demonetisation algorithm, by observing its behaviour. Done successfully, it can serve as a research method for understanding a system. When these YouTubers, including data researcher Sealow, conducted their research, they found that 33% of the queer titles tested were automatically demonetised. Even more shocking, when the LGBT+ terminology was swapped for the words ‘happy’ or ‘friend’, all of the demonetised content was instantly monetised. This contradicts both Wojcicki’s statement and YouTube’s guidelines on monetisation.
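For readers curious what such a title-swap test looks like in practice, here is a minimal sketch in Python. It is hypothetical, not Sealow’s actual code: the word lists are illustrative, and the injected `is_monetised` function stands in for however the researchers queried YouTube’s response to each upload.

```python
# Hypothetical sketch of the title-swap experiment described above.
# The real researchers queried YouTube itself; here `is_monetised`
# is an injected stand-in so the harness can run on its own.

QUEER_TERMS = ["gay", "lesbian", "bisexual", "transgender"]  # illustrative
SWAP_WORD = "friend"

def swap_terms(title):
    """Replace each queer term in the title with a neutral word."""
    for term in QUEER_TERMS:
        title = title.replace(term, SWAP_WORD)
    return title

def run_experiment(titles, is_monetised):
    """Count titles demonetised as written but monetised once swapped."""
    flipped = [t for t in titles
               if not is_monetised(t) and is_monetised(swap_terms(t))]
    rate = len(flipped) / len(titles)
    print(f"{rate:.0%} of titles were demonetised only with queer wording")
    return flipped

# Toy stand-in reproducing the reported pattern, NOT YouTube's real logic:
demo_classifier = lambda t: not any(term in t for term in QUEER_TERMS)
run_experiment(["my gay wedding vlog", "picnic with a friend"], demo_classifier)
```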

Visualisation of the queer titles tested. Left: 33% of the 100 titles tested were demonetised. Right: 100% of the titles were monetised after the queer terminology was replaced with “friend” or “happy”.

Madeleine, a 19-year-old student in London, helped shed some light on public knowledge and opinion on the subject. When informed about the issue, she put her history book back on the table and seemed perplexed. “I had no idea that this was going on,” she admitted. Presented with the research on the subject, as well as a basic introduction to algorithms, she could not help but look surprised. “I’ve actually always thought of algorithms as just numbers and maths, and therefore neutral.” She is not alone in thinking this. As our technology advances, it becomes harder for the average person to keep up with the processes going on underneath the surface. This can make us forget a very important fact: algorithms are programmed by humans.

In an ideal world, algorithms would indeed be neutral, and so represent equality. Until then, though, algorithms are programmed by humans, which means that humans may, intentionally or unintentionally, carry biases from outside the computer science domain into them. This has been seen time and again, perhaps most clearly in facial recognition software. The fact that facial recognition still fails to identify people of colour as reliably as it identifies white people goes to show that there are indeed biases embedded in algorithms, as is also evident in the case of queer YouTube creators and their content.

As the conflict between YouTube and its queer creators continues, it is important to remember that a perfect algorithm does not yet exist. This is not to say that the LGBT+ community should take the issue lightly, but there might be room for more communication from both sides. While working towards a future with unbiased algorithms, it may prove more effective to be transparent about algorithmic issues, so that users can learn about the processes that go into them and perhaps understand the difficulties of engineering a neutral algorithm. Ethan and Madeleine bear this out, as neither of them had encountered the problem before. As Ethan notes, “I’ve literally never come across this before. As a gay person myself I would want to know more about how these problems are tackled before I blindly support a homophobic algorithm. But like, I’m not that surprised either, because it only reflects the inequality in the real world, doesn’t it?”

“Apple Hates Twins!”

Screenshot from: https://www.youtube.com/watch?v=GFtOaupYxq4&t=14s by Ethan and Grayson Dolan

Ethan and Grayson Dolan are 19-year-old identical twins who spend their time creating comedy videos for their 10 million YouTube subscribers. Everyone other than their mom (and die-hard fans) struggles to tell them apart. So, apparently, does Apple. In 2017, after the release of the iPhone X, the twins posted a video showing that Face ID could not distinguish between them. “According to Apple, Ethan and I are the same person,” says Grayson, looking bemused, after his brother has just unlocked his iPhone using Face ID. “If someone out there kind of looks like you, they might be able to get into your iPhone X,” says Ethan. “If you’re a twin out there, make a stand! Apple hates twins! Facial recognition hates twins!” he jokes. The video goes on to show Ethan unlocking Grayson’s phone when he is not around and tweeting something embarrassing from his account. “Thumbs up for twin recognition – because Apple need to change things and make sure that facial recognition gets better so that it’s twin-proof,” Ethan concludes. Apple’s Face ID technology is apparently not as secure as it seems. Be warned! If you have a doppelgänger, you aren’t the only person who can unlock your phone or iPad.

Apple released its plans to introduce facial recognition as a security measure for its new iPhone models to much excitement but also a fair amount of controversy. The big issue for most iPhone users was accountability and privacy – who would get the information and how they might use it. There was less concern about accuracy. According to Apple’s support site, the chances of another person being able to use Face ID to unlock your iPhone are 1 in 1,000,000. These odds are lower in the case of twins, siblings that look alike and children under the age of 13, whose features have not fully developed. Does this mean that Apple is aware of a potential threat to its security systems and unable to do anything about it? Or is it only an issue if you have a twin, a sibling that looks like you, or a doppelgänger? What about if you look like your mum or dad? Most teenagers’ idea of hell is that their parents can bypass their settings.
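Taking Apple’s published figure at face value, a quick back-of-envelope calculation shows what 1 in 1,000,000 means once many strangers try. (Apple publishes no equivalent figure for twins, so that case is not modelled here.)

```python
# Apple quotes a 1-in-1,000,000 chance that a random other person's
# face unlocks your iPhone. The chance that at least one of n
# strangers succeeds is 1 - (1 - p)^n.
p = 1 / 1_000_000

for n in (1, 1_000, 100_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>7} strangers -> {at_least_one:.4%} chance of a false match")
```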

iPhone XR requesting Face ID verification (Photo: by author)

According to statistic.com, since 2017 (when the iPhone X, the first model to integrate Face ID, was released) over half a billion iPhones with this technology have been sold. “Face ID is the future of how we will unlock our smartphones and protect our sensitive information,” Phil Schiller, Senior Vice President of Worldwide Marketing at Apple, explained at Apple’s 2017 special event where Face ID was announced. However, John Koetsier, a consumer technology journalist for Forbes magazine, called facial recognition on the iPhone X “a step backwards from touch ID”, though he commended the improvements to the technology in later versions such as the iPhone XR, iPhone XS and now the iPhone 11.

To investigate the potential shortcomings of this algorithmic technology in practice, we reached out to our network on Instagram and asked if anyone had successfully unlocked someone else’s iPhone using Face ID. We were surprised at the number of replies. Tess, a 20-year-old student from London, responded: “I can unlock my sister’s iPhone XS. It doesn’t always work, but if I needed to unlock it, I think I definitely could.” When asked if she thought this was a security threat, she laughed: “For me and my sister? No, because we’re so close and I already know her passcode, it wouldn’t be an issue for us. However, I can definitely see how there is an issue and Face ID isn’t actually that secure.”

Gabi, 22, a student from Columbia, responded that she was able to unlock her boyfriend’s Face ID (he is a 25-year-old French man): “It only worked once, but it just opened, and we were both quite shocked. But we tried again, and it didn’t. Maybe it was just a one-off or accidental, but I guess that means that anyone could open someone’s iPhone by chance.” This example is interesting as the respondent and her boyfriend are not only different genders but also different races. Does this mean anyone could potentially unlock another person’s iPhone by pure chance if the security momentarily lapses, or was this a bizarre one-off?

iPhone XS users trying to unlock each other’s Face ID (Photo: By Author)

We also reached out to iPhone users who had not had an experience like this with Face ID and asked for their thoughts on the technology. James, 21, a student from Norway, replied: “I love it. It’s way faster than the passcode and also futuristic, and I feel very cool when I unlock my phone.” We then told them about the potential security lapses that other users had experienced and asked what they thought. “I suppose that’s problematic, but I don’t have a twin or a sibling that looks like me. I’ve never had an issue with it before, so I can’t see myself having one now,” responded Anna, 22, a student from Belfast.

We then asked those who had been affected by the algorithmic issue whether they still trusted the software. Gabi responded, “Yeah, of course we still use it – it’s the easiest way to unlock your phone.” Similarly, Tess replied: “Yeah, I can’t see either of us not using it. The passcode seems outdated at this point, and even though it’s a security problem, I guess it’s not really a problem between me and my sister.”

Despite the different levels of concern, Apple’s Face ID technology has a problem: it cannot completely distinguish between twins, people that look alike or, in certain circumstances, seemingly people that look nothing alike. Is this an underlying security threat to Apple and its users, or just a teething problem of software development? Our poll suggests that it will not stop consumers buying Apple products or using the technology. Apple is stepping up its R&D to iron out these problems and prove that it does not in fact “hate twins” – like other people, it just can’t always tell them apart.

When Recommendation becomes a threat

Warren Smith, a 25-year-old man, sat in a chair with a cup of tea in hand, browsing Instagram for fun after work. “I can’t believe who I found,” he exclaimed when he opened the recommendation page and casually came across a post by a high-school classmate. In his friend’s following list, he found even more high-school friends. His first reaction was shock, since they had lost contact ages ago. He lay there and scrolled for another hour; the laughter disappeared, and instead he frowned at the screen. He felt amazed and worried at the same time, wondering how Instagram knew him so well.

Warren Smith is not the only one to have found old friends on Instagram. “Instagram relies on machine learning to create a unique feed for each user,” according to TechCrunch. Each user experiences the platform differently. It is user-targeted and seems to know every user very well: Instagram’s algorithm calculates recommendations based on each user’s history on the platform, such as their search history and the content they have liked. This information helps the algorithm work out what a person is interested in and who they are familiar with.
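Instagram does not disclose how these suggestions are computed, but a classic baseline for “people you may know” features is to rank accounts by mutual connections. The toy follow graph and scoring below are purely illustrative, not Instagram’s algorithm:

```python
# Illustrative "friends of friends" suggestion: rank accounts you don't
# follow by how many of your connections follow them. Not Instagram's
# real system, which is unpublished.

from collections import Counter

follows = {  # toy follow graph: user -> set of accounts they follow
    "warren": {"alice", "bob"},
    "alice":  {"bob", "carol", "dan"},
    "bob":    {"carol", "warren"},
}

def suggest(user, graph):
    scores = Counter()
    for friend in graph.get(user, set()):
        for candidate in graph.get(friend, set()):
            if candidate != user and candidate not in graph.get(user, set()):
                scores[candidate] += 1  # one shared connection
    return scores.most_common()

print(suggest("warren", follows))  # carol ranks first with two mutuals
```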

Instagram is able to recommend potential friends to Warren and others because it has attracted a huge user base. According to App Download and Usage Statistics, Instagram is the second-most-downloaded free app in the App Store, and by 2019 over 1 billion users were using it every month. Using Instagram has become a social phenomenon wherever technology services reach: the more users the platform has, the more chances it has to connect people. Instagram has become a widespread tool for people to socialise and develop their networks. It builds such a close relationship with its users that it might know them better than a friend does.

Warren Smith felt worried when he realised that Instagram seemed to know his network through its recommendations. The truth is that his worry is warranted: the recommendation function raises safety issues. According to BBC News, a father accused Instagram of having helped to kill his daughter. In 2017, Molly Russell, a 14-year-old girl, took her own life. When her parents went through her Instagram account, they found material about suicide. Instagram had promoted content about depression and suicide to her, so they believed the platform bore responsibility for their daughter’s death by promoting distressing material. This raises a safety issue with the Instagram algorithm: it only calculates what users want to see, without caring whether it is harmful.

Molly Russell’s story raises awareness of the threat posed by recommendation. Is it really wise to consider Instagram our friend? Kevin is a 21-year-old student from Korea studying management at university. When asked about his user experience, he looked unsatisfied, put down his cup of water and said seriously: “I always see advertisements come up while I am chilling with Instagram. It makes me feel like, while we are trusting Instagram and seeing it as a friend, Instagram is just trying to extract revenue from us.” His experience shows that Instagram’s aim is not to befriend users but to make revenue. Because the platform is user-targeted, businesses use it to advertise themselves; there are over 25 million businesses on Instagram, according to Omnipresence. Instagram serves advertisements based on its algorithmic model of user preference. Its aim is to keep users on the app for as long as possible: the longer they stay, the more advertisements they see. Molly Russell trusted Instagram by allowing it to learn her preferences, but the promoted content led her towards suicide instead of offering any help. When an algorithm only calculates, without any emotion, it becomes important for users to be selective about promoted content and to protect their personal information.

Kevin going through the Instagram recommendation page before sleep (Photo: by author)

To protect ourselves from worrying about safety, it is important to know how Instagram actually learns people’s networks. When people find that the app makes communication and interaction easier through likes, comments, reposts and direct messages, they are more likely to use it. And the more they use Instagram, the more personal information they give it. Emily is a 34-year-old woman from the UK who works in an advertising company. When asked why she uses Instagram, she laughed, thought for a few seconds and said with resignation: “I feel like Instagram is a necessary tool for socialising, especially for the young generation, like all my friends and colleagues around me. If I want to interact and get along with them, it seems like I need to have an Instagram.” Emily is not alone: out of my seven interviewees, four stated that they use Instagram to socialise. When users interact with their friends on Instagram, they allow the platform to map their networks and build a relationship graph for them. Users seem to hand over their information passively, without too many worries, in exchange for using the platform. They give Instagram the chance to know them.

Warren Smith felt conflicted when he used Instagram. The recommendations did help him to find his friends, but they extracted his personal data at the same time. This reporting shows that Instagram’s recommendations do carry real risks for people’s lives. But users still have the right to choose what information they share with Instagram.

Will a like-free Instagram attract more users or not?

My flatmate Lauren is a typical Instagram user with 750 followers. She was lying on the sofa one very relaxing Friday night, scrolling through Instagram as usual. “Why have I only got 37 likes for this post so far? I posted it yesterday and it has been over a day!” The post she was complaining about was a selfie with me and another friend. She had posted it without any filters, simply adding the caption ‘Love this two’ below the picture.

However, as a dedicated Instagram user with 750 active followers, she eventually decided to repost. This time she added the ‘Sierra’ filter, tagged both me and another friend in the photo and wrote: ‘Loving spending time with my girls at uni (two hearts here). Will miss this next year (a flushed emoji here).’ For this post she finally achieved 102 likes, 65 more after some effort. She spent the rest of the Friday evening checking her Instagram account constantly and enjoying all the likes and comments.

Instagram feed

As you can see, the number of ‘likes’ on Instagram can be a real influence on my flatmate’s daily life, bringing both happiness and anxiety. What was supposed to be a social media platform where people simply share good food or beautiful views has ended up, according to the Guardian, as a competitive arena where everyone is eager to show the best side of their life. “I get upset by a post that doesn’t get enough likes and, more often than not, will just delete it,” as my flatmate put it.

A 2019 study by the digital consumer intelligence company Brandwatch shows that the platform currently has more than 1 billion monthly active users, and that 63% of Instagram users open the app every day, making it the second most popular and engaged-with social media platform after Facebook. In 2016, however, Instagram made a big change to its algorithm by switching to a non-chronological feed, where posts are ordered by the likelihood that you will be interested in the content, your interactions with other accounts and the timeliness of the post, according to the social media management platform Hootsuite. Like other platform owners, though, Instagram does not publish details of its algorithmic architecture.
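Because the real ranking is unpublished, any concrete description is guesswork; still, the three signals Hootsuite names can be sketched as a simple weighted score. The weights and fields below are invented for illustration only:

```python
# Toy feed-ranking score over the three publicly described signals:
# predicted interest, relationship with the poster, and recency.
# The weights are invented, not Instagram's.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    interest: float      # predicted interest in the content, 0..1
    relationship: float  # how often you interact with the author, 0..1
    hours_old: float

def rank_score(post):
    recency = 1 / (1 + post.hours_old)  # newer posts score higher
    return 0.5 * post.interest + 0.3 * post.relationship + 0.2 * recency

feed = [Post("friend", 0.4, 0.9, 2.0), Post("celebrity", 0.8, 0.1, 12.0)]
for post in sorted(feed, key=rank_score, reverse=True):
    print(post.author, round(rank_score(post), 3))
```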

This is where the rumours start. Users question how the algorithm works, as some posts are promoted with fake ‘likes’ for marketing purposes, leading to misinformation, which to some extent affects users’ mental health. The hidden algorithm determines the order in which you see posts. Even though it is supposed to show you the ‘best’ posts, it will inevitably make you miss some content, as posts can get buried deep in your feed. Posts with high like counts, mostly those from influencers or celebrities, are therefore likely to be viewed first. As Natalia Anio, a film studies student who is also an active video creator with 7,375 followers on Instagram, said: “Instagram definitely used to affect me as I always compared my ‘boring’ life with other girls’ or celebrities’ colourful life, especially their ‘wow’ body images, which made me not confident and not love myself for a while.” These days she only follows friends, family and people she finds inspirational.

Instagram definitely used to affect me as I always compared my ‘boring’ life with other girls’ or celebrities’ colourful life, especially their ‘wow’ body images, which made me not confident and not love myself for a while.’

——Natalia Anio, a film studies student

Natalia’s Instagram account, shared with her permission

In fact, according to a report published by the RSPH and the Young Health Movement in 2017, Instagram was rated, together with Snapchat, as the social media platform most detrimental to young people’s mental health and wellbeing. As an image-focused platform, the report suggests, Instagram is more likely than other popular social media platforms to put users at risk of loneliness, depression and anxiety about their bodies.

In an effort to address the issue, Instagram head Adam Mosseri announced a test of a hidden-likes feature at Facebook’s F8 developer conference in April 2019. This was meant to create a ‘less pressurised environment’, as he said at the conference, with less focus on like counts. However, in a 2018 paper, the scholar Kelly Cotter found that some influencers believe the Instagram algorithm can evaluate the relationships between users through authentic connection, such as commenting and replying, not just likes. This suggests that it is not the ‘like’ that matters most to engagement, but comments and hashtags, which really do affect the algorithm and distinguish genuine posts from those with marketing purposes. Indeed, Instagram has already banned the use of bots and other automated devices in its Terms and Conditions.

While many concerns about mental health problems are directed at Instagram, some everyday users insist that feedback poses no mental threat to their daily use. Alicia Gigi Ku, a student at HKUST, believes that she gets plenty of positive inspiration from the platform and pays very little attention to like counts. “I am taking it as a place where I post my portrait photography, build a visual identity and record memories, so for me ‘likes’ is more about whether my content is engaging rather than self-affirmation,” she said. In fact, an academic study published by Christina and Andrew in 2018 emphasises the importance of a user’s own personality and self-presentation in the use of Instagram, suggesting that users with more flexible personalities are less negatively affected by Instagram feedback than those who are more maladaptive.

‘I am taking it as a place where I post my portrait photography, build a visual identity and record memories, so for me ‘likes’ is more about whether my content is engaging rather than self-affirmation’,

                                   ——Alicia Gigi Ku, a student from Hong Kong

So does a like-free Instagram really make a difference?

The answer is not clear-cut; it really depends on what kind of role a user plays on the platform. A like-free Instagram does not mean that like counts no longer matter, as it only hides them from viewers, not from creators. Indeed, 41% of Canadian influencers complained that engagement dropped sharply after like counts were hidden, as Hootsuite reported. As long as like counts still matter this much, my flatmate Lauren would probably feel much better than those influencers do.

Dear Boris, where’s my democracy?

Dear Boris, where’s my democracy? — Facebook allows us to construct our lives online, but its dark side enables tech syndicates to back shady politicians all in an effort to hijack democracy.

[Source: author]

Joseph Douglas was scrolling through his phone, tackling a blustery Waterloo Bridge on the way to his university, when a Facebook ad made him pause: “We need an immigration system that ensures British young people more jobs.” Astonishment is what he recalled feeling as he relayed the experience to me. As the date of the Brexit referendum drew closer, he received more and more Vote Leave adverts, as though his identity were being deliberately exploited and targeted.

Many people, like Joseph, have felt that their phone can read their mind and deliver information directly into the palm of their hands. And they are spot-on. Tech platforms are where our lives are digitally constructed. Through online activity – scrolling, clicking, watching – we have created data profiles of ourselves: an autobiography up for grabs for anyone who wants it. Notably, political parties have taken advantage of our ‘data-selves’ for their own political gain. Joseph, a young student from the north who has never left England and does not own a passport, received numerous anti-immigration ads ridden with covert xenophobia. Here, political microtargeting is to blame. But are we okay with it?

In whichever way you understand ‘data’, it is obvious there is an increasing volume of it. Almost without our noticing, tech companies have curated the perfect marketing tool for government and the private sector to exploit our interests. And that’s exactly what they are doing. “The danger associated to Facebook is more insidious because [it’s] tied to redefining the social experience,” comments Paul-Olivier Dehaye, founder of PersonalData.IO. With the number of people active on social media on the rise, tech companies have developed ‘microtargeting’ algorithms to:

  1. Calculate what content will keep you scrolling, clicking and watching; and
  2. Target the adverts scattered through that content that you are most likely to engage with (a simplified sketch of both steps follows).
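As a purely hypothetical illustration of those two steps, and not any platform’s real code, a microtargeting pipeline can be reduced to scoring content and adverts against a topic-affinity profile; every name and number below is invented:

```python
# Step 1: score content for predicted engagement.
# Step 2: pick the advert a given profile is most likely to respond to.
# Invented data throughout; illustrative only.

user_profile = {"immigration": 0.9, "football": 0.6, "gardening": 0.1}

posts = [("jobs and immigration story", {"immigration"}),
         ("match highlights", {"football"})]

adverts = [("Take back control", {"immigration"}),
           ("New football boots", {"football"})]

def engagement_score(topics, profile):
    # Predicted engagement = sum of the user's affinities for the topics.
    return sum(profile.get(t, 0.0) for t in topics)

def best_advert(profile, ads):
    # Choose the ad whose topics best match this profile.
    return max(ads, key=lambda ad: engagement_score(ad[1], profile))

feed = sorted(posts, key=lambda p: engagement_score(p[1], user_profile),
              reverse=True)
print([title for title, _ in feed])      # most engaging content first
print(best_advert(user_profile, adverts)[0])  # ad slotted into that feed
```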

This has now created the risk of democracy being digitally undermined.

Technological practices in public office have resulted in increasing pressure on government transparency under the aegis of the ‘open data movement’. The Electoral Commission found that Brexit campaigns violated electoral law on multiple counts. The Vote Leave campaign, headed by PM Boris Johnson, exceeded the spending limit by £500,000. Vote Leave’s carefully worded denial claims the reports are “wholly inaccurate”, while Scotland Yard confirmed that a possible crime under section 123(4) of the Political Parties, Elections and Referendums Act 2000 was being investigated.

The BeLeave campaign spent over £675,000 with the digital data company Aggregate.IQ to capitalise on your data. The campaigns’ involvement with microtargeting has “brought psychological propaganda and technology together in this new powerful way”, says barrister and election expert Adam Wagner. A Labour employee, speaking on condition of anonymity, commented that this “technocratic approach” of the right has “destabilised the nation”. So what does this all mean? Psychological warfare.

A circuit board [Source: author]

Microtargeting is an algorithmic technique used by data analysts to construct comprehensive behavioural tracking profiles. Chances are that various profiles of you exist in the databases of multiple tech companies. The data analytics firm Cambridge Analytica became involved with the British military through the SCL Group, and later associated itself with the Brexit campaigns Vote Leave and Leave.EU. Cambridge Analytica and Aggregate.IQ were then effectively part of the British military apparatus yet existed outside British jurisdiction. Their main goal was to capture every tiny aspect of every voter’s information environment. This political microtargeting can be understood as the exercise of military strategy on a citizen populace. It was done by exploiting Facebook’s ad targeting algorithm, allowing Cambridge Analytica and Aggregate.IQ to obtain users’ ‘personality data’, drawing no objection from the public.

The unethically acquired material enabled analysts to categorise UK citizens as either ‘persuadable’ or ‘non-persuadable’ voters during the referendum. Joseph Douglas, a young person not especially politically active, was considered a vulnerable target. Individually crafted messages would appear on a voter’s Facebook feed, customised to exploit their interests and trigger individualised emotions in order to tip an election. So why do political campaigns choose this strategy? To bypass public scrutiny.

We are now in the midst of an information war, a time in which governments and private companies vie for individuals’ data. The 2016 US presidential election saw further questionable ties to Cambridge Analytica, including past employees working in the elected Trump administration. This large interconnected web traces a narrative of political malfeasance: billionaires buying companies in order to ‘work’ in the heart of government.

Though a question still remains: are we to blame? “The user is Facebook’s most biased curator,” says Taina Bucher, Communications and IT professor at the University of Copenhagen. Users matter. You matter, because it is your data, your online behaviour, networks, interests and communication that feed the data desired by the algorithms and the data analysts who commandeer them. Facebook thus became the foundation for psychological examination, enabling Cambridge Analytica and Vote Leave to target individuals. “Seems like good marketing to me,” one citizen commented.

Thus the innocent citizen fell victim to the immense influence of mainstream media. Brexit propaganda greatly shaped politicians and campaigners, as factions devoted substantial effort to tailoring the press agenda in their favour. Consequently, direct digital communication is where campaigns devoted most of their resources. Vote Leave put out ‘nearly a billion targeted digital adverts’, wrote the campaign’s director Dominic Cummings, and spent around 98% of its budget on what is essentially marketing. In Vote Leave’s defence, the anonymous Labour employee commented: “I agree politics always has an element of marketing. But good policy should often stand above that.” Cummings took this technocratic idea of politics further, advising others to “hire physicists, not communications people”.

Yet, technology is not to blame. It should be made clear this misconduct lies with those in power. These algorithms are neither good nor bad but used as a catalyst by political figures to optimise their own power. Facebook is no enemy either. The social media platform adopts the ethic of continuous change where its business plan is to constantly develop and improve algorithms. Instead, data transparency of private companies and government should be encouraged.

A system of public surveillance has now been established. Data science has been twisted into a tool to manipulate the masses by exploiting targeted sentiments around nationalism, animal rights, climate activism and even ‘tea-culture’. This has all been the product of billionaires erecting cash-rich entities to unaccountably exploit British citizens for political gain. No objection has come from those whose data has been manipulated and turned against them, even though their lives are undeniably affected by this catastrophe.

Britain now faces the consequences of a manipulative and devious campaign that will shape British politics for years to come. We don’t have the time or energy to think critically, day to day, about what we view or how we exist online. But what we can do is understand the media system’s contextual nature in order to seize back public control and, effectively, democracy.

Gender bias in Artificial Intelligence: recent research unmasks the truth

The “Gender Shades Project” fights against algorithmic bias. Courtesy of Gender Shades

Applying for a job, looking for low-interest mortgage, in need of insurance, passing through an airport security check? Chances are that if you are a white male you’ll be called in for an interview, you’ll secure a great rate for your loan, you’ll be offered an interesting insurance package and you’ll breeze through to the departure gate. Chances are that if you are a dark-skinned female you will have completely different results.

With the rapid advancement of new technologies and computer science, artificial intelligence – the ability of machines to perform tasks normally associated with humans – has become commonplace. Based on complex algorithms, artificial intelligence is meant to make life easier, to facilitate everyday situations and help us save time and energy by automating intricate tasks with step-by-step instructions to a computer. As more and more businesses and institutions rely on artificial intelligence to influence decisions and facilitate behaviours, the need to build fairer and more accurate algorithms is also emerging. Obviously, the humans creating the programmes driving artificial intelligence are not perfect. So, what happens when human imperfections seep into the algorithms to reflect or uncover social issues?

Miss gendered

A recent ground-breaking study has revealed that machine algorithms deployed in facial recognition software contain biased data resulting in discrimination against women – especially dark-skinned women. Joy Buolamwini, a young MIT student and founder of the Algorithmic Justice League and advocate against ‘Algorithmic Bias’, was working on a project regarding automated facial detection when she noticed that the computer did not recognize her face. Yet, there was no problem when her lighter-skinned friend tested the programme. Then, when Buolamwini put on a white mask, the computer finally detected a face. After further testing multiple different algorithms with other pictures of herself, the results followed her initial assumption of computer bias. Indeed, two of the facial analysis demos didn’t detect her face at all. “And the other two misgendered me!” she exclaims in a video that explains her “Gender Shades” research project which evaluates and highlights the serious issue of racial and gender discrimination in automated facial analysis and datasets.

Joy Buolamwini – Wikimania 2018. Courtesy of Creative Commons

After more in-depth research, testing and analysis, “Gender Shades” revealed disturbing flaws that point to gender and racial discrimination in facial recognition algorithms. The project tested 1,270 photographs of people from various European and African countries, grouped by gender and skin type, against facial recognition programmes from three different well-known companies. Overall, the programmes systematically performed better on males than on females, and better still on lighter-skinned subjects. Cross-evaluating gender and skin type showed that black women were the group with the least successful results: not one of the three programmes was able to correctly classify even 80% of black female faces. That means that as many as two out of five black women risk being inaccurately classified by artificial intelligence programmes.
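The auditing method itself is straightforward to sketch: run the classifier over labelled photos, then compare accuracy per gender and skin-type group. The records below are invented; only the procedure, not the numbers, reflects the Gender Shades methodology:

```python
# Per-group accuracy audit, in the spirit of Gender Shades.
# Each record: (true_gender, skin_type, predicted_gender). Invented data.

from collections import defaultdict

results = [
    ("female", "darker",  "male"),      # a misclassification
    ("female", "darker",  "female"),
    ("female", "lighter", "female"),
    ("male",   "darker",  "male"),
    ("male",   "lighter", "male"),
]

totals, correct = defaultdict(int), defaultdict(int)
for true_gender, skin, predicted in results:
    group = (true_gender, skin)
    totals[group] += 1
    correct[group] += (predicted == true_gender)

for group in sorted(totals):
    acc = correct[group] / totals[group]
    print(f"{group}: {acc:.0%} accuracy on {totals[group]} photos")
```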

In practical terms this can translate into some dubious consequences. What if a company uses inaccurate results to select targets for marketing, to screen job applications or to assess a student loan? In America, California Representative Jimmy Gomez, a member of the Congressional Hispanic Caucus who serves on the House Committee on Oversight and Reform, backed a ban to temporarily stop the use of facial recognition by law enforcement officials. If the software provides inaccurate results, “that could mean even more trouble for black men and boys, who are already more than twice as likely to die during an encounter with police than their white counterparts,” he said. As Cathy O’Neil, mathematician, data expert and author of the best-seller Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, points out, “these algorithms are reinforcing discrimination. Models are propping up the lucky and punishing the downtrodden, creating a ‘toxic cocktail for democracy.’”

Bias creep

The common assumption is that human emotions and flaws would not be present in objective machine learning. But since AI systems depend on heavily complex datasets run through programmed algorithms, it follows, logically, that who arranges those datasets and how they are arranged can influence the results. Today, the developers of artificial intelligence are largely white men and, intentionally or not, they reflect their own values, stereotypes and thinking – the coded gaze, as Buolamwini calls it – which are then embedded in these algorithms. And when existing bias is entrenched in code, we face potentially damaging outcomes. O’Neil asserts that everyone should question algorithms’ fairness, not just computer scientists and coders. “Contrary to popular opinion that algorithms are purely objective, models are opinions embedded in mathematics.”

The risk is that having bias creep into seemingly objective and broadly used decision-making systems can seriously undermine the great strides we have made in empowering women and overcoming racial discrimination in recent decades. “Gender balance in machine learning is crucial to prevent algorithms from perpetuating gender ideologies that disadvantage women,” says Dr Susan Leavy, specialist in data analytics and researcher at the University of Dublin.

As the ‘Gender Shades’ report concludes, inclusive product testing and reporting are necessary if the industry is to create systems that work well for all of humanity – an outcome that has fuelled doubt about the assumed neutrality of artificial intelligence and automation as a whole. Microsoft and IBM, whose programmes were both used in the project, responded positively by committing to changes in their facial recognition software. To deal with possible sources of bias, IBM, for example, has several ongoing projects addressing dataset bias in facial analysis – covering not only gender and skin type, but also age groups, different ethnicities, and factors such as pose, illumination, resolution, expression and decoration. Microsoft, for its part, recognises that these technologies are critical for the industry and is taking action to improve the accuracy of its facial recognition technology and to recognise, understand and remove bias. These are all much-needed steps in the right direction if we are to use data responsibly and wisely and not have artificial intelligence work against us.

Robots are Writing the News: Is There a Reason for Fear?

Are you afraid of robots? I don’t blame you. From killer robots in blockbuster films like The Terminator to BBC headlines stating ‘Robot automation will ‘take 800 million jobs by 2030’’, it can seem hard to see the positives. Robots are now infiltrating your life by writing the articles you read every day; is this really something to fear?

Photograph: Pixabay

It’s called Automated Journalism, and it’s growing fast without most of us even knowing it exists. Automated Journalism is an algorithmic program that can write a news story with little need for human interference. The Washington Post’s ‘robot reporter’ has already published over 850 articles, and according to Infolab researcher Kris Hammond, there are predictions that 90% of all news read by the general public will be generated by AI by 2025.

Have you heard of Automated Journalism? Despite it becoming so widespread, most of us are completely unaware of its existence. In a survey conducted at King’s College London with 35 interviewees, only 34% were aware of the existence of Automated Journalism. With news giants such as The Wall Street Journal, The Associated Press and Forbes jumping on the bandwagon and using Automated Journalism, there is a major imbalance between the growing use of robots in journalism and the general public’s knowledge of this.


In a technology-crazed society, artificial intelligence is part of our everyday lives in one form or another. From mobile banking to product recommendations on Amazon, we rely heavily on AI for many day-to-day tasks. Since the 1960s, when machines began to replace workers in American automotive factories, there has been a creeping ascent of algorithm-run programs into more and more job sectors. But a creative, thought-driven career such as journalism could not be replaced by an algorithm, could it?

Photograph: Pixabay

The well-established global news agency the Associated Press has developed an algorithmic program to produce Automated Journalism. Using natural language generation (NLG), the news organisation publishes thousands of articles that start as raw data and are transformed by its algorithm into a readable story. These articles consist of nothing more than the algorithm’s article layout filled with the data provided about the event.
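The AP’s system is proprietary, but the underlying idea of template-based generation can be sketched in a few lines: structured data in, readable sentence out. The game record and phrasing rules here are invented for illustration:

```python
# Minimal sketch of template-based news generation. Real systems use far
# richer templates and grammar handling; this only shows the principle.

game = {"home": "Trenton Thunder", "away": "Portland Sea Dogs",
        "home_score": 5, "away_score": 3, "innings": 9}

def recap(g):
    # Work out winner, loser and margin from the raw data.
    winner, loser = ((g["home"], g["away"])
                     if g["home_score"] > g["away_score"]
                     else (g["away"], g["home"]))
    margin = abs(g["home_score"] - g["away_score"])
    # Vary the verb slightly depending on how close the game was.
    verb = "edged" if margin <= 2 else "beat"
    high, low = max(g["home_score"], g["away_score"]), min(g["home_score"], g["away_score"])
    return f"{winner} {verb} {loser} {high}-{low} in a {g['innings']}-inning game."

print(recap(game))
# -> Trenton Thunder edged Portland Sea Dogs 5-3 in a 9-inning game.
```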

Sebastian Tranæus, a tech lead at Noa Technologies Inc, said he would prefer news reports on sport and elections to be written by algorithms. “An algorithm can’t prefer a team or candidate,” Tranæus stated; “I can trust them to give me unbiased facts about the game or election.” Without human emotion attached to these statistics-based articles, the result is nothing more than a factual account of the event. An algorithm won’t be a die-hard Chelsea supporter who struggles to critique their performance in a match report.

However, there is still a great deal of discomfort with the idea of an algorithm producing our news. In the King’s College London survey, 84% of interviewees stated that they do not trust Automated Journalism and would prefer news written by a human. “The idea makes me very uncomfortable,” stated Juliette Garside, investigations correspondent at The Guardian. Despite her own use of algorithms to assist her research, the journalist said she did not want to see full feature articles written by an algorithm. “For some writing tasks robots may be appropriate,” Garside added, such as short articles about “anticipated events”.

The Associated Press uses its AI reporter program to produce articles of exactly this nature; the algorithm writes up Minor League Baseball scores. “It’s easier to read a computer-generated article,” reported Melania, a 24-year-old LSE graduate, in an interview: “what I want to get from an article is information, not a spiritual experience.”

For now, Automated Journalism only covers these kinds of events. Investigative journalism will always be a human’s job: humans have the knowledge, feeling and persuasion to produce a good, moving piece. “Humans write with some kind of bias,” admitted Garside. But this is not necessarily a bad thing. Bias can create unique, passionate articles that are more likely to be read.

Photograph: by the Author

Is Automated Journalism replacing journalists? At its current level of advancement, no. With the assistance of The Guardian’s in-house robot reporters, Garside is able to save time on research, as the program searches through vast data logs for her. “It can save time by matching names of persons of interest against hundreds of thousands of names in the data.” Journalists and Automated Journalism programs can work alongside each other, helping to improve the journalist’s work rather than replacing it.
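The Guardian’s in-house tooling is not public, but the kind of matching Garside describes can be roughly illustrated with Python’s standard library, fuzzily comparing a watchlist against a larger list of names; all the names below are invented:

```python
# Rough illustration of fuzzy name-matching against a data log,
# using only the standard library. Invented names; not the Guardian's tool.
from difflib import get_close_matches

persons_of_interest = ["John Smith", "Maria Garcia"]
leaked_names = ["J. Smith", "Jon Smith", "Maria Garcia-Lopez", "Alex Chen"]

for person in persons_of_interest:
    hits = get_close_matches(person, leaked_names, n=3, cutoff=0.6)
    if hits:
        print(f"{person}: possible matches {hits}")
```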

So, should we be afraid of robots writing the news?

The answer is no, not really. There is no Killer Robot taking all our jobs as Hollywood would have us believe; the ‘Killer Robot’ is merely informing us about minor league baseball. At least for now.