When Recommendation becomes a threat

Warren Smith, a 25-year-old, sat in a chair with a cup of tea in hand, scrolling through Instagram to unwind after work. 'I can't believe who I found,' he exclaimed when he opened the recommendation page. He had stumbled across a post by a high school classmate, and in that classmate's following list he found even more old school friends. His first reaction was shock: they had lost contact years ago. He lay there and scrolled for another hour, but the laughter faded; instead, he frowned at the screen, amazed and unsettled at the same time. He wondered how Instagram could know him so well.

Warren Smith is not the only one to have found old friends on Instagram. Anyone can rebuild relationships with people they have lost touch with and stay connected with their followers. 'Instagram relies on machine learning to create a unique feed for each user,' according to TechCrunch, so every user experiences the platform differently. It is user-targeted, and it seems to know each user very well. Instagram's algorithm calculates recommendations from each user's history on the app, such as their searches and the content they have liked. This information helps the algorithm work out who a person is interested in and who they already know.
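Instagram has never published how its friend suggestions are actually built, but the general idea of surfacing people who share many of your existing connections can be sketched in a few lines of Python. Everything below, the follow graph, the names and the scoring, is an invented illustration rather than Instagram's real method:

```python
from collections import Counter

def suggest_friends(user, follows, top_n=5):
    """Rank accounts the user doesn't follow yet by how many
    of the user's existing connections already follow them."""
    my_follows = follows.get(user, set())
    counts = Counter()
    for friend in my_follows:
        for candidate in follows.get(friend, set()):
            if candidate != user and candidate not in my_follows:
                counts[candidate] += 1  # one shared connection found
    return counts.most_common(top_n)

# Hypothetical follow graph: Warren follows one classmate, who follows others.
follows = {
    "warren": {"classmate_a"},
    "classmate_a": {"classmate_b", "classmate_c", "warren"},
    "classmate_b": {"classmate_c"},
}
print(suggest_friends("warren", follows))
# e.g. [('classmate_b', 1), ('classmate_c', 1)]
```

A real system would weigh many more signals (searches, likes, contacts, location), but the principle of mining overlap in the social graph is the same.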

Instagram can recommend potential friends to Warren and others because it has attracted a huge number of users. According to App Download and Usage Statistics data, Instagram is the second most downloaded free app in the App Store, and by 2019 more than 1 billion people were using it every month. Using Instagram has become a social phenomenon in countries with widespread technology services; the more users the platform has, the more chances it has to connect people with one another. Instagram has become an everyday tool for socialising and building networks. It forms such a close relationship with its users that it may know them better than their own friends do.

Warren Smith felt worried when he realised, from the recommendations, that Instagram seemed to know his network. His worry is well founded: the recommendation function raises real safety issues. According to BBC News, a father accused Instagram of helping to kill his daughter. In 2017 Molly Russell, a 14-year-old girl, took her own life. When her parents went through her Instagram account, they found material about suicide. Instagram had promoted content about depression and suicide to her, so they believed the platform bore responsibility for their daughter's death by pushing distressing material. The case raises a safety question about Instagram's algorithm: it only calculates what users want to see, not whether that content is harmful.

Molly Russell's story raises awareness of the threat a recommendation can pose. Can we really trust Instagram as a friend? Kevin, a 21-year-old management student from Korea, looked unsatisfied when asked about his experience of the platform. He put down his cup of water and said seriously: 'I always see advertisements come up while I'm chilling on Instagram. It makes me feel like, while we are trusting Instagram and seeing it as a friend, Instagram is just trying to extract revenue from us.' His experience suggests that Instagram's aim is not to befriend users but to make money. Because the platform is user-targeted, businesses use it to advertise themselves; there are over 25 million business accounts on Instagram, according to Omnipresence. Instagram serves advertisements based on its model of each user's preferences, and its aim is to keep users on the app for as long as possible: the longer they stay, the more advertisements they see. Molly Russell trusted Instagram with her preferences, but the promoted content led her towards suicide instead of offering help. When an algorithm only calculates, without emotion, it becomes important for users to be selective about promoted content and to protect their personal information.

Kevin going through the Instagram recommendation page before sleep. Photo: by author

To protect ourselves from these safety worries, it is important to understand how Instagram actually comes to know people's networks. When people find that likes, comments, reposts and direct messages make communication and interaction easier, they are more likely to use the app; and the more they use Instagram, the more personal information they hand over. Emily, a 34-year-old woman from the UK who works at an advertising company, laughed when asked why she uses Instagram, thought for a few seconds and said with resignation: 'I feel like Instagram is a necessary tool for socialising, especially for younger generations like my friends and colleagues. If I want to interact and get along with them, it seems like I need to have an Instagram.' Emily is not alone: four of the seven people I interviewed said they use Instagram to socialise. When users interact with their friends on Instagram, they allow the platform to map their networks and graph their relationships. Users seem to hand over this information passively, without too many worries, in exchange for using the platform. They give Instagram the chance to know them.

Warren Smith feels conflicted when he uses Instagram. The recommendations did help him find his friends, but they extracted his personal data at the same time. This reporting shows that Instagram's recommendations carry real risks for people's lives. Users, though, still have the right to choose what information they share with Instagram.

Will a like-free Instagram attract more users or not?

My flatmate Lauren is an ordinary Instagram user with 750 followers. She was lying on the sofa one relaxed Friday night, scrolling through Instagram as usual. 'Why have I only got 37 likes for this post so far? I posted it yesterday and it's been over a day!' The post she was complaining about was a selfie with me and another friend. She had posted it without any filters, simply adding the caption 'Love these two' below the picture.

However, as a dedicated Instagram user with 750 active followers, she decided to repost. This time she added the 'Sierra' filter, tagged me and the other friend in the photo, and wrote: 'Loving spending time with my girls at uni (two hearts here). Will miss this next year (a flushing emoji here).' The new post eventually reached 102 likes, 65 more than before, for a little extra effort. She spent the rest of the Friday evening constantly checking her account and enjoying all the likes and comments.

Instagram feed

As you can see, the number of 'likes' on Instagram can shape my flatmate's daily life, bringing both happiness and anxiety. What was supposed to be a platform where people simply share good food or beautiful views has, according to the Guardian, ended up as a competitive arena where everyone is eager to show the best side of their life. 'I get upset by a post that doesn't get enough likes and, more often than not, I'll just delete it,' as my flatmate put it.

A 2019 study of Instagram by the digital consumer intelligence company Brandwatch shows that the platform currently has more than 1 billion monthly active users, and that 63% of them use the app every day, making it the second most popular and engaged social media platform after Facebook. In 2016, however, Instagram made a big change to its algorithm, switching to a non-chronological feed in which content is shown based on how likely you are to be interested in it, your interaction with the account that posted it, and the timeliness of the post, according to the social media management platform Hootsuite. Like other platform owners, though, Instagram does not publish details of its algorithmic architecture.
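Instagram keeps the real formula secret, but the three signals Hootsuite describes (predicted interest, your relationship with the poster, and recency) can be illustrated with a toy scoring function. The weights and fields below are invented for illustration and are not Instagram's actual ranking:

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    predicted_interest: float       # 0..1, how likely you are to engage
    interactions_with_author: int   # your past likes/comments/DMs with this account
    hours_old: float

def feed_score(post, w_interest=0.5, w_relationship=0.3, w_recency=0.2):
    """Toy non-chronological ranking: newer posts, from accounts you interact
    with, about things you seem to like, float towards the top of the feed."""
    relationship = math.log1p(post.interactions_with_author)
    recency = 1.0 / (1.0 + post.hours_old)
    return (w_interest * post.predicted_interest
            + w_relationship * relationship
            + w_recency * recency)

# Two invented posts: an interesting but old one vs. a close friend's fresh one.
posts = [Post(0.9, 2, 30.0), Post(0.4, 40, 2.0)]
ranked = sorted(posts, key=feed_score, reverse=True)
```

The point of the sketch is simply that the ordering you see is a weighted trade-off between signals, not a timeline.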

This is where rumours start: users question how the algorithm works, as some posts are promoted with fake 'likes' for marketing purposes, spreading misinformation that can, to some extent, affect users' mental health. The hidden algorithm determines the order in which you see posts. Even though it is supposed to surface the 'best' posts, it inevitably makes users miss some content, as other posts get buried in their feeds. Posts with high like counts, mostly those from influencers or celebrities, are therefore likely to be viewed first. As Natalia Anio, a film studies student and active video creator with 7,375 followers on Instagram, put it: 'Instagram definitely used to affect me as I always compared my "boring" life with other girls' or celebrities' colourful life, especially their "wow" body images, which made me not confident and not love myself for a while.' She now only follows friends, family and people she finds inspirational.

'Instagram definitely used to affect me as I always compared my "boring" life with other girls' or celebrities' colourful life, especially their "wow" body images, which made me not confident and not love myself for a while.'

——Natalia Anio, a film studies student

Natalia's Instagram account, shared with her permission

In fact, according to a report published by the RSPH and the Young Health Movement in 2017, Instagram was ranked, together with Snapchat, as the social media platform most detrimental to young people's mental health and wellbeing. As an image-focused platform, the report suggests, Instagram is more likely than other popular social media to put users at risk of loneliness, depression and anxiety about their bodies.

In an effort to address the issue, Instagram head Adam Mosseri announced a test of a hidden-likes feature at Facebook's F8 developer conference in April 2019. It was meant to create a 'less pressurised environment', as he said at the conference, and to shift attention away from like counts. In a 2018 paper, however, the scholar Kelly Cotter suggested that some influencers believe the Instagram algorithm can accurately evaluate relationships between users through authentic connection such as commenting and replying, not only through likes. This suggests that it is not the 'like' that matters most to engagement, but comments and hashtags, which genuinely affect the algorithm and distinguish posts from those made for marketing purposes. Instagram has, in fact, already banned the use of bots and other automated devices in its Terms and Conditions.

While many mental health concerns are directed at Instagram, some ordinary users find that feedback poses no threat to their daily use. Alicia Gigi Ku, a student at HKUST, believes she gets plenty of positive inspiration from the platform and pays very little attention to like counts. 'I am taking it as a place where I post my portrait photography, build a visual identity and record memories, so for me "likes" is more about whether my content is engaging rather than self-affirmation,' she said. In fact, an academic article published by Christina and Andrew in 2018 emphasises the importance of a user's own personality and self-presentation in their use of Instagram. It suggests that users with more flexible personalities are less negatively affected by Instagram feedback than those who are more maladaptive.

‘I am taking it as a place where I post my portrait photography, build a visual identity and record memories, so for me ‘likes’ is more about whether my content is engaging rather than self-affirmation’,

                                   ——Alicia Gigi Ku, a student from Hong Kong

So does a like-free Instagram really make a difference?

The answer is not clear-cut; it depends on the role a user plays on the platform. A like-free Instagram does not mean that like counts no longer matter: it only hides them from viewers, not from creators. Indeed, 41% of Canadian influencers complained that engagement dropped sharply after like counts were hidden, as Hootsuite reported. Like counts still really matter, but my flatmate Lauren would probably feel much better about them than those influencers do.

Dear Boris, where’s my democracy?

Dear Boris, where’s my democracy? — Facebook allows us to construct our lives online, but its dark side enables tech syndicates to back shady politicians all in an effort to hijack democracy.

[Source: author]

Joseph Douglas was scrolling through his phone, tackling a blustery Waterloo Bridge on the way to his university, when a Facebook ad made him pause: "We need an immigration system that ensures British young people more jobs." Astonishment was what he remembered feeling, he told me as he relayed his experience. As the date of the Brexit referendum drew closer, he received more and more Vote Leave adverts, as though his identity was being deliberately exploited and targeted.

Many people, like Joseph, have felt that their phone can read their mind and deliver information directly into the palm of their hands. And they are spot-on. Tech platforms are where our lives are digitally constructed. Through online activity – scrolling, clicking, watching – we have created a data profile of ourselves: an autobiography up for grabs for anyone who wants it. Notably, political parties have taken advantage of our 'data-selves' for their own political gain. Joseph, a young student from the north who has never left England and does not own a passport, received numerous anti-immigration ads ridden with covert xenophobia. Here, political microtargeting is to blame. But are we okay with it?

In whichever way you understand 'data', it is obvious there is an increasing volume of it. Unknowingly, tech companies have curated the perfect marketing tool for government and the private sector to exploit our interests. And that is exactly what is happening. "The danger associated to Facebook is more insidious because [it's] tied to redefining the social experience," comments Paul-Olivier Dehaye, founder of PersonalData.IO. With the number of people active on social media on the rise, tech companies have developed 'microtargeting' algorithms to:

  1. calculate what content will keep you scrolling, clicking and watching; and
  2. target you with the adverts, scattered throughout that content, that you are most likely to engage with.

This has now created the risk of democracy being digitally undermined.
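Neither Facebook nor the campaigns publish their targeting code, but the two-step logic above can be sketched roughly as follows. The engagement model, user attributes and ad inventory are entirely hypothetical:

```python
def predict_engagement(user, item):
    """Hypothetical engagement model: overlap between a user's inferred
    interests and an item's topic tags stands in for a trained classifier."""
    return len(user["interests"] & item["topics"]) / max(len(item["topics"]), 1)

def build_feed(user, posts, ads, n_posts=5, n_ads=2):
    # Step 1: rank content by how likely it is to keep this user scrolling.
    feed = sorted(posts, key=lambda p: predict_engagement(user, p), reverse=True)[:n_posts]
    # Step 2: slot in the adverts this particular user is most likely to engage with.
    targeted = sorted(ads, key=lambda a: predict_engagement(user, a), reverse=True)[:n_ads]
    return feed + targeted

# Entirely invented profile and inventory.
user = {"interests": {"jobs", "immigration", "football"}}
posts = [{"id": "p1", "topics": {"football"}}, {"id": "p2", "topics": {"cooking"}}]
ads = [{"id": "ad_leave", "topics": {"immigration", "jobs"}},
       {"id": "ad_shoes", "topics": {"fashion"}}]
print([item["id"] for item in build_feed(user, posts, ads)])
# -> ['p1', 'p2', 'ad_leave', 'ad_shoes']
```

The danger lies not in the few lines of arithmetic but in what feeds them: the richer the behavioural profile, the more precisely the second step can be aimed at someone like Joseph.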

The use of such technological practices in public office has resulted in increasing pressure for government transparency under the aegis of the 'open data' movement. The Electoral Commission found that Brexit campaigns violated electoral law on multiple counts: the Vote Leave campaign, fronted by now-Prime Minister Boris Johnson, exceeded its spending limit by £500,000. Vote Leave's carefully worded denial claims the reports are "wholly inaccurate", while Scotland Yard confirmed it is investigating a possible offence under section 123(4) of the Political Parties, Elections and Referendums Act 2000.

The BeLeave campaign spent over £675,000 with the digital data company Aggregate.IQ to capitalise on your data. The campaigns' involvement with microtargeting has "brought psychological propaganda and technology together in this new powerful way," says barrister and election expert Adam Wagner. A Labour employee, speaking on condition of anonymity, commented that this "technocratic approach" of the right has "destabilised the nation". So what does this all mean? Psychological warfare.

A circuit board
[Source: author]

Microtargeting is an algorithmic technique created and used by data analysts to construct comprehensive behavioural tracking profiles. Chances are that various profiles of you exist in the databases of multiple tech companies. The data analytics firm Cambridge Analytica became involved with the British military through the SCL Group and was later associated with the Brexit campaigns Vote Leave and Leave.EU. Cambridge Analytica and Aggregate.IQ were thus effectively linked to the British military yet operated outside British jurisdiction. Their main goal was to capture every tiny aspect of every voter's information environment; this kind of political microtargeting amounts to exercising military strategies on a citizen populace. It was done by exploiting Facebook's ad targeting algorithm, allowing Cambridge Analytica and Aggregate.IQ to obtain users' 'personality data' with no objection raised by the public.

The unethically acquired material enabled analysts to categorise UK citizens as either 'persuadable' or 'non-persuadable' voters during the referendum. Joseph Douglas, a young person not especially politically active, was considered a vulnerable target. Individually crafted messages would appear on a voter's Facebook feed, customised to exploit their interests and press individualised emotional triggers in order to tip an election. So why do political campaigns choose this strategy? To bypass public scrutiny.

We are now in the midst of an information war: in 2019, governments and private companies vie for individuals' data. The 2016 US presidential election saw further questionable ties to Cambridge Analytica, including former employees working in the incoming Trump administration. This large interconnected web traces a narrative of political malfeasance: billionaires buying companies in order to 'work' at the heart of government.

Though a question still remains: are we to blame? "The user is Facebook's most biased curator," says Taina Bucher, Professor of Communications and IT at the University of Copenhagen. Users matter; you matter, because it is your data, your online behaviour, networks, interests and communication that feed the profiles algorithms and data analysts commandeer. Facebook thus became the foundation for psychological examination, enabling Cambridge Analytica and Vote Leave to target individuals. "Seems like good marketing to me," one citizen commented.

Thus, the innocent citizen fell victim to the immense influence of mainstream media. Brexit propaganda itself greatly shaped politicians and campaigners, as factions devoted substantial effort to tailoring the press agenda in their favour. Consequently, direct digital communication is where campaigns devoted most of their resources. Vote Leave put out 'nearly a billion targeted digital adverts', wrote the campaign's director Dominic Cummings, and spent around 98% of its budget on what is essentially marketing. In Vote Leave's defence, the anonymous Labour employee commented, "I agree politics always has an element of marketing. But good policy should often stand above that". Cummings took this technocratic idea of politics further and advised others to "hire physicists, not communications people".

Yet technology itself is not to blame; it should be made clear that this misconduct lies with those in power. The algorithms are neither good nor bad, but they are used as a catalyst by political figures to maximise their own power. Facebook is not the enemy either: the platform follows an ethic of continuous change, and its business plan is to constantly develop and improve its algorithms. Instead, data transparency from private companies and government should be encouraged.

A system of public surveillance has now been established. Data science has been twisted into a tool to manipulate the masses by exploiting targeted phenomena such as nationalism, animal rights enthusiasm, climate activism and even 'tea culture'. This has all been the product of billionaires erecting cash-rich entities to exploit British citizens, unaccountably, for political gain. No objection was sought from those whose data has been manipulated and turned against them, even though these are the very people whose lives are undeniably affected by this catastrophe.

Britain now faces the consequences of a manipulative and devious campaign that will shape British politics for years to come. Few of us have the time or energy to think critically about what we view or how we exist online each day. But what we can do is understand the contextual nature of the media system in order to seize back public control and, in effect, democracy.

Gender bias in Artificial Intelligence: recent research unmasks the truth

The “Gender Shades Project” fights against algorithmic bias. Courtesy of Gender Shades

Applying for a job, looking for a low-interest mortgage, in need of insurance, passing through an airport security check? Chances are that if you are a white male you'll be called in for an interview, you'll secure a great rate for your loan, you'll be offered an interesting insurance package and you'll breeze through to the departure gate. Chances are that if you are a dark-skinned female you will have completely different results.

With the rapid advancement of new technologies and computer science, artificial intelligence – the ability of machines to perform tasks normally associated with humans – has become commonplace. Based on complex algorithms, artificial intelligence is meant to make life easier, to facilitate everyday situations and help us save time and energy by automating intricate tasks with step-by-step instructions to a computer. As more and more businesses and institutions rely on artificial intelligence to influence decisions and facilitate behaviours, the need to build fairer and more accurate algorithms is also emerging. Obviously, the humans creating the programmes driving artificial intelligence are not perfect. So, what happens when human imperfections seep into the algorithms to reflect or uncover social issues?

Miss gendered

A recent ground-breaking study has revealed that machine algorithms deployed in facial recognition software contain biased data, resulting in discrimination against women – especially dark-skinned women. Joy Buolamwini, a young MIT student, founder of the Algorithmic Justice League and an advocate against 'algorithmic bias', was working on a project on automated facial detection when she noticed that the computer did not recognise her face. Yet there was no problem when her lighter-skinned friend tested the programme. Then, when Buolamwini put on a white mask, the computer finally detected a face. After testing multiple different algorithms with other pictures of herself, the results confirmed her initial suspicion of computer bias. Indeed, two of the facial analysis demos didn't detect her face at all. "And the other two misgendered me!" she exclaims in a video explaining her 'Gender Shades' research project, which evaluates and highlights the serious issue of racial and gender discrimination in automated facial analysis and datasets.

Joy Buolamwini – Wikimania 2018. Courtesy of Creative Commons

After more in-depth research, tests and analysis, 'Gender Shades' reveals disturbing flaws that point to gender and racial discrimination in facial recognition algorithms. The project tested 1,270 photographs of people from various European and African countries, grouped by gender and skin type, against facial recognition programmes from three well-known companies. Overall, the programmes systematically performed better at detecting males than females, and better still for lighter-skinned subjects. Cross-evaluating gender and skin type showed that black women were the group with the least successful results: not one of the three programmes was able to correctly classify 80% of black female faces. That means that as many as two out of five black women risk being inaccurately classified by artificial intelligence programmes.
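At its core, the Gender Shades audit method is simple: compute accuracy separately for each intersectional subgroup instead of reporting one overall score. A minimal sketch of that step might look like this, using invented sample records rather than the real benchmark:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of dicts with 'gender', 'skin_type',
    'true_label' and 'predicted_label' for one classifier."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        group = (r["gender"], r["skin_type"])
        totals[group] += 1
        correct[group] += int(r["predicted_label"] == r["true_label"])
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit data, not the real Gender Shades benchmark.
records = [
    {"gender": "female", "skin_type": "darker", "true_label": "female", "predicted_label": "male"},
    {"gender": "female", "skin_type": "darker", "true_label": "female", "predicted_label": "female"},
    {"gender": "male", "skin_type": "lighter", "true_label": "male", "predicted_label": "male"},
]
print(subgroup_accuracy(records))
# e.g. {('female', 'darker'): 0.5, ('male', 'lighter'): 1.0}
```

A single headline accuracy figure would hide exactly the disparity this breakdown exposes.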

In practical terms this can translate into some dubious consequences. What if a company uses inaccurate results to select targets for marketing products, to screen job applications or to assess a student loan? In the United States, California Representative Jimmy Gomez, a member of the Congressional Hispanic Caucus who serves on the House Committee on Oversight and Reform, backed a temporary ban on the use of facial recognition by law enforcement officials. If the software provides inaccurate results, "that could mean even more trouble for black men and boys, who are already more than twice as likely to die during an encounter with police than their white counterparts," he said. As Cathy O'Neil, mathematician, data expert and author of the best-seller Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, points out, "these algorithms are reinforcing discrimination. Models are propping up the lucky and punishing the downtrodden, creating a 'toxic cocktail for democracy.'"

Bias creep

Common thinking holds that human emotions and flaws would not be present in objective machine learning. But since artificial intelligence systems depend on highly complex datasets run through programmed algorithms, it follows that who arranges those datasets, and how, can influence the results. Today, the developers of artificial intelligence are largely white men and, intentionally or not, they reflect their own values, stereotypes and thinking, the 'coded gaze' as Buolamwini calls it, which are then embedded in these algorithms. And when existing bias is entrenched in code, we get potentially damaging outcomes. O'Neil asserts that everyone should question their fairness, not just computer scientists and coders: "Contrary to popular opinion that algorithms are purely objective, models are opinions embedded in mathematics."

The risk is that having bias creep into seemingly objective and broadly used decision-making systems can seriously undermine the great strides we have made in empowering women and overcoming racial discrimination in recent decades. “Gender balance in machine learning is crucial to prevent algorithms from perpetuating gender ideologies that disadvantage women,” says Dr Susan Leavy, specialist in data analytics and researcher at the University of Dublin.

As the 'Gender Shades' report concludes, inclusive product testing and reporting are necessary if the industry is to create systems that work well for all of humanity. The findings have fuelled doubt about the assumed neutrality of artificial intelligence and automation as a whole. Microsoft and IBM, whose programmes were both used in the project, responded positively by committing to changes in their facial recognition software. To deal with possible sources of bias, IBM, for example, has several ongoing projects addressing dataset bias in facial analysis, covering not only gender and skin type but also age groups, different ethnicities, and factors such as pose, illumination, resolution, expression and decoration. Microsoft, for its part, recognises that these technologies are critical for the industry and is taking action to improve the accuracy of its facial recognition technology and to recognise, understand and remove bias. These are much-needed steps in the right direction if we are to use data responsibly and wisely and not have artificial intelligence work against us.

Robots are Writing the News: Is There a Reason for Fear?

Are you afraid of robots? I don't blame you. From killer robots in blockbuster films like The Terminator to BBC headlines stating that 'Robot automation will "take 800 million jobs by 2030"', it can be hard to see the positives. Robots are now infiltrating your life by writing the articles you read every day; is this really something to fear?

Photograph: Pixabay

It's called Automated Journalism, and it's growing fast without most of us even knowing it exists. Automated Journalism is an algorithmic program that can write a news story with little need for human interference. The Washington Post's 'robot reporter' has already published over 850 articles, and there are predictions that 90% of all news read by the general public will be generated by AI by 2025, according to Infolab researcher Kris Hammond.

Have you heard of Automated Journalism? Despite it becoming so widespread, most of us are completely unaware of its existence. In a survey conducted at King’s College London with 35 interviewees, only 34% were aware of the existence of Automated Journalism. With news giants such as The Wall Street Journal, The Associated Press and Forbes jumping on the bandwagon and using Automated Journalism, there is a major imbalance between the growing use of robots in journalism and the general public’s knowledge of this.


In a technology-crazed society, artificial intelligence is part of our everyday lives in one form or another. From mobile banking to product recommendations on Amazon, we rely heavily on AI for many day-to-day tasks. Since the 1960s, when machines began to replace workers in American automotive factories, algorithm-run programs have crept into more and more job sectors. But a creative, thought-driven career such as journalism could not be replaced by an algorithm, could it?

Photograph: Pixabay

The well-established global news agency the Associated Press has developed an algorithmic program to produce Automated Journalism. Using Natural Language Generation (NLG), the news organisation publishes thousands of articles that start as raw data and are transformed by its algorithm into a readable story. These articles contain nothing more than the algorithm's article layout and the data provided about the event.
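AP's production system is proprietary, but the basic move of turning a row of structured data into a readable sentence can be sketched with a fill-in-the-blanks template. The match data and wording below are invented for illustration:

```python
def game_recap(data):
    """Toy sports-recap template: the layout is fixed, only the data fields change."""
    home, away = data["home_team"], data["away_team"]
    hs, as_ = data["home_score"], data["away_score"]
    winner, loser = (home, away) if hs > as_ else (away, home)
    verb = "edged" if abs(hs - as_) == 1 else "beat"
    return (f"{winner} {verb} {loser} {max(hs, as_)}-{min(hs, as_)} "
            f"on {data['date']} at {data['venue']}.")

# Hypothetical box score, as a data feed might supply it.
print(game_recap({"home_team": "Riverton Rockets", "away_team": "Hillside Hawks",
                  "home_score": 5, "away_score": 2,
                  "date": "Saturday", "venue": "Riverton Park"}))
# -> Riverton Rockets beat Hillside Hawks 5-2 on Saturday at Riverton Park.
```

Real NLG systems are far more sophisticated in how they vary wording, but the dependence on clean, structured input data is the same.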

Sebastian Tranæus, a Tech Lead at Noa Technologies Inc, claimed he would prefer that news reports on sport and elections were written by algorithms. "An algorithm can't prefer a team or candidate", Tranæus stated; "I can trust them to give me unbiased facts about the game or election". Without human emotion attached to statistics-based articles, the piece can give nothing more than a truthful account of the event. An algorithm won't be a die-hard Chelsea supporter who struggles to critique the team's performance in a report about their match.

However, there is still a great deal of discomfort with the idea of an algorithm producing our news. In the King's College London survey, 84% of interviewees stated they do not trust Automated Journalism and would prefer news written by a human. "The idea makes me very uncomfortable," stated Juliette Garside, investigations correspondent at The Guardian. Although Garside uses algorithms to assist her research for articles, she said she did not want to see full feature articles written by one. "For some writing tasks robots may be appropriate," Garside added, such as short articles about "anticipated events".

The Associated Press uses its AI reporter program to produce articles of exactly this nature, generating write-ups of Minor League Baseball scores. "It's easier to read a computer-generated article," reported Melania, a 24-year-old LSE graduate, in an interview: "what I want to get from an article is information, not a spiritual experience".

For now, Automated Journalism only covers these kinds of events. Investigative journalism will always be a human's job, requiring the knowledge, feeling and persuasion needed to publish a good, moving piece. "Humans write some kind of bias," Garside acknowledged. But this is not necessarily a bad thing. Bias can create unique, passionate articles that are more likely to be read.

Photograph: by the Author

Is Automated Journalism replacing journalists? At its current level of advancement, no. With the assistance of The Guardian's in-house robot reporters, Garside can save time on research, as the program searches through vast data logs for her. "It can save time by matching names of persons of interest against hundreds of thousands of names in the data." Journalists and Automated Journalism programs can work alongside each other to improve journalists' work rather than replace it.

So, should we be afraid of robots writing the news?

The answer is no, not really. There is no Killer Robot taking all our jobs, as Hollywood would have us believe: the 'Killer Robot' is merely informing us about Minor League Baseball. At least for now.

AI-written articles might not be as bad as you think

Photography credits to Elijah O'Donnell and Markus Spiske on http://www.unsplash.com. Edited by the author.

With the perpetual acceleration of digital technology, journalism too is becoming directly dependent on artificial intelligence. Scepticism around the topic is widespread and completely justified; the possibilities an API (application programming interface) opens up are sometimes horrifying to think about, from AI-written articles lacking any human intervention to holographic television hosts. Readers tend to mistrust robo-journalism because they fear it will produce fake content. Some believe that the inclusion of AI in media cuts jobs for journalists. Others are simply not comfortable with the idea of an article generated by a robot, and so the media provider loses the audience's trust. But are those fears justified?

Robo-journalism is here, and it comes with a cost. Journalists are the ones directly affected by AI-written articles. It is true that computers are now taking over part of the media workload, so fewer people will be needed in the future of news provision. On the flip side, one of the main reasons robo-journalism is becoming widespread is that journalists themselves prefer an algorithm to do the dull part of the job for them. Computers are evidently faster when it comes to reporting data analytics or sports results, which definitely comes as a plus.

It turns out the more informed an individual is, the more likely they are to understand and tolerate AI-written articles. An empirical study of a group of 30 people, aged 20-50, dug deep into readers' news preferences. It found that people in their 20s and early 30s are the most likely to accept robo-journalism and be familiar with the term. Even so, around 65% of these young adults still prefer human-written articles. While some remain undecided about robo-journalism, all participants agreed that if they were better informed on the topic, they would not necessarily be sceptical about AI producing their news. People over the age of 45, by contrast, tend to be more prejudiced against the idea.

Raising awareness of robo-journalism requires education and an informed public. Universities have already started to include new media when tailoring their degrees, and programmes worldwide provide relevant teaching on digital culture in nearly every subject area. Ana, a 21-year-old Liberal Arts student who participated in our research, shares her thoughts: "I think that robo-journalism could be less biased, but from my knowledge in the sphere of media – it is basically impossible as the robot is also engineered by a human". The idea that algorithms are, after all, coded by people certainly suggests a degree of bias. Interestingly, though, recently developed AI technologies might be more advanced than you expect.

Most automated systems are based on an algorithm that gathers, analyses, converts, and redistributes data all by itself. To say that it is made by a human is true, but a human is only responsible for ‘giving light to it’. A car needs a human to move, but the engine itself works independently. 

TechTarget defines Natural Language Generation as a high-tech automated learning algorithm that uses syntax and semantic analysis to produce textual content. Syntax is the grammatical arrangement of words in a sentence; its main tasks involve grammatical analysis, morphological segmentation and sentence breaking. Semantic techniques deal with symbolism and metaphor, or, simply put, the meaning behind the words used. NLG rapidly combines both notions: first understanding the structure of a sentence, then apprehending what the sentence is trying to express.
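As a deliberately simplified illustration of those two stages, the sketch below first tokenises a raw data-feed line (the syntax step), then maps the tokens onto meaning slots and regenerates a sentence (the semantics and realisation steps). It is a toy stand-in, not how any production NLG system actually works:

```python
import re

# Syntax step: break raw input into tokens we can reason about.
def tokenize(text):
    return re.findall(r"[A-Za-z]+|\d+(?:\.\d+)?", text)

# Semantics step: map tokens onto meaning slots (who, value, direction).
def extract_meaning(tokens):
    meaning = {"company": tokens[0], "value": None, "direction": None}
    for tok in tokens:
        if re.fullmatch(r"\d+(?:\.\d+)?", tok):
            meaning["value"] = float(tok)
        elif tok.lower() in {"rose", "up", "gained"}:
            meaning["direction"] = "rose"
        elif tok.lower() in {"fell", "down", "lost"}:
            meaning["direction"] = "fell"
    return meaning

# Realisation: turn the extracted meaning back into a fluent sentence.
def realise(meaning):
    return f"{meaning['company']} shares {meaning['direction']} by {meaning['value']}% today."

raw = "Acme up 3.2 percent"   # hypothetical market-data feed line
print(realise(extract_meaning(tokenize(raw))))
# -> Acme shares rose by 3.2% today.
```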

One positive asset of robo-journalism is clean writing, free of annoying typos and poor grammar. Combined with rapid data analysis and instant notifications, in this round of the battle the algorithm surely outruns the human journalist.

Basic examples of existing bots include those of the Associated Press (automated sports scores and finance reports since 2013) and The Washington Post (an algorithm used to monitor results during the Rio Olympic Games in 2016), writes The Conversation. BBC News uses the Juicer's API to take in news and store it by conceptual category.

Thanks to fast-developing technologies, algorithms are becoming more accessible to journalists, and they help rather than harm their writing. Meredith Broussard, a professor at New York University's Carter Journalism Institute, assured the audience of the Knowledge@Wharton radio show that robot reporting wouldn't "replace human journalists any time soon". Artificial intelligence is useful for writing "multiple versions of basically the same story". It is beneficial in financial reporting and statistics and, at a more advanced level, for scanning large amounts of text, outlining main topics and keywords, and sorting them by issue, date and category.

Citing a recent study, The Guardian wrote that nearly one in three US councils is using algorithms to make better decisions on benefit claims, child abuse and the allocation of institutional places. The idea sounds great in theory, but the ugly underbelly is the privacy concerns that make people mistrust algorithms when it comes to producing their news. Objectively speaking, natural language processing is still far from perfect: algorithms struggle with advanced semantic analysis, as well as with picking up on behaviour-based elements like sarcasm or dialect.

Not getting a joke is something most people would forgive an algorithm for; reporting a fake earthquake might not be. A false alert about a 9.1-magnitude earthquake in Tokyo is one of the events that justify common fears, reports The Guardian. Automated robo-journalism made a tremendous mistake because of a simple glitch in the system, falsely announcing 18,000 people dead. Following the natural train of thought, one may question the ethics of all this: who is responsible? Who is to blame? The company, the code developer, the device itself?

Level of trust in selected news sources in North America in 2018
Statista 2019 
Source: Edelman

Fake news has left plenty of news providers looking redundant. However, algorithms might also offer a way to fix the problem: programs built to detect and eliminate fake news could enhance global media production. Without doubt, human skills will still be needed to set laws on transparency and personal data rights.

Tsarina Ruskova, a 31-year-old lawyer who reads the news daily (with apps like Al Jazeera), shared that she "would rather read or hear the news by a human because it gives more sensibility. The face, the gestures, the tone of a person are not something that I'd rather see in an AI giving me the news".

What will continue to be valued in journalism is creativity and empathy. AI automation is not there to replace the writer but to be woven into news creation, so that writers can abandon the never-really-changing 'bureaucratic' responsibilities and focus on the creative side of writing.

It is important not to fall too hard for either perspective on AI; understanding both sides is essential for weighing the pros and cons. Perhaps the right balance lies in the proper implementation of algorithms within journalists' creative writing process. Efficient communication between tech developers and artists (writers, content creators) would help craft an algorithm suited to the specific requirements of a news article. Of course, this remains a hypothetical concept for now.

Robojournalism and News Making: Better the Journalist you know or the Robojournalist you don’t?

Grace Fuller is a 21-year-old California native who lives thousands of miles away from her family in London. She left her life in the Golden State behind to pursue a degree in Digital Technologies at King's College London, where she focuses on Artificial Intelligence. When I asked her why she chose these studies, I was expecting a generic answer. Californians have an obsession with Silicon Valley, so I expected her to talk about the desire to work in the tech industry. But her answer was nothing like that. Instead she told me about the fear and uncertainty she had experienced because of the misuse of digital systems and the threat of misinformation.

On the 22nd of June 2017, Grace was running errands around Arizona, where she was on a solo trip. She had her headphones in as she explored her new surroundings alone. Suddenly, her Daft Punk playlist was cut off by a loud news alert. A bright red notification read "CALIFORNIA: 6.8 MAGNITUDE EARTHQUAKE". Six unanswered calls later, Grace was running along the unfamiliar streets in a panic. It took her thirty minutes to finally reach her mother, who told her that the earthquake had not taken place that day, but in 1925.

The alert was an error in the LA Times' AI software. AI within the news, also called Robojournalism, is used to manage the data behind stories while publishing automated news articles. Its misuse, however, cost Grace and many others their control over accurate information. To regain that control, Grace decided to pursue a deeper understanding of AI through her studies; to her, this would mean saving others from the uncertainty she felt that day. "Robojournalism does not embody the humanity behind journalism, which sets the tone for dangerous errors in news-making," she said as I walked her back to class.

But is Grace right when she says this?

Robojournalism is a key, yet underexposed, development of the 24-hour news cycle. In today's day and age, more data is being generated than ever before, and Robojournalism is used to sort it and locate trends, saving journalists time and energy when it comes to the statistics behind the news. In London, the Press Association (PA) develops Robojournalist software at its headquarters. PA sends sample templates to local newspapers in a wide-scale project named Radar. Supported by a £622,000 grant from Google, Radar follows a system-generated template to write articles on sport, events, elections and more. "We've just been emailing them [local newspapers] samples of stories we've produced and they've been using a reasonable number of them," says editor-in-chief Peter Clifton on PA's blog. In May, PA started distributing 30,000 of these stories a month. These articles are all published verbatim, sidelining journalistic principles such as transparency, accountability and humanity. The lack of these principles in Robojournalism has risked, and continues to risk, misinformation in news articles like the California earthquake error.

Robojournalism is all part of an effort to match the speed of the 24-hour news cycle, where traditional journalism is often tested. While traditional journalists prize the quality of writing behind a piece, Robojournalism aims at quantifying news research. "It is important to separate research from writing, and robots should only be programmed to aid the traditional journalist as a tool for data," said Juliette Garside, investigations correspondent at The Guardian, during our interview. But at its worst, Robojournalism "could amplify the work of troll farms and fake news operations, allowing more fabricated stories to be produced," Juliette concluded.

Another problem with the misinformation caused by Robojournalism is how easily it is dismissed: computer errors are treated as inevitable. If journalists make mistakes, however, they are immediately held accountable, in serious cases for libel and defamation. This accountability pushes the journalist to give a holistic, accurate account of the news. The right mix of deep analysis, creative writing and factual reporting cannot be programmed into a system.

In 2016, the Japanese Meteorological Agency sent out an alert stating that a magnitude-9.1 earthquake was on the way. The news spread across Twitter like wildfire. "I'm prepared to die," said one Twitter user. The fear the Japanese people endured, however brief, mirrored the panic Grace felt on the other side of the world a year later. Such misinformation dictates people's wellbeing: AI news-making should not be given the power to rob people of their welfare, or to give them false information about their own circumstances. The way people make decisions is largely shaped by the news they consume.

To Robojournalism's credit, AI does not program itself; people do. The actions of AI news-making systems follow the orders of human-written code, so biases associated with traditional journalism also exist within Robojournalism. We cannot put the sole blame on the machines when people are dictating their development. The caution lies in how these systems learn from human code, which cannot be controlled entirely. AI news-making must be regularly monitored. This may defeat the purpose of AI technology, but for now we must take responsibility for its growth within society. The programmers behind the LA Times and Japanese Meteorological Agency errors must answer for building systems that failed the public.

While Robojournalism can be an incredible tool for research, the possibilities for misinformation worsen news-making. These systems cannot be left to their own devices: human beings must regulate their growth and their power over the production of information. Avoiding fatal errors of misinformation relies on human-machine collaboration. Grace is a major part of this desired collaboration, as she and many others work towards more clarity in AI's future.

Can Algorithms Save Journalism?

Man sat on a bench reading a newspaper – Photo by Roman Kraft on Unsplash

Algorithms and journalism: The journalism industry is declining and algorithms may be the solution to helping local areas get the news they deserve.

How often do you read the news? If the answer is not often, you are not alone. In the U.K., the journalism industry has been in decline, with falling readership, newspaper closures and the loss of journalists. The number of print journalists has plummeted by over 25%, from around 23,000 in 2007 to 17,000 in 2017, according to an independent review commissioned by the U.K. government. Even with digital alternatives, local areas are the most affected: there is a lack of coverage and of journalists in pockets across the U.K. This decline pushes the industry to find alternative ways to produce content for readers in these areas. The question is, can algorithms save journalism?

Why is journalism in decline? 

The decline of the journalism industry can largely be attributed to the dip in print readership. The number of print newspapers distributed every year has been falling steadily: the circulation of The Times dropped from 508,250 in 2010 to 417,298 in 2019, according to the Audit Bureau of Circulations. And that is only one example. The same dataset shows that the top 15 newspapers in the U.K. have all seen a downturn in print readership since 2010.

One of the main issues is that print newspapers rely heavily on classified advertising, and the rise of the internet has shifted that spending online. The result is a loss of investment and job cuts, according to the Cairncross Review. With fewer journalists, coverage of local news is falling, as these are the areas that cannot afford to keep the industry afloat.

While print readership declines, digital publishing is filling the gap. In the U.K., the share of people reading or downloading digital news has risen from 20% in 2007 to 69% in 2019, according to the Office for National Statistics. This shift in the way readers consume the news signals a need for change, and this is where algorithms come in. The Times, the BBC and The Sunday Times all use computer-generated content, as leading newsrooms adopt new methods to move their business into this technologically driven world.

Statistics of readership in the U.K.

Graph that shows individuals who read or download the news (2007-2018) – Photo by: Office for National Statistics (UK)

Gap in coverage in local areas  

The impact of this decline is felt most in the local areas of the U.K., where the population is being starved of news. Since 2007, the number of full-time journalists has fallen by over 25%, according to Secretary of State Matt Hancock. The borough of Walsall is just one example of a local area not receiving the coverage its citizens deserve. The borough, in the West Midlands, is home to nearly 300,000 people, and there is a sense that coverage is lacking, especially when it comes to local politics.

The alternative news source for the borough is the Express & Star, based in Wolverhampton. There is no longer a local newspaper, and this leaves citizens with limited coverage of their area. Eddie Hughes, a Conservative MP who grew up in Walsall, remembers a time when 'there was a shared sense of people seeing the same thing in the newspaper, [it] gave lots of people good information on things that had happened locally. And now that's gone.' This has a huge impact on areas where a sense of community is at the forefront of people's lives.

Algorithms as a solution 

The fall in journalism may not mean the end of the industry as we know it. The BBC has a solution to the lack of coverage in local areas: an algorithm called Salco. The technology sources data and inputs it into a template created by a journalist; before publication, the result is reviewed by an editor. A recent example of its use is local election coverage. With 248 councils in England up for election, it became clear that the shortage of journalists meant the results could not all be covered. Automation allowed the BBC to reach local pockets that lacked coverage, such as Walsall. This could be the answer for boroughs that no longer have journalists: there is a gap that needs to be filled, and algorithms may be the way to fill it.
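The BBC has not released Salco's code, but the workflow described above, structured results data dropped into a journalist-written template and then held for an editor's sign-off, can be sketched roughly like this. The council data and template wording are invented:

```python
TEMPLATE = ("{party} has won control of {council} council, "
            "taking {seats_won} of the {seats_total} seats contested.")

def draft_story(result):
    """Fill the journalist-written template with one row of structured results data."""
    return TEMPLATE.format(**result)

def publish(result, editor_approves):
    """Hold every machine-written draft for human sign-off before publication."""
    draft = draft_story(result)
    return draft if editor_approves(draft) else None  # spiked if the editor says no

# Hypothetical result row; a real feed would supply one per council.
result = {"party": "The Example Party", "council": "Anytown",
          "seats_won": 12, "seats_total": 20}
print(publish(result, editor_approves=lambda draft: "Anytown" in draft))
```

The editor-approval gate is the part that matters here: the template scales coverage across hundreds of councils, while a human remains the last check before anything is published.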

Algorithms are a solution for data-driven, factual articles. In an interview, Juliette Garside, investigative journalist at The Guardian, highlighted that 'it's important to separate researching from writing'. She expanded on this point by noting that some 'kinds of research can be aided by robots', giving the examples of working through datasets, organising information quickly and creating visualisations of the data. The distinction between the two is key when considering algorithms as a solution to the industry's decline. Salco demonstrates that algorithms can be valuable in offsetting the loss of journalists in local areas. But for the articles only human journalists can write, there is still no solution available.


Woman reading the news on laptop- Photo by: Author

Have we cracked the code?  

It is important to highlight that these algorithms are still at the development and research stage; Salco is only in its second phase. Technology as we know it is not capable of saving the entire journalism industry. The decline in print journalism demonstrates a shift to digital: more readers are accessing their news online, and this calls for a solution. As for the loss of journalists and coverage in local areas, there is no fix at present. Algorithms can be the answer for publishing factual, data-driven articles, but they do not have the capacity to cover human-interest pieces. That will leave a gap in one area of journalism as the industry declines overall. Local areas deserve more, and we must continue searching for a solution.

Not So Black & White: The Colour of Google’s Data

The Google Search Engine – Photo by: PhotoMIX Ltd.

In this day and age, we leave many things to chance. If that is beyond belief, take a moment and try googling yourself. If you appear on the first page of results, then congratulations! You must have done something fantastic enough for the internet, and therefore the world, to take notice. Or perhaps your achievements have left you a few pages shy of that glorious first display of results. Nothing wrong with that; you are probably still an accomplished person. That, or you may just be 'white', according to recent investigations into Google's algorithms.

Ah yes, Google. Everyone’s favourite (and oftentimes only) servant in their back pocket. Google is the search engine that many would consider the cornerstone of our highly digitized society. Simply enter a name or question, and voilà!, the answer magically presents itself in a matter of milliseconds. At best, what you are looking for will appear in that little box above all results, nicely called the ‘knowledge panel’. At worst, you might just have to spend a precious few seconds more, scrolling down or even going through a few pages of results before you get what you want. ‘A small price to pay’, some may say, as it has never been easier to get information. Anyone with access to a screen and the internet can come through with at least one more thing to add to their knowledge bank within seconds. Truly revolutionary, this search engine! Surely, it can do no wrong?

Unfortunately, it can, according to experts. Recent analysis of the methods used by Google's search engine has revealed certain unsettling biases. These methods rely heavily on algorithms: complicated formulas and equations handled by computers, which take information, run it through a checking system and arrive at a calculated answer. But studies conducted this year alone show stark flaws in the system. In July 2019, a New York City justice reform agency, the Center for Court Innovation, published a study on risk assessment and racial fairness using algorithms. It revealed that such algorithms favoured white people, rendering them 'safer' in the American justice system, while Black and Hispanic citizens were deemed significantly 'riskier'. In other words, certain assumptions were made about people of colour, namely that they were more likely to engage and re-engage in criminal activity. This kind of algorithm is similar to the one used by Google's search engine, which prioritises results more commonly associated with 'whiteness'. Sure enough, if you use Google Images to search for 'white couples' and 'black couples', the former presents a higher number of interracial couples while the latter presents exclusively black couples.

This issue is only amplified when you consider how the public views search engine results. A survey conducted in April revealed that, out of 1,400 searchers across various age groups, only 7% of respondents go beyond the first page of results. Moreover, 75% of all participants relied on the first two results for any given query. If search results are organised to prioritise information carrying certain influences, this is a problem for the global usage of the search engine.

How is it that a machine with no eyes to speak of, let alone prejudices, can display racist tendencies? The answer is simpler than you think, says political journalist Stephen Bush. It is because these algorithms are 'made by people'. "[…] any algorithm is only as good as the assumptions that are fed into it," claims Bush. In other words, these mathematical formulas churn out answers based on the information they are given. And who in the world has more information than Google? Over the last few years, the company's executives and programmers have come under public scrutiny for allowing such biases to exist on a supposedly objective and accurate search engine. Among the accusations is the common belief that a large majority of the data being fed into the algorithm is unmonitored and inappropriate.

To the credit of the multi-billion-dollar company, Google has taken measures to appease the masses and re-evaluate the process that, well, processes information. An adviser to Google’s search division, Danny Sullivan, posted on Twitter implying that the system is not to blame. His post suggested that any racial biases presented via the search engine are caused by users ‘mentioning’ key racial terms associated with specific searches.

Mr. Sullivan’s response to racial biases regarding the ‘couples’ search. Taken from his Twitter Profile: @dannysullivan

Additionally, one of Google’s senior engineers, Gregory Coppola, came forward to state that any prejudice or segregation present in the search engine’s results is not always intentional.

“Algorithms – the series of commands to computers – don’t write themselves. People may write their own opinions into an algorithm, knowingly or otherwise.”

Gregory Coppola, Senior Google Engineer, in an interview with Mind Matters, 2019

Coppola proposes that any biases are not the fault of the system itself, but of its developers and users. Google further acknowledged the problem when it revealed in 2017 that roughly 1 in 4 search results was turning up offensive or misleading content. The company committed to taking the matter seriously and to restructuring how content is analysed and presented. Two years on, that leads us to the question: what has changed?

Surely, this is a difficult question to answer. With Google holding at least 75% of global web search volume since 2013, the dominance of its search engine is clear, and it is unlikely that people will stop using it just because of perceived colour biases. At the same time, it is unclear how these racial biases are being handled internally: the company remains discreet about how its algorithms work and about how it has ‘restructured’ its search engine processes. Perhaps all the layperson can do is be patient but vigilant, pointing out a problem as it appears and hopefully kicking up enough dust to get into Google’s eyes. Time runs short for the company to quietly turn its veiled dials and levers as large political players and the masses demand fairer and more balanced algorithms. Perhaps that is because everyone wants answers to something not available through a quick search on a web browser.

“An algorithm is essentially just a series of assumptions with a result at the end”, says Bush. But it appears assumptions are no longer satisfactory. Maybe people want something simpler than a few hundred thousand assumptions generated in a third of a second. Maybe this time, people just want a single definitive answer from Google.

Hi Alexa, are you invading my privacy?

More than 3.2 billion digital voice assistants are in use worldwide. Should we be amused, or terrified?

A person holding a black Amazon Echo Dot. Photo: Jan Kolar

One day in 2017, Peter Johnson, who lives in London, came home from work as usual. As if it knew he was home, his Amazon Echo Dot voice assistant blurted out some arbitrary messages, supposedly based on his previous conversations with the device. It repeatedly suggested that he book train tickets for journeys he had already completed and record TV programmes he had already watched. Peter had not even said the word “Alexa” to wake it, yet it carried on behaving bizarrely for a very long while.

What makes this bizarre episode more interesting is that Peter is a former Amazon employee. He recalled volunteering to sit in a room, reading a series of random, meaningless words into a microphone for an unrevealed purpose. Only when Amazon released the Echo in 2014 did he realise what he had been doing the whole time. In 2016, he purchased a Dot, a cheaper and smaller version of the Echo. He found it useful and amusing, at least until it went haywire. After the incident, he got rid of the only voice assistant he owned: “I felt a bit foolish,” he says. “Having worked at Amazon, and having seen how they used people’s data, I knew I couldn’t trust them.”

An Alexa device on a desk. Photo: James McDonald

Some people may perceive this as a strange coincidence, but it is not the only weird case. Danielle, in Portland, Oregon, discovered that her Echo had been recording a private conversation between her and her husband. Not only that: the Echo then sent the recorded conversation to one of her husband’s employees without their permission. Like Peter, she had not said the wake word, “Alexa”, to activate the device. “I felt invaded,” she insisted. “Immediately, I said: ‘I’m never plugging that device in again because I can’t trust it’.”

But plenty of other people plug new devices in each year. David Limp, Amazon’s senior vice-president of devices, announced that the company has sold more than 100 million Alexa-enabled devices. According to Statista, the total number of digital voice assistants in use worldwide is around 3.2 billion in 2019 and is anticipated to rise to around 8 billion by 2023. Voice recognition technology has spread across the whole world, and it is not an exaggeration to say that we live in The Age of Surveillance Capitalism, as Shoshana Zuboff calls it.

The surveillance carried out by voice assistants is strangely polarizing. Technology frequently leads us to a dilemma: should we use these threatening yet convenient services? With Facebook and Google, despite the fact that they know too much about us, we keep using them because they are too valuable and too hard to replace. With voice assistants it is different. Like Danielle and Peter, people lean toward one side or the other: amused or terrified. So the question is: should we let voice assistants into our homes?

Amazon, Google and Apple have admitted that they hire people to listen to anonymized audio clips recorded via their voice assistants. A number of voice recordings are passed to third parties for ‘quality control measures’, spokespeople for the “Big Three” tech companies say. But experiments have suggested that these recordings are used for commercial purposes as well: a YouTuber named Michollow live-tested whether Google and Facebook listen in and record conversations even when the applications are not open. In the experiment, Michollow made sure every browser was closed, then talked for two minutes about dog toys, a topic he had never searched for in the past. When he went back to the same browser, dog-toy advertisements popped up right away, proving to him that voice assistants are invading our privacy.

Michollow live-testing whether Google listens in on conversations. Source: Michollow’s YouTube channel

How do people feel about this issue? Nicolos Angulo, an MA Law student at King’s College London and a Siri user, says: “It is kind of creepy that our smartphones are listening to us. When you accept all the terms and conditions, you are simply agreeing to give them access to everything including the voice recognition. It is not great nor comfortable, but because it is the only way to use the smartphone, I think I just have to accept it.” Although he acknowledges that voice assistants are a massive threat to our privacy, he feels resigned, since accepting the terms is the only way to use many services.

At the moment, we are tolerating boundless surveillance in return for a distinctly bounded service. AI technologies have enormous potential for development, and the more sophisticated the service becomes, the more our privacy will be invaded. To head off an Orwellian nightmare, we need to consider carefully what we value more.

Until that night, Peter had no idea what Alexa was up to. Now he knows: it is a most bizarre technology, with the potential to do almost anything. Nothing is more dangerous than ignorance. Who knows? Big Brother might be watching us already.