With the perpetual acceleration of digital technology, journalism, too, is becoming directly dependent on artificial intelligence. Scepticism around the topic is widespread and entirely understandable; the possibilities such systems open up can be unsettling to contemplate – from AI-written articles produced without any human intervention to holographic television hosts. Readers tend to mistrust robo-journalism because they fear it will produce fake content. Some believe that the inclusion of AI in media reduces jobs for journalists. Others are simply uncomfortable with the idea of an article generated by a robot, and so the media provider loses the audience’s trust. But are those fears justified?
Robo-journalism is here, and it comes at a cost. Journalists themselves are the ones most directly affected by AI-written articles. It is true that computers are now taking over part of the media workload, so fewer people will be needed in the future of news provision. On the flip side, one of the main reasons robo-journalism is becoming widespread is that journalists themselves prefer an algorithm to do the dull part of the job for them. Computers are evidently faster at reporting data analytics or sports results, which is a definite plus.
It turns out that the more informed individuals are, the more likely they are to understand and tolerate AI-written articles. An empirical study of 30 people, aged 20-50, dug deep into readers’ news preferences. It found that people in their twenties and early thirties are the most likely to accept robo-journalism and to be familiar with the term. Even so, around 65% of the young adults still prefer human-written articles. And although some remain undecided on robo-journalism, all participants agreed that if they became better informed on the topic, they would not necessarily be sceptical about AI producing their news. By contrast, people over the age of 45 tend to be more prejudiced against the idea.
Raising awareness of robo-journalism calls for education and public information. Universities have already started to include new media when tailoring their degrees, and programmes worldwide now provide relevant teaching on digital culture in nearly every subject area. Ana, a 21-year-old Liberal Arts student who participated in our research, shares her thoughts on the topic: “I think that robo-journalism could be less biased, but from my knowledge in the sphere of media – it is basically impossible as the robot is also engineered by a human”. The idea that algorithms are, after all, coded by people certainly suggests a degree of bias. Interestingly, though, recently developed AI technologies may be more autonomous than you expect.
Most automated systems are based on an algorithm that gathers, analyses, converts, and redistributes data by itself. To say that it is made by a human is true, but the human is only responsible for setting it in motion. A car needs a human to start it, but the engine itself works independently.
TechTarget defines natural language generation (NLG) as an automated machine-learning technique that uses syntactic and semantic analysis to produce textual content. Syntax is the grammatical arrangement of words in a sentence; the main syntactic tasks involve grammatical analysis, morphological segmentation, and sentence breaking. Semantic techniques deal with symbolism and metaphor – simply put, the meaning behind the words used. NLG rapidly combines both notions, first parsing the structure of a sentence and then grasping what the sentence is trying to express.
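To make the two steps above concrete, here is a deliberately tiny sketch (not any vendor’s actual system): a syntactic pass that breaks text into sentences and tokens, and a crude “semantic” pass that maps recognised words to meaning categories. The category lexicon is invented purely for illustration.

```python
import re

# Toy sketch of the two passes named above: a syntactic step
# (sentence breaking and tokenisation) and a very rough semantic
# step (mapping tokens to coarse topic categories).
# The lexicon below is invented for illustration only.

LEXICON = {
    "earthquake": "disaster", "goal": "sport",
    "shares": "finance", "profit": "finance",
}

def break_sentences(text):
    """Syntactic step: split text into sentences, then word tokens."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [re.findall(r"[a-z]+", s.lower()) for s in sentences]

def tag_meaning(tokens):
    """Semantic step: attach a coarse topic to recognised words."""
    return {w: LEXICON[w] for w in tokens if w in LEXICON}

doc = "Shares rose sharply. The club scored a late goal!"
for sentence in break_sentences(doc):
    print(sentence, tag_meaning(sentence))
```

Real systems replace the hand-written lexicon with learned models, but the division of labour – structure first, meaning second – is the same.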
One clear asset of robo-journalism is clean copy, free of annoying typos and poor grammar. Combine that with rapid data analysis and instant notification, and in this round of the battle the algorithm surely outperforms the human journalist.
Basic examples of existing bots are those of the Associated Press (automated sports scores and finance reports since 2013) and The Washington Post (which used an algorithm to monitor results during the Rio Olympic Games in 2016), writes The Conversation. BBC News uses The Juicer’s API to ingest news stories and store them by conceptual category.
Thanks to rapidly developing technologies, algorithms are becoming more accessible to journalists, and they help rather than harm their writing. Meredith Broussard, a professor at New York University’s Carter Journalism Institute, assured the audience of the Knowledge@Wharton radio show that robot reporting would not “replace human journalists any time soon”. Artificial intelligence is useful for writing “multiple versions of basically the same story”. It is beneficial in financial reporting and statistics and, at a more advanced level, in scanning large amounts of text, outlining main topics and keywords, and sorting them by issue, date, and category.
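Writing “multiple versions of basically the same story” usually comes down to rendering one set of facts through several templates. The sketch below illustrates the idea; the templates, field names, and figures are all invented.

```python
# Sketch of template-driven story variation: one set of facts,
# several phrasings. Templates and field names are illustrative only.

TEMPLATES = [
    "{company} reported quarterly revenue of ${revenue}m, "
    "{direction} {change}% year on year.",
    "Revenue at {company} came in at ${revenue}m, "
    "{direction} {change}% on last year.",
    "{company}'s quarterly revenue {verb} {change}% to ${revenue}m.",
]

def story_versions(facts):
    """Render every template with the same underlying facts."""
    direction = "up" if facts["change"] >= 0 else "down"
    verb = "rose" if facts["change"] >= 0 else "fell"
    fields = {k: abs(v) if k == "change" else v for k, v in facts.items()}
    return [t.format(direction=direction, verb=verb, **fields)
            for t in TEMPLATES]

facts = {"company": "Acme Corp", "revenue": 412, "change": 8}
for version in story_versions(facts):
    print(version)
```

Each output sentence carries identical information, which is exactly why this kind of reporting is easy to automate and dull for a human to write by hand.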
Citing a recent study, The Guardian wrote that nearly one third of US councils are using algorithms to make better decisions on benefit claims, child abuse, and the allocation of institutional places. The idea sounds great in theory, but its ugly underbelly is the privacy concerns it raises – one reason people mistrust algorithms producing their news. Objectively speaking, natural language processing is still far from perfect. Algorithms struggle with advanced semantic analysis, as well as with behaviour-based elements such as sarcasm and dialect.
Not getting a joke is something most people would forgive an algorithm for; reporting a fake earthquake might not be. A false alert about a 9.1-magnitude earthquake in Tokyo is one of the events that justify common fears, reports The Guardian. Automated robo-journalism made a tremendous mistake there: a simple glitch in the system falsely announced 18,000 people dead. This naturally raises an ethical question: who is responsible? Who is to blame? The company, the code developer, the device itself?
Fake news is what has left plenty of news providers redundant. Yet algorithms might also be part of the solution: programmed to detect and eliminate fake news, they could enhance global media production. Without doubt, human skills would still be necessary for setting laws on transparency and personal data rights.
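As a rough illustration of algorithmic fake-news screening, consider the toy filter below. Production systems train classifiers over many signals (source reputation, claim verification, propagation patterns); this sketch merely scores a headline against a few hand-picked red-flag phrases, and both the word list and the threshold are invented.

```python
# Deliberately simple sketch of fake-news screening: count
# red-flag phrases in a headline and flag it for human review.
# The phrase list and threshold are illustrative, not real criteria.

RED_FLAGS = {"shocking", "miracle", "they don't want you to know",
             "100% proven", "secret cure"}

def suspicion_score(headline):
    """Count red-flag phrases present in the headline."""
    text = headline.lower()
    return sum(1 for flag in RED_FLAGS if flag in text)

def needs_review(headline, threshold=1):
    """Send the headline to a human fact-checker above the threshold."""
    return suspicion_score(headline) >= threshold

print(needs_review("Shocking secret cure doctors hide"))   # True
print(needs_review("Council approves new school budget"))  # False
```

Note that even here the algorithm only triages: the final call – and the rules on transparency the paragraph mentions – remains with people.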
Tsarina Ruskova, a 31-year-old lawyer who reads the news daily (through apps such as Al Jazeera), shared that she “would rather read or hear the news by a human because it gives more sensibility. The face, the gestures, the tone of a person are not something that I’d rather see in an AI giving me the news”.
What will continue to be valued in journalism are creativity and empathy. AI automation is not meant to replace the writer but to be woven into news creation, so that writers can abandon the never-really-changing ‘bureaucratic’ responsibilities and focus on the creative side of writing.
It is important not to fall too hard for either perspective on AI; understanding both sides is essential for weighing the pros and cons. Perhaps the right balance lies in the proper integration of algorithms into journalists’ creative writing process. Efficient communication between tech developers and creatives (writers, content producers) would help craft an algorithm suited to the specific requirements of a news article. Of course, for now this remains a hypothetical concept.