Robojournalism and News-Making: Better the Journalist You Know or the Robojournalist You Don’t?

Grace Fuller is a 21-year-old California native who lives thousands of miles away from her family in London. She left her life in the Golden State behind to pursue a degree in Digital Technologies at King’s College London, where she focuses on Artificial Intelligence. When I asked her why she chose these studies, I was expecting a generic answer. Californians have an obsession with Silicon Valley, so I expected her to talk about a desire to work in the tech industry. But her answer was nothing like that. Instead, she told me about the fear and uncertainty she had experienced because of the misuse of digital systems and the threat of misinformation.

On the 22nd of June 2017, Grace was running errands in Arizona, where she was on a solo trip. She had her headphones in as she explored her new surroundings alone. Suddenly, her Daft Punk playlist was cut off by a loud news alert. A bright red notification read “CALIFORNIA: 6.8 MAGNITUDE EARTHQUAKE”. Six unanswered calls later, Grace was running along the unfamiliar streets in a panic. It took her thirty minutes to finally reach her mother, who told her that the earthquake had not taken place that day, but in 1925.

The alert came from an error in the LA Times’ automated earthquake-reporting software, which published a bulletin based on an erroneous US Geological Survey notice about the 1925 quake. Software like this is known as Robojournalism: AI within the news that manages the data behind stories while publishing automated news articles. When it fails, as it did that day, it costs Grace and many others their access to accurate information. To regain that access, Grace decided to deepen her understanding of AI through her studies. To her, this would mean saving others from the uncertainty she felt that day. “Robojournalism does not embody the humanity behind journalism, which sets the tone for dangerous errors in news-making”, she said as I walked her back to class.

But is Grace right?

Robojournalism is a key, yet underexposed, development of the 24-hour news cycle. More data is being generated today than ever before, and Robojournalism is used to sort it and locate trends, saving journalists time and energy when it comes to the statistics behind the news. In London, the Press Association (PA) develops Robojournalist software at its headquarters. Through a wide-scale project named Radar, backed by a £622,000 grant from Google, PA sends local newspapers sample articles on sports, events, elections and more, each written from a system-generated template. “We’ve just been emailing them [local newspapers] samples of stories we’ve produced and they’ve been using a reasonable number of them,” says Peter Clifton, editor-in-chief, on PA’s blog. In May, PA started distributing 30,000 of these stories a month. These articles are all published verbatim, sidestepping journalistic principles such as transparency, accountability and humanity. The absence of these principles in Robojournalism has created, and continues to create, a risk of misinformation in news articles, as the California earthquake error shows.
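To make the template approach concrete, here is a minimal sketch of how a system like Radar might fill a story template from rows of structured data. The template, the data feed and the names are hypothetical illustrations, not PA’s actual code; the point is that whatever arrives in the feed is published verbatim, with no verification step in between.

```python
# Minimal sketch of template-driven story generation, loosely in the
# spirit of projects like Radar. Template, feed and all names here are
# hypothetical; this is not PA's actual code.

TEMPLATE = (
    "{town} recorded {count} new planning applications in {period}, "
    "a {change:+.1f}% change on the previous quarter, according to "
    "figures released by {source}."
)

def generate_story(record):
    """Fill the template verbatim from one row of structured data.

    There is no fact-checking step: a bad value in the feed (say, a
    decades-old date) flows straight through to publication; this is
    the same failure mode behind the 2017 earthquake alert.
    """
    return TEMPLATE.format(**record)

feed = [
    {"town": "Croydon", "count": 412, "period": "the second quarter",
     "change": 3.4, "source": "the Ministry of Housing"},
    # A corrupted record is published just as readily as a correct one:
    {"town": "Santa Barbara", "count": 1, "period": "June 1925",
     "change": 0.0, "source": "a seismic data feed"},
]

for record in feed:
    print(generate_story(record))
```

The sketch is deliberately naive, but the ingredient it is missing, a human check before publication, is the theme of everything that follows.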

Robojournalism is part of an effort to match the speed of the 24-hour news cycle, where traditional journalism is often tested. While traditional journalists prize the quality of the writing behind a piece, Robojournalism aims to quantify and accelerate news research. “It is important to separate research from writing, and robots should only be programmed to aid the traditional journalist as a tool for data,” said Juliette Garside, Investigations Correspondent at The Guardian, during our interview. But at its worst, Robojournalism “could amplify the work of troll farms and fake news operations, allowing more fabricated stories to be produced”, Juliette concluded.

Another problem with the misinformation caused by Robojournalism is how easily it is dismissed. Computer errors seem inevitable. When journalists make mistakes, however, they are immediately held accountable, in serious cases through libel and defamation law. This accountability pushes the journalist to produce holistic, accurate accounts of the news. The right mix of deep analysis, creative writing and factual reporting cannot be programmed into a system.

In 2016, the Japan Meteorological Agency sent out an alert stating that a magnitude 9.1 earthquake was on the way. The news spread over Twitter like wildfire. “I’m prepared to die”, said one Twitter user. The fear the Japanese people endured, however briefly, mirrored the panic Grace felt on the other side of the world a year later. Misinformation like this dictates people’s wellbeing. AI news-making should not be given the power to rob people of their welfare, or to feed them false information about their own circumstances, because the way people make decisions is largely shaped by the news they consume.

To Robojournalism’s credit, AI does not program itself. People do. The actions of AI news-making systems follow the orders of human-written code, which means the biases associated with traditional journalism also exist within Robojournalism. We cannot put the sole blame on the machines when people dictate their development. The danger lies in how these systems learn from human code, which cannot be controlled entirely. AI news-making must be regularly monitored. This may defeat part of the purpose of the technology, but for now, we must be responsible for its growth within society. The programmers behind the LA Times and Japan Meteorological Agency errors must take responsibility for building systems that failed the public.

While Robojournalism can be an incredible research tool, its potential for misinformation worsens news-making. These systems cannot be left to their own devices. Human beings must regulate their growth and their power over the production of information, because avoiding fatal errors depends on genuine human-machine collaboration. Grace is part of that desired collaboration as she, and many others, work for more clarity within AI’s future.