This paper was presented at the 2019 Future Communications conference at York University.
An exhausted editor waits in line for the second coffee of her overnight shift when a notification on her smartphone causes her to abandon her quest for caffeine and bolt back to the office. Glancing at her phone, she can hardly believe the headline: “9.0 magnitude earthquake hits off the coast of Vancouver, tsunami warning issued.” It has only been 30 seconds since early warning systems off the coast of British Columbia detected the quake, and before the editor has even made it back to the newsroom, she has already approved copy for a first draft of what will become Canada’s biggest story in a generation. The out-of-breath editor sprints to her desk and watches social media in real time as B.C. residents react to the devastating tremors. While other news organizations confirm the details of the quake and struggle to piece together a readable breaking story, the editor, working alongside a skeleton crew of overnight reporters, writers and techs, is already assigning reporters and chasing the story on the ground.
What made it possible for this newsroom to be reporting so quickly after the earthquake occurred? A commitment to automation and artificial intelligence gave this newsroom a leg up on their competition and allowed them to leverage their talented human capital and scoop other outlets. That first story was created by an automated content creation system, one that monitors for earthquakes around the clock, and has been trained to send out an alert and automatically create an early story ready for approval mere seconds after a significant event occurs.
What is artificial intelligence and automation?
When we think of artificial intelligence, the image that most often comes to mind is influenced by Hollywood, perhaps HAL 9000 from 2001: A Space Odyssey or Skynet from the Terminator movies, but most current applications of artificial intelligence are far more banal. We can define artificial intelligence in its broadest sense as “computer systems able to perform tasks normally requiring human intelligence” (Hansen 2017). There are many approaches to artificial intelligence, but one that has grown in prominence of late is machine learning, which entails “the training of a machine to learn from data, recognize patterns, and make subsequent judgments, with little to no human intervention” (Broussard et al. 2019, p. 673).
Machine learning techniques power many of our interactions online including email spam detectors, social media content feeds, online video platform recommendations and search engine results. These systems learn behavior by analyzing sets of data, often user interaction data, and make predictions or classifications using these models. For example, with spam detection software, every time we flag an email as spam, systems like Gmail analyze these labelled messages for characteristics that might be unique to these emails. The more data labelled as spam the system has to analyze, the more refined its spam prediction model becomes and the more accurate it will be at predicting unwanted email. Essentially, as the system receives more data, it learns.
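The learning loop described above can be sketched with a toy word-count classifier. This is a deliberately simplified illustration, not Gmail's actual system; the sample messages and function names are invented for the example, and real spam filters use far richer features and models.

```python
from collections import Counter

# Toy illustration of learning from labelled data (not a real spam filter):
# word counts from user-flagged messages stand in for a trained model.
spam_words = Counter()
ham_words = Counter()

def learn(message, is_spam):
    """Update word counts from a user-labelled message."""
    target = spam_words if is_spam else ham_words
    target.update(message.lower().split())

def looks_like_spam(message):
    """Score a new message by which labelled pile its words resemble more."""
    words = message.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

# Every email a user flags refines the statistics: the system "learns"
# from labels, and more labelled data means better predictions.
learn("win a free prize now", is_spam=True)
learn("free money click now", is_spam=True)
learn("meeting notes attached", is_spam=False)

print(looks_like_spam("claim your free prize"))
```

The key point survives the simplification: the classifier's behaviour is not programmed directly but derived from accumulating labelled examples.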
A term that is often used alongside artificial intelligence is automation. Distinct from, but in the modern era often related to, artificial intelligence, automation can be defined as a “device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be carried out (partially or fully) by a human operator” (Parasuraman, Sheridan, and Wickens 2000, p. 287). Automation devices can be driven by artificial intelligence models, but need not be.
The earthquake-monitoring computer system from the introduction is an example of automation that need not be driven by artificial intelligence. Such a system could be given a set of parameters for when to send out a news alert, for example any earthquake over a certain level on the Richter scale. This system would replace a former human operator role, such as a seismologist or science reporter, and would be considered a form of automation, but it is not an AI system because it operates with a preordained set of parameters. While not mutually exclusive, automation and artificial intelligence-based systems present similar challenges for organizations, managers and workers in journalism settings.
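The preordained-parameter logic described above fits in a few lines of code. The magnitude cutoff and message wording here are hypothetical, not those of any real alerting system, but they show why this is automation rather than AI: nothing is learned, only a fixed rule is applied.

```python
# A minimal sketch of rule-based automation (no AI involved): the alert
# logic is a fixed threshold set in advance, not a learned model.
ALERT_THRESHOLD = 6.0  # illustrative magnitude above which an alert is issued

def check_quake(magnitude, location):
    """Return a draft alert if the event crosses the fixed threshold."""
    if magnitude >= ALERT_THRESHOLD:
        return f"ALERT: {magnitude} magnitude earthquake detected near {location}."
    return None

print(check_quake(9.0, "the coast of British Columbia"))
print(check_quake(3.2, "Vancouver Island"))  # below threshold: no alert
```

Swapping the fixed threshold for a model trained on seismic and audience data is what would move a system like this from plain automation toward AI-driven automation.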
How are AI and automation being used today in newsrooms?
While for some the era of the robot journalist may seem a far-off fantasy, artificial intelligence is already in increasingly widespread use within the journalism industry. Media organizations, both large and small, have embraced AI and automation in a variety of ways.
One major use of automation in a newsroom setting has been automated content generation. Both the Associated Press and Yahoo! Sports have been creating “robo” generated articles in the business and sports domains using Automated Insights’ Wordsmith platform. Using a combination of templates and the AI subfield of natural language processing, news organizations have been able to dramatically increase the volume of articles created. Artificial intelligence-based data mining is being used to help investigative journalists create data-driven stories. Unsupervised learning techniques are being used to make sense of complex and massive sets of data, highlighting journalistically interesting content. The New York Times worked with Google’s Jigsaw on Perspective, a tool that uses artificial intelligence to automate elements of comment moderation on its articles. Reuters has developed a social media monitoring system driven by artificial intelligence that has enabled it to filter through 12 million tweets a day, narrowing those down to 16,000 events, of which about 6,600 are then flagged as newsworthy.
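The template side of automated content generation can be sketched simply. This is not Automated Insights' actual code; the template text, company and figures are invented, and real systems such as Wordsmith layer much more sophisticated natural language generation on top of this basic fill-in-the-blanks idea.

```python
# A simplified sketch of template-driven story generation: structured data
# (here, hypothetical earnings figures) is slotted into prewritten prose.
TEMPLATE = (
    "{company} reported quarterly earnings of ${eps:.2f} per share, "
    "{direction} analyst expectations of ${expected:.2f}."
)

def earnings_story(company, eps, expected):
    """Turn one row of earnings data into a one-sentence story."""
    direction = "beating" if eps > expected else "missing"
    return TEMPLATE.format(company=company, eps=eps,
                           expected=expected, direction=direction)

print(earnings_story("Acme Corp", 1.42, 1.30))
```

Because each story is just a data row plus a template, scaling from hundreds to thousands of articles per quarter is a matter of feeding in more rows, which is exactly the economics behind the AP's earnings coverage discussed later.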
These examples represent just a small fraction of how artificial intelligence and automation are being used in the news industry. As the cost of implementing these tools drops and computational processing power becomes more accessible, the use cases for AI and automation in the newsroom are bound to grow.
Will artificial intelligence and automation replace the journalist?
Managers and editors across the country and around the world have been confronted with shrinking budgets and smaller staffs. The Canadian Media Guild – which represents workers at a variety of media organizations including the CBC, TVO, APTN and The Canadian Press – has estimated that over 12,000 newsroom positions have been lost over the last few decades. In the United States, it was estimated that newsrooms lost 40% of their full-time editorial positions from 2006 to 2014. As advertising and funding dollars have declined, newsroom leaders have had to do more with less. With this shrinking head count, could editors turn to automation and artificial intelligence to ensure stories continue to be told?
Experts estimate that only about 15% of a reporter’s work and approximately 9% of an editor’s job could be automated using current technologies. Even in today’s most advanced AI systems, humans still outperform computers in two areas key to journalism work: complex communication and expert thinking. AI still struggles with the kind of conversation needed to perform hard-hitting interviews. As noted computational journalism researcher Nicholas Diakopoulos writes:
Reporting, listening, responding, and pushing back, negotiating with sources, and then having the creativity to compellingly put it together, or knowing when a new angle of attack is needed—AI can do none of these indispensable journalistic tasks, though it can often augment human work to make it more efficient or high quality.
Without the honed techniques of a grizzled reporter, and bound by a strictly programmed discourse structure, today’s AI systems would struggle to cope with an elusive interview guest. In most cases, AI systems would also lack the breadth of domain knowledge a trained reporter would have, and the ability to think on their feet so necessary for conducting an effective interview. It could be argued that artificial intelligence and automation may instead make the work of journalists easier. By leaving mundane tasks, such as number crunching and fact checking, to machines, newsrooms might free up staff time for the kind of indispensable journalism Diakopoulos outlines.
Articles created by automated content creation systems won’t be winning a Pulitzer anytime soon. Even with well-designed templates and advances in natural language generation, these AI-created articles still have a robotic feel. They are most suited for subject areas that are statistics heavy, be it weather, sports or stock reports, and don’t lend themselves to stories that are less formulaic. Well-written and creative news copy will remain the domain of humans for the foreseeable future. While reporter and writer roles may be relatively safe from the impacts of artificial intelligence over the short term, there are other positions within the newsroom ripe for automation.
Significant advances in text-to-speech technologies, such as Google’s Text-to-Speech systems, have made computer generated speech nearly indistinguishable from today’s highly trained on-air presenters. Just last year, China’s state news agency Xinhua revealed an on-air news anchor generated entirely using artificial intelligence. Based on real-life news anchor Qiu Hao, the AI anchor proudly declared in his first video, “Not only can I accompany you 24 hours a day, 365 days a year. I can be endlessly copied and present at different scenes to bring you the news.” Duke University’s Reporters’ Lab is working on developing an AI-driven fact checking system, with an end goal of fact checking political speeches in real time. ClaimBuster, a similar piece of software developed at the University of Texas at Arlington, uses AI to flag problematic pieces of political speech which need fact checking. Both systems use artificial intelligence to automate and replicate parts of work that would have been performed by journalists. Significant advancements are also being made in artificial intelligence-based video editing. In 2016, IBM used its Watson AI system to create the world’s first AI-generated movie trailer. Just last year, the BBC let artificial intelligence loose on the British broadcaster’s significant archives to help create an hour-long documentary.
On-air presenters, fact checkers, and video editors are but a few positions that could be impacted by AI and automation in the near future, but AI is still not advanced enough to have a wholesale impact on overall newsroom staffing levels. What these technologies are far more likely to do over the short-term is change the role of the journalist and newsroom worker. AI and automation are more likely to create new types of jobs than eliminate them. As Nicholas Diakopoulos describes:
AI will help to enhance the speed and scale of news in routine situations, complement and augment journalists, and even create new opportunities for optimization and personalization that would not otherwise be possible, it still cannot do most newswork, and, in many cases, creates new tasks and forms of work. In short, the future of AI in journalism has a lot of people around.
So where does that leave newsroom managers, who are struggling to meet bottom lines? Will AI and automation be their saviour through tough economic times?
Can artificial intelligence and automation make money?
Over the short term, artificial intelligence and automation will have limited impact on reducing labour costs in modern newsrooms. Artificial intelligence experts are very expensive to hire, so fully embracing AI-based journalism may in fact cause newsroom labour costs to go up. So from an economic perspective, why should a beleaguered newsroom boss take a risk on artificial intelligence or automation?
A key element of attracting audiences in today’s Internet-focussed media ecosphere is being first with a story. If your newsroom can be first with meaningful stories, search and social media feed algorithms will favour your content, driving eyeballs and impressions to your online content over other outlets’. As we saw with our earthquake example, artificial intelligence-based content creation systems can create content quicker than humans, giving editors an edge over competitors. Using its Tracer social media monitoring system, Reuters detected the 2016 Brussels airport bombing two minutes before local media and ten minutes before the BBC reported it. A study looking at 31 different events found that the Reuters Tracer tool increased the speed of news alerts 84% of the time. Consistently being first with important news stories can drive higher television ratings and increase online page views, which can lead to higher advertising revenues.
Artificial intelligence systems can help a news organization provide a scope of coverage impossible with current newsroom staffing levels. The Associated Press (AP) has invested significantly in AI-based automation, including in its sports and business reporting. Since AP embraced the automation of its earnings reports journalism, it has been able to go from writing 300 earnings report articles per quarter to 3,700. Similarly, AP has been able to significantly increase its coverage of Minor League Baseball in the United States through the use of automated game recaps. In both examples, the AP has created new coverage markets and potentially attracted new audiences, in a way that would have been impossible using just its traditional journalists. As University of Toronto Professor Ajay Agrawal explains, AI presents the opportunity “to make something that has been comparatively expensive abundant and cheap. The task that AI makes abundant and inexpensive is prediction — in other words, the ability to take information you have and generate information you didn’t previously have.” In the case of journalism, the labour cost of sending reporters to cover countless minor league baseball games or to write thousands of quarterly earnings reports was previously prohibitive; through AI-based automation, that coverage can be made abundant and cheap.
The promise of personalization
In an era when news websites must increasingly compete with social media platforms, news aggregators and search engines for online audiences, how might artificial intelligence help managers and editors attract new users and keep them on their sites? One way may be to harness audience insight and use a combination of artificial intelligence and automation to personalize the online news experience for each individual user.
Perhaps a news reader that loves the latest sports news and craves local news doesn’t want to read articles about federal politics. Maybe a scientist wants to read about the latest advancements in technology, but struggles to find it because every time they visit their favourite news website it presents them with dozens of stories on Donald Trump. A personalized news site tailored to the interests of each individual would provide a relevance that might keep a person on a news site for longer periods of time.
We can see the effectiveness of this personalization in the way social media feeds operate. To different degrees, Facebook, Instagram and Twitter all serve content to users based upon their interests. Each platform determines a user’s interests based upon what types of social media content they most often interact with. By continually serving up relevant content that its users find interesting, each social media platform keeps its audience on its site.
Using web browser cookies or perhaps a user authentication system, news websites – with the permission of their audiences – could collect information on their audience’s browsing habits and build a profile of interests, which would help determine the best stories to show each user. News organizations have already experimented with and implemented a variety of personalization techniques. In 2013, The New York Times launched a ‘Recommended for you’ section, and the paper more recently launched a personalization team. NPR’s “NPR One” app feeds its audiences a personalized stream of national shows, local newscasts and podcasts based upon active and passive listener behaviours.
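One way the profile-building and story-selection steps above could fit together is sketched below. The story titles, topic tags and scoring rule are all hypothetical; production recommendation systems use far richer signals, but the core idea of ranking candidate stories against a profile built from reading history is the same.

```python
from collections import Counter

# A hypothetical sketch of interest-based story ranking: reading history
# (gathered with the audience's permission) builds a topic profile, and
# candidate stories are ordered by how well their topics match it.
def build_profile(reading_history):
    """Count how often each topic appears in the stories a user has read."""
    return Counter(topic for story in reading_history
                   for topic in story["topics"])

def rank_stories(candidates, profile):
    """Order candidate stories by overlap with the user's interest profile."""
    def score(story):
        return sum(profile[t] for t in story["topics"])
    return sorted(candidates, key=score, reverse=True)

history = [
    {"title": "Canucks win", "topics": ["sports", "local"]},
    {"title": "New arena proposed", "topics": ["local"]},
]
candidates = [
    {"title": "Federal budget tabled", "topics": ["politics"]},
    {"title": "Local team signs star", "topics": ["sports", "local"]},
]
profile = build_profile(history)
print(rank_stories(candidates, profile)[0]["title"])
```

For the sports-and-local reader described earlier, the sports story outranks the federal politics story, which is precisely the relevance effect personalization is after.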
Swiss journalist Titus Plattner, who did significant research on news personalization as a fellow at Stanford, has outlined some of the risks associated with such systems. In brief Plattner believes there is a risk that personalization could reinforce filter bubbles and create an echo chamber of discourse, not expose people to new ideas, fail to highlight stories of a public interest, not foster a common societal viewpoint and eliminate the sense of privacy of individual users.
One solution to some of these challenges would be to avoid handing all editorial control over to such personalization systems. Hybrid systems could be developed that would allow editors to insert editorially important stories into a user’s news feed. Perhaps ratios could be developed so that only a certain percentage of content shown to users is based upon their use habits.
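The hybrid-ratio idea above can be made concrete with a short sketch. The 30% editorial share, story titles and feed size are illustrative choices, not recommendations; the point is simply that editors retain guaranteed slots while the rest of the feed is personalized.

```python
# A sketch of a hybrid feed: editors reserve a fixed share of each feed for
# editorially important stories, and the remainder is filled from a
# personalized ranking. The 30% ratio is an illustrative assumption.
EDITORIAL_SHARE = 0.3

def build_feed(editorial_picks, personalized, size=10):
    """Mix must-run editorial stories with personalized ones at a set ratio."""
    n_editorial = max(1, int(size * EDITORIAL_SHARE))
    feed = editorial_picks[:n_editorial]
    # Fill the remaining slots from the personalized ranking, skipping
    # anything the editorial picks already cover.
    for story in personalized:
        if len(feed) >= size:
            break
        if story not in feed:
            feed.append(story)
    return feed

feed = build_feed(["Election results", "Flood warning"],
                  ["Hockey recap", "Tech review", "Election results"],
                  size=4)
print(feed)
```

Tuning the ratio gives editors a direct lever against filter bubbles: the higher the editorial share, the more every reader sees the same public-interest stories.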
What are the dangers and risks of artificial intelligence and automation?
Artificial intelligence and automation systems may feel like a godsend for media managers tasked with ensuring their organizations remain relevant in an environment that faces constant disruption, but implementing these systems comes with significant risks and challenges.
Artificial intelligence-based systems are often reliant on data and user feedback, and in a journalism setting the potential bias and inaccuracy of such data presents editors with a unique set of challenges. For example, health data for women, LGBTQ people, immigrants and low-income people has been found to be more challenging to acquire. Investigative reporting based on datasets that don’t adequately represent certain communities will inadvertently marginalize their concerns. Supervised learning systems are also reliant on humans to classify and label data, and these human actors can impart their biases on these models.
Automated content production systems, which create content with little editorial oversight, may present an opportunity for bad actors to manipulate their underlying data feeds, which could cause misleading information to be shared. In the case of the Associated Press, the injection of bad data into its earnings report automation systems could allow an individual to manipulate a stock price for their own gain.
As editors and managers contemplate the use of automated content production systems, these newsroom leaders may have to grapple with issues of accountability when such writing features errors or even potentially libels an individual. Research looking at the United States legal system has found that plaintiffs would face significant challenges proving that automated systems met the legal standard for actual malice, but many questions remain related to the legal implications of AI-generated content.
These examples highlight the importance of human oversight and error detection as part of any AI or automated journalism system. Editorial roles will have to be developed to ensure that data is valid and to double check automated reports for anomalies. The automated systems themselves must also include robust error checking, analyzing historical results to flag any generated stories that appear to be based on statistical outliers.
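The kind of outlier check described above might look like the following sketch. The three-standard-deviation threshold and the earnings figures are illustrative assumptions; a real system would tune the rule to each data feed, but the principle of holding anomalous figures for human review is the same.

```python
import statistics

# A sketch of pre-publication error checking: an automated story's key
# figure is compared against historical values and flagged for editor
# review if it strays too far from the norm.
def needs_review(value, history, max_deviations=3.0):
    """Flag a value that deviates sharply from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > max_deviations

quarterly_eps = [1.10, 1.15, 1.08, 1.12, 1.11]
print(needs_review(1.13, quarterly_eps))  # in line with history
print(needs_review(9.99, quarterly_eps))  # likely bad data: hold for an editor
```

A check like this would have no opinion on whether an unusual figure is a scoop or a data-feed attack; it simply routes the story to a human, which is exactly the oversight role argued for above.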
Will artificial intelligence be regulated?
At this point there has been little government regulation introduced specifically related to artificial intelligence; the few laws currently on the books relate mostly to self-driving cars. One regulatory focus that might cause newsroom editors and managers to rethink their automation strategies is the impending and recently implemented changes to privacy laws around the world. In the post-Cambridge Analytica world, citizens around the globe are more concerned than ever about protecting their personal data. With so many artificial intelligence systems reliant on some form of user or audience data, organizations that implement AI systems will have to ensure that those systems fall in line with privacy regulations.
In May 2018, the European Union introduced the General Data Protection Regulation (GDPR), which implemented sweeping privacy protections for personal information flowing in and out of the EU. Here in Canada, both the Privacy Act and the Personal Information Protection and Electronic Documents Act are being modernized. Newsroom managers who oversee automated systems that use audience data will need a keen understanding of how privacy regulations, both international and domestic, impact their work. As Facebook found out post-Cambridge Analytica, even if a certain use of user data is technically legal, online audiences are increasingly sensitive about the use of their data. Newsroom managers will need to develop policies, procedures and messaging on how they use audience data and how that use is communicated to users.
Are organizations ready for AI?
Introducing artificial intelligence and automation based systems presents unique challenges to any organization. The shift towards automation will require significant changes in organizational culture, both at the management and employee level. As Tim Fountaine wrote in the Harvard Business Review, “Leaders also often think too narrowly about AI requirements. While cutting-edge technology and talent are certainly needed, it’s equally important to align a company’s culture, structure, and ways of working to support broad AI adoption. But at most businesses that aren’t born digital, traditional mindsets and ways of working run counter to those needed for AI.”
In his article “Building the AI-Powered Organization” Fountaine identifies four tasks that managers will need to take on to implement AI successfully: explaining why, anticipating unique barriers to change, budgeting as much for integration as for technology, and balancing feasibility and investment levels. Applying these tasks to a newsroom setting, we can see some challenges that media managers will face when introducing artificial intelligence systems.
Employees, particularly in newsroom settings, are often resistant to change, and significant efforts by newsroom leaders will be needed to convince them of the value of automation adoption. “A compelling story helps organizations understand the urgency of change initiatives and how all will benefit from them,” writes Fountaine. “This is particularly critical with AI projects, because fear that AI will take away jobs increases employees’ resistance to it.” Managers and editors will have to convince their employees that automation can create opportunities to reach new audiences, provide a greater breadth of coverage and take certain mundane tasks off their plates.
Journalism environments also could produce unique barriers to change that newsroom managers will be forced to overcome. For example, many newsrooms and media organizations across the country are unionized, and these unions may be resistant to changes in job functions or staffing complements. Managers will have to work alongside union representatives to ensure that automation is implemented in a way that respects workplace agreements. Media organizations are often quite siloed, and this could provide another barrier to newsroom automation. Many newsrooms do not have integrated roles for developers or other technicians working directly alongside editorial staff. Managers will need to create new interdisciplinary teams of both technically inclined and editorially focussed employees throughout the newsroom to ensure effective adoption of automation technology.
Reduced budgets and smaller staffs, coupled with a 24-hour news cycle, rarely provide managers and editors dedicated time for planning and strategizing for large-scale organizational changes such as artificial intelligence and automation adoption. The Future Today Institute (FTI) determined that very few newsroom staff are thinking about the long-term future of news in the age of artificial intelligence and automation. 53% of newsroom staff surveyed by the institute said they rarely think about the next 10-20 years of journalism, while 78% reported they never engage in any longer-term planning or scenario mapping. Even in the shorter term, FTI found that 69% of newsroom staff aren’t conducting analysis on emerging tech trends and how they will impact the news industry over the next 5-10 years. Newsroom managers will have to purposefully set aside time for technological strategy development to ensure that their organizations are not left behind in a competitive space increasingly adopting AI.
Newsroom managers will need to ensure that a significant amount of automation budgets goes to training frontline staff and other newsroom leaders. In particular, frontline journalists will need to become more data-aware by being provided training on how to manipulate, analyze and recognize the limitations of data. Managers will also need to shift their skillset, basing more decisions on data and analytics rather than solely intuition.
Finally, newsroom managers will need to ensure that AI and automation projects are feasible. Automation shouldn’t be done for automation’s sake, and calculated, informed decisions must be made regarding implementation. A focus on realistic and easily implementable AI projects in the early stages of adoption may produce some easy wins, which will help build the case for further automation. Managers also need to recognize that not every newsroom needs automation; a clear business case must be made for each situation to ensure that the investment in AI is worthwhile.
A major motivator for many news organizations in deciding whether to adopt artificial intelligence technologies is whether their competitors are already doing so. Here in Canada, for example, would the CBC be willing to forgo artificial intelligence adoption if CTV were already using automation in its news production? In a marketplace where traditional newsrooms already struggle to compete with emerging online outlets, social media platforms and the impacts of big search players like Google, managers should be reluctant to cede advantages to their competitors.
Whatever their personal thoughts on adoption, newsroom leaders must recognize that artificial intelligence and automation technologies are not going away, and leadership teams will need to develop strategies for implementing them, or for surviving without them, in the burgeoning AI era. Investment in artificial intelligence is increasing, but so is concern about the risks connected to the technology. AI approaches such as machine learning have created many new journalism capabilities and tools, including automated content creation, data-mining-driven investigative journalism, automated comment moderation and powerful forms of social media monitoring. These approaches have established new ways to tell and discover powerful stories that would have been impossible with the labour of hard-working journalists alone. Much of the work of journalists, at least in the short term, cannot be replaced by artificial intelligence and automation, but shifts in the roles of newsroom staff could be significant. Artificial intelligence may not save beleaguered newsroom managers labour costs, but it can create fresh revenue sources and audience growth opportunities by expanding the scope of coverage, accelerating the reporting of significant breaking news and introducing a level of personalized news service that will retain audiences for longer periods.
The financial risk involved with implementing AI and automation is significant and success is not guaranteed. Newsroom leadership will face new ethical challenges in the automated newsroom related to an increasing dependency on data. Editors will have to confront issues related to data bias, inclusive datasets, inadvertent libel and the threat of bad actors tampering with data. Considerable oversight over these new forms of technology will be required. Regulation of artificial intelligence remains a large question mark, and serious effort will need to be dedicated to protecting personal data. Managers and other newsroom leaders will need to oversee substantial shifts in their organizations. Staff will need to be convinced of the merits of AI, given assurances that their jobs are being changed not eliminated and provided training to gain skills needed for success in this new digital world.
A renewed era of journalism could evolve from a smart and effective use of artificial intelligence and automation. New stories could be told in ways unimagined just a decade or two ago. With strong leadership from newsroom managers, artificial intelligence and automation could help ensure the continued importance of the craft of journalism.
Agrawal, Ajay, Joshua S. Gans, and Avi Goldfarb. “What to Expect From Artificial Intelligence.” MIT Sloan Management Review, Spring 2017, 23.
Broussard, Meredith, Nicholas Diakopoulos, Andrea L. Guzman, Rediet Abebe, Michel Dupagne, and Ching-Hua Chuan. “Artificial Intelligence and Journalism.” Journalism & Mass Communication Quarterly 96, no. 3 (September 2019): 673–95.
Diakopoulos, Nicholas. Automating the News: How Algorithms Are Rewriting the Media. Book, Whole. Cambridge: Harvard University Press, 2019.
Edmonds, Rick. “Newspaper industry lost 3,800 full-time editorial professionals in 2014”, Poynter, https://www.poynter.org/reporting-editing/2015/newspaper-industry-lost-3800-full-time-editorial-professionals-in-2014/ (accessed Nov. 20, 2019).
Fountaine, Tim, Brian McCarthy, and Tamim Saleh. “Building the AI-Powered Organization.” Harvard Business Review, July-August 2019.
Future Today Institute. “The Global Survey on Journalism’s Futures”, 2017.
Galily, Yair. “Artificial Intelligence and Sports Journalism: Is It a Sweeping Change?” Technology in Society 54 (August 2018): 47–51.
Kuo, Lily. “World’s First AI News Anchor Unveiled in China.” The Guardian, November 9, 2018, sec. World news. https://www.theguardian.com/world/2018/nov/09/worlds-first-ai-news-anchor-unveiled-in-china.
Levy, Frank, and Richard J. Murnane. The New Division of Labor: How Computers Are Creating the Next Job Market. New York, NY: Russell Sage Foundation, 2005.
Lewis, Seth C., Amy Kristin Sanders, and Casey Carmody. “Libel by Algorithm? Automated Journalism and the Threat of Legal Liability.” Journalism & Mass Communication Quarterly 96, no. 1 (March 2019): 60–81.
Parasuraman, R., T.B. Sheridan, and C.D. Wickens. “A Model for Types and Levels of Human Interaction with Automation.” IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 30, no. 3 (May 2000): 286-97.
Peiser, Jaclyn. “The Rise of the Robot Reporter.” The New York Times, February 5, 2019, sec. Business. https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html.
Plattner, Titus. “Ten Effective Ways to Personalize News Platform.” Medium, June 5, 2018. https://medium.com/jsk-class-of-2018/ten-effective-ways-to-personalize-news-platform-c0e39890170e.
Plattner, Titus. “Five Risks of News Personalization.” Medium, June 13, 2018. https://medium.com/jsk-class-of-2018/five-risks-of-news-personalizations-5bdc97fdbdcc.
Public Policy Forum. The Shattered Mirror: News, Democracy and Trust in the Digital Age. Ottawa, Ont., 2017.
Tatalovic, Mico. “AI Writing Bots Are about to Revolutionise Science Journalism: We Must Shape How This Is Done.” Journal of Science Communication 17, no. 01 (March 6, 2018).
Wang, Shan. “Out of Many, NPR One: The App That Wants to Be the ‘Netflix of Listening’ Gets More Local.” Nieman Lab. Accessed December 1, 2019. https://www.niemanlab.org/2016/01/out-of-many-npr-one-the-app-that-wants-to-be-the-netflix-of-listening-gets-more-local/.