
As generative artificial intelligence systems develop rapidly, journalists around the world are discussing the ethical principles and values that should guide the adoption of these technologies in the newsroom. One important step in this process is developing guidelines to help editors and reporters decide when and how these innovations can be safely incorporated into journalism practice.
Motivated by this discussion, TIPI recently participated in a panel titled “Developing an Ethical AI Policy for Your Newsroom” at the latest conference organized by the National Institute for Computer-Assisted Reporting (NICAR). The panel, moderated by Darla Cameron, Chief Product Officer at The Texas Tribune, featured four other participants, including Silvia DalBen Furtado, a student affiliate at TIPI and PhD candidate at the School of Journalism and Media at UT Austin.
Silvia is currently conducting a study with Tina Lassiter, a PhD student at the School of Information, evaluating journalists’ perceptions of regulations and guidelines on the use of generative AI technologies in US newsrooms. During the panel, they presented preliminary results from this study, based on a survey (N=310) and in-depth interviews (N=23), which highlighted these key findings:
1. Many journalists are already using or experimenting with generative AI technologies in their routines, and they agree there should be some regulation or guidance regarding their use in newsrooms.
2. Many of them don’t seem to understand the difference between AI and generative AI. Thus, beyond discussing AI policies, it is essential to clearly explain the historical evolution of AI as a field and examine how AI technologies have shaped journalism practices over the decades.
3. Our findings indicate that many AI guidelines have been developed through a top-down approach. However, the reporters and editors who participated in this study expressed a desire for more active involvement in the discussion, advocating for a bottom-up approach.
4. Moreover, journalists expressed interest in standardized guidelines established by a consortium of news organizations, rather than developed individually by each newsroom. Many fear that government regulation could lead to censorship of journalistic activity.
The panel also included Andrew Ford, AI Experiences Journalist at USA Today/Gannett; Josh Hinkle, Director of Investigations & Innovations at KXAN-TV; and Cynthia Tu, Data Reporter and AI specialist at Sahan Journal.
Josh Hinkle presented the results of a nationwide survey with more than 6,000 audience responses, which found that 94% of respondents wanted journalists to disclose their use of AI, 87% wanted to know why AI was being used, and 94% wanted to know how journalists would ensure the work remains ethical and accurate.
Reflecting on how a newsroom could turn this discussion into an AI policy, Cynthia Tu presented five key considerations for creating such guidelines:
1. Define what “AI” means for your newsroom.
2. Conduct interviews with key stakeholders.
3. Host an AI policy workshop within your newsroom or organization.
4. Review AI policies published by other newsrooms, and explore resources and templates from supporting organizations, such as the Poynter Institute.
5. Set a timeline for how often your AI policy will be re-evaluated and updated.
We ended the panel with the recognition that technologies are neither good nor bad on their own; they become good or bad depending on how humans use them. Thus, beyond adopting guidelines to regulate the use of these technologies in newsrooms, journalists also need to engage in critical coverage of the social impacts of AI, coverage that goes beyond the latest hype and situates these technologies in their historical, cultural, political, and economic contexts.