Editorial responsibility in times of generative AI

While artificial intelligence has been used for some time across media companies for tasks such as running recommendation engines, translating texts, transcribing interviews or interacting with voice assistants, the arrival of generative AI models has shifted the focus to the editorial process and its output. This prompts a number of questions: What uses of GenAI are legitimate in content creation? Where are the limits? How do these new tools affect journalistic principles?

These questions are not only addressed internally by media players but have also become a hot topic within the public debate. Being unable to answer them clearly may undermine the trust that audiences, advertisers, investors and regulators place in media products and companies.

Everything changes to keep things unchanged

Some things do not change despite technological innovation: Publishers are responsible for the accuracy, fairness, originality and quality of every word in their stories. However, the automated generation of content calls into question the traditional meaning of this responsibility and the role of those involved in upholding it.

There is a widespread perception and growing evidence that, at their current stage of development, GenAI tools are more prone than humans to errors, plagiarism, bias and copyright infringement.

However, the genie is out of the bottle. Adopting GenAI tools is unavoidable, given their ability to successfully automate repetitive and tedious tasks and augment certain creative ones. Tech vendors share this perception and are actively updating their portfolios to add GenAI capabilities. Thus, it will become difficult for broadcasters and other media players to avoid GenAI.

For radio and audio publishers, the key challenge is how to increase productivity and efficiency with these tools without compromising editorial responsibility.

Editorial guidelines

Some pioneering companies have already drafted and, in some cases, publicly shared voluntary guidelines on using AI and/or GenAI. This responds to users’ concerns about AI in media. These guidelines are also a powerful tool to help newsrooms and content creators. Your legal advisors will tell you they could be an excellent way to avoid legal headaches.

At South 180, we analyzed 17 of these formal guidelines developed by leading broadcasters, publishers, news agencies and self-regulatory bodies from eight European countries plus the U.S. and Canada. So, what do these guidelines have in common? Notably, they are built around journalistic principles and practices, with newsrooms in mind. It is unclear whether the guidelines also cover staff working on entertainment, educational or other types of content.

All the guidelines acknowledge the potential of GenAI to help innovate and develop services that provide additional value to users. However, there is common agreement on the need for a final human decision and human responsibility for the material distributed.

Disclosure is another shared element — all the guidelines indicate the need to explicitly inform the public about using GenAI tools, such as directly before and after a broadcast or at the beginning of an online or social media story. However, the degree of disclosure and the minimum GenAI intervention to qualify for disclosure varies among the guidelines. There are areas of disagreement too — some guidelines strictly prohibit using speech synthesis, others allow it under certain circumstances.

Other concerns are specific to certain radio broadcasters; for example, using dialect models for speech-to-text applications to reflect the social diversity within the territories a broadcaster serves. In one case, feeding unpublished material into machine-learning systems is explicitly prohibited — a reasonable journalistic precaution.

We also identified attempts to draft recommendations at industry level, such as that by the Partnership on AI, set up mostly by tech companies, including OpenAI, ChatGPT’s creator. Unlike guidelines issued by individual media players, none of these frameworks is binding. This self-regulatory approach, whether company- or industry-wide, may not forestall the development and enforcement of formal regulation, as expected from the EU’s Artificial Intelligence Act and similar discussions in other jurisdictions such as the U.S. and Australia.

Strategizing editorial responsibility

Based on an analysis of these guidelines, here are some personal recommendations for dealing with GenAI and editorial responsibility:

  • Don’t bury your head in the sand. GenAI tools have a huge potential to transform your daily routines. You should therefore explore them to avoid lagging behind your competitors.
  • Create your own framework to define your approach to GenAI — your purpose and values should be an excellent starting point.
  • Prevent disruption and confusion by tackling the issue without delay. If you feel that you don’t have the required resources or skills, ask for support to kick off the process. This disruption is not optional, and laggards will pay a high price.
  • Evaluate the business risks involved and consult your legal advisors about the current and potential implications of using your own GenAI models and applications, and third-party ones. In the latter case, pose these questions to your providers too.
  • Prioritize your ethical approach to GenAI. Consider defining your own GenAI editorial guidelines and making them public. This will not only provide guidance to your staff but also — and very importantly — preserve your audiences’ trust.

The author is co-founder and research director at South 180.

This article was taken from a RedTech special edition, “Radio Futures: AI Is Now Here!”
