LONDON — Trust is a subject to which I frequently return when thinking about how the media world is developing. Without trust, we don’t really have meaningful lives — we can’t develop friendships, make a transaction in a shop, or frankly, walk down the street. Often, that trust is vested in individuals such as friends but also professionals such as doctors, legal advisers and politicians. Since surveys began, radio has consistently emerged as the most trusted medium in most European countries.
Generative AI has thrown a wild card into many areas of our lives, especially the media. When budgets are under pressure, it’s easier to make a one-off investment than to employ and manage members of staff who cannot work 24 hours a day. In radio, which lacks a visual dimension, the illusion of a human presenter may be even simpler to create. My professional life has been built around innovation, but when it comes to AI, I have misgivings.
For me, the relationship between the presenter and the audience is fundamental, and once that is violated, it is difficult to know how to react. We use all types of tech tools in our everyday lives, but those remain instruments, not objects with which we can develop personal relationships. We all know the automated prompts that help us reach the correct department of the bank; frustrating though they are, they are only signposts.
I believe that in radio, there is no substitute for the experienced journalist reporting from the front line, whose motivation and ethics are recognized and transparent. In some contexts, using AI might seem more natural, as long as transparency is maintained. For instance, we see automated weather forecasts in Germany, and automated greetings and episode titles for podcasts in Sweden.
Interestingly, and more authentically generative, Norway is using translated voiceovers for major news bulletins. Depending on their accuracy, this could be a real advantage in integrating different communities or people living in the country temporarily. More controversially, RadioGPT, devised by Futuri Media, creates a fully AI-generated, local-sounding radio station by “scraping” web information about different communities and transforming it into voiced on-air content. As a former managing editor of a national station, editorial responsibility for such a service would keep me awake at night!
Making a stand
It is against this background that I was pleased to see recent standards defined by Deutsche Welle and the BBC. Hopefully, these will become the ethical norm across the industry, though that might be too much to hope for. While recognizing Generative AI’s potential in back-office settings, the BBC offers its audience reassurance in three areas: transparency, the importance of human talent and creativity, and the inaccessibility of its output to systems that could “scrape” and distort it. Essentially, these pledges mean that listeners will not be kept in the dark about the use of Generative AI, maintaining the BBC’s position of trust. At the same time, the BBC recognizes there is no substitute for human-to-human communication and storytelling, even though some new tools may support it.
The third criterion is highly significant. It means if we find the BBC quoted in AI-generated text, it is definitely second-hand, so we should treat it with caution and go to the source to verify it. The BBC, with its strong news reputation, has clearly decided not to grant spurious credibility to AI-generated sources.
Deutsche Welle has devised similar assurances: It emphasizes the centrality of the journalist, who might be aided by AI in their analyses. However, anything generated through crunching big data is subject to final human review before publication. The broadcaster is interested in subtitling videos using AI, making content more widely available. In addition, it intends to increase fact-checking capacity, recognizing that Generative AI makes the world of information even less dependable. Deutsche Welle has also committed to avoiding AI-generated images and to raising awareness among its journalists of how societal biases infiltrate existing datasets and shape how information is presented. Clearly, something along these lines, modified according to changing circumstances, will contribute to maintaining the relationship between broadcasters and consumers.
As I was finishing writing this, I saw the report of a conference on trust organized by the Reuters Institute. It was reassuring to note that the speakers did not foresee catastrophe, but rather saw Generative AI as the latest of many threats journalism has faced over the years. I sincerely hope that assessment does not prove too complacent. A former EBU colleague, now strategy director at Dutch public broadcaster NPO, summed up trust as follows: “Generative AI is more than just a new set of tools; it reshapes how we create stories and the trust they carry. Trust hinges on knowing the faces and brands behind the news. If AI clouds that human touch, we risk losing the trust built on that connection.” Communicating with audiences transparently, while paying close attention to the human relationship on which radio is based, should, more than ever, inform our strategy in these exciting and hazardous times.
The author was head of Radio at the EBU until 2020, and before that managing editor of one of the BBC’s national stations. He currently advises media organizations.