New and emerging technologies are usually accompanied by standards that set out, among other things, operating parameters and optimal performance specifications. Typically, the standard appears in tandem with the technology, but there are occasions when standardization is retroactive, reacting to existing techniques and systems. This is the case with artificial intelligence, which, for decades, was largely theoretical but, in the last 10 years, has become an integral — if not always wholly welcome — part of life and business, including radio broadcasting.
Early initiatives aimed at formalizing guidelines and working practices for AI included the Framework on Responsible Practices for Synthetic Media (AI-produced material) drawn up by the Partnership on AI, established in 2016 by, among others, BBC R&D, Adobe, CBC, Meta, Microsoft, Respeecher and OpenAI. Adobe, Microsoft and the BBC, together with Truepic, Arm and Intel, co-founded the Coalition for Content Provenance and Authenticity (C2PA) in 2021, which has developed its own standards and cryptographically signed metadata containing authentication information about pieces of media that can be embedded into audio files.
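C2PA's actual manifests use standardized assertion formats and X.509 certificate chains, but the underlying idea — a signed claim that binds a content hash to provenance information, so any later edit to the audio breaks verification — can be illustrated with a minimal Python sketch. The key, field names and helper functions below are invented for the example and are not part of the C2PA specification:

```python
import hashlib
import hmac
import json

# Stand-in for a real signing certificate; C2PA uses X.509/COSE signatures.
SECRET_KEY = b"demo-signing-key"

def make_manifest(audio_bytes: bytes, producer: str) -> dict:
    """Build a simplified provenance manifest for an audio asset."""
    claim = {
        "producer": producer,
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(audio_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the audio still matches the claim."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was altered or signed by someone else
    return hashlib.sha256(audio_bytes).hexdigest() == manifest["claim"]["sha256"]

audio = b"\x00\x01fake-pcm-samples"
manifest = make_manifest(audio, "Example Radio Newsroom")
print(verify_manifest(audio, manifest))            # True: untouched audio verifies
print(verify_manifest(audio + b"edit", manifest))  # False: tampering is detected
```

The point of the design is that the manifest travels with the file: a downstream broadcaster can verify provenance without contacting the original producer, and any post-signing edit — a spliced word in a cloned voice, say — is detectable.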
Anna Bulakh, head of ethics at Respeecher, highlights current industry best practices and draft regulations but agrees there is a gap in industry-specific regulations for setting clear standards for voice cloning in content creation. “We have already adopted the C2PA standard for content credentials, which helps verify the origins of digital content,” Bulakh clarifies. “These measures are important to mitigate the malicious use of voice cloning. The most critical topics now for radio and broadcasting are probably securing explicit consent for voice cloning and ensuring live presenters’ roles are respected when using their likenesses.”
Need to quantify risks
AI standardization has moved on apace in the last 12 months, spurred on by the rapid and widescale implementation of the technology. The European Commission has monitored the rise and development of AI through its AI Watch project, which doubtless provided the background for the European Union AI Act adopted by the European Parliament on March 13 and approved by the Council of the EU in May. The act intends to “ensure that AI systems in the EU are safe and respect fundamental rights and values.” The requirements of the AI Act for general-purpose AI models and systems will be enforced across the 27 EU member states by the European AI Office, which will also represent the EU regarding international cooperation on AI.
European standards organizations ETSI (European Telecommunications Standards Institute) and CEN/CENELEC (European Committee for Standardization/European Committee for Electrotechnical Standardization) have both initiated programs to produce AI standards. While ETSI is primarily concerned with the safety of AI and machine learning systems, CEN/CENELEC is leaning more toward trust, ethics and technical issues through its Focus Group Road Map on AI, the DIN/DKE German Standardization Roadmap for AI and Joint Technical Committee 21, which will be responsible for developing and adopting AI and related data standards.
In the United States, the National Institute of Standards and Technology (NIST) and the Society of Motion Picture and Television Engineers (SMPTE) have issued guidelines on the use and scope of AI. NIST began its standardization process in January 2023 with the AI Risk Management Framework (AI RMF), nonmandatory guidance designed to help organizations incorporate trustworthiness considerations into the design, development and use of AI systems. This now forms the basis of the draft AI RMF Generative AI Profile issued on April 29 this year, intended to help manage the risks of generative AI.
SMPTE acknowledges that AI technical standardization is still in its infancy. Its Artificial Intelligence and Media report, published at the end of last year, outlines the technological and ethical issues involved and gives an overview of existing standards and the opportunities for future ones. “We wanted to provide a good summary of some ethical considerations and best practices around responsible use of AI in media,” comments Fred Walls, technology committee co-chair. “The EU is ahead of the U.S. in creating policies governing responsible AI, and the new act is a step in an important development. Organizations need to be aware of and quantify the risks that might apply to their specific use cases. One area of concern for radio would be the proliferation of deepfake audio. State and federal legislation has been enacted or proposed in numerous jurisdictions to ensure broadcasters do not broadcast misleading AI-generated content, which means broadcasters must be able to rigorously verify the authenticity of content.”
A long time coming
U.K. communications regulator Ofcom has published several documents relating to AI in the last year, including What Generative AI Means for the Communications Sector in June 2023 and its Strategic Approach to AI 2024/25 in March this year. In a statement, Ofcom said it is important for broadcasters and audiences alike to explore new and emerging technologies, including synthetic media, as these become an increasing part of daily life. However, it reminds all its licensees of their “ongoing responsibility to comply with the Broadcasting Code in order to protect audiences from harm and maintain the high levels of trust in broadcast news as well as to ensure individuals and organizations are not treated unfairly and/or their privacy is not unwarrantably infringed.”
From the broadcaster’s perspective, BBC Director of Nations Rhodri Talfan Davies outlined the corporation’s plans for an approach to generative AI in October last year. On the issue of ethics, Davies said the BBC would always act in the public’s best interest, prioritize talent and creativity — emphasizing that “no technology can replicate or replace human creativity” — and be open and transparent. As part of the third principle, he explained that the broadcaster “will never rely solely on AI-generated research in our output.”
Bauer Media Group Director of Government Affairs and Public Policy Philip Pilcher observes that proper copyright protection is a significant parameter when using AI in broadcasting. He explains, “This is because gatekeeper platforms, which have started to integrate generative AI, compete for users’ attention and, potentially, ad revenues with copyright owners, including radio broadcasters.” He adds, “Generative AI tools largely rely on text and data mining, which may include copyright-protected content. In such cases, under the Digital Single Market Copyright Directive, a copyright owner may reserve text and data mining for itself to the effect that third-party providers, including generative AI tools, may not exploit the work concerned. However, it’s difficult to assess whether such reservations are respected by generative AI tools.”
As a result, Pilcher notes that the EU AI Act obligates generative AI providers to clarify policies that identify and respect rightsholders’ reservations of rights, saying, “The AI Act further establishes an obligation for generative AI providers to draw up and make publicly available a ‘sufficiently detailed summary’ of the content used for training their model.” He emphasizes, “The AI Office, which is entrusted with monitoring compliance with this obligation, should ensure that the summary in question enables rightsholders to detect and, where necessary, take action against copyright infringements.”
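The reservation of rights Pilcher describes can be expressed in machine-readable form. One emerging convention is the W3C community-developed TDM Reservation Protocol (TDMRep), under which a publisher can signal a reservation via a `tdm-reservation` HTTP header or a `/.well-known/tdmrep.json` file. The helper below is an invented illustration of how a well-behaved crawler might honor the header — it is not an official library, and the cautious treatment of a missing header is a design choice for the example, not a legal interpretation:

```python
def may_mine(headers: dict) -> bool:
    """Return True if text-and-data mining appears permitted for a response.

    Under TDMRep, 'tdm-reservation: 1' asserts the rightsholder's
    reservation and '0' waives it. A missing or malformed header leaves
    the legal default in place, which this sketch conservatively treats
    as not permitted.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    value = normalized.get("tdm-reservation")
    if value == "0":
        return True   # rights holder has explicitly waived the reservation
    return False      # "1", malformed, or absent: do not mine

print(may_mine({"TDM-Reservation": "0"}))  # True
print(may_mine({"tdm-reservation": "1"}))  # False
print(may_mine({}))                        # False
```

In practice, a broadcaster publishing online could set the header site-wide, putting the burden on AI providers — as the AI Act now requires — to identify and respect the signal.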
AI standardization was a long time coming but, once underway, has moved relatively quickly. Part of the challenge has been less about quantifying the technology and more about laying out the ethical issues surrounding something that could replace humans creatively and practically. The groundwork is being laid now, but, as with AI itself, there is still a long way to go.
The author trained as a radio journalist and worked for British Forces Broadcasting Services Radio as a technical operator, producer and presenter before moving into magazine writing during the late 1980s. He recently returned to radio through his involvement with an online station near his home on the south coast of England.
This article was taken from the RedTech special edition “Radio Futures: Keeping an eye on AI.”