With the recent AI boom, several jurisdictions have begun promulgating legislation to address the risks posed by unregulated AI use. Alongside this legislative push, there is an international effort to develop ethical guidelines and frameworks to ensure that AI is not used in ways that harm society.
Governments and international bodies are actively creating regulatory frameworks to manage the development and deployment of AI technologies. A prominent example is the European Union’s Artificial Intelligence Act, which establishes a regulatory framework based on the level of risk assigned to a specific AI technology: the greater the risk an AI system poses, the more stringent the requirements that apply. Another example is the proposed Deepfakes Accountability Act in the United States, which aims to guard against the malicious use of deepfake technologies. In addition, institutions such as the National Institute of Standards and Technology (NIST) have released frameworks to guide organizations in managing the risks introduced by AI technologies.
Threat of deepfakes
Although AI offers radio stations several potential benefits, including personalized content, user engagement analytics and improved listener retention, AI-generated content such as deepfake audio can be used to spread misinformation, manipulate public opinion and damage reputations. Deepfakes are synthetic media created with deep learning techniques to replicate a person’s “likeness,” convincingly recreating their voice and visual appearance. A recent example of the nefarious use of deepfakes is the explicit deepfake imagery of American pop singer Taylor Swift that was disseminated across the internet.
Because deepfakes can replicate human voices with high accuracy, it is challenging for the average listener to distinguish genuine content from simulated or fake content. Radio stations could face backlash if they are found to have relied on AI-generated content without verifying or disclosing it.
Key regulatory measures
It is crucial that radio stations implement appropriate checks and balances to guard against the risks identified above.
Some of the key regulatory measures radio stations should be aware of include:
- Ethical standards and guidelines — Adhering to emerging international guidelines on the responsible and ethical use of AI is a crucial step for radio stations seeking to mitigate the risks posed by AI technologies.
- Organizational policies — It is strongly recommended that radio stations develop and maintain internal policies regulating employees’ use of AI. Such workplace policies should regulate, among other things, acceptable use, restrictions on use, and disciplinary processes for misuse of AI by employees.
- Transparency and disclosure — Given the high degree of accuracy with which deepfakes can recreate a person’s voice, it is crucial that radio stations clearly flag content that is generated by, or incorporates, AI. Being transparent upfront about the use of AI reduces the risk of misinformation and reputational harm.
- Checks and balances — Radio stations should establish appropriate checks and balances to verify the sources and credibility of any user- or audience-provided content (for example, voice recordings). Furthermore, radio stations should ensure that their employees are adequately trained on the risks of AI, how to use AI technologies, and how to implement the relevant organizational policies.
Harms and risks
The absence of robust AI regulations can expose radio stations to some of the following harms and risks:
- Increased misinformation — Without regulation, human oversight or proper governance, radio stations could be used to spread malicious or fake AI-generated content, or be targeted by misinformation campaigns (especially if a station were to develop a reputation for not fact-checking its content).
- Zero trust society — Radio stations depend on the loyalty of their audiences. If a station were found to be spreading misinformation or simulated content, even without malicious intent, its audiences could lose trust in the authenticity of the content they consume, eroding trust in radio stations and leading to a decline in their credibility.
- Legal and ethical violations — Unregulated use of AI can violate privacy, intellectual property rights, and other legal rights and ethical standards, leading to potential litigation and damage to reputation.
Therefore, as media that interface with the public and rely on audience loyalty for revenue, radio stations should be wary of using any AI-generated content and should take the necessary steps to implement appropriate checks and balances so that their use of AI technologies is consistent with the emerging legislative and ethical landscape.
The author is an executive in Technology, Media and Telecommunications at ENS Africa.
This article was taken from the RedTech special edition “Radio Futures: Keeping an eye on AI,” which you can read here.