London — Radio TechCon, the annual technology and engineering conference for the international radio sector, was held in November at the Institution of Engineering and Technology in London.
The day began with a look at how modern technology and the changing media landscape influence the design and construction of new studios. In “LBC’s Millbank 2.0,” Global Media’s Managing Engineer Jon Crew outlined the news broadcaster’s move into larger studios near the Houses of Parliament. Since the original Westminster facility opened in 2006, the number of people watching podcasts, not just listening to them, has grown. The new studio center includes two on-air studios with control rooms featuring stylized sets, cameras, LED displays and microphones on the central desk. Alongside these are a podcast studio and booths for “down-the-line” interviews. Crew explained that LBC used virtual reality to plan these areas before the main design and construction work. Asked whether the new facilities were TV or radio studios, Crew replied that it was a “grey area” and they were probably “in a category of their own.”
The first mention of AI came in “Machine Learning Sounds Terrible,” in which Matthew Martin, team lead for BBC Systems Engineering, described developing an audio monitoring system to detect errors during the switch-over from daytime transmissions on BBC local radio stations to Radio 5 Live for overnight broadcasts. Machine learning identifies spectral responses indicative of a good signal and alerts control room staff when an irregularity is detected. Martin concluded that an ML-based monitor could “solve intractable problems.”
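The BBC has not published the internals of its monitor, but the general approach Martin described — learning what a healthy signal’s spectrum looks like and alerting on deviations — can be illustrated with a minimal sketch. Everything here (the `SpectralMonitor` class, the z-score threshold, the frame size) is illustrative, not the BBC’s implementation:

```python
import numpy as np

def spectral_profile(frame, n_fft=1024):
    """Log-magnitude spectrum of one audio frame."""
    window = np.hanning(n_fft)
    spectrum = np.abs(np.fft.rfft(frame[:n_fft] * window))
    return np.log10(spectrum + 1e-10)

class SpectralMonitor:
    """Learn the spectral envelope of known-good audio, then flag outliers."""

    def fit(self, good_frames):
        profiles = np.array([spectral_profile(f) for f in good_frames])
        self.mean = profiles.mean(axis=0)           # typical level per bin
        self.std = profiles.std(axis=0) + 1e-6      # typical variation per bin
        return self

    def is_irregular(self, frame, z_threshold=4.0):
        """True when the frame's average per-bin deviation is anomalous."""
        z = np.abs(spectral_profile(frame) - self.mean) / self.std
        return z.mean() > z_threshold
```

In practice a system like this would be trained on recordings of clean switch-overs and run continuously against the off-air feed, raising an alert for control room staff when the statistic crosses the threshold.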
Power outages are frustrating and inconvenient but usually confined to a particular area for a short time. On April 28 last year, however, the entire Iberian Peninsula and parts of southwest France were blacked out for ten hours, longer in some regions. Both the radio and television services of Portuguese public broadcaster RTP remained on air thanks to emergency power supplies, but TV transmissions were later affected when a miscalibrated breaker tripped. As Monica Palomo, RTP’s director of engineering for systems and technology, recounted, this left radio as the only source of information for citizens. “Radio proved to be the most resilient in a crisis,” she said. “And when DAB failed, FM and AM were still there.”
The crucial issue of power also underpinned the presentation by Daniyal Shah, solutions manager for distribution development at the BBC. When the BBC World Service broadcast sites in Ghana faced rising energy costs, unreliable electricity supplies and the need to meet carbon-neutral targets, Shah tapped into local expertise in solar power. Installations at facilities in Accra and Sekondi-Takoradi now provide power to the BBC, Ghana Broadcasting Corporation and other broadcasters.
Engineering the future
The session “Community to Cloud” exemplified the cloud’s move into radio at all levels. When Wiltshire community station Castledown FM had to vacate its three-studio facility by April last year, its volunteer staff considered the cloud as a viable alternative to traditional premises. John Sparrow, chair, engineer and presenter at the station, described how they moved Castledown FM’s playout system to the cloud, working with a small on-air studio in new premises and two virtual setups based on RØDECaster Pro II consoles. “Cloud storage costs more,” he said, “but it is easier to have home studios and do remote productions.”
Marianne Bouchart, founder and director of HEI-DA, a media development non-profit focused on data journalism, provided an overview of what AI can and cannot do for radio in “Mini Masterclass: AI for Audio.” Bouchart defined AI, ML and large language models, noting that the latter are best suited to audio applications, including editing and processing. While acknowledging that AI was “scary,” Bouchart concluded that it could reshape radio and audio engineering, and that people in these fields should “stay informed” about its development.
“Replatforming Digital Radio” looked at how U.K. transmission provider Arqiva is overhauling the distribution chain for its DAB networks. These include two national and 60 local/regional multiplexes, carrying 702 audio services from 430 unique sources. Work began nearly five years ago to move the DAB encoding process from radio studios to the data center, where multiplexing is performed, and to adopt open-source, generic standards and install Ensemble Transport Interface multiplexers. Domain architect Richard Knight commented that after extensive testing in a specially built facility, migration has gone ahead with minimal disruption to transmissions. The project is due for completion in 2026 and will offer DAB+ capability to all U.K. listeners.
“CTRL+WIN: The BFBS Esports Revolution” detailed how the U.K. Forces broadcaster produced live coverage of its Pro League esports tournament Grand Final on Sept. 25, which it streamed on Twitch and YouTube, with updates during the day on BFBS Radio.
Muddy in many ways
A different take on AI came from Jörgen Bang, product owner at Sveriges Radio, in the intriguingly titled “What Happened to the Horse in Nybro?” The question — about the rescue of a former racehorse stuck in mud near the town of Nybro in southeast Sweden — highlighted how people search for information on news websites. SR now uses an interactive app based on the EBU NEO system, a conversational AI agent that helps people find stories by asking natural language questions. Bang explained that the News Search service is still in its experimental phase, but has proven effective at finding local stories.
Dan McQuillin, founder and managing director of Broadcast Bionics, outlined a “Software-Defined Future of Broadcasting,” which is increasingly dominated by initials and acronyms. These include the Time-Addressable Media Store (TAMS) API for working with content, including audio, in the cloud; Media eXchange Layer (MXL), an EBU open source initiative for exchanging media across different AV devices; and Remote Direct Memory Access over Converged Ethernet (RoCE), a protocol for high-speed, low-latency data transfer. “This will move audio in microseconds, not milliseconds,” McQuillin said. “Everything should be considered as software now.”
TechCon’s conference segment closed with “Cleaning Up a Muddy Glastonbury,” which focused on AI for better live broadcast sound rather than the festival’s famously wet weather. Among the many audio challenges facing the BBC team covering this event, said Senior Technical Producer Simone Lombardi, was the on-stage spill between instruments and vocals, which can make for a “muddy” or “bloomy” sound mix. Over the last three Glastonburys, BBC sound engineers have used AI stem separation to isolate voices and instruments, particularly drums. This uses ML to separate individual feeds, producing a tighter, cleaner-sounding result. “It provides studio quality with the energy of festivals,” concluded Lombardi.
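The BBC has not detailed its separation tool, but stem separation systems generally work by having a trained model estimate each source’s magnitude spectrogram and then applying soft (Wiener-style) masks to the mixture. The sketch below shows only the masking stage; the per-source magnitudes are supplied directly, standing in for what a neural network would predict, and the function name and parameters are illustrative:

```python
import numpy as np
from scipy.signal import stft, istft

def separate_by_masking(mix, est_mags, fs=48000, nperseg=1024):
    """Split a mono mix into stems using soft spectral masks.

    est_mags: per-source magnitude spectrograms, as a trained separation
    model would predict; here they are given directly for illustration.
    """
    _, _, X = stft(mix, fs=fs, nperseg=nperseg)
    power = sum(m ** 2 for m in est_mags) + 1e-12
    stems = []
    for m in est_mags:
        mask = m ** 2 / power        # each bin split in proportion to power
        _, s = istft(mask * X, fs=fs, nperseg=nperseg)
        stems.append(s)
    return stems
```

With exact magnitude estimates the masks recover each source almost perfectly; in a real broadcast chain the estimates come from a model trained on mixtures, and the quality of the stems depends on how well those estimates match the actual sources.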
Attendance-wise, this Radio TechCon was fully booked, something the organizers said had not happened since before Covid. With a robust program of topics, it was easy to see why.
The author trained as a radio journalist and worked for British Forces Broadcasting Service Radio as a technical operator, producer and presenter before moving into magazine writing during the late 1980s. He recently returned to radio through his involvement with an online station based on England’s south coast.