
Today, content is accessible almost anywhere, across multiple channels, resulting in a deeply fragmented media landscape. Add artificial intelligence to the mix, and there is a tsunami of content for consumers to wade through, along with a dizzying number of places for broadcasters and advertisers to look for their audience. How can consumers find their way through this onslaught? And how can broadcasters, radio in particular, stay front and center as the go-to content for in-vehicle listeners?

At Xperi, we believe that content-first, contextual, curated discovery is the key. The machine learning-driven technology we have developed for our DTS AutoStage connected car platform shows how AI in this context can be a real boon, offering better engagement, content stickiness and measurement for today’s and tomorrow’s broadcasters, as well as for automakers.
How? First, because DTS AutoStage can associate linear radio with other content produced by that station or beyond, it can curate music metadata and artist deep-dives, connect listeners with local communities (e.g., local events) and more, to expand the overall in-vehicle audio experience. But this is just the launching pad.
Because DTS AutoStage is globally connected to tens of thousands of radio stations, it can “listen to” (and learn from) stations in a deep way that enables understanding well beyond generic labels like rock and country — after all, music genres can mean different things in different states or countries. For example, the term “country” means something very different in Austin, Texas, than it does in Topeka, Kansas.
Beyond genre
With its machine learning technology, DTS AutoStage points to a future in which the platform looks at the characteristics of the music, not just its genre. It examines the actual playlists, logs and contextualizes the songs, adds metadata and the behavioral preferences used to characterize and describe the music, and then builds a profile of what the radio station actually is. It does this by weighing the station’s music mix, audience analytics from in-car listenership and other key indicators, producing a picture that may ultimately be more precise than the station’s own understanding of itself.
And this contextualizing is very granular: any one piece of music has about 250 attributes associated with it. Some are purely factual, such as where the song was recorded. Others are qualitative or quantitative descriptors, such as beats per minute, tempo and the mood of the music. Couple this with the intricate profile the platform can build of each listener’s preferences, and the machine learning can deliver, instantly and intuitively, deeply valuable and precise recommendations.
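To make the idea concrete, here is a minimal, hypothetical sketch of how per-track attributes might roll up into a station profile and be scored against a listener profile. It is not Xperi’s actual implementation; the attribute names, weights, similarity measure and sample values are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical, simplified track attributes; a real system would use far more
# (the article mentions roughly 250 per piece of music).
@dataclass
class Track:
    title: str
    bpm: float                      # tempo in beats per minute
    energy: float                   # 0.0 (calm) to 1.0 (intense) mood descriptor
    genre_tags: set = field(default_factory=set)

def station_profile(playlist) -> dict:
    """Aggregate a station's recent playlist into a coarse profile."""
    return {
        "avg_bpm": mean(t.bpm for t in playlist),
        "avg_energy": mean(t.energy for t in playlist),
        "tags": {tag for t in playlist for tag in t.genre_tags},
    }

def match_score(listener: dict, station: dict) -> float:
    """Illustrative similarity: closer tempo/energy and overlapping tags score higher."""
    bpm_fit = 1.0 - min(abs(listener["preferred_bpm"] - station["avg_bpm"]) / 100.0, 1.0)
    energy_fit = 1.0 - abs(listener["preferred_energy"] - station["avg_energy"])
    tag_overlap = len(listener["liked_tags"] & station["tags"]) / max(len(listener["liked_tags"]), 1)
    return round(0.4 * bpm_fit + 0.3 * energy_fit + 0.3 * tag_overlap, 3)

# Example: a station labeled "country" whose playlist actually leans toward rap.
playlist = [
    Track("Uptempo rap single", bpm=127, energy=0.85, genre_tags={"rap", "pop"}),
    Track("Classic country ballad", bpm=96, energy=0.55, genre_tags={"country"}),
]
listener = {"preferred_bpm": 120, "preferred_energy": 0.8, "liked_tags": {"rap"}}
print(match_score(listener, station_profile(playlist)))
```

A real recommender would learn these weights rather than hard-code them, but the principle is the same: the station is described by what it actually plays, not by its label.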
Let’s say you are on a road trip and follow a specific sports team: the platform will know your passions and push relevant sports radio content in real time. Or maybe you are not a fan of country music but love rap, so it takes you to Eminem’s “Houdini” playing on a station that is genre-categorized as “country” but which the platform knows is more nuanced than that. Or, because it knows your geolocation, it offers up a news event breaking in your neighborhood. This is quite different from the old genre boxes, which would have steered you clear of that “country” station and made you miss a favorite rap song.
This future radio would be so aware of your actual preferences that it automatically tunes to content that it knows is your highest priority. You don’t have to work to discover it anymore. It will be done for you.
Engaged and sticky
With the benefits AI brings, publisher-related content can be surfaced directly in the radio interface, eliminating the friction that occurs when listeners are directed to content outside the station. The content is contextual and brought seamlessly into the radio experience.
Technically, this concept is known as zero-layer UI design, and automakers are already starting to rethink their systems in this way. BMW’s Intelligent Suggest and the zero-layer MBUX UI by Mercedes-Benz are just two examples of how AI can be leveraged to enable context-aware recommendations and eliminate submenus by displaying the relevant information on the top layer. We at Xperi envision a near future where DTS AutoStage amplifies the elements that make broadcasters unique and distinct from other audio content providers.
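As a loose sketch of the zero-layer idea, the snippet below ranks hypothetical contextual “cards” and surfaces only the top few on a single layer. The card fields, scoring weights and sample items are assumptions for illustration; they do not represent any automaker’s actual system.

```python
from dataclasses import dataclass

# Hypothetical contextual content card for a zero-layer dashboard;
# not any automaker's actual API.
@dataclass
class Card:
    title: str
    kind: str            # e.g. "sports", "local_news", "music"
    relevance: float     # 0.0-1.0: fit with this listener's profile
    urgency: float       # 0.0-1.0: e.g. breaking news near the car's location

def top_layer(cards, slots=3):
    """Surface the few most relevant items directly instead of burying them in submenus."""
    return sorted(cards, key=lambda c: 0.6 * c.relevance + 0.4 * c.urgency, reverse=True)[:slots]

cards = [
    Card("Your team's pregame show", "sports", relevance=0.9, urgency=0.5),
    Card("Road closure near you", "local_news", relevance=0.6, urgency=0.9),
    Card("Artist deep-dive on the song now playing", "music", relevance=0.8, urgency=0.2),
    Card("Weather at your destination", "local_news", relevance=0.5, urgency=0.4),
]
for card in top_layer(cards):
    print(card.title)
```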
This approach has a significant impact on the user experience. In these use cases, the in-vehicle listener stays as engaged and sticky as can be, well within the radio ecosystem, and is far more receptive to the relevant advertising content that is precisely directed their way. Additionally, the platform can feed that engagement data back to the station promptly, enabling granular tracking and performance metrics that support better decision-making by human program directors and DJs.
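As a rough illustration of that feedback loop, in-car listening events could be rolled up into the kind of per-daypart metrics a program director might act on. The event fields and aggregation below are assumptions for the sketch, not the platform’s actual reporting schema.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical in-car listening event; field names are assumptions,
# not DTS AutoStage's actual reporting schema.
@dataclass
class ListenEvent:
    station_id: str
    hour: int              # local hour of day, 0-23
    seconds_listened: int
    tuned_away: bool       # listener switched away before the segment ended

def daypart_report(events):
    """Roll events up into simple per-station, per-hour engagement metrics."""
    report = defaultdict(lambda: {"sessions": 0, "total_seconds": 0, "tune_outs": 0})
    for e in events:
        bucket = report[(e.station_id, e.hour)]
        bucket["sessions"] += 1
        bucket["total_seconds"] += e.seconds_listened
        bucket["tune_outs"] += int(e.tuned_away)
    return dict(report)

events = [
    ListenEvent("KXYZ", 8, 540, False),
    ListenEvent("KXYZ", 8, 120, True),
    ListenEvent("KXYZ", 17, 900, False),
]
for (station, hour), stats in daypart_report(events).items():
    print(station, hour, stats)
```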
With DTS AutoStage as the lynchpin, this future dashboard provides a next-generation, next-level in-vehicle radio UX that could upend the search and discovery paradigm, ensuring radio not only endures but also innovates well ahead of other media in the connected car ecosystem.
The author is the senior V.P., Broadcast Radio and Digital Audio for Xperi.
This guest commentary originally appeared in the RedTech International special publication Radio Futures 2025.