IBC Trends 4: Using AI to make talkback and reporting more efficient

At IBC there are always a few groundbreakers thinking further ahead than most. This year, at IBC24, they were thinking about how AI can make live radio programming and the reporting of meetings more efficient.

I spoke to thought leaders Dan McQuillin and Raoul Wedel about some of the new tools they are developing for radio talkback and journalism.

One of the big ideas is to deploy AI to make talkback systems more efficient. Another is to use AI to help journalists cover long events, such as council meetings.

 

What if your telephone talkback and text system could use AI to organise your interactions with callers and texters? It would make your production team much more efficient.

Having been part of five radio station moves in my career,* the most recent being new broadcast studios in Parramatta, I’m up with the latest innovations. The fourth set of trends I spotted at IBC will take innovation further still. In technology, each good idea builds on the last and becomes a foundation for the next; we are in an age of continuous tech evolution.

McQuillin developed the iconic Phonebox radio talkback interface many years ago; it’s one of my favourite systems. That product has evolved and is now part of the wider Broadcast Bionics range, which also integrates video capture (called CameraOne) to help broadcasters deliver video to multiple output locations. Radio is no longer just audio: it is on socials, streaming and catch-up, where pictures enhance the live audio content.

Vertical Video Tracking

This year Broadcast Bionics has added AI to its CameraOne switching system. Broadcast Bionics links with studio cameras and can continuously record video to be used when needed. To make editing quicker and easier for vertical video formats, the system now uses AI to identify each speaker, reframe them to vertical orientation, and quickly edit those speakers into a vertical video. The AI tracks the speaker’s face and dynamically follows them, so they are always in the centre of frame. The same functions can also be used in horizontal format for other editing and framing automations. It will also convert speech to text so you can add subtitles if you choose.
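To picture roughly what that face tracking and reframing involves, here is a minimal Python sketch using OpenCV. It is not Broadcast Bionics’ implementation, just the general technique: detect the largest face in each frame and pan a full-height 9:16 crop window to follow it.

```python
# A minimal sketch of the general technique, not Broadcast Bionics' code:
# detect the largest face in each frame and pan a 9:16 crop to follow it.
import cv2

def reframe_vertical(src_path: str, dst_path: str) -> None:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    crop_w = int(h * 9 / 16)          # width of a full-height 9:16 window
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (crop_w, h))
    centre_x = w // 2                 # start centred; fall back if no face
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.1, 5)
        if len(faces) > 0:
            x, _, fw, _ = max(faces, key=lambda f: f[2] * f[3])  # largest face
            # smooth the pan so the crop doesn't jitter between frames
            centre_x = int(0.8 * centre_x + 0.2 * (x + fw / 2))
        left = min(max(centre_x - crop_w // 2, 0), w - crop_w)
        out.write(frame[:, left:left + crop_w])
    cap.release()
    out.release()
```

A production system would use a far more robust detector and smoother motion planning, but the crop-and-follow logic is the core of the idea.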

I like McQuillin’s approach to how he uses AI in his products: he refers to it as augmenting human intelligence, not replacing it. ‘Augmented Intelligence,’ not Artificial Intelligence.

Talkback AI Tools

With the addition of more internal AI integration, the Bionic system now has more functions for grouping callers and highlighting the topics they want to talk about.

The system has been able to ingest WhatsApp and other voice notes for a while, but it has now also added AI de-noising to clean up background noise and make voice notes more suitable for broadcast.
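As an illustration of what that clean-up step can look like, here is a tiny sketch using the open-source noisereduce library; the commercial tool will be far more advanced, and the file names here are placeholders.

```python
# A rough illustration using the open-source noisereduce library; the
# commercial tool will be far more advanced. Assumes a mono WAV file,
# and the file names are placeholders.
import noisereduce as nr
import soundfile as sf

audio, rate = sf.read("voice_note.wav")        # listener's voice note
cleaned = nr.reduce_noise(y=audio, sr=rate)    # spectral-gating de-noise
sf.write("voice_note_clean.wav", cleaned, rate)
```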

AI Text Management

One of the big breakthroughs that impressed me most was how Broadcast Bionics is now using AI to manage listener texts and social media interactions. Anyone who has watched a flurry of text or social media messages come in on a hot topic will have experienced the frustration of missing some messages, or seeing a message arrive only to have it pushed off the screen by new incoming messages before it could be read out on air, or scrolling up and down trying to read messages out in a different order than the one in which they were received.

I’ve seen production teams dragging messages around to display to the presenter during an interview, getting frustrated because they can’t keep up with the incoming messages or can’t find the ones they want to group together. I’ve seen presenters scanning the text line while the interviewee is talking, planning to mention a text comment, only to turn back a few moments later and find it has gone off the screen. Nightmare!

The nightmare is now less terrifying, because the newly added AI feature monitors the context of the texts, groups them together and identifies the most common content elements, such as those for or against an issue, or mentions of a hospital or school name being discussed. If multiple topics are being discussed, it can order and display them by most popular topic.
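For readers curious how that grouping can work under the hood, here is a hedged sketch of one common approach: embed each listener text and cluster similar messages, then show the biggest group first. Broadcast Bionics has not published its method, so the model, libraries and sample messages here are illustrative assumptions, and stance detection is omitted.

```python
# A hedged sketch of one common approach: embed each listener text and
# cluster similar messages, then show the biggest group first. Broadcast
# Bionics hasn't published its method; the model and messages here are
# illustrative assumptions, and stance detection is omitted.
from collections import Counter

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

texts = [
    "Close the hospital car park? Madness.",
    "The car park closure will hurt patients.",
    "What about the school crossing on High St?",
    "The High St crossing needs a lollipop person.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    model.encode(texts))

# order topics by how many messages they attracted, most popular first
for topic, count in Counter(labels).most_common():
    print(f"Topic {topic} ({count} messages):")
    for text, label in zip(texts, labels):
        if label == topic:
            print("  -", text)
```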

McQuillin makes the point that there is nothing artificial or generative about this tool: all the content comes from listeners; the tool just analyses and groups the messages in a way that is useful to the production team. I found it very impressive.

AI Reporting Enhancements

Any reporter who has been assigned to cover a long council meeting that runs late into the evening, or some other event where a lot of people speak over an extended period, will know the frustration of collating many pages’ worth of notes and quotes into a story. Most reporters gain a general sense of the key points by sitting and listening, taking quick notes of words spoken for a quote, and probably noting the time of the best quotes in their recording so they can go back later and edit the grabs for broadcast. For a short press conference this is easy, but with hours of recordings to work through it can become a time-consuming process that may not justify the effort.

Wedel Software is about to launch a reporting tool called Sonic Scribe that can speed up the process by recording, transcribing and summarising the key topics of a meeting. It can identify a topic and link it with other people who spoke about that topic elsewhere in the meeting, making it easier for reporters to identify, group and extract content. It links text and audio so that the reporter can edit both at the same time. A few reporting tools already have some of these features, but Wedel plans to deploy AI to improve the process further.
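To make the text-audio linking concrete, here is a simplified Python sketch of the general technique; it is not Sonic Scribe’s code. It uses word-level timestamps from the open-source Whisper model so that a span of selected transcript words maps directly to an audio cut; the file names and word indexes are illustrative.

```python
# A simplified sketch of the general technique, not Sonic Scribe's code:
# transcribe with word-level timestamps, then cut the audio for whatever
# span of transcript words the reporter selects. File names and the word
# indexes are illustrative.
import whisper
from pydub import AudioSegment

model = whisper.load_model("base")
result = model.transcribe("council_meeting.wav", word_timestamps=True)

# flatten the segments into a list of (word, start_seconds, end_seconds)
words = [(w["word"], w["start"], w["end"])
         for seg in result["segments"] for w in seg["words"]]

def cut_quote(first_word: int, last_word: int) -> AudioSegment:
    """Return the audio behind a selected span of transcript words."""
    audio = AudioSegment.from_wav("council_meeting.wav")
    start_ms = int(words[first_word][1] * 1000)
    end_ms = int(words[last_word][2] * 1000)
    return audio[start_ms:end_ms]

# e.g. the reporter highlights words 120-180 in the transcript editor
cut_quote(120, 180).export("housing_quote.wav", format="wav")
```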

I like that these two technology leaders know how radio works and are continuously thinking of tools that fit into and improve existing workflows, rather than inventing something that is good but requires staff to change the way they produce, present or report to suit the way the machines want to do it.

 

These trends will bring immediate benefits to live talkback and news reporting.

I spoke to both Dan McQuillin and Raoul Wedel in Amsterdam. Listen to what they had to say below.

 

Dan McQuillin video summary:

On the theme of AI, or what we call augmented intelligence: it’s making smarter tools to help you do more in the studio. Not replacing talent but augmenting talent.

With CameraOne, which is our budget camera switching system, we’re generating content automatically from the studio camera. What we want to demonstrate there is the ability to quickly repurpose and reformat that as vertical video. We partnered with Choppity (https://www.choppity.com) to automatically reframe that content.

We’ve traditionally been doing live camera switching for streaming, we’ve been doing social media, and now we can use AI tools to automatically repurpose and reframe that content. What we’re doing here is not just the cropping: the trimming is done automatically using AI to detect the highlights, using AI to let you edit using just a script editor, and using AI and transcription to bake the captions on top. Everything from the camera switching to the editing, captioning and reframing is happening really, really quickly. The AI will give you prompts and suggestions, and you can use the base tools to manually change and edit it.

 

There’s a lot of people trying to make AI voices… I have no advantage in doing that, that’s not technology that I can really develop. But I don’t really believe that the part of radio which is building community, creating genuine connections, is something which AI can do.

What I do think AI can do is help build those connections: whether that’s getting social media content more quickly so we can share more, or using transcription on WhatsApp messages when they hit the studio so they become searchable and discoverable.

We’re also showing the use of AI language models to summarise all of your social media in real time, so again, instead of hundreds of messages hitting the studio, maybe you read the top five or the top ten. We’ll simply give you a list of what topics the audience is talking about and what the best messages are on each of those topics, so you can read everything your audience is thinking and feeling. That’s what we mean by augmented intelligence: it’s augmenting your ability in the studio to build those communities, to create that connection. There is nothing artificial about the content, there is nothing generative; we’re not using fake voices or generating content, we are enhancing and empowering your ability to engage as much as possible with your audience so you build that genuine connection, that sense of community.

Generative AI or Spotify absolutely have a place, but radio should be able to maintain its prime position as that authentic voice of your local community, as that listening friend you have a sense of identity and relationship with. In the video Dan demonstrates this at about the 3’30 point (Chapter 3 marker).

…it’s ranking the texts in order of the most messages and it’s figuring out what it thinks are the key ones. This does a couple of things: first of all it helps us to understand the mood and mind of our audience; second, sometimes we start a conversation and maybe we carry on the wrong topic, so we can show you all the topics the audience is talking about. Are they beginning to burn with the audience? Maybe we should change, and we can actually surface a topic the audience is now talking about… lots of different ways of implementing augmentation in traditional radio workflows.

STEVE: To have a synthetic voice read out some of those texts might be a useful thing.

Yeah. You could have a synthetic sidekick I guess, with you as the primary talent… and you could say, hey Sarah the Sidekick, just tell me what the audience is thinking.

A lot of what we’re doing in the UK now has full integration of WhatsApp and WhatsApp voice notes, so we’re pulling in a lot of people who are not willing to call but are willing to send a voice note. We can call those people back as well, so if they’re really good we might say, would you come on the air to say your text message? Most people are flattered to be asked; probably 75% will go on the air.

As well, we transcribe and make all the WhatsApp voice notes searchable, so you can find the best voice note without having to listen to every one… or we can do AI speech enhancement and de-noising if we find a piece of content that someone sent over WhatsApp… and I don’t know why, but people will regularly be shouting in the car, or here, with quite a bit of background noise… The technology now is incredible to de-noise that and enhance speech. Again, we’re helping you to salvage content that would not necessarily be broadcastable. We’re not creating fake content, we’re just salvaging a piece of content and increasing its quality to make it pass grade, which is another great use of AI technology.

We shouldn’t be afraid of it, we should just make sure that… we figure out how we use those tools to enhance our workflows, to enhance our audio and be more creative.

 

Raoul Wedel video summary:

Adthos is going very well. There’s a lot of interest, we’re talking to every major group in the world and we’re still very excited about the future of AI and audio.

Our new AI product for journalists is called Sonic Scribe; we’re releasing that shortly.

It’s a system where we can transcribe audio content. Of course that’s not something new. But what we can do with it is edit the text, and it edits the audio for you along with that. You can also prompt the audio.

What that means is if you have a city council meeting that lasts four hours, as a local journalist you can just type in, ‘give me the four highlights on the new housing projects,’ the four most important quotes, and it will play those quotes and highlight them for you. [Long meetings are] a lot of work. So it’s good to be able to automate and to give local journalism better tools to do their work.

How long would it take to process, say, a three-hour council meeting?

The three hours would take a couple of minutes. We also have tools for people to automatically upload the audio so they don’t have to worry about manually loading it. If you record it on your phone or something, it’s automatically uploaded, transcribed and available in the platform.
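To make Wedel’s ‘give me the four highlights’ example concrete, here is a hedged sketch of how a request over a timestamped transcript might work. Sonic Scribe’s internals are not public, so the model name, prompt and file name below are illustrative assumptions, not the product’s actual design.

```python
# A hedged sketch of prompting over a timestamped transcript; Sonic
# Scribe's internals are not public, so the model name, prompt and file
# name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def highlight_quotes(transcript: str, request: str) -> str:
    """Ask a language model to pull timestamped quotes from a transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You extract verbatim quotes, with timestamps, "
                        "from meeting transcripts."},
            {"role": "user",
             "content": f"{request}\n\nTranscript:\n{transcript}"},
        ],
    )
    return response.choices[0].message.content

with open("council_transcript.txt") as f:
    print(highlight_quotes(
        f.read(), "Give me the four key quotes on the new housing projects."))
```

Paired with word-level timestamps like those in the earlier sketch, the returned quotes can then be played back or cut directly from the recording.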

Wedel Software also supplies its targeted AI audio platform, Adthos, which we have covered in previous reports.

 

Subscribe to the Radioinfo YouTube channel to get the latest videos, conference reports and awards event videos.

 

* About the author

Steve Ahern is a broadcast and digital media trainer and consultant and the founder of this website.

The five stations he moved in his career are: ABC Melbourne, AFTRS, Money FM Singapore, Nai Radio Afghanistan and ABC Sydney.

 

Previous IBC Trends Articles:

IBC Trends 1: Artificial Intelligence

IBC Trends 2: The Cloud

IBC Trends 3: Automated Content Detection
