
How do you protect yourself against AI as a podcaster?
If you’re nervous about the effect audio deepfakes may have on the podcasting industry, know that your concerns are valid. The good news is that tools and services are being developed to help detect deepfakes.
For example, it’s becoming possible to embed an inaudible digital watermark in your recordings so you can tell when audio of your voice has been tampered with. Resemble AI offers digital watermarking as one of its features, and its tool, Resemble Detect, uses a model trained to help identify deepfakes. But Detect only helps once you’ve already found potentially spoofed audio of your voice; it doesn’t search out every voice impersonation that exists.
New Pod City takes this threat seriously and is putting the final touches on a new service to protect our podcasters and creators who publish to the NPC platforms and their distribution directories.
One of the directories New Pod City distributes to is Deezer, which is beginning to use AI tools to detect and remove content created with generative AI. Spotify is working on this issue, too.
“We've seen pretty much everything in the industry at this point with people trying to game the AI system," New Pod City Creative Director Frank Sasso says. "We have a lot of resources and programmers working on exactly these types of issues."
At this time, AI-generated audio isn’t completely banned on Spotify, but Spotify doesn’t allow content on its platform to be used for machine learning. In other words, if someone wants to make a deepfake of a podcaster’s voice, they can’t train it on audio pulled from Spotify.
