AI Voices Are Dominating TikTok With Smart Technology

The rise of AI voices, exemplified by companies like ElevenLabs, has drawn significant attention and praise since their recent tool releases. These same tools have opened the floodgates to audio fakes, presenting a new challenge on the battleground of online misinformation.

This threat, fueled by AI voices, has the potential to amplify political disinformation in the lead-up to the 2024 election. Purveyors of misinformation can now seamlessly put their conspiracy theories into the voices of celebrities, newscasters, and politicians.

In addition to AI-generated threats from “deepfake” videos, realistic text from ChatGPT, and manipulated images from services like Midjourney, the advent of fake audio adds another layer of concern.

Disinformation watchdogs have observed an increase in videos using AI voices as both content creators and misinformation spreaders embrace these tools. Platforms like TikTok are now scrambling to identify and label such content.

How AI Voices are Dominating TikTok

NewsGuard, a company that specializes in monitoring online misinformation, discovered a TikTok video featuring a voice that strikingly resembled Mr. Obama's. The video was posted by one of 17 TikTok accounts found to be spreading baseless claims with AI voices, as outlined in a report the company released in September.

These accounts primarily posted videos featuring celebrity gossip narrated by AI voices but also propagated unfounded claims about Mr. Obama's sexuality and a conspiracy theory involving Oprah Winfrey and the slave trade. The channels collectively amassed hundreds of millions of views and comments.

Though these channels lacked an evident political agenda, NewsGuard noted that the use of AI voices to spread sensational gossip and rumors could serve as a blueprint for bad actors seeking to manipulate public opinion and push falsehoods to a vast online audience.

Jack Brewster, the enterprise editor at NewsGuard, emphasized that this approach helps the accounts gain credibility and a substantial following before pivoting to even more deceptive content.

TikTok requires labels disclosing that content is AI-generated, though those labels were absent from the videos NewsGuard flagged. The platform subsequently removed or stopped recommending several accounts and videos that violated its policies against posing as news organizations and spreading harmful misinformation. The AI-generated voice of Mr. Obama was removed because its highly realistic content violated TikTok's synthetic media policy.

“TikTok is the first platform to provide a tool for creators to label AI-generated content and an inaugural member of a new code of industry best practices promoting the responsible use of fake media,” said Jamie Favazza, a TikTok spokeswoman.

While NewsGuard's report centered on TikTok, an increasingly popular source of news, similar AI-generated content has been observed on YouTube, Instagram, and Facebook. These platforms permit AI-generated content portraying public figures, including newscasters, as long as it does not spread misinformation.

Parody videos showcasing AI-generated dialogues between politicians, celebrities, and business leaders, some of whom are deceased, have gained widespread popularity since these tools drew attention. Manipulated audio adds a new level of deception to videos on platforms that have already featured fake versions of notable figures such as Tom Cruise, Elon Musk, Gayle King, and Norah O'Donnell. Recently, TikTok and other platforms have grappled with misleading ads featuring deepfakes of celebrities like Mr. Cruise and the YouTube star MrBeast.

The potential influence of these technologies on viewers is profound. Claire Leibowicz, head of AI and media integrity at the Partnership on AI, emphasized the importance of guidelines for creating, sharing, and distributing AI-generated content, a collaborative effort involving technology and media companies.

TikTok has responded to the evolving threat landscape by introducing a label that lets users indicate whether their videos use AI. In April, the app made it mandatory for users to disclose manipulated media depicting realistic scenes and prohibited deepfakes of young people and private figures.

David G. Rand, a Massachusetts Institute of Technology professor whom TikTok consulted, acknowledged that such labels have limited effectiveness in countering misinformation. TikTok is also testing automated tools to detect and label AI-generated media, a step Mr. Rand sees as potentially more effective in the short term.

By contrast, YouTube bars political ads from using AI and requires other advertisers to label their ads when AI is employed. Meta, the parent company of Facebook, added a label to its fact-checking toolkit in 2020 to denote whether a video was "altered." X, formerly known as Twitter, requires misleading content to be "significantly and deceptively altered, manipulated or fabricated" to violate its policies; the company did not respond to requests for comment.

Mr. Obama's AI voice was crafted using tools from ElevenLabs, which have drawn significant attention since their unveiling. ElevenLabs, a New York City-based company with 27 employees, responded to misuse of its technology by restricting its voice-cloning feature to paid users. It also launched a detection tool capable of identifying AI content created with its services.

“Over 99 percent of users on our platform are creating interesting, innovative, useful content,” a representative for ElevenLabs said in an emailed statement, “but we recognize that there are instances of misuse, and we have been continually developing and releasing safeguards to control them.”

Various AI companies and academics have explored methods to detect fake audio, with varying degrees of success. Some companies have explored adding an inaudible watermark to AI audio to signal its AI-generated origin. Others have urged AI companies to limit voice cloning, potentially prohibiting replicas of politicians like Mr. Obama, an approach already employed in some image-generation tools, such as Dall-E, which refuses to generate certain political imagery.
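To make the watermarking idea concrete, here is a deliberately minimal sketch, not any vendor's actual scheme. Production watermarks are designed to survive compression and editing (for example, spread-spectrum techniques); this toy version simply hides a hypothetical marker bit string in the least significant bits of 16-bit PCM samples, where it is inaudible but machine-readable. The `MARKER` value and both function names are invented for illustration.

```python
# Toy audio watermark: hide a marker bit string in sample LSBs.
# This is an illustrative sketch only; real schemes are far more robust.

MARKER = "10110010"  # hypothetical 8-bit signature meaning "AI-generated"

def embed_watermark(samples, marker=MARKER):
    """Return a copy of `samples` with `marker` bits written into the LSBs."""
    out = list(samples)
    for i, bit in enumerate(marker):
        # Clear the least significant bit, then set it to the marker bit.
        out[i] = (out[i] & ~1) | int(bit)
    return out

def detect_watermark(samples, marker=MARKER):
    """Check whether the leading samples carry `marker` in their LSBs."""
    bits = "".join(str(s & 1) for s in samples[:len(marker)])
    return bits == marker

audio = [1000, -250, 37, 512, -8191, 64, 300, -42]  # fake PCM samples
marked = embed_watermark(audio)
print(detect_watermark(marked))  # True
print(detect_watermark(audio))   # False
```

Flipping a least significant bit changes a 16-bit sample by at most one part in 32,768, which is why such a change is inaudible; the trade-off, and the reason real systems use stronger schemes, is that re-encoding the audio can erase LSB patterns entirely.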

Ms. Leibowicz of the Partnership on AI emphasized that flagging synthetic audio for listeners poses a unique challenge compared with visual alterations. She highlighted the difficulty of maintaining a consistent signal across a lengthy audio piece, such as a podcast, and raised the question of whether labels should recur at regular intervals.

Even if platforms integrate AI detectors, the technology must continuously evolve to keep pace with advancements in AI voices. TikTok mentioned ongoing efforts to develop new detection methods internally and explore potential external partnerships.

Hafiz Malik, a professor at the University of Michigan-Dearborn who is actively developing AI audio detectors, expressed surprise that major tech companies have not effectively solved this issue. He underlined the need for continual improvement in detection technology to keep up with the challenges posed by AI voices.

Explore more: Government Seeks Agreement about AI Risks with World Leaders

Visit thetricenet.com for the most up-to-date information regarding Artificial intelligence, Electric Vehicles, Mobile Phones, and Product Reviews.

Aliha Zulfiqar
With a major in English Language and Literature, I'm a dedicated SEO Content Writer. Also, I love to write about technology. With over 2 years of experience, I've had the privilege of contributing to various renowned platforms. As I look forward to the future, I am committed to refining my work and delivering content that stands out.
