Picture yourself scrolling through TikTok and stumbling upon a video featuring the renowned YouTuber MrBeast. He announces a giveaway of brand-new iPhones with a tempting tagline, “Click the link below to claim yours now!” Would you take the bait? Perhaps. While the video looks and sounds like MrBeast, it’s actually a deepfake – a deceptive clip generated by artificial intelligence (AI). In a recent incident, this TikTok video duped some fans into revealing personal information and paying shipping fees for non-existent phones. However, a newly developed tool called AntiFake could help thwart such fraudulent schemes.
Traditional methods for detecting deepfakes typically analyze existing video or audio files to determine their authenticity. AntiFake, on the other hand, focuses on safeguarding voice recordings to impede deepfake AI models from learning how to replicate them accurately.
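The core idea — altering a voice recording just enough to confuse cloning models while leaving it sounding normal to people — can be illustrated with a toy sketch. This is not AntiFake's actual algorithm (the real tool optimizes its perturbation against voice-synthesis models); here a simple bounded random perturbation stands in for that optimized signal, and the function name `perturb_waveform` is a hypothetical stand-in.

```python
import math
import random

def perturb_waveform(samples, epsilon=0.002, seed=0):
    """Illustrative only: nudge each audio sample by a tiny amount,
    bounded by epsilon so the change stays near-inaudible. AntiFake's
    real perturbation is optimized adversarially, not random."""
    rng = random.Random(seed)
    return [s + rng.uniform(-epsilon, epsilon) for s in samples]

# A toy 440 Hz tone sampled at 16 kHz, 10 ms long
sr = 16000
tone = [0.5 * math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 100)]
protected = perturb_waveform(tone)

# Each sample moves by at most epsilon, far below the signal's amplitude
max_diff = max(abs(a - b) for a, b in zip(tone, protected))
print(max_diff <= 0.002)  # True
```

The point of the bound is the trade-off the researchers face: the perturbation must be small enough that humans barely notice it, yet structured enough that a cloning model trained on the protected audio learns the wrong voice characteristics.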
This advance could make it harder for AI to generate convincing voices for deepfake videos and phone scams. Some scammers have already used deepfake voices to solicit money from victims' relatives or to breach voice-protected bank accounts.
“Attackers leverage AI tools to commit financial fraud or deceive our loved ones,” says Zhiyuan Yu, a computer science PhD student at Washington University in St. Louis, Mo. He and the university's Ning Zhang developed AntiFake to address this growing threat.
The computer scientists presented their findings at the 2023 ACM Conference on Computer and Communications Security in Copenhagen, Denmark, last November.
AntiFake complements prior research efforts that shield images from deepfake duplication by AI. Zhang emphasizes the importance of ensuring responsible AI usage for the betterment of society.