
A seemingly innocuous feature of modern smartphones has unexpectedly evolved into a formidable security threat: a personalized voicemail greeting can be a goldmine for fraudsters.
According to Transaction Network Services (TNS), a company specializing in global telecommunications and financial infrastructure, artificial intelligence needs as little as three seconds of recorded audio to produce a convincing replica of a person's voice. Even freely available AI applications are capable of this level of precision.
Generative AI systems now replicate human speech with astonishing accuracy, capturing not only vocal timbre but also intonation, pacing, and distinctive pronunciation. Armed with such convincing fakes, cybercriminals engage in blackmail, financial extortion, reputation sabotage, and the large-scale spread of political and other disinformation.
The mounting danger posed by deepfake voice technology has compelled regulators to take decisive action. In February 2024, the U.S. Federal Communications Commission (FCC) declared AI-generated voices in robocalls illegal, following a surge in social engineering attacks of unprecedented sophistication. Cybercriminals had learned to covertly record a victim's voice during a conversation and later repurpose the audio at will. Similar legislative measures are now being implemented worldwide, but is that enough?
Among the most prevalent AI-driven scams is the impersonation of family members, a tactic particularly effective against the elderly. Fraudsters call grandparents, imitating the voice of a grandchild, and fabricate an urgent crisis—a car accident, an arrest, or an unexpected hospitalization. Overcome by panic and concern, victims often act impulsively, failing to recognize the deception.
For top executives, the threat is far more severe. CEOs of major U.S. tech firms recently told Cybernews that criminals had repeatedly cloned the voices of their colleagues to manipulate employees. In many cases, attackers also spoofed phone numbers to make the fraudulent calls appear legitimate.
Jon Miller, co-founder and CEO of Halcyon, a company specializing in ransomware defense, cited the high-profile 2023 cyberattack on MGM Resorts as a chilling precedent. Using AI-powered voice cloning, hackers bypassed the corporation's security protocols, gaining access to critical infrastructure and triggering a massive data breach.
According to Miller, seasoned professionals are trained to verify suspicious calls, but junior employees—especially those new to the organization—are far more susceptible to deception, particularly when confronted with an authoritative voice demanding immediate access to passwords or confidential information.
High-ranking executives are also prime targets for “whale phishing”—a specialized form of spear phishing that focuses on high-value individuals. The term underscores the scale of the potential “catch”: instead of mass-email phishing campaigns, cybercriminals painstakingly research a single high-profile target.
Before launching an attack, fraudsters may spend weeks meticulously gathering intelligence from social media, professional forums, and public speaking engagements. Their goal is to study the target’s communication style, inner circle, and ongoing projects—insights that enable them to craft highly convincing deception strategies.
To mitigate risk, experts recommend minimizing the use of voice messages on social media and messaging apps, even when communicating with trusted friends, family, or colleagues. Additionally, opting out of voice-based biometric authentication can prevent unauthorized access to sensitive accounts.
Instead of a personalized voicemail greeting, it is safer to use the default message provided by your telecom provider.
If you receive a call from an unknown number, avoid speaking until the caller identifies themselves—this makes it significantly more difficult for fraudsters to obtain a sample for AI cloning.
Implementing a family-wide secret passphrase can also serve as a vital safeguard. A unique verification word known only to close relatives ensures the authenticity of urgent messages. Moreover, it can act as a distress signal, alerting family members if cybercriminals gain control of social media accounts and attempt to spread false information using deepfake-generated voices.
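For families who also coordinate over chat apps, the same logic translates naturally into code. The sketch below is a minimal Python illustration, not a prescribed implementation: the specific words, the function name, and the idea of automating the check are all assumptions made for the example.

```python
import hmac

# Illustrative placeholders only: a real family would agree on their own
# words in person and never send or store them online.
VERIFICATION_WORD = "bluebell"   # confirms the caller is who they claim to be
DURESS_WORD = "thornbush"        # quietly signals a compromised or coerced relative

def check_passphrase(spoken: str) -> str:
    """Classify a caller's passphrase.

    hmac.compare_digest performs a constant-time comparison, which hardly
    matters on a phone call but is good hygiene if the same check ever
    guards an automated channel (chat bot, account recovery, etc.).
    """
    word = spoken.strip().lower().encode()
    if hmac.compare_digest(word, VERIFICATION_WORD.encode()):
        return "verified"    # proceed; the caller knows the family secret
    if hmac.compare_digest(word, DURESS_WORD.encode()):
        return "duress"      # treat the request as hostile and alert others
    return "unverified"      # stall, hang up, call back on a known number
```

The separate duress word mirrors the distress-signal role described above: it gives a relative under pressure a way to appear cooperative while warning everyone else that something is wrong.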