The devil is always in the detail when it comes to security, and the rise of Voice ID and AI is no exception.
Voice ID, which uses a person’s voice as an element of authentication, is becoming increasingly popular. The Australian Taxation Office (ATO), Centrelink, ANZ Bank, and many large call centres have implemented forms of Voice ID. The technology lowers operating costs significantly, with savings of $2 million annually across 5 million calls, and it also makes the user experience more natural, increasing uptake. As the cost of the technology falls, it is expected to become more mainstream in commercial transactions.
However, with the benefits come new risks. There was a reported 50% increase in voice phishing attacks in 2022. The arrival of AI has further weaponized voice samples, making these attacks more sophisticated and harder to detect. Your biometric voice print, unlike a passphrase, cannot be changed once compromised.
Preventing bad actors from obtaining samples of your voice to train AI is becoming increasingly difficult. Social media, webinars, and public chat forums can unintentionally expose voice samples, and recordings held by government, telecommunications, or financial organisations can also be lost through hacking.
What Can We Do?
Use Multifactor Authentication
Combining Voice ID with other methods of authentication, such as text messages or authenticator codes, provides an additional layer of security, as sketched below.
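For readers who build or integrate these systems, here is a minimal Python sketch of the layered idea: a voice match on its own is never enough, and a time-based one-time password (TOTP, RFC 6238) must also check out. The names voice_match_score, VOICE_THRESHOLD and authenticate are illustrative assumptions for this sketch, not any vendor’s API.

```python
# Illustrative sketch: accept a login only when BOTH a voice match and a
# TOTP code succeed. The voice-ID score and threshold are hypothetical.
import base64
import hashlib
import hmac
import struct
import time

VOICE_THRESHOLD = 0.85  # assumed minimum similarity score from a voice-ID engine


def totp_now(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Compute the current time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def authenticate(voice_match_score: float, submitted_code: str, secret_b32: str) -> bool:
    """Require both a sufficiently confident voice match and a valid TOTP code."""
    voice_ok = voice_match_score >= VOICE_THRESHOLD
    code_ok = hmac.compare_digest(submitted_code, totp_now(secret_b32))
    return voice_ok and code_ok


if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"  # example base32 secret commonly used in TOTP demos
    print(authenticate(0.91, totp_now(secret), secret))  # True: both factors pass
    print(authenticate(0.91, "000000", secret))          # False: a voice match alone is not enough
```

Even in this toy form, the important property is the logical AND: a stolen or AI-cloned voice sample fails the check without the second factor.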
Opt-Out if Necessary
If you do not wish to use Voice ID, make sure to opt out with your organisation to prevent someone else from setting it up for you.
Be Mindful of Voice Phishing
Be cautious of unsolicited calls, such as fake surveys, which may be attempts to capture your voice for malicious purposes.
New technologies bring efficiency and convenience, but they also introduce new vulnerabilities. Education and vigilance are key to ensuring that these technologies are implemented securely and that we are prepared to address the risks they pose.