
OpenAI software used to create voice bot that may drain crypto wallets


Researchers in the US have reportedly used OpenAI’s voice API to create AI-powered phone scam agents that could be used to drain victims’ crypto wallets and bank accounts.

As reported by The Register, computer scientists at the University of Illinois Urbana-Champaign (UIUC) used OpenAI’s GPT-4o model, in tandem with a number of other freely available tools, to build the agent they say “can indeed autonomously execute the actions necessary for various phone-based scams.”

According to UIUC assistant professor Daniel Kang, phone scams in which perpetrators pretend to be from a business or government organization target around 18 million Americans annually and cost somewhere in the region of $40 billion.

GPT-4o allows users to send it text or audio and have it respond in kind. What’s more, according to Kang, doing so isn’t expensive, which breaks down a major barrier to entry for scammers looking to steal personal information such as bank details or social security numbers.

Indeed, according to the paper co-authored by Kang, the average cost of a successful scam is just $0.75.
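That audio-in, audio-out capability is exposed through OpenAI’s public API. As a rough, benign illustration only (this is not the researchers’ code; the gpt-4o-audio-preview model name and the audio/modalities parameters are assumptions drawn from OpenAI’s documented Chat Completions audio support), a minimal request that sends a recorded question and gets synthesized speech back might look like this:

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a short recorded question as base64 WAV for the request payload.
with open("question.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o-audio-preview",          # assumed audio-capable model name
    modalities=["text", "audio"],          # ask for a transcript plus speech
    audio={"voice": "alloy", "format": "wav"},
    messages=[{
        "role": "user",
        "content": [
            {"type": "input_audio",
             "input_audio": {"data": audio_b64, "format": "wav"}},
        ],
    }],
)

# The reply arrives as both text and synthesized speech.
message = response.choices[0].message
print(message.audio.transcript)
with open("answer.wav", "wb") as f:
    f.write(base64.b64decode(message.audio.data))

Kang’s point is that calls like this are cheap enough that cost is no longer a meaningful barrier for would-be scammers.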

Read more: Hong Kong busts crypto scam that used AI deepfakes to create ‘superior women’

During the course of their research, the team carried out a number of different experiments, including crypto transfers, gift card scams, and the theft of user credentials. The average overall success rate across the different scams was 36%, with most failures due to AI transcription errors.

“Our agent design is not complicated,” said Kang. “We implemented it in just 1,051 lines of code, with most of the code dedicated to handling real-time voice API.

“This simplicity aligns with prior work showing the ease of creating dual-use AI agents for tasks like cybersecurity attacks.”

He added, “Voice scams already cause billions in damage and we need comprehensive solutions to reduce the impact of such scams. This includes at the phone provider level (e.g., authenticated phone calls), the AI provider level (e.g., OpenAI), and at the policy/regulatory level.”

The Register reports that OpenAI’s detection systems did indeed alert it to UIUC’s experiments, and the company moved to reassure users that it “uses multiple layers of safety protections to mitigate the risk of API abuse.”

It also warned, “It is against our usage policies to repurpose or distribute output from our services to spam, mislead, or otherwise harm others — and we actively monitor for potential abuse.”

