
AI fraud in the United States has risen more than 50% in the past year: "Even the crying sounded utterly real"

Author: National Business Daily

NBD reporters: Zheng Yuhang, Li Menglin, Cai Ding | NBD editors: Gao Han, Lan Suying, Tan Yuhan

Recently, a wave of AI face-swapping scams has hit China; in one case, a victim was defrauded of 4.3 million yuan in just 10 minutes, a story that shot to the top of trending searches and drew nationwide attention. But AI-enabled fraud is not confined to China.

Since ChatGPT set off the generative AI boom in November 2022, text, image, voice and video AI technologies have evolved at an accelerating pace, promising a productivity revolution while also handing criminals powerful tools. A few dollars and a few seconds of speech are enough to synthesize a convincing fake voice, and some experts estimate that AI fraud incidents in the United States have risen more than 50% year-over-year in the past year.

Beyond the elderly, often seen as easy targets, even some business elites have fallen victim. How can such new types of fraud be prevented? A Daily Economic News reporter spoke with Professor Peter Bentley, a computer scientist and artificial intelligence expert at University College London.

"Only $5 to generate"

One day in late April, Jennifer DeStefano, a woman in the United States, received a strange phone call. "Mom, I messed up!" The voice on the other end was that of her eldest daughter, Brianna, who was out of town preparing for a ski competition, pleading repeatedly for help.

"Listen, your daughter is in my hands, if you call the police or tell anyone else, I'll drug her, throw her to Mexico when you've had enough, and you'll never want to see her again," a deep man's voice threatened over the phone, demanding a $1 million ransom from Jennifer.

Jennifer was stunned. When she said she could not come up with $1 million, the man on the other end lowered his demand to $50,000. After the call, Jennifer's friend contacted the police and tried to convince her it was a scam, but the anxious mother would not listen: the crying of her "daughter" had sounded too real. Jennifer had already begun discussing how to transfer the money when, fortunately, her daughter called to say she was safe, averting any loss.

"When a mother can recognize her child, even if the child is across the building from me, I know it's my child when she cries," Jennifer was still surprised, recalling the exact same voice on the phone as her daughter.

According to foreign media reports, as AI technology advances, criminals can generate fake "kidnapping" audio clips from just a few seconds of voice material scraped from social media, with the AI programs involved costing as little as $5 a month.

"Large language models can write text in any style, so if you have some samples of email or social media communication, it's now easy for AI to use to learn and pretend to be yourself." After example training, generative AI algorithms can also easily generate fake audio and video. As more and more applications have these features, it becomes more and more accessible," Professor Peter Bentley, a computer scientist and AI expert at University College London, told the Daily Economic News. ”

The U.S. Federal Trade Commission (FTC) warned in May that bad actors are using AI voice technology to fake emergencies and defraud victims of money or information, and that such scams skyrocketed 70% during the pandemic. Michael Skiba, a New York-based expert on international fraud, told media in May that the number of AI fraud cases in the United States has risen by an estimated 50% to 75% year-over-year in the past year.

Although losses from AI-enabled fraud are not tallied separately, the FBI's annual report released in March showed that online fraud cost Americans $10.3 billion in 2022, a five-year high, and noted that AI voice scams targeting the elderly were among the hardest-hit areas.

The viral "AI Stefanie Sun (Sun Yanzi)" cover-song videos in China vividly demonstrate how mature AI synthesis technology has become. AI voices that are all but indistinguishable from the real thing mean the victims are no longer just housewives or grandparents; even business elites struggle to tell the difference.

According to Gizmodo, scammers used AI to synthesize the voice of the CEO of a British energy company and then used the cloned voice over the phone to direct transfers, defrauding him of £220,000, which went to a Hungarian account. The CEO said he was shocked when he later heard the AI-synthesized speech himself: it mimicked not only his usual intonation but even his "subtle German accent."

How can the public prevent it?

Since ChatGPT sparked the generative AI boom, tech giants and startups alike have raced to launch and upgrade AI products. With technology iterating ever faster, applications getting cheaper, and regulation struggling to keep pace, AI fraud poses an urgent challenge.

In early 2023, Microsoft unveiled VALL-E, a new model that needs only three seconds of audio to replicate anyone's voice, ambient background sound included. Companies such as ElevenLabs, Murf, Resemble and Speechify already offer AI voice-generation services, with monthly subscriptions ranging from a few dollars to $100 for premium packages.

Hany Farid, a professor of digital forensics at the University of California, Berkeley, said that a year or two ago, cloning a person's voice required a large amount of audio data. Now, from just the voice in a short video, AI software can analyze characteristics such as age, gender and accent, search a huge voice database for similar voices, and predict patterns, thereby reconstructing a close overall likeness of an individual's voice. "It's terrifying," he said. "It has all the ingredients for a catastrophe."

Jennifer also suspects the scammers obtained the audio through her daughter's social media accounts: Brianna has a private TikTok account and a public Instagram account containing photos and videos of her ski races.

If AI voice cloning is paired with equally mature "face-swapping" technology, "seeing is believing" could easily become a thing of the past. "With today's AI audio and video scams, if you look and listen very carefully, you may notice some odd 'errors' or tones," Professor Peter Bentley said. "Unfortunately, many people use audio filters to reduce noise on mobile phone calls, which can make real people sound fake and fake people sound real. So we can no longer assume that anything we see or hear on a screen is real."

How can ordinary people avoid being deceived? Professor Peter Bentley said there is currently no reliable way to tell AI-generated content from the real thing.

"So the best advice is: Be suspicious and double-check anyone who asks you for money or personal information, even if 'he' appears to be a trusted friend or family member." Be sure to call them back, preferably by video, or preferably meet them in person to see if the request is genuine. If you need help from a friend, talk to them in person. Either way, that's the right thing to do! Professor Peter Bentley said.
