
AI fraud is efficient and low-cost; three "magics" can effectively guard against the threat

Author: Shi Tianhao

With the widespread adoption of generative AI such as GPT-4 and other powerful language models, there are high hopes for AI's potential across many fields: text creation, digital art, translation, and replacing traditional human customer service.

Alongside these expectations, however, we must face the risk that AI technology may be misused. Worryingly, some criminals have begun to skillfully apply AI to telecom fraud, which poses new challenges for prevention.

In 2019, what was reported as the world's first AI fraud occurred in the United Kingdom: a fraudster used AI voice-imitation software to impersonate a company executive and defrauded the general manager of a British energy company out of 220,000 euros. In an even more egregious case in China, the legal representative of a company was defrauded of as much as 4.3 million yuan within just 10 minutes of a WeChat video call with someone posing as a friend.

These appalling cases reveal the enormous societal risks of AI-generated false information and raise deep concern about AI software being used for a wide range of criminal activities.

As artificial intelligence rises, we are in a critical period of finding its best application scenarios and paths to commercialization. Individual fraud cases will not directly stifle AI's development, but if the associated security and privacy issues are not properly addressed, they may darken AI's prospects and put pressure on its development.

AI scams: Why is the success rate so high?

The emergence of AI scams is largely due to the rapid development of artificial intelligence, especially deep learning and machine learning. With these techniques, scammers can train AI that simulates human behavior and language, and even produce extremely realistic fake video or audio (e.g., deepfakes, AI-based face-swapping technology).

These fakes are so convincing that distinguishing real from counterfeit becomes extremely difficult, greatly amplifying scammers' deceptive power.

In an increasingly digital society, people spend more and more time online and on social media, generating large volumes of digital information. This gives fraudsters ample room to operate: by combining these platforms with AI technology, they can carry out fraud that is both large-scale and precisely targeted.

Whether using big-data analysis to learn a target's behavioral habits and pinpoint their social standing and financial capacity, or exploiting familiar preferences to approach the target and win their trust before striking, AI fraud demonstrates both efficiency and stealth.

Scammers often command more information and technical resources than their victims, making such high-tech scams difficult to detect and defend against. This asymmetry of information and technology is undoubtedly a major advantage for AI fraud.

Although some jurisdictions have begun drafting regulations against AI scams, the lag in law and regulation cannot be ignored. Existing laws often fail to keep pace with rapidly developing technology, leaving AI scams plenty of room to operate within legal loopholes.

Finally, economic incentives are the main driver of scammers' behavior. Because AI scams are often hard to detect, scammers can reap outsized financial gains, and this lure keeps them ever more active.

In summary, the high success rate of AI fraud stems mainly from its technical advantages, the digitalization of society, information asymmetry, lagging regulation, and economic incentives.

AI scams cost less than traditional scams

The development of AI has not only changed how many fields work, but has also profoundly reshaped fraud itself, making telecom scams harder to prevent.

For example, AI face-swapping technology is now relatively mature and can synthesize extremely realistic video at low cost. In one recent case, a woman was led to believe she had been inadvertently implicated in a crime and ended up being cheated out of a large sum.

AI scams are highly automated: a properly trained model can run without human intervention. Once a scam tool is built and deployed, it can operate around the clock, sharply reducing the labor cost of fraud.

Where does the technology come from? Open-source communities such as GitHub host freely available AI face-swapping software, the best known being DeepFaceLive and DeepFaceLab. Technical communities, forums, and message boards offer abundant beginner tutorials and even ready-made Chinese-language models for download, and some trading platforms sell tutorial-and-toolkit packages for as little as a few yuan.

In addition, AI scams are highly scalable. Unlike traditional scams, which require human involvement at every step, AI scams can expand to massive scale in a short time.

For example, a fraud ring can combine auto-dialing systems with AI speech recognition and natural language processing to place hundreds of thousands or even millions of calls per day (much like overseas-IP outbound call operations), covering cities nationwide. Scaling up this way requires only a little extra computing power to reach far more people.

AI scams use techniques such as deepfakes and natural language processing to create a highly realistic illusion, making fraud harder to detect. The British energy company mentioned earlier, for instance, lost about 220,000 euros to a faked executive phone call. Because these tools can mimic human behavior without attracting notice, fraud is less likely to be detected and stopped, further lowering its risk cost.

That said, developing and training an effective AI model is not easy: it requires large amounts of data, specialized expertise, and computing resources, all of which carry costs.

As legal oversight tightens and fraud-detection technology advances, the risks and costs of AI fraud may also rise. Some countries have begun enacting laws that severely punish the use of AI for fraud. So while AI scams enjoy cost advantages in some respects, scammers still face multiple risks and costs.

Defeating magic with magic: the key role of social platforms

In the digital age, social platforms have become the main battleground for AI scams. The sheer volume of online activity and social interaction on these platforms gives scammers a rich pool of targets and information sources, so AI scams naturally flourish there.

For example, in one scam targeting social media users, the scammer used AI to imitate the voice of the victim's friend and placed a fake emergency call. Unable to tell the real voice from the fake, the victim was defrauded of a large sum.

Some scammers have also used deep learning to create fake celebrity videos in which the celebrity appears to endorse an investment opportunity, cheating countless people out of their assets.

The diversity and openness of social platforms make regulation and prevention difficult. With so many users and such varied behavior, scammers can easily blend in, and by the time they are identified they have often already made off with large sums.

Social platforms therefore play a vital role in preventing AI scams. They need to deploy advanced defensive technology, develop and enforce clear user policies, and strengthen user education and guidance to minimize AI fraud.

The first magic, upgraded technology: social platforms need to strengthen technical protection. Using machine learning and related techniques, platforms can detect and flag anomalous behavior or suspicious accounts. By analyzing user behavior patterns, for instance, machine learning algorithms can identify activity that deviates from normal patterns, which often signals possible fraud (a minimal sketch of this idea follows below).
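As a purely illustrative sketch, not any platform's actual system, the following Python snippet shows this kind of anomaly detection using scikit-learn's IsolationForest. The behavioral features (messages per hour, new contacts per day, link-share ratio) and all the numbers are hypothetical stand-ins for real signals.

```python
# Minimal illustrative sketch: flagging anomalous accounts with an isolation forest.
# All feature names and numbers are hypothetical stand-ins for real signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" accounts: [messages/hour, new contacts/day, link-share ratio]
normal = np.column_stack([
    rng.normal(5, 2, 1000),     # modest messaging rate
    rng.normal(2, 1, 1000),     # few new contacts per day
    rng.uniform(0, 0.2, 1000),  # links are a small share of messages
])

# Scam-like accounts: mass messaging, many new contacts, heavy link sharing
suspicious = np.array([
    [120.0, 80.0, 0.90],
    [200.0, 150.0, 0.95],
])

# Fit on observed (mostly normal) traffic, then score new accounts
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

for row in suspicious:
    verdict = model.predict(row.reshape(1, -1))[0]  # -1 = anomaly, 1 = normal
    print(row, "-> flag for review" if verdict == -1 else "-> looks normal")
```

In practice a platform would train on far richer signals (device fingerprints, login geography, content features) and route flagged accounts to human review rather than acting on them automatically.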

The second magic, established rules: social platforms should put clear user policies in place and enforce them. On May 9, Douyin released its "Platform Specification and Industry Initiative on AI-generated Content", which explicitly prohibits using generative AI to create and publish content that infringes rights, including but not limited to portrait rights and intellectual property. Once a violation is discovered, the platform punishes it strictly.

Other platforms likewise need to state clearly what is prohibited and what the consequences of violations are, and to establish a fair, fast handling mechanism so that every user is treated equitably.

The third magic, cultivated awareness: social platforms can prevent AI scams by raising users' cybersecurity awareness. Publishing educational content on safe platform use and teaching users how to recognize and guard against AI scams can markedly reduce their risk of becoming victims.

Reminding users not to trust strangers too readily, and warning them to protect their personal information, are also important measures.

Overall, social platforms are central to the fight against AI scams. Just as the same hammer can build a house or tear one down, AI that brings users a better experience is a good thing in itself; the key is how it is used. Platforms need to act proactively on multiple fronts to safeguard users' security and apply their technical strength to curb the harms of AI scams.

Conclusion

AI fraud is undoubtedly one of the major challenges facing AI's development today, and its impact cannot be ignored. Yet, as some readers may feel, treating it as the "last straw" that will crush AI's development is too pessimistic: technological progress has always been accompanied by an ever-expanding space of possibilities, and saying so is not alarmism.

We need to face the problem squarely and jointly reduce its risks to society through efforts on multiple fronts, including platforms' technical protections, user education, and legal oversight.

The emergence of AI fraud does reveal a darker side of AI technology, but at the same time many companies, research institutions, and regulators are searching for solutions to ensure AI develops and is applied in a healthy way.

In the future we will see more technological innovation to help identify and prevent AI fraud, a more complete legal and regulatory system, and better user security from social platforms.

AI scams will not be the "last straw" that holds us back; rather, they are an important impetus pushing us to develop safer, more responsible AI. We should remain optimistic, rise to the challenge, and work together to build a more secure and trustworthy digital society.