2024 Beware of Social Engineering Generated by Generative AI!

Author: Shannon娱事

Generative AI applications such as ChatGPT have unlimited potential, but their use in hacking and online fraud is equally worrying.

Trend Micro today released its 2024 annual information security predictions report, which points out that generative AI has greatly improved the effectiveness of online fraud techniques such as face-swapping (deepfake) scams and spear phishing.

Trend Micro points out that generative AI can produce more realistic faces, personal histories, and conversations, making online scam bait more convincing and significantly raising the success rate of such attacks.

According to the FBI, social engineering is already the crime category that claims the most victims and one of the most lucrative methods for cybercriminals. In 2024, hackers are expected to combine a wider range of AI tools (such as chatbots and voice cloning) to create more multi-faceted threats such as virtual kidnapping scams.

Examples include WormGPT, the "hacked" version of ChatGPT that appeared this year, followed later by WolfGPT and FraudGPT. WormGPT and WolfGPT can generate text, code, and other data formats that can be used to commit cybercrime, while FraudGPT goes further and directly generates highly realistic scam web pages to lure clicks.

In addition, the security of generative AI itself is a growing concern, as large language models (LLMs) are increasingly susceptible to "data poisoning".

Data poisoning, simply put, means maliciously manipulating the performance and behavior of an LLM by tampering with its training data, or by breaking directly into the model's database or processing pipeline. A poisoned model is not only more likely to generate problematic or malicious content, but also more likely to leak confidential information.
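To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the report) of training-data poisoning on a toy spam classifier: the same model is trained twice, once on clean labels and once after an attacker quietly flips most of the "spam" labels, and the poisoned model then waves an obvious scam message through.

```python
# Toy, hypothetical illustration of training-data poisoning: an attacker who can
# tamper with the labels changes what the model learns to flag as "spam".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "win a free prize now",
    "claim your reward today",
    "cheap meds online",
    "meeting moved to 3pm",
    "please review the attached report",
    "lunch tomorrow?",
]
clean_labels    = ["spam", "spam", "spam", "ham", "ham", "ham"]
poisoned_labels = ["ham",  "ham",  "spam", "ham", "ham", "ham"]  # attacker flipped two labels

def train(labels):
    # Simple bag-of-words + Naive Bayes text classifier.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    return model

scam = ["claim your free prize"]
print(train(clean_labels).predict(scam))     # ['spam'] -- the clean model catches it
print(train(poisoned_labels).predict(scam))  # ['ham']  -- the poisoned model lets it through
```

In a real LLM the same principle applies at far larger scale: tampered training or fine-tuning data shifts what the model treats as acceptable output.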

Beyond generative AI, Trend Micro also predicts that "wormable" automated attacks will become the next go-to weapon for hacker groups probing for system vulnerabilities and attacking the cloud, such as this year's OAuth vulnerability in Microsoft's Azure AD, or last month's EleKtra-Leak campaign, in which hackers abused AWS credentials exposed on GitHub.

Hackers only need to successfully exploit a single vulnerability in the cloud to spread quickly across it, then use compromised cloud-native tools to attack more victims, for example by automatically harvesting enterprise intelligence, setting up large-scale command-and-control (C&C) server communications, or even launching massive distributed denial-of-service (DDoS) attacks.
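For illustration (this sketch is not from the report), the defensive counterpart of the EleKtra-Leak scenario can be as simple as scanning a working copy for strings shaped like AWS long-term access key IDs, which start with "AKIA", before anything is pushed to a public GitHub repository; in practice, dedicated tools such as git-secrets or trufflehog do this far more thoroughly.

```python
# Minimal, hypothetical sketch: flag strings that look like AWS long-term
# access key IDs (20 characters starting with "AKIA") in a local checkout,
# so leaked credentials are caught before they reach a public repository.
import re
from pathlib import Path

AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_repo(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if AWS_KEY_PATTERN.search(line):
                print(f"possible AWS access key in {path}:{lineno}")

if __name__ == "__main__":
    scan_repo(".")  # run from the repository root before committing or pushing
```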
