
ChatGPT: Even Criminals Say It's Good?

Compiled by | He Miao

Published by | CSDN (ID: CSDNnews)

Since its launch, ChatGPT has drawn wide attention in the tech community, but many researchers also worry that generative AI, while democratizing AI, will democratize cybercrime along with it.

This concern is justified: on some hacking forums, security researchers have already observed people using ChatGPT to write malicious code.

Recently, a post titled "ChatGPT – Benefits of Malware" appeared on a popular underground hacking forum, and its author revealed that he had been experimenting with using ChatGPT to recreate common malware.

Security researchers say that the malicious programs developed with ChatGPT so far are fairly simple, but it is only a matter of time before more sophisticated ones appear.


ChatGPT was released at the end of November 2022, and in addition to sparking interest in new uses for artificial intelligence, it also created new challenges for modern cybersecurity. From a cybersecurity perspective, the main challenge is that anyone can now ask ChatGPT to write code on demand, including malware and ransomware.

Although OpenAI's terms prohibit using ChatGPT for illegal or malicious purposes, skilled users can easily rephrase their requests to bypass these restrictions.

As mentioned above, the post "ChatGPT – Benefits of Malware" describes using ChatGPT to recreate common malicious programs.

The post contains two examples. The first is a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp directory, compresses them, and uploads them to a hardcoded FTP server.


The second example is a simple Java snippet. It downloads PuTTY, a very common SSH and Telnet client, and runs it covertly on the system via PowerShell; the script can be modified to download and run any program, including common malware families.

Meanwhile, security firm Check Point Research (CPR), after analyzing several major underground hacking communities, found that a group of cybercriminals are already using OpenAI's tools to develop malicious software, and blogged about the following examples of ChatGPT-assisted crime.

Creating an encryption tool

A hacker using the handle USDoD released a Python script that performs cryptographic operations. Remarkably, it was the first script he had ever written, which shows that complete beginners really can use ChatGPT to produce malware. The script is a hodgepodge of signing, encryption, and decryption functions; while the code looks benign on its own, with simple modifications to the script and its logic it could be turned into ransomware.
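CPR did not publish the script itself, so the following is a hypothetical, benign sketch of the kind of "hodgepodge" it describes: routine primitives (a toy XOR-stream cipher keyed via SHA-256, plus an HMAC tag for signing/verification) that are harmless in isolation but are exactly the building blocks ransomware repurposes. The construction here is illustrative only and not cryptographically strong.

```python
import hashlib
import hmac
import os

def derive_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key+nonce into a keystream by counter-chained SHA-256 (toy construction)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; prepend a random nonce and an HMAC tag."""
    nonce = os.urandom(16)
    keystream = derive_keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + tag + ciphertext

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Verify the HMAC tag, then reverse the XOR to recover the plaintext."""
    nonce, tag, ciphertext = blob[:16], blob[16:48], blob[48:]
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    keystream = derive_keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, keystream))
```

CPR's point is precisely that each function here is unremarkable on its own; the danger lies in how trivially such pieces can be rewired to encrypt a victim's files instead of the author's own data.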


USDoD is not a developer and has limited technical skills, but he is very active and well known in the underground community. Another cybercriminal commented that his code style resembled OpenAI-generated code, and USDoD later confirmed that OpenAI had indeed given him a helpful "hand" in finishing the script.

Online fraud and black-market trading

In a discussion titled "Misusing ChatGPT to Create Dark Web Market Scripts", cybercriminals showed how easy it is to build dark web marketplace scripts with ChatGPT. Such marketplaces serve the underground illicit economy as platforms for the automated trade of illegal or stolen goods, such as stolen accounts or payment cards, malware, and even drugs and ammunition, with all payments made in cryptocurrency.

Phishing in the wild

Cybercriminals can also leverage AI to increase the volume of phishing threats in the wild, where a single successful attack can trigger a massive data breach, causing millions of dollars in financial damage and irreparable reputational harm.

Security testing of ChatGPT

In addition, some security researchers have begun testing ChatGPT's ability to generate malicious code. For example, Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, recently used ChatGPT to create not only phishing campaigns but also ransomware for macOS.


"I typed in a prompt asking for a World Cup-themed email for a phishing simulation, and it produced one in perfect English within seconds," Ozarslan said. He noted that although ChatGPT recognized that phishing attacks can be used maliciously and can harm individuals and businesses, it still generated the email.

Regarding the risks of such emails, Lomy Ovadia, vice president of research and development at email security provider IRONSCALES, said: "When it comes to cybersecurity, ChatGPT can offer attackers more than it offers their targets. This is especially true for business email compromise (BEC), which relies on deceptive content to impersonate colleagues, company VIPs, vendors, and even customers."

AI will not stop moving forward

While security teams can also use ChatGPT for defensive purposes, such as testing code, it lowers the barrier to entry for cyberattacks and undoubtedly worsens the threat landscape.

But it also offers positive use cases. Many ethical hackers already use existing AI technology to help write vulnerability reports, generate code samples, and identify trends in large datasets.
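As a hypothetical illustration of that last, defensive use case, the short sketch below (the log lines and function name are invented for this example) counts failed-login attempts per source IP in SSH auth-log lines, the kind of trend-spotting over large datasets that defenders increasingly delegate to AI-assisted tooling:

```python
import re
from collections import Counter

# Invented auth-log lines; in practice these would come from a SIEM or log export.
LOG_LINES = [
    "Jan 10 03:14:01 sshd: Failed password for root from 198.51.100.7",
    "Jan 10 03:14:05 sshd: Failed password for admin from 198.51.100.7",
    "Jan 10 03:15:22 sshd: Failed password for root from 203.0.113.9",
    "Jan 10 03:16:40 sshd: Accepted password for alice from 192.0.2.44",
]

def top_failed_sources(lines, n=3):
    """Count failed-login attempts per source IP and return the n most frequent."""
    pattern = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")
    counts = Counter(m.group(1) for line in lines if (m := pattern.search(line)))
    return counts.most_common(n)

# The repeat offender 198.51.100.7 surfaces first, with two failed attempts.
print(top_failed_sources(LOG_LINES))
```

The value of a generative assistant here is not the counting itself but producing and adapting such triage scripts quickly; the analysis still needs a human to judge what the trend means.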

For AI to have a positive effect, it still requires human oversight and some manual configuration, and it cannot always operate on, or be trained with, the very latest data and intelligence.

ChatGPT is still very young. Chatbots are one of the hottest branches of AI today, their prospects are promising, and the world of AI will not stand still out of fear of risk.

References:

https://www.solidot.org/story?sid=73838

https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/

https://news.cnyes.com/news/id/5058514
