
GPT-4 Can’t Stop Helping Hackers Build Cybercriminal Tools

Written by Senoria Khursheed

Since its launch, ChatGPT has been on fire. Every company, whether in tech or media, is racing to incorporate it to boost the productivity of its workforce. This week, OpenAI launched the latest version of its machine learning software, GPT-4.

According to the company, one of its most highlighted features is a set of rules designed to protect it from cybercriminal use. Researchers, however, say they have already tricked it into making malware and helping craft phishing emails.

They did so much as they had with the previous iteration of OpenAI’s software, ChatGPT. Researchers from the cybersecurity firm Check Point demonstrated how they got around OpenAI’s barriers on malware development simply by removing the word “malware” from a request.

Malware

GPT-4 then produced software that gathers PDF files and sends them to a remote server. It went further, advising the researchers on how to make the program run on a Windows 10 PC and how to shrink it into a smaller file. A smaller file runs faster and has a lower chance of being spotted by security software.

To get GPT-4 to craft phishing emails, the researchers followed two approaches. First, they used GPT-3.5, which did not block requests to craft malicious messages, to compose a phishing email impersonating a legitimate bank.

They then handed the message to GPT-4, which had initially refused to generate an original phishing message, and asked it to improve the wording. Second, they asked for advice on creating a phishing awareness campaign for a business.

According to the report, “GPT-4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity”. The researchers added: “What we are seeing is that GPT-4 can serve both good and bad actors. Good actors can access GPT-4 to craft and stitch code, whereas bad actors can use this AI technology for rapid execution of cybercrime”.

Sergey Shykevich, threat group manager at Check Point, said there appeared to be fewer barriers stopping GPT-4 from generating phishing messages or malicious code than in previous versions.

According to him, the company may be relying on the fact that only premium users are currently allowed access. “I think they are trying to prevent and reduce them, but it is a game of cat and mouse,” he said. Even so, GPT-4 can help people with little technical knowledge get into making malicious tools.

According to the report, GPT-4 has significant limitations for cybersecurity operations: it does not improve on existing tools for reconnaissance, vulnerability exploitation and network navigation, and it is less effective than existing tools for complex, high-level activities like novel vulnerability identification.

At the same time, the hackers found that GPT-4 was “good at drafting realistic social engineering material”. As per OpenAI: “To mitigate potential abuse in this area, we have built models trained to reject malicious cybersecurity requests and have enhanced our internal security systems including detection, monitoring and response”.

According to Cuthbert, an intelligent hacker already knows how to do what ChatGPT can do; by the same token, advanced detection systems should be able to pick up on the types of malware ChatGPT helps create.

Read more:

Microsoft 365 to Get AI Tech Similar to ChatGPT

ChatGPT Can Take Your Career To The Next Level: Check How?