Criminals are poised to use artificial intelligence tools such as ChatGPT to commit online fraud and other serious offences, including cybercrime and terrorism. In a report published Monday, Europol (the European Union Agency for Law Enforcement Cooperation) detailed how AI language models can fuel criminal activity and said many criminals are already using ChatGPT to commit crimes.
“The potential exploitation of these types of AI systems by criminals provides a grim outlook,” The Hague-based Europol said.
Europol looked at the use of chatbots as a whole but focused on ChatGPT during a series of workshops because it is the highest-profile and most widely used such model, it said.
Criminals could use ChatGPT to speed up the research process significantly in areas they know nothing about, the agency found.
This could include drafting text to commit fraud or obtaining information on crimes ranging from “how to break into a home, to terrorism, cybercrime and child sex abuse,” it said.
The chatbot’s ability to imitate speech styles makes it particularly effective for phishing, in which users are lured into clicking fake email links that then try to steal their data, it said.
“ChatGPT’s ability to quickly produce authentic-sounding text makes it ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.”
ChatGPT can also be used to write computer code, a capability especially useful to non-technically minded criminals, Europol said.
“This type of automated code generation is particularly useful for those criminal actors with little or no knowledge of coding and development,” it said.
An early study by US-Israeli cyber-threat intelligence company Check Point Research (CPR) showed how the chatbot can be used to infiltrate online systems by creating phishing emails, Europol said.
While ChatGPT has safeguards, including content moderation that prevents it from answering questions classified as harmful or biased, these could be circumvented with carefully crafted prompts, Europol said.
AI was still in its early stages and its abilities were “expected to further improve over time,” it added.
People are already using the model to carry out illegal activities, the agency claims. Europol stated in its report:
“The impact these types of models might have on the work of law enforcement can already be anticipated. Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT.”
Although ChatGPT has become better at refusing to comply with potentially harmful requests, users have found ways around OpenAI’s content filter system. Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example. Users can also ask ChatGPT how to commit crimes and request step-by-step guidance.
“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime, and child sexual abuse,” Europol warned.
Europol said that as more companies roll out AI features and services, new avenues for criminal misuse of the technology will open up. It pointed in particular to “multimodal AI systems, which combine conversational chatbots with systems that can produce synthetic media, such as highly convincing deepfakes, or include sensory abilities, such as seeing and hearing,” the law enforcement org’s report said.