The Google Bard-based DarkBERT and DarkBART cybercriminal chatbots are said to integrate Google Lens for images and offer instant access to the cyber underground's entire knowledge base, representing a significant advancement in adversarial AI.
The developer behind the FraudGPT malicious chatbot is preparing a more sophisticated adversarial tool built on generative AI and Google Bard's technology. It will be backed by a large language model (LLM) that draws on the collective knowledge of the Dark Web.
According to SlashNext, the creator of FraudGPT, known as 'CanadianKingpin12' on hacker forums, is developing additional AI-based malicious chatbots. The news was brought to light by an ethical hacker who had previously exposed another AI-based hacking tool, WormGPT.
The upcoming bots, dubbed DarkBART and DarkBERT, are expected to offer threat actors ChatGPT-like AI capabilities more powerful than existing cybercriminal offerings.
In its Aug. 1 report, the firm warned that these AIs will gradually lower the barrier to entry for aspiring cybercriminals, enabling them to develop sophisticated business email compromise (BEC) phishing campaigns, find and exploit zero-day vulnerabilities, probe for complex infrastructure weaknesses, and create and distribute malware.
Daniel Kelley, a researcher at SlashNext, stated: "The rapid progression from WormGPT to FraudGPT and now 'DarkBERT' in under a month underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape."
According to the threat actor, DarkBART will function as a dark version of Google's Bard AI. It reportedly runs on DarkBERT, a large language model (LLM) developed by South Korean data intelligence firm S2W to fight cybercrime. That model is currently available only to academic researchers, which would make malicious access to it notable.
Kelley said, "The threat actor … claims to have gained access to DarkBERT." In addition, CanadianKingpin12 shared a video demonstrating that his version of DarkBERT "underwent specialized training on a vast corpus of text from the Dark Web."
The other adversarial tool, confusingly also named DarkBERT but distinct from the legitimate Korean model, will reportedly draw on the entire Dark Web: its LLM gives threat actors a direct line to the hive mind of the hacker underground for carrying out cyberattacks. CanadianKingpin12 claims it will also have Google Lens integration.
Dark Web Generative AI Is Developing Rapidly
Kelley observed that the developers of adversarial AI tools, like their benevolent counterparts, will soon offer application programming interface (API) access to the chatbots, enabling seamless integration into cybercriminals' workflows and code and further lowering the barrier to entry into cybercrime.
Kelley wrote, “Such progress raises significant concerns about potential consequences, as the use cases for this type of technology will likely become increasingly intricate.”
This rapid pace also shows that proactive measures are needed for threat defense. Companies and organizations should offer BEC-specific training to educate employees on the nature of such attacks and the role AI plays in them.
In addition to the standard phishing-awareness training given to enterprise employees, businesses looking to counter AI-driven threats should strengthen their overall email verification policies and keyword-flagging to stay on the safe side.
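As an illustration of the keyword-flagging idea mentioned above, here is a minimal sketch in Python. The keyword list and the function name are hypothetical examples for demonstration only; a real policy would be tuned to the organization and layered on top of sender-domain verification (SPF, DKIM, DMARC) rather than used alone.

```python
# Hypothetical example phrases often associated with BEC attempts.
# These are illustrative, not a vetted detection rule.
BEC_KEYWORDS = [
    "urgent wire transfer",
    "update payment details",
    "confidential acquisition",
    "gift cards",
]

def flag_bec_keywords(body: str) -> list[str]:
    """Return the suspicious phrases found in an email body (case-insensitive)."""
    lowered = body.lower()
    return [kw for kw in BEC_KEYWORDS if kw in lowered]

email = "Please handle this urgent wire transfer today and keep it quiet."
hits = flag_bec_keywords(email)
if hits:
    print(f"Flag for human review: matched {hits}")
```

Simple substring matching like this is easy to evade, which is precisely why the article pairs it with employee training and verification policies; it is a low-cost first filter, not a complete defense.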
Kelley says, “As cyber threats evolve, cybersecurity strategies must continually adapt to counter emerging threats; a proactive and educated approach will be our most potent weapon against AI-driven cybercrime.”