HP threat intelligence finds gen AI being used to craft malware
HP, in its latest threat intelligence report, shared that cybercriminals are using generative AI to craft malware. The report highlights a malicious campaign targeting French-speaking users in which the malware was developed with the help of artificial intelligence.
First detected in June, the campaign's use of AI was identified through the presence of comments throughout the malicious code, something generative AI tools typically add when asked to write lines of code.
The campaign reportedly used HTML smuggling to deliver a password-protected ZIP archive, which researchers unlocked using brute force. When they analysed the code inside the ZIP file, they found that the attackers had commented nearly every line, a rarity in code written by humans.
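HP has not detailed the exact tooling its researchers used to crack the password. As a rough illustration of the idea, the minimal Python sketch below tries short passwords against a ZIP archive protected with legacy ZipCrypto encryption (AES-encrypted archives would need a third-party library such as pyzipper); the file name sample.zip is a placeholder.

```python
import itertools
import string
import zipfile
import zlib


def brute_force_zip(path, max_len=4):
    """Try short lowercase/digit passwords against a ZipCrypto-protected archive."""
    alphabet = string.ascii_lowercase + string.digits
    with zipfile.ZipFile(path) as archive:
        member = archive.namelist()[0]  # test against the first file in the archive
        for length in range(1, max_len + 1):
            for candidate in itertools.product(alphabet, repeat=length):
                password = "".join(candidate)
                try:
                    archive.read(member, pwd=password.encode())
                    return password  # decryption and CRC check succeeded
                except (RuntimeError, zipfile.BadZipFile, zlib.error):
                    continue  # wrong password, try the next candidate
    return None


if __name__ == "__main__":
    print(brute_force_zip("sample.zip"))  # placeholder archive name
```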
The structure of the code, the comments explaining each line, and the use of the attackers' native language for function names and variables further point to the use of AI in writing the malware.
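The report does not reproduce the attackers' code. The hypothetical, harmless Python snippet below merely illustrates the tell-tale style the researchers describe: a comment explaining every line, and function and variable names in the attackers' native language (here, French); donnees.txt is a placeholder file name.

```python
# Hypothetical, benign snippet mimicking the style flagged by researchers:
# every line is explained by a comment and identifiers are in French.

# Store the path of the file to read
chemin_du_fichier = "donnees.txt"

# Open the file in read mode
with open(chemin_du_fichier, "r", encoding="utf-8") as fichier:
    # Read every line of the file into a list
    lignes = fichier.readlines()

# Count how many lines were read
nombre_de_lignes = len(lignes)

# Print the result to the screen
print(nombre_de_lignes)
```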
Security researchers have increasingly warned that cybercriminals may be using gen AI to write phishing emails. Low-level threat actors are also leveraging AI to write malware and customise it for attacks targeting various regions and platforms.
Threat actors are also using AI to speed up the creation of malware when building more advanced threats.
Earlier, in 2023, reports emerged that threat actors were using OpenAI's ChatGPT to write code and launch cyberattacks. At the time, the company updated the chatbot's safeguards to ensure threat actors could not use it to write malicious emails or code. However, it seems that threat actors may have found ways to bypass the security restrictions placed on gen AI models, leveraging the technology to craft malicious campaigns and widen their scope.
Published - September 26, 2024 12:00 pm IST