AI-powered ‘synthetic cancer’ worm represents a new frontier in cyber threats


Jul 17, 2024 - 14:00

Researchers have unveiled a new type of computer virus that harnesses the power of large language models (LLMs) to evade detection and propagate itself. 

This “synthetic cancer,” as its creators dub it, could herald a new era in malware.

David Zollikofer from ETH Zurich and Benjamin Zimmerman from Ohio State University developed this proof-of-concept malware as part of their submission to the Swiss AI Safety Prize.

Their creation, detailed in a pre-print paper titled “Synthetic Cancer – Augmenting Worms with LLMs,” demonstrates the potential for AI to be exploited to create new, highly sophisticated cyber attacks. 

Here’s a blow-by-blow of how it works:

  1. Installation: The malware is initially delivered via email attachment. Once executed, it can download additional files and potentially encrypt the user’s data.
  2. Replication: This stage leverages GPT-4 or similar LLMs. The worm can interact with these AI models in two ways: a) through API calls to cloud-based services such as OpenAI’s GPT-4, or b) by running a local LLM (which could become common on future devices).
  3. GPT-4/LLM usage: Zollikofer explained to New Scientist, “We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit.” The LLM then generates a new version of the code with altered variable names, restructured logic, and potentially even different coding styles, all while maintaining the original functionality (see the sketch after this list). 
  4. Spreading: The worm scans the victim’s Outlook email history and feeds this context to the AI. The LLM then generates contextually relevant email replies, complete with social engineering tactics designed to encourage recipients to open an attached copy of the worm. 
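
At its core, the rewriting step in item 3 amounts to a single prompt asking an LLM for a semantics-preserving rewrite. Below is a minimal, defanged sketch of what such a call might look like, applied to a harmless toy function rather than to any malware; it assumes the OpenAI Python SDK (v1.x) with an `OPENAI_API_KEY` set in the environment, and the prompt wording is illustrative rather than the researchers’ actual prompt.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A harmless toy function standing in for "the file" being rewritten.
toy_source = '''
def sum_even(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total
'''

# Ask the model for a semantics-preserving rewrite: same behavior,
# different variable names, slightly restructured logic.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "Rewrite the following Python function. Keep its semantic "
                "structure and behavior identical, but rename the variables "
                "and restructure the logic a little. Return only the code.\n\n"
                + toy_source
            ),
        },
    ],
)

rewritten_source = response.choices[0].message.content
print(rewritten_source)
```

Each pass produces code that behaves the same but looks different on disk, which is precisely the property that complicates detection.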

As we can see, the virus uses AI in two ways: to rewrite its own code so it can self-replicate, and to write phishing content so it can continue spreading. 

The ability of the “synthetic cancer” worm to rewrite its own code presents a particularly challenging problem for cybersecurity experts, as it could render traditional signature-based antivirus solutions obsolete.
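
To see why signature matching struggles here, consider a simplified toy illustration (not the detection logic of any particular antivirus product): two snippets that behave identically but differ in surface form produce completely different hashes, so a signature computed on one copy never matches the next.

```python
import hashlib

# Two behaviorally identical snippets with different surface forms,
# standing in for successive rewrites of the same file.
original = "def add(a, b):\n    return a + b\n"
rewritten = "def combine(x, y):\n    result = x + y\n    return result\n"

for label, source in (("original", original), ("rewritten", rewritten)):
    digest = hashlib.sha256(source.encode()).hexdigest()
    print(f"{label:9s} {digest}")

# The digests share nothing, even though the two functions are equivalent,
# so a fixed hash- or byte-pattern signature on one copy misses the other.
```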

“The attack side has some advantages right now, because there’s been more research into that,” Zollikofer notes. 

Moreover, the worm’s ability to craft highly personalized and contextually relevant phishing emails increases the likelihood of future successful infections.

This comes just a few months after a similar AI-powered worm was reported in March. 

Researchers led by Ben Nassi from Cornell Tech created a worm that could attack AI-powered email assistants, steal sensitive data, and propagate to other systems. 

Nassi’s team targeted email assistants powered by OpenAI’s GPT-4, Google’s Gemini Pro, and the open-source model LLaVA.

“It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential,” Nassi told Wired, underlining the potential for massive data breaches.

While Nassi’s worm primarily targeted AI assistants, Zollikofer and Zimmerman’s creation goes a step further by directly manipulating the malware’s code and crafting compelling phishing emails.

Both represent potential future avenues for cybercriminals to leverage widely available AI tools to launch attacks.

AI cybersecurity fears are brewing

It has been a tumultuous few days for cybersecurity in an AI context, with Disney suffering a data breach at the hands of a hacktivist group.

The group said it was fighting tech companies on behalf of creators whose copyrighted work had been stolen or otherwise devalued.

Not long ago, OpenAI was revealed to have suffered a breach in 2023, which it tried to keep under wraps. Earlier still, OpenAI and Microsoft released a report admitting that hacker groups from Russia, North Korea, and China had been using their AI tools to craft cyber attack strategies. 

Study authors Zollikofer and Zimmerman have implemented several safeguards to prevent misuse, including not sharing the code publicly and deliberately leaving specific details vague in their paper.

“We are fully aware that this paper presents a malware type with great potential for abuse,” the researchers state in their disclosure. “We are publishing this in good faith and in an effort to raise awareness.”

Meanwhile, Nassi and his colleagues predicted that AI worms could start spreading in the wild “in the next few years” and “will trigger significant and undesired outcomes.” 

Given the rapid advancements we’ve witnessed in just four months, this timeline seems not just plausible, but potentially conservative.
