
State-sponsored hackers exploit ChatGPT for cyberattacks

By Ahmet Erarslan
Feb 15, 2024 2:10 PM

Adversaries from China, Iran, North Korea, and Russia have been exploring potential applications of the generative AI service ChatGPT, but have not yet deployed it in a significant cyberattack.

State actors backed by the governments of China, Iran, North Korea, and Russia are exploring the large language models (LLMs) that power generative artificial intelligence platforms such as OpenAI's ChatGPT. According to the Microsoft Threat Intelligence Center (MSTIC), however, these actors have not yet employed the technology in any significant cyberattack.

MSTIC researchers have worked closely with OpenAI, Microsoft's long-standing and occasionally controversial multibillion-dollar partner, to monitor these adversary groups. The two organizations exchange intelligence on threat actors and their evolving tactics, techniques, and procedures (TTPs), and are also cooperating with MITRE to integrate the new TTPs into the MITRE ATT&CK framework and the ATLAS knowledge base.

According to MSTIC, threat actors have been tracking technological advances in recent years just as closely as defenders have. Like defenders, they see AI as a way to boost their efficiency, and they are probing platforms such as ChatGPT for potential advantages.

The MSTIC team highlighted that although threat actors are exploring AI technologies to enhance their operations, no significant attacks using the LLMs under close observation have been seen to date.

The team emphasized the importance of hardening security controls against potential attacks and deploying advanced monitoring to preempt and thwart malicious activity. While the threat actors' motives and levels of sophistication vary, they share common activities: reconnaissance, coding, malware development, and language learning, particularly English, to aid social engineering and victim interactions.

The team shed light on the activities of five nation-state advanced persistent threat (APT) groups from Iran, North Korea, Russia, and China that have been experimenting with ChatGPT. The Iranian APT Crimson Sandstorm, for instance, has used LLMs to generate social engineering lures for phishing campaigns and to develop code that evades detection, while the North Korean APT Emerald Sleet has used them to support spear-phishing attacks and to gather intelligence on North Korea-related subjects.


What have they been doing? 

The Russian cyber espionage group Forest Blizzard, also known as APT28 or Fancy Bear and linked to Russian military intelligence through GRU Unit 26165, has been actively using LLMs to support cyberattacks against targets in Ukraine.

Among other activities, the group has been observed using LLMs to research satellite communication and radar imaging technologies, potentially in support of military operations against Ukraine. It has also sought assistance with basic scripting tasks such as file manipulation, data selection, regular expressions, and multiprocessing. MSTIC suggests this indicates Forest Blizzard is exploring ways to automate its operations.
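To give a sense of scale, the "basic scripting tasks" described here are routine automation of the kind any junior programmer might ask an LLM for. The sketch below is purely illustrative (the pattern, file names, and functions are invented, not taken from the actors' code): it selects data from text files with a regular expression and shows how the multiprocessing module could fan that work out across processes.

```python
import re
from multiprocessing import Pool
from pathlib import Path

# Hypothetical pattern: pull the token following "ERROR" from log-style files.
LOG_PATTERN = re.compile(r"ERROR\s+(\S+)")

def extract_errors(path):
    """File manipulation + regex: return all ERROR tokens in one text file."""
    text = Path(path).read_text(errors="ignore")
    return LOG_PATTERN.findall(text)

def scan(paths):
    """Sequential data selection across many files."""
    return [match for p in paths for match in extract_errors(p)]

def scan_parallel(paths):
    """Multiprocessing variant: fan per-file work out to worker processes."""
    with Pool() as pool:
        per_file = pool.map(extract_errors, paths)
    return [match for matches in per_file for match in matches]
```

Nothing here is exotic; the point of MSTIC's observation is precisely that the actors were automating mundane glue work, not inventing new attack techniques.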

The two Chinese APT groups are Charcoal Typhoon (also known as Aquatic Panda, ControlX, RedHotel, and Bronze University) and Salmon Typhoon (also known as APT4 or Maverick Panda).

Charcoal Typhoon has a wide-ranging focus, targeting various sectors such as government, communications, fossil fuels, and IT in Asian and European countries. On the other hand, Salmon Typhoon tends to target U.S. defense contractors, government agencies, and cryptographic technology experts. 

Charcoal Typhoon has been seen using LLMs to enhance its technical capabilities, seeking help with tool development, scripting, understanding cybersecurity tools, and creating social engineering tactics. 

Salmon Typhoon is also exploring LLMs, primarily to gather information on geopolitical subjects sensitive to China, prominent individuals, and U.S. global influence and internal affairs. In one instance, however, the group tried to get ChatGPT to write malicious code, which the model declined in line with its safety guidelines.

All identified APT groups have had their access to ChatGPT revoked.

Reaction

In response to the MSTIC-OpenAI research, Neil Carpenter, a principal technical analyst at Orca Security, noted that while nation-state actors are showing interest in LLMs and generative AI, they remain at an early stage and have not yet developed novel or advanced techniques. Organizations that focus on existing best practices for defense and incident response are well prepared, he said, and those investing in advanced approaches such as zero trust will continue to benefit.

Carpenter added that generative AI can also help defenders operate more efficiently, for example by quickly identifying critical assets and vulnerabilities during incidents such as the current Ivanti vulnerabilities.

 

Source: Newsroom

 

Last Updated:  May 29, 2024 10:28 AM