OpenAI models used in nation-state influence campaigns

OpenAI has published a report detailing how threat actors linked to hostile nation states used its tools in influence operations, and how that activity was disrupted.

This follows a wider trend of nation-state threat actors using AI tools to advance their cyber capabilities, with various AI tools used to generate text and images for articles and social media posts.

In the report, OpenAI describes disrupting at least five influence operations, attributed to hostile nation states including Russia and China.

One case study in the report, dubbed ‘Doppelganger’, examines an operation by a persistent Russian threat actor posting anti-Ukrainian content across the internet. The operation used clusters of accounts backed by OpenAI’s tooling, each cluster made up of different functional teams and displaying distinct tactics, techniques and procedures (TTPs).

The content targeted audiences in Europe and North America and was generated for websites and social media. Once a piece was published on a site, up to five accounts would interact with it, often commenting on the posts.

An investigation into these accounts revealed that they only ever interacted with the operation’s own fake content, likely to increase the posts’ visibility.
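As a rough illustration (not taken from OpenAI’s report), the exclusive-engagement pattern described above could be flagged with a simple heuristic: treat any account whose entire interaction history falls within the operation’s own posts as a likely amplifier. The function name, threshold and data in this Python sketch are invented for the example.

    from collections import defaultdict

    def flag_amplifier_accounts(interactions, suspect_posts, min_interactions=3):
        """Flag accounts that engage often, and exclusively, with suspect posts.

        interactions: list of (account_id, post_id) engagement events.
        suspect_posts: set of post IDs attributed to the operation.
        """
        history = defaultdict(set)
        for account, post in interactions:
            history[account].add(post)

        flagged = []
        for account, posts in history.items():
            # An account that only ever interacts with the operation's own
            # content matches the amplification pattern described above.
            if len(posts) >= min_interactions and posts <= suspect_posts:
                flagged.append(account)
        return flagged

    # Example run with invented data:
    suspect = {"post_1", "post_2", "post_3"}
    events = [
        ("acct_a", "post_1"), ("acct_a", "post_2"), ("acct_a", "post_3"),
        ("acct_b", "post_1"), ("acct_b", "unrelated_9"),
    ]
    print(flag_amplifier_accounts(events, suspect))  # ['acct_a']

In practice, platform defenders would combine a signal like this with others, such as account-creation timing and content similarity; the sketch is only intended to make the ‘only ever interacted with the fake content’ observation concrete.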


OpenAI’s impact assessment determined that the operation generated no substantial engagement from authentic audiences on these social media sites.

It does, however, highlight the dedication and capability that hostile nation states bring to attempts to influence audiences.

While this activity related only to the current war in Ukraine, the same techniques could be applied to domestic issues in the United Kingdom, seeking to create adverse opinion on certain subjects, which could include policing.

As AI tooling and capabilities continue to develop, threat actors are likely to increase their use of them, not only for content generation but also for developing and reviewing malicious code. This will make detection more complex, for end users and security teams alike.
