Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences

In Russia, two operations created and spread content criticizing the US, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles that attacked the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts which created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

  • AutoTL;DR

    🤖 I’m a bot that provides automatic summaries for articles:


    OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

    As generative AI has become a booming industry, there has been widespread concern among researchers and lawmakers over its potential for increasing the quantity and quality of online disinformation.

    OpenAI claimed its researchers found and banned accounts associated with five covert influence operations over the past three months, which were from a mix of state and private actors.


    The US Treasury sanctioned two Russian men in March who were allegedly behind one of the campaigns that OpenAI detected, while Meta also banned Stoic from its platform this year for violating its policies.

    OpenAI stated that it plans to periodically release similar reports on covert influence operations, as well as remove accounts that violate its policies.


    Saved 67% of original text.