OpenAI Bans Accounts Tied to China, Russia, North Korea, Says ChatGPT Flags More Scams Than It Fuels

Since it began public threat reporting in February 2024, the ChatGPT maker has disrupted and reported more than 40 networks that violated its usage policies.

[Image source: Chetan Jha/MITSMR India]

    OpenAI said it has disrupted and reported more than 40 malicious networks since it began public threat reporting in February 2024 and is banning accounts that violate its rules, including clusters linked to China, Russia and North Korea. 

    The company’s latest update said most bad actors are grafting AI onto old playbooks to move faster rather than gaining new offensive capabilities. 

    “When activity violates our policies, we ban accounts and, where appropriate, share insights with partners,” OpenAI wrote. 

    OpenAI also said ChatGPT is being used to identify scams up to three times more often than it is being used to run them, citing internal signals and millions of scam-detection interactions a month. 

    Case studies in the report describe Russian-language operators seeking stepwise coding help to assemble malware after direct requests were blocked, and Chinese-language operators drafting multilingual phishing lures or cleaning up tooling.

    OpenAI said it banned suspected state-linked surveillance and information-gathering accounts. Separately, media coverage noted actions against North Korea–linked activity. 

    OpenAI detailed broader criminal use as well, including a “task scam” operation likely based in Cambodia that used ChatGPT to translate content and generate fake personas and outreach messages following a “ping, zing, sting” pattern before the operation was taken down.

    The company said it shares findings with peers and authorities as part of a standing effort to blunt scams, cyber activity and covert influence operations, an approach consistent with its earlier quarterly reports and briefings.
