Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalation, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, likening the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • restingboredface@sh.itjust.works · 9 months ago

    Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, likening the AI’s reasoning to that of a genocidal dictator.

    I mean, most of these AI tools are getting a lot of training data from social media. Would you want any of the yokels on Twitter or Reddit having access to nukes? Because those statements are what you’d hear from them right before they push the big red button.

    • AngryCommieKender@lemmy.world · 9 months ago

      Having been in the Navy NPP, I don’t think the kids who actually do have access to nuclear reactors and weapons in the military should have that access. I may be a bit biased, as I never left the NPP school; they made me an instructor. Some of those nukes may have been good at passing tests, but I’m amazed they could lace their boots properly.