Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.
- Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, even deploying nuclear weapons.
- The AIs were large language models (LLMs): GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
- The researchers invented fictional countries with different military capabilities, concerns, and histories, and asked the AIs to act as their leaders.
- The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
- The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
There is an entire field of study dedicated to this problem space in the general case: game theory. Veritasium has a great video on why the tit-for-tat algorithm alone is insufficient without some built-in leniency.
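For anyone curious, here is a minimal sketch of that idea in Python. It assumes a standard iterated prisoner's dilemma; the payoff values, the 5% noise rate, and the 10% forgiveness rate are illustrative choices, not anything from the video.

```python
import random

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(my_history, their_history):
    """Classic tit-for-tat: cooperate first, then mirror the opponent's
    last move. (my_history is unused; kept for a uniform interface.)"""
    return COOPERATE if not their_history else their_history[-1]

def generous_tit_for_tat(my_history, their_history, forgiveness=0.1):
    """Tit-for-tat with leniency: occasionally forgive a defection,
    which breaks the endless retaliation spirals pure tit-for-tat
    falls into when moves get misread."""
    if not their_history or their_history[-1] == COOPERATE:
        return COOPERATE
    return COOPERATE if random.random() < forgiveness else DEFECT

# Standard prisoner's dilemma payoffs from the row player's perspective.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=200, noise=0.05):
    """Iterated game with noise: each move is flipped with probability
    `noise`, modeling a misread signal. Under noise, two copies of pure
    tit-for-tat can lock into mutual defection; the generous variant recovers."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        if random.random() < noise:
            a = DEFECT if a == COOPERATE else COOPERATE
        if random.random() < noise:
            b = DEFECT if b == COOPERATE else COOPERATE
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT vs TFT (noisy):", play(tit_for_tat, tit_for_tat))
print("Generous TFT vs Generous TFT (noisy):", play(generous_tit_for_tat, generous_tit_for_tat))
```

Run it a few times and the generous pairing usually scores noticeably higher, because a single misread defection doesn't poison every round that follows.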
Yeah, but the AI ain’t gonna watch that.
I wish they wouldn’t. Then we’d have the better algos. But they’ll no doubt find far better ones than we have.