The question of how smart artificial intelligence can become is often at the core of technological advancement, and after an “experiment” using Quake 3’s “adaptive” AI (which proved to be fake), video games could provide the answer.
The original story came through a mysterious message board thread, a screenshot of which was posted to Imgur. The image showed a discussion in which a gamer claimed to have been running a Quake 3 Arena simulation for four years. The original poster said that the bots, which had been killing each other for quite some time, had come to a complete standstill.
The assumption was made that the bots “learned” that fighting would end in a looping stalemate, and instead opted for peace.
The poster also claimed that each bot’s AI file had grown to 512MB, with 8GB of tactical information accumulated across the bots over the four years.
“They would rotate to look at me,” said the original poster, who claimed to have jumped into the game and tried to get a reaction. “I walked around a little bit and they all just kept looking at me.”
The poster then fired a gun, and the bots “all ran for the nearest weapons, took me down and the server crashed”.
After the story caused a bit of a buzz, it emerged that it was actually a joke originating on 4chan; but it does raise an interesting question.
Could AI non-player characters adapt beyond their traditional programming models? Do you think bots could “learn” to understand the limitations of a game? It may be a stretch, but let us know what you think in the comments section and the MyGaming forums.