OpenAI beats Elon Musk’s Grok in AI chess tournament

CALIFORNIA (Kashmir English): ChatGPT-maker OpenAI has beaten Elon Musk's Grok in the final of a tournament to crown the best artificial intelligence (AI) chess player.

Historically, tech companies have often used chess to assess the abilities of a computer, with even the top human players virtually unable to beat modern chess machines.

But this competition involved no computers purpose-built for chess; instead, the contest was held between AI programs designed for everyday use.

OpenAI's o3 model emerged unbeaten in the tournament, defeating xAI's Grok 4 model in the final and adding fuel to the fire of an ongoing rivalry between the two global tech firms.

Sam Altman and Musk, both co-founders of OpenAI, each claim their latest models are the smartest in the world.

Google's Gemini model took third place in the tournament, beating a different OpenAI model.

But these AI models, while capable at many everyday tasks, are still improving at chess – with Grok making a number of errors during its final games, including repeatedly losing its queen.

"Up until the semi-finals, it seemed like nothing would be able to stop Grok 4 on its way to winning the event," Pedro Pinhata, a writer for Chess.com, said in its coverage.

“Despite a few moments of weakness, X’s AI seemed to be by far the strongest chess player… But the illusion fell through on the last day of the tournament.”

He said Grok’s “unrecognizable” and “blundering” play enabled o3 to claim a succession of “convincing wins”.

“Grok made so many mistakes in these games, but OpenAI did not,” said chess grandmaster Hikaru Nakamura during his livestream on the final.

Before Thursday’s final, Musk had said in a post on X that xAI’s prior success in the tournament had been a “side effect” and it “spent almost no effort on chess”.

Why is AI playing chess?

The AI chess tournament was held on Kaggle, a Google-owned platform that allows data scientists to evaluate their systems through competitions.

Tests known as benchmarks are used to examine AI models' skills in areas such as reasoning or coding, and contests like this chess tournament serve a similar purpose.
