Why today's generative AI will never outperform humans

Published on Jun 19, 2025 in Thoughts on AI  

Machine learning has revolutionized the game of Go. In its 2016 match against Lee Sedol, considered one of the best players in the world at the time, the AlphaGo program won 4 of the 5 games.

One striking moment came in the second game: the Go experts commenting on the event considered that the machine had made a rookie mistake with its now-famous move 37. Except that this move led the machine to victory.

Artificial intelligence had invented a new strategy unknown to humans!

Many people compare the LLMs that power today's chatbots with AlphaGo, and they expect that, in the same way, LLMs will end up surpassing humans. But this will never happen, for a very simple reason that I will explain to you.

AI is above all a matter of statistics and labelled data. For an AI to surpass humans, it must be able to go beyond its training data.

To build an evolutionary algorithm, you need a fitness function: a function that tells you whether a given result matches what you want. This is what allows a system to improve on its own.
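As a toy illustration (everything here is hypothetical, not related to any real Go program), here is a minimal evolutionary loop where the fitness function is fully automatic: we evolve a bit string toward an all-ones target, and scoring a candidate never requires a human.

```python
import random

TARGET = [1] * 20  # the goal; in a real problem it is implicit in the fitness function

def fitness(candidate):
    """Automatic fitness: count bits matching the target. No human needed."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < rate else b for b in candidate]

def evolve(generations=200, pop_size=30):
    """Keep the fittest half each generation and breed mutated offspring."""
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]    # the fittest half survives
        children = [mutate(p) for p in parents]  # offspring with random mutations
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # approaches the maximum of 20, with no human in the loop
```

The whole loop can run unattended for as many generations as you like, because `fitness` is cheap, automatic, and unambiguous. That is exactly the property the rest of this post argues LLMs lack.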

For the game of Go, it's very simple. It's easy to automatically check whether a game follows the rules, whether it is over, and whether it was won or lost. The machine can therefore run an enormous number of experiments, without any human intervention, to improve itself.
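To make that concrete, here is what such an automatic "referee" looks like for a much simpler game, tic-tac-toe (a hypothetical stand-in for Go's rules): a program can decide entirely on its own whether a game is over and who won, so a machine could play millions of games against itself without a human judging any of them.

```python
# Hypothetical tic-tac-toe referee: the board is a list of 9 cells,
# each "X", "O", or " ". Everything below is decidable by code alone.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def is_over(board):
    """The game ends on a win or when the board is full (a draw)."""
    return winner(board) is not None or " " not in board

board = ["X", "X", "X",
         "O", "O", " ",
         " ", " ", " "]
print(winner(board), is_over(board))  # X True
```

Go's rules are vastly harder to check than these eight lines, but the principle is identical: the verdict is objective and computable, which is what made AlphaGo's self-play possible.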

The problem with LLMs is that there is no such fitness function. First, they are too general-purpose. Second, they deal with subjects whose quality is often highly subjective.

To improve an LLM, you need human intervention to judge whether the output matches what is expected. There is no 100% reliable automatic fitness function. And in the process, humans pass their own limits and biases on to the model.

This is simply why LLMs are very different from AlphaGo, and why what happened with AlphaGo will not happen with LLMs.

If we want an AI to be able to improve on its own, it must "understand" the world. That is very easy when the "world" is a simple game defined by a few rules. But it's much more complex when you want an AI to understand reality as a whole. And clearly, current research is very far from that.

Don’t miss my upcoming posts — hit the follow button on my LinkedIn profile