When 1+1 doesn’t equal 2: The mathematical limits of linguistic artificial intelligence
Modern artificial intelligence systems such as GPTs (Generative Pre-trained Transformers) can produce remarkably natural and fluent text. It almost seems as if these systems think and reason like humans. But ask them to carry out even a simple calculation, 1+1 for example, and the limits of these so-called “linguistic” AIs quickly show.
The ease of human intuition
Imagine asking a friend of yours “What is 1+1?”. They will answer “2” without even thinking about it: for them it is a trivial operation, solved immediately and intuitively. Ask the same question to a GPT and it will usually reply “2” as well, but only because that pattern appears countless times in its training text; change the question to a longer or less familiar calculation and it can easily produce a wrong answer, because it is not actually computing anything.
This is because artificial intelligence systems like GPTs, despite their fluency in generating coherent sentences, have no real understanding of mathematics. Their goal is to predict the next word in a sentence, not to apply mathematical rules.
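To make the idea concrete, here is a deliberately tiny, purely illustrative sketch in Python. It is not how a GPT is built, but it captures the core mechanism of statistical continuation: the program only counts which token most often followed a given context in its “training” text and predicts that token, without ever performing an addition.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows each prefix in the training
# text, then predict the most frequent continuation. Nothing here computes a sum.
training_text = [
    "1 + 1 = 2",
    "1 + 1 = 2",
    "1 + 1 = 11",   # noisy, web-like data
    "2 + 2 = 4",
]

counts = defaultdict(Counter)
for line in training_text:
    tokens = line.split()
    for i in range(len(tokens) - 1):
        counts[tuple(tokens[: i + 1])][tokens[i + 1]] += 1

def predict_next(context: str) -> str:
    """Return the continuation seen most often after this exact context."""
    return counts[tuple(context.split())].most_common(1)[0][0]

print(predict_next("1 + 1 ="))  # '2', but only because it was the most frequent pattern
```

A real GPT replaces these simple counts with a huge neural network, but the objective is the same: continue the text plausibly, not evaluate the expression.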
Like a student who learns by heart without understanding
We can compare a GPT’s approach to mathematics to that of a student who has memorized formulas and procedures without really grasping the principles behind them. The intuition and flexibility typical of humans are missing.
It’s a bit like the difference between memorizing the alphabet and actually being able to read and understand a book. GPTs are very good at the mechanical part, but struggle with deep understanding.
The importance of architecture and training set
This is because their architecture is based on neural networks trained mainly on large amounts of text, not on abstract mathematical relationships and rules. It is as if their brain were “wired” to process natural language rather than formulas and equations.
AI agents can help with calculation
There are attempts to create “AI agents” that interact with GPTs and guide them through a calculation step by step, for example by spelling out the individual operations to be performed in order to reach the result. In this way, GPTs can produce correct calculations, although still without real conceptual understanding.
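As a hedged illustration of what such step-by-step guidance might look like, here is a minimal sketch; the function names and the loop are my own, not taken from any particular agent framework. The “agent” receives the calculation already broken into single operations, executes each one deterministically, and builds a plain-text trace that could be fed back into the model’s prompt.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def solve_step_by_step(start, steps):
    """Apply single operations one at a time.

    start: initial number; steps: list of (op, operand) pairs, e.g. [("+", 1)].
    Returns the final value and a text trace a language model could consume.
    """
    value = start
    trace = [f"start with {value}"]
    for op, operand in steps:
        value = OPS[op](value, operand)   # the arithmetic happens here, not inside the GPT
        trace.append(f"apply {op} {operand} -> {value}")
    return value, trace

value, trace = solve_step_by_step(1, [("+", 1)])
print(value)   # 2
print(trace)   # ['start with 1', 'apply + 1 -> 2']
```

The key point is that correctness comes from the deterministic operations, while the GPT’s role is reduced to producing and consuming the surrounding text.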
The need for hybrid systems
To overcome these limitations, and one day have machines capable of reasoning about mathematics on a par with humans, we will need new hybrid systems that combine the linguistic approach of GPTs with components specialized in numerical calculation and in modeling abstract logical and mathematical principles.
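What might such a hybrid look like in practice? The following sketch is only an assumption about one possible design, not an existing system: a small router checks whether the user’s query is plain arithmetic; if so, a deterministic evaluator answers it, otherwise the query is handed to a (stubbed-out) language model.

```python
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str):
    """Safely evaluate +, -, *, / over plain numbers; reject anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not simple arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def ask_language_model(query: str) -> str:
    # Placeholder: in a real hybrid system this would call the GPT.
    return "[handled by the text model]"

def answer(query: str) -> str:
    try:
        return str(eval_arithmetic(query))   # numerical component: always exact
    except (ValueError, SyntaxError):
        return ask_language_model(query)     # linguistic component: everything else

print(answer("1 + 1"))                  # '2'
print(answer("Tell me about GPTs"))     # falls back to the language model
```

In such a design the language model does what it is good at, understanding and producing text, while anything that must be exact is delegated to a component that actually computes.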
Future prospects
There is still a long way to go, but developments in the field of artificial intelligence in recent years have been so rapid that we can be optimistic. Thanks to the collaboration between human and artificial minds, one day we may discover that even for machines 1 + 1 equals 2 in a completely natural way.