Scientists Revive a 2,400-Year-Old Math Puzzle to Find Out
A Puzzle as Old as Plato
Some experiments don’t need futuristic technology to raise fascinating questions; they just need a bit of history. In fact, researchers at Cambridge University and the Hebrew University of Jerusalem recently turned to a math challenge that dates all the way back to Plato, around 385 B.C.E.
The story comes from one of Plato’s accounts of Socrates. A student is asked to double the area of a square. Simple enough, right? Except it isn’t. The student guesses that you just double the length of each side, not realizing that this actually makes a square four times larger, not double. The real solution is more subtle: the sides of the new square should equal the diagonal of the original.
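If you want to convince yourself, a quick numerical check works just as well as a compass and straightedge. Here is a minimal sketch (in Python, purely for illustration) showing that a square built on the diagonal has exactly twice the original area:

```python
import math

# Original square with side length 1 (any length works).
side = 1.0
original_area = side ** 2              # 1.0

# The classical solution: the new square's side is the old square's diagonal.
diagonal = side * math.sqrt(2)
new_area = diagonal ** 2               # 2.0 (up to floating-point rounding)

print(original_area, new_area)         # 1.0 2.0000000000000004
```

The naive guess of doubling the side length would instead give (2 * side) ** 2 = 4, four times the area, which is exactly the mistake the student in Plato’s account makes.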
For more than two thousand years, this problem has served as a kind of philosophical test: do we discover mathematical truths through reasoning that’s “already inside us,” or do we learn them from trial, error, and experience?
Now, oddly enough, the very same debate is being applied to artificial intelligence.
Why the Puzzle Matters for AI
At first glance, throwing Plato’s square problem at ChatGPT might seem like a gimmick. After all, it’s a chatbot trained on a mountain of text, not a geometry tutor. But that’s precisely why researchers chose it.
Most of ChatGPT’s training comes from text-based data rather than images, diagrams, or geometry textbooks. So, in theory, the exact solution shouldn’t be sitting there waiting in its memory. If it managed to figure it out anyway, that would suggest something interesting, perhaps even unsettling, about the way AI “learns.”
The researchers weren’t just testing whether ChatGPT could spit out the right answer. They were probing a deeper question: does AI generate knowledge the way humans might, or is it simply remixing patterns from data it has seen before?
When ChatGPT Slipped Up
Things got even more intriguing when the team asked a follow-up question. Instead of doubling the square, what about doubling the area of a rectangle?
This time, ChatGPT confidently declared that geometry offered no solution. According to the bot, the diagonal of a rectangle couldn’t be used in the same way. And yet, the researchers knew perfectly well that a solution existed.
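To see why the bot’s claim is wrong, here is another small sketch, just an arithmetic illustration and not necessarily the construction the researchers had in mind: scaling both sides of a rectangle by √2 keeps its proportions and doubles its area.

```python
import math

# An arbitrary rectangle; the particular side lengths don't matter.
a, b = 3.0, 5.0
original_area = a * b                  # 15.0

# Scale both sides by sqrt(2): the shape is preserved and the area doubles,
# so the claim that "no solution exists" simply doesn't hold.
a2, b2 = a * math.sqrt(2), b * math.sqrt(2)
doubled_area = a2 * b2                 # ~30.0

print(original_area, doubled_area)
```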
Here’s the odd part: the chance that this exact false claim was present somewhere in ChatGPT’s training data was extremely small. In other words, the system wasn’t just parroting back a mistake it had read online. It seemed to be improvising.
Nadav Marco, a visiting scholar at Cambridge, put it this way: “When we face a new problem, our instinct is often to try things out based on past experience. ChatGPT seemed to do something similar. Like a student fumbling toward an answer, it generated its own hypothesis even if it was wrong.”
What Counts as “Thinking”?
That word, “improvising,” makes many people uneasy when applied to machines. Can we really say an AI “thinks” when it invents a solution? Or is it still just crunching statistics behind the curtain of its black box?
The researchers suggested that what ChatGPT did resembled a concept from education called the zone of proximal development (ZPD). The idea is that there’s a space between what you already know and what you can figure out with a bit of guidance, whether from a teacher or from prior hints.
In this case, ChatGPT wasn’t accessing stored knowledge. It was navigating its own version of a ZPD, stretching beyond what it “knew” and producing a novel answer, right or wrong.
The Danger of Blind Trust
Of course, before we get carried away, there’s an important caution. AI improvisation isn’t always a virtue. A confident-sounding but incorrect proof can be worse than silence, especially in classrooms where students might mistake fluency for truth.
Andreas Stylianides, professor of mathematics education, warned that unlike a math textbook, you can’t assume ChatGPT’s proofs are valid. He argued that one of the key skills for the future will be teaching students not just to solve problems but to evaluate AI-generated reasoning critically.
This means the math curriculum may eventually need to include lessons on “reading” AI, much like we already train students to analyze sources for bias and reliability.
The Black Box Problem, Again
This experiment highlights, once more, the enduring “black box” dilemma in AI. We don’t really know how the model is generating its responses. Did ChatGPT make a lucky guess, or did it follow some hidden logical path? The truth is invisible to us.
That’s both unsettling and exciting. On the one hand, it’s hard to trust a system whose reasoning we can’t trace. On the other, it suggests untapped ways of using AI as a partner in discovery, rather than just as a search engine.
So, Is AI Really Learning?
The researchers themselves urged restraint. They don’t claim ChatGPT solves problems like a human mathematician would. But they do admit it behaved in a “learner-like” way. That label matters.
Think of a high school student puzzling over a geometry problem. They might stumble, circle back, and propose a flawed solution before getting closer to the truth. That messy process is often more revealing of intelligence than the neat answer at the end. And ChatGPT, in its fumbling way, mirrored that process.
Looking Ahead
The study opens the door to several intriguing possibilities. Newer AI models could be tested with a wider range of mathematical challenges to see whether this learner-like behavior persists. There’s also potential to pair systems like ChatGPT with interactive geometry software, allowing them to explore problems visually, not just verbally.
Imagine a classroom where a student and an AI system work through a geometry proof together, the AI tossing out tentative ideas while the student checks them against diagrams. It wouldn’t replace human teaching, but it could enrich the learning experience, much like a peer study partner who’s always available.
A Final Thought
The ancient square problem was never just about math. It was about philosophy: what it means to know something, whether humans are born with truths already in our minds, or whether knowledge comes only from the world outside us.
Now, centuries later, we’re asking the same question of our machines. Can they know? Can they learn? Or are they destined to remain eternal parrots, only echoing what they’ve absorbed?
The answer, at least for now, seems to lie somewhere in between. ChatGPT doesn’t think like we do, but it doesn’t not think, either. It occupies a strange middle space, where its mistakes can be just as revealing as its successes.
And maybe that’s the real lesson: to pay attention not only to what AI gets right, but also to the very human-like way it gets things wrong.
Source: LiveScience