ChatGPT has surprised researchers once again. A new study shows that the AI successfully solved a 2,000-year-old geometric problem known as "doubling the square." This ancient challenge, famously discussed by Socrates in Plato's Meno, asks how to construct a square with twice the area of a given one; crucially, the model was not told the classical solution, in which the new square's side equals the diagonal of the original.

In a recent experiment, scientists discovered that ChatGPT’s reasoning process may go beyond simple pattern recall and could resemble something close to learning.


A 2,000-Year-Old Puzzle Put to the Test

Researchers from Cambridge University and the Hebrew University of Jerusalem asked ChatGPT to solve the classic “doubling the square” problem. They intentionally chose a challenge that:

  • requires abstract thinking,
  • is unlikely to exist directly in the model’s training data,
  • and reveals whether an AI can generate original reasoning rather than repeating memorized patterns.

To their surprise, ChatGPT produced a correct explanation of the geometric concept. It recognized that the area of a square doubles when the side equals the length of the diagonal of the original square — a principle rooted in the Pythagorean theorem.
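The principle ChatGPT identified can be checked numerically. The sketch below (illustrative only; the function name is ours, not from the study) computes the diagonal of a square via the Pythagorean theorem and confirms that a square built on that diagonal has exactly twice the area:

```python
import math

def doubled_square_side(side: float) -> float:
    """Side of a square with double the area: the diagonal of the
    original square, via the Pythagorean theorem."""
    return math.sqrt(side**2 + side**2)  # equals side * sqrt(2)

original_side = 3.0
new_side = doubled_square_side(original_side)

# The new square's area equals twice the original's.
print(new_side**2)            # area of the new square
print(2 * original_side**2)   # twice the original area
```

Since the diagonal is side × √2, squaring it gives side² × 2, which is why the construction works for any starting square.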


Unexpected Twist: AI Makes a New Error

The real surprise came next.

When researchers asked ChatGPT to double the area of a rectangle, the AI incorrectly claimed that it was geometrically impossible. According to scientist Nadav Marko, this mistake strongly suggests that the model was not recalling a memorized answer. Instead, it appeared to be constructing its own hypothesis based on partial reasoning — a behavior somewhat similar to human learning.
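Doubling a rectangle's area is in fact geometrically possible, which is what makes the model's claim an error rather than a recalled fact. One simple check (our own illustration, not the study's): scaling both sides by √2 doubles the area while preserving the rectangle's proportions.

```python
import math

def double_rectangle(width: float, height: float) -> tuple[float, float]:
    """Scale both sides by sqrt(2), doubling the area while keeping
    the rectangle's aspect ratio."""
    s = math.sqrt(2)
    return width * s, height * s

w, h = double_rectangle(3.0, 4.0)
print(w * h)  # approximately 24.0, twice the original area of 12.0
```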

The researchers concluded that ChatGPT was operating in what educational theory calls the "zone of proximal development": the space between what a learner already knows and what they can discover with guidance.


Does This Mean AI Can Think? Not Quite

Professor Andreas Stylianides, co-author of the study, warned against overinterpreting the results. AI does not “think” like a person. However, the experiment shows that large language models can exhibit learning-like behaviors, improvising when confronted with unfamiliar problems.

The researchers plan to:

  • test newer models on a broader range of mathematical challenges,
  • explore whether AI can solve similar geometric puzzles,
  • develop interactive systems combining ChatGPT with visual geometry tools to help students learn complex concepts.

Why This Discovery Matters

This experiment highlights a growing field of interest:
Can AI reason about problems it has never seen before?

The study suggests:

  • AI may be capable of flexible problem-solving,
  • but its reasoning remains inconsistent,
  • and combining symbolic tools with language models might boost accuracy dramatically.

For educators and researchers, this opens exciting opportunities for enhanced learning platforms and AI-assisted teaching methods.
