B Hari

February 28, 2026

The Ghost in the Goalpost: Why Nobody Can Agree on When AGI Will Arrive

Every major AI lab, forecaster, and philosopher defines "general intelligence" differently — and the confusion is not accidental.


In 1970, Marvin Minsky told Life magazine that a machine with "the general intelligence of an average human being" would exist within three to eight years. It would "read Shakespeare, grease a car, play office politics, tell a joke, have a fight." More than half a century later, the most advanced AI systems on the planet can draft legal briefs, write functional code, and pass medical licensing exams — yet cannot reliably fold laundry, understand a sarcastic remark from a teenager, or explain why a joke is funny rather than merely generating one.

The question of when artificial general intelligence will arrive has become the defining wager of our era. Trillions of dollars in capital allocation, national security strategies, and millions of career decisions hinge on the answer. And yet, as of February 2026, the field cannot agree on what AGI even means — let alone when it will show up.

This is not a gap in knowledge. It is a gap in honesty.


THE DEFINITION NOBODY HAS

The term "artificial general intelligence" was coined around 2001 by Shane Legg, co-founder of Google DeepMind. His original definition was straightforward: a system capable of performing "the full range of cognitive tasks." Two decades later, Legg himself admits the term has become a source of "considerable confusion" and requires urgent refinement.

He is not alone in his discomfort. Dario Amodei, CEO of Anthropic, called AGI "a marketing term" in January 2025, preferring to describe future AI as "a country of geniuses in a data center." Yann LeCun, who left Meta in late 2025 to found AMI Labs, rejects the term entirely: "I don't think human intelligence is general. So calling human-level AI 'AGI' is a misnomer."

Meanwhile, OpenAI published an internal five-level framework in 2024 that defines AGI not philosophically but economically — as systems capable of performing the work of entire organizations. Google DeepMind countered with a detailed taxonomy mapping AI performance against generality, arguing that current large language models already qualify as "Emerging AGI" — Level 1 on a five-level scale — but that the bar people historically meant by AGI corresponds to "Competent" performance, the 50th percentile of skilled adults across most cognitive tasks. That bar remains unmet.

Notice the pattern. Each organization defines AGI in whichever way conveniently demonstrates its own proximity to the goal or vindicates its particular technical approach. This is not accidental. It is strategic positioning dressed up as scientific taxonomy.


THE CLASS DIVIDE IN PREDICTIONS

Strip away the definitions and look at the raw timeline predictions, and a striking sociological pattern emerges: the closer someone is to raising capital, the sooner they believe AGI will arrive.

Industry CEOs cluster at the aggressive end. Sam Altman has described AGI as "pretty close" and suggested it could arrive within two years. Amodei predicts "powerful AI" within one to three years. Demis Hassabis of Google DeepMind told an audience at the India AI Summit in February 2026 that AGI would have "10x the impact of the Industrial Revolution at 10x the speed," arriving within five years. Elon Musk has predicted AGI in each of two consecutive years — 2025 in 2024, then 2026 in 2025 — a rolling one-year horizon that has become its own meme.

Academic researchers offer a different calendar entirely. A 2023 survey of 2,778 AI researchers placed the median estimate for "high-level machine intelligence" at 2047. Gary Marcus of New York University has argued in The Economist that "the chances of AGI's arrival by 2027 now seem remote" and calls for fundamentally different scientific paradigms. Rodney Brooks of MIT, who publishes an annual predictions scorecard, stated flatly: "We're not going to get AGI for another 300 years." He coined an acronym for what he sees driving herd optimism — FOBAWTPALSL: "Fear Of Being A Wimpy Techno-Pessimist And Looking Stupid Later."

Professional forecasters land in the middle. Metaculus community predictions as of early 2026 place the median AGI date at November 2033. Prediction markets scatter around similar dates: Polymarket gives a 9% probability that OpenAI achieves AGI by 2027; Kalshi puts AGI by 2030 at 40%.

The divergence correlates almost perfectly with incentive structures. CEO timelines drive fundraising and stock valuations. Academic timelines protect reputations and research funding. Forecaster timelines reflect calibrated probabilistic reasoning without skin in the outcome. There is no neutral vantage point from which to judge, which is precisely why the question remains unresolved.


THE 2025 SENTIMENT WHIPLASH

For anyone watching the discourse closely, 2025 offered a compressed version of the entire sixty-year history of AI hype cycles. Between late 2024 and early 2025, OpenAI released its o1 and o3 reasoning models, which demonstrated striking capabilities in mathematics, coding, and multi-step logical inference. AGI timelines compressed almost overnight. Forecasters pulled their estimates forward. The mood was electric.

Then the returns diminished. By late 2025, it became clear that reasoning models were hitting walls faster than expected. The Metaculus median AGI date extended by 2.5 years over the course of a single calendar year. Ilya Sutskever, the former OpenAI Chief Scientist who had overseen GPT-4's development, declared that "the era of just adding GPUs is over." The influential AI 2027 scenario — which had predicted month-by-month progress toward AGI — was graded by its own authors at roughly 65% of predicted pace, pushing their revised AGI median to 2032-2035.

This pattern — breakthrough, euphoria, limits, recalibration — is not new. It is the signature rhythm of AI research since the 1950s. What is different now is the velocity. Previous cycles played out over decades. This one compressed into twelve months. Whether that acceleration reflects genuine proximity to AGI or merely faster-moving hype machinery is itself the central question.


THE TECHNICAL FAULT LINES

Beneath the timeline debate lies a deeper disagreement about whether the current dominant approach — scaling large language models — can reach AGI at all.

The evidence cuts both ways. On one hand, every supposed "fundamental barrier" to AI capability has eventually fallen. Amodei has pointed this out repeatedly: "People keep coming up with barriers that end up dissolving within the big blob of compute. Semantics, reasoning, code, math — suddenly it turns out you can do all of it." Post-training scaling through reinforcement learning has opened an entirely new axis of capability improvement that did not exist two years ago.

On the other hand, the most rigorous test of genuine reasoning — the ARC-AGI-2 benchmark, specifically designed to require novel abstract reasoning that cannot be solved through memorization — tells a sobering story. Humans score above 95% with no preparation. The most capable AI model scores below 30% at high compute. This is not a gap that scaling alone has shown any ability to close.

LeCun's critique goes deeper. He argues that language is downstream of physical world understanding — a by-product of intelligence, not its substrate. "A teenager learns to drive in 20 hours," he notes. "But we still don't have Level 5 autonomous driving. A child can clean a table on the first try. But we don't have robots that can do housework." His proposed alternative, JEPA — Joint Embedding Predictive Architecture — aims to build AI that develops intuitive physics and causal understanding from sensory data, the way a child watching objects fall develops an internal model of gravity without anyone explaining Newton. He has raised 500 million euros to pursue this vision.

Epoch AI's September 2025 analysis adds a material constraint: every tenfold increase in compute scale now lengthens lead times by roughly one year. The trillion-dollar training cluster that scaling optimists envision by 2030 may not materialise until 2035 due to sheer infrastructure bottlenecks.
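
To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The target year and the number of remaining tenfold jumps are illustrative assumptions of mine, not Epoch's published figures; the point is only that a per-order-of-magnitude delay compounds into years of slippage.

```python
# Back-of-envelope sketch of the lead-time constraint described above.
# Assumptions (illustrative, not Epoch AI's published model):
#   - scaling optimists expect a trillion-dollar cluster by 2030
#   - roughly five tenfold compute jumps remain to reach it
#   - each tenfold jump adds about one year of lead time (Epoch's rough figure)

OPTIMIST_TARGET_YEAR = 2030
REMAINING_10X_STEPS = 5       # hypothetical orders of magnitude left
EXTRA_YEARS_PER_10X = 1.0     # approximate per-step infrastructure delay

delay_years = REMAINING_10X_STEPS * EXTRA_YEARS_PER_10X
print(f"Infrastructure delay: ~{delay_years:.0f} years")
print(f"Revised arrival:      ~{OPTIMIST_TARGET_YEAR + delay_years:.0f}")
# -> Infrastructure delay: ~5 years
# -> Revised arrival:      ~2035
```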


THE DISTINCTION THAT ACTUALLY MATTERS

The most useful framework for navigating this confusion is not a timeline. It is a distinction between two fundamentally different things that travel under the same name.

Functional AGI is AI that can perform most knowledge work at or above the level of a skilled human — writing research papers, managing projects, synthesising legal cases, designing software architectures — autonomously and reliably. This is what Altman and Amodei are describing when they say "two to three years." By some measures, particularly in narrow domains, it is already partially here.

Philosophical AGI is AI that genuinely understands the world, reasons causally rather than statistically, handles true novelty, and possesses something like judgment. This is what LeCun insists requires fundamentally different architectures, what Brooks argues is centuries away, and what a Cambridge philosopher recently argued we may never be able to verify even if it arrives.

The economic disruption — the part that will reshape careers, industries, and national competitiveness — comes from functional AGI. It does not require a machine to understand Shakespeare the way Minsky imagined. It requires a machine to process, synthesise, and act on information faster and cheaper than a human can. That capability is arriving in stages, not as a single threshold event.

The existential questions — consciousness, moral status, alignment, control — arise from philosophical AGI. As for how long the current trajectory can run, 80,000 Hours synthesised the state of play in February 2026: "Bottlenecks will likely hit around 2028-2032, so to a first approximation, either we reach AGI in the next five years, or progress will slow significantly."


WHAT THIS MEANS FOR THE REST OF US

The honest answer to "when will AGI arrive?" is that the question itself is broken. There is no single threshold that will be crossed, no morning when we wake up to a fundamentally different world. Instead, there is a continuum of increasingly capable systems, each of which disrupts specific domains at specific times.

For anyone making decisions today — about careers, investments, education, policy — the actionable insight is not a date. It is a recognition that functional AI capabilities are compressing faster than institutions can adapt, while the deeper questions of machine understanding and control remain genuinely open. The gap between those two realities is where most of the risk lives.

Minsky's three-to-eight-year prediction from 1970 was wrong by nearly half a century and counting. Today's two-to-three-year predictions may prove more accurate, or they may join the same graveyard of premature confidence. What we can say with certainty is that the people making the loudest predictions have the most to gain from your believing them — and that the history of this field has never once rewarded the optimists on schedule.

The goalposts will move again. They always do. The question worth asking is not where the goalposts are, but what game you are preparing to play.

---

Day 3 of 7 in the series "AI & The Human Condition." Day 1 examined the investment paradox in AI deployment. Day 2 explored the capabilities AI cannot replace. Tomorrow: the nature of consciousness and whether machines could ever possess it.

B Hari

Simplicity with substance
www.bhari.com