AGI
Artificial General Intelligence
AI capable of any cognitive task a human can do: not narrow, but truly general. It does not yet exist, and it remains a contested goal.
AGI (Artificial General Intelligence) refers to AI that can learn and perform any cognitive task a human can do — not just one task. Today's LLMs appear general but are still "narrow" — good at certain patterns over certain data.
Plausible (disputed) criteria for AGI:
- Transfer: applying learning across domains
- Self-improvement: improving its own capabilities
- Causal reasoning: cause and effect, not just correlation
- Long-horizon planning: planning hundreds of steps ahead
- Embodiment: interaction with the physical world
Debate continues: some researchers call GPT-4 "proto-AGI"; others say AGI won't arrive before 2030; others say never. Estimates range from 2027 to 2070+.
Today's AI: an Olympic diver, a tennis pro, a chess champion — each a world-record holder in their lane, but none can cook spaghetti. AGI: sports + music + science + crafts + teaching — all at human level in one mind.
Sam Altman in 2024: "in our internal discussions we're shifting from AGI to ASI (Superintelligence)." Anthropic's Dario Amodei prefers "Powerful AI" — because AGI is so loosely defined.
In practice: GPT-5 / Claude 5 / Gemini 3 take huge steps, but the "AGI threshold" keeps moving. Companies don't publish internal evals, so no one knows how close anyone is. The term lives on as a marketing/science hybrid.
When to use the term:
- Strategy conversations — the 5-10 year trajectory of AI
- Investment/risk assessment (an AGI threshold could rewrite business models)
- Ethics/regulation policy discussions
When not to use it:
- When designing today's products — you use narrow capabilities, not AGI
- When you need a precise technical definition — everyone defines it differently
- Short-term planning — even if AGI arrives, effects unfold over years
Defining it down
Companies shift the AGI threshold for marketing ("our support bot is AGI-level"). With no agreed criteria, any claim can be defended.
AGI ≠ ASI
General (human-level) intelligence is not super (above-human) intelligence. Even if AGI arrives, ASI is a separate leap. The two are often conflated.
Predictions have always been wrong
In 1956, researchers said "20 years to AGI." In 2025 we're still debating. More than 80% of past predictions have been wrong; be skeptical of anyone who claims "95% sure by year X."