Interactive Visualization
Timeline of Artificial Intelligence
Artificial intelligence is not a story that began in the 2020s. It runs from ancient automata and the first mechanical calculators through seventy years of booms and winters, into a present where frontier AI systems double their capabilities every few months and consume a city's worth of electricity.
This timeline layers a century of quantitative trends — training compute, benchmark scores, context windows, inference cost, datacenter power — over more than a hundred curated events. Zoom through deep time, overlay a data curve, filter by category, search a model or a name, or let one of twenty-one stories guide you.
About the AI Timeline
An interactive deep timeline of artificial intelligence — from its mythological roots in antiquity, through seventy years of booms and winters, to frontier systems consuming the electricity of entire cities, and into projected futures. Over 100 curated events, 21 narrated stories, seven overlayable data curves, and seven scholarly periodizations.
What you're seeing
A non-linear timeline from 1300 BCE to 2100 CE in which antiquity is compressed and the recent deep-learning era is stretched to match the pace of change. The always-on backbone curve is Moore's Law (transistor counts since 1971). Any one of six companion curves can be layered on top, from training compute to datacenter power.
Events are hand-curated from peer-reviewed literature, arXiv preprints, archival sources and standard AI-history references. They span mythology and automata, computing, neural networks, game-playing AI, ethics and governance, models, policy, hardware and infrastructure. Each dot opens a detail card with sources and links.
How to use
- Pan & zoom — scroll to zoom, drag to pan. The X-axis is adaptive: antiquity is compressed while the post-1950 computing era is stretched for detail.
- Click any event dot for a detail panel with description, year, sources and related figures.
- Search or Surprise me — use the search bar to find an event, model, lab or era; the dice button jumps to a random one.
- View — toggle the year grid, the present-year line, the two AI Winter bands, and the gradient fill below the curves.
- Periodizations — overlay one of seven scholarly historical framings (AI Eras, Industrial Revolutions, Information & Communication, Socio-economic Regimes, Energy Regimes, World History, Existential Moods). Computing Eras can be shown as an independent overlay alongside any of the seven.
- Trends — Moore's Law is the left-axis backbone. Add one companion curve on the right axis: Training Compute, METR Task Horizon, Benchmarks vs Human, Context Window, Inference Cost, or Datacenter Power.
- Categories — filter event types (mythology, neural networks, game-playing AI, ethics, models, policy, infrastructure, and more).
- Stories — click the Stories button to browse 21 narrated stories woven across AI history.
Training compute
Training Compute (cyan) charts the exponential growth of the compute used to train AI systems, measured in floating-point operations (FLOP). The cyan line is the frontier envelope — at any moment it tracks the most compute-intensive model published to date, jumping upward each time a record is set. Faint background dots show every notable model in the dataset (521 systems).
Show by Lab (sub-toggle under Training Compute) recolors the curve to reveal which research organizations have driven the trend. Each lab's monotonic record progression appears as its own coloured line; the frontier envelope is segmented by lab. Solo a lab with the S button; the Defaults button restores the five major frontier labs (Google/DeepMind, OpenAI, Anthropic, Meta, xAI).
The long-run pace is famously steep. Before 2010, frontier training compute doubled roughly every 20 months — in step with Moore's Law. Since 2010, it has doubled every ~6 months (Sevilla et al. 2022), roughly three times faster than hardware alone would predict, driven by larger budgets, specialised accelerators and massively parallel training.
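In code, the frontier envelope is just a running maximum over publication dates, and a doubling time falls out of any two points on it. A minimal sketch with made-up numbers (not the Epoch AI dataset):

```python
import math
from datetime import date

# Hypothetical (publication date, training FLOP) records -- NOT the Epoch AI data.
models = [
    (date(2012, 6, 1), 1e17),
    (date(2014, 9, 1), 5e16),   # below the standing record, so not on the envelope
    (date(2016, 1, 1), 1e19),
    (date(2020, 5, 1), 3e23),
]

def frontier_envelope(points):
    """Keep only the models that set a new compute record when published."""
    record, frontier = 0.0, []
    for when, flop in sorted(points):
        if flop > record:
            record = flop
            frontier.append((when, flop))
    return frontier

def doubling_time_months(t0, f0, t1, f1):
    """Average months per doubling of compute between two frontier points."""
    months = (t1 - t0).days / 30.44
    return months / math.log2(f1 / f0)

frontier = frontier_envelope(models)   # three of the four models make the cut
```

The record-only filter is what makes the cyan line jump in steps rather than wiggle with every release.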
Data curves
Moore's Law (blue) — transistor counts in microprocessors, from the Intel 4004 (2,300 transistors, 1971) to modern AI accelerators with over 100 billion transistors. Drawn from Karl Rupp's public dataset, itself extending work by Horowitz, Labonte, Shacham, Olukotun, Hammond and Batten.
METR Task Horizon (amber) — the length of real software-engineering tasks (in minutes of expert human time) that each frontier AI model can complete with 50% success. Horizon has roughly doubled every seven months since 2019, rising from ~3 seconds for GPT-2 to ~12 hours for Claude Opus 4.6 (Kwa et al. 2025).
Benchmarks vs Human (multi-colour) reproduces Figure 2.1.1 of the Stanford HAI AI Index 2026. Eleven reference benchmarks — ImageNet, SuperGLUE, MMLU, GPQA Diamond, OSWorld, SWE-bench, VQA, SQuAD 2.0, MATH, MMMU and AIME — are scaled so that the human baseline = 100%. Solid lines track active benchmarks; dashed lines mark benchmarks that have saturated.
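The rescaling itself is a simple ratio: each benchmark's scores are divided by its human baseline. A minimal sketch with illustrative numbers (the real figure uses the AI Index values; the saturation rule here is an assumption of this sketch):

```python
def vs_human(score, human_baseline):
    """Express a raw benchmark score as a percentage of the human baseline."""
    return 100.0 * score / human_baseline

def is_saturated(scores, human_baseline):
    """Assumed rule: a benchmark is drawn dashed once any score meets the baseline."""
    return max(scores) >= human_baseline

# Illustrative MMLU-style trajectory against a human baseline of 89.8
trajectory = [43.9, 70.0, 86.4, 90.1]
scaled = [vs_human(s, 89.8) for s in trajectory]
```

Scaling every benchmark to the same 100% line is what lets eleven otherwise incommensurable tests share one axis.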
Context Window (green) — the maximum input tokens a frontier language model can reason over in a single call, from GPT-2's 1,024 tokens to today's 1–10 million-token systems.
Inference Cost (magenta) — the falling price, in USD per million output tokens, of frontier-class model inference. The cheapest-so-far envelope has dropped roughly 10× per year since 2022 — sometimes called the “other Moore's Law” of AI.
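The cheapest-so-far envelope is the mirror image of the compute frontier: a running minimum rather than a running maximum, with the decline rate derived from any two points on it. A sketch with illustrative prices (not the actual Epoch series):

```python
def cheapest_so_far(prices):
    """Running minimum over (year, USD per million output tokens)."""
    best, envelope = float("inf"), []
    for year, price in sorted(prices):
        if price < best:
            best = price
            envelope.append((year, price))
    return envelope

def annual_decline_factor(t0, p0, t1, p1):
    """How many times cheaper per year; 10.0 means a 10x-per-year drop."""
    return (p0 / p1) ** (1.0 / (t1 - t0))

# Illustrative prices only: $60/M tokens in 2022 falling to $0.06/M in 2025.
factor = annual_decline_factor(2022, 60.0, 2025, 0.06)   # a 10x-per-year decline
```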
Datacenter Power (vermillion) — the aggregate electrical load of the frontier AI datacenter fleet tracked by Epoch AI: from roughly 50 MW in mid-2023 to ~13 GW today and a projected ~33 GW by 2030. The solid line is historical; the dashed portion is capacity planned or under construction. Landmark dots mark individual mega-campuses (xAI Colossus, Anthropic—Amazon New Carlisle, OpenAI Stargate Abilene, Meta Prometheus, Microsoft Fairwater…).
Periodizations
Seven historical framings can be overlaid on the chart. Each is a named sequence of eras drawn from standard scholarly sources; the colour ramp for each typology is spread across Google's Turbo colormap (Mikhailov, 2019). Where eras within a framing overlap, the newest-starting one wins the strip and the tooltip lists all overlapping eras.
AI Eras
GLOBAÏA curation, cross-referenced with Nils Nilsson, The Quest for Artificial Intelligence (Cambridge University Press, 2009); Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach (4th ed., Pearson, 2020); and the Stanford HAI AI Index 2026.
Industrial Revolutions
Encyclopaedia Britannica entries on the Industrial Revolution, Second Industrial Revolution, Digital Revolution, and Fourth Industrial Revolution; Klaus Schwab, The Fourth Industrial Revolution (World Economic Forum, 2016).
Information & Communication
Manuel Castells, The Rise of the Network Society (Blackwell, 1996; 2nd ed. 2010); Nick Srnicek, Platform Capitalism (Polity, 2016); Royal Society, Science in the age of AI (2024).
Socio-economic Regimes
Daniel Bell, The Coming of Post-Industrial Society (Basic Books, 1973); UNESCO, Towards Knowledge Societies (2005); Manuel Castells, The Rise of the Network Society.
Energy Regimes
Vaclav Smil, Energy Transitions: Global and National Perspectives (Praeger, 2nd expanded ed., 2017). Era boundaries follow Smil's dating convention.
World History
Seven-band periodisation compiled from standard world-history references, with the 19th–20th century framing drawn from Eric Hobsbawm's tetralogy: The Age of Revolution 1789–1848 (1962), The Age of Capital 1848–1875 (1975), The Age of Empire 1875–1914 (1987), and The Age of Extremes: The Short Twentieth Century, 1914–1991 (Pantheon, 1994).
Existential Moods
Periodisation drawn from Émile P. Torres, Human Extinction: A History of the Science and Ethics of Annihilation (Routledge, 2023) — a historical account of how thinking about human extinction has evolved across four broad moods. Cross-referenced with Toby Ord, The Precipice (Hachette, 2020); Thomas Moynihan, X-Risk: How Humanity Discovered Its Own Extinction (Urbanomic, 2020); Nick Bostrom, “Existential Risks” (Journal of Evolution and Technology, 2002); and the Cambridge Centre for the Study of Existential Risk.
The future zone
Beyond 2026, the dashed boundary marks projected territory. Scenario fans for training compute show baseline, accelerated, and constrained trajectories, bounded by physical and economic limits discussed in Sevilla et al. (2024) and Epoch AI's capacity-planning analyses.
Future events (announced datacenter buildouts, publicly committed compute budgets, scheduled model releases, policy milestones, and known unknowns) are flagged visually so projection is never confused with observation.
Data sources & methodology
Training compute
Epoch AI · Data on Notable AI Models (CC BY 4.0). The frontier envelope and per-lab progressions are derived at load time from Epoch's notable-models dataset (521 systems); no synthetic points are introduced.
Moore's Law
Karl Rupp's microprocessor-trend-data, extending earlier compilations by Horowitz, Labonte, Shacham, Olukotun, Hammond and Batten.
METR Task Horizon
METR (Model Evaluation & Threat Research), Measuring AI Ability to Complete Long Tasks v1.1 benchmark (Kwa et al. 2025, arXiv:2503.14499; CC BY 4.0). p50 horizon length drawn directly from METR's public benchmark_results_1_1.yaml.
Benchmarks vs Human
Stanford HAI, AI Index 2026 Annual Report, Chapter 2 “Technical Performance”, Figure 2.1.1 (p. 76). Report CC BY-ND 4.0; benchmark values are factual data, and the figure is redrawn independently.
Context Window
Compiled from OpenAI, Anthropic, Google DeepMind and Meta release notes (2019–2026). Cross-referenced with taylorwilsdon/llm-context-limits (MIT).
Inference Cost
Epoch AI · LLM Inference Price Trends (CC BY 4.0), cross-referenced with Artificial Analysis and provider pricing pages.
Datacenter Power
Epoch AI · Frontier Data Centers (CC BY 4.0). Fleet-wide operational megawatts aggregated at monthly resolution from Epoch's public construction-timeline dataset; individual landmark campuses pinned at first operational date. Context from IEA (2024) Electricity 2024.
Events
Hand-curated from peer-reviewed literature (Nature, Science, NeurIPS, ICLR, ICML proceedings), arXiv preprints, official press releases, and specialist AI-history references. Every event links to its primary source.
Non-linear time axis
The X-axis uses an adaptive piecewise-linear mapping: ~1% of the screen for antiquity (1300 BCE–1500 CE), ~10% for the Industrial era (1500–1900), ~30% for the electro-mechanical era (1900–1950), ~50% for the computing and deep-learning era (1950–2050), and ~10% for the projected future (2050–2100). This allocates screen space roughly in proportion to the density of events.
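A mapping like this is just a handful of linear segments laid end to end. The sketch below renormalizes the approximate fractions from the text so they sum to exactly 1 (an assumption of this illustration, not the site's actual code):

```python
# (start_year, end_year, approx. share of screen width); BCE years are negative.
SEGMENTS = [
    (-1300, 1500, 0.01),   # antiquity, heavily compressed
    ( 1500, 1900, 0.10),   # industrial era
    ( 1900, 1950, 0.30),   # electro-mechanical era
    ( 1950, 2050, 0.50),   # computing & deep learning, stretched for detail
    ( 2050, 2100, 0.10),   # projected future
]
TOTAL = sum(share for _, _, share in SEGMENTS)

def year_to_x(year):
    """Map a year onto a horizontal position in [0, 1], piecewise linearly."""
    x = 0.0
    for start, end, share in SEGMENTS:
        share /= TOTAL                     # renormalize so the shares sum to 1
        if year <= start:
            return x
        if year < end:
            return x + share * (year - start) / (end - start)
        x += share
    return 1.0
```

Within each segment the mapping is linear, so relative spacing is preserved locally while the segments trade screen space globally.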
References
FOUNDATIONS & HISTORY
- Turing, A.M. (1950). Computing Machinery and Intelligence. Mind LIX(236): 433–460. DOI
- McCarthy, J., Minsky, M.L., Rochester, N. & Shannon, C.E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Archive
- Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 65(6): 386–408. DOI
- Minsky, M.L. & Papert, S.A. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press.
- Rumelhart, D.E., Hinton, G.E. & Williams, R.J. (1986). Learning representations by back-propagating errors. Nature 323: 533–536. DOI
- LeCun, Y. et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation 1(4): 541–551. DOI
- Hochreiter, S. & Schmidhuber, J. (1997). Long short-term memory. Neural Computation 9(8): 1735–1780. DOI
- Nilsson, N.J. (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press. DOI
- Russell, S.J. & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
- Goodfellow, I., Bengio, Y. & Courville, A. (2016). Deep Learning. MIT Press. deeplearningbook.org
- Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux.
DEEP LEARNING & SCALING
- Krizhevsky, A., Sutskever, I. & Hinton, G.E. (2012). ImageNet classification with deep convolutional neural networks. NeurIPS 25. DOI
- Goodfellow, I. et al. (2014). Generative adversarial networks. NeurIPS 27. arXiv:1406.2661
- LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature 521: 436–444. DOI
- Silver, D. et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature 529: 484–489. DOI
- Silver, D. et al. (2017). Mastering the game of Go without human knowledge. Nature 550: 354–359. DOI
- Vaswani, A. et al. (2017). Attention Is All You Need. NeurIPS 30. arXiv:1706.03762
- Devlin, J. et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL-HLT. arXiv:1810.04805
- Brown, T.B. et al. (2020). Language Models are Few-Shot Learners (GPT-3). NeurIPS 33. arXiv:2005.14165
- Kaplan, J. et al. (2020). Scaling Laws for Neural Language Models. arXiv:2001.08361
- Hoffmann, J. et al. (2022). Training Compute-Optimal Large Language Models (Chinchilla). arXiv:2203.15556
- Jumper, J. et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 596: 583–589. DOI
- Sevilla, J. et al. (2022). Compute trends across three eras of machine learning. IJCNN. arXiv:2202.05924
BENCHMARKS & EVALUATION
- Russakovsky, O. et al. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision 115: 211–252. DOI
- Wang, A. et al. (2019). SuperGLUE: A stickier benchmark for general-purpose language understanding systems. NeurIPS 32. arXiv:1905.00537
- Hendrycks, D. et al. (2021). Measuring Massive Multitask Language Understanding (MMLU). ICLR. arXiv:2009.03300
- Rein, D. et al. (2023). GPQA: A Graduate-Level Google-Proof Q&A Benchmark. arXiv:2311.12022
- Jimenez, C.E. et al. (2024). SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ICLR. arXiv:2310.06770
- Kwa, T., West, B., Becker, J. et al. (2025). Measuring AI Ability to Complete Long Tasks. METR. arXiv:2503.14499
- Stanford HAI (2026). AI Index 2026 Annual Report. Stanford University.
HARDWARE & INFRASTRUCTURE
- Moore, G.E. (1965). Cramming more components onto integrated circuits. Electronics Magazine 38(8): 114–117.
- Rupp, K. (2022). Microprocessor trend data (CC BY 4.0).
- Epoch AI. Data on Notable AI Models (CC BY 4.0).
- Epoch AI. LLM Inference Price Trends.
- Epoch AI. Frontier Data Centers.
- IEA (2024). Electricity 2024: Analysis and Forecast to 2026. International Energy Agency.
SAFETY, ETHICS & EXISTENTIAL RISK
- Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology 9.
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Amodei, D. et al. (2016). Concrete Problems in AI Safety. arXiv:1606.06565
- Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
- Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette.
- Moynihan, T. (2020). X-Risk: How Humanity Discovered Its Own Extinction. Urbanomic.
- Torres, É.P. (2023). Human Extinction: A History of the Science and Ethics of Annihilation. Routledge. Publisher
- Bender, E.M. et al. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? FAccT 2021: 610–623. DOI
- Hendrycks, D. et al. (2023). An Overview of Catastrophic AI Risks. arXiv:2306.12001
PERIODIZATIONS
- Bell, D. (1973). The Coming of Post-Industrial Society. Basic Books.
- Castells, M. (2010). The Rise of the Network Society (2nd ed.). Blackwell. DOI
- Hobsbawm, E. (1962–1994). The Age of Revolution, The Age of Capital, The Age of Empire, The Age of Extremes. Pantheon.
- Schwab, K. (2016). The Fourth Industrial Revolution. World Economic Forum.
- Smil, V. (2017). Energy Transitions: Global and National Perspectives (2nd exp. ed.). Praeger.
- Srnicek, N. (2016). Platform Capitalism. Polity.
- UNESCO (2005). Towards Knowledge Societies.
- Royal Society (2024). Science in the Age of AI.
VISUALIZATION & COLOR
- Mikhailov, A. (2019). Turbo: An Improved Rainbow Colormap for Visualization. Google Research.
Every event in the timeline includes its own primary citation. The references listed here cover the framing literature, the trend-curve datasets, and the scholarly sources behind the seven periodizations.
Educational purpose
This interactive visualization is a non-commercial educational project by GLOBAÏA, a non-profit organization dedicated to making scientific knowledge accessible to a general audience. All data is drawn from peer-reviewed literature, arXiv preprints, and openly licensed datasets (Epoch AI, METR, Stanford HAI, Karl Rupp). The visualization, its stories, and accompanying materials are intended solely for educational and science-communication purposes. They do not constitute professional advice and should not be cited as primary research sources.
Credits
Created by GLOBAÏA. Data from Epoch AI (CC BY 4.0), METR (CC BY 4.0), Stanford HAI AI Index (CC BY-ND 4.0 for the report figure), Karl Rupp's microprocessor-trend dataset, and OpenAI / Anthropic / Google DeepMind / Meta release materials.
Suggested citation
GLOBAÏA (2026). Timeline of Artificial Intelligence [interactive visualization]. globaia.org/ai/. Accessed .