History of Artificial Intelligence
Introduction
The idea that machines could perform calculations and even "think" like humans long predates the electronic computer. This page presents a detailed timeline, with dates, inventors, and developers, from 19th-century innovations through updates as of January 2026.
From Charles Babbage to Ada Lovelace (1822–1843)
The idea of a programmable "universal computing machine" was born with the English mathematician and engineer Charles Babbage (1791–1871).
Difference Engine – Babbage designs and presents a mechanical machine for computing mathematical tables (e.g. logarithms) without human error. The funding and manufacturing technology of the time did not allow a full version to be completed; a complete Difference Engine No. 2 was finally built at the Science Museum, London, in 1991.
Analytical Engine – Babbage designs a general-purpose machine: memory (the "Store"), a processing unit (the "Mill"), and input/output via punched cards (inspired by the Jacquard loom). The design included conditionals and loops – in effect, the programming concepts of a modern computer. The machine was never completed.
Ada Lovelace (1815–1852) – Translated and expanded an article on the Analytical Engine. In her notes she describes an algorithm (the computation of Bernoulli numbers) and is therefore considered the "first programmer". Lovelace proposed that the machine could work not only with numbers but with symbols – an idea that anticipates AI.
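Lovelace's famous Note G laid out a step-by-step computation of Bernoulli numbers. A minimal modern sketch of the same computation, using the standard recurrence rather than her exact operation table:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return Bernoulli numbers B_0..B_n (convention B_1 = -1/2),
    via the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(1)]                       # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))              # solve the recurrence for B_m
    return B

print(bernoulli(8))   # exact rationals: 1, -1/2, 1/6, 0, -1/30, ...
```

Exact rational arithmetic (`Fraction`) mirrors the fact that Bernoulli numbers are rationals; the odd ones beyond B_1 come out as zero.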
20th century: logic, war, and computers (until 1950)
Alan Turing (1912–1954) publishes (1936) the idea of the "Turing machine" – a theoretical model that defines what can be computed at all. It is the foundation of computability theory.
Konrad Zuse – The German engineer completes the Z3 (1941), a programmable electromechanical computer, considered one of the first automatic computers in the world.
Warren McCulloch and Walter Pitts – Their 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" introduces logical neural networks and is considered the foundation of the idea of "the brain as a computing system".
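A McCulloch-Pitts unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold; with suitable weights it realizes Boolean logic. An illustrative sketch (weights and thresholds chosen here for the example, not taken from the 1943 paper):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts threshold unit: output 1 iff the weighted
    sum of binary inputs meets the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# AND: both inputs must be active to reach the threshold of 2.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
# OR: any single active input reaches the threshold of 1.
OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)
# NOT: an inhibitory (negative) weight with threshold 0.
NOT = lambda a: mp_neuron([a], [-1], 0)
```

Networks of such units can compute any Boolean function, which is what made the "brain as a computing system" analogy precise.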
ENIAC – The first general-purpose electronic computer, at the University of Pennsylvania (developers: J. Presper Eckert, John Mauchly, and others). First operated in 1945 and unveiled to the public in 1946.
Turing, the Turing Test, and the birth of the term AI (1950–1956)
Alan Turing publishes the paper "Computing Machinery and Intelligence" in the journal Mind (1950) and asks: "Can machines think?" He proposes the Turing Test: if, in a blind text conversation, a computer can convince a human judge that it is human, it is considered to "think".
Dartmouth Conference (1956) – John McCarthy, Marvin Minsky, Claude Shannon, Nathaniel Rochester, and others meet. The term "Artificial Intelligence" is officially coined. Expectations ran high: Herbert Simon would predict in 1965 that "machines will be capable, within twenty years, of doing any work a man can do".
Years of dream and disappointment (1960–1993)
ELIZA (1966) – Joseph Weizenbaum at MIT creates the first famous chatbot. The program simulates a conversation with a Rogerian psychotherapist by reflecting the user's sentences back and asking questions. Many users felt an emotional connection to it despite its simplicity.
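ELIZA's core trick – swapping pronouns and turning the user's own words back into a question – can be sketched in a few lines (a toy illustration, not Weizenbaum's original script):

```python
import re

# Pronoun swaps applied to the user's words before reflecting them back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

def reflect(text):
    """Lowercase, strip trailing punctuation, and swap pronouns."""
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(sentence):
    # Rogerian move: turn a statement about feelings into a question.
    m = re.match(r"i feel (.*)", sentence, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    return f"Can you tell me more about why you say: {reflect(sentence)}?"

print(respond("I feel tired of my job"))
# -> Why do you feel tired of your job?
```

The "emotional connection" Weizenbaum observed came entirely from this kind of shallow pattern matching – there is no understanding anywhere in the program.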
"First AI winter" β Funding cuts (including DARPA), Lighthill reports, etc. Promises were not fulfilled, the field entered a crisis.
Expert systems β Programs built on rules written by human experts (e.g. MYCIN for diagnosis). Commercial success in narrow domains.
"Second AI winter" β Limitations of expert systems and hardware became clear. Again, cuts in investment and research.
From chess to general knowledge (1997–2011)
IBM Deep Blue (1997) – Beats world chess champion Garry Kasparov. The computer (team: Feng-hsiung Hsu and others) evaluates up to about 200 million positions per second using brute-force search. A symbolic moment, though the method is not "learning" in the modern sense.
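The search idea behind Deep Blue – look ahead, score leaf positions, and pick the move that is best against best play – is the classic minimax procedure. A tiny sketch on a hand-made game tree (illustrative only; Deep Blue used far deeper alpha-beta search in custom hardware):

```python
def minimax(node, maximizing):
    """Score a game tree: leaves are numbers (static evaluations),
    inner nodes are lists of child subtrees. Players alternate max/min."""
    if isinstance(node, (int, float)):       # leaf: return its evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 toy tree: the maximizer picks a branch, the minimizer replies.
tree = [[3, 12], [2, 4], [14, 1]]
best = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
print(best, minimax(tree, True))   # branch 0 guarantees the best worst case
```

Branch 0 wins even though branch 2 contains the single largest leaf (14): minimax assumes the opponent will steer toward the 1.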
IBM Watson (2011) – Wins the "Jeopardy!" TV quiz show against human champions. Uses search, natural language processing, and knowledge bases. Team led by David Ferrucci.
Siri (2011) – Apple releases the voice assistant with the iPhone 4S. AI enters the pockets of millions.
The deep learning breakthrough (2012–2017)
AlexNet (2012) – Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton win the ImageNet image-recognition competition by a large margin. A turning point for deep learning.
AlphaGo (2016) – DeepMind (Google) beats Lee Sedol at Go using neural networks and reinforcement learning. The game's enormous branching factor had made it seem out of computers' reach until then.
"Attention Is All You Need" (2017) – Google researchers (Ashish Vaswani and others) publish the Transformer architecture, the basis for GPT, BERT, and most large language models.
Large language models and the lead-up to the revolution (2020–2021)
GPT-3 (2020) – OpenAI releases a model with about 175 billion parameters, with impressive text-completion, question-answering, and writing capabilities. Initially accessible via API.
Innovations and developments from 2022 to January 2026
Perplexity – Founded in August 2022 (Aravind Srinivas, Denis Yarats, and others) as a conversational search engine with source-based answers. By December 2022, about 2 million monthly users. Focus on "search + AI" with citations.
ChatGPT – OpenAI releases the chatbot to the public on 30 November 2022. Based on GPT-3.5 and trained with RLHF. One million users in five days; about 100 million monthly users by January 2023 – the fastest-growing consumer app in history to that point.
GPT-4 – OpenAI releases GPT-4 on 14 March 2023. Multimodal (text and image), a large context window, and advanced analysis and writing capabilities. Integrated into ChatGPT Plus.
Claude – Anthropic (founded by former OpenAI researchers) releases Claude in March 2023 – a chatbot emphasizing safety, long context, and document analysis.
Grok – Elon Musk's xAI announces Grok in November 2023, a "rebellious"-style chatbot with a sense of humour. The Grok-1 model (about 314B parameters, Mixture-of-Experts) finished training in October 2023.
Grok – launch on X Premium+ – Grok becomes part of the X (Twitter) Premium+ subscription (about $16/month). In March 2024 the Grok-1 weights were released as open source (Apache 2.0).
Gemini – Google presents Gemini (December 2023) as its "largest and most capable" model. Gemini 1.0 comes in three sizes: Ultra, Pro, and Nano. Integration with Search and other Google products.
Claude 3 – Anthropic launches Haiku, Sonnet, and Opus (March 2024). Multimodal, large context, performance competitive with GPT-4.
DeepSeek – The Chinese company (based in Hangzhou) gains attention with DeepSeek-V3 (December 2024) and its open models. Focus on cost-effectiveness and code/math capabilities.
Sora – OpenAI releases Sora, its text-to-video generator, to the public (December 2024).
DeepSeek R1 (January 2025) – A step-by-step reasoning model released under the permissive MIT licence. Focus on math, logic, and code. Its low reported training cost triggers sharp moves in chip and AI-related stocks.
Grok 3 (February 2025) – xAI launches Grok 3, described as "an order of magnitude more capable than Grok 2". According to xAI, trained on about 200,000 H100 GPUs; extended thinking (seconds to minutes). A lighter Grok 3 mini. Available to X Premium+ subscribers; later via SuperGrok.
Gemini 2.5 (March 2025) – Google launches Gemini 2.5 Pro (experimental) as its strongest model at the time, leading on LMArena. Gemini 2.5 Flash – hybrid reasoning, context of up to 1M tokens, and text/image/audio/video input.
GPT-5 – OpenAI launches GPT-5 with built-in "thinking": a unified system that routes between an efficient model and a deeper reasoning model (GPT-5 thinking). Available to users, with upgraded access for Plus and Pro.
Claude Opus 4.5 – Anthropic presents its most advanced model: a 200K-token context, hybrid thinking, and leading results in code, agents, and tool use. More affordable pricing ($5/M input tokens, $25/M output tokens). Available on Claude.ai, the API, Amazon Bedrock, Google Vertex, and Azure.
GPT-5.2 – An advanced model for "professional knowledge work" and long-running agents. High benchmark scores (including 70.9% on GDPval and 100% on the AIME 2025 math exam).
GPT-5.2-Codex – A version aimed at code and security: long context, large code edits, Windows support, and improved cybersecurity capabilities.
Gemini 3 – Google releases Gemini 3 Flash as the default in the Gemini app – doctoral-level reasoning and improved multimodality. Gemini 3 Deep Think for Google AI Ultra subscribers – iterative thinking on math, science, and logic. Updates to SynthID, audio, and real-time speech translation.
Perplexity – latest models – Perplexity gives Pro/Max subscribers access to GPT-5.2 and Seedream 4.5. On 16 January 2026: an updated iPad app, ETF details in Finance, and quizzes and flashcards on iPhone.
ChatGPT – Health space – A dedicated space for conversations about medicine and health.
ChatGPT – improved memory – An update that improves the ability to find and manage details from past conversations.
ChatGPT – personality system – An update for a more conversational, personalized interaction.
DeepSeek V4 (planned) – DeepSeek announces that V4 is expected around mid-February 2026 (near the Lunar New Year), with an "Engram" architecture for memory and a focus on code and contexts of over 1M tokens. According to internal reports, performance is competitive with Claude and GPT on code and long-context tasks.
Summary
From Charles Babbage's Analytical Engine (1837) and Ada Lovelace's notes (1843), through the Turing Test (1950) and the Dartmouth Conference (1956), to ChatGPT, GPT-5, Claude Opus 4.5, Gemini 3, Perplexity, Grok, and DeepSeek – the history of artificial intelligence is a story of ideas, disappointments, and breakthroughs. Since 2022 the field has entered a phase of unprecedented growth and public exposure.