Apr 08, 2026

When Was Artificial Intelligence Invented? (A Clear Timeline From Idea to Reality)

If you’ve ever wondered when artificial intelligence was first invented, you’re asking a question that stretches across millennia of human imagination and decades of scientific work. The short answer: AI as a formal scientific discipline was born in 1956 at the Dartmouth Summer Research Project, where researchers including John McCarthy coined the term “artificial intelligence” and set the agenda for the field we know today.

But that single date only tells part of the story. The dream of creating thinking machines goes back to ancient myths and mechanical curiosities, while the technical foundations emerged in the mid-20th century through breakthroughs in computing machinery and early neural networks. Understanding this timeline matters now more than ever, as generative AI tools like ChatGPT, GPT-4, and Google’s Gemini reshape how we think about machine intelligence.

Key Takeaways

  - AI became a named scientific field in 1956 at the Dartmouth Summer Research Project, after John McCarthy and colleagues coined the term in a 1955 proposal.
  - Key precursors include McCulloch and Pitts’ 1943 neural network model and Alan Turing’s 1950 paper “Computing Machinery and Intelligence.”
  - The field has cycled through booms and “AI winters” as expectations repeatedly outran results.
  - Today’s generative AI builds on the 2012 deep learning boom and the 2017 transformer architecture rather than a sudden breakthrough.

What Do We Actually Mean by “When Was AI Invented”?

When someone asks “when was artificial intelligence invented,” they might be asking about different things: the first time humans imagined artificial beings, the first working computer systems that could learn or reason, or the moment AI became an organized scientific field with its own name and research agenda.

To clarify, historians generally recognize three levels of AI’s origin story:

  - Imagination: myths, automata, and thought experiments about artificial beings, stretching back to antiquity.
  - Technical foundations: the mathematics, computing machinery, and early neural network models of the first half of the 20th century.
  - The formal field: the naming and organization of AI research in 1955–1956, anchored by the Dartmouth workshop.

The key dates that anchor this timeline include:

Year   Milestone
1943   McCulloch and Pitts publish their model of artificial neural networks as logical computing units
1950   Alan Turing publishes “Computing Machinery and Intelligence,” introducing the Imitation Game
1955   McCarthy, Minsky, Rochester, and Shannon coin “artificial intelligence” in their Dartmouth proposal
1956   The Dartmouth workshop convenes, marking AI’s official birth as a field

Most historians answer the question with: “AI was ‘invented’ as a field in the mid-1950s, especially at Dartmouth in 1956.”

From a perspective focused on cutting through noise, understanding this distinction helps you interpret today’s AI boom more accurately. Modern generative AI isn’t a sudden miracle-it’s the latest chapter in a 70-year scientific story with cycles of progress, setback, and revival.

Early Ideas: Long Before Computers Looked Like Brains

The dream of artificial minds predates electronics by millennia. Long before anyone built a computer program, humans imagined mechanical servants, thinking automata, and devices that could reason. These ancient ideas shaped cultural expectations that later AI researchers inherited-and they’re worth knowing if you want to understand AI’s origins fully.

Ancient Myths and Mechanical Wonders

Greek mythology gave us some of the earliest examples of artificial beings. Talos, described in the Argonautica around the 3rd century BC, was a bronze automaton built by the god Hephaestus to guard the island of Crete. While purely mythological, Talos represented the human desire to create artificial life and intelligent systems that could act autonomously.

Real mechanical ingenuity followed. Around 250 BC, the Greek engineer Ctesibius built water clocks that used hydraulic feedback to regulate time-early control systems that prefigured self-regulating machines. In the Islamic Golden Age, the engineer al-Jazari (1136–1206) created programmable humanoid automata, including a mechanical figure that poured drinks using camshaft mechanisms to execute sequential operations.

These weren’t intelligent in any modern sense. They couldn’t learn, adapt, or understand human language. But they demonstrated that human beings had been trying to externalize and automate aspects of cognition for a very long time.

Mechanizing Reasoning

Beyond physical automata, some thinkers tried to mechanize reasoning itself. Ramon Llull’s 13th-century Ars Magna combined concepts with rotating paper discs, Gottfried Leibniz imagined a universal calculus that could settle arguments by calculation, Thomas Hobbes declared that reasoning is “nothing but reckoning,” and George Boole’s 1854 algebra of logic gave reasoning a mathematical form.

Even literature captured this impulse. Jonathan Swift’s 1726 Gulliver’s Travels satirized the idea with the Grand Academy of Lagado’s “engine,” a machine that produced books via random letter arrangements. It was meant as mockery, but it presciently foreshadowed brute-force text generation-something that would become real centuries later.

These are not AI in the modern sense. There was no machine learning, no problem solving through data, no adaptation. But they show the deep roots of humanity’s effort to understand human intelligence by trying to replicate it mechanically.

From Mechanical Brains to Digital Computers (1900–1949)

Early Machines That Could “Think”

Around 1912, the Spanish engineer Leonardo Torres y Quevedo built “El Ajedrecista” (The Chess Player), an electromechanical machine that could play chess endgames-specifically, king and rook versus king-and demonstrated it publicly in 1914. Using electromagnets and simple logic circuits, it was one of the first automated game-playing machines, able to beat a human opponent in a limited domain. This wasn’t a general computer, but it proved machines could handle tasks requiring a form of intelligent behavior.

Theoretical Foundations: Turing and Computability

The theoretical foundations came in 1936, when Alan Turing published “On Computable Numbers.” This paper introduced the Turing machine, an abstract model of computation, and showed that a single universal machine could carry out any algorithmic process given the right program. Every digital computer since then rests on Turing’s insight.
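
To make the idea concrete, here is a minimal sketch of a Turing machine as a finite rule table acting on a tape. The Python code and the example machine (which simply flips every bit and halts) are illustrative inventions, not drawn from Turing's paper.

```python
# A minimal, illustrative Turing machine simulator: a finite rule table
# acting on a tape, one cell at a time.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape; unwritten cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Rule table: (state, read symbol) -> (write symbol, move, next state).
# This toy machine flips every bit, then halts at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # -> "01001_"
```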

World War II Accelerates Computing

World War II accelerated computing dramatically. Codebreaking at Bletchley Park produced the electromechanical Bombe and, by 1943, the electronic Colossus, proving that machines could tackle complex, non-numeric problems at scale.

The First Model of Artificial Neurons

A crucial breakthrough came in 1943, when Warren McCulloch and Walter Pitts published a paper modeling neurons as threshold logic gates. Their work showed that networks of simple artificial neurons could, in principle, compute any logical function-linking biology, logic, and computation for the first time.
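
The idea is easy to illustrate. Below is a minimal sketch (illustrative Python, not the 1943 formalism) of a McCulloch-Pitts unit: binary inputs, fixed weights, and a threshold, composed into ordinary logic gates.

```python
# A McCulloch-Pitts-style unit: binary inputs, fixed (not learned) weights,
# and a firing threshold. Thresholds and weights here are illustrative.

def mp_unit(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Basic logic gates as single threshold units:
AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_unit([a],    [-1],   threshold=0)

# Composing units yields more complex logical functions, e.g. XOR:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))   # prints the XOR truth table
```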

By the late 1940s, researchers started using terms like “electronic brain” and “thinking machine.” Norbert Wiener’s 1948 book Cybernetics formalized ideas about feedback and self-regulating systems, and Christopher Strachey’s checkers program, written in 1951, was running on the Ferranti Mark 1 by 1952-one of the first examples of AI software on real hardware.

The stage was set for AI to emerge as its own discipline.

Image: researchers in a 1940s laboratory working with early vacuum-tube computing equipment.

The Crucial Decade: 1950–1956 and the Birth of Artificial Intelligence

This is the core section that answers the question: when was artificial intelligence invented? Most scholars anchor the answer between Alan Turing’s 1950 paper and the 1956 Dartmouth workshop.

Turing’s 1950 Paper: “Can Machines Think?”

In 1950, Alan Turing published “Computing Machinery and Intelligence” in the journal Mind. This paper is foundational to the history of artificial intelligence for several reasons:

  - It replaced the vague question “Can machines think?” with a testable one: can a machine’s conversation be distinguished from a human’s in the Imitation Game, now known as the Turing test?
  - It anticipated and answered the main objections to machine intelligence, from theological arguments to the claim that machines can only do what they are programmed to do.
  - It suggested that a “child machine” could be trained through experience rather than fully programmed by hand, anticipating machine learning.

Turing’s paper gave AI research its intellectual charter. It framed machine intelligence as an empirical question that could be tested, not just a philosophical abstraction.

Early Proto-AI Systems (1951–1955)

Between Turing’s paper and the formal founding of AI, several early neural networks and learning programs appeared:

Year        System                            Significance
1951        SNARC (Minsky & Edmonds)          First neural network machine; simulated 40 neurons learning a maze using reinforcement
1952–1953   Samuel’s Checkers Program (IBM)   Self-improving program that learned to play checkers; Samuel coined “machine learning” in 1959
1955–1956   Logic Theorist (Newell & Simon)   Proved mathematical theorems using heuristic search; often called the first AI program

Arthur Samuel’s checkers program eventually beat its creator and, in 1962, defeated a strong Connecticut checkers player, demonstrating that a program could improve through experience rather than explicit programming.

The 1955 Proposal and the Term “Artificial Intelligence”

In 1955, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon submitted a proposal for a summer research project at Dartmouth College. This document is where the term artificial intelligence first appeared in print.

The proposal was ambitious. It stated:

“We propose that a 2-month, 10-man study of artificial intelligence be carried out. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

They aimed to figure out how to make machines:

  - use language
  - form abstractions and concepts
  - solve kinds of problems now reserved for humans
  - improve themselves

The 1956 Dartmouth Workshop: AI’s Official Birth

The Dartmouth Summer Research Project took place in the summer of 1956, funded by the Rockefeller Foundation. About 10 researchers attended, including McCarthy, Minsky, Newell, Simon, and Shannon.

The workshop didn’t produce groundbreaking theorems or working systems. What it did was more important: it brought together the leading minds, established shared terminology, and set the research agenda for decades. AI research split into several tracks that persist today:

  - symbolic reasoning and automated problem solving
  - natural language processing
  - neural networks and machine learning
  - game playing, planning, and robotics

If you have to pick a single year, 1956 is generally considered the year artificial intelligence was “invented” as a field of study. The Dartmouth workshop marks the moment AI became an organized, named discipline rather than a scattered set of ideas pursued by individual computer scientists.

John McCarthy went on to create Lisp in 1958-a programming language that dominated AI research for decades and is still used today. The AI community had been born.

Early AI Milestones After “Invention” (1956–1970s)

Once AI was named, research moved quickly. Optimism ran high. Many researchers believed that machines capable of human-like tasks-or even full human intelligence-might be only 20 years away.

Symbolic AI and Expert Systems Take Shape

The late 1950s and 1960s saw rapid progress in symbolic AI:

  - The General Problem Solver (Newell and Simon, 1957) tackled puzzles using means-ends analysis.
  - ELIZA (Joseph Weizenbaum, 1966) simulated a psychotherapist convincingly enough to fool some users.
  - DENDRAL (mid-1960s) inferred chemical structures from mass-spectrometry data, prefiguring expert systems.
  - Shakey the robot (late 1960s) combined perception, planning, and movement in one system.

Neural Networks Rise and Fall

Early neural networks showed promise. Frank Rosenblatt’s perceptron (1957) could learn to classify patterns that were linearly separable. By 1960, perceptron hardware was adjusting 400 weights automatically.

But in 1969, Marvin Minsky and Seymour Papert published Perceptrons, a book that proved single-layer perceptrons couldn’t solve certain simple problems (like XOR). This critique helped halt neural network funding for over a decade; multilayer networks that could overcome the limitation weren’t practical to train until backpropagation was popularized in the 1980s.
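
The contrast is easy to demonstrate. The sketch below (illustrative Python, not Rosenblatt's hardware) applies the perceptron learning rule: it finds weights for OR, which is linearly separable, but no amount of training lets a single threshold unit reproduce XOR.

```python
# Perceptron learning rule on two toy problems: OR (learnable) vs XOR (not).

def train_perceptron(samples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1          # perceptron weight update
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
or_data  = [(x, int(x[0] or x[1])) for x in inputs]
xor_data = [(x, x[0] ^ x[1]) for x in inputs]

learned_or  = train_perceptron(or_data)
learned_xor = train_perceptron(xor_data)

print([learned_or(*x) for x in inputs])   # [0, 1, 1, 1] -- OR is learned
print([learned_xor(*x) for x in inputs])  # never matches [0, 1, 1, 0]
```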

The First AI Winter

By the early 1970s, the gap between AI promises and AI reality became undeniable. Key problems included:

  - Combinatorial explosion: methods that worked on toy problems became intractable at realistic scale.
  - Severely limited computing power and memory.
  - The failure of machine translation, criticized in the 1966 ALPAC report.
  - The 1973 Lighthill Report in the UK, which concluded that AI had not delivered on its promises.

DARPA funding dropped from $3 million to under $1 million annually. This was the first AI winter-a period when interest, funding, and optimism for AI research collapsed. It wouldn’t be the last.

From Winters to Deep Learning and Generative Models (1980s–2020s)

The 1980s: Expert Systems and Neural Revival

The 1980s brought a new wave of AI enthusiasm, driven by expert systems:

  - XCON (R1), deployed at Digital Equipment Corporation in 1980 to configure computer orders, reportedly saved the company tens of millions of dollars a year.
  - Japan launched its Fifth Generation Computer Systems project in 1982, prompting rival programs in the US and Europe.
  - Companies invested heavily in knowledge engineering teams and specialized Lisp machines.

Neural networks also revived. In 1986, Rumelhart, Hinton, and Williams published their work on backpropagation, enabling the training of multilayer networks, an approach that would eventually lead to modern deep learning.
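
A toy example shows why backpropagation mattered. The sketch below (illustrative Python/NumPy with invented hyperparameters) trains a tiny network with one hidden layer by gradient descent and learns XOR, the very function a single perceptron cannot represent.

```python
# Backpropagation on a tiny one-hidden-layer network that learns XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))        # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))        # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically converges toward [0, 1, 1, 0]
```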

But the expert systems bubble burst. Knowledge acquisition proved painfully slow (10–100 rules per day, manually entered). The Lisp machine market crashed in 1987. A second AI winter followed (1987–1993), with funding dropping 90% in Japan and the US.

The 1990s: Behind-the-Scenes Progress

AI went quiet publicly but kept advancing in specific domains:

  - IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997.
  - Speech recognition and handwriting recognition became commercially usable.
  - Probabilistic methods and statistical machine learning displaced many hand-built rule systems.
  - AI techniques quietly powered logistics, fraud detection, and web search, often without being labeled as AI.

The Rise of Deep Learning

The deep learning revolution began with three ingredients: better algorithms, vastly more computing power, and big data.

Year   Milestone
2006   Hinton and colleagues re-popularize deep neural networks with pre-training techniques
2009   Researchers demonstrate that GPUs can accelerate neural network training 100x
2009   ImageNet dataset launched (14 million labeled images), enabling computer vision breakthroughs
2012   AlexNet wins ImageNet competition, halving error rates and triggering massive industry investment

AlexNet, with its 8 layers and 60 million parameters, proved that deep learning models could learn complex patterns from raw data. This sparked an AI boom that continues today, with over $100 billion in industry investment flowing into AI technologies.

The Transformer Era and Large Language Models

The current wave of generative AI traces directly to a 2017 paper: “Attention Is All You Need” by Vaswani et al. at Google. The transformer architecture introduced self-attention mechanisms that could process sequences in parallel and capture long-range dependencies in human language.
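
The core operation is compact enough to sketch. The NumPy code below is an illustrative, single-head version of scaled dot-product self-attention; real transformers add multiple heads, masking, positional encodings, and feed-forward layers, all trained on huge corpora.

```python
# Scaled dot-product self-attention, single head, toy dimensions.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings. Returns one attention head."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # each token mixes information from all others

rng = np.random.default_rng(0)
d_model, seq_len = 8, 5
X = rng.normal(size=(seq_len, d_model))              # 5 toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8)
```

Because every token attends to every other token in a single matrix operation, the whole sequence can be processed in parallel, which is what made training on web-scale text practical.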

What followed:

  - 2018: Google’s BERT showed that pre-trained transformers could be fine-tuned for many language tasks.
  - 2018–2020: OpenAI’s GPT series scaled the same recipe, with GPT-3 (2020) reaching 175 billion parameters.
  - 2022: ChatGPT brought large language models to a mass audience, reaching roughly 100 million users within months.
  - 2023 onward: GPT-4, Google’s Gemini, and other multimodal models extended the approach to images, audio, and code.

These large language models can generate fluent text, answer questions, write code, and more. Virtual assistants powered by AI now handle everyday life tasks for hundreds of millions of users.

Image: a modern server room packed with the computing hardware that powers today’s AI research.

Challenges Remain

Despite the progress, current AI systems face real limitations:

  - They confidently generate false statements, often called hallucinations.
  - They struggle with multi-step logical reasoning and planning.
  - They require enormous amounts of data, computation, and energy to train.
  - They lack grounded understanding of the physical and social world.

Expert consensus (from figures like Yann LeCun and Yoshua Bengio) views this as a surge in narrow AI, not artificial general intelligence. The original 1956 goal of machines that truly understand human intelligence remains unfulfilled.

Why the “Invention” of AI Still Matters Today

Knowing that AI’s roots go back to the 1950s-and that the ideas themselves are even older-helps you interpret modern hype more calmly. This isn’t a technology that appeared overnight. It’s the product of seven decades of research, multiple boom-and-bust cycles, and countless incremental advances.

The Original Vision vs. Today’s Reality

The 1955 Dartmouth proposal aimed for machines that could:

  - use human language
  - form abstractions and concepts
  - solve kinds of problems now reserved for humans
  - improve themselves

Today’s large language models partially fulfill this vision. They can generate fluent natural language and handle many knowledge representation tasks, and related deep learning systems such as AlphaFold have cracked protein structure prediction. But they also fail at many forms of logical reasoning (scoring below 50% on some benchmarks where average human performance is around 85%), struggle with consistent factual accuracy, and depend on massive amounts of human feedback and curation during training.

Boom and Winter Cycles

The history of artificial intelligence includes repeated booms and winters:

Period         Phase                 Cause
1956–1973      Early boom            Optimism about symbolic AI
1974–1980      First winter          Lighthill Report, funding cuts
1980–1987      Expert systems boom   Commercial applications
1987–1993      Second winter         Expert system limits, Lisp crash
1993–2011      Quiet progress        ML improvements, behind-the-scenes AI
2012–present   Deep learning boom    GPUs, big data, transformers

This pattern suggests that today’s AI surge might also encounter limits, regulatory pressure, or shifts in focus. Knowing the cycles helps you take a longer view.

Signal Over Noise

Understanding AI’s long timeline reinforces the value of careful, curated information over daily hype. Weekly tracking of major developments-like what KeepSanity AI provides-aligns better with how the field actually evolves: through gradual progress punctuated by occasional breakthroughs, not daily revolutions.

The “invention” of AI in 1956 didn’t create a finished technology. It kicked off an open-ended experiment in building and governing machine intelligence-an experiment that’s still running, with no clear endpoint in sight.

Image: a reader following AI developments on a tablet.

FAQ

So what year should I give if someone asks “When was AI invented?”

The most widely accepted answer is 1956, the year of the Dartmouth Summer Research Project on Artificial Intelligence. This is when AI was formally named and organized as a scientific discipline with its own research agenda, funding, and community.

For extra nuance, you could say: “The modern field of AI began in the mid-1950s, especially with John McCarthy’s 1955 proposal coining the term artificial intelligence and the 1956 Dartmouth workshop.”

Earlier work-like Alan Turing’s 1950 paper on computing machinery and intelligence, or McCulloch and Pitts’ 1943 neural network model-is a crucial precursor, but it wasn’t yet called “artificial intelligence.”

Who is considered the “father” of artificial intelligence?

John McCarthy is most often called the “father of AI” because he coined the term artificial intelligence in 1955 and organized the 1956 Dartmouth conference that established AI as a field.

However, AI’s parentage is shared. Other foundational figures include:

  - Alan Turing, for the theoretical foundations of computation and the Turing test
  - Marvin Minsky, co-organizer of Dartmouth and co-founder of the MIT AI Lab
  - Allen Newell and Herbert Simon, creators of the Logic Theorist and pioneers of symbolic AI
  - Claude Shannon, whose information theory underpinned the field
  - Warren McCulloch and Walter Pitts, for the first mathematical model of neural networks

No single person invented everything. AI emerged from collaboration among many pioneering computer scientists.

What was the first true AI program?

Historians debate this, but the main candidates are:

  - SNARC (1951), Minsky and Edmonds’ neural network learning machine
  - Arthur Samuel’s checkers program (1952), which improved through self-play
  - Logic Theorist (1955–1956), Newell and Simon’s theorem-proving program

Logic Theorist often wins the title for “first full-fledged AI program” because it demonstrated symbolic reasoning and heuristic problem solving-capabilities central to the AI research agenda. But Samuel’s work pioneered machine learning, and SNARC pioneered neural approaches. All three represent important “firsts” in different aspects of AI.

How is “old AI” different from today’s generative AI like ChatGPT?

Classic “symbolic AI” (sometimes called GOFAI, or Good Old-Fashioned AI) relied on:

  - hand-written rules and formal logic
  - explicit knowledge bases built by human experts
  - symbol manipulation and search

Modern generative AI relies on:

  - deep neural networks with billions of parameters
  - statistical patterns learned from massive datasets
  - transformer architectures and huge amounts of GPU computing power

Despite these differences, both approaches attempt to realize the original 1950s goals: machines that can use human language, learn from experience, and solve complex tasks. They just take different paths to get there.

Why do people talk about “AI winters” in the history of artificial intelligence?

An AI winter is a period when interest, funding, and optimism for AI research sharply decline because the technology fails to meet inflated expectations.

There have been two main winters:

  1. Mid-1970s to early 1980s: Triggered by the Lighthill Report (1973, UK), which criticized AI for overpromising. Government funding was cut dramatically.

  2. Late 1980s to early 1990s: The expert systems bubble burst when these systems hit knowledge acquisition bottlenecks and couldn’t scale. The Lisp machine market collapsed.

Knowing about AI winters puts today’s hype cycles into perspective. The field has recovered from setbacks before, and it may face them again. This history underscores why careful, weekly curation of real breakthroughs, rather than daily noise, helps you spend your attention wisely and stay informed without burning out on hype that may not pan out.