The Brief History of AI

Machine Learning & Reinforcement Learning


I. 1930s-1940s: Early Foundations

1933 - Thomas Ross's "Thinking Machine": At the University of Washington, Ross developed one of the earliest machines claimed to be able to learn and remember. (Ross, 1933)

1935 - Steven Smith's "Robot Rats": At the University of Washington, Smith created mechanical devices designed to navigate mazes, exploring how simple machines could exhibit learning-like behavior on spatial problems. (Smith, 1935)

1943 - McCulloch and Pitts Neuron Model: Warren McCulloch and Walter Pitts developed a mathematical model for neurons that laid the foundation for future neural networks. (McCulloch & Pitts, 1943)
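
The model reduces a neuron to threshold logic: a unit fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch (the weights and threshold below are arbitrary choices that realize an AND gate, not values from the paper):

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) iff the weighted sum of
    binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# An AND gate as a threshold unit: fires only when both inputs are active.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mcp_neuron(x, weights=(1, 1), threshold=2))
```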

1948-1949 - W. Grey Walter's "Machina Speculatrix": Walter built autonomous robots, including the famous "turtle" robots, demonstrating how simple neural mechanisms could produce seemingly intelligent actions. (Walter, 1948-1949)

II. 1950s: Emergence of AI as a Field

1950 - Turing Test: Alan Turing introduced the concept of machine intelligence with the Turing Test, proposing that a machine could be considered intelligent if it could imitate human responses well enough to fool a human. (Turing, 1950)

1954 - Neural Network Simulation: Farley and Clark conducted the first simulation of artificial neural networks on a digital computer, laying groundwork for future computational neuroscience and machine learning research. (Farley & Clark, 1954)

1956 - Dartmouth Workshop: Considered the birth of AI, this workshop introduced the term "Artificial Intelligence" and involved key figures like John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester. (Dartmouth, 1956)

1958 - Perceptron: Frank Rosenblatt developed the perceptron, the first trainable neural network model, which could learn to recognize patterns from labeled examples. (Rosenblatt, 1958)
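
Rosenblatt's learning rule can be sketched in a few lines: the weights move only when the model misclassifies an example. The toy data, learning rate, and epoch count below are illustrative, not his original setup:

```python
import numpy as np

# Toy linearly separable data: logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                      # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)       # hard threshold activation
        w += lr * (yi - pred) * xi       # weights move only on mistakes
        b += lr * (yi - pred)

print([int(w @ xi + b > 0) for xi in X])  # -> [0, 1, 1, 1]
```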

1959 - Samuel's Checkers Player: Arthur Samuel developed a program that could play checkers and improve its performance through self-play, introducing key concepts in reinforcement learning. (Samuel, 1959)

III. 1960s-1970s: Symbolic AI and AI Winter

1956-1959 - Logic Theorist and General Problem Solver: Allen Newell and Herbert A. Simon created these programs, which laid the groundwork for the symbolic AI that dominated the 1960s. (Newell & Simon, 1956, 1959)

1968 - Group Method of Data Handling (GMDH): Alexey Ivakhnenko introduced GMDH, a method for training neural networks, considered one of the earliest forms of deep learning. (Ivakhnenko, 1968)

1969 - Minsky and Papert's Perceptron Critique: Marvin Minsky and Seymour Papert published a critique that pointed out the limitations of single-layer perceptrons, contributing to the first AI winter. (Minsky & Papert, 1969)

1973 - Lighthill Report: Sir James Lighthill's report to the UK government criticized the lack of progress in AI, leading to significant cuts in AI research funding in the UK. (Lighthill, 1973)

1974 - Backpropagation Algorithm Proposed: In his PhD thesis, Paul Werbos proposed training neural networks with backpropagation, laying important groundwork for future research. (Werbos, 1974)

1977 - Q-learning Foundations: Richard Bellman's earlier work on dynamic programming and the Bellman equation (Bellman, 1957) laid the groundwork for the temporal-difference and Q-learning algorithms developed over the following decades.

IV. 1980s: The Rise of Artificial Neural Networks and Second AI Winter

1980 - Neocognitron: Kunihiko Fukushima introduced the Neocognitron, a hierarchical multilayered neural network for visual pattern recognition, pioneering the concept of convolutional neural networks. (Fukushima, 1980)

1980 - AI Applications and Expert Systems Boom: AI regained attention with the rise of expert systems in industry, such as XCON at Digital Equipment Corporation. (McDermott, 1980)

1982 - Hopfield Network: John Hopfield introduced a content-addressable memory neural network, now known as the Hopfield Network, an example of recurrent neural networks. (Hopfield, 1982)
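
Content-addressable means a corrupted pattern serves as the query and the nearest stored pattern is the answer: units are updated until the network settles into a stored memory. A small sketch with Hebbian storage (the patterns and sizes are arbitrary):

```python
import numpy as np

# Store two +/-1 patterns with the Hebbian outer-product rule.
patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Recall: start from a corrupted cue and update units until stable.
x = np.array([1, -1, 1, -1, 1, 1])  # pattern 0 with its last bit flipped
for _ in range(10):
    for i in range(n):              # asynchronous threshold updates
        x[i] = 1 if W[i] @ x >= 0 else -1

print(x)  # settles back to the stored pattern [1, -1, 1, -1, 1, -1]
```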

1983 - Boltzmann Machines: Geoffrey Hinton and Terry Sejnowski introduced the Boltzmann Machine, a stochastic neural network of visible and hidden units that learns a probability distribution over its inputs. (Hinton & Sejnowski, 1983)

1986 - Backpropagation Rediscovered: David Rumelhart, Geoffrey Hinton, and Ronald Williams popularized backpropagation for training multi-layer neural networks, showing that hidden layers can learn useful internal representations. (Rumelhart et al., 1986)
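
The key idea is the chain rule applied layer by layer: compute the output error, then propagate gradients backward to update every weight. A minimal sketch on XOR, the problem single-layer perceptrons cannot solve (the architecture and hyperparameters are illustrative, not from the 1986 paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: needs a hidden layer

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # 4 hidden -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)               # forward pass, output
    d_out = (out - y) * out * (1 - out)      # output error * sigmoid derivative
    d_h = (d_out @ W2.T) * h * (1 - h)       # chain rule back through W2
    W2 -= 1.0 * h.T @ d_out                  # gradient descent steps
    b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h
    b1 -= 1.0 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```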

1986 - Connectionist Model: James McClelland and David Rumelhart published Parallel Distributed Processing: Explorations in the Microstructure of Cognition, popularizing the connectionist approach to modeling cognition. (McClelland & Rumelhart, 1986)

1989 - Handwritten Zip Code Recognition: Yann LeCun trained a Convolutional Neural Network (CNN) end-to-end with backpropagation to recognize handwritten digits. (LeCun et al., 1989)

V. 1990s: Reinforcement Learning and Machine Learning Advances

1992 - Q-learning: Christopher Watkins introduced Q-learning, an off-policy RL algorithm, in his 1989 thesis, with a convergence proof published with Peter Dayan in 1992; Richard Sutton had formalized Temporal Difference (TD) learning in 1988. (Sutton, 1988; Watkins, 1989; Watkins & Dayan, 1992)
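
The heart of Q-learning is a one-line update: the value of the action taken moves toward the observed reward plus the discounted value of the best next action. A tabular sketch on a made-up corridor environment (states, rewards, and hyperparameters are all illustrative):

```python
import random

# A tiny 5-state corridor: actions move left (-1) or right (+1),
# and reaching the rightmost state yields reward 1.
n_states, actions = 5, [-1, +1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):                                  # episodes
    s = 0
    while s != n_states - 1:
        if random.random() < eps:                     # epsilon-greedy behavior
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Off-policy target: the best next action, regardless of what
        # the behavior policy will actually do next.
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy heads right (+1) from every state.
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})
```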

1994 - SARSA: Gavin Rummery and Mahesan Niranjan introduced the on-policy RL algorithm later named SARSA, originally under the title "Modified Connectionist Q-Learning." (Rummery & Niranjan, 1994)
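
The difference from Q-learning is a single term in the target: SARSA bootstraps from the action the policy actually takes next rather than the greedy one, which is what makes it on-policy. Sketched side by side, reusing the Q-table convention from the snippet above:

```python
def q_learning_target(Q, s2, r, gamma, actions):
    # Off-policy: bootstrap from the best next action.
    return r + gamma * max(Q[(s2, b)] for b in actions)

def sarsa_target(Q, s2, a2, r, gamma):
    # On-policy: bootstrap from the action a2 the policy actually chose
    # (State, Action, Reward, State, Action -- hence the name).
    return r + gamma * Q[(s2, a2)]
```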

1997 - LSTM: Sepp Hochreiter and Jürgen Schmidhuber introduced Long Short-Term Memory (LSTM) networks, enabling recurrent neural networks to learn long-term dependencies through gated memory cells that counter the vanishing-gradient problem. (Hochreiter & Schmidhuber, 1997)

1998 - Reinforcement Learning Formalized: Richard Sutton and Andrew Barto published Reinforcement Learning: An Introduction, providing a comprehensive foundation for reinforcement learning algorithms used in robotics, game playing, and decision-making tasks. (Sutton & Barto, 1998)

VI. 2000s: Deep Learning Emergence

2006 - Deep Learning Pretraining: Geoffrey Hinton and colleagues introduced deep belief networks, trained by unsupervised layer-wise pretraining followed by supervised fine-tuning, reigniting interest in deep neural networks. (Hinton et al., 2006)

2009 - ImageNet: Fei-Fei Li and her team created ImageNet, a large dataset central to deep learning's breakthroughs in computer vision. (Deng et al., 2009)

VII. 2010s: Deep Learning Revolution

2012 - AlexNet: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ImageNet competition with AlexNet, marking the beginning of the deep learning revolution. (Krizhevsky et al., 2012)

2013 - DQN: DeepMind introduced Deep Q-Networks (DQN), combining deep learning with Q-learning to achieve human-level performance on Atari games. (Mnih et al., 2013)
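
DQN replaces the Q-table with a neural network and stabilizes training with experience replay and a periodically frozen target network. A minimal sketch of the TD-target and loss computation (the batch values and function names are illustrative stand-ins, not DeepMind's implementation):

```python
import numpy as np

def dqn_targets(rewards, next_q_target, dones, gamma=0.99):
    """TD targets for a batch of transitions.
    next_q_target: Q-values of next states from the frozen target network,
    shape (batch, n_actions); dones marks terminal transitions."""
    return rewards + gamma * (1 - dones) * next_q_target.max(axis=1)

# Illustrative batch of 3 transitions.
rewards = np.array([0.0, 1.0, 0.0])
next_q = np.array([[0.2, 0.5], [0.1, 0.0], [0.3, 0.4]])
dones = np.array([0.0, 1.0, 0.0])     # second transition ended the episode
q_taken = np.array([0.4, 0.8, 0.5])   # online network's Q for actions taken

targets = dqn_targets(rewards, next_q, dones)  # -> [0.495, 1.0, 0.396]
loss = np.mean((q_taken - targets) ** 2)       # squared TD error, minimized by SGD
print(targets, loss)
```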

2014 - GANs: Ian Goodfellow and colleagues introduced Generative Adversarial Networks (GANs), which pit a generator against a discriminator, advancing fields such as image generation. (Goodfellow et al., 2014)

2014 - Seq2Seq: Sutskever, Vinyals, and Le introduced the sequence-to-sequence framework using LSTMs, revolutionizing machine translation and establishing the encoder-decoder architecture that would influence future language models. (Sutskever et al., 2014)

2016 - WaveNet: DeepMind introduced WaveNet, one of the first AI models to generate natural-sounding speech, inspiring research and applications at Google and beyond. (van den Oord et al., 2016)

2016 - AlphaGo: DeepMind's AlphaGo defeated world champion Lee Sedol in Go, showcasing the power of combining deep learning with reinforcement learning. (Silver et al., 2016)

2017 - Transformer: Vaswani and colleagues at Google introduced the Transformer in "Attention Is All You Need," an architecture that relates all positions in a sequence directly through self-attention, surpassing memory-based recurrent networks like LSTMs. (Vaswani et al., 2017)
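
Direct attention means every position computes a weighted average over all positions, with the weights derived from query-key similarity. A minimal sketch of scaled dot-product attention, the paper's core operation (the token count and embedding size are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                        # weighted average of values

# 4 tokens, 8-dimensional embeddings; self-attention uses X as Q, K, and V.
X = np.random.default_rng(0).normal(size=(4, 8))
print(scaled_dot_product_attention(X, X, X).shape)  # -> (4, 8)
```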

2017 - Proximal Policy Optimization (PPO): John Schulman and colleagues introduced PPO, a simple and efficient policy-gradient method for reinforcement learning. (Schulman et al., 2017)
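
PPO's central device is a clipped surrogate objective: the probability ratio between the new and old policies is clipped to [1 - eps, 1 + eps], removing the incentive for destructively large policy updates. A sketch of the loss (the ratios and advantages are made-up numbers for illustration):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from Schulman et al. (2017):
    L = -E[min(r * A, clip(r, 1 - eps, 1 + eps) * A)]."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# ratio = pi_new(a|s) / pi_old(a|s) for a batch of actions.
ratio = np.array([0.9, 1.5, 0.7])
advantage = np.array([1.0, 1.0, 2.0])
print(ppo_clip_loss(ratio, advantage))  # -> about -1.17; ratios past
# 1 + eps earn no extra credit, so updates stay proximal.
```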

2017 - AlphaZero: DeepMind's AlphaZero taught itself from scratch to master chess, shogi (Japanese chess), and Go, beating a world-champion program in each game. (Silver et al., 2017)

2019 - GPT-2: OpenAI released GPT-2, a large-scale transformer-based language model that demonstrated significant improvements in natural language understanding and generation. (Radford et al., 2019)

2019 - MuZero: DeepMind's MuZero mastered Go, chess, shogi, and Atari without being told the rules, learning to plan winning strategies in unknown environments. (Schrittwieser et al., 2019)

VIII. 2020s: Foundation Models and Modern AI

2020 - GPT-3: OpenAI launched GPT-3, a 175-billion-parameter language model whose few-shot abilities transformed natural language processing. (Brown et al., 2020)

2020 - Vision Transformers (ViT): Alexey Dosovitskiy and colleagues introduced the Vision Transformer, applying transformer models to image recognition tasks with remarkable success. (Dosovitskiy et al., 2020)

2021 - Decision Transformer: Lili Chen and colleagues introduced the Decision Transformer, framing reinforcement learning as a sequence modeling problem using transformer architectures. (Chen et al., 2021)

2022 - ChatGPT: OpenAI introduced ChatGPT, showcasing the potential of conversational AI; it quickly became one of the most widely used AI tools globally. (OpenAI, 2022)

2022 - RT-1: Robotics Transformer: Anthony Brohan and colleagues developed RT-1, applying transformer models to real-world robotic control at scale. (Brohan et al., 2022)

© 2024 Maxence Boels. All rights reserved.