Image by OpenAI’s DALL-E (responding to the prompt ‘futuristic robot cutting a guy’s hair’)
Q: Gordon to ChatGPT-4: What is the history of AI?
A: ChatGPT:
“The history of artificial intelligence (AI) can be traced back to antiquity, with myths and stories featuring artificial beings and automated machines. However, the modern field of AI emerged in the mid-20th century as a result of advancements in computer science, mathematics, and neuroscience. Here is an overview of key milestones in the history of AI:
1943: Warren McCulloch and Walter Pitts published a paper on artificial neurons, laying the foundation for neural networks.
1950: Alan Turing proposed the Turing Test, a criterion to evaluate machine intelligence.
1956: The Dartmouth Conference, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, marked the birth of AI as an independent field of research.
Late 1950s-1960s: Early AI programs such as Samuel's checkers player, Newell and Simon's General Problem Solver, and McCarthy's Lisp programming language contributed to the development of AI.
1969: Marvin Minsky and Seymour Papert published "Perceptrons," critiquing the limitations of neural networks, which led to a decline in neural network research for the next two decades.
1970s-1980s: Expert systems, which attempted to replicate human expertise using rule-based systems, gained popularity.
1986: The backpropagation algorithm, developed by Rumelhart, Hinton, and Williams, revitalized research in neural networks.
1990s: The development of reinforcement learning techniques and algorithms, such as Q-learning, further advanced AI research.
1997: IBM's Deep Blue defeated world chess champion Garry Kasparov, showcasing the potential of AI.
2006: Geoffrey Hinton coined the term "deep learning," describing the use of deep neural networks to solve complex problems.
2012: AlexNet, a deep convolutional neural network, won the ImageNet Large Scale Visual Recognition Challenge, highlighting the potential of deep learning in computer vision.
2014: Google's DeepMind developed AlphaGo, which defeated the world Go champion in 2016, demonstrating the power of AI in mastering complex games.
2018: OpenAI's GPT-2 and Google's BERT demonstrated the potential of transformer-based models in natural language processing.
2020: OpenAI's GPT-3 showcased impressive language generation capabilities and became a benchmark for large-scale language models.
This is just a brief overview of the rich history of AI, which includes numerous breakthroughs, setbacks, and paradigm shifts. The field continues to evolve rapidly, with ongoing research pushing the boundaries of what AI can achieve.
Note from Gordon: the above was generated entirely by OpenAI’s ChatGPT-4. As OpenAI itself cautions, the current iteration is far from perfect, and ‘facts’ can be misstated (just as we humans often misstate them, though AI seems less likely to be wrong than most of us). My point: always think twice before blindly accepting ‘facts’, no matter the source!