
Archaeology of science and technology – a brief history of artificial intelligence

Author: Geek Camp

Building intelligent machines has long been one of humanity's enduring fascinations.

The ancient Egyptians and Romans stood in awe of religious statues secretly manipulated by priests. Medieval legend is full of objects that could move and speak like their human masters, and of sages said to have created a homunculus, a small artificial human as sentient and intelligent as a real person.

In fact, the 16th-century Swiss alchemist and physician Theophrastus Bombastus von Hohenheim, better known as Paracelsus, declared:

"We will be like God. We will replicate God's greatest miracle – the creation of humanity. ”

Humanity's latest attempt to create "synthetic intelligence" is now known as artificial intelligence. In this article, I hope to provide a comprehensive history of AI, from its little-known days (when it wasn't even called AI) to the current era of AI.

Artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. - John McCarthy

This article divides the history of artificial intelligence into nine milestones. These milestones are not isolated events; for each one, I will discuss how it connects to the overall history of AI, the lessons it drew from earlier milestones, and the progress it represents.

Here are the milestones we'll cover:

  • Dartmouth Conference
  • Perceptron
  • The AI boom of the 1960s
  • The AI winter of the 1980s
  • Development of expert systems
  • The emergence of natural language processing and computer vision in the 1990s
  • The rise of big data
  • The advent of deep learning
  • The development of generative artificial intelligence

Dartmouth Conference

The Dartmouth conference was a seminal event in the history of artificial intelligence. It was a summer research project held at Dartmouth College in New Hampshire, United States, in 1956. The conference was the first of its kind, bringing together researchers from computer science, mathematics, physics, and other seemingly disparate fields with the sole purpose of exploring the potential of artificial intelligence, at a time when the term "artificial intelligence" had only just been coined for the occasion. Participants included John McCarthy, Marvin Minsky, and other prominent scientists and scholars.


During the conference, attendees discussed a wide range of topics related to AI, such as natural language processing, problem solving, and machine learning. They also developed a roadmap for AI research, including the development of programming languages and algorithms for creating intelligent machines.

The Dartmouth conference is considered a seminal moment in the history of artificial intelligence: it marked the birth of the field and of the name "artificial intelligence" itself.

The Dartmouth conference has had a significant impact on the entire history of AI. It helped establish AI as a field of study and encouraged the development of new technologies and approaches.


Participants articulated a vision for artificial intelligence that included the creation of intelligent machines able to reason, learn, and communicate like humans. This vision sparked a wave of research and innovation in the field.

After the conference, John McCarthy and his colleagues developed LISP, the first programming language created for artificial intelligence. The language became a foundational tool for AI research and is still in use today.

The conference also led to the establishment of AI research labs at several universities and research institutions, including MIT, Carnegie Mellon and Stanford.


One of the ideas most closely associated with the Dartmouth Conference is the Turing test. In 1950, British mathematician Alan Turing had proposed a test to determine whether a machine could exhibit intelligent behavior indistinguishable from that of a human. The concept was discussed in depth at the conference and became a core idea of artificial intelligence research. The Turing test remains an important benchmark for measuring the progress of AI research.

The Dartmouth conference was a pivotal event in the history of AI. It established AI as a field of study, produced a research roadmap, and sparked a wave of innovation. The development of AI programming languages, the founding of research labs, and the centrality of the Turing test are all important legacies of this conference.

Perceptron

The perceptron is an artificial neural network architecture designed by psychologist Frank Rosenblatt in 1958. It fueled the famous brain-inspired approach to AI, in which researchers build AI systems that mimic the structure of the human brain.


Technically, a perceptron is a binary classifier that learns to divide input patterns into two classes. It works by taking a set of input values and calculating the weighted sum of those values, followed by a threshold function that determines whether the output is 1 or 0. During training, the weights can be adjusted to optimize the performance of the classifier.
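To make that mechanism concrete, here is a minimal sketch of a perceptron in Python. It is illustrative only: the learning rate, the number of epochs, and the tiny AND-gate dataset are chosen for this example, not taken from Rosenblatt's original work.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights and a bias for a binary (0/1) classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            # Weighted sum of the inputs followed by a hard threshold.
            prediction = 1 if np.dot(w, xi) + b > 0 else 0
            # Adjust the weights only when the prediction is wrong.
            error = target - prediction
            w += lr * error * xi
            b += lr * error
    return w, b

# Toy example: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```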

The perceptron is seen as an important milestone in artificial intelligence because it demonstrated the potential of machine learning algorithms to mimic aspects of human intelligence. It showed that machines can learn from experience and improve their performance over time, much as humans do.


The perceptron was the next major milestone after Dartmouth. The conference generated a great deal of excitement about the potential of AI, but AI remained largely a theoretical concept; the perceptron offered a first real glimpse of machine learning in practice, a working implementation showing that the concept could be turned into a functioning system.

Perceptrons were initially touted as a breakthrough in artificial intelligence and received extensive media attention. But it was later discovered that the algorithm had significant limitations, especially when it came to classifying complex data. This led to a decline in interest in perceptrons, and in artificial intelligence research more broadly, in the late 1960s and 1970s.


But the perceptron was later revived and incorporated into more complex neural networks, opening the way to deep learning and other forms of modern machine learning. Today, the perceptron is regarded as an important milestone in the history of artificial intelligence and continues to be studied and built upon in the research and development of new AI technologies.

The AI boom of the 1960s

As mentioned earlier, the 1950s were an important decade for the artificial intelligence community, thanks to the creation and popularization of the perceptron. The perceptron was regarded as a breakthrough in AI research and attracted great interest in the field, acting as a catalyst for what came to be known as the AI boom.

The AI boom of the 1960s was a period of rapid progress and intense interest in artificial intelligence. Computer scientists and researchers were exploring new ways to create intelligent machines and to program them to perform tasks traditionally thought to require human intelligence.

When the limitations of the perceptron became apparent, researchers began to explore other approaches to AI, focusing on areas such as symbolic reasoning, natural language processing, and machine learning.

This research led to the development of new programming languages and tools, such as LISP and Prolog, designed specifically for AI applications. These tools made it easier for researchers to experiment with new AI techniques and to develop more complex AI systems.

During this time, the U.S. government also became interested in AI and began funding research through agencies such as the Defense Advanced Research Projects Agency (DARPA). The funding helped accelerate the development of AI and gave researchers the resources they needed to tackle increasingly complex problems.

The AI boom of the 1960s culminated in several landmark AI systems. One example is the General Problem Solver (GPS), created by Herbert Simon, J.C. Shaw, and Allen Newell. GPS was an early AI system that solved problems by searching through a space of possible solutions.


Another example is ELIZA, a natural language processing program created by Joseph Weizenbaum that simulated a psychotherapist.


The AI boom of the 1960s was thus a period of significant progress in AI research and development: researchers explored new approaches, built programming languages and tools designed specifically for AI applications, and produced several landmark systems that paved the way for future AI development.

The AI winter of the 1980s

The AI winter refers to a period in which research and development in artificial intelligence slowed dramatically. This period of stagnation, often dated to roughly 1974 through 1993, followed the earlier decade of significant progress in AI research and development.

Part of the reason was that many projects developed during the AI boom failed to deliver on their promises. With few new advances materializing, the AI research community grew increasingly frustrated. This led to funding cuts, and many AI researchers were forced to abandon their projects and leave the field altogether.

In the words of the Lighthill Report, commissioned by the British Science Research Council:

"AI has failed to achieve its ambitious goals, and so far none of the discoveries in this field have had the significant impact promised at the time."

The AI winter of the 1980s was characterized by a significant reduction in funding for AI research and a general lack of interest in AI among investors and the public. This led to a sharp drop in the number of AI projects under development, and many of the research projects that remained active were unable to make significant progress for lack of resources.

Despite the challenges of the AI winter, the field did not disappear. Some researchers continued to work on AI projects, and meaningful progress was made during this period, including advances in neural networks and the beginnings of machine learning. But progress was slow, and it wasn't until the 1990s that interest in AI began to re-emerge.

Overall, the AI winter of the 1980s is an important milestone in the history of AI, illustrating the challenges and limitations of AI research and development. It also serves as a cautionary tale for investors and policymakers: the hype surrounding AI can sometimes be exaggerated, and progress in the field requires sustained investment and commitment.

Development of expert systems

Expert systems are an artificial intelligence technology that rose to prominence in the 1980s. They are designed to mimic the decision-making abilities of human experts in specific domains, such as medicine, finance, or engineering.

In the 1960s and early 1970s there was a great deal of optimism and excitement around AI and its potential to transform industries. But as mentioned earlier, that enthusiasm was tempered by the AI winter, characterized by a lack of progress and a lack of research funding.

The development of expert systems marked a turning point in the history of artificial intelligence. As the need for practical, scalable, robust, and quantifiable AI applications grew, so did the pressure on the AI community to deliver them.

Expert systems proved that AI could be applied in the real world and had the potential to bring significant benefits to businesses and industries. They were used to automate decision-making processes in a variety of fields, from diagnosing medical conditions to predicting stock prices.

At a technical level, an expert system typically consists of a knowledge base of domain-specific facts and rules, together with an inference engine that reasons over new inputs to reach decisions. Expert systems employ several forms of reasoning, such as deduction, induction, and abduction, to mimic the decision-making process of a human expert.
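As a rough illustration of this kind of rule-based inference, here is a minimal forward-chaining sketch in Python. The facts, rule conditions, and medical-flavored conclusions are invented purely for demonstration and are not taken from any particular expert system.

```python
# Minimal forward-chaining sketch of an expert system's inference loop.
# The facts and rules below are invented for illustration only.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_test"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions hold until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # deduce a new fact from the rule
                changed = True
    return facts

print(forward_chain({"fever", "cough", "high_risk_patient"}, rules))
# -> the derived facts include 'flu_suspected' and 'recommend_test'
```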

Overall, expert systems are an important milestone in the history of artificial intelligence: they demonstrated the practical application of AI technologies and paved the way for further development of the field.

Today, expert systems continue to be used in various industries, and their development helped pave the way for other AI technologies, such as machine learning and natural language processing.

The emergence of natural language processing and computer vision in the 1990s

In the 1990s, AI research began to regain momentum around the world. This period laid the groundwork for modern artificial intelligence research.

As mentioned in the previous section, expert systems came into their own around the late 1980s and early 1990s. But they were limited because they relied on structured data and rule-based logic, and they struggled to process unstructured data, such as natural language text or images, which is inherently ambiguous and context-dependent.

To solve this problem, researchers began to develop techniques for processing natural language and visual information.

In the 1970s and 1980s, rule-based natural language processing and computer vision systems made significant progress. However, these systems were still limited because they relied on predetermined rules and could not learn from data.

In the 1990s, advances in machine learning algorithms and computing power accelerated the development of more sophisticated natural language processing and computer vision systems.

Researchers began to use statistical methods to learn patterns and features directly from data rather than relying on predefined rules. This approach, known as machine learning, provided more accurate and flexible models for processing natural language and visual information.

One of the most important milestones of this era was the adoption of the hidden Markov model (HMM), which allowed probabilistic modeling of sequences such as speech and natural language text. This led to significant advances in speech recognition, language translation, and text classification.
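To give a sense of what "probabilistic modeling of sequences" means here, the sketch below implements the classic HMM forward algorithm in Python, which computes the likelihood of an observation sequence. The two hidden states and all probability values are made up for the example.

```python
import numpy as np

# Toy HMM with two hidden states; all probabilities are invented for illustration.
start = np.array([0.6, 0.4])                 # P(initial state)
trans = np.array([[0.7, 0.3],                # P(next state | current state)
                  [0.4, 0.6]])
emit  = np.array([[0.5, 0.4, 0.1],           # P(observation symbol | state)
                  [0.1, 0.3, 0.6]])

def forward(observations):
    """Forward algorithm: likelihood of an observation sequence under the HMM."""
    alpha = start * emit[:, observations[0]]
    for obs in observations[1:]:
        # Sum over previous states, then weight by the emission probability.
        alpha = (alpha @ trans) * emit[:, obs]
    return alpha.sum()

print(forward([0, 1, 2]))  # likelihood of observing symbols 0, 1, 2 in order
```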


Similarly, in computer vision, the emergence of convolutional neural networks (CNNs) made object recognition and image classification far more accurate.


These technologies are now widely used, from self-driving cars to medical imaging.

Overall, the emergence of natural language processing and computer vision was an important milestone in the history of artificial intelligence, enabling more sophisticated and flexible processing of unstructured data.

These technologies remain the focus of AI research and development today because of their significant impact on a wide range of industries and applications.

The rise of big data

The concept of big data has been around for decades, but its prominence in the field of artificial intelligence (AI) dates back to the early 21st century. Before we dive into its relationship to artificial intelligence, let's briefly discuss the term "big data."

For data to be considered "big," it is usually described in terms of three core attributes: volume, velocity, and variety.

Volume refers to the sheer size of a dataset, ranging from terabytes to petabytes and beyond.

Velocity refers to the speed at which data is generated and must be processed. For example, data from social media or IoT devices may be generated in real time and need to be processed quickly.

Variety refers to the different types of data produced, including structured, unstructured, and semi-structured data.

Before the advent of big data, AI was limited by the quantity and quality of data used to train and test machine learning algorithms.

Natural language processing (NLP) and computer vision were two areas where AI made significant progress in the 1990s, but both remained limited by the amount of data available.

For example, early NLP systems were based on hand-crafted rules that had limited ability to handle the complexity and variability of natural language.

The rise of big data changed that, providing access to vast amounts of data from a variety of sources, including social media, sensors, and other connected devices. This allowed machine learning algorithms to be trained on much larger datasets, which in turn allowed them to learn more complex patterns and make more accurate predictions.

At the same time, advances in data storage and processing technologies such as Hadoop and Spark made it possible to process and analyze these large datasets quickly and efficiently. This supported the development of new machine learning approaches, such as deep learning, which can learn from large amounts of data and make highly accurate predictions.
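To give a flavor of this style of distributed processing, here is a hedged PySpark sketch of the classic word count, the kind of aggregation that Hadoop- and Spark-style systems made practical on very large text corpora. It assumes the pyspark package is installed; the sample lines and application name are invented for the example.

```python
from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session.
spark = SparkSession.builder.appName("word-count-sketch").getOrCreate()

# In practice this would be a huge distributed dataset; here, two toy lines.
lines = spark.sparkContext.parallelize([
    "big data enables machine learning",
    "machine learning needs big data",
])

counts = (lines.flatMap(lambda line: line.split())   # split each line into words
               .map(lambda word: (word, 1))          # pair each word with a count of 1
               .reduceByKey(lambda a, b: a + b))     # sum the counts per word across partitions

print(counts.collect())
spark.stop()
```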

Today, from self-driving cars and personalized medicine to natural language understanding and recommendation systems, big data remains the driving force behind many of the latest advances in AI.

As the amount of data generated continues to grow exponentially, the role of big data in AI will only become more important in the coming years.

The advent of deep learning

The emergence of deep learning is an important milestone in the history of modern artificial intelligence.

Since the Dartmouth conference in the 1950s, AI has been considered a legitimate field of research, with early work focusing on symbolic logic and rule-based systems. This involved hand-programming machines to make decisions based on a predetermined set of rules. While such systems were useful in some applications, their ability to learn and adapt to new data was limited.

It wasn't until the rise of big data that deep learning became practical at scale and took its place as a milestone in the history of artificial intelligence. As the amount of available data grew, researchers needed new ways to process it and extract insights from vast amounts of information.

Deep learning algorithms provide a solution to this problem by enabling machines to automatically learn from large data sets and make predictions or decisions based on that learning.

Deep learning is a method of machine learning that uses artificial neural networks to simulate the structure and function of the human brain. These networks consist of layers of interconnected nodes, each performing a specific mathematical function on the input data. The output of one layer serves as the input to the next, allowing the network to extract increasingly complex features from the data.
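A minimal sketch of such a feed-forward pass is shown below in Python. The layer sizes, randomly initialized weights, and ReLU nonlinearity are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Randomly initialized weights for a tiny network: input -> two hidden layers -> output.
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases  = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Each layer applies a weighted sum plus a nonlinearity; its output feeds the next layer."""
    for w, b in zip(weights, biases):
        x = relu(x @ w + b)
    return x

print(forward(rng.normal(size=4)))   # a 2-dimensional output vector
```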

A key advantage of deep learning is its ability to learn hierarchical representations of data. This means that the network can automatically learn to recognize patterns and features at different levels of abstraction.

For example, a deep learning network might learn to recognize the shape of a single letter, then the structure of a word, and finally the meaning of a sentence.

The development of deep learning has led to major breakthroughs in areas such as computer vision, speech recognition, and natural language processing. For example, deep learning algorithms can now classify images accurately, recognize speech, and even generate realistic, human-like language.

Deep learning is an important milestone in the history of artificial intelligence, and the rise of big data made it possible. Its ability to learn automatically from large amounts of information has driven significant advances in a wide range of applications, and it is likely to remain a key area of research and development in the coming years.

The development of generative artificial intelligence

This is where we are on the AI timeline today. Generative AI is a subfield of artificial intelligence concerned with building systems capable of generating new data or content similar to the data they were trained on. Such systems can generate images, text, music, and even video.

In the historical context of artificial intelligence, generative AI can be seen as the major milestone that followed the rise of deep learning. Deep learning is a subset of machine learning that uses multi-layered neural networks to analyze and learn from large amounts of data, and it has achieved remarkable success in image and speech recognition, natural language processing, and even complex games such as Go.

Transformers are a neural network architecture that has revolutionized generative artificial intelligence. Vaswani et al. introduced the architecture in a 2017 paper, and it has since been used for a wide range of tasks, including natural language processing, image recognition, and speech synthesis.

Transformers use self-attention mechanisms to analyze the relationships between different elements in a sequence, enabling them to produce more coherent and nuanced outputs. This has led to large language models such as GPT-4 (the model behind ChatGPT), which can generate human-like text on a wide range of topics.
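As a rough sketch of the self-attention operation described by Vaswani et al., the Python snippet below implements single-head scaled dot-product attention with randomly initialized projection matrices. The sequence length, model dimension, and matrix names are illustrative and not taken from any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance of each position to every other
    return softmax(scores) @ V                # weighted mixture of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # -> (5, 16)
```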

AI-generated art is another area where generative AI has had a significant impact. By training deep learning models on large datasets of artwork, generative AI systems can create new and unique works of art.

The application of generative AI in art has raised debates about creativity and the nature of authorship, as well as ethical questions about using AI to create art. Some argue that art produced by AI is not truly creative because it lacks the intentionality and emotional resonance of human-made art. Others believe that AI art has value in its own right and can be used to explore new forms of creativity.

Large language models like GPT-4 are also used in the field of creative writing, and some authors use them to generate new text or as a tool for inspiration.

This raises questions about the future of writing and the role of artificial intelligence in the creative process. Some argue that AI-generated text lacks the depth and nuance of human writing, while others see it as a tool that can enhance human creativity by providing new ideas and perspectives.

Generative AI, especially with the help of transformers and large language models, has the potential to revolutionize many fields, from art to writing to simulation. While debates continue in these areas about the nature of creativity and the ethics of using AI, it is clear that generative AI is a powerful tool that will continue to shape the future of technology and art.

Epilogue

The history of AI is both fascinating and thought-provoking, marked by disappointments as well as extraordinary breakthroughs.

But with applications like ChatGPT and DALL·E, we are only scratching the surface of what AI might become.

There are challenges ahead, and there will certainly be more and harder ones.
