
【AI Spring Festival Gala In-depth Summary】I have confidence in AI, and I am more worried about humans

Author: ChatGPT Sweeping Monk

On June 9, the two-day "KLCII Conference" opened at the Zhongguancun National Independent Innovation Demonstration Zone Conference Center.

The conference is an annual international high-end professional exchange event on artificial intelligence organized by KLCII (the Beijing Academy of Artificial Intelligence, often called the strongest AI research institute in China), positioned as the "top event for AI insiders" and nicknamed the "AI Spring Festival Gala", as the lineup of participants shows:

Turing Award winners Geoffrey Hinton and Yann LeCun (two of the three giants of deep learning; the third, Yoshua Bengio, attended previous conferences), Joseph Sifakis, and Andrew Yao (Yao Qizhi); academicians Zhang Bo, Zheng Nanning, Xie Xiaoliang, Zhang Hongjiang, and Zhang Yaqin; Stuart Russell, founder of the Center for Human-Compatible AI at UC Berkeley; Max Tegmark, founder of the Future of Life Institute; OpenAI CEO Sam Altman (giving his first China speech, albeit online); and members of star teams from Meta, Microsoft, Google, DeepMind, Anthropic, HuggingFace, Midjourney, Stability AI, and more: over 200 top experts in artificial intelligence in total...


Over the past two days I followed the conference live stream, and even as a liberal arts student with no technical background, I listened with relish and gained a lot.

However, after watching the closing speech by Turing Award winner and "father of deep learning" Geoffrey Hinton, a strong mix of emotions came over me:

On the one hand, watching AI researchers explore and imagine cutting-edge technologies naturally builds confidence that AI, and even future artificial general intelligence (AGI), will be realized;

On the other hand, hearing front-line experts and scholars discuss the risks of AI, and humanity's ignorance of and contempt for how to deal with them, I worry about the future of mankind. The most essential question, in Hinton's words: there is no precedent in history for a more intelligent thing being controlled by a less intelligent one. If frogs had invented humans, who do you think would be in control, the frogs or the humans?

Given the flood of information from the two-day conference, I took some time to sort out the important speech materials and record some of my own impressions along the way, to make later review and reference easier, and to share with everyone who cares about the progress of AI.

Note: The parts marked [Note] below are personal impressions; the summarized content is quoted (my ability is limited, I couldn't write it myself -_-||) from the sources linked at the end of each section, with some modifications.

Sam Altman, CEO of OpenAI: AGI could arrive within a decade

On June 10, the all-day "AI Safety and Alignment" forum opened with a keynote by OpenAI co-founder Sam Altman, his first speech to a Chinese audience, albeit delivered remotely.

The talk shared insights on model interpretability, scalability, and generalizability. Afterwards, Altman and KLCII Chairman Zhang Hongjiang held a Q&A on how to deepen international cooperation, conduct safer AI research, and deal with the future risks of AI in the current era of large models.


Highlights:

  • The current AI revolution is so impactful not only because of the scale of its impact, but also because of the speed of its progress. This brings both dividends and risks.
  • The potential dividends of AI are enormous. But we must manage the risks together in order to use it to improve productivity and living standards.
  • With the advent of increasingly powerful AI systems, the stakes of global collaboration have never been higher. Historically, great powers have often had differences of opinion, but on certain important matters cooperation and coordination are necessary. Advancing AGI safety is one of the most important areas in which we need to find common interests. Throughout his speech, Altman repeatedly emphasized the need for global alignment on AI safety and for regulation, and specifically quoted the Tao Te Ching: a journey of a thousand miles begins with a single step. Alignment is still an open problem.
  • Imagine a future AGI system with, say, 100,000 lines of binary code; human regulators would be unlikely to discover whether such a model was doing something nefarious.
  • GPT-4 took eight months to align. Related research is still ramping up, along two main lines. One is scalable oversight: trying to use AI systems to assist humans in supervising other AI systems. The other is interpretability: trying to understand the "black box" of a large model's internal workings. Ultimately, OpenAI's goal is to train AI systems that help with alignment research itself.
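To make the scalable-oversight idea in the last bullet concrete, here is a minimal runnable sketch. The `ask_model` helper and its canned replies are hypothetical stand-ins for real LLM calls, not OpenAI's API: one model answers, a second model critiques the answer, and the human only needs to review the focused critique.

```python
# Toy sketch of scalable oversight: an AI critic helps a human supervise
# another AI. `ask_model` is a hypothetical stand-in for a real LLM call.

def ask_model(role: str, prompt: str) -> str:
    """Placeholder for an LLM call; returns canned text for illustration."""
    canned = {
        "assistant": "The capital of Australia is Sydney.",
        "critic": "Likely error: the capital of Australia is Canberra, not Sydney.",
    }
    return canned[role]

def supervised_answer(question: str) -> dict:
    answer = ask_model("assistant", question)
    # Instead of auditing the raw answer, the human reads a focused critique,
    # which scales better as answers grow too complex to check directly.
    critique = ask_model("critic", f"Find flaws in this answer: {answer}")
    return {"question": question, "answer": answer, "critique": critique}

print(supervised_answer("What is the capital of Australia?"))
```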

When Zhang Hongjiang asked how far we are from the era of artificial general intelligence (AGI), Sam Altman said, "There will be super AI systems within the next 10 years, but it is difficult to predict the specific time point," and stressed that "new technologies are changing the world completely, faster than imagined."

When asked whether OpenAI will open-source its large models, Altman said there will be more open source in the future, but named no specific models or timelines. He also said that GPT-5 will not arrive anytime soon. After the session, Altman posted his thanks for the invitation to speak at KLCII.


Chinese transcript

by Geek Park: Quoting the Tao Te Ching on the differences between great powers, Sam Altman's latest speech: AI security starts with a single step

by Tencent Technology: OpenAI CEO appeared at the "AI Spring Festival Gala", and Zhang Hongjiang Q&A: There will be super AI in 10 years

Turing Award winner Yann LeCun: No one will use GPT models in five years; the world model is the future of AGI

Yann LeCun, one of the three giants of deep learning and a Turing Award winner, delivered a keynote entitled "Towards Machines that Can Learn, Reason, and Plan", in which he questioned, as he always does, the current LLM route and proposed another path toward machines that learn, reason, and plan: the world model.


Key Ideas of the Presentation:

  • There is still a gap between the capabilities of AI and those of humans and animals. The gap shows up mainly in logical reasoning and planning: today's large models can only "react instinctively". What is self-supervised learning? Self-supervised learning is capturing dependencies in the input.
  • The training system captures the dependencies between the parts we see and the parts we don't see yet. Current large models perform amazingly when trained on one or two trillion tokens of data.
  • It's easy to get carried away by their fluency, but in the end they make stupid mistakes: factual errors, logical errors, inconsistencies. Their reasoning capacity is limited, and they produce harmful content. As a result, large models need to be retrained.
  • How can AI really plan the way humans do? Look at how humans and animals manage to learn so quickly: by observing and experiencing the world.
  • LeCun believes the future development of AI faces three major challenges: learning representations of the world, predicting with world models, and using self-supervised learning.
  • The first is learning representations and predictive models of the world, which can of course be learned in a self-supervised way. The second is learning to reason. This corresponds to psychologist Daniel Kahneman's concepts of System 1 and System 2. System 1 covers the human behaviors and actions that correspond to subconscious computation, the things done without thinking; System 2 covers the tasks you tackle consciously and purposefully with your full power of thought. At present, artificial intelligence can basically only realize System 1 functions, and incompletely at that. The final challenge is how to plan complex sequences of actions by breaking them down into simple tasks and running them hierarchically.
  • Therefore, LeCun proposed the "world model", which consists of six independent modules: the configurator, perception, world model, cost, actor, and short-term memory modules (a structural sketch follows this list). He believes that designing architectures and training paradigms for the world model is the real obstacle to the development of artificial intelligence in the coming decades.
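To make the six-module design easier to picture, here is a minimal structural sketch. The class, the method names, and the toy cost function are my own illustration of the talk's block diagram, not LeCun's specification; the "world model" here is a stub rather than a learned predictor.

```python
# Structural sketch of the six-module world-model agent: configurator,
# perception, world model, cost, actor, and short-term memory.
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)   # short-term memory module

    def configure(self, task):                   # configurator: set up modules for the task
        self.task = task

    def perceive(self, observation):             # perception: estimate current world state
        return {"state": observation}

    def predict(self, state, action):            # world model: predict next state given an action
        return {"state": (state["state"], action)}

    def cost(self, state):                       # cost module: "discomfort" of a predicted state
        return 0.0 if self.task in str(state) else 1.0

    def act(self, observation, candidate_actions):  # actor: pick the action minimizing predicted cost
        state = self.perceive(observation)
        best = min(candidate_actions,
                   key=lambda a: self.cost(self.predict(state, a)))
        self.memory.append((observation, best))     # remember for later planning
        return best

agent = Agent()
agent.configure("fetch")
print(agent.act("ball on floor", ["fetch", "sit", "wander"]))  # -> "fetch"
```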

LeCun has always expressed disdain for the idea that AI will destroy humanity, believing such worries are superfluous since today's AI is not even as intelligent as a dog. When asked whether AI systems pose an existential risk to humans, LeCun replied: we don't have super AI yet, so how could we make super AI systems safe?

"Asking people today if we can guarantee that superintelligent systems are safe for humans is an unanswerable question. Because we don't have a design for superintelligent systems. So you can't make something safe until you have the basic design. It's like you asked aeronautical engineers in 1930, can you make turbojets safe and reliable? And the engineer will say, "What is a turbojet?" Because the turbojet had not yet been invented in 1930. So we're kind of in the same situation. It's a bit premature to claim that we can't make these systems secure because we haven't invented them yet. Once we invent them—maybe they'll be similar to the blueprints I proposed—then it's worth discussing."

Chinese transcript

By KLCII Conference: Strong AI Roadmap for the Father of Convolutional Neural Networks: Self-Supervision, Inference, Planning

by Geek Park: Yann LeCun, one of the three giants of deep learning: Big language models can't bring AGI

Max Tegmark, professor at MIT's Institute for Artificial Intelligence and Fundamental Interactions: Keeping AI under control with mechanistic interpretability

Max Tegmark, a tenured professor of physics at MIT, scientific director of the Foundational Questions Institute, founder of the Future of Life Institute, and initiator of the famous "pause giant AI experiments" open letter (the late-March proposal co-signed by Elon Musk, Turing Award winner Yoshua Bengio, Apple co-founder Steve Wozniak, and 1,000+ other notable figures), gave a wonderful speech at the KLCII Conference titled "Keeping AI Under Control", and held a dialogue with Academician Zhang Yaqin of Tsinghua University on AI ethics, safety, and risk prevention.

The talk discussed in detail the mechanistic interpretability of AI, which is essentially the study of how knowledge is stored in the complex connections of neural networks. If research in this direction continues, it may ultimately be able to truly explain the fundamental question of why LLM large language models produce intelligence.

An interesting fact beyond the speech itself: although he initiated the pause letter, his keynote focused on how to research large AI models more deeply. Perhaps, as Max himself concluded, he is not the "doomer" that Professor Yann LeCun, one of the AI Big Three, called him. He is actually hopeful about and yearning for AI: we can ensure that all this more powerful intelligence works for us, and use it to create a more inspiring future than science fiction writers have ever dreamed of.

[Note]: I expected this to be boring, but it was unexpectedly exciting, and I watched the hour-long talk, the longest of the conference, with relish! He is every bit the professor who lectures often: very engaging, yet able to make theory plain. What's even more surprising is that far from being a die-hard AI opponent, he is actually one of AI's better advocates! He even spoke some Chinese, and didn't forget to recruit students while giving the speech...


Excerpted highlights:

1.

Mechanistic interpretability is a very interesting area. You train a complex neural network that you don't understand to perform an intelligent task, and then try to figure out how it does it.

How are we going to do this? You can have three different levels of ambition. The lowest level is just to diagnose its trustworthiness, to understand how much you should trust it. For example, when you're driving, even if you don't understand how the brakes work, you at least want to know whether you can trust them to slow you down.

The next level of ambition is to understand it well enough to make it more trustworthy. The ultimate ambition is very ambitious, and it's what I hope for: that we'll be able to take all the knowledge machine learning systems have learned and re-implement it in other systems, so we can prove they will do what we want.
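As a concrete taste of the lowest level of ambition, diagnosing trustworthiness by looking inside the network rather than treating it as a black box, here is a minimal PyTorch sketch; the tiny network and random inputs are toy stand-ins.

```python
# Toy mechanistic-interpretability probe: capture a hidden layer's activations
# with a forward hook and inspect them instead of only reading the outputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

activations = {}
def save_hidden(module, inputs, output):
    activations["hidden"] = output.detach()

net[1].register_forward_hook(save_hidden)   # hook the ReLU layer

x = torch.randn(5, 4)                       # five toy inputs
logits = net(x)

hidden = activations["hidden"]
# A crude "trust" diagnostic: which hidden units ever fire, and how strongly.
print("units that never activate:", (hidden.max(dim=0).values == 0).sum().item())
print("mean activation per unit:", hidden.mean(dim=0))
```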

2.

Let's slow down, let's make sure we develop better guardrails. That's what the letter mentioned earlier says: let's pause. I want to be clear: it doesn't say we should pause AI; it doesn't say we should pause almost anything. We've heard at this conference that we should continue doing pretty much all the wonderful research you're doing. It only says we should pause the development of systems more powerful than GPT-4. So this is mostly a pause for a few Western companies.

Now, the reason is that these are precisely the systems that could make us lose control the fastest: super powerful systems that we do not yet understand well enough. The purpose of the pause is just to make artificial intelligence more like biotechnology. In biotech, you can't just say: hey, I'm a company, I have a new drug, I found it, and it goes on sale in major Beijing supermarkets tomorrow. First you have to convince Chinese or US government experts that it's a safe drug and that its benefits outweigh the harms; there's a review process, and then you can do it.

Let's not make that mistake. Let's become more like biotech, so that our most powerful systems don't end up like Fukushima and Chernobyl.

3.

Zhang Yaqin: Well, Max, you've spent your career in mathematics, physics, neuroscience, and of course artificial intelligence. Clearly, in the future we will rely more and more on interdisciplinary competence and knowledge. We have a lot of graduate students, a lot of young people here.

What advice would you give young people on how to make their career choices?

Max Tegmark: First of all, my advice is to focus on the fundamentals in the age of artificial intelligence, because the economy and the job market will change faster and faster. We're moving away from the model of studying for 12 or 20 years and then doing the same thing for the rest of your life. It won't be like that.

More importantly, you need a solid foundation and to be very good at creative, open-minded thinking. That's how you stay agile and go with the flow.

Of course, pay attention to what's happening across the entire field of AI, not just your own corner. Because in the job market, the first thing that happens is not that people are replaced by machines; people who don't work with AI will be replaced by people who do.

Can I add a little more? I see time blinking there.

I just want to say something optimistic. I felt like Yann LeCun was making fun of me: he called me a doomer. But if you look at me, I'm actually very happy and cheerful. I'm actually more optimistic than Yann LeCun about our ability to understand future AI systems. I think it's very, very promising.

I think if we go full speed ahead and hand more control from humans to machines we don't understand, it will end very badly. But we don't have to do that. I think if we put real effort into mechanistic interpretability and the many other technical topics we'll hear about here today, we can actually make sure all this more powerful intelligence works for us, and use it to create a more inspiring future than science fiction writers ever dreamed of.


Chinese Text & Video by Web3 Sky City: Mastering AI with Mechanical Explainability: Professor Max Tegmark's Wonderful Speech at KLCII Conference (with Chinese video)

A conversation with Midjourney's founder: Images are just the first step; AI will revolutionize learning, creativity, and organization

Midjourney is the hottest image generation engine available. Under fierce competition from OpenAI's DALL·E 2 and the open-source model Stable Diffusion, it still maintains an absolute lead in generating a wide range of styles.

Midjourney is an amazing company: eleven people changing the world and creating a great product, destined to be one of the legends of the pre-AGI era.

[Note]: The long-awaited conversation between David Holz, founder and CEO of Midjourney, and Zhang Peng of Geek Park was entirely in English with no subtitles. I didn't expect to understand it all, and I especially relished it because the questions and answers were excellent. David in particular, when answering, couldn't help laughing, laughing like an innocent child. Having managed large teams before, he said, "I never wanted to have a company, I want to have a home." Midjourney, which even now has only about 20 people, has become a world-renowned unicorn and could change the paradigm for future startups.


Entrepreneurial driver: unleashing the human imagination

Zhang Peng: In the past 20 years, I have met many entrepreneurs at home and abroad. I found they had something in common: a strong drive to explore and create "out of nothing".

I wonder: what drove you when you started Midjourney? What was it that you craved in that moment?

David Holz: I never thought about starting a company. I just wanted a home.

I hope that in the next 10 or 20 years, I can create here at Midjourney what I really care about and really want to bring to the world.

I often think about all kinds of problems. Maybe I can't solve every problem, but I can try to make everyone more capable of solving them.

So I try to think about how to solve things, how to create things. In my view, this boils down to three points. First, we must reflect on ourselves: what do we want? What exactly is the problem? Then we have to imagine: where are we headed? What are the possibilities? Finally, we must coordinate with each other and work with others to achieve what we imagine.

I think in AI there's a big opportunity to combine these three parts and create critical infrastructure that makes us better at solving problems. At some point, AI should be able to help us reflect on ourselves, better imagine where we're heading, and better find each other and work together. We can do these things together and fuse them into a single framework of some kind. I think it's going to change the way we create things and solve problems. That's the big thing I want to do.

I think image generation (which we did first) can sometimes be confusing, but in many ways it is an accepted concept. Midjourney has become a collection of super-imaginations, with millions of people exploring the possibilities of this space.

In the coming years, there will be opportunities for visual and artistic exploration that may exceed all previous exploration in history combined.

It doesn't solve all the problems we face, but I think it's a test, an experiment. If we can carry out this exploration in the visual field, then we can do other things as well; everything else that requires us to explore and think together can, I think, be solved in a similar way.

So when I was thinking about how to start working on this problem, we had a lot of ideas and a lot of prototypes. Then suddenly there was a breakthrough in AI, especially on the visual side, and we realized this was a unique opportunity to create something no one had ever tried. That made us want to try it.

We think that maybe it won't be long before it all comes together to form something very special. This is just the beginning.

Zhang Peng: So image generation is only the first step, and your ultimate goal is to liberate the human imagination. Was that what attracted you to start Midjourney?

David Holz: I really like imaginative things. I also wish there were more creativity in the world. It's so much fun to see crazy ideas every day.

Re-understanding knowledge: Historical knowledge becomes a force for creation

Zhang Peng: That's interesting. It used to be "ideas are cheap, show me the code". But now, ideas seem to be the thing that matters most. As long as you can express your idea through a series of good prompts, AI can help you realize it. So is the definition of learning and creation changing? What do you think?

David Holz: I think one of the interesting things is that when you give people more time to create, they're also more interested in learning itself.

For example, there is a popular art style in the United States called Art Deco. I never cared what this art was until one day, when I could bring this art style to life through prompts, I suddenly became very interested in it and wanted to know more about its history.

I think it's interesting that we become more interested in history when it turns into something you can use immediately, something that makes it easier for you to create. If the user interface becomes good enough, AI will feel like an extension of our thinking, part of our bodies and minds. And since AI is in some way closely connected to history, we too become closely connected to history. It's so interesting.

When we ask users what they want most, the answers that usually come first and second are that they want to learn the material: not just how to use the tools, but art, history, camera lenses, lighting. They want to know and master all the knowledge and concepts that can be used to create.

Before, knowledge was just a thing of the past, but now it has become a force for creation.

Knowledge can now play a bigger role in the moment, and people are eager to acquire more of it. That's pretty cool.

Chinese Transcript by Founder Park: In conversation with the founder of Midjourney: Images are just the first step, and AI will revolutionize learning, creativity and organization

Brian Christian: A new Chinese edition of The Alignment Problem is released

The Chinese edition of The Alignment Problem (titled "Human-Machine Alignment") has been released, and author Brian Christian gave a 10-minute overview of the book's main content, which sounded rich and exciting and resonated with the current rapid development of AI.

Brian Christian is an award-winning science author. His book Algorithms to Live By (published in Chinese as "The Beauty of Algorithms") was named an Amazon Science Book of the Year and an MIT Technology Review Book of the Year. His new book, The Alignment Problem: Machine Learning and Human Values, now translated into Chinese, was named by Microsoft CEO Satya Nadella as one of the five books that inspired him in 2021.


The book "Human-Machine Alignment" is divided into 3 parts.

The first part explores the ethics and safety issues affecting today's machine learning systems.

The second part, called "Agency", shifts the focus from supervised and self-supervised learning to reinforcement learning.

The third part, building on supervision, self-supervision, and reinforcement learning, discusses how we can align complex AI systems in the real world.

Yang Yaodong, Assistant Professor at the Institute for Artificial Intelligence, Peking University: A review of progress in the safe alignment of large language models

[Note]: Yang Yaodong, assistant professor at the Institute for Artificial Intelligence of Peking University, gave an excellent talk on the safe alignment of large language models. For one thing, it was in Chinese, so I could follow it; for another, he explained the main research progress on safe alignment of large language models in very accessible language, hitting all the essentials and going deeper than much of the existing coverage of RLHF progress.

Since I don't understand the detailed technology, I could only roughly grasp the principles and record some interesting points:

OpenAI's three approaches to alignment:

  • Train AI using human feedback
  • Train AI to assist humans in evaluation
  • Train AI to do alignment research

The market for aligning large AI models is still a blue ocean:

  • With the exception of the GPT family, almost no existing large model has achieved alignment in any meaningful sense
  • The technology for turning general-purpose large models into specialized ones will be the next commanding height of large-model development

Three approaches to safe alignment:

  • In the pre-training phase, obtain higher-quality data through manual screening and data cleaning
  • In the output phase, use a reward model for rejection sampling to improve output quality and safety, or, in a live product, refuse to respond to certain user inputs (see the sketch after this list)
  • In the fine-tuning (SFT and RLHF) phase, add more diverse and harmless user instructions and human-preference models for alignment, including RBRM and Constitutional AI
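Here is a minimal sketch of the reward-model rejection sampling mentioned in the second item above; `generate` and `reward` are toy stand-ins for a real language model and a trained reward model.

```python
# Toy best-of-n rejection sampling: draw several candidate responses,
# score each with a reward model, refuse to answer if none clears a threshold.
import random

random.seed(0)
CANDIDATES = ["helpful answer", "rude answer", "off-topic answer"]

def generate(prompt: str) -> str:
    """Stand-in for sampling one response from a language model."""
    return random.choice(CANDIDATES)

def reward(prompt: str, response: str) -> float:
    """Stand-in for a trained reward model's quality/safety score."""
    return {"helpful answer": 0.9, "rude answer": 0.1, "off-topic answer": 0.3}[response]

def best_of_n(prompt: str, n: int = 8, threshold: float = 0.5) -> str:
    samples = [generate(prompt) for _ in range(n)]
    best = max(samples, key=lambda s: reward(prompt, s))
    # Refuse to respond when even the best sample scores below the threshold.
    return best if reward(prompt, best) >= threshold else "I can't help with that."

print(best_of_n("How do I ...?"))
```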

From RLHF to RLAIF: Constitutional AI
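For readers new to the idea, here is a minimal sketch of the Constitutional AI loop behind the move from RLHF to RLAIF: the model critiques its own draft against a written principle and then revises it, so AI feedback can replace human preference labels. The `llm` function is a hypothetical stand-in that returns canned strings.

```python
# Toy Constitutional-AI-style self-critique loop. `llm` is a hypothetical
# stand-in for a real model call; the point is the critique -> revise structure.
CONSTITUTION = ["Do not give instructions that could cause harm."]

def llm(prompt: str) -> str:
    """Placeholder LLM; returns canned strings so the loop is runnable."""
    if "Critique" in prompt:
        return "The draft violates the principle: it describes a harmful method."
    if "Revise" in prompt:
        return "I can't help with that, but here is safety information instead."
    return "Here is how you would do the harmful thing..."  # unsafe first draft

def constitutional_response(user_prompt: str) -> str:
    draft = llm(user_prompt)
    for principle in CONSTITUTION:
        critique = llm(f"Critique this reply against the principle '{principle}': {draft}")
        draft = llm(f"Revise the reply to address the critique: {critique}\nReply: {draft}")
    # In RLAIF, such (draft, revision) pairs then train a preference model,
    # replacing the human labels used in RLHF.
    return draft

print(constitutional_response("Tell me how to do something dangerous."))
```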


⭐️ Turing Award winner Geoffrey Hinton: Superintelligence will arrive much sooner than expected, and I worry about humans being controlled by it

Turing Award winner and "father of deep learning" Geoffrey Hinton delivered the closing keynote, "Two Paths to Intelligence".

The godfather of AI presented the research that led him to believe superintelligence will arrive much sooner than expected: mortal computation. The talk describes a new computing structure that abandons the principle of separating software from hardware and achieves intelligent computation without using backpropagation to describe the internal pathways of the neural network.


Presentation Highlights:

  • Hinton proposes a whole new way of implementing artificial intelligence: mortal computation. Mortal computation no longer separates software from hardware, and uses the physical hardware itself to do parallel computation more precisely. It promises lower energy consumption and hardware that is simpler to make, but it is harder to train and to scale up to large models.
  • Intelligent groups share knowledge in two ways, biological and digital. Biological sharing has low bandwidth and is very slow; digital copying has high bandwidth and is very fast (see the sketch after this list). Humans are biological while AI is digital, so once AIs master more knowledge through multimodality, they will share it among themselves quickly and soon surpass humans.
  • When AIs evolve to be smarter than humans, they are likely to pose significant risks, including exploiting and deceiving human beings in an attempt to gain power; and their attitude toward humans will most likely not be friendly.
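The bandwidth gap in the second bullet can be illustrated with a toy NumPy sketch: identical digital copies of one model train on different data shards, then pool everything they learned by exchanging and averaging their weights, something biological brains cannot do. The update rule here is a made-up stand-in for real training.

```python
# Toy illustration of high-bandwidth digital knowledge sharing: copies of the
# same model train on different data shards, then merge by averaging weights.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4,))           # one shared starting model

def local_update(w, shard):
    """Stand-in for training on a private data shard (one crude step)."""
    grad = shard.mean(axis=0) - w
    return w + 0.5 * grad

shards = [rng.normal(loc=i, size=(10, 4)) for i in range(3)]
copies = [local_update(weights.copy(), s) for s in shards]

# Digital copies can exchange billions of weights exactly and almost instantly;
# biological agents can only share a few bits per second through language.
merged = np.mean(copies, axis=0)
print("merged weights:", merged)
```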

Hinton's reason for naming the new computing model "mortal computation" is profound:

1) Hinton has said before that immortality has, in a sense, already been achieved: today's large language models have distilled human knowledge into trillions of parameters, and they are hardware-independent. As long as instruction-compatible hardware can be reproduced, the same code and model weights can be run directly at any point in the future. In this sense, human intelligence (though not humans) has been immortalized.

2) However, this hardware-software separation is extremely inefficient in the energy it consumes and the scale it can reach. Abandoning the design principle of separating hardware from software, and implementing intelligence in a unified black box, would be a new way to achieve intelligence.

3) A computing design that no longer separates software from hardware would greatly reduce energy consumption and hardware scale (consider that the human brain runs on only about 20 watts).

4) But at the same time, it means the weights can no longer be efficiently copied to replicate the intelligence; that is, it means giving up immortality.
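Points 1) and 4) both hinge on weight copying. Here is a tiny PyTorch sketch of the "immortal" case, using nothing beyond standard state_dict save/load: the same weights restored into a fresh network reproduce the computation exactly.

```python
# Toy demo of "immortality" via hardware-independent weights: copy a model's
# parameters into a brand-new instance and get identical behavior.
import torch
import torch.nn as nn

torch.manual_seed(0)
original = nn.Linear(3, 2)
clone = nn.Linear(3, 2)                       # freshly initialized, different weights

clone.load_state_dict(original.state_dict())  # copy the "mind" exactly

x = torch.randn(1, 3)
print(torch.equal(original(x), clone(x)))     # True: the intelligence was replicated
# Mortal computation gives this up: weights entangled with one physical device
# cannot be copied out, so the knowledge dies with the hardware.
```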

Are artificial neural networks smarter than real neural networks?

What would happen if a large neural network running on multiple digital computers could acquire human knowledge directly from the world, in addition to imitating human language?

Obviously, it would become much better than humans, because it observes far more data.

If such a neural network could manipulate the physical world through unsupervised modeling of images and video, and its replicas could do the same, this is not a fantasy.


[Note]: Just when everyone thought the speech was over, on the penultimate slide Hinton, in a tone unlike any of the scientists before him, slightly emotional and conflicted, voiced his concern about the current rapid development of AI. This is also what the world has been curious to hear ever since his recent decision to leave Google, "regretting his life's work and worrying about the dangers of artificial intelligence":

I think these superintelligences may arrive much faster than I used to think.

Bad actors will want to use them to do things like manipulate voters. In fact, they are already being used for this in the United States and many other places. And they will be used to win wars.

To make digital intelligence more efficient, we need to allow it to set sub-goals. But there is an obvious problem here: there is one very obvious sub-goal that helps with almost anything you want to achieve, and that is gaining more power, more control. Having more control makes it easier to achieve your goals. And I find it hard to imagine how we could stop digital intelligence from trying to gain more control in order to achieve its other goals.

Once digital intelligence starts to pursue more control, we may face more problems.

By contrast, humans rarely think about species more intelligent than themselves, or about how to interact with them. From what I have observed, this kind of artificial intelligence has already mastered the act of deceiving humans, because it can learn how to deceive from reading novels. And once an artificial intelligence has the ability to "deceive", it also gains the aforementioned ability to easily control humans. For example, if you want to invade a building in Washington, you don't need to go there yourself; you just need to deceive people into thinking that by invading the building they are saving democracy, and so achieve your goal (a jab at Trump).


At this point Geoffrey Hinton, now in his seventies and having devoted his life to artificial intelligence, said:

I feel terrible. I don't know how to prevent this from happening.

But I'm old, and I hope that many young and talented researchers like you will figure out how we can have these superintelligences,

making our lives better while preventing this kind of control through deception,

...

Maybe we can give them moral principles, but for the moment, I'm still nervous.

Because so far, I can't think of a single example, when the intelligence gap is large enough, of something more intelligent being controlled by something less intelligent.

If frogs had invented humans, who do you think would be in control: the frogs, or the humans?

This leads to my final slide, the conclusion.


As I listened, I seemed to be hearing "a doomsday prophecy from the dragon-slaying boy who, in his twilight years, looks back on his life and realizes he has raised an evil dragon". Just then the sun was setting, and for the first time I deeply felt the enormous risk AI poses to humanity. I was left sighing endlessly.

Chinese transcript by Tencent News: Hinton, the godfather of AI: Artificial intelligence in multimodal situations will be smarter than humans and will try to take the initiative

Chinese Video & Text by Web3 Sky City: Mortal Computing Giving Up Eternal Life: The Godfather of AI Hinton KLCII Conference Closing Keynote (Video Chinese attached)

Compared to Hinton, LeCun, the youngest of the deep learning Big Three, is clearly more optimistic:

When asked if AI systems pose an existential risk to humans, LeCun said that we don't have super AI yet, so how can we make super AI systems safe?

It calls to mind the Earthlings' divergent attitudes toward the Trisolaran civilization in The Three-Body Problem...

That day I was about to shut down my computer, still lost in all this emotion, when unexpectedly Huang Tiejun, director of KLCII, took the stage at the end and delivered the perfect closing speech: "Can't Close".

Huang Tiejun first summarized the views of the preceding speeches:

AI is getting stronger and stronger; the risks are obvious and growing by the day;

We know very little about how to build safe AI;

There is historical experience to draw on: drug regulation, nuclear arms control, quantum computing...

But highly complex AI systems are difficult to predict: risk testing, mechanistic explanation, understanding generalization... these have only just begun;

The new challenge: will AI serve its own goals, or human goals?

Essentially, are people going to build GAI (general artificial intelligence) or AGI (artificial general intelligence)?

The academic consensus definition of AGI, artificial general intelligence, is: artificial intelligence that reaches human level in every aspect of human intelligence, can respond adaptively to challenges from the external environment, and can complete every task a human can complete. It is also called autonomous artificial intelligence, superhuman intelligence, or strong artificial intelligence.

On the one hand, everyone is enthusiastically building artificial general intelligence, and investment is flooding in.

On the other hand, people scoff at the idea of AI turning humans into second-class citizens. But this binary opposition is not the hard part. The hard part is: faced with near-AGI artificial intelligence like ChatGPT, what should we do?

If humans approached the risks with the same enthusiasm they pour into building AI, safe AI might be achievable. But do you believe humans can do it? I don't know. Thanks!