
Ng's latest prediction: Regarding AI, these things will not change in the next decade

Author: Jin Lei, from Aofei Temple

QbitAI | WeChat official account QbitAI

How will the development of AI change in 2024?

In Andrew Ng's latest letter, he argues that several things will not change over the next decade.


(The following is the original text of Ng's letter)

Dear friends,

It's exciting that AI is advancing faster than ever before. However, rapid change can be disorienting. In times like these, it helps to follow Jeff Bezos's advice to consider not only what is changing, but also what will stay the same. Things that won't change are all the more worth investing effort in.

Here are some of the things I believe AI won't change in the next decade:

  • We need community. People who have friends and allies will do better than those who go it alone. With the field of AI delivering breakthroughs every week, having friends who help you separate truth from hype, test your ideas, support you, and co-create with you will put you in a better position.
  • People who know how to use AI tools are more productive. Individuals and businesses who know how to work with data can discover the truth more efficiently, make better decisions, and achieve more. As AI continues to advance, this will only become more true.
  • AI needs good data to work well. Just as humans need good data to make decisions, from which marketing strategy to pursue to what to feed their kids, AI needs good data even as our algorithms expand, evolve, and improve.

So what do the above three points mean for each of us?

  • Let's keep building the AI community. I hope you'll share what you've learned with others, motivate each other, and keep looking for friends and collaborators.
  • Keep learning! Better yet, make learning a habit. It will make you more productive and has many other benefits. If you're thinking about New Year's resolutions for 2024, include your learning goals. As AI continues to evolve, everyone needs a plan to keep up.
  • Continue to foster data-centric AI practices. As businesses adopt more and more AI tools, I've found that one of the most important things is to take control of your own data. I think this will become more and more important for individuals as well.

While the above three points relate to AI, I would like to share two other things that, unfortunately, I believe will also remain unchanged over the next decade: (1) Climate change will continue to be a major challenge facing humanity. (2) Poverty, in which many people can barely afford (and some cannot afford) basic necessities, will remain a problem. I will continue to think about how AI climate modeling can help with the former, and how we can use AI to improve everyone's quality of life.

The above is Ng's latest view on the future development of AI.

In the opening days of the new year, many leading figures in AI have also published their predictions for how artificial intelligence will develop this year.

Eight predictions endorsed by LeCun

Recently, Martin Signoux of Meta wrote down eight predictions for how AI will develop this year, and LeCun has endorsed them.


(The following is the original text of Martin Signoux's prediction)

1. AI smart glasses will become a reality

With the rise of multimodal technology, leading AI companies will double down on AI-first wearables.

And what better way to host an AI assistant than a glasses form factor?


The temples sit close to the ears to deliver audio, and the camera sits close to the eyes to capture what the wearer sees; glasses keep our hands free and are comfortable to wear.

We're leading the way with Ray-Ban, but think about the recent OpenAI and Snapchat rumors... The story has just begun.

2. ChatGPT is to AI assistants what Google is to search

2023 started with ChatGPT and ended with Bard, Claude, Llama, Mistral, and thousands of derivatives.

As commoditization continues, ChatGPT will fade as the single reference point for AI assistants.


3. Goodbye LLM, hello LMM

Large multimodal models (LMMs) will continue to emerge and replace large language models in the hot discussion, including multimodal evaluation, multimodal safety, and so on.

In addition, LMM is a stepping stone towards a true general AI assistant.


4. No major breakthrough, but improvements across the board

New models will not bring a real breakthrough (GPT-5 will not appear), and large language models will still have inherent limitations and remain prone to hallucinations. We won't see any leap forward sufficient to reliably "solve" artificial general intelligence (AGI) in 2024.

Iterative improvements will make them "good enough" for a variety of tasks. Improvements in RAG, data management, better fine-tuning, quantization, and more will make LLMs robust and useful enough for many use cases, driving adoption across a wide range of services and industries.
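To make the RAG idea mentioned above concrete, here is a minimal sketch of the retrieval step: score corpus passages against the query and prepend the best matches to the prompt, so the model answers from trusted text rather than memory. The corpus, query, and bag-of-words cosine scoring are illustrative assumptions, not any particular product's implementation (real systems use learned embeddings and vector indexes).

```python
# Minimal retrieval step for RAG: score corpus passages against the
# query with bag-of-words cosine similarity and return the top k.
from collections import Counter
import math

def tf_vector(text):
    """Term-frequency vector of a text (naive whitespace tokenizer)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k passages most similar to the query."""
    q = tf_vector(query)
    return sorted(corpus, key=lambda p: cosine(q, tf_vector(p)), reverse=True)[:k]

corpus = [
    "LLMs sometimes assert facts absent from their training data.",
    "Retrieval grounds the model's answer in trusted documents.",
    "Quantization shrinks models for on-device inference.",
]
top = retrieve("how does retrieval ground an answer", corpus, k=1)
# The retrieved passages would be prepended to the LLM prompt.
```

The interesting design point is that retrieval quality, not model size, bounds how grounded the final answer can be, which is why data management appears in the same breath as RAG above.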

5. Small is beautiful

Small language models (SLMs) already exist, but cost-efficiency and sustainability considerations will accelerate this trend.

Quantization will also improve greatly, driving a major wave of on-device integration for consumer services.
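For readers unfamiliar with the quantization mentioned here, this is a toy sketch of symmetric int8 weight quantization, the basic trick behind shrinking models for on-device use. Real schemes (per-channel scales, calibration, activation quantization) are far more sophisticated; the weight values below are made up for illustration.

```python
# Toy symmetric int8 quantization: map floats to [-127, 127] with one
# shared scale factor, then reconstruct approximate floats from them.

def quantize_int8(weights):
    """Quantize a list of floats to int8-range values plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.08, 0.91]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding bounds the error of each restored weight by scale / 2.
```

Storing one byte per weight instead of four is what makes laptop- and phone-scale inference practical, at the cost of the small rounding error shown above.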

6. Open-source models will beat GPT-4, and the open-versus-closed controversy will gradually fade

Looking back at the vibrancy and progress of the open-source community over the past 12 months, it's clear that open-source models will soon close the performance gap.

We ended 2023 with only a 13% gap between Mixtral and GPT-4 on the MMLU (Massive Multitask Language Understanding) benchmark.

But most importantly, there is a realization that open source models are here to stay and drive progress, and that they will coexist with proprietary models.

7. Benchmarking will still be a challenge

No single set of benchmarks, leaderboards, or evaluation tools will become the definitive choice for model evaluation.

Instead, we will see a series of improvements (like the recent HELM) and new initiatives (such as GAIA), especially in terms of multimodality.

8. Existential risks will be discussed less than existing risks

While existential risks (X-risks) made headlines in 2023, public debate will focus more on present-day risks and controversies, such as bias, fake news, user safety, and more.

Runway CTO: Tell a new story with new tools

Anastasis Germanidis, co-founder and CTO of Runway, also shared his thoughts on the development of AI this year.


(The following is the original text of Anastasis Germanidis's prediction)

The year 2023 marks a turning point in the development of the widespread application of AI systems covering text, images, video, audio, and other modalities.

At Runway alone, we've seen the release of video generation models like Gen-1 and Gen-2, as well as tools to give these models a new form of creative control.

In the coming year, I expect to see continued progress in the following areas:

  • Video generation: Generative video models (text-to-video, image-to-video, video-to-video) were publicly released for the first time in the past year. In the coming year, the quality, versatility and controllability of these models will continue to improve rapidly. By the end of 2024, a significant portion of video content on the internet will take advantage of these models to some extent.
  • Real-time interactivity: As large models run faster and we develop more structured ways to control them, we will start to see more novel user interfaces and products emerge around them that go beyond the common prompt-to-x or chat assistant paradigm.
  • Automated AI research: Developers have embraced coding assistants based on large language models, such as GitHub Copilot. However, few tools are specifically designed to accelerate AI research workflows, for example by automating repetitive tasks like developing and debugging model code or training and evaluating models. More such tools will appear next year.
  • More focus on systems: Much of the conversation focuses on the end-to-end training capabilities of a single network. In practice, however, AI systems deployed in real-world scenarios are usually driven by pipelines of multiple models. More frameworks for building such modular systems will appear.
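The "pipeline of models" idea above can be sketched in a few lines: each stage is an independent component with a plain input/output contract, so any stage can be swapped without touching the rest. The stages below are stand-in functions, not real models.

```python
# A modular pipeline: stages are plain callables chained in order, so a
# speech model, language model, etc. can each be replaced independently.

def transcribe(audio):
    """Stand-in for a speech-to-text model."""
    return audio.upper()

def summarize(text):
    """Stand-in for a language model that keeps only the first sentence."""
    return text.split(".")[0]

def run_pipeline(stages, x):
    for stage in stages:
        x = stage(x)
    return x

out = run_pipeline([transcribe, summarize], "first sentence. second sentence.")
```

Swapping in a better summarizer is just replacing one element of the stages list, which is the modularity the prediction is pointing at.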

In addition to technological advancements, the most rewarding part of building these systems is that with each update and enhancement of capabilities, new audiences are introduced into them, telling new stories that have not been told before. I'm excited to see this continue to happen in the coming year.

Stanford Associate Professor: Transparency of foundation models

Percy Liang, an associate professor of computer science at Stanford University, focuses on the transparency of foundation models.


(The following is the original text of Percy Liang's prediction)

Just over a year ago, ChatGPT made the world aware of the power of foundation models. But there is more to that power than dazzling, eye-popping demos. Foundation models will permeate every domain and every aspect of our lives, much as computing and the Internet transformed society in the last generation. Given the breadth of this expected impact, we need to ask not only what AI can do, but also how it is built. How is it governed? Who decides?

We really don't know. This is because the transparency of AI is declining. For most of the 2010s, openness was the default orientation: researchers published papers, code, and datasets. Over the past three years, transparency has waned.

There is very little publicly available information about state-of-the-art models such as GPT-4, Gemini, and Claude. What data is used to train them? Who created that data, and under what labor practices? What values are these models aligned with? How are these models used in practice? Without transparency there is no accountability, and we have already seen the problems that a lack of transparency created in previous generations of technology, such as social media.

To make the evaluation of transparency rigorous, the Center for Research on Foundation Models has introduced the Foundation Model Transparency Index, which characterizes the transparency of foundation model developers. The good news is that many aspects of transparency (e.g., proper documentation) are achievable and align with companies' incentives. In 2024, maybe we can start to reverse this trend.

Currently, policymakers generally recognize the need to govern AI. Beyond transparency, one of the top priorities is evaluation. Indeed, without a scientific basis for understanding the capabilities and risks of these models, we are flying blind. About a year ago, the Center for Research on Foundation Models released the Holistic Evaluation of Language Models (HELM), a resource for evaluating foundation models, including language models and image generation models. Now we are working with MLCommons to develop an industry standard for safety evaluations.

But evaluation is difficult, especially for general-purpose, open-ended systems. How do you cover an almost infinite space of use cases and potential harms? How do you prevent manipulation? How do you present results in a way the public can understand? These are open research questions, and we need to address them quickly to keep up with the rapid development of AI. We need the help of the entire research community.

It is not far-fetched to imagine that ChatGPT-style assistants will become our primary way of getting information and making decisions. The behavior of foundation models, including any biases and preferences, therefore matters.

These models are said to be aligned with human values, but which values are we talking about? Again, due to the lack of transparency, we cannot see what these values are or how they were determined. Can we imagine a more democratic process for eliciting values, rather than a single organization making those decisions?

OpenAI wants to fund work in this area, and Anthropic has some research in this direction, but it's still in its early stages. I hope that some of these ideas will be incorporated into the production system.

Microsoft CTO: Ready for Exponential Growth Next Year

Kevin Scott, CTO of Microsoft, also made some predictions about how AI will develop this year.


(The following is the original text of Kevin Scott's prediction)

Without a doubt, 2023 was the most exciting and interesting year in technology that I've seen in my fairly long career.

It's worth mentioning that I'm pretty sure I said something similar at the end of 2022, and I suspect I'll say the same thing this time next year, and every year for the foreseeable future. The point is that, right now, the field of artificial intelligence is in a period of sustained exponential growth, which may represent the most profound technological advance we've ever seen.

And that's just the beginning. Modern generative AI is still in its infancy, and we are learning as we go. Although it feels like we've been living with these tools for a long time, 2023 was actually the first year that powerful AI tools like ChatGPT and Microsoft Copilot meaningfully entered the public eye as useful assistants that make people's lives easier.

By the end of next year, we'll have many new experiences, apps, and tools that bring benefits to more and more people around the world. While the magnitude and acceleration of AI's growth may keep people focused on each successive "next big thing", if we step back a bit, it's easier to see that the opportunity in front of us is much bigger than we have yet realized.

Because we only experience the products of exponential curves every few years or so, most recently GPT-4, it's easy to forget in the meantime how astonishing the rate of growth actually is. And, human nature being what it is, we adapt very quickly and soon take for granted each new set of wild possibilities that arises.

So my hope for everyone working in AI and technology in the coming year is that we recognize that the next sample of the exponential curve is coming, and prepare properly for the (surely incredible) results.

May 2024 continue to bring the excitement of discovery and continued innovation to all of us.

So, what do you think is worth looking forward to in the development of AI this year?

Reference Links:

[1]https://twitter.com/MartinSignoux/status/1740729650530365646

[2]https://twitter.com/AndrewYNg/status/1741892184977309823

[3]https://www.deeplearning.ai/the-batch/issue-229/

[4]https://twitter.com/ylecun/status/1740830697181655432

— END —
