Editors: Run, Alan
2023, the breakout year of large models, is drawing to a close. Looking ahead, Bill Gates, Fei-Fei Li, Andrew Ng, and others have offered their own predictions for how artificial intelligence will develop in 2024.
2023 could fairly be called the springtime of artificial intelligence.
Over the past year, ChatGPT has become a household name.
We have been repeatedly astonished by the changes in AI and at AI companies, and they have become our regular after-dinner conversation.
Generative AI made major strides during the year, and AI startups attracted substantial funding.
Bigwigs in the AI space began discussing the possibility of AGI, and policymakers started taking AI regulation seriously.
But in the eyes of leaders in AI and the tech industry, the AI wave may have only just begun, and the years ahead may bring even bigger swells.
Bill Gates, Fei-Fei Li, Andrew Ng, and others have recently shared their views on where AI is headed.
All of them look forward to larger multimodal models, exciting new capabilities, and more conversations about how we use and govern this technology.
Bill Gates: 2 predictions, 1 lesson, 5 questions
Bill Gates published a long year-end post on his official blog, describing 2023 as the beginning of a new era.
Article address: https://www.gatesnotes.com/The-Year-Ahead-2024?WT.mc_id=20231218210000_TYA-2024_MED-ST_&WT.tsrc=MEDST#ALChapter2
As usual, the post opens with his work at the Gates Foundation, discussing the far-reaching changes that have taken place, or are about to take place, around the world.
On the development of AI technology, he said:
If I had to make a prediction: in a high-income country like the United States, I would guess we are 18 to 24 months away from widespread use of AI by the general public.
In African countries, I expect to see a similar level of use in three years or so. That is still a gap, but it is a much shorter lag than we have seen with other innovations.
In other words, Gates believes that AI, the most far-reaching innovation on the planet, will sweep the world within about three years.
Gates said in the post that 2023 was the first time he used AI at work for "serious" purposes.
Compared with previous years, the world now has a much clearer sense of what AI can do on its own and what kinds of work it can assist with.
But for most people, there is still some way to go before AI plays a full role in their day-to-day work.
Based on his own data and observations, he says one of the most important lessons for the industry is that a product must fit the people who use it.
He gave a simple example: people in Pakistan usually send voice messages to one another rather than texting or emailing, so it makes sense to build applications that rely on voice commands rather than long typed-out queries.
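To make the voice-first idea concrete, here is a minimal sketch of such a pipeline. It assumes the open-source whisper package for transcription; the answer_query function is a hypothetical stand-in for whatever language model produces the reply.

```python
# Minimal sketch of a voice-first assistant pipeline.
# Assumptions: the open-source `whisper` package is installed
# (pip install openai-whisper), and `answer_query` is a hypothetical
# stand-in for whatever model actually serves the reply.
import whisper

model = whisper.load_model("base")  # small multilingual speech-to-text model

def answer_query(text: str) -> str:
    # Hypothetical placeholder: route the transcribed text to an LLM here.
    return f"(model reply to: {text!r})"

def handle_voice_message(audio_path: str) -> str:
    """Transcribe a voice message and answer it, so users never have to type."""
    result = model.transcribe(audio_path)  # returns a dict with a "text" field
    return answer_query(result["text"].strip())

print(handle_voice_message("voice_note.ogg"))
```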
In the areas he cares most about, Gates posed five questions, hoping artificial intelligence can play a major role in each:
- Can AI fight antibiotic resistance?
- Can AI create personalized tutors for each student?
- Can AI help treat high-risk pregnancies?
- Can AI help people assess their risk of HIV infection?
- Can AI make medical information easier for every health worker to access?
If we make smart investments now, AI can make the world a fairer place. It can reduce, or even eliminate, the lag between when innovations reach the rich world and when they reach the poor world.
Andrew Ng: LLMs can understand the world, and bad AI regulation is worse than none
In a recent interview with the Financial Times, Ng said that AI doomsday theories are absurd and that AI regulation will hinder the development of the technology itself.
In his view, current AI regulatory measures do little to prevent the problems they target. Such ineffective regulation brings no benefit beyond hindering technological progress.
So, in his opinion, rather than imposing low-quality regulation, it would be better not to regulate at all.
He cited the recent voluntary commitments that big tech companies made to the U.S. government to "watermark" AI-generated content in order to tackle problems such as disinformation.
In his view, since those White House pledges, some companies have actually stopped watermarking text content, which shows that voluntary commitment has failed as a regulatory approach.
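For context, the text watermarks at issue here are typically statistical: generation is biased toward a pseudo-random "green list" of tokens that a detector can later test for. Below is a toy sketch of that idea in the spirit of Kirchenbauer et al. (2023); the word-level tokens, the 50/50 vocabulary split, and the detection threshold are illustrative assumptions, not any company's actual scheme.

```python
# Toy sketch of statistical "green list" text watermarking, in the
# spirit of Kirchenbauer et al. (2023). Word-level tokens and the
# 50/50 vocabulary split are illustrative assumptions only.
import hashlib
import math
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly pick the 'green' part of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)  # canonical order before shuffling
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Z-score for how far the green-token count exceeds chance; near 0 for unwatermarked text."""
    n = len(tokens) - 1
    hits = sum(
        tok in green_list(prev, vocab, fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

# A watermarking generator would, at each sampling step, nudge the model
# toward green_list(prev_token, vocab); a detector only needs the hash key.
```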
On the other hand, if regulators carry this kind of ineffective regulation over to issues such as open-source AI, it could stifle open-source development entirely and create a monopoly for big tech companies.
If the quality of AI regulation stays at today's level, he argues, there is really no point in regulating at all.
Ng reiterated that, in reality, he wants governments to get hands-on and write good regulation, not the bad proposals now on the table, so he is not advocating a hands-off approach. But between bad regulation and no regulation, he would rather have no regulation.
Ng also said in the interview that today's LLMs already show the beginnings of a world model.
"The scientific evidence I've seen shows that AI models can indeed build models of the world. So if an AI has a model of the world, then I'm inclined to believe it does understand the world, though that is my own reading of what the word 'understanding' means.
If you have a world model, you understand how the world works and can predict how it will evolve under different scenarios. There is scientific evidence that LLMs, trained on large amounts of data, can indeed build such a model of the world."
Fei-Fei Li and Stanford HAI release seven predictions
The challenge for knowledge workers
Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and others expect AI companies to deliver products that genuinely move productivity.
Knowledge workers will be affected as never before: the jobs of creatives, lawyers, and finance professionals will change dramatically.
For the past 30 years, these workers were largely untouched by the computer revolution.
We should embrace the changes AI brings, letting it make our jobs better and enabling new things we could not do before.
The spread of disinformation
James Landay, a professor at Stanford's School of Engineering, and others believe we will see new large multimodal models, especially in video generation.
That means we must also be more vigilant about serious deepfakes;
as consumers we need to be aware of this, and as citizens we need to be aware of this.
We will see companies like OpenAI, along with more startups, releasing the next generation of bigger models.
There will still be plenty of talk along the lines of "Is this AGI? What is AGI?", but we do not have to worry about AI taking over the world; that is all hype.
What we should really worry about is the harm happening right now: disinformation and deepfakes.
GPU shortage
Russ Altman, a professor at Stanford University, and others expressed concern about the global GPU shortage.
Big companies are trying to bring AI capabilities in-house, and GPU manufacturers like Nvidia are running at full capacity.
GPUs, or AI computing power more broadly, represent a new dimension of competitiveness, for companies and even for countries.
The race for GPUs will also put tremendous pressure on innovators to come up with hardware solutions that are cheaper and easier to manufacture and use.
Stanford University, along with many other research institutes, is working on low-power alternatives to current GPUs.
There is still a long way to go before mass commercialization is achieved, but we must move forward in order to democratize AI technology.
More useful agents
Peter Norvig, a Distinguished Education Fellow at Stanford, believes that in the coming year agents will come into their own, with AI able to connect to other services and solve real problems.
2023 was the year you could chat with an AI; people's relationship with it was simply interaction through typed input and output.
In 2024, we will see agents that can get work done for people: making reservations, planning trips, and more.
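Under the hood, such agents typically run a simple loop: the model proposes a tool call, a runtime executes it, and the result is fed back to the model. Here is a minimal sketch of that loop; the book_table tool and the stubbed propose_action "model" are hypothetical illustrations, not any product's real API.

```python
# Minimal sketch of the tool-calling loop behind such agents.
# `book_table` and the stubbed `propose_action` model are hypothetical
# illustrations, not any vendor's real API.
import json

def book_table(restaurant: str, time: str) -> str:
    """Toy tool; a real agent would call an actual booking service here."""
    return f"Booked a table at {restaurant} for {time}."

TOOLS = {"book_table": book_table}

def propose_action(goal: str) -> str:
    # Stub standing in for an LLM: returns the tool call it wants, as JSON.
    return json.dumps({"tool": "book_table",
                       "args": {"restaurant": "Luigi's", "time": "19:00"}})

def run_agent(goal: str) -> str:
    """One step of the loop: the model proposes a call, the runtime executes it."""
    call = json.loads(propose_action(goal))
    result = TOOLS[call["tool"]](**call["args"])
    # In a full agent, `result` would be fed back to the model for the next step.
    return result

print(run_agent("Get me dinner for two tonight"))
```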
In addition, we will move towards multimedia.
So far, the focus has been on language models, and then image models. After that, we will also have enough processing power to develop video models, which will be very interesting.
What we train on today is highly purposeful: people write down the things they find interesting and important in pages and paragraphs, and they point cameras at events as they happen.
But video can also come from cameras that run 24 hours a day, capturing whatever happens without any purposeful filtering.
AI models have never had that kind of data before, and it could give them a much better understanding of everything.
Hope for regulation
Fei-Fei Li, co-director of Stanford HAI, said that AI policy will be worth watching in 2024.
Our policies should provide more opportunities for AI development by ensuring that students and researchers have access to AI resources, data, and tools.
In addition, we need to develop and use artificial intelligence safely, securely, and reliably.
Therefore, policies should not only focus on fostering a vibrant AI ecosystem, but also on harnessing and managing AI technologies.
We need relevant legislation and executive orders, and the public sector should also receive more investment.
Ask questions and give solutions
Ge Wang, a senior fellow at HAI at Stanford University, hopes that we will have enough funding to study what lives, communities, education, and society can expect from AI.
More and more of this generative AI technology will be embedded in our work, play, and communication.
We need to give ourselves time and space to think about what is permissible and where we should limit it.
Back in February, the academic publisher Springer issued a statement saying that large language models could be used when drafting articles but could not be credited as an author on any publication. The reason it cited is accountability, which is very important.
Putting a position out there in earnest, explaining the reasoning behind it, acknowledging that this is how things are understood today, and allowing that the policy may improve later: that is the right way to proceed.
Institutions and organizations must adopt this mindset and work to put such policies on paper in 2024.
Companies will face complex regulations
Jennifer King, a privacy and data policy researcher at Stanford HAI, said that in addition to the EU's Artificial Intelligence Act this year, by mid-2024 California and Colorado will pass regulations addressing automated decision-making in the context of consumer privacy.
Although these regulations are limited to AI systems that are trained on or collect personal information, both give consumers a choice about whether to allow certain systems to use AI and their personal data.
Companies will have to start thinking about what it means when customers exercise these rights, especially at scale.
For example, suppose a large company uses AI to assist with hiring and hundreds of candidates opt out of AI screening. Must those résumés be reviewed by hand? What difference would that make? Would humans do better? We are only beginning to work through these questions.
Resources:
https://x.com/StanfordHAI/status/1736778609808036101?s=20
https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3
https://www.businessinsider.com/bill-gates-ai-radically-transform-jobs-healthcare-education-2023-12