
Why AI Will Save the World

Published by: 虎嗅APP (Huxiu)

Author: Marc Andreessen. Title image from Terminator Genisys.

Marc Andreessen, co-founder of a16z (Andreessen Horowitz), a top Silicon Valley venture capital firm, recently published a defense of AI. As generative AI exemplified by ChatGPT develops rapidly in the global market, some leading AI scholars have expressed concern about the pace of that development, and "AI doomsday" narratives are spreading. Andreessen, a successful serial entrepreneur and investor who co-created Mosaic, one of the first widely used web browsers, co-founded Netscape, and was one of Facebook's earliest investors, is an optimist who famously declared that "software is eating the world."

In this wave of AI, he still expressed optimism, arguing:

  • AI has great potential to improve lives and drive economic growth. It can play a role in fields from science to medicine to the arts, expanding human capabilities and having a positive impact.
  • Much of the concern about AI is based on misconception and exaggeration. AI is not going to destroy the world; on the contrary, it could save it and create a better future for us.
  • Open, transparent, and rational discussion is needed, involving all stakeholders in decision-making, to address AI's risks and regulation while avoiding excessive restriction and censorship. While AI may make it easier for bad actors to do bad things, that risk should be addressed through existing laws and the legitimate, defensive use of AI, not by stopping AI's development.
  • Large AI companies and startups should be encouraged to develop AI, but they should not be allowed to win government protection by promoting misreadings of AI risk, and they need to compete on a level playing field.
  • Open-source AI should be allowed to spread freely, promoting open innovation and learning, and governments should collaborate with the private sector to use AI to address potential risks.

Here is a condensed version of Andreessen's essay "Why AI Will Save the World":

The era of artificial intelligence has arrived, and people are alarmed by it.

Luckily, I'm here with good news: AI isn't going to destroy the world; in fact, it may save it.

First, a short description of what artificial intelligence is: AI is the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other: it runs, takes input, processes it, and generates output. AI's output is useful across a wide range of fields, from programming to medicine to law to the creative arts. And like any other technology, AI is owned and controlled by people.
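
To ground the "mathematics and code" description, here is a minimal, self-contained sketch (mine, not Andreessen's) of the kind of computation at the core of modern AI: a tiny neural-network layer that runs, accepts numeric input, processes it, and produces output. The weights and numbers are made up for illustration; real systems learn billions of such weights from data.

```python
import math

def neuron_layer(inputs, weights, biases):
    """One tiny neural-network layer: weighted sums passed through a squashing function.

    This is the basic 'runs, accepts input, processes, produces output' loop
    that large AI models repeat, at enormous scale, with learned weights.
    """
    outputs = []
    for w_row, b in zip(weights, biases):
        # Weighted sum of the inputs plus a bias term.
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        # Sigmoid squashes the sum into the range (0, 1).
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs

# Illustrative, made-up weights; a trained model would have learned these.
print(neuron_layer([0.5, -1.2],
                   weights=[[0.8, 0.3], [-0.5, 1.1]],
                   biases=[0.1, -0.2]))
```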

A short description of what artificial intelligence is not: killer software and robots that suddenly spring to life and decide to murder humans or destroy everything, as in the movies.

A shorter description of what AI might be: a way to make everything we care about better.

Why can artificial intelligence make everything we care about better?

Human intelligence makes things better across a very wide range of life. Intelligence is the lever we have used to create the world we live in today: science, technology, mathematics, and more. Artificial intelligence offers us the opportunity to profoundly augment human intelligence, making all of these outcomes better and faster.

In the new era of artificial intelligence, every child will have an AI tutor with infinite patience, infinite compassion, and infinite knowledge. Every person will have a similar AI assistant for all of life's opportunities and challenges. Every scientist, artist, and engineer will have an AI partner that extends what they can achieve in their field. Leaders' decision-making will improve as well, with enormous amplification effects.

AI will drive economic growth, the creation of new industries, and the creation of new jobs. Scientific breakthroughs and new technologies will expand dramatically as artificial intelligence helps us further decode the laws of nature. The creative arts will enter a golden age, as AI-augmented artists realize their visions faster and at greater scale. Even warfare will be improved, with wartime death rates dramatically reduced.

Artificial intelligence will let us take on new challenges that would be impossible without it, such as curing all diseases and achieving interstellar travel. AI's human qualities are also underrated: AI art gives people with less technical skill the freedom to create and share their artistic ideas, and AI medical chatbots help people cope with adversity with more compassion.

The development and proliferation of AI, far from being a risk we should fear, is a moral obligation we have to ourselves, to our children, and to our future. We should be living in a much better world with AI, and now we can.

Why, then, is there panic?

The current discussion about AI is shot through with fear and paranoia, rooted in our competitive, zero-sum instincts. But in most cases this fear is misplaced: we now live in a largely positive-sum world, where gains in one place reinforce gains elsewhere. We need to look at AI more positively and openly and see its potential.

AI will bring profound change, improve lives, and change the world. We should stay alert to potential problems, research AI responsibly, and ensure its impact is positive. AI can help solve complex problems such as climate change, disease, poverty, war, and our own cognitive limitations.

Discussions of AI risk and regulation involve "Baptists" and "Bootleggers." The "Baptists" are people who genuinely care about the risks of AI; they may come from academia, tech ethics, or public-interest groups. The "Bootleggers" stand to benefit financially from regulation; they include CEOs of big tech companies and others with power and influence, who may see new AI regulations as barriers that protect their market position and deter competitors.

Our goal should be to evaluate all parties' arguments fairly and rationally in an open and transparent environment: to understand the concerns, weigh the proposals, and explore solutions, rather than simply applying labels. The concerns of the "Baptists" cannot be ignored, because AI does present potential risks. And the "Bootleggers" may offer valuable points, such as using regulation to promote the safe and responsible use of AI.

We need a more open, inclusive and rational dialogue environment that involves all stakeholders in decision-making, rather than a few people.

AI Risk #1: Will AI Destroy Us?

The most primal doomsday fear about AI is that AI will decide to wipe out humanity. The fear that a technology of our own creation could rise up and destroy us is deeply rooted in our culture: versions of the story, from the myth of Prometheus to Frankenstein to The Terminator, reflect this idea. We should analyze the potential risks of new technology rationally: fire can be used to destroy, but it is also the foundation of modern civilization.

I think the notion that AI will destroy humanity is a category error. AI is not an evolved living creature; it is mathematics and code, built, owned, used, and controlled by humans. Believing that AI will develop a will of its own and try to eliminate us is superstition. AI is not a living entity; it is no more alive than a toaster.

However, some people who genuinely believe in killer AI, the "Baptists," have attracted a great deal of media attention. They advocate extreme restrictions on AI, and some even argue for a preemptive stance that might require large-scale physical violence and death to head off existential risk. I question their motives, because their position is unscientific and extreme.

Specifically, I think three things are going on. Some people exaggerate the importance of their own work, confessing guilt in order to claim credit for the sin. Some people are paid to be doomsayers, and their statements should be treated with caution. And in California, a mix of fringe figures attracted to cults and some actual industry experts has developed "AI risk" into a full-blown cult.

In fact, this doomsday cult is nothing new. The West has a long tradition of millenarian apocalypse cults, and the AI risk cult has all the hallmarks of one. These believers, whether sincere or insincere, rational or irrational, need to be responded to appropriately, because they may inspire harmful behavior. Understanding their motivations and the social environment they operate in can help us find the right response.

AI Risk #2: Will AI Destroy Our Society?

Another view of AI risk holds that AI could profoundly harm society by producing harmful outputs. On this view, the risks of AI stem from social effects rather than actual physical harm. The social risk of AI shows up mainly in "AI alignment": AI should be aligned with human values, but which values it should be aligned with is itself a complex question.

This kind of risk has already played out in the "trust and safety" wars on social media. For years, social media platforms have been pressured by governments, activists, and others to censor and restrict all kinds of content. Similar concerns have now arrived in AI, where attention has turned to alignment.

The experience of social media censorship teaches us two things. On one hand, freedom of speech is not absolute: any technology platform that generates or facilitates content will impose some limits. On the other hand, once censorship begins, there is a slippery-slope effect: it covers more and more ground over time, a dynamic already visible in social media censorship.

When it comes to AI alignment, proponents argue that they can engineer AI to generate speech and thought that benefit society while banning what harms it. Opponents argue that doing so carries the arrogance of the thought police, and may in some cases even amount to a crime; it could, they argue, lead to a new kind of authoritarian system.

While the argument for imposing strict limits on AI output may be held by only a small portion of the population, its consequences could be far-reaching and global, because AI is likely to become the control layer of the future world. The debate over what AI is allowed to say and generate will therefore matter even more than the debate over social media censorship.

In general, we should prevent excessive censorship of AI and allow it to develop freely within the interests of society.

Don't let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

Fears that technology will destroy jobs go back hundreds of years, most recently in the outsourcing panic of the 2000s and the automation panic of the 2010s. Yet while there are always voices claiming that technology will devastate the human workforce, throughout history new technologies have always led to more jobs and higher wages.

Now, the advent of artificial intelligence has caused panic again, with some people thinking that AI will take away all jobs. In reality, however, if AI is allowed to evolve and be widely applied across the economy, it could trigger the most significant and lasting economic boom on record, with corresponding unprecedented job and wage growth.

Predictions that automation will eliminate jobs typically rest on the lump of labor fallacy: the mistaken belief that at any given time there is a fixed amount of work to be done in the economy, and that it is done either by machines or by people. In reality, when technology is applied to production, we get productivity growth, which lowers the prices of goods and services, which increases demand across the economy, which drives new production, and with it new industries and new jobs.

Zooming in to the individual worker, the market sets pay according to the worker's marginal productivity. A worker in a technology-infused business is more productive than a worker in a traditional business, and so earns higher wages. In short, technology raises people's productivity, which lowers the prices of goods and services while raising wages, driving economic growth and employment growth and creating new jobs and new industries.
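
To make the mechanism in the two paragraphs above concrete, here is a toy numerical model (my sketch, not Andreessen's, with made-up numbers and an assumed constant price elasticity of demand): a productivity gain lowers the price, demand expands, and both pay per worker and the total wage bill rise.

```python
# Toy model of the productivity -> prices -> demand -> wages loop.
# Every number here is an illustrative assumption, not an empirical estimate.

MARKUP = 1.25     # price = unit labor cost x markup
ELASTICITY = 1.5  # assumed price elasticity of demand (>1: demand is elastic)

def snapshot(productivity, wage, reference_price, reference_demand):
    """Compute price, demand, headcount, and wage bill for one industry."""
    price = (wage / productivity) * MARKUP
    # Demand expands as price falls, per the assumed constant elasticity.
    demand = reference_demand * (reference_price / price) ** ELASTICITY
    workers = demand / productivity
    return price, demand, workers, workers * wage

# Before: 10 units per worker, a wage of 200, 1,000 units demanded.
base_price = (200 / 10) * MARKUP
before = snapshot(10, 200, base_price, 1000)

# After: technology doubles productivity; workers capture half the gain
# as higher pay, and the other half shows up as a lower price.
after = snapshot(20, 300, base_price, 1000)

for label, (price, demand, workers, wage_bill) in (("before", before), ("after", after)):
    print(f"{label}: price={price:.2f}, demand={demand:.0f}, "
          f"workers={workers:.1f}, wage bill={wage_bill:.0f}")
```

In this toy run the price falls, demand grows by about half, each worker earns 50% more, and the industry's total wage bill rises; the workers freed per unit of output are, on the essay's argument, exactly the labor that flows into newly created industries.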

However, one might think that this time AI is different: that it can replace all human labor. But suppose all human labor were replaced by machines. That would mean skyrocketing economic productivity, soaring consumer welfare and purchasing power, and exploding new demand, with entrepreneurs creating new industries, products, and services and hiring people and AI as fast as possible to meet that demand. If AI replaces those workers too, the cycle repeats, further boosting consumer welfare, economic growth, and job and wage growth. AI is not going to wipe out jobs, now or ever.

We should be so lucky.

AI Risk #4: Will AI Lead to Severe Inequality?

Concerns about AI taking jobs lead directly to a related fear: that AI will cause severe wealth inequality. This worry has been proven wrong many times in the past, and it is no more true now.

The real interest of a technology's owner is to sell the product to as many customers as possible, not to hoard the technology. Although development and early adoption may begin with big companies and the wealthy, technology ultimately spreads worldwide to everyone's benefit. Tesla's growth is a good example: it built expensive sports cars first, then used that revenue to make more affordable cars, eventually making its products available to the largest possible market: global consumers.

Elon Musk maximized his profits by selling Tesla's products to the whole world. The pattern is not limited to cars: electricity, radio, computers, the Internet, mobile phones, and search engines were all built by companies racing to drive prices down until people around the world could afford them. AI is now following the same path: products such as Microsoft's Bing and Google's Bard are not just inexpensive but free to use. These vendors want to maximize profit by maximizing the size of their market.

So in reality, technology does not drive the concentration of wealth; on the contrary, it empowers individual users, who end up capturing most of the value it generates. As long as they operate in a free market, the companies building AI will compete fiercely to make this happen.

Inequality does exist in our society, but it is not driven by technology; it is driven by the sectors most resistant to new technology and most subject to government intervention, such as housing, education, and health care. AI will not cause more inequality; on the contrary, if we allow it, AI may help reduce inequality.

AI Risk #5: Will AI Cause Bad People to Do Bad Things?

Of the five big concerns about AI risk, I've argued that the first four pose no real threat: AI will not suddenly come to life and turn on us, AI will not destroy our society, AI will not cause mass unemployment, and AI will not cause a damaging increase in social inequality. But the fifth, I must admit, could actually become a problem: AI may make it easier for bad people to do bad things.

From the beginning, technology has been a tool: usable for good, such as cooking food and building houses, and for ill, such as burning people or bludgeoning them with stones. As an advanced technology, AI can certainly be used for both good and evil. Bad actors, including criminals, terrorists, and hostile governments, may all use AI to do bad things more easily.

This view might tempt us to ban AI now, before things get too bad. But AI is not some rare, esoteric physical material like plutonium. It is the opposite: the most readily available materials there are, math and code. Research and learning resources for AI are everywhere, including countless free online courses, books, papers, and videos, and good open-source implementations are proliferating. AI is as ubiquitous as air, and trying to stop its development would require totalitarian oppression so harsh (a global government monitoring and controlling every computer, or raids to seize offending graphics cards) that we would no longer have a society worth protecting.

So there are two more practical ways to deal with the risk of bad actors using AI to do bad things. First, our laws already criminalize most of the bad things anyone might do with AI. Hacking the Pentagon, stealing money from a bank, creating a biological weapon, committing a terrorist act: these are all crimes. Our job should be to prevent these crimes when we can and prosecute them when we cannot. We don't even need new laws for most of this. And if we discover a new bad use for AI that is not already covered, we should legislate to ban that specific use.

Second, we should use AI as much as possible for good, legitimate, defensive work, letting AI's power help prevent bad things from happening. For example, if you're worried about AI generating fake people and fake videos, the answer is to build new systems that let people verify themselves and real content via cryptographic signatures, not to ban AI. We should also put AI to work in cyber defense, biological defense, counterterrorism, and everything else we need to do.
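
As a concrete illustration of that verification idea (a sketch of mine, not from the essay), the snippet below signs a piece of content with an Ed25519 digital signature and verifies it, using the third-party Python `cryptography` package. Real provenance systems must also solve key distribution and identity; those hard parts are omitted here.

```python
# Minimal sketch: signing content so others can verify its origin.
# Requires the third-party package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A creator generates a key pair once; the public key is shared openly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The creator signs the content they actually produced.
content = b"This video was recorded by me."
signature = private_key.sign(content)

# Anyone holding the public key can check the signature.
try:
    public_key.verify(signature, content)
    print("Verified: content is intact and from the key holder.")
except InvalidSignature:
    print("Rejected: content was altered or not signed by this key.")

# A tampered copy (e.g., an AI-generated fake) fails verification.
try:
    public_key.verify(signature, b"This video was recorded by someone else.")
except InvalidSignature:
    print("Tampered copy rejected.")
```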

Overall, while banning AI may look like a way to protect society, the genuinely effective path is to use AI to prevent AI from being used by bad actors to do bad things. Done that way, a world full of AI will be safer than the world we live in now.

What should we do?

I propose a few strategies for dealing with AI:

First, large AI companies should be encouraged to develop AI as fast and aggressively as possible, but they should not be allowed to win government protection by promoting misreadings of AI risk, nor to form a government-protected cartel.

Second, AI startups should also be encouraged to develop AI as fast and aggressively as possible. They should neither receive government protection nor government assistance; they should simply be allowed to compete fairly.

Third, open-source AI should be allowed to spread freely to compete with large companies and startups. This not only facilitates open innovation, but also facilitates those who wish to learn AI.

Fourth, governments should work with the private sector to actively use AI to address potential risks, from AI risks to more general issues such as malnutrition, disease, and climate issues.

It's time to build.

Legends and heroes

I conclude with two simple statements:

The development of AI began in the 1940s, alongside the invention of the computer. The first scientific paper on neural networks, the architecture behind the AI we have today, was published in 1943. Over the past 80 years, entire generations of AI scientists were born, went to school, worked, and in many cases died without seeing what we have now. Every one of them is a legend.

Today, a growing number of engineers, many of them young, whose grandparents or even great-grandparents may have helped create the ideas behind AI, are working to make AI a reality, in the face of a wall of panic-mongering and doomerism that tries to paint them as reckless villains. I do not believe they are reckless or villains. Every one of them is a hero. My team and I are thrilled to support as many of them as we can, and we will stand behind them and their work 100%.

Original essay (in English): Marc Andreessen, "Why AI Will Save the World"

