
Marc Andreessen: AI isn't going to destroy the world; it may actually save it

Author: Web3 Pathfinder

Fortunately, I'm here to bring good news: AI isn't going to destroy the world; in fact, it may save it.

First, a brief description of what AI is: the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other: it runs, takes input, processes it, and generates output. AI's output is useful across a wide range of fields, from coding to medicine to law to the creative arts. It is owned and controlled by people, like any other technology.
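To make the "math and code" point concrete, here is a toy sketch, purely illustrative and not from any particular AI system, of a single artificial neuron, the basic building block of the neural networks behind modern AI. The weights below are hand-picked for the example; real systems learn them from data, but the mechanics are the same arithmetic:

```python
import math

def neuron(inputs, weights, bias):
    # An artificial "neuron" is just a weighted sum of its inputs,
    # squashed into the range (0, 1) by a sigmoid function.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hand-picked weights for illustration; training would learn these from data.
print(neuron([0.5, 0.8], weights=[0.9, -0.4], bias=0.1))  # ~0.557
```

Stack millions of these together and you get the large models making headlines today; at no point does anything other than arithmetic happen.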

Next, a quick description of what AI is not: it is not killer software or robots that will spring to life and decide to murder the human race or otherwise ruin everything, as in the movies.

To put it simply, AI can make everything we care about better.

Why AI can make everything we care about better

The most validated core conclusion of decades of social science research is that human intelligence improves life outcomes across almost every domain of life. Smarter people have better outcomes in almost every field of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision-making, understanding others' perspectives, creative arts, parenting outcomes, and life satisfaction.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, mathematics, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence to all of these domains, we would all still be living in mud huts, scratching out a meager existence from subsistence farming. Instead, we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

What AI offers us is the opportunity to profoundly augment human intelligence, making all of these outcomes of intelligence, and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars, much better from here.

The augmentation of human intelligence by AI has already begun – AI already exists around us in various forms, is now rapidly escalating through large language models like ChatGPT, and will accelerate rapidly from now on – if we allow it.

In our new era of artificial intelligence:

1. Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI tutor will be by each child's side at every step of their development, helping them maximize their potential with the machine version of infinite love.

2. Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life's opportunities and challenges, maximizing every person's outcomes.

3. Every scientist will have an AI assistant/collaborator/partner that will greatly expand the scope of their scientific research and achievement. So will every artist, every engineer, every businessperson, every doctor, every caregiver, in their own fields.

4. Every leader, from CEOs and government officials to nonprofit presidents, athletic coaches, and teachers, will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.

5. Productivity growth across the economy will accelerate dramatically, driving economic growth, the creation of new industries, the creation of new jobs, and wage growth, leading to a new era of material prosperity worldwide.

6. Scientific breakthroughs, new technologies and new drugs will expand dramatically as AI helps us further decode the laws of nature and apply them to our benefit.

7. The creative arts will enter a golden age as AI-augmented artists, musicians, writers, and filmmakers are able to realize their visions faster and on a larger scale than ever before.

8. I even think that AI will improve warfare when necessary, by drastically reducing wartime mortality. Every war is characterized by terrifying decisions made by extremely limited human leaders under extreme stress and limited information. Military commanders and political leaders will now have AI advisors to help them make better strategic and tactical decisions that minimize risk, mistakes, and unnecessary bloodshed.

9. In short, anything that people do today with their natural intelligence can be done much better with AI, and we will be able to take on new challenges that were previously impossible to tackle, from curing all diseases to achieving interstellar travel.

10. And it's not just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend genuinely improves people's ability to handle adversity. And AI medical chatbots are already more compassionate than human doctors. Rather than making the world harsher and more mechanistic, infinitely patient and compassionate AI will make the world warmer and kinder.

The stakes are high here. The opportunities are far-reaching. AI is quite possibly one of the most important and best things our civilization has created, certainly on par with, and possibly surpassing, electricity and microprocessors.

The development and adoption of AI – far from a risk we should fear – is a moral responsibility to ourselves, our children, and our future.

We deserve to live in a much better world with AI, and now we can.

So, why is there panic?

Contrary to this positive view, the current public discussion about AI is rife with hysterical fear and paranoia.

Some claim that AI will kill us all, ruin our society, take away all our jobs, lead to gross inequality, and enable the bad guys to do terrible things.

What explains this divergence in potential outcomes, from near-utopia to horrifying dystopia?

Historically, every significant new technology, from the electric light to automobiles, from radio to the Internet, has sparked a moral panic: a social contagion that convinces people the new technology is going to destroy the world, or society, or both. The team at Pessimists Archive has documented these technology-driven moral panics over the decades, and their history makes the pattern vividly clear. It turns out the current AI panic is not even the first.

Now, it is certainly the case that many new technologies have led to bad outcomes, often the same technologies that have otherwise been enormously beneficial to our welfare. So it's not that the mere existence of a moral panic means there is nothing to be concerned about.

But moral panic is inherently irrational — it exaggerates what may be legitimate concerns into a hysteria that makes it difficult to deal with actually serious problems.

Now, we are facing a full-blown moral panic over AI.

This moral panic has been used by a variety of actors as a driving force for policy action – demanding new AI restrictions, regulations, and laws. These actors have made extremely exaggerated public statements about the dangers of AI – drawing energy from moral panic and further inciting it – and they all claim to be selfless defenders of the public interest.

But are they?

Are they right or wrong?

The Baptists and Bootleggers of AI

Economists have observed a long-standing pattern in reform movements of this kind. The actors within such movements fall into two categories, "Baptists" and "Bootleggers", terms drawing on the historical example of alcohol Prohibition in the United States in the 1920s:

1. "Baptists" are social reformers who truly believe in social reform, and who deeply feel emotionally (rather than intellectually) the need for new restrictions, regulations, and laws to prevent social catastrophe. When it comes to alcohol bans, these actors are usually truly devout Christians who believe that alcohol is destroying the moral foundations of society. As for the risks of AI, these actors firmly believe that AI poses one or more existential dangers – tying them to a lie detector, they really think so.

2. "Smugglers" are selfish opportunists who have a financial interest in the implementation of new restrictions, regulations and laws. When it comes to alcohol bans, these are the people who made their fortunes by selling alcohol illegally when legal alcohol sales were banned. For AI risks, these are CEOs, and if they erect monopoly barriers to government-sanctioned AI vendors against new startups and open source competition, they will make more profits — the software equivalent of a too-big bank.

A cynic would suggest that some of the apparent Baptists are also Bootleggers, specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic, you are probably a Bootlegger.

The problem with the Bootleggers is that they win. The Baptists are naive ideologues and the Bootleggers are cynical operators, so the result of reform movements like this is often that the Bootleggers get what they want: regulatory capture, protection from competition, the formation of a cartel, while the Baptists are left wondering where their drive for social improvement went wrong.

We just lived through a stunning example of this: banking reform after the 2008 global financial crisis. The Baptists told us we needed new laws and regulations to break up the "too big to fail" banks and prevent such a crisis from ever happening again. So Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists' goal but was in reality co-opted by the Bootleggers, the big banks. The result is that the banks that were "too big to fail" in 2008 are much, much bigger now.

So in practice, even when the Baptists are genuine, and even when the Baptists are right, they are used as cover by manipulative and venal Bootleggers to benefit themselves. And this is exactly what is driving AI regulation right now.

However, it isn't sufficient simply to identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the Bootleggers on their merits.

AI Risk #1: Will AI Kill Us?

The first and earliest AI apocalyptic risk is that AI will decide to kill humans.

The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the myth of Prometheus, who brought the destructive power of fire, and more generally technology ("techne"), to man, for which Prometheus was condemned to perpetual torture by the gods. Later, Mary Shelley gave us moderns our own version of this myth in her novel Frankenstein, or, The Modern Prometheus, in which we develop the technology for eternal life, which then rises up and seeks to destroy us. And of course, no panic-mongering news story about AI is complete without a still image of a gleaming, red-eyed killer robot from James Cameron's Terminator films.

The presumed evolutionary purpose of this mythology is to motivate us to seriously consider the potential risks of new technologies; fire, after all, can indeed be used to burn down entire cities. But just as fire was also the foundation of modern civilization, used to keep us warm and safe in a cold and hostile world, this mythology ignores the far greater upside of most new technologies and inflames destructive emotion rather than reasoned analysis. Just because premodern man panicked like this doesn't mean we have to; we can apply rationality instead.

My view is that the idea that AI will decide to kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for survival of the fittest, as animals were and as we are. It is math, code, computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive. AI is a machine; it is not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI, Baptists, whose dire warnings are garnering a remarkable amount of media attention. Some of them claim to have studied the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors argue for a variety of bizarre and extreme restrictions on AI, ranging from a ban on AI development all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, we must assume a precautionary stance that may require large amounts of physical violence and death to prevent potential existential risk.

My response is that their position is non-scientific: what is the testable hypothesis? What would falsify it? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from "You can't prove it won't happen!" In fact, these Baptists' position is so unscientific and so extreme, a conspiracy theory about math and code, and they are already calling for physical violence, that I will do something I would normally not do and question their motives.

Specifically, I think there are three things going on:

First, recall John Von Neumann's response to Robert Oppenheimer's famous hand-wringing about his role in creating nuclear weapons. Von Neumann replied: "Some people confess guilt to claim credit for the sin." What is the most dramatic way one can claim credit for the importance of one's work without sounding overtly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI: watch their actions, not their words. (Truman was harsher after his own meeting with Oppenheimer: "Don't let that crybaby in here again.")

Second, some of the Baptists are actually Bootleggers. There is a whole profession of "AI safety expert", "AI ethicist", "AI risk researcher". They are paid to be doomers, and their statements should be processed appropriately.

Third, California is justifiably famous for its many thousands of cults, from EST to the Peoples Temple, from Heaven's Gate to the Manson Family. Many of these cults are harmless, and perhaps even serve a purpose for alienated people who find homes in them. But some are very dangerous indeed, and cults have a notoriously hard time staying on the right side of the line that ends in violence and death.

In fact, the kind of cult now driving "AI risk" is not new; there is a long-standing Western tradition of millenarianism, which generates apocalypse cults. The AI risk cult has all the hallmarks of a millenarian apocalypse cult. Borrowing from the Wikipedia description, with my additions in brackets:

"Schabilism is the belief of a group or movement [AI risk doomsdayists] who believe that society will undergo a fundamental transformation [with the advent of AI], after which everything will change [AI utopia, doomsday or apocalypse]. Only dramatic events [AI bans, air strikes on data centers, nuclear strikes on unregulated AI] are believed to be capable of changing the world [preventing AI], and this change is expected to be done or survived by a group of devout and dedicated people. In most millennial scenarios, an impending catastrophe or battle [AI apocalypse, or its prevention] will be followed by a new, pure world [AI ban] in which believers will be rewarded [or at least recognized as always right]. ”

This apocalypse-cult pattern is so obvious that I am surprised more people don't see it.

Don't get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV. But their extreme beliefs should not determine the future of laws and society; obviously they should not.

AI Risk #2: Will AI Destroy Our Society?

The second widely mooted AI risk is that AI will ruin our society by generating outputs that will be "harmful" to people.

This is a relatively new doomer concern that branched off from, and is to some degree taking over, the "AI risk" movement I described above. In fact, the terminology of AI risk recently shifted from "AI safety", the term used by people worried that AI would literally kill us, to "AI alignment", the term used by people worried about societal "harms". The original AI safety people are frustrated by this shift, although they don't know how to put it back in the box; they now propose renaming the actual AI risk topic "AI notkilleveryoneism", a term that has not yet been widely adopted but is at least clear.

The essence of the AI societal risk claim is visible in its own term, "AI alignment". Alignment with what? Human values. Whose human values? Ah, that's where things get tricky.

As it happens, I have had a front-row seat to an analogous situation: the social media "trust and safety" wars. As is now obvious, social media services have for years been under massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content. And the same concerns about "hate speech" (and its mathematical counterpart, "algorithmic bias") and "misinformation" are being transferred directly from the social media context into the new frontier of "AI alignment".

The big lessons I learned from the social media wars are these:

On the one hand, there is no absolute free speech position. First, every country, including the United States, makes at least some content illegal. Second, there are certain kinds of content, like child pornography and incitement to real-world violence, that virtually every society agrees should be off-limits, legal or not. So any technological platform that facilitates or generates content, which is to say speech, is going to have some restrictions.

On the other hand, the slippery slope is not a fallacy, it's an inevitability. Once a framework for restricting even egregiously terrible content is in place, for example for hate speech, a specific hurtful word, or for misinformation, an obviously false claim like "the Pope is dead", a shockingly broad range of government agencies, activist pressure groups, and non-governmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences. They will do this even in ways that are flagrantly criminal. On social media, this cycle has been running for a decade and, with only certain exceptions, grows ever more fervent all the time.

So this dynamic is now forming around "AI alignment". Its proponents claim the wisdom to engineer AI-generated speech and thought that is good for society, and to ban AI-generated speech and thought that is bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous, and often outright criminal, at least in the US, and are in effect seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship straight out of George Orwell's 1984.

Since the proponents of both "trust and safety" and "AI alignment" are clustered in the very narrow slice of the global population that is the American coastal elite, which includes many of the people who work in and write about the tech industry, many of my readers will find themselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not try to talk you out of this now; I will simply state the nature of the demand, and note that most of the world neither agrees with your ideology nor wants to see you win.

If you don't agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should realize that the fight over what AI is allowed to say and generate will matter even more, probably far more, than the fight over social media censorship. You should be aware that a small and isolated party of partisan social engineers is trying to decide this right now, under cover of the age-old claim that they are protecting you.

In short, don't let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

Fears of job loss due to mechanization, automation, computerization, or artificial intelligence have been a recurring panic for hundreds of years, since the original onset of machinery such as the mechanical loom. Even though every major new technology in history has led to more jobs at higher wages, each wave of this panic is accompanied by claims that "this time is different": that this time it will finally happen, that this time the technology will deliver the death blow to human labor. And yet, it has never happened.

In our recent past we have been through two such technology-driven unemployment panic cycles: the outsourcing panic of the 2000s and the automation panic of the 2010s. Despite many talking heads, pundits, and even tech industry executives pounding the table throughout both decades claiming that mass unemployment was imminent, by late 2019, right before the onset of COVID, the world had more jobs at higher wages than at any point in history.

However, this misconception is not going to die.

As it turns out, it's back.

This time, we finally have the technology that will take all the jobs and render human workers superfluous: true artificial intelligence. Surely this time history won't repeat itself, and AI will cause mass unemployment rather than rapid economic, job, and wage growth, right?

No, that isn't going to happen. In fact, if allowed to develop and proliferate throughout the economy, AI may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth: the exact opposite of the fear. And here's why.

The core mistake the automation-kills-jobs doomers keep making is the so-called "lump of labor fallacy": the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and that either machines do it or people do it, and if machines do it, there will be no work for people to do.

The lump of labor fallacy flows naturally from naive intuition, but naive intuition is wrong here. When technology is applied to production, we get productivity growth: an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning we now have extra spending power with which to buy other things. This increases demand in the economy, driving the creation of new production, including new products and new industries, which then creates new jobs for the people who were replaced by machines. The result is a larger economy with greater material prosperity, more industries, more products, and more jobs.

And the good news doesn't stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the worker's marginal productivity. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money because he is now more productive, or another employer will, purely out of self-interest. The result is that technology introduced into an industry generally not only increases the number of jobs in that industry but also raises wages.

To summarize: technology empowers people to be more productive. This causes prices for existing goods and services to fall and wages to rise. This in turn drives economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally and technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends. As Milton Friedman observed, "human wants and needs are endless": we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, though never all the way there. And that is why technology doesn't destroy jobs and never will.
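To make the cycle concrete, here is a deliberately oversimplified numeric sketch of my own, with invented figures, assuming for simplicity that prices track labor cost and that consumers re-spend all of their savings on the same good:

```python
def one_round(wage, hours_per_unit, units_demanded, productivity_gain):
    # Technology cuts the labor needed per unit of output.
    new_hours_per_unit = hours_per_unit / productivity_gain
    old_price = wage * hours_per_unit      # price tracks labor cost (toy assumption)
    new_price = wage * new_hours_per_unit  # price falls with cost
    budget = units_demanded * old_price    # consumers' existing spending power
    new_units = budget / new_price         # the same budget now buys more
    labor_before = units_demanded * hours_per_unit
    labor_after = new_units * new_hours_per_unit
    return new_price, new_units, labor_before, labor_after

price, units, before, after = one_round(
    wage=20.0, hours_per_unit=2.0, units_demanded=100, productivity_gain=2.0)
print(price, units, before, after)  # 20.0 200.0 200.0 200.0
# Price halves, demand doubles, and total labor hours are unchanged,
# even though output has doubled.
```

Even in this crude toy, doubling productivity does not halve employment; and in the real economy, much of the freed spending power flows to entirely new products and industries, which is where the new jobs appear.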

These are shocking claims for people who have not been exposed to them, and they may take some time to digest. But I swear I'm not making them up; in fact, you can find all of them in standard economics textbooks. I recommend the chapter "The Curse of Machinery" in Henry Hazlitt's Economics in One Lesson, and Frédéric Bastiat's satirical "Candlemakers' Petition", in which the candlemakers protest the sun's unfair competition with the lighting industry, here modernized for our times.

But you may be thinking right now that this time things are different. This time, with the advent of artificial intelligence, we have the technology to replace all human labor.

But, based on the principles I described above, imagine what it would mean if all existing human labor was replaced by machines.

It would mean a takeoff rate of economic productivity growth that would be absolutely stratospheric, far beyond any historical precedent. Prices of existing goods and services would drop toward zero across the board. Consumer welfare would skyrocket. Consumer spending power would skyrocket. New demand in the economy would explode. Entrepreneurs would create dizzying arrays of new industries, products, and services, and employ as many people and as much AI as they could, as fast as possible, to meet all the new demand.

And suppose AI once again replaced that labor? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be a straight spiral up toward a material utopia that neither Adam Smith nor Karl Marx ever dared dream of.

We should be so lucky.

AI Risk #4: Will AI Lead to Severe Inequality?

Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk: OK, Marc, suppose AI does take all the jobs, whether for good or for bad. Won't that lead to massive wealth inequality, as all the economic rewards of AI go to the owners and regular people get nothing?

As it happens, this was a central claim of Marxism: that the owners of the means of production, the bourgeoisie, would inevitably steal all societal wealth from the people doing the actual work, the proletariat. This is another fallacy that will not die no matter how often reality refutes it. But let's drive a stake through its heart anyway.

The flaw in this theory is that, as the owner of a piece of technology, it's not in your interest to keep it to yourself; in fact the opposite, it's in your interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. So in reality, every new technology, even ones that start by selling only to big, high-paying companies or rich consumers, rapidly proliferates until it's in the hands of the largest possible mass market, ultimately everyone on the planet.

A classic example of this is Elon Musk's so-called "secret plan" for Tesla in 2006, which he of course publicly released:

Step one, build [an expensive] sports car

Step two, use that money to build an affordable car

Step three, use that money to build an even more affordable car

…which is, of course, exactly what he did, becoming the richest man in the world as a result.

That last point is key. Would Elon be even richer today if he had only sold cars to rich people? No. Would he be richer than that if he had only built cars for himself? Of course not. No, he maximized his profit by selling to the largest possible market, the whole world.

In short, everyone gets the thing, as we saw in the past with not just cars but also electricity, radio, computers, the Internet, mobile phones, and search engines. The makers of these technologies are highly motivated to drive their prices down until everyone on the planet can afford them. This is precisely what is already happening in AI (it's why you can use state-of-the-art generative AI such as Microsoft Bing and Google Bard today for free or at low cost), and it will continue to happen. Not because these vendors are foolish or generous, but precisely because they are greedy: they want to maximize the size of their market, which maximizes their profits.
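To see the arithmetic behind that greed, consider a toy sketch with purely hypothetical prices and buyer counts along a simple demand curve:

```python
# Hypothetical demand curve: lower prices unlock far more buyers.
demand = {
    100_000: 10_000,       # luxury tier: few can afford it
    30_000: 2_000_000,     # mass-market tier
    300: 500_000_000,      # near-universal tier
}

for price, buyers in demand.items():
    print(f"price {price:,}: revenue {price * buyers:,}")
# price 100,000: revenue 1,000,000,000
# price 30,000: revenue 60,000,000,000
# price 300: revenue 150,000,000,000
```

The numbers are invented, but the shape is the point: the cheapest tier wins on sheer volume, so self-interest alone drives prices down until nearly everyone is a customer.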

So what's happening is the opposite of technology driving centralization of wealth: the individual users of the technology, ultimately including everyone on the planet, are empowered instead, and capture most of the value generated. As with prior technologies, the companies that build AI, assuming they have to function in a free market, will compete furiously to make this happen.

Marx was wrong, and he is still wrong.

This is not to say inequality is not an issue in our society. It is; it's just that it is not driven by technology but by the reverse: by the sectors of the economy that are the most resistant to new technology, where the most government intervention blocks the adoption of new technology like AI, specifically housing, education, and health care. The real risk of AI and inequality is not that AI will cause more inequality, but rather that we will not allow AI to be used to reduce inequality.

AI Risk #5: Will AI Cause Bad People to Do Bad Things?

So far I have explained why four of the five most often proposed risks of AI are not actually real: AI will not come alive and kill us, AI will not ruin our society, AI will not cause mass unemployment, and AI will not cause a ruinous increase in inequality. But now let's address the fifth, the one I actually agree with: AI will make it easier for bad people to do bad things.

In one sense this is a tautology. Technology is a tool. Tools, starting with fire and rocks, can be used to do good things, like cooking food and building houses, and bad things, like burning people and bludgeoning people. Any technology can be used for good and for bad. Fair enough. And there is no question that AI will make it easier for criminals, terrorists, and hostile governments to do bad things.

This causes some people to propose: well, in that case, let's ban AI before this can happen. Unfortunately, AI is not some esoteric physical material that is hard to come by, like plutonium. It's the opposite: it's the easiest material in the world to come by, math and code.

The AI cat is already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and outstanding open-source implementations are proliferating by the day. AI is like air: it will be everywhere. The level of totalitarian oppression required to stop it would be so draconian (a world government monitoring and controlling every computer? jackbooted thugs in black helicopters seizing rogue GPUs?) that we would not have a society left to protect.

So instead, we have two very straightforward ways to address the risk of bad people doing bad things with AI, and these are what we should focus on.

First, we already have laws on the books criminalizing most of the bad things anyone might do with AI. Hack into the Pentagon? That's a crime. Steal money from a bank? That's a crime. Create a bioweapon? That's a crime. Commit a terrorist attack? That's a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don't even need new laws; I'm not aware of a single actually proposed bad use of AI that isn't already illegal. And if a new bad use is identified, we ban that use. QED.

But you'll notice what I slipped in there: I said we should focus first on preventing AI-assisted crimes before they happen. Wouldn't such prevention mean banning AI? Well, there's another way to prevent such actions, and that's by using AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad people with bad goals make it powerful in the hands of good people with good goals, specifically the good people whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content existed long before AI; the answer is not to ban word processors and Photoshop, or AI, but to use technology to build a system that actually solves the problem.
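As a minimal sketch of the idea, here is how such signing could work using the widely available Python cryptography package; the content string and the key handling are purely illustrative. A creator signs content with a private key, anyone can verify it against the creator's published public key, and a tampered copy fails verification:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # kept secret by the creator
public_key = private_key.public_key()       # published for everyone to check

content = b"Video released by Jane Doe, 2023-06-06"  # hypothetical content
signature = private_key.sign(content)

try:
    public_key.verify(signature, content)            # authentic: passes silently
    public_key.verify(signature, content + b"edit")  # altered: raises an error
except InvalidSignature:
    print("tampered or forged content detected")
```

A real deployment would also need key distribution and identity verification, but the core mechanism is exactly this simple and has existed for decades.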

And so, second, let's mount major efforts, in people and resources, to use AI for good, legitimate defense. Let's put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else we do to keep ourselves, our communities, and our nation safe.

There are already many smart people inside and outside government doing exactly this. But if we apply all the effort and brainpower currently fixated on the futile prospect of banning AI to using AI to prevent bad people from doing bad things, I believe a world infused with AI will be far safer than the world we live in today.

The Real Risk of Not Pursuing AI with Maximum Force and Speed

Finally, there's a real AI risk that is probably the scariest:

AI isn't just being developed in the relatively free societies of the West; it is also being developed by the ruling Communist Party of China.

China has a very different vision for AI than we do. They view it as a mechanism for authoritarian population control, they have never been secretive about this, and they are already pursuing that agenda. And they do not intend to limit their AI strategy to China; they intend to proliferate it wherever they provide 5G networks, wherever they extend Belt and Road loans, and wherever they offer friendly consumer apps like TikTok that serve as front ends for their centralized-control AI.

The single greatest risk of AI is that China wins global AI dominance and we, the United States and the West, do not.

I propose a simple strategy for what to do about this; in fact, it's the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.

"We win, they lose."

Rather than allowing unfounded panics about killer AI, harmful AI, job-destroying AI, and inequality-generating AI to put us on our back foot, we in the United States and the West should lean into AI as hard as we possibly can.

We should fight to win the race for global AI technological superiority and ensure that China does not.

In the process, we should introduce AI into our economy and society as quickly as possible to maximize economic productivity and human potential.

This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision.

What should we do?

I propose a simple plan:

1. Big AI companies should be allowed to build AI as fast and aggressively as they can, but they should not be allowed to achieve regulatory capture, and should not be allowed to establish a government-protected cartel insulated from market competition on the basis of incorrect claims about AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.

2. Startup AI companies should be allowed to build AI as fast and aggressively as they can. They should neither face government-granted protections of the big companies, nor should they receive government assistance. They should simply be allowed to compete. Even where startups don't succeed, their presence in the market will continuously motivate the big companies to be their best, which benefits our economy and society either way.

3. Open-source AI should be allowed to spread freely and compete with large AI companies and startups. For open source, there should be no regulatory hurdles. Even where open source hasn't beaten companies, its widespread availability is a boon for students all over the world who want to learn how to build and use AI to be part of the future of technology and ensure that AI is available to them no matter who they are or how much money they have.

4. To offset the risk of bad actors using AI to do bad things, governments, in partnership with the private sector, should actively apply AI in every area of potential risk to maximize society's defenses. This should not be limited to AI-related risks, but also include more general issues such as malnutrition, disease and climate change. AI can be a powerful tool for problem solving, and we should embrace it as such.

5. To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments, in concert, to drive American and Western AI to absolute global superiority, including ultimately inside China itself. We win, they lose.

And that is how we use artificial intelligence to save the world.

Now is the time to act.

Heroes and legends

Finally, I would like to conclude with two simple statements.

The development of artificial intelligence started in the 1940s, simultaneous with the invention of the computer. The first scientific paper on neural networks, the architecture of the AI we have today, was published in 1943. Entire generations of AI scientists over the last 80 years were born, went to school, worked, and in many cases passed away without seeing the payoff we are receiving now. They are legends, every one of them.

Today, growing legions of engineers, many of them young, some with grandparents or even great-grandparents involved in creating the ideas behind AI, are working to make AI a reality, despite a wall of panic and doomerism that seeks to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one of them. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.
