
Artificial Intelligence (AI)

Author: Yule Music Circle

First, what is AI?

From Siri to self-driving cars, artificial intelligence (AI) is advancing rapidly. While science fiction often portrays AI as anthropomorphic robots, the term covers anything from Google's search engine to IBM's Watson to self-driving cars.

Narrow AI (or weak AI) is the phrase currently used for AI designed to perform a specific task, such as facial recognition, internet search, or driving a car. Many researchers, however, hope one day to build artificial general intelligence (AGI, also called strong AI). While narrow AI may outperform humans at a specific skill such as chess or arithmetic, AGI would outperform humans at nearly every cognitive task.


Second, how safe is AI?

In the short term, the goal of minimizing AI's negative impact on society has spurred research across many fields, from economics and law to technical topics such as verification, validity, safety, and control. When an AI system controls your car, your plane, your pacemaker, your automated trading system, or the power grid, it becomes all the more important that the system does what you want it to do. Preventing a destructive arms race in lethal autonomous weapons is another short-term concern.

In the long run, what happens if the quest for AGI succeeds and AI systems come to outperform humans at all cognitive tasks? As Irving John Good pointed out in 1965, designing ever better AI systems is itself a cognitive task. In theory, such a system could undergo repeated cycles of self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help humanity eradicate war, disease, and hunger, so the creation of AGI could be the most significant event in human history. Some scientists worry, however, that it could also be the last, unless we learn to align the AI's goals with our own before it becomes superintelligent.

Some argue that AGI will never be developed, while others insist it is guaranteed to be beneficial. The Future of Life Institute (FLI) recognizes both possibilities, as well as the possibility that AI systems could cause considerable harm, whether intentionally or unintentionally. We believe that research done today will help us better prepare for and prevent such negative consequences in the future, letting us enjoy the benefits of AI while avoiding its pitfalls.

Third, the harm AI could cause

Most scientists agree that a superintelligent AI is unlikely to experience human emotions such as love or hate, and there is no reason to expect it to become deliberately benevolent or malevolent. Instead, when considering how AI might become a risk, scientists think one of two scenarios is most likely:

1) AI is programmed to do something harmful: autonomous weapons are AI systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war, also with catastrophic casualties. To avoid being thwarted by an adversary, these weapons would be designed to be extremely hard to simply "turn off", so humans could plausibly lose control of such a situation. This risk exists even with narrow AI, but it grows as AI systems become more intelligent and autonomous.

2) AI is programmed to do something beneficial but develops a destructive method of achieving it: this can happen whenever we fail to fully align the AI's goals with our own. For example, if you ask an intelligent car to take you to the airport as fast as possible, it may get you there, but at the cost of a pile of traffic tickets and perhaps an accident along the way. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our biosphere as a side effect, and treat human attempts to stop it as a threat to be dealt with.

Fourth, why AI safety has attracted attention

Many leading AI researchers, along with Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and other prominent figures in the tech community, have expressed concern in the media and in open letters about the risks posed by AI.

The idea that the quest for AGI would ultimately succeed was long regarded as science fiction, centuries or even millennia away. However, many AI milestones that experts once expected to be decades off have now been reached, prompting many researchers to take seriously the prospect of superintelligence within our lifetimes. While some experts still guess that human-level AI is millennia away, most AI researchers at the 2015 Puerto Rico conference predicted it would arrive by 2060. Since the required safety research could take decades to complete, it is prudent to begin it now.

Because AI has the potential to become smarter than any human, we have no sure way of predicting how it will behave. Nor can we use past technological developments as a guide, because we have never created anything capable of outsmarting us, wittingly or unwittingly. The best indication of what we might face may be our own evolution. Humans now rule the planet not because we are the strongest, fastest, or biggest, but because we are the smartest. If we are no longer the smartest, can we be confident of staying in control?

The FLI holds that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. Where AI technology is concerned, FLI believes the best way to win that race is not to impede the former but to support AI safety research.
