
Fifty Years of P vs. NP: AI Is Solving Unsolvable Problems


The P versus NP question has long been one of the hardest problems in computing. What have researchers learned about it over the past 50 years? In this article we dig into this conundrum of the century.

Author | Lance Fortnow

Compiled by | Don

Editor | Twilight

On May 4, 1971, the great computer scientist and mathematician Stephen Cook first posed the P versus NP problem to the world in his paper "The Complexity of Theorem-Proving Procedures." Fifty years later, the world is still trying to solve the most famous problem in computing. In fact, twelve years ago (2009) I surveyed the problem myself; see my earlier review, "The Status of the P versus NP Problem."

Article: Fortnow, L. The status of the P versus NP problem. Commun. ACM 52, 9 (Sept. 2009), 78–86. https://doi.org/10.1145/1562164.1562186

The theory itself has not changed much: since that article appeared in 2009, the P vs. NP problem and the theory surrounding it have seen no dramatic shifts. But the computational world certainly has. Cloud computing, for example, has driven rapid growth in social networks, smartphones, the economy, fintech, spatial computing, online education, and more. Above all, cloud computing has helped power the rise of data science and machine learning.

In 2009, only one of the world's ten largest companies by market capitalization was a technology company: Microsoft. As of September 2020, the top seven companies by market capitalization were Apple, Microsoft, Amazon, Alphabet (Google), Alibaba, Facebook, and Tencent, all of them computing companies split between the United States and China. The change is not only visible in large companies; demand for computing talent has grown as well. Between 2009 and 2020, the number of computer science graduates in the United States more than tripled, and that still cannot satisfy the market's need for people in this field.

The P and NP problem has long puzzled mathematicians and computer scientists, and it is one of the Clay Mathematics Institute's Millennium Prize Problems, which carry a million-dollar bounty for whoever solves them. Below I use a number of examples to explain the P and NP problem. They will not settle the question itself, but they show how the ideas and results surrounding P and NP have driven research and development across the field.

1

The P and NP problem

Suppose someone asks you whether you can find a group of about 300 people on Weibo who are all friends with one another. How would you go about it?

If you work for the social platform and have access to its entire database, that is, you can see everyone's friend list, you could try iterating over every possible group of 300 people and checking whether each pair in the group are friends with each other; such a group is called a clique. But the number of candidate groups is astronomically large, and an algorithm that enumerates them all is usually hopeless.

You could also try to be a little clever: start with a small group and slowly grow it, adding only people who are friends with everyone already in it. In practice this is still hard. In fact, no efficient algorithm is known for this problem, and no one knows whether anything fundamentally better than one-by-one enumeration exists.

This example is a typical illustration of the P and NP problem. NP is the class of problems for which a proposed solution can be checked efficiently. For example, once you are handed 300 people who supposedly form a clique, you can quickly check whether all 44,850 pairs among them really are friends. The clique problem is therefore an NP problem.
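To make the contrast concrete, here is a minimal sketch in Python (the friendship data and names are invented for illustration; a real platform would query its database instead): checking a proposed clique means one pass over all pairs, while finding one by brute force means wading through an astronomical number of candidate groups.

```python
from itertools import combinations
from math import comb

# Toy friendship graph: each frozenset pair means "these two are friends".
friendships = {
    frozenset(p) for p in [("ann", "bob"), ("ann", "cat"), ("bob", "cat"),
                           ("ann", "dan"), ("bob", "dan"), ("cat", "dan")]
}

def is_clique(group):
    """Verify a proposed solution: every pair in the group must be friends.
    For 300 people this checks only comb(300, 2) = 44,850 pairs, which is fast;
    that easy verification is what puts the clique problem in NP."""
    return all(frozenset(pair) in friendships for pair in combinations(group, 2))

print(is_clique(["ann", "bob", "cat", "dan"]))  # True: these four form a clique
print(is_clique(["ann", "bob", "eve"]))         # False: "eve" is friends with no one

# Finding a 300-person clique by brute force would instead mean testing
# comb(n, 300) candidate groups, already astronomical for a modest user base:
print(comb(1000, 300))  # the number of 300-person groups among just 1,000 users
```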

P is the class of problems for which a solution can be found efficiently. We do not know whether the problem of finding those 300 mutually befriended people is also solvable in P.

Surprisingly, the clique problem turns out to be NP-complete. That is, we can solve the clique problem quickly and efficiently if and only if P = NP.

Many other problems are NP-complete, such as 3-Coloring (can a map be colored with only three colors so that no two adjacent regions share a color?) and the Traveling Salesman Problem (find the shortest route through a list of cities that returns the traveler to the starting city after visiting them all).

Formally, P stands for "deterministic polynomial time," the class of problems that can be solved in time polynomial in the input length, and NP stands for "nondeterministic polynomial time." For practical algorithm development, it helps to look at P and NP from a different angle: think of the former as what we can compute efficiently and the latter as what we can check efficiently.

If you want to learn more about the P and NP problem, you can read the 2009 survey or any of several popular-science books. There are also more formal introductions, such as the 1979 book by Michael Garey and David Johnson, a must-read for anyone who wants to understand NP-completeness:

Garey, M. and Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Company, New York, 1979.


2

Why discuss the P and NP problem

On that Tuesday afternoon in May 1971, when Cook presented his NP-completeness paper at the ACM Symposium on the Theory of Computing, he proved that Satisfiability is NP-complete and that Tautology is NP-hard. The paper also conjectured that Tautology is not in P, though of course that has never been established. Either way, the paper and its proof techniques marked a major breakthrough in complexity theory.

Trying to prove such a statement is enormously challenging. The basic concepts of algorithms and proofs date back at least to the ancient Greeks, but of course they never considered anything like the P vs. NP question. The theoretical foundations of efficient computation and nondeterminism were only developed in the 1960s. Still, the P and NP problem had in effect been raised well before then; it just had not been formally named.

In 1956, Kurt Gödel wrote a letter to John von Neumann in which he gave a preliminary description of the P and NP problem. The letter was not discovered and widely circulated until 1988.

Richard Karp was the person who really brought the P and NP problem into everyone's field of vision. He presented it in his 1972 paper, and it subsequently received widespread attention.

Karp showed that many famous combinatorial problems are NP-complete, including clique, 3-coloring, and the traveling salesman problem. In 1973, Leonid Levin, then in the Soviet Union, published a paper based on work he had done independently two years earlier, in which he also defined the P and NP problems. By the time Levin's paper reached the West, P vs. NP had established itself as the most important problem in computing.

3

Optiland

In a classic 1995 paper, Russell Impagliazzo described five worlds, corresponding to different possible resolutions of the P vs. NP problem:

Algorithmica: P = NP or its moral equivalent, such as fast probabilistic algorithms for NP problems

Heuristica: NP problems are hard in the worst case but can be solved on average

Pessiland: we can easily create hard NP problems but cannot solve them on average, the worst of all possible worlds, since we can neither solve hard problems in an average sense nor extract any cryptographic advantage from their hardness

Minicrypt: cryptographic one-way functions exist, but we do not have public-key cryptography

Cryptomania: public-key cryptography exists, that is, two parties can establish secret communication over open channels

These five worlds are not formally defined; they are informal scenarios shaped by our understanding of the P and NP problem. Cryptomania is widely believed to be the most likely of them.

Impagliazzo's worlds capture a core idea of P and NP theory: we can't have everything.

We may be able to solve hard NP problems, or we may have secure cryptography, but we cannot have both at once.

Yet perhaps we are moving toward a de facto Optiland, where great strides in machine learning and in hardware and software optimization let us solve, to some extent, problems that once seemed unimaginable, including speech recognition and protein-structure prediction, while most of the time our cryptographic protocols remain secure, so there is no need to worry too much.

In the 2009 survey, I argued in the section "What if P = NP?" that by applying Occam's Razor, learning would become easy: we would only need to find the smallest program consistent with the data, which is the heart of the matter. Visual recognition, speech recognition, translation, and other tasks that had seemed so hard would become trivial. We would also make far better predictions, and models, of weather, earthquakes, and other natural phenomena.

Today, we unlock our phones with face recognition, ask questions of voice assistants and get reasonable answers, and translate what we say or type into other languages. Our phones alert us to weather and other unexpected events with predictions far better than anything we could do a decade ago. Meanwhile, apart from brute-force attacks on small key lengths, our cryptography remains basically robust and secure. So let's see how recent advances in computing, optimization, and learning have brought us to Optiland.

4

Solving hard problems

In 2016, Bill Cook and his colleagues took on the challenge of visiting every pub in the UK in the shortest possible distance. They listed 24,727 known pubs and computed a walking tour through them all: a trip of 45,495,239 meters, about 28,269 miles, longer than a lap around the globe.

In fact, Cook had cheated a little: he did not really include every pub, leaving some out to keep the walk from getting too extreme. After the project was publicized in the British media, many people wrote in to say, in effect, "you skipped the pub next to my house." So Cook and company restarted the computation with the list expanded to 49,687 pubs, and the overall tour grew to a staggering 63,739,687 meters, or 39,606 miles. Remarkably, compared with the earlier trip, this new pub crawl needs only about 40% more distance to reach more than twice as many pubs.


An overview of the optimal tour through 49,687 pubs in the UK

This pub crawl is a variant of the traveling salesman problem, one of the most famous NP-complete problems. The number of possible tours through all 49,687 pubs is roughly 3 followed by 211,761 zeros. Cook's computers did not, of course, search that entire space; they used a battery of optimization techniques. Even more impressive, the tour comes with a proof of optimality based on linear-programming duality.
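As a quick sanity check on that figure, here is a minimal sketch, assuming the usual count of (n-1)! orderings of the remaining pubs once a starting pub is fixed; it computes how many digits the number of tours has via log-gamma, without ever constructing the number itself.

```python
from math import lgamma, log

def tour_digit_count(n):
    """Decimal digits in (n-1)!, the number of ways to order the remaining stops
    after fixing a starting pub."""
    log10_tours = lgamma(n) / log(10)  # lgamma(n) = ln((n-1)!)
    return int(log10_tours) + 1

print(tour_digit_count(49_687))  # 211762 digits: roughly 3 followed by 211,761 zeros
```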

Beyond the traveling salesman problem, we have also seen major advances in solving satisfiability and mixed integer programming, a variant of linear programming in which some of the variables are required to take integer values. Using finely tuned heuristics, aided by fast processors, dedicated hardware, and distributed cloud computing, people can often solve the problems that arise in practice with tens of thousands of variables and tens of millions of constraints.

When faced with an NP problem, one can typically express it as a satisfiability or mixed integer programming instance and hand it to one of the best solvers, letting the computer find the answer automatically. These tools have been used successfully in circuit and code verification, automated testing, computational biology, system security, product and packaging design, financial trading, and even hard mathematical problems.
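As a hedged illustration of that workflow, here is a sketch using the open-source Z3 solver's Python bindings (the small example graph is made up): it encodes graph 3-coloring as a satisfiability problem and leaves the search entirely to the solver.

```python
# pip install z3-solver
from z3 import Bool, Solver, Or, Not, sat, is_true

# A small illustrative graph: a 4-cycle with one chord.
vertices = range(4)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
colors = range(3)

# x[v][c] is true iff vertex v receives color c.
x = [[Bool(f"x_{v}_{c}") for c in colors] for v in vertices]

s = Solver()
for v in vertices:
    s.add(Or(x[v]))  # every vertex gets at least one color
    for c1 in colors:
        for c2 in colors:
            if c1 < c2:
                s.add(Or(Not(x[v][c1]), Not(x[v][c2])))  # ...and at most one
for u, v in edges:
    for c in colors:
        s.add(Or(Not(x[u][c]), Not(x[v][c])))  # adjacent vertices get different colors

if s.check() == sat:
    m = s.model()
    coloring = {v: next(c for c in colors if is_true(m.evaluate(x[v][c])))
                for v in vertices}
    print("3-colorable:", coloring)
else:
    print("not 3-colorable")
```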

5

Data science and machine learning

It is impossible to ignore the revolutionary impact machine learning, and neural networks in particular, have had in recent years. At bottom, an artificial neural network is modeled as a composition of weighted threshold functions, an idea that originated in the work of Warren McCulloch and Walter Pitts in the 1940s. In the 1990s, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun developed backpropagation-based techniques for training neural networks with more and more layers, with extraordinary results.
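For readers who want the starting point in code, here is a minimal sketch of a single weighted threshold unit in the McCulloch-Pitts spirit (the weights and threshold are chosen by hand here; modern networks use smooth activations and learn their weights by backpropagation).

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A two-input neuron computing logical AND: it fires only when both inputs are on.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron([a, b], weights=[1.0, 1.0], threshold=2.0))
```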

At the same time, breakthroughs in computing hardware and storage, including faster and more distributed processing units, dedicated hardware, and massive datasets, have helped push machine learning toward many human-like capabilities. ACM recognized the contributions of Bengio, Hinton, and LeCun with the 2018 Turing Award.

Some readers may ask how machine learning relates to the P and NP problem. Occam's Razor says that entities should not be multiplied beyond necessity. If P = NP, we could turn that idea into a powerful learning algorithm: find the smallest circuit consistent with the data. Even though P = NP is not available to us, machine learning can learn to approximate this idea, and that is part of what gives it its power.
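Here is a toy, hedged version of that Occam principle (fitting polynomials of increasing degree rather than searching over circuits, which is exactly the part no one knows how to do efficiently): try hypotheses from simplest to more complex and keep the first one that fits the data.

```python
import numpy as np

# Toy data generated (with a little noise) by a simple underlying rule: y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = 2 * x + 1 + rng.normal(scale=0.05, size=x.size)

def occam_fit(x, y, max_degree=10, tolerance=0.2):
    """Try polynomial models in order of increasing complexity (degree) and
    return the simplest one whose worst-case error is within tolerance."""
    for degree in range(max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        if np.max(np.abs(np.polyval(coeffs, x) - y)) <= tolerance:
            return degree, coeffs
    return None

degree, coeffs = occam_fit(x, y)
print("simplest consistent model has degree", degree)  # expect 1: the linear rule
print("recovered coefficients:", np.round(coeffs, 2))  # roughly [2, 1]
```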

Still, neural networks are not truly "minimal" circuits, and may not even be close to minimal. The deep learning methods we use today generally have a fixed structure; what changes during training are the weights on the connections between neurons. To be expressive enough, these networks need a very large number of weights, which limits how simple they can be. They do a good job at face recognition, but they cannot learn multiplication from examples.

Universal distribution and GPT-3

Consider distributions over the infinite set of binary strings. We cannot have a uniform distribution over all of them, but we can give every string of the same length the same probability. Some strings, however, are more meaningful than others: the first million digits of π are more meaningful than a million randomly generated digits.

We might want to place higher probability on the more meaningful strings, and there are many ways to do so. In fact, there is a universal distribution that is close to every other computable distribution, and this distribution has a strong connection to learning: for example, any algorithm that can learn under this distribution with small error can learn under every computable distribution.

The catch is that this universal distribution is not computable, not even if P = NP. But if P = NP, we could still get something useful by creating a distribution that is universal for all efficiently computable distributions.

So what can we get out of machine learning? Consider generative pre-trained transformers (GPT).

GPT-3, released in May 2020, has 175 billion parameters and was trained on 410 billion tokens drawn from large text corpora. It can answer questions, write text from prompts, and even do some basic coding. There is still a long way to go, but GPT-3 has been widely praised for the naturalness of the text it generates.

In a sense, we can think of GPT-3 as a distribution: the probability that the model produces a given output is a weak version of the universal distribution. If we condition the universal distribution on a given prefix, it provides a random sample consistent with that prefix; GPT-3 likewise builds on a prompt, handling a wide range of domains without further training. With this line of work we move closer to a common measure of built-in learning: producing a random sample consistent with a given context.
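As a toy stand-in for that idea (a bigram character model is nowhere near GPT-3, but it makes "a random sample consistent with a given prefix" concrete), here is a minimal sketch that learns next-character frequencies from a tiny corpus and samples a continuation of a prompt.

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative corpus; GPT-3 trained on roughly 410 billion tokens, not four sentences.
corpus = "the cat sat on the mat. the dog sat on the rug. the cat saw the dog."

# Count, for each character, how often each next character follows it.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def sample_continuation(prefix, length=30):
    """Sample one character at a time from the learned conditional distribution."""
    out = prefix
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, freqs = zip(*followers.items())
        out += random.choices(chars, weights=freqs)[0]
    return out

random.seed(1)
print(sample_continuation("the c"))  # prints a short, vaguely English-looking continuation
```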

Science and medicine

In science, we build understanding by running large-scale simulations; in exploring nuclear fusion reactions, for example, this approach has produced good results. Researchers formalize a hypothesis about a physical system, turn it into a model, and then repeatedly use that model to simulate the system's behavior. If the results do not match reality, the model is discarded and the process starts over.

Once we have a good model, we can run in simulation many experiments that would be expensive in the real world. If P = NP, we could use the Occam's Razor approach to create hypotheses: find the smallest circuit consistent with the data. Machine learning techniques can advance along the same path, automating the creation of hypotheses. Given data, whether from simulation or from real experiments, machine learning can fit models that best match it. We can then use these models to make predictions and test those predictions as before.

While these techniques let us find hypotheses and models we might otherwise miss, they can also produce false positives. Researchers typically accept hypotheses at a 95% confidence level, which means that about one in twenty bad hypotheses will pass the test anyway. Machine learning and data science tools let us generate hypotheses wholesale, and with them the risk of models conjured out of thin air. That limits where they can be used: medical workers, for example, cannot afford such risks, since errors of this kind in diagnosis would be a serious problem. Biological systems are also extremely complex. Human DNA forms an intricate code describing how our bodies are built and what they do, and unfortunately we still understand little about how that code works.
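To make that one-in-twenty risk concrete, here is a small hedged simulation (pure noise, with no real effect anywhere): test enough hypotheses at the 95% confidence level and "discoveries" appear by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 1,000 hypothetical "effects", none of them real: both groups are pure noise.
num_hypotheses = 1_000
false_positives = 0
for _ in range(num_hypotheses):
    group_a = rng.normal(size=30)
    group_b = rng.normal(size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:  # "significant at the 95% confidence level"
        false_positives += 1

print(f"{false_positives} of {num_hypotheses} null hypotheses passed anyway")
# Expect roughly 50, about one in twenty, exactly the risk described above.
```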

On November 30, 2020, Google-owned DeepMind released AlphaFold, a new algorithm that predicts the shape and structure of a protein from its amino acid sequence. AlphaFold's predictions are nearly as accurate as structures measured experimentally. There is some debate over whether DeepMind has really "solved" protein folding, and it is too early to assess the full impact, but in the long run it could give us a new digital tool for studying proteins, understanding how they interact with one another, and designing ways to fight disease.

Thinking beyond the P and NP problem: chess

Consider games played on boards of arbitrary size. Generalized Sudoku, where the puzzle must be completed from the numbers given in some of the squares, is NP-complete. But when we ask who wins a game like chess from a given starting position, there is no obvious way even to check a claimed answer. Even if P = NP, we would not necessarily get a perfect chess program out of it. A winning strategy says that white has a move such that, for every black reply, white has a move such that, for every black reply, and so on, white eventually wins. This alternation of white and black moves cannot be handled on the strength of P = NP alone. Games like this are usually PSPACE-hard: hard even for computations allowed a reasonable amount of memory with no limit on running time. Depending on the precise rules, chess and Go may be harder still.
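A minimal sketch of that alternation for an abstract two-player game without draws (the game interface here is hypothetical, purely for illustration): the exists/for-all structure shows up as nested any/all reasoning, and the recursion is as deep as the game is long, which is what pushes such questions toward PSPACE rather than NP.

```python
def can_force_win(position, player, game):
    """For a finite two-player game with no draws: True iff `player`, moving now,
    can force a win.  The recursion mirrors the alternating quantifiers above:
    THERE EXISTS a move for `player` such that, FOR ALL replies, `player` still wins.
    The `game` interface (is_over, winner, moves) is hypothetical."""
    if game.is_over(position):
        return game.winner(position) == player
    opponent = "black" if player == "white" else "white"
    return any(not can_force_win(nxt, opponent, game)  # a reply the opponent cannot win from
               for nxt in game.moves(position, player))
```

Note that the running time of this recursion grows exponentially with the depth of the game, one reason even a fast algorithm for NP problems would not obviously tame it.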

This does not mean that P = NP would be useless for chess. Roughly speaking, the larger a chess program is, the smarter it can be, and with P = NP we could efficiently find a program that beats every program of slightly smaller size. And even without P = NP, computers have become enormously strong at chess and Go: in 1997, IBM's Deep Blue defeated the reigning world chess champion.

Machine learning has also brought huge advances to computer game play. Consider the famous AlphaZero, an artificial intelligence program developed by DeepMind in 2017.

AlphaZero uses a technique known as Monte Carlo Tree Search (MCTS), which plays out moves for both players, partly at random, to estimate the best course of action. AlphaZero adds deep learning to predict the most promising moves and the value of game positions, using those predictions to steer MCTS toward moves that maximize its chance of winning. AlphaZero was not the first program to use MCTS, but it has no built-in human strategy and uses no database of existing games: it learns only from the rules. That let AlphaZero shine at both chess and Go, two games that share nothing in rules or objectives beyond alternating moves on a fixed-size board. DeepMind has since gone further with MuZero, which is not even given the full rules, only a representation of the board position, a list of legal moves, and knowledge of which positions count as wins or losses. In other words, we have reached a stage where pure machine learning can easily beat most humans and hand-built heuristics at problems as complex as chess and Go, and human prior knowledge mostly just gets in the way. For games like chess and Go, machine learning can succeed where even P = NP might not. It is remarkable.
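Here is a heavily simplified, hedged sketch of the Monte Carlo idea: flat random playouts with no search tree and no neural network, so it is only a toy cousin of what AlphaZero actually does, and the game interface (moves, play, opponent, is_over, score_for) is hypothetical.

```python
import random

def monte_carlo_move(position, player, game, playouts_per_move=200):
    """Pick a move by playing many random games from each option and choosing the
    move with the best average outcome for `player`."""

    def random_playout(pos, to_move):
        # Play random legal moves until the game ends, then score it for `player`.
        while not game.is_over(pos):
            pos = game.play(pos, to_move, random.choice(game.moves(pos, to_move)))
            to_move = game.opponent(to_move)
        return game.score_for(pos, player)  # e.g. 1 for a win, 0 for a loss

    best_move, best_value = None, float("-inf")
    for move in game.moves(position, player):
        after = game.play(position, player, move)
        value = sum(random_playout(after, game.opponent(player))
                    for _ in range(playouts_per_move)) / playouts_per_move
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```

Full MCTS grows a search tree instead of sampling flatly, and AlphaZero replaces the random playouts with a learned value network and biases move selection with a learned policy.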

Explainable artificial intelligence

Many machine learning algorithms achieve good results, but we do not know why. If we look closely at the internal parameters of a neural network trained for speech translation or image recognition, it is hard to understand why it acts the way it does. Some may ask: if it works, why should we care? Here are a few reasons: trust, fairness, security, and causation.

Trust: How do we know a neural network is working properly? Beyond examining inputs and outputs, we cannot really analyze or understand its intermediate state. Different applications call for different levels of trust: if Netflix recommends a bad movie, that's fine, but if a self-driving car recommends a turn that puts it into a wall, that's a big deal.

Fairness: Many applications learn from training sets, and the data in a training set may not be fair or unbiased. If we do not understand the program, we may be unable to correct the bias and discrimination built into it. Racial discrimination is a serious matter.

Security: If we use machine learning inside data-security or monitoring systems, an unexplainable model may hide vulnerabilities we cannot see, especially when our adversaries are adaptive. If we can understand the structure of the code and the network, we can find and fix those vulnerabilities. Of course, if our adversaries obtain the code, they too may find vulnerabilities and exploit them.

Causation: For now, the most we can do is check whether a machine learning algorithm's outputs merely correlate with the kind of behavior we want. Understanding the code would help us identify causal relationships in the data, leading to better scientific theories and medical outcomes.

Would P = NP give us more understandable programs? Not necessarily. If you had a fast algorithm for NP-complete problems, you could use it to find the shortest traveling salesman route, but you would not know why that route is best. On the other hand, an interpretable algorithm is what we really want, because it gives us a deeper understanding of a problem's properties. Explainable AI is now an active research area, with dedicated workshops and venues such as the ACM Conference on Fairness, Accountability, and Transparency.

Limitations of machine learning

While machine learning has made remarkable progress over the past few decades, these systems are far from perfect, and in most applications they still fall short of humans. We will keep improving machine learning through new and better algorithms, more data, and faster hardware, but it does seem to have real limitations. As we saw above, machine learning lets us approximate what P = NP would give us, but never reach it. For example, machine learning has made little headway against cryptography, which we discuss below.

Machine learning also does not seem able to learn simple arithmetic relationships, such as inferring rules from large collections of numbers or multiplying large numbers. One can imagine combining machine learning with symbolic mathematics tools to good effect. We have seen some progress in applying these ideas to theorem proving, but we are still far from what we dream of. I am working on a related paper myself.

Here too, P = NP would make these tasks much easier, or at least tractable. Machine learning also tends to perform poorly on samples drawn from a distribution different from its training data. This may come down to low-probability edge cases: for instance, when not all races are well represented in the training data, recognition of people from some countries or ethnic groups suffers. Deep neural networks may have millions of parameters and so may fail to generalize well across distributions. If P = NP, we could generate the smallest model consistent with the data and get the best possible generalization, but since we cannot run that experiment, we will never know.

Likewise, nothing we have comes close to true artificial general intelligence: a system with genuine understanding of a subject, or genuine consciousness or self-awareness. Defining these terms is tricky and controversial. Personally, I have not seen a formal, rigorous definition of general AI; I have only an intuitive grasp of the concept. I suspect we will never achieve true general AI, even if P = NP.

6

Cryptography

While we have made great progress on solving NP problems, there has been essentially no progress against many areas of cryptography, including one-way functions, secure hashing, and public-key cryptography. An efficient algorithm for NP problems would in fact break every cryptosystem except those that are information-theoretically secure (such as one-time pads and some quantum-based schemes). We have seen plenty of successful attacks on computer systems, but they usually stem from badly configured servers, weak random number generators, or human error; almost none are due to weaknesses in the cryptography itself.

Most of today's CPUs have AES built in, so once public-key cryptography has been used to establish a shared secret key, we can send encrypted data about as easily as plaintext. Cryptography also provides the technical foundation for blockchains and cryptocurrencies, which means trust in it runs high enough for people to exchange cash for Bitcoin. A 1994 result of Michael Kearns and Leslie Valiant showed that learning the smallest circuit, or even the smallest bounded-depth neural network, consistent with the data would let us factor integers and break public-key cryptosystems. So far, though, machine learning has not been used successfully to break cryptographic protocols.
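As a small hedged illustration of that first point, using the widely used cryptography package for Python (in a real protocol the 256-bit key would be agreed via public-key cryptography rather than generated locally as it is here):

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In practice this key would be established with public-key cryptography;
# here we simply generate it locally for the sake of the example.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # a fresh nonce for every message
message = b"meet me at the usual place"
ciphertext = aesgcm.encrypt(nonce, message, None)

# Only someone holding the key can recover the message.
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'meet me at the usual place'
```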

One might ask: since we have made so much progress on many other NP problems, why has cryptography alone resisted? In cryptography we get to choose the problem: the instances are specially designed to be hard to break. The NP problems that arise elsewhere are usually generated by nature or by ordinary engineering processes; nothing is tailoring them to be as difficult as possible.

Quantum computing is the only technology we know of that threatens the public-key protocols securing the Internet: Shor's algorithm can factor large numbers and perform related number-theoretic computations. There are several reasons not to panic. Despite some amazing advances, quantum computing is still a long way from being able to break today's cryptosystems; current machines cannot maintain enough entangled qubits. Some estimate that it may take decades or even centuries before Shor's algorithm on a real quantum computer threatens current public keys. In addition, researchers have made good progress on public-key cryptosystems designed to resist quantum attacks. We cover quantum computing in more detail later in this article.

Factoring, moreover, is not known to be NP-complete, so even without large-scale quantum computers, a mathematical breakthrough could yield an efficient factoring algorithm. However we feel about the future of quantum computing, it would be prudent to have public-key systems that do not depend on the hardness of factoring.

7

Friction-like complexity

Then again, what advantage can we possibly draw from so many intractable problems, and what can we learn from them? Cryptography comes to mind first. But there is a broader answer: if certain computational problems are inherently hard, even impossible to solve in practice, that hardness plays a role much like friction in the physical world. Friction usually costs us extra energy to overcome, but without its constant resistance we could not even walk, run, or move forward. Similarly, in the computing world, complexity creates difficulties, but without it we would run into other problems that leave us unable to move at all. In many cases, P = NP would eliminate that friction.

Much recent work in computational theory suggests that eliminating this computational friction would have many negative effects. For example, without it people would no longer be able to keep their intentions to themselves: anyone observing your actions could infer the purpose behind them. Economists have a term for this: preference revelation, inferring someone's true preferences from the behavior we observe. For most of the past, we did not have enough training data to build such models, so preference inference remained a kind of art, far too inaccurate to be usable.

Today, we collect vast amounts of personal information: people's search histories, the photos and videos on their social media accounts, their purchase records, their browsing behavior, their real-world movements, and whatever residual private data their smart devices hold. The datasets are now more than sufficient, and machine learning is capable of processing this complex information, so it can make remarkably accurate predictions and inferences. Computers often know more about us than we know about ourselves.

Our technology is now powerful enough that one could build smart glasses that, the moment you put them on, tell you all kinds of information about the person in front of you: name, age, height and weight, hobbies, even political leanings. In the era of big data, machine learning plus a flood of private information has conquered problems that were once too complex to be practical, and with that comes the leakage of privacy. Complexity can no longer protect our privacy for us; we have to protect it through laws and corporate responsibility.

The "friction" of the computing world goes beyond privacy. The U.S. government deregulated airline pricing in 1978. Back then, a traveler who wanted the cheapest route had to call many airlines or go through a travel agent, and travel agents did not necessarily work hardest to find the cheapest fare; they looked for the routes that benefited them most. Airlines could pursue different strategies: some maintained a high level of service at slightly higher prices, while others tried to attract passengers with low fares. Today, we can find the cheapest itineraries in seconds with a computer, so airlines race to compete on price, calculating the pricing that best fills their seats, and service and comfort are often what gets sacrificed.

Computational "friction," or complexity, also helps combat cheating. When I was in college in 1980, I was hammered by calculus problems, spending whole days on calculations and wishing I weren't. Today those same problems surrender to Mathematica or MATLAB in a single line. As a teacher, I can hardly even assign homework whose answers students cannot find online. More absurd still, I could use GPT-3 or its successors to generate the homework in the first place. So when tools like GPT can already answer these complex questions automatically, how do we motivate students, and how do we keep them from cheating or coasting?

Stock trading is another area hit hard. Trades once went through large exchanges of the kind we see in movies, where traders directed buying and selling with a gesture and matched the best price at a glance. Now algorithms automatically find the best prices and buy and sell stocks themselves, even if that occasionally produces a "flash crash." Machine learning algorithms are already powerful enough to take over many human decisions: face recognition, matching social media content to users, even some judicial decisions. These decision-making systems bring convenience, but they also pose major social challenges, such as widening discrimination and political polarization. The issues are complex, and a few words cannot do them justice.

These are only a small sample of such scenarios. As computer scientists, our aim is to make computation as efficient and simple as possible, but we must weigh the cost of removing computational complexity, that is, of removing computational "friction."

8

The power of quantum computers

With Moore's Law faltering, computer researchers have turned their attention to quantum computing, and in recent years research on and investment in quantum computers have grown substantially. Big technology companies such as Google, Microsoft, and IBM, along with startups of all kinds, are investing heavily in quantum computing research. The United States has launched a national quantum initiative, and other countries, including China, are following suit.

In 2019, Google announced that it had achieved "quantum supremacy," using a 53-qubit quantum computer to perform a computational task that conventional computers cannot currently complete. Many have questioned that claim, but we are undoubtedly at the beginning of a new era of quantum computing. Still, we are a long way from being able to run Peter Shor's quantum algorithm on a machine large enough to matter; by conservative estimates, that would take tens of thousands of qubits. A quantum computer can be understood as maintaining a state spread over exponentially many basis states, 2^53 of them for a 53-qubit machine. That might suggest we could solve NP-complete problems simply by creating an enormous state space and letting quantum computation work miracles. Unfortunately, we cannot manipulate those states at will: no known quantum algorithm solves NP-complete problems in a way that goes beyond the modest speedup of Grover's algorithm.
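To see why even representing such a state classically is hopeless, here is a minimal back-of-the-envelope sketch, assuming one double-precision complex amplitude (16 bytes) per basis state, a common convention in simulators.

```python
def state_vector_bytes(num_qubits, bytes_per_amplitude=16):
    """Memory needed to store the full state vector of an n-qubit register:
    2**n complex amplitudes, each occupying bytes_per_amplitude bytes."""
    return (2 ** num_qubits) * bytes_per_amplitude

for n in (20, 40, 53):
    petabytes = state_vector_bytes(n) / 1e15
    print(f"{n} qubits -> {2 ** n:,} amplitudes, about {petabytes:.6g} PB")
# 53 qubits already takes roughly 144 PB just to write the state down, which is
# part of why having a huge state space is not the same as being able to search it.
```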
