The 7 major breakthroughs in computer science of 2023! The 50-year-old classic P vs. NP problem and the emergence of large models make the list

Author: New Zhiyuan

Editors: Momoko, So sleepy

In 2023, what major events have happened in the computer field?

The annual year-end inventory is here!

In 2023, everyone can rattle off the year's major events in computer science: the wave of large models set off by ChatGPT, the AI painting tool Midjourney, the AI video generators Gen-2 and Pika, and more...

Researchers also made subtle but important progress on the most classic question of all: P vs. NP.

Shor's algorithm, the killer application of quantum computing, received its first major upgrade in nearly 30 years. And researchers finally learned, in theory, how to find the shortest path through a common type of network.

In addition, in some unexpected connections between cryptography and AI, researchers showed that machine learning models can harbor hidden vulnerabilities, and that secret messages can be concealed in machine-generated content.

Top 1: After 50 years of the P vs. NP problem, "meta-complexity" theory opens a way forward

For 50 years, computer scientists have been trying to settle the biggest open question in their field: P vs. NP.

Put simply, P vs. NP asks how hard the computational problems we believe to be hard really are, and whether more efficient algorithms for them exist.

Yet half a century of attempts to resolve it have all ended in failure.

Just when a breakthrough seems imminent, researchers run into insurmountable obstacles that prove their methods cannot work.

Over time, scientists began to ask why even proving that a problem is hard is itself so difficult.

The effort to answer such introspective questions has given rise to the emerging field of meta-complexity, which so far offers the best insight into the question.

In August, Quanta published an article introducing the idea of meta-complexity and the explorations scientists have begun.

Three mathematicians have different views on the limitations of mathematical reasoning

Resolving the P vs. NP question could crack countless logistical problems, render all of cryptography meaningless, and even reveal the nature of what humans can know.

In simple terms, P consists of problems that are easy to solve, such as sorting a list alphabetically. NP consists of problems whose solutions are easy to check, such as Sudoku.

Since all the problems that are easy to solve are also easy to check, the problems in P also belong to NP.

But some NP problems seem extremely hard to solve: there is no obvious way to find a Sudoku solution without first trying out many possibilities.
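
As a minimal sketch of the "easy to check" side (an illustration of the NP property, not code from the article), the following Python function verifies a completed 9x9 Sudoku grid in time polynomial in the grid size, even though finding a valid grid in the first place may require searching through many possibilities:

```python
def is_valid_sudoku(grid):
    """Check a completed 9x9 Sudoku grid in polynomial time.

    grid: list of 9 lists of 9 ints in 1..9. Returns True if every
    row, every column, and every 3x3 box contains each digit once.
    """
    def ok(cells):
        return sorted(cells) == list(range(1, 10))

    rows_ok = all(ok(row) for row in grid)
    cols_ok = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
    boxes_ok = all(
        ok([grid[r][c]
            for r in range(br, br + 3)
            for c in range(bc, bc + 3)])
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows_ok and cols_ok and boxes_ok
```

Verification touches each cell only a constant number of times; it is constructing the filled-in grid that may require exponential search.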

By studying these introspective questions, researchers have learned that the difficulty of proving computational hardness is closely tied to fundamental questions that at first glance seem unrelated.

How hard is it to find hidden patterns in apparently random data? And if truly hard problems exist, how often do they arise?

Scott Aaronson, a complexity theorist at the University of Texas at Austin, said, "It's clear that meta-complexity is closely tied to the core of things."

The P vs. NP problem has led researchers on an arduous journey into meta-complexity theory.

For meta-complexity researchers, however, this journey into uncharted territory is its own reward.

Top 2: Large models show emergence, but who can open the black box?

"Emergence" may well be the word of the year in the field of large models.

In a 2022 paper, the OpenAI team gave a definition of "emergence":

If an ability does not exist in models at a smaller parameter scale but does exist in models at a larger scale, then that ability is emergent.

For example, if you give a large model a few emojis and ask which movie they represent, the LLM will predict the next token from its existing knowledge and eventually give an answer.

Answer: Finding Nemo

Not only that, large models can handle many tasks that small models cannot, many of which seem to have little to do with analyzing text: from multiplication, to generating executable computer code, to apparently decoding movies from emojis.

OpenAI's research also shows that for some tasks and some models, there is a complexity threshold beyond which the model's capabilities grow rapidly.

At the same time, it must be admitted that as LLM capabilities continue to grow, new concerns have arisen.

Not only do these powerful AI systems fabricate falsehoods and reproduce social biases, they also fail at some of the most basic elements of human language.

On top of that, these AIs remain black boxes whose internal reasoning cannot be inspected.

However, research on opening the AI "black box" is also emerging. For example, the OpenAI team used GPT-4 to interpret 300,000 neurons in GPT-2, and in a more recent study even proposed using GPT-2 to supervise GPT-4.

All in all, there is still a long way to go to uncover the inner workings of large models.

Top 3: Decades later, a fast algorithm finds the shortest path

Computer scientists have long known of algorithms that can quickly traverse graphs (networks of nodes connected by edges) in which each connection carries a cost, such as the toll on a road between two cities.

But for decades, no fast algorithm was known for finding shortest paths when a connection can yield a payoff as well as impose a cost (that is, when edge weights may be negative).

At the end of last year, three researchers from Rutgers University came up with a workable algorithm.

Their new algorithm finds the shortest paths in a graph from a given "source" node to every other node, nearly matching the speed achieved long ago by algorithms for positive weights.

Address: https://arxiv.org/abs/2203.03456

It is worth mentioning that Dijkstra's algorithm, developed by the Dutch computer scientist Edsger Dijkstra as early as 1956, finds shortest paths on graphs that have only positive weights.
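
For reference, here is a minimal Python sketch of Dijkstra's classic approach (my own illustration, not code from the papers discussed). It greedily settles the nearest unsettled node, which is exactly the step that breaks down once negative weights are allowed:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra (1956): single-source shortest paths, valid only
    when every edge weight is non-negative.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from the source.
    """
    dist = {source: 0}
    heap = [(0, source)]                      # (distance so far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: toll roads between three cities
roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(roads, "A"))                   # {'A': 0, 'B': 3, 'C': 1}
```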

Building on that foundation, the researchers turned the problem on its head and delivered a fast shortest-path algorithm for graphs with negative weights.
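
The long-standing workhorse for negative weights is the Bellman-Ford algorithm, whose running time grows roughly as the number of nodes times the number of edges; the new result brings that down to near-linear time. Below is a minimal sketch of the classical baseline (again my own illustration, not the new algorithm):

```python
def bellman_ford(nodes, edges, source):
    """Classical Bellman-Ford: single-source shortest paths with
    negative edge weights allowed, in O(n * m) time.

    nodes: iterable of node labels.
    edges: list of (u, v, weight) triples.
    Assumes no negative cycle is reachable from the source.
    """
    dist = {v: float("inf") for v in nodes}
    dist[source] = 0
    for _ in range(len(dist) - 1):            # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# One edge carries a "payoff" (negative weight)
print(bellman_ford("ABC", [("A", "B", 4), ("A", "C", 1), ("C", "B", -3)], "A"))
# {'A': 0, 'B': -2, 'C': 1}
```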

In March, Xiaorui Sun, a Chinese computer scientist at the University of Illinois Chicago, proposed a faster algorithm that cracks the hardest instances of the group isomorphism problem.

Address: https://arxiv.org/abs/2303.15412

The algorithm determines precisely when two mathematical objects known as groups are essentially the same, that is, isomorphic.
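
To make the problem concrete, here is a naive, purely illustrative isomorphism check on Cayley tables (emphatically not Sun's algorithm): it tries every possible relabeling of the elements, the kind of brute-force cost that faster algorithms aim to avoid.

```python
from itertools import permutations

def isomorphic(t1, t2):
    """Brute-force group isomorphism test on Cayley tables.

    t1, t2: n x n tables where t[i][j] is the index of the product
    of elements i and j. Tries every bijection, so it runs in
    factorial time.
    """
    n = len(t1)
    if len(t2) != n:
        return False
    for perm in permutations(range(n)):       # candidate relabeling f
        if all(perm[t1[i][j]] == t2[perm[i]][perm[j]]
               for i in range(n) for j in range(n)):
            return True                       # f(i*j) == f(i)*f(j) everywhere
    return False

# The cyclic group Z4 versus the Klein four-group: not the same group
z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
klein = [[i ^ j for j in range(4)] for i in range(4)]
print(isomorphic(z4, klein))                  # False
print(isomorphic(z4, z4))                     # True
```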

Other big algorithm news this year included a new method for computing prime numbers that combines random and deterministic approaches, the refutation of a long-held conjecture about the performance of algorithms with limited information, and an analysis showing how a counterintuitive idea can improve gradient descent algorithms, which are ubiquitous in machine learning and many other fields.

Top 4: AI image generation has exploded, and the technology behind it took years to mature

This year, image generation tools such as DALL·E, Midjourney, and Stable Diffusion became wildly popular.

Just give a text prompt, and the AI can create a work of art according to your requirements.

The technology behind these AI artists, however, has been years in the making.

Diffusion models are based on the physics of how substances diffuse through a fluid: they effectively turn formless noise into sharp images, like separating cream that has been evenly stirred into coffee and restoring it to a distinct shape.
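
As a rough sketch of the idea (my own illustration, not any particular paper's code), the forward process blends a clean image with Gaussian noise, and a model is trained to predict and remove that noise; generation then runs the learned denoiser backwards, starting from pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x0, alpha_bar_t):
    """One jump of the diffusion forward process: keep a fraction
    alpha_bar_t of the clean signal x0 and fill the rest with noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1 - alpha_bar_t) * eps
    return xt, eps                            # a network learns to predict eps

x0 = np.linspace(-1, 1, 16)                   # a tiny 1-D stand-in for an image
for alpha_bar in (0.99, 0.5, 0.01):
    xt, _ = forward_noise(x0, alpha_bar)
    print(f"signal kept {alpha_bar:.2f} -> std of noisy sample = {xt.std():.2f}")
```

Sampling reverses this: start from pure noise and repeatedly subtract the predicted noise, which is the "un-mixing the cream from the coffee" direction.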

In addition, AI tools have also made progress in sharpening existing images, although this is still a long way from the TV scenes where detectives repeatedly shout "Enhance!"

Recently, researchers have begun to study other physical processes beyond diffusion to find new ways to generate images in machines.

One of the new methods is based on the Poisson equation, which describes how the force of an electric field varies with distance. In some cases, this approach has proven better at handling errors and easier to train than diffusion models.
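
For reference, the equation itself is standard electrostatics rather than anything specific to one generative-model paper: it relates a potential \phi to a source density \rho,

```latex
\nabla^{2}\phi(\mathbf{x}) = -\frac{\rho(\mathbf{x})}{\varepsilon_{0}}
```

Roughly speaking, the generative approach treats training data as charges and produces new samples by tracing the field lines that this kind of equation defines.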

Top 5: After 30 years, quantum factoring gets a major speedup

For decades, Shor's algorithm has been seen as a symbol of the power of quantum computers.

This algorithm, developed by Peter Shor in 1994, allows a quantum computer to exploit the properties of quantum physics to break large numbers into prime factors far faster than a classical computer can, posing a potential threat to most of today's Internet security systems.
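
To show what the quantum speedup actually buys, here is a purely classical, illustrative sketch (my own, not from Shor's paper): factoring N reduces to finding the period r of a^x mod N, and Shor's quantum subroutine finds that period exponentially faster than the brute-force loop below.

```python
from math import gcd

def find_period(a, N):
    """Brute-force order finding: smallest r with a^r = 1 (mod N).
    This is the step a quantum computer performs exponentially faster."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Classical post-processing: turn the period into a factor of N."""
    if gcd(a, N) != 1:
        return gcd(a, N)                      # lucky guess already shares a factor
    r = find_period(a, N)
    if r % 2 == 1:
        return None                           # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                           # trivial square root: retry
    return gcd(y - 1, N)

print(shor_factor(15, 7))                     # 3, since 15 = 3 * 5
```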

In August 2023, a computer scientist developed a faster variant of Shor's algorithm, the first major improvement since the algorithm was invented.

Even so, a truly practical quantum computer remains out of reach.

In practice, small errors quickly accumulate, ruining calculations and wiping out the advantages of quantum computing.

In fact, late last year a group of computer scientists showed that, for one particular problem, classical algorithms perform about as well as error-prone quantum algorithms.

But there is hope: an August study showed that certain error-correcting codes, known as low-density parity-check (LDPC) codes, are at least ten times more efficient than the current standard.

Top 6: The hidden secrets of cryptography + AI

An unusual line of research sits at the intersection of cryptography and artificial intelligence.

Recently, a team of computer scientists demonstrated that backdoors can be embedded in machine learning models in a way that is not only virtually undetectable, but whose concealment rests on the same kind of logic that underpins modern state-of-the-art cryptography.

However, the team is primarily working on simpler models, so it's unclear whether this finding can also be applied to the more complex models used in today's AI technology.

Still, these results point to a possible direction for defending systems against such security vulnerabilities in the future.

Because of these security concerns, Cynthia Rudin strongly recommends using interpretable models to gain a deeper understanding of the inner workings of machine learning algorithms.

At the same time, researchers like Yael Tauman Kalai are making progress in the areas of security and privacy, which are extremely important for upcoming quantum technologies.

In the related field of steganography, new results show how information can be hidden in machine-generated media in a perfectly secure way.

Top 7: Injecting semantics into hyperdimensional vectors makes AI reasoning more efficient

Despite the power of artificial intelligence, there are two major problems with the artificial neural networks that underpin most modern systems:

1. They consume enormous resources during training and operation

2. They easily become incomprehensible "black boxes"

As a result, many researchers believe it may now be time for a new approach:

Representing each concept with a single hyperdimensional vector made up of thousands of numbers, rather than with artificial neurons that each detect an individual feature or characteristic.

Such a system is more versatile, more capable of handling errors, and much more computationally efficient.

Moreover, researchers can directly manipulate the ideas and relationships considered by the model to gain a deeper understanding of its reasoning process.

In March of this year, a team from IBM Research in Zurich combined hyperdimensional computing with neural networks to solve "Raven's Progressive Matrices", a classic problem in abstract visual reasoning.

The test presents images of geometric objects in a 3x3 grid. One position in the grid is blank, and the test-taker must select, from a set of candidate images, the one that best fits the blank.

To solve this problem using hyperdimensional computing, the team first created a dictionary of hypervectors to represent the objects in each image.

Each hypervector in the dictionary represents some combination of an object and its properties.

The team then trained a neural network to examine an image and generate a bipolar hypervector (one whose elements are each +1 or −1) that is as close as possible to some hypervector in the dictionary.

As a result, the resulting hypervector contains information about all the objects in the image and their properties.
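
To make these operations concrete, here is a small, hypothetical sketch of the basic hyperdimensional toolkit (binding, bundling, and similarity); the shape/color dictionary and all names are my own illustration, not IBM's code:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 10_000                                  # dimensionality of the hypervectors

def random_hv():
    """Random bipolar hypervector: every element is +1 or -1."""
    return rng.choice([-1, 1], size=DIM)

def bind(a, b):
    """Bind two hypervectors (element-wise product): 'object WITH attribute'."""
    return a * b

def bundle(*hvs):
    """Bundle hypervectors (sum, then take the sign): a set of bound pairs."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product; close to 0 for unrelated hypervectors."""
    return float(a @ b) / DIM

# Hypothetical dictionary: shapes and colors as atomic hypervectors
shapes = {name: random_hv() for name in ("circle", "square", "triangle")}
colors = {name: random_hv() for name in ("red", "blue")}

# Encode an image containing a red circle and a blue square
scene = bundle(bind(shapes["circle"], colors["red"]),
               bind(shapes["square"], colors["blue"]))

# Query: what color is the circle? Binding again acts as unbinding (x * x = 1)
query = bind(scene, shapes["circle"])
for name, hv in colors.items():
    print(name, round(similarity(query, hv), 2))   # 'red' scores far higher
```

Because the vectors are so high-dimensional, random hypervectors are nearly orthogonal, which is what makes the dictionary lookup robust to noise.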

The method they proposed was nearly 88% accurate on one set of problems, compared with less than 61% for a solution using only neural networks.

Hyperdimensional computing is still in its early stages, but with its application in larger-scale testing, we may see this new approach begin to show its potential.

Resources:

https://www.quantamagazine.org/the-biggest-discoveries-in-computer-science-in-2023-20231220/
