Editor: Green Lotus
Scientists can take decades to pin down the laws of physics, from how gravity acts on objects to why energy cannot be created or destroyed.
Researchers at Purdue University in the United States have proposed parsimonious neural networks (PNNs), which combine neural networks with evolutionary optimization to find models that balance accuracy and simplicity, reducing the time to discover a physical law to a few days. The study is one of the first demonstrations of discovering the laws of physics from data using machine learning, and the tool is available online through nanoHUB.
The study was published in Scientific Reports under the title "Parsimonious neural networks learn interpretable physical laws."

Machine learning (ML) plays an increasingly important role in the physical sciences, and significant advances have been made in embedding domain knowledge into models.
One of the main drawbacks of using ML in the physical sciences is that models often fail to learn basic physical properties of the system at hand, such as constraints or symmetries, limiting their ability to generalize. In addition, most ML models lack interpretability. In many fields these limitations can be compensated for by large amounts of data, but in fields such as materials science, obtaining data is expensive and time-consuming, and often simply impossible.
To address this challenge, progress has been made in using basic physics knowledge to improve model accuracy and/or reduce the amount of data required during training.
Less explored is the use of ML for scientific discovery, i.e., extracting theoretical laws from observational data.
Here, the researchers propose parsimonious neural networks (PNNs), which combine neural networks with evolutionary optimization to find models that balance accuracy and simplicity.
Starting from a flexible neural network, PNNs are identified as the networks that explain the data in the simplest possible way.
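As a rough sketch of this search (not the authors' exact objective; the function name and the complexity measure here are illustrative), each candidate network can be scored by its data error plus a parsimony penalty, and a genetic algorithm minimizes this combined fitness:

```python
# Minimal sketch of a parsimony-penalized fitness for model search.
# 'p' sets the trade-off: larger p favors simpler models over accurate ones.
def pnn_fitness(rmse: float, complexity: float, p: float = 1.0) -> float:
    """Lower is better: prediction error plus a weighted simplicity penalty."""
    return rmse + p * complexity
```

Sweeping p then traces out a family of models ranging from very simple to very accurate.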
The power and versatility of the approach are demonstrated by rediscovering models of classical mechanics and by predicting the melting temperature of materials from fundamental properties.
Two examples
As a first example, consider the dynamics of a particle in an external Lennard-Jones (LJ) potential, with and without friction.
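For concreteness, the LJ potential and the corresponding one-dimensional force can be written as in the sketch below (reduced units; a generic setup, not necessarily the paper's exact parameters). For the friction case, a damping term proportional to velocity is subtracted from the force.

```python
def lj_potential(x, eps=1.0, sigma=1.0):
    """Lennard-Jones potential U(x) = 4*eps*((sigma/x)**12 - (sigma/x)**6)."""
    sr6 = (sigma / x) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def lj_force(x, eps=1.0, sigma=1.0, gamma=0.0, v=0.0):
    """Force F = -dU/dx, with an optional friction term -gamma*v."""
    sr6 = (sigma / x) ** 6
    return 24.0 * eps * (2.0 * sr6**2 - sr6) / x - gamma * v
```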
Feed-forward neural networks (FFNNs) can match the training/validation/test data well; however, their predictive power beyond the data is poor, and they are uninterpretable.
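Such a baseline might look like the following PyTorch sketch: a fully connected network trained to map the current (position, velocity) pair to the next one. The layer sizes and training setup are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class StepFFNN(nn.Module):
    """Black-box one-step integrator: (x_t, v_t) -> (x_{t+dt}, v_{t+dt})."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),
        )

    def forward(self, state):
        return self.net(state)

model = StepFFNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# Training would minimize loss_fn(model(states), next_states) over
# (state, next state) pairs sampled from simulated LJ trajectories.
```

Such a network can interpolate the training trajectories accurately, yet nothing in it enforces energy conservation or time reversibility, which is why its long rollouts drift.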
Having identified these shortcomings of FFNNs, parsimonious neural networks (PNNs) are introduced: start with a general-purpose neural network and use a genetic algorithm to find models of controllable simplicity. The starting network consists of three hidden layers and a two-valued output layer, and it predicts the particle's position and velocity one time step ahead of the input.
The starting neural network provides a highly flexible mapping from input position and velocity to output position and velocity, and the PNN strives to balance simplicity and accuracy in reproducing the training data. In this example, four possible activation functions are considered: linear, rectified linear unit (ReLU), hyperbolic tangent (tanh), and exponential linear unit (ELU).
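An illustrative way to encode this search (not the authors' exact genome) is to give each node a gene selecting one of the four activations, with the linear choice letting parts of the network collapse into simple arithmetic; the genetic algorithm then mutates and recombines these genes:

```python
import random
import numpy as np

# The four candidate activation functions named in the text.
ACTIVATIONS = {
    "linear": lambda z: z,
    "relu":   lambda z: np.maximum(0.0, z),
    "tanh":   np.tanh,
    "elu":    lambda z: np.where(z > 0.0, z, np.exp(z) - 1.0),
}

def random_genome(n_nodes):
    """One activation choice per hidden/output node."""
    return [random.choice(list(ACTIVATIONS)) for _ in range(n_nodes)]

def mutate(genome, rate=0.1):
    """Point mutation: occasionally reassign a node's activation."""
    return [random.choice(list(ACTIVATIONS)) if random.random() < rate else g
            for g in genome]
```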
(a) Training/validation/test RMSE of the PNN models on set 1, compared with feed-forward networks. (b) Energy conservation of PNN1 is comparable to that of the Verlet integrator (TE: total energy). (c) Reverse trajectories generated by the forward PNN1 show good time reversibility. (d) Visualization of PNN model 1, found by the genetic algorithm, which predicts position and velocity one step ahead.
The PNN produced by genetic optimization with p = 1 reproduces the training, validation, and test data more accurately than the complex FFNN. Notably, the optimal PNN (PNN1) also exhibits excellent long-term energy conservation and time reversibility: PNN1 learns that the dynamics are time reversible and that total energy is a constant of motion. This is in stark contrast to the physics-agnostic FFNNs and even to simple physics-based schemes such as the first-order Euler integrator.
In the second example, the PNN learns classical mechanics with friction proportional to velocity, and it finds the same stable integrator based on the position Verlet method, all from observational data.
The emergence of Verlet-style integrators from data is remarkable: owing to its stability, this family of integrators is the standard choice for molecular dynamics simulations. Importantly, the researchers also found more complex models that reproduced the data more accurately than PNN1 but were neither time reversible nor energy conserving. This suggests that simplicity is essential for learning models that generalize and provide insight into the physical system at hand.
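For reference, the position Verlet scheme that PNN1 effectively rediscovers is the standard half-drift/kick/half-drift update, shown here using the `lj_force` sketch from above:

```python
def position_verlet_step(x, v, dt, force=lj_force, mass=1.0):
    """One position-Verlet step. The scheme is time reversible and
    symplectic, which underlies its excellent long-term energy behavior."""
    x_half = x + 0.5 * dt * v              # drift half a step
    v_new = v + dt * force(x_half) / mass  # velocity kick at the midpoint
    x_new = x_half + 0.5 * dt * v_new      # drift the remaining half step
    return x_new, v_new
```

Running the update with dt -> -dt retraces the trajectory, which is exactly the reversibility test shown for PNN1.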
Discovering melting-temperature laws
To demonstrate the power and versatility of PNNs, the researchers applied them to discover laws of melting from experimental data. The goal is to predict the melting temperature of a material from basic atomic and crystal properties.
To this end, experimental melting temperatures and basic physical quantities were collected for 218 materials, including oxides, metals, and other single-element crystals.
The melting laws discovered by the PNN. (The red dot represents the famous Lindemann law, while the blue dots indicate other models found. The black dotted line marks the Pareto frontier of models, some of which outperform Lindemann's law while also being simpler. Three models are highlighted and labeled.)
The PNN models represent various trade-offs between accuracy and simplicity, from which the Pareto frontier of best models can be defined. The PNN method found several simple and accurate expressions. The simplest non-trivial relation, PNN A, makes the melting temperature proportional to the Debye temperature:
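In symbols (the fitted constant, written c1 here, is illustrative; TD denotes the Debye temperature):

$$T_m \approx c_1\, T_D$$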
This makes physical sense: the Debye temperature is related to the characteristic atomic vibration frequency, and harder, more strongly bonded materials tend to have both higher Debye temperatures and higher melting temperatures. Next up in complexity, PNN B adds a correction proportional to the shear modulus:
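In symbols, with G the shear modulus and c1, c2 illustrative fitted constants:

$$T_m \approx c_1\, T_D + c_2\, G$$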
This also makes physical sense, as shear stiffness is closely related to melting. At slightly higher complexity than PNN B, the famous Lindemann melting law was rediscovered:
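A standard statement of the Lindemann law (with physical constants folded into C; m is the atomic mass and V the atomic volume) is:

$$T_m = C\, f^2\, m\, T_D^2\, V^{2/3}$$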
Here TD is the Debye temperature of the material, and f and C are empirical constants. Remarkably, this law, derived in 1910 from physical intuition, lies very close to, but not on, the Pareto frontier in the accuracy-complexity space. For completeness, the model with the lowest RMS error, PNN C, is also reported.
Very interestingly, this model combines the Lindemann expression involving the Debye temperature with the bulk (rather than shear) modulus. Given the expressions above, this combination is not surprising, but the preference for the bulk over the shear modulus is unclear at this point and should be explored further.
In summary, the researchers propose parsimonious neural networks capable of learning interpretable physical models from data; importantly, PNNs can extract the underlying symmetries of the problem at hand and provide physical insight. This is achieved by balancing accuracy against simplicity, with an adjustable parameter that controls the relative importance of the two terms and generates a series of Pareto-optimal models. Future work should explore other complexity metrics.
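As an illustration of how such a frontier is read off (the function name and dominance rule here are generic, not from the paper), non-dominated models can be extracted from (complexity, error) pairs as follows:

```python
def pareto_frontier(models):
    """models: list of (complexity, rmse) pairs.
    Keep each model not dominated by a simpler, equally accurate one."""
    frontier = []
    for c, e in sorted(models):      # ascending complexity
        if not frontier or e < frontier[-1][1]:
            frontier.append((c, e))  # strictly better error than all simpler models
    return frontier

# Example:
# pareto_frontier([(1, 0.9), (2, 0.5), (3, 0.7), (4, 0.4)])
# -> [(1, 0.9), (2, 0.5), (4, 0.4)]
```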
Tool address: https://nanohub.org/resources/pnndemo
Reference: https://techxplore.com/news/2021-12-scientists-physical-laws-faster-machine.html