
Adaptive Technology: A Hundredfold Speedup in Five Years, a CAE Solver Operation That Cannot Be Copied

Author: Star of Inspiration

Editor's Note: This article was written by Guo Zhipeng, founder of Adaptive Technology, a company backed by Star of Inspiration. In it, he shares his path through the CAE quest. As Guo Zhipeng says, "The strongest competitiveness of Adaptive Technology lies in our understanding of, and ability to develop, the core solver."

Enjoy the read below.

01

"Can you describe in one sentence what a solver is?" A lot of investor friends have asked me this.

Before talking about solvers, we need a clear concept of CAE. CAE, computer-aided engineering, uses solvers to analyze and simulate real physical phenomena, reproducing engineering and production processes in a "virtual" way. If CAD is the tool we use to assist design, then CAE's job is to use the "virtual" to evaluate these "actual" designs; that is the core and most essential role of CAE. As the most important core element, the solver's job is to "accurately" capture and reproduce the real physical phenomenon, and then provide the criteria for judging the quality of the production process.

On this basis, if I had to describe the solver in one sentence, I would say: "A CAE solver is a package of matrix operations with the ability to describe specific physical phenomena."

First, it must be able to solve a certain type of physical problem (by solving the system of partial differential equations, the PDEs, that describes that physical process);

Second, it is a code package or library;

Furthermore, the main function of this library is to perform matrix operations (logical and/or algebraic).

This may sound complicated, but a CAE solver is not essentially different from the calculator we use every day. When we type numbers and operators into a calculator and press the equals sign "=" that orders the computation, the whole calculation is carried out by a "solver" embedded in the device; we cannot see it, but we know it is there.

A CAE solver is similar. Take a fluid dynamics solver: to run a solution we also need to enter parameters, such as material properties (e.g. viscosity), temperature, initial conditions (e.g. velocity, pressure), and boundary conditions (slip, no-slip, mirror, symmetry). These parameters are entered into the software "calculator"; click the equals sign "=" that orders the computation and you get a result, on which the subsequent analysis is based.
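To make the "calculator" analogy concrete, here is a minimal sketch, in Python rather than the Fortran 90 used for the real solver, of how a physical law becomes a matrix operation: 1D steady heat conduction discretized into a small linear system. The rod length, conductivity, source term, and boundary temperatures are illustrative assumptions, not values from the article.

```python
# Minimal sketch (not the author's solver): the 1D steady heat conduction equation
# -k * d2T/dx2 = q, discretized on n interior points, becomes a tridiagonal system
# A T = b, and "solving the physics" reduces to a matrix operation.
import numpy as np

n, L = 50, 1.0                      # interior points, rod length [m] (assumed)
k, q = 1.0, 100.0                   # conductivity [W/(m K)], heat source [W/m^3] (assumed)
T_left, T_right = 20.0, 80.0        # fixed-temperature boundary conditions [deg C]
h = L / (n + 1)

# Assemble the tridiagonal finite-difference matrix for -k * T''
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) * (k / h**2)

b = np.full(n, q)                   # source term on the right-hand side
b[0] += k * T_left / h**2           # boundary values fold into the right-hand side
b[-1] += k * T_right / h**2

T = np.linalg.solve(A, b)           # the "=" button: one matrix solve gives the field
print(f"mid-rod temperature: {T[n // 2]:.1f} deg C")
```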

Calculating the morphology of a flow is more complex than a simple calculator operation. First, flow is a process that evolves over time; its solution is not a single constant but a solution at every moment. Second, a flow pattern cannot be assessed with a single number; it needs multidimensional indicators such as velocity, pressure, turbulence, and interface shape, and even vorticity, curvature, and energy dissipation. People have invented a series of dimensionless numbers to measure the state of a flow, the most famous being the Reynolds number, the Prandtl number, and the Rayleigh number. The reason there are so many indicators is that the degrees of freedom of the flow itself (the variables, space, and time involved) are so large that we cannot simply label it "good" or "bad".
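As a side note, these dimensionless numbers are just algebraic combinations of the flow parameters mentioned above. A small sketch, with assumed water-like property values rather than anything from the article, shows how they are formed.

```python
# Minimal sketch: a few common dimensionless numbers of a flow.
# The fluid properties below are illustrative placeholders.

def reynolds(rho, u, L, mu):
    """Reynolds number: ratio of inertial to viscous forces."""
    return rho * u * L / mu

def prandtl(mu, cp, k):
    """Prandtl number: ratio of momentum diffusivity to thermal diffusivity."""
    return mu * cp / k

def rayleigh(g, beta, dT, L, nu, alpha):
    """Rayleigh number: governs the onset of natural convection."""
    return g * beta * dT * L**3 / (nu * alpha)

# Example: a water-like fluid in a 0.1 m channel at 1 m/s (assumed numbers)
rho, mu, cp, k = 1000.0, 1.0e-3, 4180.0, 0.6      # kg/m^3, Pa*s, J/(kg*K), W/(m*K)
print(f"Re = {reynolds(rho, 1.0, 0.1, mu):.3g}")   # ~1e5, i.e. a turbulent regime
print(f"Pr = {prandtl(mu, cp, k):.3g}")            # ~7, typical of water
```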


[Figure: Cornerstones of the five disciplines of industrial software [1]]

It is not hard to see that developing a CAE solver requires the cross-integration of multiple disciplines plus strong programming ability; the R&D threshold is obviously high. As for whether it is worth developing an autonomous CAE solver, I often give an example: the CAE solver is to industrial CAE software what the aero engine is to an airplane, its heart.

GE's and Pratt & Whitney's aero engines are far ahead of ours in technical strength, so why not simply buy them and mount them on our aircraft? Yet the aero engine, known as the "pearl in the crown of industry", is a piece of core equipment that integrates many cutting-edge technologies, and accelerating its independent, controllable domestic substitution is urgent: by the end of 2020, the approved investment in the "aero engine and gas turbine" national science and technology major project (the "two engines" project) had reached 300 billion yuan. [2]

In the same way, the CAE solver is the engine and the heart of industrial CAE software, and for a long time this heart has come from across the ocean, from the United States, Germany, and other industrially developed countries, supplying the driving force for mainland industry. We have enjoyed the sweetness of these "American-flavored" and "German-flavored" products, and although we have plenty of complaints about them, they remained the only choice in front of us. However, events such as the sanctions on Huawei and the MATLAB ban woke everyone from the comfort of the slowly boiling pot: this heart has gradually become a "time bomb", and no one knows when it might paralyze the body that depends on it. So everyone began to raise the banner of "domestic substitution", and the Ministry of Industry and Information Technology listed industrial software as one of the "new five fundamentals" of the 14th Five-Year Plan. The overall policy environment is thriving, waiting for CAE vendors to sharpen their tools and overtake on the curve.

The biggest obstacle in front of all CAE vendors is the solver.

Others have spent ten or even twenty years on the R&D behind their results; to chew through it in a year or two is unrealistic in most people's eyes, and very few have the ability and courage to gnaw on this hard bone. It was against this harsh reality that I founded Adaptive Technology and decided to devote myself to developing an autonomous CAE solver.

02

Five years ago, I was the "first generation" of Adaptive Technology and, at the time, its only algorithm engineer. As a "researcher", my idea was simple: prototype the algorithm, optimize it, and implement it in Fortran 90, a less prescriptive but efficient language. Five years of doctoral work plus nearly seven years of work experience at home and abroad had given me a deep understanding of physical models and numerical algorithms. The most direct benefit was that, for the physical model behind a given engineering problem, I no longer needed to try every numerical algorithm and screen for the best one; I already knew which algorithm was most effective for that type of model. So when developing the fluid dynamics solver, I chose LBM, the lattice Boltzmann method, without hesitation. I told myself that this solver must not become just a pile of in-house "lab" code; we had to turn it into a real computing engine that serves industrial customers and creates real value.

At that time I stood at a crossroads. One road was to download an open-source software library, do secondary development on top of it, and call the result "our own" solver; the other was to start from zero: analyze and decompose the model, select and adapt the numerical algorithms, code, optimize, iterate, and finally arrive at a truly autonomous solver.

Choosing a good open-source library can be described as "standing on the shoulders of giants". After all, those packages were written by large numbers of algorithm and software engineers, and their total workload can reach hundreds of person-years. If the library is good and high quality, with well-specified, efficient code, then secondary development and iteration on top of it by a sufficiently strong team may produce an even better code base; that is essentially a process of inheritance and upgrading. From another point of view, though, this inheritance and upgrading carries great risk.

First of all, how high you can stand "on the shoulders of giants" depends on how tall the giant is. As I said, the open-source library must be high quality, but how do you judge that? And even if it really is high quality, whether you can download or buy it is another question. The Internet is flooded with open-source code, and most so-called open-source solvers are, frankly, rubbish with no real competitiveness. Truly valuable open-source libraries are rare, and much of the time they are inaccessible for a range of political reasons.

Secondly, most of the open-source code bases that can actually be obtained come from universities, research institutes, national laboratories, or "amateur" hobbyists. They are built on code packages written for particular national or government projects, and their essential purpose is to facilitate research: researchers combine these "ready-made" codes with their own secondary development so that they can carry out their studies more quickly. In general, these code bases are not intended for commercial use; they merely provide a framework that lets researchers generate new models and knowledge more efficiently.

In addition, open-source libraries have a series of problems with computational stability. Because of their research-oriented nature, developers care more about whether the code can solve one or a few classes of problems than about spending great effort covering every situation (especially complex boundary conditions) to improve and harden the code's stability. If code from two completely different fields then has to be adapted together, many hidden stability risks appear, and the final stability is determined entirely by the technical level of whoever comes after.

So a solver made "one's own" on the basis of open source depends both on the quality, or height, of the open-source code itself and on the technical level of the successor. If the successor's level is below that of the original authors, the result will at best reach the original's level; if it is far above, the multiplier effect of the two can come into play. For CAE solvers, however, especially the harder fluid and structural/forming solvers, the latter situation is very rare: these solvers are not easy to develop in the first place, and for a successor's technical strength to surpass the predecessors' is, unless there is an organized and well-maintained lineage, extremely unlikely.

So what about the second road, developing from 0 to 1?

Obviously, compared with inheriting open source, the uncertainty of independently developing a CAE solver is greater. In the worst case of building on open source you at least end up with "a solver", whereas independent exploration may yield nothing at all, or, more likely, a "stunted" solver with no competitiveness: such a solver may have some research value, but commercialization is out of the question. This kind of "dwarf" is not an exception but a common phenomenon, and it is the most fundamental reason why China has not yet had a truly commercial solver.

If a commercial solver requires the maturity of the algorithms and software to reach level 9 (3 levels each for accuracy, stability, and computational efficiency), then most of the solvers we can see so far only reach levels 3 to 6. Truly commercial, industrial, internationally competitive solvers, at least in fluid dynamics, are in fact rare worldwide. You can imagine how challenging this is. But the biggest advantage of developing a solver completely independently is that we start from a blank sheet of paper, with every degree of freedom, free to understand and explore from another dimension; the whole development process has no imposed framework or ceiling, and the only ceiling is the developer himself.


[Figure: Technical maturity levels of a solver]

For industrial applications, the most important product attribute of simulation software is actually timeliness: saving the customer time and raising their efficiency while still meeting their needs. Translated into solver capabilities, that means computational stability and computational efficiency come first, and only on that premise does one pursue computational accuracy.

Many factors affect computational accuracy, and compared with the "numerical order" of the algorithm, accuracy depends far more on boundary conditions and environmental constraints than on the solution algorithm itself. I remember a famous scholar saying: in simulation, if your input is garbage, the result must also be garbage. It has to be admitted that even a solver with "perfect" accuracy produces results with no reference value if the input stage is full of holes. So if we want to improve computational accuracy, we should focus more on getting the "input" right rather than demanding ever higher "order" from the numerical scheme.

Let me give an example: hot coffee going cold.

Suppose we want to simulate how hot coffee, continuously exchanging heat with its surroundings (the cup), becomes cool. To run this calculation we must know the thermophysical parameters of the coffee, the cup, and the other materials, that is, the material properties relevant to heat transfer: thermal conductivity, density, specific heat capacity, and so on. We also need the initial conditions, including the initial temperatures of the coffee and the cup. We then build the geometric model, feed the parameters into the heat transfer solver, and start computing results.

In fact, even without calculating, we sense that the cooling speed is not determined entirely by these parameters; it depends more on the type of cup. Take an ordinary cup and a thermos cup: obviously the coffee in the thermos cools more slowly. In this simple heat transfer process, what really determines the heat transfer efficiency is the thermal resistance between the coffee and the cup, and the thermal resistance of the thermos is clearly larger than that of the ordinary cup, so the overall heat transfer is slower. The real question is: when we compute even such a simple problem, how do we set the interfacial thermal resistance between the coffee and the cup? Obviously, if it is set badly, the accuracy of the result drops sharply.

This small example leads to a very important idea for analysis: in a simulation, the accuracy of the result is determined by a few core input parameters, such as the interfacial thermal resistance above. If we want to improve the accuracy of a real process simulation, we must grasp and attend to these core inputs, rather than picking up the sesame seeds and losing the watermelon. In this example, if the contribution of the material thermal properties to the accuracy is 1, then the effect of the interfacial thermal resistance is at least 10, and in some actual production processes it can reach 100. When I talk with scientists or with customers who actually use simulation software, I often find the same problem: people pay too much attention to parameters that are not so important and turn a blind eye to the inputs or boundary constraints that play the decisive role. In fact it is not really a blind eye; it is that these parameters are often the hardest to understand and determine.
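A minimal sketch of the coffee example, under a lumped-capacitance assumption (the whole cup of coffee at one temperature), makes the point numerically: varying only the assumed interfacial heat transfer coefficient h changes the predicted cooling far more than plausible changes in the material properties would. All numbers are illustrative, not measured.

```python
# Minimal sketch (not the author's solver): lumped-capacitance cooling of coffee
# through the cup wall, showing how strongly the result depends on the interfacial
# heat transfer coefficient h (the reciprocal of thermal resistance).
import numpy as np

def cooling_curve(h, t_end=3600.0, dt=1.0):
    """Coffee temperature history for a given interface coefficient h [W/(m^2 K)]."""
    m, cp = 0.25, 4180.0        # coffee mass [kg] and specific heat [J/(kg K)] (assumed)
    A = 0.03                    # contact area between coffee and cup [m^2] (assumed)
    T, T_env = 90.0, 20.0       # initial coffee temperature and ambient [deg C]
    ts = np.arange(0.0, t_end, dt)
    Ts = np.empty_like(ts)
    for i, _ in enumerate(ts):
        Ts[i] = T
        q = h * A * (T - T_env)          # heat flow through the interface [W]
        T -= q * dt / (m * cp)           # explicit Euler update of coffee temperature
    return ts, Ts

for h in (5.0, 50.0):   # "thermos-like" vs "ordinary cup" interface coefficient
    ts, Ts = cooling_curve(h)
    print(f"h = {h:>4} W/(m^2 K) -> temperature after 1 h: {Ts[-1]:.1f} deg C")
```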

03

The question I have been asked the most is why do CAE simulation at all, and why choose to start a business in this direction?

The founding of Adaptive Technology and its focus on industrial CAE simulation have a lot to do with my earlier study and work experience. I really became interested in mathematics, especially applied mathematics, thanks to the courses and teachers I encountered during my undergraduate and graduate studies. That was when I was first exposed to mathematical modeling, linear algebra, numerical analysis, partial differential equations, and so on. Over many years of research on engineering projects I used CAE software (self-written or commercial) to run a great deal of simulation and data analysis for practical engineering problems, and it was from then on that I found I was good at using CAE to solve many seemingly complex engineering problems.


[Figure: My research records]

During my Ph.D. I was fascinated by all kinds of physics, mathematical problems, and mathematical methods, immersed in the complex and wonderful worlds of classical and quantum mechanics, differential equations and their discretization, linear algebra and matrix transformations, dimensionless and dimensionality-reduction analysis, complex functions and density functionals, Green's formulas and numerical algorithms, fractal geometry and chaos, topological structures and the Möbius strip. In my spare time I read widely in these fields, audited courses, attended relevant forums, and shuttled constantly from classroom to classroom. Interest gradually turned into ability: linear algebra (matrix operations) and numerical analysis (numerical algorithms) became two tools I could wield well, and that has been an important driving force behind my work on solver development.

When I was studying and working at Oxford University, my supervisor told me he was worried about his son, who went on holiday with a quantum mechanics text in his hand, too focused on study to enjoy himself. "He's okay, he's just different from other people," I said, because in my heart I understood him very well.

I have always considered myself more an engineer than a pure scientist, because the engineer's job is to translate science into applications. My Ph.D. project was an inverse problem, or inverse computation. A normal simulation feeds control parameters and boundary conditions into a so-called "solver" to produce results, which are then analyzed to derive new knowledge. Solving an inverse problem is the opposite: measured real-world results are fed into an "inverse calculation model" to recover the input conditions of the problem, including control parameters or boundary conditions.

What I did not expect was that the inverse calculation model from my doctoral project would later help us solve a problem that had plagued the industry for a long time when we developed our own CAE software; unexpected, but reasonable.

Our first CAE product is called "Smart Casting Super Cloud", a simulation platform for the die-casting industry. The industry problem it tries to solve is very similar to the hot-coffee example above: as molten aluminum alloy fills the cavity, it continuously exchanges heat with the mold, and the speed at which the metal cools throughout the process depends mainly on the interfacial thermal resistance between the liquid metal and the mold wall, or equivalently on the interfacial heat transfer coefficient (the reciprocal of that resistance). Whether this parameter is set correctly significantly affects the accuracy of the overall heat transfer calculation. In fact, for many years, whenever people used simulation software this parameter was "guessed", and in many cases simply "filled in at random". We found that, as a result, the solidification time (liquid to solid) customers predicted by calculation was often 2 to 3 times the actual solidification time, meaning the simulation error could reach 100% to 200%; the interesting thing is that people still based their later analysis on this result. The core problem is that almost no one knew how the heat transfer coefficient should be set.

How do we solve this problem? It is actually simple: do experiments, continuously collect actual temperatures across a large number of die-casting production runs, then invert them to recover the heat transfer coefficient between the metal and the mold, and finally put that value back into the model to obtain an accurate solution. That was exactly the main work of my Ph.D. project. After obtaining these heat transfer coefficients, we found that for a process as complex as die casting the coefficient is not a fixed value but one that changes with spatial position and with time; we call it the 4D interfacial heat transfer coefficient. Thanks to this model, the accuracy of the temperature field of the solidification process computed by Smart Casting Super Cloud can exceed 95%, and in most cases the predicted solidification time is extremely close to the real casting process. One can imagine how much value the subsequent defect analysis based on this temperature field brings to actual process design and production.
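To illustrate the inverse idea in the simplest possible form (a single constant coefficient, not the 4D field described above), one can fit h by matching a simulated cooling curve to measured temperatures. The forward model and the "measurements" below are synthetic assumptions, purely for illustration.

```python
# Minimal sketch of the inverse idea (not the author's 4D method): recover a constant
# interfacial heat transfer coefficient h by matching a simulated cooling curve to
# "measured" temperatures. Forward model and data are illustrative.
import numpy as np

def forward(h, ts, T0=700.0, T_mold=200.0, C=5.0e4, A=0.01):
    """Lumped casting temperature history for a given interface coefficient h."""
    T = np.empty_like(ts)
    T[0] = T0
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        T[i] = T[i - 1] - h * A * (T[i - 1] - T_mold) * dt / C
    return T

ts = np.linspace(0.0, 60.0, 61)                      # one "measurement" per second
T_measured = forward(1500.0, ts)                     # synthetic experiment, true h = 1500
T_measured += np.random.normal(0.0, 1.0, ts.size)    # add measurement noise

# Inverse step: scan candidate h values and keep the one with the smallest misfit.
candidates = np.linspace(100.0, 3000.0, 300)
errors = [np.sum((forward(h, ts) - T_measured) ** 2) for h in candidates]
h_best = candidates[int(np.argmin(errors))]
print(f"recovered h ~ {h_best:.0f} W/(m^2 K)")       # should land close to 1500
```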

04

Several years of study and work abroad gave me more opportunities to think. Combined with different kinds of research projects, I came to understand the various numerical algorithms very deeply, and had the chance to write and use them in many situations and so to appreciate their differences in detail.

The core purpose of these numerical algorithms is only one: to find the solution, or "root", of the problem in the fastest way. On the implementation path, almost all algorithms approach the final solution by gradually swinging toward it; this is called iteration. It is like a pendulum subject to friction: it swings left and right several times and eventually settles at the bottom. But if the pendulum does not settle, it swings ever more wildly, which in numerical simulation is a calculation that "diverges".

Most practical engineering problems, once the model is decomposed and discretized, reduce to solving a system of linear equations, and the best way to judge these numerical algorithms is by the computational stability and efficiency they show when solving that system. Different algorithms use completely different ideas or paths to reach the destination. Most vector algorithms work by building a multidimensional orthogonal system and locating the coordinates of the "root" within it. For a simple three-dimensional problem, solving a linear system in three unknowns (X, Y, Z), the solution is a point in a three-dimensional orthogonal system, a specific coordinate (X*, Y*, Z*); and this orthogonal system, or "subset", may be a volume, a surface, or a line. Real problems are far more complex: in many cases we solve a linear system in an enormous number of unknowns; a mesh of 10 million cells means a linear system with 10 million unknowns. A system with three unknowns can perhaps be solved in one's head, but what about 10 million?

Coming back to vector algorithms: as the number of unknowns grows, and as our accuracy requirements rise, the familiar numerical algorithms fail one after another. Of course, different algorithms fail at different rates, and that rate reflects the capability boundary of the algorithm itself. In the limit, if the precision required already exceeds the computer's floating-point error range, then all numerical algorithms fail. Numerical computing is therefore a "joint performance" of the numerical algorithm and the computer; neither can be dispensed with.
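A minimal sketch of the "swing toward the root" idea: Jacobi iteration on a tiny, diagonally dominant linear system. The matrix and right-hand side are illustrative; real CAE systems have millions of unknowns, but the iterate-until-the-residual-stops-moving pattern is the same.

```python
# Minimal sketch: iterative solution of A x = b by Jacobi iteration. A and b are
# illustrative; diagonal dominance is what keeps the "pendulum" settling down.
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])     # diagonally dominant -> Jacobi converges
b = np.array([15.0, 10.0, 10.0])

x = np.zeros(3)                        # initial guess
D = np.diag(A)
R = A - np.diagflat(D)                 # off-diagonal part
for it in range(100):
    x_new = (b - R @ x) / D            # one Jacobi sweep
    if np.linalg.norm(x_new - x) < 1e-10:   # convergence criterion
        break
    x = x_new
print(f"converged in {it} iterations, x = {x_new}")
# If A were not diagonally dominant, the same loop could oscillate and "diverge",
# exactly the runaway pendulum described in the text.
```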

I still remember using the MAC method to solve the Navier-Stokes equations, which requires repeatedly and iteratively solving a Poisson equation. If the algorithm you choose is poor, solving that equation is exhausting: you may iterate dozens or hundreds of steps with little improvement in accuracy, and if the convergence tolerance you set is very strict, the equation may never converge, that is, remain "forever" unsolved.

To solve this problem I tried almost all the numerical algorithms, to no avail, until I spent a great deal of effort writing a multigrid algorithm. Unlike vector algorithms, multigrid eliminates errors of different scales by changing the mesh size, and in error-elimination efficiency it far outperforms vector algorithms based on elementary elimination. In this respect the multigrid algorithm is like constantly changing the magnification of the eyepiece and objective so that errors of various "sizes" can be observed and eliminated clearly, whereas a vector algorithm has only one pair of lenses: no matter how wide you open your eyes, you cannot see finer detail. The biggest inspiration multigrid gave me is that no matter how difficult a numerical problem is, there is always a best way to solve it, and more importantly, we must dare to change our thinking and look at the problem from another perspective rather than blindly "brute-forcing" the calculation.
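For the curious, here is a toy version of that idea: a recursive V-cycle with weighted-Jacobi smoothing for the 1D Poisson equation. It is purely illustrative, not the production algorithm, and the grid size and cycle count are arbitrary choices.

```python
# Minimal sketch of the multigrid idea on -u'' = f, u(0) = u(1) = 0 (1D Poisson).
# Weighted-Jacobi smoothing plus a recursive coarse-grid correction (V-cycle).
import numpy as np

def residual(u, f, h):
    """r = f - A u for the 3-point Laplacian on interior points."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted Jacobi: damps high-frequency error on the current grid."""
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = u_new
    return u

def v_cycle(u, f, h):
    u = smooth(u, f, h)                            # pre-smoothing
    if len(u) <= 3:                                # coarsest grid: just smooth it out
        return smooth(u, f, h, sweeps=50)
    r = residual(u, f, h)
    nc = (len(u) + 1) // 2
    r_c = np.zeros(nc)                             # restriction by full weighting
    r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    e_c = v_cycle(np.zeros(nc), r_c, 2 * h)        # solve the error equation coarsely
    e = np.zeros_like(u)
    e[::2] = e_c                                   # prolongation: copy coarse values...
    e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])           # ...and interpolate in between
    return smooth(u + e, f, h)                     # correction + post-smoothing

n = 2**7 + 1                                       # grid points including boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                   # exact solution is sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = v_cycle(u, f, h)
    print(f"cycle {cycle}: residual norm = {np.linalg.norm(residual(u, f, h)):.2e}")
```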

The earliest LBM-based fluid dynamics code was developed when I was working in the UK, mainly to simulate the flow and segregation of solutes during the growth of aluminum alloy microstructure. That version was more about implementing the algorithm's own functionality; I did not do much optimization of computational efficiency. In the first year after founding Adaptive Technology I optimized this code step by step: first I converted it from scripting languages (Matlab, Octave) to Fortran 90, did a great deal of work on its computational efficiency and boundary conditions, and tried a variety of mesh forms and difference schemes to improve accuracy, stability, and efficiency. In the end we settled on a code architecture of our own.

First we had to determine the data format and mesh architecture, which is the foundation for all the solvers built on top of it; an efficient mesh system can greatly enhance computational accuracy, stability, and even efficiency. The mesh system is like the underlying genome of a CAE simulation: each mesh cell is like a biological cell, and the forms and flexibility that cell can take directly determine the ceiling of what the later solution can achieve. After careful consideration we eventually adopted a block-structured adaptive mesh refinement architecture (BS-AMR).

The overall mesh for a problem is therefore multi-layered. For a mold, for example, we might use 5 or 6 layers: different components need different mesh sizes, so where the physics changes little we use a coarse mesh, and where the physics changes sharply, or where we care most, we use a fine mesh. Divided this way, the overall mesh is like a pyramid: the higher up, the smaller the cell size, the more accurately the problem is described, but the smaller the region covered. The great advantage of such a mesh system is that we do not need to cover the whole computational domain with the smallest cells and blow up the problem size; after hierarchical refinement the total number of cells is roughly 1/100 to 1/500 of what a globally fine mesh would require, which greatly reduces the overall scale of the computation and, while preserving accuracy, greatly improves efficiency. At the same time, using a multi-layer mesh of different sizes reflects another, longer-term consideration: in later stages of algorithm development, this multi-layer structure lets us apply multigrid algorithms to squeeze the most performance out of the solver.
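A back-of-the-envelope sketch of that cell-count argument, with assumed domain dimensions, finest cell size, and per-level coverage fractions (none of these numbers come from the article), shows how hierarchical refinement can land in the 1/100 to 1/500 range quoted above.

```python
# Rough sketch of why hierarchical refinement pays off. All numbers are assumptions.
domain = (1.0, 1.0, 0.5)                 # mold bounding box in meters (assumed)
finest = 0.5e-3                          # finest cell size needed near thin walls [m]

volume = domain[0] * domain[1] * domain[2]
uniform_cells = volume / finest**3       # covering everything at the finest size
print(f"globally fine mesh: {uniform_cells:.2e} cells")

# AMR-style layering: only a small fraction of the domain needs each finer level.
levels = [(finest * 2**k, frac) for k, frac in zip(range(4, -1, -1),
                                                   [1.0, 0.15, 0.05, 0.02, 0.005])]
amr_cells = sum(frac * volume / size**3 for size, frac in levels)
print(f"block-structured AMR: {amr_cells:.2e} cells "
      f"(~1/{uniform_cells / amr_cells:.0f} of the uniform count)")
```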

Based on this mesh architecture, I rewrote the LBM solution code in Fortran 90. At first it could only handle 2D cases, and it was then gradually upgraded to 3D. Developing a fluid dynamics solver is a continuously iterative process, and the 2D stage should not be underestimated: although the dimension is low, 2D provides a large number of accuracy verification cases, because in general the lower the dimension of a problem, the greater the chance that an exact analytical solution exists, and analytical solutions are the most convenient way to test a solver's accuracy. For 3D turbulence there are almost no cases with analytical solutions; we can only validate against experiments, and it should be noted that experiments are not necessarily "accurate" either, because obtaining valid experimental results requires precise control of many experimental parameters, and human error inevitably creeps in.

Unlike the MAC or SIMPLE algorithms, the hardest part of developing an LBM fluid solver is making the distribution functions physically meaningful at the boundaries. For different boundaries we had to constantly weigh the possible states of the various distribution functions. If the boundary conditions are done badly, the issue is not merely accuracy but whether the computation can proceed at all: in many cases, wrongly set boundary conditions make the fluid solution collapse, and nobody wants a simulation that crashes at 60% and yields nothing meaningful. This is probably the biggest difference between a physics solver and most so-called algorithms. This kind of collapse is exactly the computational stability we keep talking about: the defining attribute of a high-quality solver, or at least a necessary one, is high computational stability, because commercial customers cannot tolerate runs that fail to finish; the cost is not only the missing result but, more importantly, the time lost.

In developing a fluid dynamics solver, our work is to keep approaching the real physics. We cannot decide out of personal preference how a function should be written or how a module should behave; we must follow the real physical laws, that is, develop according to the Navier-Stokes equations. So with a genuinely developed fluid solver, especially for complex cases like turbulence, the developer cannot actually predict the result of each step; all results evolve gradually under the "rules" of physical law.
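To show what "distribution functions at the boundary" means in practice, here is a minimal D2Q9 lattice Boltzmann sketch: BGK collision, streaming, and full bounce-back walls for a body-force-driven channel flow. It is written in Python for readability (the real solver is Fortran 90 on a block-structured AMR mesh), and every parameter value is an illustrative assumption.

```python
# Minimal sketch of a D2Q9 lattice Boltzmann solver (BGK collision, bounce-back walls,
# constant body force) for periodic channel flow. Purely illustrative; not the author's
# production solver. All parameters are assumed.
import numpy as np

nx, ny, tau, steps = 64, 32, 0.8, 5000
force = 1e-6                                         # small body force along x

# D2Q9 lattice: discrete velocities, weights, and opposite directions for bounce-back
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

rho = np.ones((nx, ny))
ux = np.zeros((nx, ny)); uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)
solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True                    # bounce-back walls at top/bottom

for step in range(steps):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho + tau * force / rho  # simple forcing
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    feq = equilibrium(rho, ux, uy)
    f[:, ~solid] -= (f - feq)[:, ~solid] / tau        # BGK collision on fluid nodes only
    for i in range(9):                                # streaming along each direction
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    f[:, solid] = f[opp][:, solid]                    # full bounce-back at the walls

print("max channel velocity:", ux[:, 1:-1].max())     # profile approaches a parabola
```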

05

It was under these circumstances that I developed the first version of the 3D LBM fluid solver. In terms of accuracy, after each sub-version I would compare it against the existing test cases in our database; but its computational stability and efficiency were actually not high. I remember that computing a 30-million-cell (medium-sized) flow case, if it finished at all, generally took 30 to 50 hours, and in many cases the code would crash inexplicably partway through. Each crash was caused by some physical quantity becoming singular on some cell, but the location and moment of the failure differed from case to case.

In developing a solver, the problem we fear most is exactly this kind of erratic, unclear, untraceable bug. It gives us endless headaches; debugging one algorithmic bug can take a week or even a month with nothing to show for it. This is often the most demoralizing part, and many researchers who develop solvers give up under the weight of such problems. It is the biggest uncertainty in the whole development process, and it is different from pure software engineering: in ordinary software development we know for certain the bug can be fixed, we just may not find it quickly. When a solver goes wrong, you cannot be sure whether the model is wrong, the algorithm is wrong, or the code itself is wrong; the last is relatively easy to fix, while the first two are the real problem. Interestingly, there is still no "perfect" model for fluid dynamics, and the existence of smooth solutions to the Navier-Stokes equations is one of the seven Millennium Prize Problems, so you have to think deeply between the model and the code, constantly balancing and experimenting, to really solve the problem.

Going from 0 to 1, the next big problem is how to bring out the advantages of the model and the code and improve the solver's computational stability and efficiency. Years of development experience show that these are not independent problems; they follow and accompany each other, and there is no way to fix stability first and then efficiency (or vice versa). The more realistic approach is to attend to and solve both at the same time. Fortunately, at that time two more core algorithm engineers joined the team. They had been my junior fellow students, so there was no communication barrier between us, and after they took over my baton they began the "long" process of solver optimization.

The prerequisite for optimizing the solver was that they deeply understood the mesh architecture, the fluid model, and the LBM algorithm principles I had used, and could then modify and upgrade on top of my code. I remember countless days and nights of discussions, arguments, and quarrels between us, all with one purpose: how to make this solver better and more competitive. Large changes to the code structure and framework, transformations of the difference schemes, code standards and management, memory allocation and management tied to the data structures, even something as subtle as changing the order of nested loops (loop over Z then Y first, or over X first?) kept happening, being modified, optimized, and refined. On many nights when I came back from business trips and saw them still debugging code at the company, the mushroom clouds of unkempt hair told me they had pulled a few more all-nighters. One had been a clean-cut, handsome core employee at a foreign company; the other had been a neat, resolute researcher at a state-owned enterprise; both had by now turned into stubble-faced "doctoral students repeating a year". But I am glad they stuck with me, and it was in this constant trying, correcting, and consolidating that the second major version of our fluid solver was born: a 10x increase in computational efficiency and an increase in computational stability from 40% to 80%.
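As one concrete example of why even a loop-order change matters, the following sketch times a sweep along the contiguous memory axis of a large array against a sweep across it. The real code is Fortran 90, which is column-major, so the fast axis is the first one there; NumPy defaults to row-major, so here the fast axis is the last, but the cache-locality effect being exploited is the same.

```python
# Minimal sketch of loop-order sensitivity: traversing along the contiguous axis of a
# large array is faster than striding across it. Array size is arbitrary.
import numpy as np, time

a = np.random.rand(2000, 2000)            # C-ordered: rows are contiguous in memory

t0 = time.perf_counter()
s1 = sum(a[i, :].sum() for i in range(a.shape[0]))   # sweep along contiguous rows
t1 = time.perf_counter()
s2 = sum(a[:, j].sum() for j in range(a.shape[1]))   # sweep across strided columns
t2 = time.perf_counter()

print(f"contiguous-axis sweep: {t1 - t0:.4f} s")
print(f"strided-axis sweep:    {t2 - t1:.4f} s (same result, worse locality)")
```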


[Figure: Me with my two junior colleagues and another core partner]

At this point, the first version of our CAE product finally landed. Integrating the solver, cloud-native desktop software, and a SaaS cloud platform, we finally met customers, three and a half years after the company was founded. We launched the product nervously, hoping nothing would go wrong, that everything would be smooth, that customers would like it. But reality is always cruel: a whole year of promotion, a pile of product problems waiting to be solved, a pile of customer requirements we could not meet, questions and criticism one after another, and customers who would not pay. Sometimes we could not even get through the customer's door before the security guard chased us off: "go back where you came from".

Here is a joke at our own expense. I remember the first time we demonstrated the stand-alone version of our CAE software to a customer: we hauled a heavy workstation in the car, arrived at the factory nervous and excited, plugged our own power strip into a greasy corner socket, connected the computer, and excitedly began to show our results, and then the program suddenly crashed and exited. Looking back it is just a funny story, but at the time my heart sank completely.

Unlike the stand-alone version, the SaaS version needs no workstation; with an Internet connection you can demonstrate to customers anywhere. But we had underestimated the series of problems this SaaS form could produce: cloud computing, cloud job scheduling, and the transfer and display of huge result files (CAE results are typically several GB, even hundreds of GB) all bring great instability to the product experience. What if the network is slow? What if the supercomputing backend crashes or errors out? How do you move that much data? Unexpected problems emerged endlessly. We felt none of them on the company's 100-megabit network, but most of our customers in Jiangsu, Zhejiang, and Guangdong had 10M connections, some even slower, shared by everyone in the factory.

Another joke at our own expense: when our salesperson demonstrated the first version of the SaaS platform at a customer's factory, the page failed to load for several minutes. He stood there sweating, apologizing to the customer over and over, and in the end the embarrassed customer had to console him instead. When he got back he did not even have the strength to curse the product team, and the whole company was in an uproar.

That was the state of Adaptive Technology at the end of 2019. Do we keep going?

06

When we started the company, we had an unwritten rule: insist on doing the right and difficult things, and have the courage to face and solve problems. So at this point, however much resentment the team felt, no one said no, and no one wanted to quit.

I no longer had much time to work directly on algorithms or the solver; I had to put a lot of energy into overall product planning, investment and financing, and building the team. There was a voice in the company at the time saying we should control costs and keep the team the same size, or even shrink it, until the product really met customer needs. My view was different: no product is born perfect, and problems cannot be solved by avoiding them. I brought out that sentence again: "We must dare to face problems squarely, confront them head-on, and solve them; we cannot avoid them or shrink back." The real solution was right there: not to cut, but to increase investment in the R&D team, so that fixing the existing problems would have a foreseeable timeline, because only by improving the product could we win customers. I persuaded the team, although everyone's heart was pounding.

At that time I truly felt I was living that sentence: do the right and difficult thing. Recognizing what is right is not hard; daring to actually do it is the real test.

In 2020, while continually combing through the product, the development roadmap, and customer needs with the product team, I ran around meeting investors, tirelessly telling them the product story and our ideals. From 8:30 in the morning to 11:30 at night, the marketing director and I could meet up to six groups of investors in a 15-hour day. It was then that I developed the habit of sleeping in the car, and sleeping soundly, so that I could recharge and step out of the car at full strength. To this day I still cannot shake this "habit": as soon as I get into a car, I doze off.

Finally, in September 2020, we received our first institutional investment (TusStar and Yajie Capital), and morale within the team soared; we, who had been whipped again and again by the market and by customers, badly needed a victory.

Besides the successful financing, meeting our new sales director gave me another shot in the arm. We hit it off immediately, and despite a ten-year age difference we were highly aligned in our thinking on market strategy and team building. Over two months we re-sorted our sales strategy and playbook and began to rebuild the sales team. At the same time, the product team kept making drastic upgrades and optimizations to the product. I knew the only thing I could buy them was time, and it turned out we did stand the test: "no one stops until computational stability reaches 100%" became the consensus of the Adaptive Technology team.

As the company grew, many new members joined us, and standing on the shoulders of my junior colleagues they raised the solver further. Meanwhile, through continuous communication and feedback with customers, the stability of the product also improved greatly; that did not happen overnight, but was earned with patient slow simmering and firm determination. My junior colleague succeeded me as head of product development, my students became the core backbone of the product team, and I myself transitioned into a "half-way salesman". With everyone's efforts, many of the problems we once dared not think about or touch were overcome one by one. On the product side, our software finally transformed from an "ugly duckling" into a white swan; on the sales side, we built a closer rapport with customers, and not only could we get through the door, but heads of engineering, and even general managers, began to talk with us seriously and in depth.

Then in November 2021, through innovative optimization, the algorithm team increased computational efficiency by another 10x on top of the existing fluid solver capability, and the computational stability of the overall platform reached at least 95%.

From my first version of the fluid solver to now, computational efficiency has increased by a factor of 100, a leap of two orders of magnitude.

A large thin-walled part that used to take 1 to 2 weeks to compute, and sometimes could not be computed at all because the mesh count was too large, now takes only 2 to 3.5 hours;

a case that used to take 10 hours can now be completed in a matter of minutes.

Of course, all of this is achieved on the premise of guaranteed computational accuracy.


[Figure: Computation results from our autonomous CAE software (detail)]

Another of our product teams has even developed an instant simulation algorithm: after the conditions change, the CAE simulation result is predicted immediately, without "computing". It sounds very cool, and it is the prototype of our next generation of intelligent design products, which we expect to bring to market in 2022.

Five years have felt like one long examination. Every year the company ran into all kinds of problems, but thankfully we had strong support from many sides: investors and institutions, startup parks and incubators, not to mention far-sighted national policy. But this is not a replicable experience, let alone an operation that can be copied. What allows Adaptive Technology to persist is a team that the waves cannot break. From its founding to the present, the core team has always maintained strong cohesion, not shrinking but growing, and no matter how many ups and downs, how many quarrels and confusions, almost all of the employees who have been with us for more than three years are still standing behind me, supporting the entire company.

"The strongest competitiveness of Shichuang Technology is our understanding and development ability of the core solver." That's what I've said to every investor. The entire CAE field has five core underlying solvers: force, heat, light, sound, electricity, the most difficult is the mechanical solver, and mechanics is divided into fluid mechanics and structure, shaping mechanics. We aim at fluid mechanics and officially take a powerful step, in the future, after the development of thermal, acoustic, electrical solvers, our algorithm team will be more enthusiastic to overcome the more difficult but more common application of compressible flow, combustion and plastic deformation mechanics solver, etc., so that the Adaptive Technology CAE computing platform can not only break into the casting, injection molding industry, but also can penetrate into the automotive, aerospace and nuclear power fields, so that Supreme-CAE can help the digital transformation of the entire Chinese industry - This is the true ideal and goal of Adaptive Technology.

Once you are in, you have to give it everything. We believe that the harder something is, the more it is worth doing, and the more challenging the things we take on, the closer we come to the great company in our hearts.

"Set big goals and do difficult and correct things" can really make our blood boil.

About the author: Guo Zhipeng

Founder and CEO of Adaptive Technology. He received his bachelor's and doctoral degrees from Tsinghua University and was a Royal Society research fellow at the University of Oxford. He has long been engaged in research for the digital industry, including high-performance algorithms, high-energy X-ray inspection, image processing, and the development of related industrial materials and core processes. He is determined to create internationally competitive autonomous CAE software, break free of international monopoly, and raise and revitalize the nation's industrial level.