
What opportunities are hidden in the 2023 Tech Trend Forecast?

Text| Light Cone Intelligence Lu Yingxi

"We look at the present through a rear-view mirror; we march backwards into the future." The Canadian communication scholar Marshall McLuhan used this "rearview mirror theory" to explain how media technology evolves and exerts its influence. The theory applies just as well to technology today: it stresses reasoning from history toward the future.

But human cognition has its limits, and the times often run faster than our understanding of them, so we need both past experience and a forward-looking perspective.

Recently, Baidu, Tencent, DAMO Academy and MIT Technology Review each released their technology trend predictions for 2023. Across this year's lists, "large models", "cloud computing" and "chips" are the recurring keywords, and the intelligence driven by AI technology remains the main thread.

Specifically, the ten technology trends released by Baidu Research Institute for 2023 cover the large-model ecosystem, digital-real integration, virtual-real symbiosis, autonomous driving, robotics, scientific computing, quantum computing, privacy-preserving computation, technology ethics, and sustainable technology development. Tencent's predictions focus on the application of cutting-edge digital technologies such as high-performance computing, ubiquitous operating systems, Web3 and spatiotemporal artificial intelligence. DAMO Academy's list highlights hot areas such as cloud-native security, urban digital twins and generative AI, while MIT Technology Review concentrates on biotechnology, environmental protection, and technology and engineering.

As pathfinders, these technology companies and institutions base their judgments about the future on their own practice and on cutting-edge research. At the beginning of last year, Baidu predicted that AIGC would reach large-scale application, and the wave of popular AI-painting tools at home and abroad over the past year bears this out. This year, DAMO Academy's ten trends for 2023 likewise emphasize that generative AI is entering a period of explosive application.

In 2022, the sudden boom in digital humans, AI painting, Web3 and other fields was genuinely exciting. As the calendar turns a new page, people cannot help but ask: where will the boundary of technological development lie in 2023?

To answer that, Light Cone Intelligence has organized the 2023 technology trend forecasts released by Tencent, DAMO Academy, Baidu and MIT Technology Review into four directions: AI, digitalization, biotechnology, and technology ESG.

Trend 1: Intelligence with AI as the main line

In 2022, as artificial intelligence technologies such as autonomous driving, ChatGPT and AI painting gained popularity, AI began to enter the public consciousness. In the 2023 top-ten-technology predictions of Baidu, Tencent, Alibaba's DAMO Academy and MIT Technology Review, AI technology is likewise given a prominent chapter. Advancing AI is not easy: from underlying computing power to pre-trained large models and on to application scenarios, its development will drive transformation across many industries. Specifically:

Base layer:

RISC-V architecture:

Because the X86 and ARM instruction sets dominate the CPU-architecture market, independent design and development of domestic chips has long been a common difficulty. Thanks to the rise of the open RISC-V standard, however, independent chip development in mainland China is gradually becoming feasible.

Unlike most instruction sets, RISC-V can be freely used for any purpose: anyone may design, manufacture and sell RISC-V chips and software. Because it is open source and royalty-free, it greatly reduces the difficulty and cost of chip design and development.

Against the backdrop of technological competition among major powers, mainland chipmakers keep increasing their investment in RISC-V, hoping to overtake on the bend. The data show that of the 19 premier members of the nonprofit RISC-V International, 12 are related to mainland China.


Chiplet:

In the post-Moore era, it is increasingly difficult to improve chip performance simply by shrinking the process node, and Moore's Law faces a crisis of failure. Chiplet technology, which promises high performance, low power, high area utilization and low cost, has therefore drawn wide attention.

Simply put, the Chiplet approach resembles building with Lego bricks: a traditional SoC is decomposed into multiple die modules ("chiplets"), each fabricated separately and then combined into a complete chip through interconnect packaging. The keys to Chiplet R&D are ensuring the reliability and generality of the die-to-die connection process during packaging, and achieving high-bandwidth, low-latency data transfer between dies.

DAMO Academy forecasts that, with the establishment of the UCIe consortium in March 2022, Chiplet interconnect standards will gradually be unified and industrialization will accelerate further. Chiplets based on advanced packaging may restructure the chip design flow and reshape the chip industry landscape across the board.
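One reason chiplets can cut cost is manufacturing yield: smaller dies are less likely to contain a fatal defect, and bad dies can be discarded at wafer test before assembly. The short sketch below makes that arithmetic concrete with a standard Poisson yield model; the defect density and die areas are illustrative assumptions, not figures from any vendor.

```python
import math

def die_yield(area_cm2, defect_density=0.1):
    # Poisson yield model: probability that a die of the given area
    # contains zero fatal defects (0.1 defects/cm^2 is an assumed figure).
    return math.exp(-area_cm2 * defect_density)

# Silicon area spent per *good* product, assuming bad dies are
# discarded after wafer test (known-good-die testing).
mono_area = 8.0 / die_yield(8.0)        # one monolithic 8 cm^2 SoC
chiplet_area = 4 * 2.0 / die_yield(2.0) # same logic as four 2 cm^2 chiplets

print(f"monolithic silicon per good unit: {mono_area:.1f} cm^2")
print(f"chiplet silicon per good unit:    {chiplet_area:.1f} cm^2")
```

Under these toy numbers the chiplet version spends roughly 45% less silicon per good unit, which is why the packaging reliability discussed above becomes the limiting factor.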

Storage and computing integrated chip:

As AI applications continue to land, they place ever higher demands on chips' parallel computing, latency and bandwidth.

Traditional computers built on the von Neumann architecture separate the storage unit from the computing unit, and shuttling data between the two limits effective computing power. A compute-in-memory architecture integrates the storage unit and the computing unit directly: it sharply reduces the power wasted on data movement, cuts the compute cycles spent waiting for data to be read, and greatly improves parallelism and energy efficiency. In application scenarios such as VR/AR and autonomous driving, compute-in-memory chips offer marked advantages in bandwidth and power consumption.

Compute-in-memory has already set off a boom in many vertical fields. Driven by capital and industry, in-memory computing based on mature memories such as SRAM and NOR Flash will see large-scale commercial use in vertical markets; low-compute, low-power scenarios such as smart homes, wearables, general-purpose robots and intelligent security are expected to see upgrades of both products and ecosystems.
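The energy argument above can be sketched in a few lines. The per-operation energies below are loose, assumed figures in the spirit of published circuit surveys (off-chip memory access costing on the order of 100x a multiply-accumulate), purely to show why removing the bus transfer dominates the savings.

```python
# Illustrative energy figures in picojoules; the exact values are
# assumptions, chosen only to show the shape of the trade-off.
E_MAC = 1.0          # one multiply-accumulate in the compute unit
E_DRAM_READ = 100.0  # fetching one operand across the memory bus

def conventional_energy(num_macs, operands_per_mac=2):
    # von Neumann style: every operand crosses the bus before use.
    return num_macs * (E_MAC + operands_per_mac * E_DRAM_READ)

def in_memory_energy(num_macs):
    # Compute-in-memory: operands are consumed where they are stored,
    # so in this toy model the bus-transfer term disappears entirely.
    return num_macs * E_MAC

n = 1_000_000
print(conventional_energy(n) / in_memory_energy(n))  # -> 201.0
```

Real savings are smaller (data still moves within the array, and analog compute adds conversion overhead), but the direction of the effect is the point.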

High Performance Computing:

In recent years, driven by AI applications such as large models, AIGC, autonomous driving and protein-structure prediction, and with iteration and accumulation across architecture, hardware and software, high-performance computing has passed through the 1.0 era centered on the CPU, achieved core breakthroughs in the CPU + GPU 2.0 era, and is now moving toward a 3.0 era of "CPU + GPU + QPU".

High-performance chips are the core technology of high-performance computing, and the Chiplet technology described above will also play an important role in high-performance and high-density computing.

Quantum computing:

The 2022 Nobel Prize in Physics, awarded for experiments with quantum entanglement, brought quantum computing to many more people's attention.

In recent years, pushed forward by domestic and foreign giants such as Google and Baidu, quantum computing has made steady breakthroughs and found applications in many fields, including basic science, the digital economy, artificial intelligence and information security. According to market research firm Hyperion Research, the quantum computing market generated about $614 million in revenue in 2022 and is expected to reach $1.208 billion by 2025, a compound annual growth rate of roughly 25%.
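The quoted growth rate follows directly from the two revenue figures; a two-line check makes the arithmetic explicit:

```python
revenue_2022 = 0.614   # billions of USD, per Hyperion Research
revenue_2025 = 1.208
years = 3

# Compound annual growth rate between the two endpoints.
cagr = (revenue_2025 / revenue_2022) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 25.3%
```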

Over the past year, quantum computing achieved a new round of breakthroughs in key directions spanning software, hardware, applications and networking. In 2022, Baidu released "Qianshi", an industrial-grade superconducting quantum computer, along with "Liangxi", billed as the world's first full-platform integrated quantum software-hardware solution, enabling "plug-and-play" quantum chips.

Baidu Research Institute forecasts that in 2023 the core technologies of quantum computing will keep advancing and land in fields such as artificial intelligence, materials simulation, fintech and biopharmaceuticals, accelerating industrialization.

Model layer:

Multimodal pre-trained large models:

From single-modality generation — text-to-text, image-to-image — to text-to-image and image-to-video, pre-trained AI large models are evolving from single-modal intelligence in text, speech and vision toward multimodal fusion.

In text-to-image generation, take the Stable Diffusion model that went viral in 2022: thanks to the addition of CLIP, the model can map text features to image features to guide generation, while the diffusion model improves image quality.

As the technology continues to mature, multimodal pre-trained large models will develop toward cognitive intelligence that can reason, answer questions, summarize and create, accelerating the evolution toward general artificial intelligence.


Industry Big Model:

As AI large models evolve toward cross-language, cross-task and cross-modal capability, their generality, generalization and interpretability have improved markedly. But that alone is not enough: to industrialize the technology, the capabilities of large models must be matched to real scenarios.

For example, in 2022 Baidu and Geely released a knowledge-enhanced large model for the automotive industry, Geely-Baidu Wenxin. Built on Baidu's Wenxin ERNIE 3.0, it was further pre-trained on 23 million unannotated items from Geely Automobile's professional domains, putting a large model to work in the automotive industry.

As large-model technology matures — with training capabilities, core operator libraries and software platforms continuing to improve, and industry knowledge used to enhance the models — large models will be applied in more fields such as energy and power, finance, aerospace, media, and film and television.

Application layer:

AIGC:

From understanding to creating, this is one of the most significant advances in artificial intelligence.

Thanks to breakthroughs in large models and especially deep learning, generative AI can produce new digital video, images, text, audio or code from massive training data. Examples include ChatGPT, which can answer questions, write text and generate code, and the many AI-painting tools for text-to-image creation.

With open-source models and falling computing costs, the commercialization of generative AI has begun to appear: AI-painted works are already used in advertising and marketing, video production, game asset creation and other fields. As generative AI further improves in cognitive intelligence and controllability of generation, it will enter a period of explosive application, greatly boosting digital content production and creation.

Image: the AIGC painting "Théâtre D'opéra Spatial" (Space Opera Theatre)

Urban Digital Twin:

Based on digital twin technology, building smart cities has become the mainstream trend of urban construction. According to IDC's forecast, the scale of smart city investment will exceed 100 billion US dollars by 2025, with a five-year compound growth rate of more than 30%.

Since the concept was first proposed in 2017, the urban digital twin has been widely promoted and recognized. In the past two years its key technologies have made a leap from quantity to quality, embodied above all in scale.

At present, large-scale urban digital twins have made great progress in scenarios such as traffic governance, disaster prevention and dual-carbon management. For example, three-dimensional modeling and real-time rendering of a city's high-precision road networks, water networks, rivers and vehicles enable twin-based drills and effect evaluation for strategies such as crowd-evacuation guidance, traffic control, and the impact of weather conditions.

In the future, urban digital twin will not only serve as the R&D and testing environment of urban three-dimensional integrated unmanned system, but also a support system for realizing global perception and global scheduling.


Digital Human:

At the start of 2022, from China Mobile's digital Gu Ailing (Eileen Gu) to Tencent's 3D sign-language digital human "Lingyu", the extensive use of digital humans at the Winter Olympics introduced them to a much wider audience and created an excellent opening for their promotion.

At the R&D level, AI has greatly shortened development cycles and lowered costs for digital humans. At the intelligence level, digital humans are mainly text-driven through NLP, and as NLP large models advance, their degree of intelligence keeps rising. At the application level, digital humans are already widely used in digital marketing, entertainment and other fields, and as the era of true interconnection arrives they will become an important element and a new entry point.


Dual-engine intelligent decision-making:

In recent years, with the increasing complexity and speed of change of the external environment, enterprises need to make business decisions quickly and accurately in a complex and dynamically changing environment.

Classical decision optimization is limited in processing power and responds slowly, while machine learning is costly and learns inefficiently. Academia and industry have therefore begun building intelligent decision systems with dual engines — a mathematical (operations research) model and a data-driven model — to compensate for the limitations of classical decision optimization and operations-research algorithms.

For example, a ride-hailing platform's order dispatching can use dual-engine intelligent decision-making, modeling and analyzing data such as order volume, order time and road conditions to reach an optimal decision.
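A minimal sketch of that dual-engine pattern: a "data model" (here a stub formula standing in for a learned predictor) estimates pickup times, and a "mathematical model" then solves the driver-order assignment exactly. Every number and the prediction formula are fabricated for illustration.

```python
from itertools import permutations

# Stage 1, the "data model": a stand-in for a learned predictor that
# estimates pickup time in minutes from distance and congestion.
def predicted_minutes(distance_km, congestion_factor):
    return distance_km * 2.0 * congestion_factor  # assumed toy formula

# distances[i][j]: distance from driver i to order j, in km (fabricated).
distances = [[1.0, 4.0, 3.0],
             [2.5, 0.5, 2.0],
             [3.0, 2.0, 1.0]]
congestion = [[1.2, 1.0, 1.5],
              [1.0, 1.1, 1.0],
              [2.0, 1.0, 1.3]]

cost = [[predicted_minutes(distances[i][j], congestion[i][j])
         for j in range(3)] for i in range(3)]

# Stage 2, the "mathematical model": exact one-to-one assignment by
# enumeration (a real platform would use an operations-research solver).
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
total = sum(cost[i][best[i]] for i in range(3))
print(best, round(total, 1))  # -> (0, 1, 2) 6.1
```

The split is the point: the learned model supplies the costs, the optimizer supplies the guarantee.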

In the future, dual-engine intelligent decision-making will expand to more scenarios and drive global, real-time, dynamic resource-allocation optimization in fields such as large-scale real-time power dispatch, port throughput optimization, airport stand assignment, and manufacturing process optimization.

Robot:

Against the backdrop of global demographic shifts and rising labor costs, robots are becoming ever more important.

From the supply side, the intervention of AI, big data and cloud computing technologies has greatly improved the performance of robots in perception, decision-making and execution.

Within perception, tactile sensing is a research focus of the robot-perception field. Benefiting from breakthroughs in flexible materials, tactile sensing technology is being developed and tested in robotic hands, haptic gloves, health-monitoring devices and intelligent cockpits. For example, flexible grippers mounted on a collaborative robot's end effector can sort and grasp items of complex shape and material, much like a human hand.

In the future, as robot costs fall and the return on investment improves, mobile robots, collaborative robots and other types will move into factories and homes, taking on the heavy task of cutting costs and raising efficiency.


Autonomous driving:

In 2022, as autonomous driving moved into urban scenarios, both perceiving complex environments and processing massive data became much harder, and traditional small models could no longer meet the requirements of high-level autonomy. Many autonomous-driving companies therefore began applying Transformer-based large models in their driving algorithms.

Large models allow autonomous vehicles to expand semantic-recognition data effectively, greatly improve the efficiency of solving long-tail problems, further strengthen the generalization of perception, and adapt to more travel scenarios.
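The core operation behind those Transformer perception models is scaled dot-product attention: every query attends over all keys, so far-apart inputs — say, distant objects in a driving scene — can influence each other in a single step. A self-contained toy implementation (the tiny 2D vectors below are fabricated):

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention, the core operation of a Transformer.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)     # how much each key matters to q
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Tiny fabricated example: 2 queries attending over 3 key/value pairs.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
for row in attention(Q, K, V):
    print([round(x, 3) for x in row])
```

Each output row is a weighted mix of the values, dominated by the keys most similar to the query — the mechanism that lets these models aggregate context across a whole scene.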

In 2023, commercial autonomous driving in major Chinese cities is expected to grow along two dimensions — operating area and fleet size — and the market penetration of smart cars with autonomous-driving technology should also reach a new level.


Computational optical imaging:

As traditional optical imaging approaches physical limits in hardware and imaging performance, computational optical imaging — which draws on a new generation of information technology such as sensors, cloud computing and artificial intelligence — has emerged.

Computational optical imaging takes the specific application task as its criterion: it acquires or encodes light-field information across multiple dimensions, designing sensing paradigms that go far beyond the human eye, and then mines that information deeply with mathematics and signal processing to break through the limits of traditional optical imaging.

At present, computational optical imaging has begun to be applied on a large scale in mobile phone cameras, medical treatment, surveillance, industrial testing, unmanned driving and other fields. For example, in the field of mobile phone photography, the shooting effect of mobile phone photography in a considerable number of scenes reaches or even exceeds that of general SLR cameras.

In the future, computational optical imaging is expected to further disrupt traditional imaging systems and enable more creative and imaginative applications, such as lensless imaging and non-line-of-sight imaging.
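One concrete, simple instance of the computational side is burst photography: phone "night modes" capture many noisy frames and merge them, shrinking noise roughly with the square root of the frame count. A toy single-pixel simulation (the brightness and noise level are assumed values):

```python
import random

random.seed(0)
TRUE_PIXEL = 100.0   # "ground truth" brightness of one pixel (fabricated)
NOISE_STD = 10.0     # assumed zero-mean Gaussian sensor noise

def capture():
    # One exposure: the true signal plus random sensor noise.
    return TRUE_PIXEL + random.gauss(0.0, NOISE_STD)

def merged(n_frames):
    # Burst capture + computational merge: averaging n frames shrinks
    # the noise standard deviation by roughly sqrt(n).
    return sum(capture() for _ in range(n_frames)) / n_frames

# Average absolute error over many trials, single shot vs. 16-frame burst.
trials = 500
err_single = sum(abs(capture() - TRUE_PIXEL) for _ in range(trials)) / trials
err_burst = sum(abs(merged(16) - TRUE_PIXEL) for _ in range(trials)) / trials
print(round(err_single, 2), round(err_burst, 2))
```

The burst error comes out around a quarter of the single-shot error — computation, not bigger optics, buys the quality.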

AI empowers scientific research:

While transforming the industry, AI technology is also becoming an important force to assist scientific research.

In 1972, the scholar Christian Anfinsen put forward the protein-folding hypothesis — that a protein's amino-acid sequence should completely determine its structure — work that earned him the Nobel Prize in Chemistry.

Fifty years on, the AlphaFold model has been indispensable to progress on that problem. With AI, John Jumper's team pushed protein-structure prediction into a new stage, and the success of models like AlphaFold has shown the enormous impact AI can have on scientific computing, drawing wide attention to the concept of AI for Science.

Simply put, AI for Science means that researchers introduce AI techniques to build scientific-computing tools and use models to solve real research problems.

In the future, AI will also change the research paradigm of more disciplines, and has great potential in basic sciences such as physics, chemistry, biology, materials science, and drug research and development.

Trend 2: Digital transformation is accelerating

With growing demands on cloud-based business and the maturing of cloud computing, cloud-native technologies offering greater agility, elasticity and portability across clouds are attracting ever more attention. Meanwhile, the rise of concepts such as true interconnection and the metaverse is pushing cloud computing power to become denser and more specialized, and computation more complex.

It has therefore become a trend to build heterogeneous computing systems on the cloud that combine abundant general computing resources with dedicated ones, and heterogeneous computing's high demands on computing power are in turn accelerating software-hardware integration at the infrastructure level.

Today cloud computing is a consensus. On a consolidated technical foundation, the cloud is integrating deeply with every field, cloud-native security is being taken seriously, and a digital economy growing on the cloud has become a clear trend.

Cloud computing:

In October 2022, Amazon made clear that from compute and storage to databases, analytics and machine learning, its cloud services are moving comprehensively toward serverless, helping customers minimize operations work, increase business agility and better cope with the uncertainties of their business. This reflects cloud computing's continuing evolution toward refined, integrated and heterogeneous computing.

As requirements for digital security, privacy compliance, resource autonomy and high service availability keep rising, delivery models are shifting: hybrid cloud remains the main battlefield of the public-, private- and hybrid-cloud markets, proprietary clouds — convenient, controllable and sustainable — are becoming a new trend and a new choice, and cloud service models are growing more refined.

With cloud-native building blocks such as containers, microservices and serverless, enterprises and developers can train AI algorithms efficiently, develop big-data applications with agility, and deploy and manage programs across their full life cycle while optimizing IT costs.

With the rise of concepts such as true interconnection and the metaverse, demand for instant, lightweight service experiences keeps surging; cloud computing power is becoming denser and more specialized, accelerating the pooling of computing resources such as GPUs and FPGAs and promoting heterogeneous computing systems on the cloud that combine abundant and dedicated computing resources.


Cloud-native security:

With the deep integration of cloud computing and various fields, the characteristics of rapid iteration, elastic scaling, and massive data processing on the cloud require the security protection system to be upgraded accordingly.

Cloud-native security means optimizing and rebuilding the security system around cloud-native concepts and technical characteristics, gradually making security services lightweight, agile, refined and intelligent, so that cloud infrastructure is secure by default and overall security capability is strengthened. From the perspectives of management, operations and users, cloud-native security delivers three kinds of value: visible and controllable risk across the whole chain; closed-loop, efficient security operation of the infrastructure; and comprehensive protection of customer assets on the cloud.

Over the next three to five years, cloud-native security will adapt better to multi-cloud architectures, helping customers build dynamic, precise protection systems that cover hybrid architectures across the full link.

Software and hardware convergence cloud computing architecture:

At present, cloud computing has entered the third stage, introducing special hardware, forming a virtualization architecture that integrates software and hardware, and achieves comprehensive hardware acceleration.

The traditional CPU-centric cloud architecture is constrained by CPU performance bottlenecks and cannot cope with ever-growing latency and bandwidth demands on the cloud, so the architecture needs to be iterated in the direction of software-hardware fusion.

Ren Ju, an associate professor in Tsinghua University's Department of Computer Science, argues that software-hardware co-design is an important direction for today's computing architecture; in complex cloud scenarios in particular, the co-optimization and iterative upgrading of software and hardware are the key to performance gains.

DAMO Academy predicts that in the next three years, cloud computing will evolve from a CPU-centric computing architecture to a new architecture centered on cloud infrastructure processors (CIPU). On this basis, CIPU will define the service standards of next-generation cloud computing, bringing new development opportunities to the core software research and development and special chip industry.

Image source: DAMO Academy

Predictable networks for device-network convergence:

A predictable network is defined by cloud computing: a high-performance interconnection system in which servers and the network cooperate can greatly improve the communication efficiency of distributed parallel computing, building an efficient pool of computing resources and enabling elastic supply of large-scale computing power on the cloud.

Through full-stack innovation in cloud-defined protocols, software, chips, hardware, architectures and platforms, predictable high-computing-power networks are expected to displace the current technical regime built on the traditional Internet's TCP protocol, become a basic characteristic of next-generation data-center networks, and spread from local use inside data centers to the whole network.

The broad availability of computing power in a digital society will keep pushing data-center networks toward high-performance, resource-pooled cloud computing; predictable-network technology should undergo a qualitative change in the next two to three years and gradually become a mainstream technology trend.

Ubiquitous operating system:

At the end of 2021, the "14th Five-Year Plan for the Development of the Software and Information Technology Service Industry" issued by the Ministry of Industry and Information Technology called for vigorous support of theoretical and technical research related to "software definition" and ubiquitous operating-system platforms.

Why is the ubiquitous operating system so important?

The operating system is the core of any computing system and of the information-industry ecosystem. As the Internet extends into every corner of human society and the physical world, the complexity of the resources to be managed grows exponentially — in short, computing is everywhere. A future networked ubiquitous operating system must manage not only different computing devices such as hosts, PCs, mobile terminals and IoT terminals, but also a variety of new computing environments for human-machine-thing fusion scenarios.

At present, the development focus of ubiquitous operating system is on the access and control of IoT terminals, as well as the support platform for the development and operation of various network applications including IoT terminals.

As "human-machine-thing" integration proceeds, an era of ubiquitous computing that deeply fuses human society, information space and the physical world is opening, and building ubiquitous operating systems that manage all kinds of facilities and resources and support digital, intelligent applications across scenarios has become a clear trend.

The figure shows a framework diagram of a ubiquitous operating system.

Digital Office:

In 2020, a sudden pandemic put students into online classes and made working from home part of office workers' daily routine. At the time, few could have imagined that digital office work would become the major trend it is today.

Remote-workspace provider IWG estimates that 70% of employees worldwide work remotely at least once a week; in addition, IDC projects that by 2023, 70% of Global 2000 organizations will adopt remote-first or hybrid work models.

At present, cloud platforms, audio-video processing, digital collaboration, data operations, artificial intelligence and expression rendering have essentially assembled the digital-office technology stack. Meanwhile, the broad adoption of knowledge digitization and digital collaboration tools is pushing digital office work toward "multi-modality" and "grand collaboration", triggering a paradigm shift in knowledge co-creation.

Energy Internet:

Amid the transition to new energy, grid volatility has intensified: balance can no longer be achieved by power equipment alone and must rely on digital means of adjustment. Digital technology has thus shifted from merely cutting costs and raising efficiency to being a hard requirement for grid balance, making this an important window for the development of software-defined energy networks.

A software-defined energy network applies the relevant digital technologies comprehensively to realize a software-defined energy system: business applications can be deployed remotely, organizational forms and operating modes adjusted flexibly, and the network's operating state and functions customized on demand.

On May 24, 2022, UK Power Networks, the largest electricity distributor in the United Kingdom, and DeepMind, the AI company owned by Google, jointly released new image-recognition software for electronic maps of UK transmission lines. By scanning thousands of transmission-line images and converting them into electronic maps, it accurately displays the spatial distribution of transmission lines across the UK, aiding project planning, guiding construction, and supporting the growth of new energy and electric vehicles.

With the development of the new energy market, software-defined energy networks will become a core of the future digital energy system infrastructure, representing the development direction of future energy power systems, especially new power systems.

Web3:

The industry regards 2021 as the first year of Web3's rapid development, but the industry's growth has only just begun; breakthroughs in privacy and scaling technologies are accelerating the migration of applications to Web3.

In the traditional Web 1.0 and 2.0 fields, due to the lack of unified identity layer services, users' identity data is easily stolen and used by others, resulting in user privacy leakage. Establishing a universal and robust digital identity system is the immediate need of all users in the future web3 ecosystem.

Ethereum 2.0 will introduce scaling capabilities whose main goal is to increase Ethereum's processing power. Ethereum announced its 2.0 roadmap in 2018, planning the scale-up in two major stages: the first introduces data shards without execution, which, combined with Layer 2, greatly expands performance; the second provides execution shards to raise the processing power of Layer 1 itself.

The open, transparent and decentralized nature of blockchains poses great challenges to user privacy. Zero-knowledge proof technology lets a prover convince a verifier that a statement about private data is true without revealing the data itself. For example, to confirm that a user is over 20 years old, a zero-knowledge proof yields only "yes" or "no"; the verifier never learns the actual age, so information can be verified without leaking its content.
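The age example can be sketched as a toy prove/verify interface. This is only a mock of the information flow (the verifier sees one bit plus a hash commitment, never the age itself); it is not real zero-knowledge cryptography, which relies on protocols such as zk-SNARKs, and every function name below is illustrative.

```python
import hashlib
import os

def commit(age: int) -> tuple[str, bytes]:
    """Prover commits to a private age; a random salt keeps the commitment hiding."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + str(age).encode()).hexdigest()
    return digest, salt

def prove_over(age: int, threshold: int, salt: bytes) -> dict:
    """Prover emits a 'proof' that reveals only the boolean claim, never the age."""
    return {
        "claim": f"age > {threshold}",
        "result": age > threshold,  # the single bit the verifier learns
        "commitment": hashlib.sha256(salt + str(age).encode()).hexdigest(),
    }

def verify(proof: dict, commitment: str) -> bool:
    """Verifier checks the proof is bound to the committed value and reads the bit."""
    return proof["commitment"] == commitment and proof["result"]

commitment, salt = commit(25)        # the age 25 stays on the prover's side
proof = prove_over(25, 20, salt)
print(verify(proof, commitment))     # True, yet the proof carries no age field
```

Note that `proof` contains only the claim text, a yes/no result, and a hash: exactly the "yes or no, but not specific numbers" behavior described above.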

In summary, digital identity as the foundation, scaling to drive application migration, and zero-knowledge proofs to protect privacy are the important development trends of Web3.


Trend 3: Biotechnology benefits mankind

In the just-concluded year of 2022, we witnessed multiple breakthroughs in biomedical fields such as gene editing, pig heart transplantation, and ancient DNA analysis. These cutting-edge technologies will have a profound impact on the life and health of all mankind.

Gene Editing:

In 2022, biotech researchers in the United States used the editing tool CRISPR to treat a woman with heart disease and a hereditary risk of high cholesterol. CRISPR gene-editing technology is often likened to "gene scissors"; in this trial, the researchers changed a single base of the PCSK9 gene in the patient's liver cells. PCSK9 helps regulate levels of low-density lipoprotein (LDL) cholesterol, the "bad" cholesterol whose high levels can harden arteries, clog blood vessels and trigger cardiovascular disease.

Previously, gene editing technology was mainly used in patients with rare diseases, and if this trial is successful, gene editing technology may be widely used to prevent common diseases, and the results of the trial will be announced in 2023.


On-demand organ making:

In January 2022, the University of Maryland School of Medicine successfully transplanted a pig heart into David Bennett, a 57-year-old man, in the world's first pig-to-human heart transplant.

Remarkably, to prevent overgrowth of the pig heart tissue and rejection by the human body, the pig heart was gene-edited: genes for the sugar molecules that trigger rejection were removed, and other genes were added to make the pig tissue appear more human-like to the immune system.

For the first few weeks after surgery, the transplanted heart performed very well. According to one report, about 2 million people worldwide need organ transplants every year, but the shortage of donors has long stood between patients and surgery. Gene-editing technology gives patients on organ transplant waiting lists a real possibility of recovery.

Although David Bennett eventually passed away on March 8, the operation was still a step forward for medicine, opening the door to on-demand organ making.

Ancient DNA analysis:

In 2022, geneticist Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology was awarded the Nobel Prize in Physiology or Medicine for his discoveries concerning the genomes of extinct hominins and human evolution.

The Swedish-born scientist spent decades working out how to extract DNA from 40,000-year-old bones, culminating in the publication of the Neanderthal genome in 2010, a breakthrough that revealed the genetic differences between all living humans and extinct hominins.

The picture shows Neanderthal genome sequencing

This important discovery owes much to advances in commercial sequencers as the technology has developed.

It is reported that today's commercial sequencers can read damaged DNA and detect trace amounts of residual DNA that were previously undetectable, which is equally significant for modern medical research.

Trend 4: Technology and ESG

With the progress and development of science and technology, keywords such as sustainable development and low-carbon environmental protection have attracted much attention. While pursuing technological progress, enterprises also need to take on social responsibility and fully integrate ESG concepts into scientific and technological innovation; technological development in 2023 should move from understanding ESG to practicing it.

Battery Recycling Technology:

Electric vehicles began to take off in 2014, and the batteries of the first wave have now reached "retirement" age. According to incomplete statistics, the mass of lithium batteries retired in 2021 reached 200,000 tons, and by 2025 the figure will reach 780,000 tons.

Although they pollute less than lead-acid batteries, new-energy-vehicle batteries still contain certain amounts of cobalt, nickel, copper, manganese, organic carbonates and other substances that can pollute and damage the ecological environment.

Battery recycling technology is therefore needed to break down retired batteries and separate the battery waste. A new recycling method can now sort end-of-life batteries and battery scrap, and recycling facilities can recover almost all of the cobalt and nickel, as well as more than 80% of the lithium.
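To put the cited recovery rate in perspective, here is a back-of-envelope estimate. The 780,000-ton retirement figure and the 80% lithium recovery rate come from the text; the 2% lithium-by-mass figure is an illustrative assumption, since actual lithium content varies by battery chemistry.

```python
retired_tons = 780_000      # projected retired lithium batteries by 2025 (from the text)
lithium_fraction = 0.02     # assumed ~2% lithium by mass; illustrative, varies by chemistry
recovery_rate = 0.80        # ">80% of lithium" recovered, per the text

recovered_lithium = retired_tons * lithium_fraction * recovery_rate
print(f"{recovered_lithium:,.0f} tons of lithium recoverable")  # 12,480 tons
```

Even under this rough assumption, tens of thousands of tons of lithium would be returned to the supply chain each year instead of entering the environment.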

On environmental issues, attitudes worldwide are nearly unanimous. Overseas, the European Union has recently proposed recycling regulations for battery manufacturers; domestically, companies represented by the recycling subsidiaries of large battery makers such as CATL are also stepping up their battery-recycling plans.

Privacy Computing:

No matter what era you live in, the development of technology is often a double-edged sword.

The widespread use of digital technology across industries has also raised a series of questions: Is data in the cloud secure? What happens in the event of a cyberattack? Is the new digital infrastructure safe and reliable? Data security governance and the marketization of data elements are becoming ever more important and urgent.

Today, more and more institutions in finance, telecommunications, healthcare and the Internet are building their own privacy computing platforms; application scenarios keep expanding and deepening, and interconnecting these platforms has gradually become a new trend in the industry.

It is foreseeable that the application scenarios of privacy computing technology will continue to innovate in the next few years, and privacy computing platforms will become an important cornerstone to support data security governance and the market-oriented development of data elements in many industries, helping to shape a data industry that balances value creation and security and trustworthiness.


Ethics of Science and Technology:

In the era of big data, "algorithm black box" problems such as personal data leaks and big-data-enabled price discrimination are common. This year the Chinese government issued the Opinions on Strengthening the Ethical Governance of Science and Technology and submitted the Position Paper on Strengthening the Ethical Governance of Artificial Intelligence to the United Nations, actively advocating the principle of "people-oriented, AI for good" to ensure that artificial intelligence is safe, reliable and controllable.

In terms of science and technology ethics, Baidu Research Institute predicts that in the future, in a highly intelligent and digital society, having credible and controllable AI technology capabilities will become a new competitive advantage for enterprises.

Sustainable development of science and technology:

In recent years, under the goal of carbon neutrality, energy-intensive AI computing has been evolving toward energy saving, emission reduction, cost reduction and efficiency improvement. Edge computing balances real-time responsiveness with elasticity: by processing data near its source, it reduces massive data transfers, saving huge transmission and energy costs, and its future collaboration with 5G, AI and other technologies will help the development of a low-carbon economy. Advanced computing, meanwhile, is expanding existing computing capacity, reducing computing costs and improving utilization efficiency at multiple levels, from computing theory to architecture to systems.

As the concept of sustainable development takes deeper hold, R&D investment in and technological breakthroughs of "green computing" technologies such as edge computing and advanced computing will increase significantly, and they are expected to be applied in environmental protection, energy and materials, improving the quality of the human living environment.

Beyond the stars

Of course, beyond the four major directions above, the James Webb Space Telescope, which is helping humanity explore the universe and search for signs of civilization in space, also entered MIT Technology Review's 2023 list of "10 Breakthrough Technologies".

In December 2021, the James Webb Space Telescope, an infrared space observatory jointly developed by NASA, the European Space Agency and the Canadian Space Agency, was launched.

The telescope is Hubble's "successor". It is specifically designed to detect infrared radiation, which allows it to see through dust and capture light from the period when the universe's first stars and galaxies formed, just hundreds of millions of years after the Big Bang.

The Webb Space Telescope has attracted widespread attention because it can see some of the oldest stars and galaxies forming, helping humans probe the mysteries of the universe and search for traces of extraterrestrial civilizations. According to foreign media reports, on January 11 the James Webb Space Telescope added another achievement: it confirmed the existence of an exoplanet for the first time.

The picture shows an image taken by the Webb Space Telescope
