
The rebirth of smart glasses: AR chips, AI large models, and sound processing units have entered a new cycle of iteration

Author: 闾丘津孜

A new cycle of technology iteration for smart glasses

Over the past few years, smart glasses have been seen as a strong contender for the computing platform of the future. Among wearable devices, they offer unmatched portability and interactivity. However, the technical limitations of early products kept smart glasses from realizing that potential. Now, a series of innovative technologies is driving a new round of iteration that promises to comprehensively upgrade their product form, interaction modes, and application scenarios, and to lead the next generation of computing.

AR chips deliver greater computing power

As compact wearable devices, smart glasses place extremely high demands on a chip's computing power and energy efficiency. In the past, products were often limited by chip performance and could not deliver a smooth AR experience. Now, chip solutions optimized specifically for AR glasses have emerged; by integrating GPUs, AI acceleration units, and other innovations, they have greatly improved the computing power available to smart glasses.

Take HiSilicon's Kirin AR chip as an example: built on a 7 nm process, it improves GPU graphics rendering by 50%, and its AI compute is more than three times that of the previous generation. This means smart glasses can smoothly render high-resolution virtual imagery while running complex AI algorithms for voice recognition, gesture interaction, and more, bringing users an unprecedented AR experience.

AI large models empower cognitive interaction

Beyond hardware improvements, large AI models are revolutionizing how users interact with smart glasses. Interaction used to rely mainly on touch or gestures, which was inconvenient; now many manufacturers are integrating large-model technology deeply into smart glasses, giving the devices strong capabilities for cognition and understanding.

Singularity's products, for example, integrate the GPT-4 large language model into smart glasses, letting users interact naturally by voice to get information or complete tasks. The model can also recognize the user's gestures and facial expressions, enabling multimodal interaction that greatly improves the user experience.
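The voice loop such a product might use can be sketched as follows. This is a minimal, illustrative structure, not Singularity's actual software: the `GlassesAssistant` class, its method names, and the stubbed `_llm` call are all assumptions, with the stub standing in for a real on-device or cloud model call.

```python
from dataclasses import dataclass, field

@dataclass
class GlassesAssistant:
    """Toy voice-assistant loop: a transcript goes in, a reply comes
    out, and conversation history is kept for multi-turn context."""
    history: list = field(default_factory=list)

    def handle(self, transcript: str) -> str:
        # On a real device this would call an on-device or cloud LLM
        # (e.g. a GPT-4-class model); _llm below is a local stub.
        self.history.append({"role": "user", "content": transcript})
        reply = self._llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def _llm(self, messages) -> str:
        # Stub standing in for the model call.
        last = messages[-1]["content"].lower()
        if "navigate" in last:
            return "Starting navigation overlay."
        return "How can I help?"

assistant = GlassesAssistant()
print(assistant.handle("Navigate to the office"))
```

Keeping the full message history per turn is what lets a model-backed assistant resolve follow-ups like "zoom in on that" against earlier context.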

Large AI models also play an important role in rendering virtual content. By perceiving the user's surroundings in real time, a large model can generate 3D virtual objects that fit the scene, creating an immersive AR experience.

Sound processing units upgrade the audio experience

For a compact wearable device, audio has always been a major pain point. Constrained by space and power, the speakers and microphones of early smart glasses were mediocre and could not provide immersive sound.

Now, dedicated sound processing units are changing that. These units typically integrate technologies such as algorithmic noise reduction and 3D surround sound, creating a high-quality, immersive audio experience within the limits of the hardware.

Yingmu's smart glasses, for example, pair dual speakers and dual microphones with professional-grade audio processing algorithms. They can effectively suppress background noise while simulating an immersive 3D surround effect, so that watching film and television feels like sitting inside a virtual sound field.
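One classic family of noise-reduction algorithms such units build on is spectral subtraction. The sketch below shows the idea in its simplest form, assuming non-overlapping frames and a noise estimate taken from a noise-only sample; production implementations (Yingmu's included) are far more sophisticated, and this code makes no claim about their actual algorithms.

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=256):
    """Denoise `signal` by spectral subtraction: estimate the noise
    magnitude spectrum from `noise_sample`, subtract it from each
    frame's magnitude spectrum, and resynthesize with the original
    phase. Frames are non-overlapping for simplicity."""
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame]))
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        # Subtract the noise estimate, flooring magnitudes at zero.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        phase = np.angle(spec)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
    return out
```

Because each bin's magnitude can only shrink, residual energy is always at or below the input's; real systems add overlap-add windowing and adaptive noise tracking to avoid the "musical noise" artifacts this naive version produces.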

Product form, interaction, and applications fully upgraded

Together, these technology iterations are driving a comprehensive upgrade of smart glasses' product form, interaction methods, and application scenarios, and are expected to make smart glasses a primary gateway for human-computer interaction and information access.

All-in-one, slim and lightweight design

In hardware design, smart glasses are trending toward all-in-one, lightweight forms. Traditional smart glasses often consisted of a separate host unit and a projection module, making them bulky and inconvenient to carry. Now, chip miniaturization and advances in heat dissipation make an all-in-one design possible, integrating all components into an ultra-thin frame and greatly improving portability.

Take Li Weike Technology's Meta Lens S3: the entire pair weighs only 25 grams and is less than 5 mm thick, yet integrates everything from the chip to the camera and speakers, offering users an exceptionally lightweight experience.

Touchless voice interaction

In interaction, voice is gradually replacing traditional touch operation as the primary mode for smart glasses. Thanks to large AI models, users can complete complex operations with simple voice commands, greatly simplifying the interaction process.

Take the AI capture feature of the Huawei P70 as an example: the user only needs to say "capture" toward the subject, and the AI automatically identifies the target and takes the shot at the best moment, with no manual focusing or shutter press. The same kind of natural voice interaction is set to become widespread on smart glasses.
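At its simplest, this style of interaction is a keyword-to-action mapping layered on top of speech recognition. The sketch below shows that dispatch layer only; the phrases, handlers, and return strings are illustrative, not Huawei's actual implementation, and a real pipeline would feed `dispatch` the output of a speech recognizer.

```python
# Registry mapping spoken keywords to device actions.
ACTIONS = {}

def command(phrase):
    """Decorator registering a handler for a spoken keyword."""
    def register(fn):
        ACTIONS[phrase] = fn
        return fn
    return register

@command("capture")
def capture():
    # A real device would focus, pick the best moment, and shoot.
    return "photo captured"

@command("navigate")
def navigate():
    return "navigation overlay started"

def dispatch(transcript):
    """Run the first registered action whose keyword appears in the
    recognized speech; return None when nothing matches."""
    for phrase, action in ACTIONS.items():
        if phrase in transcript.lower():
            return action()
    return None
```

Registering commands with a decorator keeps the vocabulary extensible: adding a new voice command is one function with one annotation, with no changes to the dispatch loop.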

New scenarios: remote collaboration, gaming, and navigation

Application scenarios for smart glasses are also expanding. Beyond established uses such as navigation and gaming, emerging applications like remote collaboration and virtual meetings are beginning to arrive on smart glasses.

Meizu's AR smart glasses, for example, support AR views shared in real time among multiple people, which suits scenarios such as remote maintenance and virtual showrooms. An engineer simply puts on the glasses to share a live view of the site with remote experts for guidance and assistance. This hands-free style of remote collaboration can greatly improve work efficiency.
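Structurally, a shared AR view is a publish/subscribe fan-out: the wearer's glasses publish frames, and each remote expert receives them through a subscription. The toy model below illustrates only that shape; the class and method names are invented for the example and say nothing about Meizu's actual protocol, which would run over the network with encoding, synchronization, and annotation channels.

```python
# Toy publish/subscribe model of a shared AR session.
class SharedARSession:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register a remote viewer; called with every published frame."""
        self._subscribers.append(callback)

    def publish_frame(self, frame):
        """Fan the wearer's current view out to all remote viewers."""
        for callback in self._subscribers:
            callback(frame)

# Usage: one wearer publishing, one remote expert receiving.
session = SharedARSession()
received = []
session.subscribe(received.append)
session.publish_frame({"t": 0, "view": "pump housing"})
```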

Smart glasses also hold great potential in fields such as education and medicine. With AR, students can examine three-dimensional models of human organs through their glasses, and doctors can view a patient's vital data in real time to support diagnosis and treatment decisions.

Leading the next generation of computing revolution

The arrival of innovations such as AR chips, large AI models, and sound processing units is comprehensively upgrading the product form, interaction modes, and application scenarios of smart glasses, and is expected to lead the next generation of computing.

As a next-generation computing platform, smart glasses offer unparalleled portability and immersion. By projecting virtual information directly into the field of view, they spare users from constantly shifting attention to a phone or computer screen; information arrives naturally in the course of daily life, greatly improving efficiency at work and at home.

How we interact with smart glasses will also change fundamentally. As natural modes such as voice and gesture become widespread, human-computer interaction will shed its current constraints, making the relationship between people and computing devices friendlier and more harmonious.

In the future, smart glasses will become the primary gateway through which people obtain information and control devices; everything from work to daily life, from education to healthcare, will be deeply affected. Smart glasses are leading the next generation of computing and ushering in a new era.
