
InfoQ 2022 Trends Report: Mobile and the Internet of Things

Author | Sergio De Simone

Translated by | Wang Qiang

Planning | Ding Xiaoyun

One of InfoQ's most striking features is our topic graphs, which reflect our overall understanding of how different topics sit on the technology adoption curve. When we decide from an editorial perspective what to cover, we use them as a reference for choosing the highest-priority subjects from a multitude of complex and competing topics. We also believe that sharing these topic graphs helps our readers better understand the current and future technology landscape and make better decisions.

The topic graph is based on the well-known framework proposed by Geoffrey Moore in his book Crossing the Chasm. Moore's framework describes five phases, namely "innovators," "early adopters," "early majority," "late majority," and "laggards," reflecting how the adoption of a technology evolves over time.

InfoQ focuses on identifying ideas and technologies that fall into the innovator, early adopter, and early majority stages. We also try to identify topics that we believe have already reached the late majority stage. Plenty of content about late-majority and legacy-stage technologies can be found in InfoQ's previous reports.

For our readers, these five stages make it easier to adjust their attention and decide for themselves which things are worth exploring right now, and which are worth waiting on to see how they develop.

This report summarizes the views of the InfoQ editorial team and of some practitioners in the software industry on emerging trends in the mobile and IoT space. The space is quite broad, covering devices and objects such as smartphones, smartwatches, IoT devices, smart glasses, and voice-driven assistants.

What all of these devices have in common is that, beneath the surface, they are "network-connected computers." In some cases their computing power has grown to a level comparable to that of personal computers; smartphones and tablets are examples of this. In other cases, their computing power and the functionality they provide may have many limitations. All of these devices have some special form factor and can connect to a network. Human-computer interaction (HCI) interfaces are another thing they have in common. In fact, while different categories of devices in this space follow different interaction paradigms, they all move away from the "keyboard and mouse" or "point-and-click" paradigms that dominate other areas of the software industry.

All devices that fall into the mobile and IoT space have a hardware component that is indispensable to their function. However, our report does not focus much on the hardware side; instead, in line with InfoQ's mission, it looks at their impact from the perspective of software development. For example, while foldable devices certainly bring a lot of technological innovation, we are more interested in how to program their user interfaces, which ties into the rise of declarative user interfaces.

Late majority and laggards

In the late majority stage, it is easy to identify proven ways of building applications and solutions in the mobile space. They represent widely accepted, almost standardized ways of doing things, and we understand their pros and cons, and why and under what circumstances they pay off.

Native mobile apps, for example, fall into this category. In this space, developers build mobile apps using native SDKs provided by Android or iOS, as well as corresponding programming languages (i.e. Kotlin/Java or Swift/Objective-C). According to AppBrain, more than 80% of the top 500 Android apps are written in Kotlin, while more than 75% of all Android apps use native Android frameworks.

We believe that hybrid app development frameworks, as a cross-platform approach, now belong among the laggards. Hybrid apps are mobile apps written with web technologies and embedded within a WebView or similar component. There are two main motivations for this approach: using a single technology stack to develop both mobile and web apps, and creating a mobile app that runs on all mobile platforms from a single code base. That does not mean hybrid apps make no sense today; rather, it means we now have other, better ways to solve both problems, such as React Native and Flutter, discussed later.

Staying on the topic of mobile app development, two other practices are mature and in the late majority stage: using continuous integration/continuous delivery tools, and testing on device farms. For example, tools like fastlane can free developers from much of the routine work, such as taking screenshots, deploying builds to the relevant app store for beta testing, and submitting apps for review. Similarly, a number of companies offer device farms on which you can run your app's automated tests. Given the large number of smartphone models on the market, this is a reasonable way to ensure the reliability of your app.

Finally, we consider voice assistants such as Siri, Alexa, and Google Assistant, fitness wearables, and smart home devices to also be in the late majority stage. This judgment is not about how popular these technologies are today; rather, it reflects our overall assessment that the functionality they provide has reached a certain level of maturity.

Early majority

In the early majority stage, we see technologies and approaches that have made a lot of progress in meeting development needs but are not yet fully mainstream, or are still in flux to some extent.

Declarative user interfaces (SwiftUI)

A good example is using SwiftUI to build the UI of native iOS apps. SwiftUI, now in its third major iteration, is a modern declarative framework that relies on advanced Swift syntax features to provide a whole new experience for iOS developers.

In fact, SwiftUI is completely declarative and reactive. With SwiftUI, you do not build your user interface piece by piece; instead you use a textual abstraction to describe what it looks like and define how each of its components interacts with your model. Thanks to this design, SwiftUI enables an interactive development style in Xcode where you can preview your user interface in real time and adjust its parameters without having to compile a complete application.
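To give a flavor of this declarative style, here is a minimal sketch (the view and property names are illustrative, not taken from any particular app): the body describes the interface for the current state, and SwiftUI re-renders it whenever that state changes.

```swift
import SwiftUI

// A minimal, illustrative SwiftUI view: the body describes what the UI
// looks like for the current state, and SwiftUI re-renders it whenever
// the @State value changes.
struct CounterView: View {
    @State private var count = 0

    var body: some View {
        VStack(spacing: 16) {
            Text("Taps: \(count)")
                .font(.title)
            Button("Tap me") {
                count += 1
            }
        }
        .padding()
    }
}

// Xcode uses this preview provider to render the view live,
// without building and running the full application.
struct CounterView_Previews: PreviewProvider {
    static var previews: some View {
        CounterView()
    }
}
```

The preview provider at the bottom is what powers the live-preview workflow in Xcode mentioned above.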

Compared to Storyboards or UIKit programming, SwiftUI undoubtedly has a strong value proposition. If you are starting a new iOS project, it is easy to justify evaluating it as a candidate UI framework. That does not mean Storyboards and UIKit have no place in new apps; it is just that SwiftUI is maturing technically, its adoption is growing, and it seems to be on its way to becoming the de facto standard for iOS UI development.

Native cross-platform apps

There are many approaches to cross-platform mobile app development, including React Native, Flutter, and Xamarin, all of which we place in the early majority stage. Of course, it is hard to imagine React Native, Flutter, or any other existing cross-platform solution easily replacing native development. Their inclusion in this stage therefore reflects their rapid rise within the field of cross-platform mobile app development, mostly at the expense of hybrid app development approaches.

In fact, if what attracts you to this approach is making the most of your investment in the web stack, including HTML, CSS, JavaScript, and related tooling, it is hard to argue that a hybrid approach is the more reasonable option, given that React Native can give you a native, higher-performance user experience. For Xamarin our reasoning is similar, except that the investment is in the Microsoft stack rather than the web.

On the other hand, if your motivation is to save development effort by writing your app code only once, Flutter is also an option. It will not give you a fully native user experience, but you might prefer it for other reasons, for example if you favor a compiled, strongly typed language.

Cloud-based machine learning

We also place cloud-based machine learning services in this stage. You can find them at work in apps like Snapchat and Tinder, for example classifying images or detecting objects by running the computation in the cloud and passing the results back to the app.
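The pattern itself is straightforward: the app ships the payload to a cloud endpoint and reads predictions back. The Swift sketch below illustrates it; the endpoint URL and the response shape are purely hypothetical stand-ins for whatever cloud vision service is actually used.

```swift
import Foundation

// Hypothetical response shape; real cloud vision APIs define their own schema.
struct ClassificationResponse: Decodable {
    let labels: [String]
}

// Upload an image to a (hypothetical) cloud classification endpoint and
// hand the predicted labels back to the app.
func classifyInCloud(imageData: Data,
                     completion: @escaping ([String]) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/classify")!)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData

    URLSession.shared.dataTask(with: request) { data, _, error in
        guard let data = data, error == nil,
              let result = try? JSONDecoder().decode(ClassificationResponse.self, from: data) else {
            completion([])
            return
        }
        completion(result.labels)
    }.resume()
}
```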

IoT security

In the IoT and IIoT space, we believe cybersecurity is in the early majority stage. In fact, we would have preferred to classify it as late majority, but unfortunately the security posture of consumer devices, including the broadband routers most people use to connect to the Internet, is not that reassuring. That said, the industry has clearly recognized the importance of protecting consumer and IoT devices through automatic firmware updates, secure boot and secure communications, and user authentication, and efforts are underway to put all of these measures into practice.

Controlled releases

In the mobile app deployment space, techniques already in use include feature flags, staged rollouts and A/B testing (both supported by the Google Play Store), and forced app updates.

These all fall under the umbrella of controlled releases, which aim to reduce the risks associated with deploying new versions. Unlike servers or web apps, once a mobile app has been published, errors in it are hard to recover from.

Feature flags control the set of features an app provides through specific flags that can be used to enable or disable individual features. Forced updates allow developers to retire older versions of an app, while staged rollouts effectively reduce the impact of potentially risky changes on the user base.
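Conceptually, a feature flag is just a remotely controlled switch the app consults before exposing a feature. The sketch below illustrates the idea in Swift; the flag names and the remote-config source are hypothetical and not tied to any particular vendor's SDK.

```swift
import Foundation

// A minimal, illustrative feature-flag store. In practice the values would
// come from a remote configuration service so they can be flipped without
// shipping a new binary through app review.
enum FeatureFlag: String {
    case newCheckoutFlow = "new_checkout_flow"
    case darkModeOnboarding = "dark_mode_onboarding"
}

struct FeatureFlags {
    // Defaults used until (or unless) remote values arrive.
    private var values: [FeatureFlag: Bool] = [
        .newCheckoutFlow: false,
        .darkModeOnboarding: true
    ]

    mutating func update(from remote: [String: Bool]) {
        for (key, enabled) in remote {
            if let flag = FeatureFlag(rawValue: key) {
                values[flag] = enabled
            }
        }
    }

    func isEnabled(_ flag: FeatureFlag) -> Bool {
        values[flag] ?? false
    }
}

// Usage: gate risky code paths behind a flag.
var flags = FeatureFlags()
flags.update(from: ["new_checkout_flow": true])   // e.g. parsed from remote config
if flags.isEnabled(.newCheckoutFlow) {
    // show the new checkout flow
}
```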

Mini apps

The main benefit of mini apps is that they do not go through the App Store and Play Store review and publishing processes, saving development cost and time.

Mobile platform team

The need to turn core components into a platform exists in every area of software development, and mobile apps are no exception. Logging, analytics, architectural frameworks, and so on all naturally form a platform on which developers can build the other features their various applications need.

Building such a platform requires thinking about how responsibilities are allocated. Anticipating customer needs, defining standard best practices, choosing the right technology stack, evaluating tools, and more all become the responsibility of a dedicated platform team.

This approach promises clear abstractions while steering the organization toward a consistent development style within well-defined boundaries. It does require a mobile team large enough for the approach to work. Several large organizations, such as Uber, Twitter, and Amazon, have adopted it successfully.

Early adopters

By early adopters, we refer to software development technologies and approaches that are attracting growing attention and opening up entirely new possibilities for developers.

On-device machine learning, edge ML

First, we want to mention on-device or edge machine learning, which means running a pre-trained ML model directly on a mobile device or edge device rather than in the cloud.

This approach is gaining traction thanks to solutions such as TensorFlow Lite and PyTorch Mobile. These dramatically reduce the overhead and latency associated with round trips to the cloud and have spawned entirely new categories of applications where real-time prediction is key.

Another important benefit is that user data never leaves the device, which can be decisive in some use cases, such as health apps.
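As one concrete instance of the on-device pattern, the Swift sketch below runs an image classifier locally using Apple's Core ML and Vision frameworks (TensorFlow Lite and PyTorch Mobile play the same role elsewhere); SomeClassifier is a placeholder for any bundled, pre-trained model, not a real class.

```swift
import Vision
import CoreML
import UIKit

// A minimal sketch of on-device inference using the Core ML + Vision stack.
// "SomeClassifier" stands in for the class Xcode generates from a bundled,
// pre-trained image classification model.
func classifyOnDevice(image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? SomeClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The top classification result, computed entirely on the device:
        // no network round trip, and the image never leaves the phone.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```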

Augmented and virtual reality

Applications of augmented and virtual reality are also growing. In particular, both iOS and Android provide ample support for AR features such as surface and plane detection, occlusion, face tracking, and so on.
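On iOS, for example, enabling plane detection with ARKit takes only a few lines. The sketch below is a minimal illustration, not production code.

```swift
import UIKit
import ARKit

// Minimal sketch: start an ARKit session with horizontal and vertical
// plane detection, and get notified as surfaces are discovered.
final class ARPlanesViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a new surface (e.g. a floor or a wall).
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Detected plane with extent: \(planeAnchor.extent)")
    }
}
```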

AR is not yet in widespread use, but it will certainly attract more and more interest, since it requires no specialized hardware and is relatively simple to integrate into an app. Virtual reality, on the other hand, mostly targets specialized headsets such as Oculus, Sony PlayStation VR, and HP Reverb, and its applications are mainly focused on gaming. New impetus in this area may also come from the development of smart glasses.

Voice-driven mobile apps and home appliances

Both AR and VR have spurred the industry to explore new paradigms of human-computer interaction, which we place more appropriately in the innovator stage. However, thanks to the development of voice-based interfaces, some new human-computer interaction approaches are also entering the early adopter stage.

We're not talking about dedicated devices like Alexa or operating system interfaces like Siri/Google Assistant here. Instead, we are referring to the practice of integrating voice capabilities into mobile applications and IoT devices themselves.

Running mobile apps on the desktop

Thanks to technologies like Mac Catalyst, mobile developers can also bring their mobile apps to the desktop. Apple itself has implemented some macOS system applications using Catalyst and Xcode, and the Mac App Store supports distributing apps built this way. Microsoft offers a similar solution for Android apps on Windows 10, which runs apps on the phone and mirrors them in a window on the desktop machine.
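When the same iOS code base is compiled for the Mac through Catalyst, platform-specific tweaks are typically isolated behind a compile-time check. A small, illustrative Swift sketch:

```swift
import UIKit

// Small sketch: adapt behavior when an iOS code base is compiled
// for the Mac via Mac Catalyst.
func configureWindow(_ window: UIWindow) {
    #if targetEnvironment(macCatalyst)
    // Running on macOS through Catalyst: e.g. hide the window's title bar.
    window.windowScene?.titlebar?.titleVisibility = .hidden
    #else
    // Running on an iPhone or iPad: nothing special to do here.
    #endif
}
```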

Centralized logging

Centralized logging is also worth mentioning here. It is a practice designed to collect all the logs generated by a system in a single store. Its popularity is an important trend driven by cloud-based systems, but the approach is increasingly being applied to mobile applications as well.

One of the main advantages of centralized logging for mobile apps is that it helps developers understand in real time what's going on with their customers' apps, helping to solve their problems and improve customer satisfaction.

The popularity of this practice is driven by a number of services, including AWS Central Logging, SolarWinds Centralized Log Management, and more.
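The client-side half of this practice is usually small: buffer log events locally and ship them in batches to a log collector. The Swift sketch below illustrates the pattern; the ingestion endpoint and event fields are hypothetical rather than any specific vendor's API.

```swift
import Foundation

// Illustrative sketch of client-side centralized logging: events are buffered
// in memory and periodically shipped in batches to a remote log collector.
struct LogEvent: Codable {
    let timestamp: Date
    let level: String
    let message: String
}

final class RemoteLogger {
    private var buffer: [LogEvent] = []
    private let endpoint = URL(string: "https://logs.example.com/ingest")!  // hypothetical collector
    private let batchSize = 20

    func log(_ level: String, _ message: String) {
        buffer.append(LogEvent(timestamp: Date(), level: level, message: message))
        if buffer.count >= batchSize {
            flush()
        }
    }

    func flush() {
        guard !buffer.isEmpty, let body = try? JSONEncoder().encode(buffer) else { return }
        buffer.removeAll()

        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = body
        URLSession.shared.dataTask(with: request).resume()
    }
}
```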

Persistent connections

The last technology in the early adopter stage is persistent connections between client and server. Originally popularized by messaging apps, this technique is now increasingly used in consumer apps such as Halodoc and GoJek, in other mobile apps, and in other areas.

Persistent connections aim to replace push notifications and network polling, with the goal of reducing latency and power consumption.
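One common way to hold such a connection open from an iOS client is URLSession's built-in WebSocket support; the minimal sketch below uses a placeholder server URL and omits the reconnection logic a production client would need.

```swift
import Foundation

// Minimal sketch: keep a WebSocket open so the server can push updates
// to the app instead of the app polling or relying on push notifications.
final class LiveUpdates {
    private var task: URLSessionWebSocketTask?

    func connect() {
        let url = URL(string: "wss://example.com/updates")!  // placeholder server
        task = URLSession.shared.webSocketTask(with: url)
        task?.resume()
        receive()
    }

    private func receive() {
        task?.receive { [weak self] result in
            switch result {
            case .success(.string(let text)):
                print("Server pushed: \(text)")
            case .success:
                break  // binary or other frames ignored in this sketch
            case .failure(let error):
                print("Connection dropped: \(error)")
                return  // a real client would reconnect with backoff here
            }
            self?.receive()  // keep listening on the persistent connection
        }
    }

    func disconnect() {
        task?.cancel(with: .goingAway, reason: nil)
    }
}
```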

A similar trend is developing for IoT devices, with lightweight protocols such as MQTT and gRPC.

A secondary trend worth watching closely is the industry's effort to build standardized protocols and/or specialized third-party solutions that make persistent connections a plug-and-play convenience.

Declarative user interfaces (Jetpack Compose)

Jetpack Compose, which recently reached version 1.0, is Google's Kotlin-based declarative user interface framework for Android.

In terms of the development benefits of declarative user interfaces, Jetpack Compose has a lot in common with SwiftUI, discussed above. However, while SwiftUI has reached its third major iteration and has been widely embraced by the iOS development community, Jetpack Compose is still in the earlier stages of adoption.

Innovators

Cross-platform mobile apps

While cross-platform mobile apps are still in the minority, they are certainly a response to numerous development needs and constraints. Historically, hybrid web apps, and more recently approaches like React Native, NativeScript, and Flutter, have tried to address them.

The industry has also recently been experimenting with a new answer to the cross-platform problem, represented by projects such as Swift for Android and Kotlin Multiplatform. With this approach, you choose a reference platform, iOS or Android, and use its technology stack to build the app for that platform, while reusing as much as possible to build the same app for the other platform.

On the user interface side, Swift for Android provides Crystal, a cross-platform, high-performance graphics engine for building native UIs. For Kotlin Multiplatform you can opt for Multiplatform-Compose, which is still highly experimental. JetBrains also recently released a beta of the similar Compose Multiplatform, which aims to bring declarative UI programming to Kotlin Multiplatform but does not currently support iOS.

Both solutions offer good language interoperability, so you can certainly share parts of your codebase across both platforms; things may differ, however, when it comes to code that depends on the operating system. For example, Swift for Android provides Fusion, an auto-generated collection of Swift APIs that provides support for common Android APIs.

Mobile Reliability Engineering (MRE)

Consistently delivering features at scale in mobile apps is a real challenge. It requires multiple teams to collaborate closely and to adopt streamlined best practices, processes, and principles.

Site Reliability Engineering (SRE) was born to ensure the reliability of large-scale distributed systems, and it has recently been gaining popularity as a useful approach for mobile applications as well.

MRE is still in its infancy and aims to facilitate the adoption of best practices across an organization. Some established organizations and startups are already adopting this approach, albeit not always explicitly, using a variety of tools, processes, and organizational dynamics, in the hope of making feature delivery a more agile process.

Gesture- and body-pose-based user interfaces

AR and VR offer new possibilities for interacting with applications and environments, leading to entirely new ways for humans to interact with computers, in particular using gesture recognition or 2D pose detection. While we place AR and VR themselves among the early adopters, there is also a trend toward bringing these human-machine interaction approaches to mobile apps that have nothing to do with VR or AR.

These methods rely on ML and computer vision algorithms for gesture and body pose detection. Apple provides this support through Core ML, while Google offers ML Kit for both Android and iOS.

Some apps already use these techniques, mostly in fitness, for example counting squats or recognizing movements while dancing or doing yoga. It is easy to predict that gesture and body pose detection support at the SDK level will enable developers to build more such applications and extend these user interface approaches to more areas.
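As an example of what SDK-level support looks like, the rough Swift sketch below uses Vision's body pose request to read out a couple of joints that a fitness app might track across frames, for instance to count squat repetitions.

```swift
import UIKit
import Vision

// Rough sketch: detect a 2D human body pose in a single image using Vision
// and read out a couple of joints a fitness app might track over time.
func detectPose(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectHumanBodyPoseRequest { request, _ in
        guard let observation = request.results?.first as? VNHumanBodyPoseObservation,
              let joints = try? observation.recognizedPoints(.all) else { return }

        // Joint positions are normalized image coordinates with a confidence score.
        if let leftKnee = joints[.leftKnee], leftKnee.confidence > 0.3,
           let leftHip = joints[.leftHip], leftHip.confidence > 0.3 {
            // A squat counter would compare hip and knee heights across frames.
            print("left hip y: \(leftHip.location.y), left knee y: \(leftKnee.location.y)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```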

Voice-driven user interface

While devices like Alexa and smart assistants like Siri, Cortana, and Google Assistant have popularized the idea of controlling devices by voice, native voice-driven UIs have only recently begun to gain prominence. This trend is driven by recent advances in machine learning in several areas, including speech recognition, NLP, and question-answering systems.

One benefit of a voice-driven interface is the convenience of interacting with machines and programs by voice in many scenarios, such as while driving, cooking, or walking. Voice is also a huge help for some people with disabilities.

A number of technologies make it possible to integrate voice-driven user interfaces into mobile apps and IoT devices, based either on cloud models or on embedded models. For example, Google has its own Text-to-Speech API and Dialogflow, while AWS provides the Alexa Voice Service, which integrates with AWS IoT.
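On iOS, for instance, speech recognition is exposed through the Speech framework. The minimal sketch below transcribes a recorded audio file; a real voice-driven UI would typically stream live microphone audio and map the transcript onto intents.

```swift
import Speech

// Minimal sketch: transcribe a recorded audio file with Apple's Speech framework.
// A voice-driven UI would usually stream live microphone audio instead and
// feed the transcript into an intent/NLU layer (e.g. Dialogflow).
func transcribe(fileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.isAvailable else { return }

        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        recognizer.recognitionTask(with: request) { result, error in
            guard let result = result else {
                print("Recognition failed: \(String(describing: error))")
                return
            }
            if result.isFinal {
                // The transcript the app would map to a command or intent.
                print("Heard: \(result.bestTranscription.formattedString)")
            }
        }
    }
}
```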

Web of Things

The Web of Things is a web standard for the Internet of Things that enables communication between smart objects and web-based applications. It attempts to define a way for IoT devices to interoperate with other devices and networks, offering an answer to the highly heterogeneous world of IoT devices.

While the Web of Things standard has been defined for several years, most IoT devices still ship with their own management interfaces and applications, tied to the underlying network protocols and standards chosen by the manufacturer. This leaves users in a less-than-ideal situation where they cannot control all their devices from a single access point, and the devices cannot communicate with one another.

Solutions like the Mozilla WebThings Gateway, AWS IoT, and a few others are expected to accelerate the adoption of these protocols.

IOTA

IOTA attempts to leverage blockchain technology to address some of the challenges that have hindered mass adoption of IoT, including heterogeneity, network complexity, poor interoperability, resource constraints, privacy concerns, security, and more.

Traditional blockchain systems such as Bitcoin and Ethereum use a sequential chain of blocks, each containing multiple transactions, while IOTA uses a multipath directed acyclic graph (DAG) called the Tangle. Other protocols, such as Byteball and Avalanche, also use Tangle-like structures with certain modifications. One goal of these protocols is to store IoT data in a distributed fashion that outperforms linear blockchains in performance, scalability, and traceability.

IOTA is considered a highly scalable blockchain solution with no fees, no miners, and no commissions. It promises the same benefits as other blockchain-based distributed ledgers, including decentralization, distribution, immutability, and trust, but without their drawbacks of wasted resources and higher transaction costs.

Smart glasses

In wearable computing, smart glasses look like the next revolution. In fact, predictions about the rise of smart glasses have circulated for several years, at least since Google Glass. That project never achieved much worth mentioning, but it did help raise awareness of the potential privacy issues surrounding the use of smart glasses.

From a human-computer interaction perspective, smart glasses are a huge playground for advancing many new methods and technologies, including speech and gesture recognition, eye tracking, and brain-computer interfaces.

While some manufacturers have had relative success in the smart glasses market (including Microsoft HoloLens, Oculus Rift, Vuzix, and others), the technology still seems to be waiting for a more persuasive value proposition to drive the widespread popularity that has long been predicted. Still, interest is growing, and several large companies have recently entered the space, such as Facebook with its Ray-Ban Stories, while others, including Apple and Xiaomi, are rumored to be developing new products.

Summary

As is often the case in the tech world, the speed of innovation is always surprising, and the mobile and IoT space is no exception. In this report we have tried to convey a very broad picture of the current technology landscape in this space and where it is heading in the coming year. Only time will tell which of the latest trends will persist and which will quickly fade or disappear. Our team at InfoQ will continue to do its part to provide practitioner-focused perspectives and coverage of the mobile and IoT space.

About the author

Sergio De Simone has been a software engineer for over 15 years, working on a range of projects and at a variety of companies, including Siemens, HP, and small startups. Over the past few years, his work has focused on mobile platforms and related technologies. He currently works at BigML, where he leads iOS and macOS development.

Abhijith Krishnappa is an architect at Halodoc with over 15 years of experience in mobile apps and platforms. He is currently responsible for architecture, technology strategy, and organizational development for various platforms at Halodoc. He enjoys innovative work and holds four patents with the United States Patent and Trademark Office. Abhijith holds a Master's degree in Computer Science from the Illinois Institute of Technology in Chicago.

Tridib Bolar works in Kolkata, India, and is a cloud solutions architect for an IT company. He has been working in the field of programming for over 18 years. He works primarily on the AWS platform and explores GCP as a side hustle. In addition to being an admirer of the serverless model of cloud computing, he is also a fan of IoT technology.

https://www.infoq.com/articles/mobile-and-iot-trends-2022/
