
An overarching framework for AI and machine learning in future communications

Author: Wireless pigs

Model ID is one of the important issues to be solved for the AI/ML air interface, and the following agreements have been reached on it:

  • Models are identified by a Model ID.
  • When the network needs to be aware of the UE-side AI/ML model, at least for some AI/ML operations, the UE indicates the Model ID together with the related information and model capabilities.
  • A Model ID can be used to identify an AI/ML model in LCM procedures, including model delivery.
  • A Model ID can be used to identify one or more models during model selection/activation/deactivation/switching.

Based on the above agreements, the Model ID use cases identified so far include model selection/activation/deactivation/switching/fallback/transfer, and other typical use cases are likely to be added in the future.

The definition of Model ID should be general enough to cover not only the typical use cases already selected for the physical layer, but also other uses to be considered in the future. One of the goals of the AI/ML SID for the air interface is to establish a research approach for AI topics in B5G or 6G; from this perspective, how to define the Model ID is important and necessary even for 6G AI.

Therefore, many companies believe that at least the following types of Model ID definition directions can be further considered:

Globally unique Model ID: a Model ID is statically assigned to a model algorithm, i.e. the meaning of each Model ID is predefined in the specification. The Model ID is globally unique, meaning that all UEs in the same communication system share the same understanding of a given globally unique Model ID, regardless of which operator the UE is registered with;

Operator-unique Model ID: a Model ID is semi-statically assigned to a model algorithm, i.e. the meaning of each Model ID is defined by the operator through implementation. Once defined, the Model ID is operator-unique, meaning that no matter which cell the UE connects to, a given operator-unique Model ID has the same meaning within that operator's network;

Temporary Model ID: a Model ID is dynamically assigned to a model algorithm, and the temporary Model ID is unique within the UE, similar to the concept of a BWP ID. Once the serving cell of a particular UE changes, a different Model ID may be assigned to the UE for the same model algorithm. In other words, the temporary Model ID is unique per cell-UE pair.
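The difference between the three definition types is essentially the context within which a Model ID value must be unique. A minimal sketch (hypothetical function and field names, not taken from any specification) makes this concrete:

```python
from enum import Enum

class IdScope(Enum):
    GLOBAL = "global"        # same meaning for all UEs, regardless of operator
    OPERATOR = "operator"    # meaning defined per operator (PLMN)
    TEMPORARY = "temporary"  # meaning valid only for one (cell, UE) pair

def uniqueness_key(scope, model_id, plmn=None, cell=None, ue=None):
    """Return the context key within which this Model ID value must be unique."""
    if scope is IdScope.GLOBAL:
        return ("global", model_id)
    if scope is IdScope.OPERATOR:
        return ("plmn", plmn, model_id)
    return ("cell-ue", cell, ue, model_id)
```

Under this sketch, the same numeric Model ID resolves to the same model everywhere for the global scope, but may denote different algorithms under different operators or different cell-UE pairs for the other two scopes.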

In summary, Table 1 below gives the advantages and disadvantages of each Model ID definition type:

(Table 1: advantages and disadvantages of each Model ID definition type)

OPPO believes that at least the globally unique Model ID definition should be supported, because it is simple and future-proof.

For the operator-unique Model ID definition, the UE may need to obtain the meaning of each operator-unique Model ID through implementation based on multi-operator agreements, or dynamically from the network. This type of Model ID definition is also future-proof when a larger number of Model IDs is introduced.

For the temporary Model ID definition, the UE may need to dynamically obtain the meaning of each temporary Model ID from the network, and its scope is usually per cell, which is unfriendly for managing large-scale AI models across cells in the future.

On the other hand, operator-unique Model IDs and temporary Model IDs may still be useful if exposing globally unique Model IDs is considered to raise privacy concerns. In addition, operator-unique and temporary Model IDs can be defined to be shorter than globally unique Model IDs, which is beneficial from an overhead perspective. Operator-unique and temporary Model IDs can therefore serve as a supplement to globally unique Model IDs.

For the detailed definition of each Model ID type, the following two directions can be considered:

Direction 1: a Model ID without internal structure, i.e. a monolithic ID.

Pros: the definition is simple.

Cons: whenever Model ID information is provided, the full Model ID must be included, which is unfriendly from an overhead-reduction perspective.

Direction 2: a Model ID with internal structure, i.e. composed of sub-fields with defined meanings.

Pros: flexible model management is possible, because part of the Model ID can be used in certain scenarios.

Cons: more specification work is required to define the meaning of each sub-field of the Model ID.

The first question is what type of information is transmitted during model transfer/delivery. The initial consideration is that at least the model algorithm data, including the model structure and the model weight parameters, is transferred. However, the model algorithm data alone is not enough, because the UE still does not know which functions the model algorithm data is used for, nor the other basic model description parameters necessary for the model to be used.

If the UE wants to use the AI model after the transfer, then at least the association between the Model ID and the corresponding model algorithm data should be known/maintained by the UE; how this association is obtained depends on which model transfer/delivery solution is chosen.

If the model algorithm data is obtained from an OTT server or OAM, and no model LCM procedure involves the network, then Model ID information is not required, or is maintained through UE implementation for LCM purposes. However, if at least one model LCM procedure involves the network, the Model ID information is either defined by default in the specification or assigned by the network.

If the model algorithm data is obtained via a CP solution, the Model ID can be sent together with the corresponding model algorithm data in CP signaling in order to distinguish the algorithm data of different AI models.

If the model algorithm data is obtained via a UP solution, two ways of informing the UE of the Model ID information can be considered:

Therefore, if the transferred/delivered model requires network-controlled model management procedures, the UE should at least know/maintain the association between the Model ID and the corresponding model algorithm data after the model transfer/delivery.

In addition to the Model ID information and the model algorithm data, the UE may still need additional model description parameters to use an AI model after model transfer/delivery, for example model input/output information, model version information, model format information, and model accuracy information. This model meta-information may be critical for model use. Note, however, that there may be some overlap between the information carried by the Model ID definition and these additional model description parameters, so what the additional parameters should provide may depend on what the Model ID definition cannot provide; if applicable, the details can be discussed during the specification work phase.
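The model description parameters listed above could be grouped into a single meta-information record delivered alongside the model data. A minimal sketch, with field names that are illustrative assumptions rather than specified parameters:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelDescription:
    """Hypothetical meta-information accompanying transferred model algorithm data."""
    model_id: int
    input_info: str                   # e.g. expected measurement inputs
    output_info: str                  # e.g. produced inference outputs
    version: str = "1.0"              # model version information
    model_format: str = "unspecified" # model format information
    accuracy: Optional[float] = None  # model accuracy information, if reported
```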

Whether to introduce model transfer/delivery based on 3GPP signaling may depend on physical-layer decisions, but the main impact of model transfer/delivery falls within the scope of Layer 2.

The following solutions for model transfer/delivery are listed:

Solution 1a: the gNB can transfer/deliver AI/ML models to the UE via RRC signaling.

Solution 2a: the CN (except the LMF) can transfer/deliver AI/ML models to the UE via NAS signaling.

Solution 3a: the LMF can transfer/deliver AI/ML models to the UE via LPP signaling.

Solution 1b: the gNB can transfer/deliver AI/ML models to the UE via UP data.

Solution 2b: the CN (except the LMF) can transfer/deliver AI/ML models to the UE via UP data.

Solution 3b: the LMF can transfer/deliver AI/ML models to the UE via UP data.

Solution 4: a server can transfer/deliver AI/ML models to the UE (transparent to 3GPP).

From a Layer 2 perspective, all of the above solutions may be feasible, but have different advantages and disadvantages.

Model updates are a further issue beyond model transfer/delivery and are discussed next. There can be two types of model updates:

  • Model update type 1: all model-related data is updated;
  • Model update type 2: only part of the model-related data is updated.

For model update type 1, there is no need to distinguish between model update and model transfer/delivery, because the specification impact is almost the same. For model update type 2, however, incremental signaling can be considered as an optimization. In general, AI/ML model-related data includes at least model structure parameters and model weight parameters. If only the model weight parameters change (i.e. the model structure parameters are unchanged), model update type 2 can be used; otherwise, model update type 1 applies. Which update type is used therefore depends on the use case, and both types can be considered further.

If incremental signaling is used for model updates, there are two directions:

For direction 1, when incremental signaling is used for model updates, a model format defined by 3GPP needs to be specified.

Advantage of direction 1: it is easy to update part of the AI/ML model.

Disadvantage of direction 1: a model format for 3GPP signaling needs to be defined, and model details may be exposed over the air interface.

For direction 2, incremental model updates can be achieved by dividing the entire AI/ML model into parts, each associated with a sub-block ID; the mapping between a sub-block ID and the associated AI/ML model part should be understood identically by the UE and the network. This update method can update any part of the AI/ML model via its sub-block ID.

Advantage of direction 2: incremental model updates can be implemented without exposing the details of the AI/ML model, since each model part associated with a sub-block ID can be treated as a container.

Disadvantage of direction 2: the concept of a sub-block ID needs to be introduced for AI/ML models.
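Direction 2 can be sketched as follows, treating the model data as an opaque byte string split into addressable chunks. This is an assumed design for illustration, not a specified mechanism; the splitting rule would have to be agreed between the UE and the network:

```python
def split_into_subblocks(model_bytes, block_size):
    """Map sub-block ID -> opaque byte chunk of the model data."""
    return {i: model_bytes[off:off + block_size]
            for i, off in enumerate(range(0, len(model_bytes), block_size))}

def apply_incremental_update(subblocks, updates):
    """Replace only the sub-blocks carried in the update message."""
    merged = dict(subblocks)
    merged.update(updates)  # updates: {sub_block_id: new_chunk}
    return merged

def reassemble(subblocks):
    """Rebuild the full model data from its sub-blocks, in ID order."""
    return b"".join(subblocks[i] for i in sorted(subblocks))
```

Because each chunk is opaque to the network, only the sub-block IDs and their contents are signaled, which is why this direction avoids exposing the model internals.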

Based on the above, incremental signaling can be used to update part of an AI/ML model, and the specification impact differs depending on which method is adopted.

AI/ML models can be considered a new type of service, but at this stage non-AI/ML approaches can at least serve as fallbacks. If AI/ML models are widely used in future communication systems, two different kinds of solutions will be applied to the same system. From a UE vendor's perspective, introducing model transfer/delivery capability may improve the user experience in some cases, but from an operator's perspective it will significantly increase the management effort. If AI/ML models were freely available to all types of UEs through the model transfer/delivery procedure, operators might lose interest in introducing model transfer/delivery capability over the air interface. This AI/ML SID should therefore also consider how to prevent unauthorized UEs from obtaining AI/ML models through the model transfer/delivery procedure, even when the UE is properly registered with the operator's network. This topic may involve CN work, but it is still worth discussing.

The final section concerns AI/ML capability reporting. Many companies believe this topic should be discussed during the specification work, as in other SIDs, where the details can also be settled; nevertheless, some high-level framework for AI/ML capability reporting can be discussed first, even during the SID. Unlike other UE capabilities, which are usually static once reported, AI/ML-related capabilities can change dynamically, for example the UE's remaining storage and remaining computing resources. This dynamic UE capability concept was proposed in the NR SID TR but dropped at the end of the NR SID. This AI/ML SID is a good opportunity to reconsider the mechanism and agree on this high-level requirement within the SID.

Another issue concerns the framework for defining AI/ML capabilities. Since AI/ML deployment is highly correlated with the sub-functions included in LCM, an overall AI/ML capability is not sufficient to reflect the actual AI/ML-related capabilities the UE can provide. For example, if the UE only reports its supported Model IDs, the network will not know whether the UE supports model training, so feature-specific AI/ML capabilities are required. Furthermore, even if a UE can perform model training for certain models, it cannot be assumed to be capable of training any type of AI/ML model, since training complexity varies across models; feature-specific AI/ML capabilities should therefore be reported per Model ID.
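A per-Model-ID, feature-specific capability report of the kind argued for above might look like the following sketch. The report structure and field names are illustrative assumptions only; the dynamic resource fields reflect the point that AI/ML-related capabilities can change over time:

```python
def build_capability_report(supported, free_storage_mb, free_compute_tops):
    """Hypothetical UE capability report: per-Model-ID LCM sub-functions
    plus dynamic resources that may be re-reported when they change."""
    return {
        "models": [
            {"model_id": mid, "lcm_functions": sorted(funcs)}
            for mid, funcs in supported.items()
        ],
        "free_storage_mb": free_storage_mb,      # dynamic: remaining storage
        "free_compute_tops": free_compute_tops,  # dynamic: remaining compute
    }

report = build_capability_report(
    {101: {"inference"}, 102: {"inference", "training"}},
    free_storage_mb=512, free_compute_tops=2.0)
```

In this sketch the network can see that model 102 supports training while model 101 supports only inference, which is exactly the distinction an overall AI/ML capability flag could not convey.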
