
DingTalk Collaboration Tab front-end evolution road [Ultimate Performance Optimization Summary]

Author: Technical Alliance Forum

Ma Yun · Ali Developer · 2023-06-15 09:01 · Posted in Zhejiang


Alimei's introduction

This article introduces the new version of DingTalk Collaboration and, from the technical perspective behind the product capability upgrades, describes the front-end construction process: performance experience optimization and stability construction.

The new Collaboration Tab is a new front-end application serving tens of millions of visits. In the more than half a year since I took charge of its front-end iteration, we have faced the pressure of keeping performance and stability high under heavy traffic, the design and implementation of complex front-end interactions, and the challenge of designing a highly scalable architecture that combines the front-end technical perspective with the business. We completed three major version iterations from 0 to 1, implemented performance and stability solutions together with the client and the Mini Program container, supported more than 20 card types across business lines, and provided a field service with a stable experience. Under the guarantee of the R&D mechanism, we also achieved zero customer complaints and zero rollbacks throughout the year. This article introduces the new version of collaboration and describes its front-end construction process from the technical perspective behind the product capability upgrades: performance experience optimization and stability construction.

1. Background

Compared with the old, application-centric Collaboration Tab, the new version reflects a changed product design concept: organizational management and individual motivation in symbiosis. Collaboration is individual-centric. It aggregates events related to "me" generated by DingTalk's first- and third-party applications, such as to-dos, approvals, schedules, and documents, helping users handle events related to them more efficiently and obtain a better collaboration experience.


1.1 Positioning of the new Collaboration Tab

Collaboration Context: a processing center for consuming collaboration information related to "me" (the employee's personal perspective).

1.2 Business Objectives

The CTR goal of the new Collaboration Feed section is a 100% increase over the card section of the current Collaboration Tab. The logic is to verify that the new time stream is more "useful" to the user than the previous application widgets. (This is one of the objectives.)

The reason for discussing goals here is to associate one's own goals with the business goals during product capability building, then integrate them into the front-end construction work so that the business results enabled by the "front end" become visible.

2. Front-end technical support behind product capability upgrades

Collaborative front-end evolution:


Evolution of the new version of collaboration

Before: the old version

App-centric

After: the new version of collaboration

The first version, centered on time, emphasizes order.

The second version, centered on events, emphasizes operations.

The third version, centered on people, emphasizes relationships. Echoing the theme, it is a work-oriented "Toutiao" (Today's Headlines).

It is often difficult to envision the complete final form of a product from its initial version, but if we work backwards from the product's goals and user value to its design and technical realization, we can adapt accordingly.

2.1 Main architecture design of the collaboration front end: dynamic feed cards

During the iterative construction of the cards from the first to the third version, product adjustments and optimizations were made continuously through verification of effectiveness and value growth.


Thinking about front-end design implementation

When implementing the first version of the cards, we had already planned subsequent product capabilities with the product team. Additions and deletions to the card UI content are updated frequently, and the cards' logical functions also need continuous construction, such as action buttons, card click events, the way details open after a click, and card deletion and pinning. In this context, I decided on the following design: the frequently changing card UI layer and the stable card function logic layer are abstracted and built separately. This ensures that continuous construction of card functions does not require frequent changes, while still supporting continuous iteration of the card UI.

The card UI layer is designed hierarchically for layout dimensions


Front-end component structured event cards

<!-- Card reference -->
<common-card-box onClickCardBtn="onClickCardBtn" componentData={{feedItem.componentData}} swipeIndex={{swipeIndex}} childrenFeed={{feedItem.children}} />

<!-- Assembly of the components inside the card -->
<div>
  <!-- First layer of the card header: event source and event time; totalBadge carries the red-dot capability -->
  <common-card-head
    userInfo={{componentData.userInfoVO || {} }}
    totalBadge={{totalBadge}}
  />
  <!-- Event title and event description inside the card -->
  <common-card-content props={{...props}} />
  <!-- Conditional rendering of the interactive card SDK Mini Program plugin reference -->
  <component id="{{componentData.flowId}}" componentProps="{{pluginComponentProps}}" is="dynamic-plugin://5000000002721132/card" a:if="{{ isReady && componentData.isPlugin }}"></component>
  <!-- 3C recommendation content supplementing the card -->
  <common-card-recommend props={{...props}} />
  <!-- ButtonList: actions the card responds to -->
  <common-card-buttons props={{...props}} />
  <!-- Card folding: recursive reference, no UI change needed here -->
  <view a:if="{{childrenFeed && childrenFeed.length > 0}}" class="children-contain">
    <common-card-box a:for="{{childrenFeed}}" componentData={{item}} isReady={{isReady}} ut={{ut}} from={{from}} onClickCardBtn="onClickCardBtn" />
  </view>
</div>

The advantage is that the code structure is clear and the layout can be conceived directly from the code. No matter how the card content changes, adjustment needs from the product side can be supported quickly by adjusting the corresponding components.

Application scenarios: currently supports 20+ universal card display scenarios across DingTalk.

Capability building of the card operation logic layer (built to absorb change)

Card content operations: JSAPI type (initiating a DING reminder, etc.) and jump type (half-screen card details, etc.)

Operations on the card itself: deletion, pinning, collapsing and expanding

Update actions: the card content is updated after an action is clicked, for example showing Completed after an event is accepted.

What needs attention here is the scalability of the code as business requirements combine during card function construction: even if the card UI is adjusted, the functional logic layer does not need to change. Throughout the whole process, from the initiation of a collaboration request to the end of the collaboration, the most valuable collaboration information is provided to users at key collaboration nodes through the information design for different roles and scenarios, either by sending messages or by depositing information into the collaboration flow.

In scenarios such as deletion, update, and pinning, the Mini Program's $spliceData is used to update the front-end local list without a perceptible refresh.

modifyFeedFlowsCommon(
  flowId: string,
  type: string,
  targetName: string,
  feedFlows: IComponentInfo[],
  newFeed?: IComponentData
) {
  // Step 1: find the index of the card among the collaboration cards or child (folded) cards
  const feedFlowsIndex = this.findFeedFlowsIndex(flowId, feedFlows);
  // Step 2: partially update the list via the $spliceData function provided by the Mini Program
  if (feedFlowsIndex) {
    this.modifyData(targetName, type, feedFlows, feedFlowsIndex, newFeed);
  }
},

modifyData(
  targetName: string,
  type: string,
  feedFlows: IComponentInfo[],
  feedFlowsIndex: IFeedFlowIndex,
  newFeed?: IComponentData
) {
  // Dispatch the operation event and apply a partial update
  this.$spliceData({
    [targetName]: [feedFlowsIndex.parentIndex, 1, newFeed],
  });
},

2.2 Integrating the interactive card SDK into the collaboration field

Why integrate interactive cards: consider the push and pull model of the collaboration server architecture design. If we want to increase the penetration rate of DingTalk workbench applications, flow more complex user-related scenarios into collaboration, and ensure that the same cards can also be delivered elsewhere, interactive cards are inevitable.

There are two types of cards in collaboration: the first is the standard business card described above, and the second is the interactive card, which supports cross-field delivery within DingTalk.

The DingTalk standard interactive card is a common DingTalk card type. It adapts capabilities and styles to different delivery scenarios so that the same card template has a consistent experience in chat messages, group chat ceilings, collaboration, the workbench, and other fields.


DingTalk interactive card building link + delivery link schematic

For details on DingTalk interactive cards, see the open platform documentation: https://interactive-card.alibaba-inc.com/

Applying interactive cards in collaboration: business-shell interactive cards

A card consists of two parts: data + template. Data is updated dynamically through the synchronization protocol, while templates are updated dynamically through the delivery and update mechanism for card template bundles.


The particularity of collaboration's internal implementation and business scenarios: card data consists of two parts, the data source provided by the business itself, such as the event title and description in the figure above, and collaboration's own data source, the 3C recommendation reason based on the event's source and time.

Based on this difference, we embed the interactive card inside a business card for rendering, which I call a business-shell interactive card. This reuses interactive card content to the greatest extent while satisfying the business scenario.

The first step of the internal implementation is to load the Mini Program plugin bundle: when the client loads a card template, it is essentially loading a bundle that contains the card template resources.

// Initialize the interactive card plugin
initCard() {
  try {
    if (dd && dd.loadPlugin) {
      dd.loadPlugin({
        plugin: '5000000002721132@*',
        success: (res: any) => {
          this.setData({ isReady: true });
        },
        fail: (err: any) => {
          logErrorUser('plugin load failed', JSON.stringify(err));
        },
      });
      app.globalData.hasInitCard = true;
    } else {
      logApiError('dd.loadPlugin', Date.now(), {}, 'dd.loadPlugin load fail', 'jsapi');
    }
  } catch (e) {
    logApiError('initCard fail', e);
  }
},

Step 2: The Mini Program plug-in references the Card SDK

<component
  id="{{componentData.flowId}}"
  componentProps="{{pluginComponentProps}}"
  is="dynamic-plugin://50000000027211321/card"
  a:if="{{ isReady && componentData.isPlugin }}">
</component>

Solving the tracking problem for cards delivered to multiple fields

How do we distinguish the same interactive card across different fields?

The card component context adds a trackRuleModel parameter, which is passed through during rendering.

export interface IRenderContext {
    /** Field identifier: 'im' for chat, 'topbox' for the ceiling, etc.; undefined fields can use 'standard' */
    bizScene: string;

    /**
     * Multi-field tracking rules
     */
    trackRuleModel?: ITrackRuleModel;
    /**
     * Multi-field tracking data source
     */
    trackSourceDataModel?: Omit<TrackSourceDataModel, 'extension'> & {
        extension?: never;
    };
}

Could collaboration become just a field, with the entire feed stream you see carried by interactive cards? Yes.

The use of interactive cards in collaboration

ISV applications such as Tomato Form and cloud fleet management are integrated alongside standard cards without any visual inconsistency.

In the future, more first-, second-, and third-party interactive cards can flow into collaboration, and collaboration can even directly increase the penetration rate of workbench applications.


Tomato Form interactive card in the collaboration feed

2.3 Integrating algorithm-recommended ordering

Why integrate the algorithm:

1. Leverage DingTalk's existing data middle office and its collaboration model with the algorithm team to provide users with high-value event processing cards from the personal perspective.

2. Echo the collaboration business goal and increase the click-through rate, that is, verify that collaboration is increasingly useful to users!

Architecture design of the collaborative relationship model integration scheme:


Collaborative front-end algorithm recommendation sequential link design

Front-end design implementation:

The recommended list consists of two segments, Pending and Other. After the card model design described above was implemented, the front-end work here is essentially only the splicing of the recommended list; all interactions in the UI layer and logic layer are already covered.


Pending (important content): cards strongly relevant to the user, such as incomplete to-dos; collapses after three items.

Other (may be of interest): output uniformly by the algorithm center based on the collaborative relationship model and the behavioral tracking data provided by the front end; interaction follows the recommended list's pull-down loading.
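The splicing step can be sketched as a pure function. The function name spliceFeedList, the three-item threshold constant, and the use of children for folding are illustrative assumptions derived from the card model described above, not the production code:

```typescript
interface FeedItem {
  flowId: string;
  children?: FeedItem[];
}

// Pending items beyond this threshold fold into the last visible card's children.
const COLLAPSE_AFTER = 3;

// Splice the "Pending" and "Other" segments into one render list.
function spliceFeedList(pending: FeedItem[], other: FeedItem[]): FeedItem[] {
  if (pending.length <= COLLAPSE_AFTER) {
    return [...pending, ...other];
  }
  const visible = pending.slice(0, COLLAPSE_AFTER);
  const folded = pending.slice(COLLAPSE_AFTER);
  // Attach the overflow as children so the recursive card template renders them folded.
  const last: FeedItem = { ...visible[COLLAPSE_AFTER - 1], children: folded };
  return [...visible.slice(0, -1), last, ...other];
}
```

Because the folded items ride along as children, the recursive common-card-box reference shown earlier renders them without any extra UI work.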

2.4 Completing cross-terminal product capabilities: collaboration on PC

This fills the gap in the collaboration experience on the desktop. Applications that rely on multi-terminal collaboration, such as documents, need desktop support before their in-app collaboration messages can be migrated into collaboration.


Collaborative PC

Building on the technical groundwork of collaboration on mobile, we delivered the PC version of collaboration in 10 days at the end of February.

This article focuses on the construction process of collaboration on mobile; the front-end design of the PC side is not covered here. A follow-up ATA article will explain how the collaboration PC scheme was implemented, so stay tuned.

2.5 Capability building for data operations

Unified APN push capability

Application scenario example: landing the DingTalk annual report:


APN push landing the DingTalk annual report

A general APN push link scheme was built as soon as the requirement was received: a communication channel from the client to the front end is established, the source information of the jump is pushed to the front end under the listener of DingTalk's unified jump protocol, and the front end performs the relevant business logic through the subscribed listening events.

import { getChannel } from '@ali/dingtalk-jsapi/plugin/uniEvent';

export function ddSubscribe(
  channelName: string,
  eventName: string,
  handler: (data: any) => void,
  useCache = true,
) {
  try {
    const channel = getChannel(channelName);
    channel.subscribe(eventName, handler, { useCache });
  } catch (e) {
    logApiError('ddSubscribe fail', safeJson(e));
  }
}

// The client subscribes to the unified jump protocol listening event and establishes the channel between both ends
ddSubscribe('channel.jumpAction.switchtab', 'cooperate_cooperate', (data) => {
  if (data?.data?.from === 'nianzong') {
    sendUT('Page_Work_New_Year_Summary');
    openLink$({ url: yearSummaryUrl || 'https://page.dingtalk.com/wow/dingtalk/default/dingtalk/yearsummary?dd_nav_translucent=true&wtid=yearsummary__xiezuo' });
  }
});

The top application area in collaboration

Why the application area needs to collapse

Affected by the global organization switcher, the application shortcut ceiling, the information flow toolbar ceiling, and other factors, the content display area of the collaboration feed itself is less than half the page height, and the new information cards carry more information but are also taller, so folding is needed to improve the screen efficiency ratio.


Application area display logic: pure front-end implementation, no server-side interaction here

1. Drag-and-drop custom ordering, linked with the settings page.

2. Whether each entry is displayed in collaboration is uniformly controlled by a grayscale key, and application entries are filtered for customized noise reduction for dedicated DingTalk customers.

3. The red-dot count is set through the client JSAPI before display.

// Unified management of grayscale keys, linked noise reduction, and red-dot settings
export const getGrayValueFromCacheByKeys = async (keys: KeysType[] = []) => {
  const result: any[] = [];
  const promiseAllArr: Array<Promise<any>> = [];
  const promiseAllArrKeys: KeysType[] = [];
  keys.forEach((key) => {
    if (cacheGrays[key]) {
      result.push(cacheGrays[key]);
      return;
    }
    const defaultGrayKeysConfig = allGrayKeys[key];
    if (!defaultGrayKeysConfig) {
      result.push(true);
      return;
    }
    promiseAllArrKeys.push(key);
    promiseAllArr.push(grayLemonFnFactory(...defaultGrayKeysConfig)());
  });
  const promiseResult = await Promise.all(promiseAllArr);
  promiseAllArrKeys.forEach((key: KeysType, i: number) => {
    cacheGrays[key] = promiseResult[i];
  });
  return cacheGrays;
};

The above covers the main front-end implementations and strategies during product capability building. There are many smaller features, such as automatically positioning cards to the current time and nearest event, card items, the synchronous push protocol, and imperceptible list refresh, that are not repeated here; interested readers can contact me.

3. Performance experience optimization and stability construction from the front-end technical perspective

Performance experience is a topic the front end cannot avoid, and I insist it is not something to optimize after the project is completed, but something to implement throughout business iteration.

3.1 Performance Construction

The complete startup link diagram of the collaboration Mini Program is shown below. What can we do at which key nodes?


The collaboration applet starts the link diagram

Avoiding common performance problems

White-screen time is too long

The screen stays white for a long time before the first screen appears, which feels bad to the user.

Lots of native API calls:

Through the IDE AppLog panel we found a large number of native JSAPI calls; many of them are not actually needed above the fold, and the performance cost of JSAPI calls varies across models, especially on low-end devices.

Package size is too large:

An oversized package, on the one hand, lengthens JS initialization and execution time; on the other hand, it may trigger unnecessary native API calls, further dragging down above-the-fold performance.

The big picture of the collaborative mobile performance experience construction

What follows is the core content of this article; future front-end projects will work this way.


The big picture of collaborative performance construction

Strategy: implement end-to-end performance and stability construction from the user perspective, in conjunction with the client container and the Mini Program itself.

Case 1: First-screen rendering optimization - strategy package and effects

Optimize static resources

Similar to conventional web techniques: mainly lazy loading and quality compression of images, a measure with a very high input-output ratio.

Reduce package size

Reducing the package size not only shortens JS context initialization time but also removes redundant API calls.

Rendering principle

Excessive setData calls are one of the main causes of performance drag; ideally setData should be called only once between Mini Program launch and first-screen render completion. Achieving this is challenging: reduce unnecessary module re-renders and shrink the data passed to setData, for example by locally updating a single card after an operation rather than updating the entire list.
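Shrinking the setData payload can be sketched with the key-path syntax that Mini Program data updates support. The helper name buildCardPatch and the field names are illustrative assumptions, not the production code:

```typescript
// Build a minimal setData payload that touches only one card in the list,
// using key-path syntax, e.g. { "feedFlows[3].status": "Completed" },
// instead of re-sending the whole feedFlows array.
function buildCardPatch(
  listName: string,
  index: number,
  changes: Record<string, unknown>
): Record<string, unknown> {
  const patch: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(changes)) {
    patch[`${listName}[${index}].${key}`] = value;
  }
  return patch;
}

// Illustrative usage inside a page:
//   this.setData(buildCardPatch('feedFlows', 3, { status: 'Completed' }));
```

The diff sent across the native bridge is then a couple of scalar fields rather than the serialized list, which is exactly what makes per-card updates cheap.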

Consider dynamic plugins

For modules that are not above the fold, have independent and complex functions, and cannot be split into subpackages, consider moving the logic into a Mini Program plugin loaded on demand.

Case 2: First-screen rendering optimization - DingTalk client LWP prefetch

Plan first, then move

Collaboration has no pre-dependencies such as location services, so its data can be requested in advance, which makes it especially suitable for LWP prefetching.

The startup process of most Mini Programs goes through the following steps:


The overall process is serial, and the home page data request often depends on the user's network environment and server performance. Parallelizing a serial process is a very common performance optimization idea: the home page data request usually just passes some parameters to the server to obtain data for rendering the UI, so we can abstract these rules and load the home page data in parallel before the business JS starts executing.
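The parallelization idea can be sketched without any container specifics. Here fetchHomeData stands in for the real LWP request and the timing is simulated; only the shape of the pattern matters:

```typescript
// Stand-in for the real home-page LWP request (name and payload are illustrative).
function fetchHomeData(): Promise<string[]> {
  return new Promise((resolve) =>
    setTimeout(() => resolve(['card-1', 'card-2']), 50)
  );
}

// Kicked off as early as possible (e.g. App onLaunch), so the request runs
// in parallel with container startup and business JS evaluation.
const homeDataPromise = fetchHomeData();

// By the time the page's onLoad runs, the data is often already resolved;
// awaiting the shared promise then costs nothing extra.
async function onPageLoad(): Promise<string[]> {
  return homeDataPromise;
}
```

LWP prefetch pushes the same idea one step further: the client fires the request before the business JS even starts evaluating.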


According to historical online data, it generally takes about 1s from Mini Program container start to the start of business JS execution, while DingTalk's LWP requests take about 300ms on average. This means roughly 3 LWP requests' worth of time is wasted during container startup.

DingTalk LWP prefetch is a full-link solution for client, server, and frontend


Business access

Use getBackgroundFetchData to open the client communication channel and read the data on demand:

dd.getBackgroundFetchData({
  type: "lwp",
  wait: true,
  success(res) {
    console.log(res.fetchedData); // cached data
    console.log(res.timeStamp); // timestamp at which the client obtained the cached data
    console.log(res.path); // page path
    console.log(res.query); // query parameters
    console.log(res.scene); // scene value
  }
});
// The wait parameter indicates whether to wait for the prefetch result. If true, the callback fires after the prefetch task completes.

Result data:

          With LWP prefetch   Without LWP prefetch
Android   10ms                299ms
iOS       17ms                30ms

As the data shows, iOS devices see limited gains thanks to their better baseline performance, but the improvement on Android is obvious.

With all that said, look at the cold start effect after killing the DingTalk App process; otherwise the whole Mini Program is kept alive in resident memory by the container.

Before: skeleton screen while the LWP request loads. After: with LWP prefetch, the first screen renders immediately.


iPhone XR, an Apple mid-range model

After enabling prefetch, apart from the startup time of the Mini Program container's UC kernel, the first screen renders essentially instantly.

3.2 Stability building

Case: DingTalk lwp-cache upgrades the weak-network and offline experience

Collaboration is a high-traffic entrance. Data shows that 3k+ users visit under weak network conditions every day, so an offline solution is the only way to improve their experience.

Cache strategy: cached above-the-fold data is rendered immediately. In subsequent iterations the cache read was moved up from the home page's onLoad to the App's onLaunch, saving tens to hundreds of milliseconds. This is a fairly common technique, but it is the most direct and effective way to reduce business time, and it is genuinely faster in physical terms.


DingTalk LWP-cache client cache scheme link flowchart

There is also a Mini Program snapshot scheme at the framework container level, used on the collaboration settings page, but not all scenarios are suitable for showing cached content first.

Offline strategy:

1. Use the client lwp-cache capability together with localStorage to read the cached data of the last request while offline.

2. On the front-end page, degrade images and features that depend on network requests.
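A minimal sketch of the cache-then-degrade flow, assuming a generic in-memory storage shim in place of the real Mini Program storage API (the names storage, cacheResponse, and requestWithOfflineFallback are all illustrative):

```typescript
// In-memory stand-in for the Mini Program storage API
// (real code would use the client storage JSAPI or localStorage on PC).
const storage = new Map<string, string>();

// Persist the latest successful response so it can serve a weak/no-network session.
function cacheResponse(key: string, data: unknown): void {
  storage.set(key, JSON.stringify(data));
}

// Network-first with offline fallback: try the request, and on failure
// degrade to the last cached response (or null when nothing is cached).
async function requestWithOfflineFallback<T>(
  key: string,
  request: () => Promise<T>
): Promise<T | null> {
  try {
    const data = await request();
    cacheResponse(key, data);
    return data;
  } catch {
    const cached = storage.get(key);
    return cached ? (JSON.parse(cached) as T) : null;
  }
}
```

A null result is where the page-level degradation kicks in: placeholder images and disabled network-dependent features instead of a broken screen.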


Optimize before and after comparison

Regarding stability, the collaboration front end also has full-featured degradation: a retry page for server-side interface exceptions, a degraded display for empty data, a retry mechanism after JSAPI failures, and so on, to ensure high availability and stability in all respects.

4. Business thinking from the front-end perspective

Doing business means being responsible for the final result, and the leap from technology/product thinking to business thinking holds many natural bottlenecks and gaps for the front-end role. Technical people should develop a forward-looking understanding of the business: behind every good architecture design is a deep understanding and abstraction of the business, with a correct grasp of its key indicators, rather than simply stacking features.

1. Excessive obsession with productization easily leads to getting lost in details;

2. Lack of long-term, high-frequency interaction with business parties leads to insufficient understanding of business models and low data sensitivity;

The shift from product/technology thinking to business thinking can be cultivated from the following aspects:

1. Cultivate sensitivity to goals and numbers: collect and build your own subscription reports, review them regularly, and ask about the reasons behind rises and falls in indicators;

2. Strengthen interaction with the business side, look at each requirement from the perspective of business goals, use the STAR rule to sort out the correlations, and ask more whys;

3. Try to combine the information you have into formula decomposition and sandbox deduction, for example App DAU = (MAU x average monthly access frequency) / 30 + average daily new users. Know the current magnitude of each indicator, which indicator each requirement serves, how much it can improve, and whether the goal can be reached after the improvement; then break down the work and sort out priorities;

4. Abandon overly advanced product obsessions, avoid getting lost in details, iterate quickly on the minimum viable product (MVP) principle, and distinguish "icing on the cake" from "fuel delivered in the snow" (timely help).

5. Concluding remarks

Users are using DingTalk ever more deeply, producing more and more collaborative "events".

"Collaboration" helps users better manage and more easily handle daily "events" through efficient recommendation and filtering mechanisms.

"Collaboration" has the opportunity to become the primary entrance for users to handle "events", making it easier to complete work. It is currently accumulating basic capabilities: more than 20 collaboration scenarios have been connected to "collaboration", and users process close to 2 million tasks per day through it.

The long-term value is that the entrance is more convenient, the scenarios are richer, the recommendation is more accurate, and the operation is simpler.