
Hong Jiewen, Chang Jingyi | Between "Intentional" and "Unintentional": A Study on the Algorithmic Identity Construction of Youth at Station B


In this study, we situate the problem of identity construction in the intelligent media environment and take Bilibili users as an example to investigate how young people construct algorithmic identities. Through participatory observation and in-depth interviews, we find that young Bilibili users have a certain degree of algorithmic awareness and have formed an algorithmic identity imagination, which serves as a precondition shaping their algorithmic identity construction behavior. On the one hand, the awakening of algorithmic consciousness among Bilibili youth is still incomplete, leading them to "unconsciously" comply with the platform's rules and fall into a passive construction process of "self-labeling", interest disclosure, and interactive "carnival"; on the other hand, the algorithmic awareness they do possess enables them to adopt a "submissive but unyielding" strategy and actively construct algorithmic identity through algorithmic rhetoric, identity correction after algorithm "failure", and attempts to transform the platform. In general, the algorithmic identity construction of Bilibili youth wanders between the "unintentional" and the "intentional", exhibiting backstage, dynamic, and improvement-oriented characteristics, and showing a new possibility for people and algorithms to get along freely.

I. Introduction

With the wide application of intelligent technologies such as cloud computing and big data, algorithms have become ever more deeply embedded in the information environment. Algorithms sort us on the Internet into categories, attempting to use 0s and 1s to create for us an algorithmic identity distinct from our real identity. Unlike identification in real life or deliberate identity performance, algorithmic identity is the algorithm's interpretation of who we are: based on our data, it automatically determines identity characteristics such as gender, class, and race (Liu Pei & Chi Zhongjun, 2019). Facing ubiquitous and pervasive algorithms, no individual can shed this "new clothing" called algorithmic identity, and clarifying its connotation and construction process can help us uncover a corner of the complex picture of the intelligent society.

The academic community has paid attention to the problem of algorithmic identity, but research has mostly centered on the platform as an "identity classifier", focusing on the technical classification and capital-control logic of algorithmic platforms, while the cognition and agency of users as "identity owners" are rarely reflected. To examine these issues in depth, this paper introduces algorithmic consciousness as an analytical element and shifts the perspective from the "classifier" to the "owner", which helps re-examine the role of human free will and personality in an algorithmic society. Focusing on young people as the "pioneers" of new media technology, this study examines their algorithmic awareness, their ways of constructing algorithmic identity, and their interactive games with algorithmic platforms, using the perspective of youth as a prism to refract users' innovative media practices in the era of algorithms and to explore new possibilities in the symbiotic relationship between humans and algorithms.

II. Literature review

(1) Conceptual origin: from digital identity to algorithmic identity

Identity is an enduring topic of discussion in modern society, and its connotation takes on multiple meanings in different theoretical perspectives and academic contexts. Identity may be understood as the distinction between self and others, "the characteristics that define one's own personality and attitudes that distinguish oneself from others" (Chen Anfan & Jin Jianbin, 2019); as the legitimacy of responsibility and loyalty (Zhang Jing, 2006:4); or as an interpretation and construction of people's personal experiences and social status within a cultural context (Xiang Yunhua, 2009). The representation and interpretation of identity are historically, culturally, and structurally contingent and fluid (Davis, 2015).

In the Internet era, the separation of online and offline life expanded traditional identity into the concept of digital identity. He Yijin (2021) argues that, outside the body in physical space-time, digital identities exist at different nodes of cyberspace and form a mapping with the body; the two are closely related but do not correspond completely. Digital identity is what allows an individual to be identified and personalized within Internet protocols; it is the identity through which an individual participates in the network by means of personal data, Internet accounts, comments, photos, text, videos, and other visible media (Martinez et al., 2021). Through digital identity we gain the possibility and the right to be recognized in the network and to participate in different activities. In this sense, digital identity is part of a broad panorama of all the activities and records that we carry out on the Internet.

With the arrival of the intelligent society, intelligent technologies have penetrated all aspects of social life, the connotation and form of digital identity have continued to change, and the concept of "algorithmic identity" has emerged. John Cheney-Lippold (2017:25) argues that the behaviors, faces, and even emotions we leave on the Internet are captured by algorithms, which match the extracted individual data against databases cleaned from masses of user data, carefully creating for us an algorithmic identity composed of 0s and 1s. Liu Pei and Chi Zhongjun (2019) define algorithmic identity as "a classification inferred by an algorithm from a data subject's data footprint, which automatically determines an individual's gender, class, race, and other identity characteristics". Lin Fan and Lin Aijun (2022) propose, from the perspective of subject and object, that non-human factors such as algorithmic technology exert technical intentionality on people by shaping user images, so that the subject who originally used the algorithm becomes an object processed and exploited by it. These definitions of algorithmic identity emphasize the objectification of human beings by algorithms but, to some extent, ignore human subjectivity. This study therefore attempts to shift the focus from non-human subjects such as algorithms to human subjects, in order to supplement the conceptual connotation of algorithmic identity.

(2) Platforms and users in the construction of algorithmic identities

1. Platform: An identity classifier based on technology and capital logic

Throughout modernity, identities have been constructed through processes of symbolic representation, scientific thinking, and institutionalization, and instruments such as censuses, psychological surveys, or medical taxonomies have been used to classify people into knowable, stable, and relatively uncontroversial categories (Kotliar, 2020). Today, however, more and more people are being categorized by algorithms, which tend to dynamically create uncountable, unnamed, and undecipherable clusters. Drawing on Max Weber's "ideal type," Cheney-Lippold proposed the concept of the "measurable type" to describe these clusters, that is, the various categories into which algorithms sort users by mining their data. Combined with Tiziana Terranova's (2004) theory of micro- and macro-states, measurable types are macro-states formed out of multi-dimensional interaction behavior data as micro-states. As a result, we in the age of intelligence have shifted from an "atomic me" to a "data me" (Cheney-Lippold, 2017:16).
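To make the notion of unnamed "measurable types" concrete, the following minimal sketch (our illustration, not any platform's actual model) clusters simulated interaction data with scikit-learn; the feature set, cluster count, and the use of k-means are all assumptions made purely for illustration.

```python
# Hypothetical illustration of "measurable types": clustering multi-dimensional
# interaction data yields unnamed, numeric clusters rather than pre-existing
# social categories. Features, cluster count, and model are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows = users; columns = per-user interaction counts (follows, shares,
# comments, barrages, coins, likes, favorites) -- the "microscopic states"
interactions = rng.poisson(lam=[5, 2, 3, 8, 4, 10, 6], size=(1000, 7))

model = KMeans(n_clusters=12, n_init=10, random_state=0)
cluster_ids = model.fit_predict(interactions)

# The resulting "measurable types" are only numeric IDs (0..11): dynamic,
# unnamed groupings with no built-in mapping to gender, class, or race.
print(np.bincount(cluster_ids))
```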

Antoinette Rouvroy (2012) describes algorithmic analysis as "data behaviorism": a way of generating knowledge that bypasses the "traditional" process of meaning-making and thus ignores the reflexivity, looseness, and moral capacity of the human subject. Andrejevic (2013:135) argues that algorithmic classification of user identities is a "post-comprehension" information strategy that ignores pre-existing social categories. Fisher (2019) proposes an "algorithmic epistemology" in this regard: algorithms do not seek to understand the causes of user behavior through analysis and experience, but only to identify patterns in it. Since individuals can fit thousands of rules at once, "algorithmic epistemology" abandons the notion that individuals belong to binary (e.g., man/woman) or other discrete population categories and instead offers a more subtle gaze. Algorithms can create behavior-based scales that treat individuals as more or less feminine or masculine, viewing people as dynamic, vivid data hybrids (Kotliar, 2020). On the basis of user data, the algorithm dynamically cross-combines various categories to form the user's algorithmic identity. This identity is no longer bound to a strict, eternal concept; it can be adjusted, rewritten, or even erased anytime and anywhere, and the once stable authenticity of identity becomes ambiguous.

Technology is never completely neutral or value-free, so behind the logic of algorithmic technology we must also consider the political and economic relations embedded in the platform. From the perspective of platform power, Lawrence Lessig (2006) argues that code is the "law" of cyberspace, and the platform's control over code as rule-maker is a form of power. Cheney-Lippold (2017:160) identifies a more indirect and covert "soft-biopolitical" power in algorithmic platforms, whereby the platform controls users' identity representation by constructing categories through clustering models and huge databases. From the perspective of capital accumulation, Christian Fuchs's (2013) theories of "digital labor" and the "Internet prosumer" hold that platforms monitor users' behavioral data and analyze their consumer interests during use, then sell these as commodities to third-party advertisers or use them to improve their own services, completing a circular accumulation of capital. Users are not only recipients of services and advertisements, but also producers of personal behavior data and self-shared content. Shoshana Zuboff (2019) likewise proposes "surveillance capitalism", arguing that platforms turn human behavioral data into virtual commodities and control and guide people's behavior through predictive analytics.

Therefore, even though studies have shown that algorithmic identities are not replicas of offline real identities in the sense of David Lyon's (2001:22) "data doubles", algorithmic identities that can stimulate users' consumption and enable capital accumulation are more valuable to platforms than users' real identities. Hidden behind the technical logic of the algorithm is the platform's value selection based on the logic of capital, which may run counter to users' individual interests and values; yet the platform can influence users' choices and decisions through powerful "rationality power", and even control users' behaviors and activities (Quan Yan & Chen Long, 2019).

2. User: Identity owner based on individual cognition and agency

Algorithmic identity differs markedly from traditional identity in its fragmentation, fluidity, and anti-essentialism. These characteristics have made the platform-as-"identity classifier" perspective, grounded in the logic of technology and capital, the dominant perspective in algorithmic identity research, while insufficient attention has been paid to the cognition and agency of users as the owners of algorithmic identities.

First, identity is constructed on the basis of self-cognition: people must first answer the question "Who am I?" in order to move from identity cognition to identification and then to the construction of self-identity (Xiang Yunhua, 2009). For algorithmic identity, this involves not only self-awareness but also the user's cognition of the algorithm and of the algorithmic identity itself, that is, answering the question "Who am I in the algorithm?". Scholars term users' level of awareness of algorithms "algorithmic awareness", which includes knowledge of content filtering, automated decision-making, human-computer interaction, and ethical considerations (Brahim, 2021). Many studies have shown that users possess a certain degree of algorithmic awareness, though to significantly varying degrees (Emilee & Rebecca, 2015; Proferes, 2017; Huang, 2019; Gran et al., 2020). At the most basic level, users merely perceive that search results do not show all information sources equally and that certain information is prioritized (Eslami et al., 2015); on this basis, users may develop an understanding of the algorithm's specific ranking criteria and coding principles (DeVito et al., 2018), and even a critical understanding of the ethical issues in the underlying logic of the technology (Rieder, 2016). Cheney-Lippold holds that users' algorithmic consciousness makes possible the flow and bridging between algorithmic identity and real identity (Cheney-Lippold, 2019:166-179), but does not discuss in depth how this becomes "possible"; the role of users' algorithmic consciousness in the construction of algorithmic identity thus requires further study.

Second, research on identity construction in existing media practice shows that users construct their identities in different ways as media technologies evolve, and these diverse, flexible strategies reflect users' subjective initiative. In the BBS era, users mostly adopted interest-oriented, small-community strategies to highlight "tribal" identity expression; in the era of microblogs and personal web pages, users combined multimedia elements to construct vivid, three-dimensional online identities; in the era of social media represented by WeChat, the combination of social display and instant messaging made "always being present" the main mode of identity construction (Lv Yuxiang, 2021), while emotional elements were used to construct and strengthen one's identity attributes and social relationships (Chen Anfan, 2019). Among these users, young people often play the role of "pioneers" of new media technologies (Yan Qihong, 2023), and they are also the group most extensively exposed to and most profoundly influenced by algorithmic technology (Zhao Longxuan & Lin Cong, 2022).

In addition, no identity is static, and individual agency is also reflected in communication and interaction between the self and others. Cooley's "looking-glass self" holds that people's perception of the self comes from the reactions of others, and Gandy likewise argues that identity is formed in processes of direct and indirect interaction with others, containing forces from both outside and inside (Fangting, 2008). Through a variety of strategies, individuals outwardly represent their self-perception and use reflexivity to continually adjust and revise their identities in interaction with others (Li Qinke & Xia Zhuzhi, 2021); self-expectations and others' reactions interplay in this interaction and finally tend toward the unification of self-identification and other-recognized identity (Qin Mingxing, 2005). With the advent of the intelligent era, the absolute dominance of humans in the human-technology relationship has been broken: technology is no longer merely a tool to be commanded and may even, in turn, control and dominate people. In the hybrid, heterogeneous system of the platform, algorithms have become a new kind of "quasi-other" (Jiang Xiaoli & Zhong Dibing, 2022), and the identity-construction game between users, algorithmic technology, and the platform behind it is therefore worth studying.

On this basis, this study takes bilibili (hereinafter "Bilibili", also known as "Station B") users as an example to investigate young people's algorithmic identity construction, focusing on their cognition and actions in this process, so as to echo and supplement previous research. The main research questions are: at the cognitive level, what kind of algorithmic consciousness and algorithmic identity imagination have Bilibili youth formed; at the action level, how do they construct algorithmic identity; what influence does the former have on the latter; and what are the overall characteristics of their algorithmic identity construction?

III. Research design

Bilibili is China's main platform for ACG content creation and sharing; it turns a large number of users into "co-actors" through its algorithmic mechanisms (He Yuan, Zhang Hongzhong & Su Shilan, 2022) and has become a multicultural community covering more than 7,000 interest circles. Like Douyin, Kuaishou, Xiaohongshu, and Toutiao, Bilibili is a representative algorithm-driven platform in China, but its interactive functions are the most complex and diverse (including barrage (danmaku), follow, share, like, coin, favorite, comment, dislike, etc.). This makes the types of user data involved in Bilibili's algorithm more varied and the identity expression and presentation of its users richer. For these reasons, Bilibili was chosen as the research platform.

In 2021, Gu Yu, president of the Bilibili Public Policy Research Institute, pointed out in his speech "Yunxiu Vitality, Internet New Youth, Youth Positive Energy" that the core users of Bilibili are "Generation Z" (people born between 1995 and 2009, then aged 14 to 28). According to 2022 data from the service provider "Blue Lion Asks", users aged 18-30 account for 78% of Bilibili's users, with roughly equal proportions of male and female users. Chen Junjun et al. (2022) conducted a descriptive statistical analysis of Bilibili users' age, education, and residence and concluded that the user base is young and student-dominated. Based on these data, this study regards young people aged 18-28 as Bilibili's core user group, takes them as the research object, and combines participatory observation with in-depth interviews. The participatory observation aimed to analyze the identity-construction behavior and interaction patterns of Bilibili youth by entering the Bilibili community and collecting videos, articles, barrages, comments, and other materials over a long period; the in-depth interviews aimed to obtain first-hand data on their cognition of algorithms, their subjective imagination, and their behavioral motivations.

From July to August 2022, the researchers retrieved 121 related videos and 24 column articles using keywords such as "algorithm" and "big data"; after eliminating irrelevant content, they obtained about 8,400 words of text material (barrages, comments, and column articles) from the Bilibili community. Interview subjects were recruited through WeChat Moments and Bilibili private messages from July 27 to August 25, 2022. To ensure proficiency with Bilibili, recruitment was limited to users with more than one year of use and an account level of Lv3 or above. A total of 25 interviewees were interviewed for 30 to 60 minutes each, 18 online and 7 offline. The basic information of the interviewees is shown in Table 1: ages are concentrated between 18 and 28, the male-to-female ratio is 13:12, and educational backgrounds range from high school and junior college to master's degree or above. The sample characteristics are basically consistent with those of Bilibili's core users and can be considered representative.

Each interview consisted of four parts. The researcher first asked for the interviewee's basic information (age, length of use, Bilibili level, etc.); then, in everyday language, asked about the interviewee's understanding and evaluation of the algorithm and of algorithmic identity, for example using "the content Bilibili shows you" instead of the concept of "algorithmic recommendation" and "what kind of person does Bilibili think you are" instead of the concept of "algorithmic identity"; then asked about the interviewee's identity-construction actions and strategies on Bilibili, such as "What methods do you use to adjust the content you see on Bilibili?"; and finally asked about the effects of these strategies and the perceived changes in algorithmic identity, such as "After you ......, what changed?". After each interview, the interview notes or audio recording were transcribed sentence by sentence, and content related to the research topic was screened and extracted, yielding about 24,000 words of text material in total.


IV. Findings

(1) Preconditions for the construction of algorithmic identity: the awakening of algorithmic consciousness and the imagination of algorithmic identity

1. Bilibili's recommendation algorithm based on the interaction relationship chain

According to the "Personal Information Collection List" provided in the "Privacy Policy" of Bilibili, Bilibili collects three types of information: user identity information (including gender, IP location, date of birth, school, device information, third-party ID, etc.), personal property information (including address, transaction content, member order records, etc.) and browsing and interaction information (including following, sharing, liking, coin, collection, comment, dislike/report, playback duration, history, etc.) for personalized recommendation services. The platform collects this information for user portraits, covering needs, behaviors, interests, psychology, personality and other user attributes (Wu Jianyun, Xu Mingzhu, 2021), and at the same time, classifies and manages videos through various indicators, covering video characteristics, video content, related personnel and other video attributes (He Yuan et al., 2022), and finally realizes the accurate correlation and matching between user attributes and video attributes.

Different data indicators carry different weights in Bilibili's algorithm. According to official disclosures and user discussions (see Bilibili's Help Center and related column articles), Bilibili's video recommendation adopts a "strong attention" mode, that is, "follow" data carry the highest weight. The algorithm gives priority to recommending videos from UP masters (creators) the user follows, which also fits many users' habit of opening the "Dynamic" page (content posted by the people they follow) to watch videos. In addition, "share" and "comment" indicators carry relatively high weight in the recommendation algorithm, followed by "barrage", "coin", "like", and "favorite", with "playback" the lowest. The reason is that behaviors such as following and sharing are more interactive and more costly for users than playback, and therefore better represent the user's interest in a video. Overall, Bilibili's algorithm is a personalized recommendation algorithm based on the interaction relationship chain: the more interactive a behavior, the higher the weight of its data and the higher the probability that the corresponding videos will be recommended.
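The weighting logic described above can be summarized in a minimal sketch. The ordering of behaviors follows the text (follow highest, playback lowest), but the numeric weights and the scoring function are illustrative assumptions only, not Bilibili's disclosed parameters.

```python
# Minimal sketch of "interaction relationship chain" weighting. The ordering
# follows the text above; the numeric weights are assumptions for illustration.
ILLUSTRATIVE_WEIGHTS = {
    "follow": 10.0,     # "strong attention" mode: followed UP masters rank first
    "share": 6.0,
    "comment": 6.0,
    "danmaku": 4.0,
    "coin": 3.5,
    "like": 3.0,
    "favorite": 2.5,
    "play": 1.0,        # lowest-cost behavior, lowest weight
}

def recommendation_score(signals: dict) -> float:
    """Score a candidate video by the weighted sum of a user's past
    interactions with similar content; higher scores surface first."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(k, 0.0) * v for k, v in signals.items())

# Example: content tied to followed and shared creators outranks content
# the user merely played many times.
followed_video = {"follow": 1, "share": 2, "play": 3}
played_video = {"play": 20}
print(recommendation_score(followed_video) > recommendation_score(played_video))  # True
```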

2. The awakening of the algorithmic consciousness of the youth of station B

Based on previous research, we divided algorithmic awareness into three levels, "basic perception", "logical understanding", and "critical reflection", and assessed the 25 respondents accordingly. First, all respondents reached "basic perception" of Bilibili's algorithm, that is, they clearly stated in interviews that the content Bilibili pushed "differs from person to person" and is "to their liking". Second, the majority of respondents (22) reached the second level, "logical understanding": they could list possible bases for content recommendation, though their understanding was not comprehensive. The most commonly mentioned was the effect of playback behavior on recommendation results, for example: "I feel it recommends based on the videos I usually browse" (F9); "I swiped through a certain celebrity's video a few times, and it keeps pushing videos of the same star or program" (F1); "After each video plays there are related recommendations, all related to that video or to what I usually watch" (F3). Some respondents also mentioned the effect of following behavior, for example: "The updates of the UP masters I follow are recommended on the homepage" (F4); "I follow a lot of game streamers, and game content frequently appears in my homepage recommendations" (M7). A small number of respondents also perceived the link between the "one-click triple" (like + coin + favorite) and recommendation results, for example: "After I give a video the 'one-click triple', similar videos appear when I refresh" (M8). By contrast, respondents had little perception of the relationship between sharing or commenting behavior and recommendation results.

Further analysis shows that algorithmic awareness at the level of "logical understanding" comes mainly from direct experience gained in use. Respondents often cited their own experiences on Bilibili to support the link between their behavior and the recommended content. Browsing is the most frequent operation, and its correlation with recommendation results is the easiest to perceive; sharing and commenting require two or more additional steps and involve a degree of social pressure, so these behaviors occur far less frequently, and their correlation with recommendation results is harder for respondents to judge. By investing time and energy in the platform, respondents unconsciously participated in the operation of the algorithm and, through extensive exposure to it, deepened their understanding of its logic. In this sense, the algorithm is a technology that is "not easy to understand without direct use" (Grant & William, 2012). However, only a small number of respondents (5) reached the "critical reflection" level of algorithmic awareness, mainly manifested in reflection on the homogenization of the content Bilibili pushes, for example: "I can't stop once I start swiping Bilibili; the algorithm is too addictive" (M2); "I often see a lot of repetitive, similar videos, so sometimes I feel the content Bilibili pushes confines me to it" (M9). On the whole, the young Bilibili users interviewed have awakened to a certain degree of algorithmic awareness, but most remain at the level of basic perception of the algorithm's existence and partial understanding of its logic.

3. The algorithmic identity imagination of the youth of station B

The study found that, along with the awakening of algorithmic consciousness, Bilibili youth have also formed an imagination of their algorithmic identity. When asked "What kind of identity do you think Bilibili assigns to you?", most respondents could name more than three characteristics and cite relevant examples. In describing their algorithmic identities, hobbies were the most frequently mentioned identity characteristics, such as "an old LOL player" (a loyal player of the game League of Legends) (M10), "a fitness enthusiast" (M4), and "a 'Ha fan'" (a fan of the Harry Potter series) (F3), while gender, age, geography, and other characteristics were rarely mentioned (for example, "a Wuhan university student" (M9)). Property information was not mentioned at all, and some respondents even explicitly denied it: "(I) don't think Bilibili should be able to know my financial situation" (F8).

Comparing this with the three types of information in the Personal Information Collection List above, we find that the algorithmic identity in respondents' imagination centers on personality and interest characteristics, pays less attention to the basic identity characteristics more closely tied to real identity, and entirely overlooks personal property information. The reason is that although Bilibili provides the "Personal Information Collection List" and the "Privacy Service Agreement", the vast majority of respondents do not read the "extra-long" privacy agreement carefully; even if they pay attention, its professional terminology is difficult to fully understand, and once closed, these notices and agreements are hard to find again. Only 2 of the 25 respondents had actively searched for and viewed Bilibili's information-collection and privacy notices, one of whom said "I didn't understand it" (F1) and the other "couldn't find it" (M10). The remaining respondents said they simply ignored the privacy policy that popped up on the splash screen, and had only seen such a pop-up when first downloading the app and logging in. It can be seen that Bilibili youth can form a certain algorithmic identity imagination, but because the platform owner inevitably hides or conceals the details of its specific algorithm as corporate secrets (Pasquale, 2015), the accuracy and completeness of this imagination are constrained.

Combined with the analysis of the algorithmic identity construction process below, the researchers argue that the awakening of algorithmic consciousness among Bilibili youth, and the algorithmic identity imagination formed as a result, are the preconditions for algorithmic identity construction. On the one hand, their "weak" or "incomplete" cognition and imagination of the algorithm lead them to "unconsciously" comply with the platform's rules and passively participate in the construction of algorithmic identity without realizing it. On the other hand, the algorithmic consciousness they have already awakened brings their subjective initiative into play, and they "consciously" adopt a variety of ways to construct algorithmic identity.

(2) "Unconscious" algorithmic identity construction: passive cooperation under the guidance of platform control

1. "Self-labeling" identity dismantling under the framework of algorithms

According to Howard Saul Becker's labeling theory, "labeling" refers to the subjective framing of the meaning of others on the basis of a specific socio-cultural background, facts, and judgments; it classifies someone or something into a category and subsumes it under a "label" (Wang Yuqi, 2021:16-19). On algorithmic platforms, "tags" locate user characteristics and content attributes and assist the platform in setting and operating its rules, so that users become the objects "labeled" by the algorithm. Through the precise matching of content tags and user tags, the algorithm achieves priority pushing of video content and personalized customization of the user interface.

The study found that while being "labeled" by the algorithm, Bilibili youth also involuntarily or inadvertently dismantled their self-identity according to the algorithm's framework, which the researchers describe as "self-labeling". The interviews showed that, from the moment they registered an account, respondents filled in user information and attached labels to themselves such as "post-90s", "post-00s", and "college student"; when browsing videos, they automatically sorted their interests into tags such as "drama", "movies", "TV series", "music", "dance", and "games" according to Bilibili's own divisions; in addition, since July 18, 2022, Bilibili has displayed the IP location of accounts in accordance with relevant laws and regulations, so the geolocation tag is also mandatorily shown. With the "tag" structure explicitly appearing in the interface and embedded in the background, Bilibili youth generalize, simplify, and flatten their complex "polyhedral" identities at every step, from entering Bilibili and registering an account to watching and publishing content.

More importantly, "labels" have a guiding and qualitative effect, which will have an impact on an individual's self-perception. Due to self-verification, self-identity, self-impression management, etc. (Duan et al., 2014), individuals tend to align their behavior with the content of the "label", that is, the "label effect" (Guadagno, 2017). For example, a respondent was recommended a large number of food videos after searching for cooking tutorial videos on station B, and finally "became a cooking enthusiast" (F8). However, most of the respondents said that they have become accustomed to quickly locating videos of interest through a variety of verticals and keywords, and it is difficult to detect and lack reflection on this "label effect".

2. Disclosure of personal interests in explicit and implicit interactions

In addition to "labeling", the study found that young people at station B were also guided by the platform's well-designed interaction methods and unconsciously disclosed their interests and hobbies. The interaction methods in station B are divided into explicit and implicit, among which explicit interaction includes sharing, liking, coining, collecting, commenting, following, sending barrage and other interaction methods presented on the application interface of station B, while implicit interaction includes screen operation track, video completion ratio, interface stay time and other interaction methods that are not directly presented on the application interface of station B. The explicit or implicit interaction of users is monitored, recycled, and evaluated by the platform in real time, which promotes the production and reproduction of algorithmic identities, and the changes in the objects, methods, and degrees of interaction will also lead to instantaneous updates and changes in the identity features captured by algorithms.

First, explicit interactions tend to be a direct reflection of users' personal preferences. Many interviewees said they would share, like, or give coins when they encountered a video they liked, and would long-press and select "not interested" when a tiresome video was pushed repeatedly: "I usually give the 'one-click triple' when an original video particularly impresses me" (F5); "If I come across a video I don't want to watch, I usually just swipe past it, but if it is pushed repeatedly I long-press 'not interested', and then I basically don't see it any more" (F7). Second, implicit interactions are also captured by the algorithm for user profiling: incomplete playback, full playback, and repeated playback represent increasing degrees of interest, and the longer a user stays on a page, the more likely they are interested. However, because implicit interaction works in a more covert way, most respondents found it difficult to perceive its impact on content visibility, leaving blind spots in their understanding of Bilibili's algorithm. In addition, Bilibili uses implicit interaction design to prompt and guide explicit interaction. For example, after a video has been played for a certain length of time, the "one-click triple" icon appears in a prominent position on the screen, and after a user has stayed on a video's introduction page for a while, the "share" button at the bottom changes from a gray arrow to a bright green WeChat icon (see Figure 1).
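The role of the implicit signals named here (completion ratio, dwell time, repeated playback) can be illustrated with a hypothetical sketch that maps them to an interest estimate; the thresholds and formula are assumptions for illustration, not Bilibili's actual profiling method.

```python
# Hypothetical mapping of implicit interaction signals to an interest score.
# Thresholds and weights are illustrative assumptions only.
def implicit_interest(completion_ratio: float, dwell_seconds: float,
                      replays: int) -> float:
    """Map implicit behavior to an interest score in [0, 1]:
    unfinished < fully played < replayed, and longer dwell time counts more."""
    base = min(completion_ratio, 1.0) * 0.5
    replay_bonus = min(replays, 3) * 0.1              # repeated playback signals strong interest
    dwell_bonus = min(dwell_seconds / 300.0, 1.0) * 0.2
    return min(base + replay_bonus + dwell_bonus, 1.0)

# Example: a fully watched, replayed video scores higher than one abandoned early.
print(implicit_interest(1.0, 240, replays=2))   # ~0.86
print(implicit_interest(0.2, 30, replays=0))    # ~0.12
```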


3. Emotional stimulation and interactive "carnival" under the barrage mechanism

Bilibili's distinctive barrage (danmaku) mechanism has, to some extent, fostered "collusion" between UP masters and the platform. Under Bilibili's algorithmic mechanism, videos with more barrages, comments, and likes are recommended to more users. Driven by the algorithm, UP masters pursue these indicators and must find ways to guide users to participate actively in interaction. For example, some experienced UP masters pair the barrage mechanism with the "one-click triple" mechanism, cleverly reminding viewers to give the "one-click triple" at the beginning or end of a video. This kind of "routine" sometimes really works, and a reminded viewer may "like or throw a coin" (F5); at other times it provokes viewers' "rebellious streak", generating large numbers of barrages replying with joking phrases such as "next time, for sure". But whether or not the "routine" works, the UP master wins either way: even without the "one-click triple", they have gained a large number of barrages, which improves the video's recommendation.

On this basis, the barrage mechanism also triggers the mutual contagion of users' emotions, forming an interactive "carnival". In the barrage areas of popular videos, for example, screen-swiping occurs frequently and on a large scale, sometimes to the point of completely obscuring the video image. Some interviewees said that "screen-swiping" gives them a stronger desire to express themselves, as well as a sense of belonging and integration: "I prefer posting barrages. I'm a bit bolder there, hiding in the middle of a mass of barrages or directly copying other people's; I don't feel so much 'shame'" (F2); "If everyone is complaining, I can't help posting barrages too, sometimes complaining along, sometimes arguing back" (F3); "Many of the in-jokes in the barrage are 'understood by all'; everyone tacitly swipes the screen together, which makes me feel I'm part of this circle" (M1). Moreover, the symbols in the barrage and comment space are non-instantaneous: barrages and comments from different times converge in the same space, creating the illusion of a bustling "co-presence" and thus sustaining the "carnival". Unless removed by Bilibili's moderators, the barrage and comment areas retain every piece of text and every emotion a user sends, and infected latecomers keep gathering and joining like trickles feeding a torrent of emotion. Surrounded by emotion in the barrage space, Bilibili youth find it easier to merge into the carnival crowd and express themselves loudly.

Beer (2009) uses the term "technological unconscious" to describe users' difficulty in perceiving how information technology shapes daily life. This study argues that Bilibili youth's understanding of and reflection on algorithms remains incomplete, making it difficult for them to escape the control of platform rules and leaving them in an "unconscious" state with respect to parts of the algorithmic technology: they dismantle their identities through "self-labeling" under the algorithm's framework, are guided by the platform's interaction design to disclose their interests, and are stimulated into a "carnival" by the barrage mechanism. In this "unconscious" state, an unequal cooperative relationship forms between Bilibili youth and the platform, in which the platform actively exerts control while the youth passively accept it, imperceptibly drawn into the production of data raw material and the construction of algorithmic identity.

(3) "Conscious" algorithmic identity construction: the active strategy of "submissive but unyielding".

1. Algorithmic rhetoric born in the context of "banter culture"

In Bilibili's distinctive environment of "banter culture", discussions of algorithms and big data in the community rarely take the form of serious, esoteric science popularization; most are conducted in a humorous, teasing style of discourse. Observation and interviews found that when Bilibili youth find the recommended content very much to their taste, they joke that "the account has been trained well" to express satisfaction with the algorithm; when the recommended content becomes too homogeneous, they mock themselves with "the account has been raised crooked" to express dissatisfaction. Similar expressions include "I successfully swiped my homepage into nothing but so-and-so" (M5) and "this account has been raised crooked by me" (from a Bilibili comment area). On the one hand, this shows that Bilibili youth perceive that they provide the "nourishment" on which the algorithm runs; on the other hand, it shows their subjective consciousness in interacting with the algorithm. In addition, anthropomorphic rhetoric such as "please, big data, remember me" and "big data understands me" appears in comment areas and video titles: "Commenting so that big data will remember me", "Posting a comment lest big data think I don't like this", "I must comment so Bilibili knows I love watching this" (from Bilibili comment areas).

The "banter and ridicule" of the young people of station B on the algorithm shows that in addition to being manipulated by the algorithm and passively resisting the algorithm, users may also get along with the algorithm with a relaxed and positive attitude. However, through personal participation and interviews, the researchers found that after sending texts such as "big data remember me" for a period of time in station B, the algorithm recommendation results of station B did not show a particularly obvious directional change, and only publishing such content in the comment area will arouse a certain reaction of the algorithm, but this reaction is also difficult to attribute to the algorithm "responding" to the call of the youth of station B, because any content published in the comment area will be captured by the algorithm as a data record into a fixed response program, so that the same recommendation results may be obtained. Some interviewees also bluntly said that "[I posted] 'big data remembers meme' is just a meme, and it shouldn't really have much effect" (F6).

Further interviews and observation revealed that under a comment like "big data, remember me", multiple replies with similar wording often gather, and this algorithmic rhetoric forms a kind of "meme" with strong appeal and spreadability. The researchers argue that the primary addressee of the youths' call for "big data to remember me" is not the algorithmic technology itself, but more likely other users looking on. "Big data, remember me" is a humorous expression of the young users' own interests: they hope the call will be captured by the algorithm, but they also expect it to trigger a collective echo from users with the same interests. An algorithm focused on personalization may seem designed to separate users into isolated individuals, yet the algorithmic rhetoric of Bilibili youth shows that users can actively use the algorithm's rules to turn the marks the algorithm recognizes into a code by which group members recognize one another. The "big data, remember me" comments constitute an interaction ritual that achieves participation through emotional interaction across time and space, helping individual users find their interest groups, integrate into the circle, achieve identification, and satisfy their desire for expression and sharing. To a certain extent, the algorithmic rhetoric of Bilibili youth is a "useless use".

2. Identity correction and adjustment strategies after algorithm "failure"

Human behavior is inherently highly flexible and unpredictable (Robertson, 2011:65), and for users, different behavioral choices have different underlying motivations. For example, liking and giving coins mean "thanking the UP master because making videos is not easy" (F3, F7, F11), generally an affirmation of and gratitude for the maker's careful production and investment, but not all such videos are ones users want to watch often; favoriting, meanwhile, is mostly done because the video is long or has too many episodes and needs to be "saved first and watched later" (F7).

The interviews revealed that many respondents had experienced "disappointment" with the algorithm. Algorithms struggle to grasp the complex psychological motivations and real needs behind users' dynamic behavior, which leads to algorithmic "failure", manifested mainly as the two-way failure of positive and negative feedback. For videos they are interested in, the youth give positive feedback such as watching or the "one-click triple"; for videos they are not interested in, they usually ignore them or select "not interested" as negative feedback. Implicit in both is the expectation that Bilibili will adjust its recommendations according to their feedback data: "I hope to see more and more videos I'm interested in when I refresh" (F2), or "I don't want to see this kind of video any more" (M2).

In actual use, however, on the one hand the "over-sensitivity" of positive feedback makes the algorithm predict user interests prematurely, which can lead to recommendation bias or a proliferation of homogeneous content: "I only listen to one song from a certain program, but it pushes me videos of the whole program or of the program's other artists" (F4); "I used to be able to find my own 'dish' among all kinds of videos, but now Bilibili feels more like a single-section site and has become very monotonous" (M9). On the other hand, the "insensitivity" of negative feedback makes the algorithm lag behind users' needs and seriously degrades the experience; moreover, because the "not interested" button is hidden in the interface and only appears after long-pressing a video for a few seconds or tapping its upper-right corner, the negative feedback channel itself is not smooth for many users: "Some content I'm not interested in still keeps being recommended" (F5); "I've never clicked 'not interested'; I don't even know where it is" (M3).

In their experience of a "malfunctioning" algorithm, respondents with the first two levels of algorithmic awareness, "basic perception" and "logical understanding", can gradually perceive that the algorithmic identity runs ahead of or lags behind their expectations, and may correct it by strengthening identity characteristics. For example, some respondents adopted a "polarization" strategy: interacting only with content they particularly like or care about (posting barrages, commenting, the "one-click triple"), and actively avoiding (never clicking) or giving detailed negative feedback on content they dislike (clicking "not interested" repeatedly and specifying the reason from options such as "don't want to see this UP master", "don't want to see this section", "don't want to see this channel", or "too much content of this kind recommended"). Others used an "active search" strategy to adjust the algorithmic identity, which generally applies when a feature is missing from it: when Bilibili youth become interested in something or need certain knowledge and the algorithm fails to capture the change in time, they activate the algorithm through direct search behavior to update and shape the algorithmic identity. Evidently, when faced with the algorithm's "failure", Bilibili youth are not entirely resigned; they can take the initiative to use the algorithm's rules to correct and adjust the algorithmic identity.

3. Algorithm response and platform transformation attempts in community discussions

The study found that some Bilibili youth are able to discover the limitations of algorithms and reflect on them, and then actively discuss in the community how to deal with algorithms and transform the platform, so as to grasp the initiative in algorithmic identity construction. Community discussion takes two main forms. The first is discussion among users. In Bilibili's comment areas, topics such as "how to 'train your account'", "how to curb addiction", and "how to get high-quality videos" are actively discussed, and such discussion may extend to other platforms such as Douban, Xiaohongshu, and Douyin, where users compare and borrow from experiences with different platforms' algorithms. By actively sharing their experience and coping strategies, users deepen their cognition, understanding, and reflection on algorithms and become more likely to take active steps to construct algorithmic identities.

The second is discussion between users and the platform through official channels. Bilibili has opened a discussion area for user opinions and feedback in its community center, where users can give feedback directly to Bilibili by publishing posts and thereby participate in platform optimization. For example, under the topic #Hoping Bilibili will add or modify features#, users have suggested "adding a function to filter videos by length on the homepage" and "adding videos from [My Favorites] to the homepage algorithmic recommendations". Follow-up observation found that although such feedback is inefficient and slow to take effect, many reasonable suggestions have indeed been improved upon and resolved by the platform, and users' constructive suggestions can have a certain influence on Bilibili's platform construction and algorithm optimization.

Further interviews found that, compared with abandoning Bilibili, most respondents would rather transform it together: "Bilibili is my home; transforming it depends on all of us" (M9). The reason is that Bilibili youth are primarily interest groups gathered around common hobbies, with emotion as the bond that sustains their identification. Bilibili's inclusive and diverse cultural environment produces a strong sense of identity and belonging, which develops into a sense of ownership: they regard Bilibili as a "home" for their personal interests and a "place of self-retention" online. For them, shutting down or disrupting the algorithm means the "deterioration" of this space of interests, and abandoning the platform means losing this online refuge entirely: "I've thought about quitting, but I have deep feelings for it and can't bear to; I still hope Xiaopo Station (an affectionate nickname for Bilibili) gets better and better" (M2); "I've tried turning off personalized recommendations, but then Bilibili becomes very boring" (F1). Therefore, Bilibili youth are more willing to adapt to and use algorithms, or to actively influence and change them, so as to create algorithmic identities that meet their imagination and expectations.

In general, whether in algorithmic rhetoric, identity correction after the algorithm's "failure", or attempts to transform the platform through community discussion, Bilibili youth show that they can take deliberate action to "consciously" construct algorithmic identities driven by algorithmic awareness. The researchers summarize this active strategy as "submissive but unyielding": following the algorithm without being at its mercy, and actively constructing algorithmic identity by using or influencing the algorithm. When Bilibili youth possess the basic ability to perceive algorithms, they may notice the deviation between the algorithmic identity shaped by the platform's intervention and their own self-imagination, which lays the foundation for "consciously" constructing algorithmic identity; when they improve their ability to reflect on algorithms, they can evaluate the quality of the content they encounter and make rational judgments, which creates the possibility of "consciously" influencing and changing the algorithm.

V. Conclusions and Discussion

This study situates the problem of identity construction in the intelligent media environment and takes Bilibili users as an example to investigate young people's algorithmic identity construction. It first supplements the connotation of algorithmic identity from the perspective of human-algorithm interaction, defining it as "an identity reconstructed in a two-way process, in which the algorithm infers identity characteristics from the data users provide passively or actively, while users highlight, correct, and change those characteristics according to the algorithm's rules". Algorithmic identity is thus not only an identity assigned to users by algorithms according to a classificatory logic, but also one that users can actively shape through the awakening of consciousness, carrying the interactive attribute of "self-other".

The study finds that young people long immersed in the algorithmic environment can, to a certain extent, perceive the existence of algorithms and understand their logic, and further form an algorithmic identity imagination, which serves as a precondition shaping their algorithmic identity construction. On the one hand, the awakening of their algorithmic consciousness is still incomplete, leading them to "unconsciously" conform to the platform's rules and passively participate in the construction of algorithmic identity, including "self-labeling" under the algorithm's framework, interest disclosure under interaction guidance, and the interactive "carnival" under the barrage mechanism. On the other hand, the algorithmic awareness they already possess makes them adopt the strategy of "submissive but unyielding", actively constructing algorithmic identities through algorithmic rhetoric, identity correction after algorithmic "failure", and platform transformation through community discussion. In general, the algorithmic identity construction of Bilibili youth wanders between the "unconscious" and the "conscious". Even under the platform's implicit control, the initiative of Bilibili youth in constructing algorithmic identity deserves affirmation, and the awakening of deeper algorithmic consciousness is all the more precious.

This study summarizes the characteristics of Bilibili youth's algorithmic identity construction in three points:

The first is the "background" feature. Compared with the front-end identity performance published on social media such as WeChat and Weibo, the algorithmic identity constructed by the youth group in Station B is more of a backstage identity presented to themselves and their "like-minded friends". The youth of Bilibili regard the platform as a private space or a safe space, thus revealing their true personal hobbies and tastes, which essentially forms a "digital backstage". In this area, the youth of station B form the discourse boundary of identity confirmation between groups through algorithmic rhetoric and other codes, and use algorithm rules to build a space for identity expression with common meaning. At the same time, the background identity construction of Bilibili youth also implies the "default trust" of the algorithm platform (Walker, 2006), which may broaden some new ideas for the study of human-computer interaction.

The second is the "dynamic" characteristic. Whether it is "unconscious" identity disclosure or "conscious" identity adjustment, the algorithmic identity characteristics of the youth of Bilibili are accumulating, updating and transforming all the time. In a sense, the dynamic algorithmic identity construction of the youth of Bilibili pursues a kind of excavation and exploration of the real self, implying the goal expectation of "making the algorithm understand me better". At the same time, the youth of station B are constantly recognizing the identity of the algorithm in the algorithm, and constantly comparing it with the real self in imagination and expectation, and coping with the deviation between the two through multiple representation methods. In this dynamic process, the convergence and bridging between the algorithmic identity and the true self is thus possible.

The third is the "improved" characteristic. In the process of algorithmic identity construction, the youth of Bilibili are more inclined to obey but not succumb to the algorithm, and transform rather than abandon the platform. When caught in the "failure" of the algorithm, the youth of Bilibili can use the algorithm rules to correct and adjust the identity of the algorithm, and can also exchange experiences and suggestions through community discussions, so as to actively deal with the algorithm and even transform the platform. This kind of flexible improvement of collective wisdom is a manifestation of strong enthusiasm and group wisdom, which is different from the more common "obedience algorithm" and "resistance algorithm" in existing research, and adds a link of "improved algorithm" in the human-computer interaction chain, presenting a new possibility for people and algorithms to get along freely.

This study still has limitations: some respondents showed negative emotions such as avoidance and resistance when facing questions related to algorithmic identity. Future research can further expand the sample to attend to the complex mentalities of young people toward algorithmic identity and to more diverse possibilities for action.

(Hong Jiewen and Chang Jingyi, "Between 'Intentional' and 'Unintentional': A Study on the Algorithmic Identity Construction of Youth at Station B", Issue 12, 2023; excerpted by the WeChat editorial department; please refer to the original text for academic citation)