
21st Century Cybernetics | Cross-Cultural Cybernetics: The Localization of Universality

Author: The Paper

This is an abridged transcript of a speech delivered on November 4, 2022, in the "Cybernetics in the 21st Century" lecture and forum series. The event was hosted by the Media Laboratory of Guangdong Times Museum and the Instrument Philosophy and Technology Research Network and co-sponsored by Hanya Jingshe, with Xu Yu as academic host and Wu Jianru as curator. Lecture homepage:

https://medialab.timesmuseum.org/en/lectures

Today we will explore the history of cybernetics in different cultural contexts and the ways in which its claims to universality were localized.

The history of cybernetics is one of crossing cultural, political, and disciplinary boundaries. The field takes its name from Cybernetics: Or Control and Communication in the Animal and the Machine, a book published in 1948 by Norbert Wiener, a professor of mathematics at the Massachusetts Institute of Technology. Historical photographs and documents relating to Wiener are displayed in the lobby of MIT's mathematics department, not far from my office.

Wiener's work in cybernetics drew on his wartime study of anti-aircraft fire control. He designed and built an anti-aircraft predictor, a servomechanical device that operated on feedback to predict the trajectory of enemy aircraft. This function had previously been performed by human gunners, so Wiener's device would "take over specific human functions." The invention led Wiener to a profound analogy between the operation of automatic, feedback-based control devices and purposeful human behavior. In 1943, Wiener, the physiologist Arturo Rosenblueth, and the engineer Julian Bigelow published an article proposing that purposeful human behavior is governed by feedback mechanisms, the same mechanisms at work in servomechanisms. Combining the terms of control engineering (feedback), psychology (purpose), philosophy (teleology), and mathematics (extrapolation, the estimation of new data points beyond a known collection of data), they constructed a behavioral classification scheme that applied to both human and machine operations.

In Cybernetics, Wiener generalized these ideas and introduced a new "universal" language, which I call "cyberspeak," that tied together a variety of human-machine metaphors. Spanning several disciplines, including computation, information theory, control theory, neurophysiology, and sociology, cybernetics used the same terms, such as information, feedback, and control, to describe organisms, control and communication devices, and human society.

Cybernetics repeatedly crossed the Atlantic to Europe, the Soviet Union, South America, and elsewhere, reappearing in different forms: at different times and places it served as a tool for designing cutting-edge weapons, a theoretical basis for freedom of expression, a method for designing intelligent machines, a model of how the human brain functions, a vehicle for interdisciplinarity, and an instrument for recasting the theoretical frameworks of a wide range of life and social sciences in terms of mathematics and computation. Sometimes it carried a strong ideological charge; sometimes it presented itself as politically neutral. Every time cybernetics crossed a new cultural, political, or disciplinary boundary, its meaning was contested and redefined.

A particularly striking example of a universalist vision caught up in this global movement of localization is the effort to design computer programs capable of performing human cognitive tasks, commonly known as artificial intelligence (AI). The aspiration of artificial intelligence is to grasp the universal principles of thought and to implement them in computers. In 1984, Patrick Winston stated the goal of AI research: "Artificial intelligence excites those who want to discover the universal principles that all intelligent information processors must develop and exploit." Meanwhile, in the Soviet Union, an emerging AI community set itself a goal that sounds familiar: "to understand how humans think, and what the mechanisms of thinking are." On both sides of the Iron Curtain, AI research was understood as an inquiry into the fundamentals of the human mind.

Scientists in both the United States and the Soviet Union believed that there was a uniform, universal, ahistorical mechanism of human thinking. But because these scientists came from different cultures, they had distinctive, culturally specific intuitions about the human mind. What they took to be the universal category of "the human" was in fact the human of a particular culture. As a result, their AI models reflected the peculiarities of their respective cultures.

The day-to-day practice of any society rests on generally accepted patterns of behavior, seen as typical and normal, and on a repertoire of strategies for handling everyday situations: so-called "common sense." John McCarthy famously called for AI systems to be "programs with common sense," implying that common sense is fundamental to the human mind. However, as the anthropologist Clifford Geertz observed, common sense is "historically constructed and... subjected to historically defined standards of judgment. It can... vary dramatically from one people to the next. It is, in short, a cultural system." Geertz also cautioned against attempts to sketch out the logical structure of common sense, because it has none. This unfortunately undercuts the basic premise of McCarthy's statement.


Everyday practice acts as a medium, carrying a constant exchange of cultural symbols and shaping the cultural vocabulary of any given group. For Americans, everyday experience ranges from reading The New York Times to watching political debates on TV to choosing among a wide variety of goods at the supermarket. The everyday experience of Soviet citizens was completely different. They never read The New York Times, never watched political debates, never weighed competing brands when shopping. They read Pravda and underground literature, attended party congresses, and queued at food stores. What seemed typical and normal to them was peculiar and exotic to Americans, and vice versa. Yet even if common sense is not universal, AI models do tell us something: if not about the fundamentals of the human mind, then about the specific cultural content of common sense.

Cultural influences manifest themselves not only through the typical patterns and strategies of everyday life but also through language: through the metaphors we live by and use, including metaphors for the mind itself. Today I will discuss metaphors popular among American and Soviet intellectuals, shaped by their different cultures, and trace their connection to specific AI systems. I argue that American and Soviet scholars took fundamentally different approaches to artificial intelligence, and that the difference has deep cultural roots. In searching for general principles of thought and behavior, AI experts in fact built their own cultural stereotypes into the models they developed.

Consider, for example, the everyday activity of shopping. The main problem facing American consumers is how to make the right (or "healthy") choice among a vast array of foods and goods. The ability to make the right choice is also a key focus of academic training in the United States. College students pick the courses they want from a wide range of offerings. The most common exam format, the multiple-choice question, likewise asks the candidate to pick the one correct answer among several possibilities. Ballots in political elections list more than one candidate. In contrast, most everyday situations in the USSR were different: higher education prescribed fixed subjects and a fixed sequence of courses for each major; the only choice students had was picking a favorite sport for physical education class; multiple-choice exam questions were rare, and students instead had to write out every step of a solution, losing points if their solution differed from the textbook's, even when the final answer was correct. Finally, the Soviet way of shopping posed problems of its own. The question was not what to choose but what could be bought at all. With many foods and household items in chronic shortage, people could obtain sought-after products only through the black market. An ordinary Soviet citizen had to build a unique, extensive private network of friends, relatives, friends of relatives, and relatives of relatives, casting a wide net in order to buy the desired washing machine or TV.

The cognitive theories developed by American and Soviet scholars reflect choice and creativity as valued in their respective cultures. For example, the American cognitive psychologist Jerome Bruner described concept attainment as a process in which each step "is often a choice or decision between alternative steps." Bruner's research exemplified the "cognitive revolution" in psychology, which was closely related to the work of the American AI pioneers Herbert Simon and Allen Newell, who put choice at the heart of a model of intellectual activity called "heuristic search."

The Soviet psychologist Andrei Brushlinskii, however, rejected the view that thinking involves choosing among preset alternatives. True thinking, he argued, must produce a new alternative: "Actual, living thinking, for example in solving a task or problem, is the prediction of an initially unknown, nonexistent solution. This prediction... makes it unnecessary to choose among alternative solutions."

Soviet and American AI experts sometimes borrowed psychological theories, and psychologists sometimes borrowed AI models. Still, AI experts were in the habit of ignoring psychologists' findings, believing that knowledge should flow from artificial intelligence to psychology, not the other way around. AI and psychology often agreed because both relied on the same cultural mindset.

Herbert Simon, one of the pioneers of artificial intelligence in the United States, explicitly appealed to everyday experience in arguing that the act of choice lies at the heart of intellectual activity:

All of us understand the general characteristics of human choice and the broad features of the environment in which such choice occurs. I feel free, then, to call on this shared experience as a source of hypotheses for a theory of humans and the human world.

Simon drew on a wide range of mathematical theories that modeled choice in well-defined, well-structured environments: econometrics, game theory, operations research, utility theory, and statistical decision theory, which his biographer Hunter Crowther-Heyck calls the "sciences of choice." All of these theories assume that choice is free and rational: individuals act on their environment, but the environment does not affect their goals or preferences.

Simon also drew on another set of disciplines: sociology, social psychology, anthropology, and political science. These "sciences of control," by contrast, emphasize the plasticity and conformity of individuals, who are subject to group and social pressures and shaped by their social environment. The "administrative man" of the sciences of control seemed incompatible with the "economic man" of the sciences of choice.

Simon fused the sciences of choice and control in his theory of "bounded rationality": when solving complex problems, people reduce them to a limited set of alternatives and choose reasonably among those. The organization of individual choices is what makes rational decision-making possible.

In his 1956 paper "Rational Choice and the Structure of the Environment," Simon used a maze as a metaphor for the mathematical model he introduced, which describes how an organism meets a diversity of needs by making a series of rational choices at forking points on the basis of incomplete information. The image is apt and easy to grasp. Simon extrapolated from his personal experience to humanity as a whole, presenting a series of rational choices as a "universal" model, a philosophy of life:

The philosophy of life certainly involves a set of principles... Principles can be assembled into various heuristics to guide people in making choices at the forks in life's road, as in keeping to the right course in a maze... In this chapter I have been describing my own life and my personal philosophy of life, but I have also been describing everyone's life.

In the 1950s and 1960s, Simon collaborated with Allen Newell on the "heuristic search" method, which soon became the dominant paradigm of artificial intelligence research in the United States. In their model, solving a problem means finding a path from the initial state to the goal state within a problem space. This space looks like a branching tree, or a maze; at each step, the problem solver must choose among options, each of which is a branch diverging from the choice point. When complete information about the maze is unavailable, or the maze is too large for exhaustive calculation, Newell and Simon proposed using heuristics, rules of thumb, to guide the choices. They saw maze search as a general model of intelligence, and their computer program, the General Problem Solver, as a general "theory of human problem solving."
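To make the maze metaphor concrete, here is a minimal sketch of heuristic (best-first) search; the toy maze, the Manhattan-distance heuristic, and all the names are my own illustration, not Newell and Simon's actual program:

```python
import heapq

def heuristic_search(start, goal, successors, heuristic):
    """Best-first search through a problem space.

    successors(state) yields the neighboring states (the branches at a
    choice point); heuristic(state) estimates the distance to the goal.
    Returns the path of choices from start to goal, or None.
    """
    frontier = [(heuristic(start), start, [start])]
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

# A toy "maze": states are (x, y) cells; the heuristic is the
# Manhattan distance to the goal cell.
goal = (3, 3)
maze = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (3, 3)}
succ = lambda s: [(s[0] + dx, s[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if (s[0] + dx, s[1] + dy) in maze]
h = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
print(heuristic_search((0, 0), goal, succ, h))
```

At every step the program stands at a choice point and uses a rule of thumb, the heuristic, to decide which branch of the maze to explore next, exactly the structure the model ascribes to human problem solving.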

As Simon and Newell's conceptualization of human behavior became more formal, the situations they modeled grew ever more limited and regimented: from the semi-independent decisions of workers in large organizations, to the semi-automatic actions of machine operators in air-defense control centers, to the constrained moves of chess players. In the various computer implementations of the heuristic-search model, from the Logic Theorist (an automated reasoning program, often called the first artificial intelligence program in history, diagrammed at the bottom of the screen) to a chess program to the General Problem Solver (sketched at the top of the screen), Newell and Simon tended to focus on situations that were fully described, unambiguous, and computer-friendly.

Newell and Simon also redefined the problem of choice: they no longer spoke of "making a decision" but of "solving a problem." Where a decision maker may weigh different goals, a problem solver must concentrate on the problem as given. Decision-making became an uncontested, apolitical process of allocating "processor time" among tasks. The choice was now less about which set of values to accept and more about which set of data to process. Politics was reduced to technology: the idea of freely controlling and purposefully changing one's environment became the purely technical task of streamlining a search through a maze.

In her paper, "Telling the American Story," anthropologist Livia Polanyi, in her elaboration of cultural "grammar," emphasizes that "control" is one of the most important aspects of American life. The "appropriate people" portrayed in everyday conversations are those who are "able to fully control the world for pleasure and power." In contrast, in the case of the USSR, your social environment was not something you could control. If a Soviet cultural grammar were constructed, this description might be rewritten as "proper people are those who are able to achieve happiness from the control of the world sufficiently."

The idea that computers could perform intelligent tasks sparked intense controversy in the Soviet Union of the early 1950s. In the paranoid and suspicious atmosphere of the Cold War, technological innovations from the West were routinely viewed with distrust. Responding to the Western vogue for the "thinking machine," the Soviet press denounced the idea as both a technological threat and an ideological subversion. Soviet journalists accused the capitalists of ulterior motives, charging that they wanted to replace striking workers with robots, and human pilots who refused to bomb civilians with "indifferent monsters." Soviet philosophers, for their part, attacked the concept of a "thinking machine" as at once "idealistic," separating thought from the material substrate of the brain, and "mechanistic," reducing thinking to computer operations. Soviet critics lumped all the controversial computer applications together as "cybernetics" and branded the field a "reactionary, idealistic pseudoscience." Despite the obvious logical contradictions (cybernetics was portrayed as idealistic and mechanistic, utopian and dystopian, technocratic and pessimistic, a pseudoscience and a dangerous weapon of military aggression), the campaign seriously affected Soviet research. Amid the media frenzy, the study of "thinking machines" became ideologically unacceptable, and early Soviet computer applications were confined to scientific computing.

Yet the anti-cybernetics campaign did not dampen Soviet scientists' interest in computer systems that could perform intelligent tasks. The Soviet Union's first large electronic digital computers were installed in defense research institutes, which were relatively sheltered from ideology and gave their staff access to current Western publications. The early Soviet champions of cybernetics and artificial intelligence came mainly from these institutions. The mathematician Aleksei Liapunov headed the computer programming section attached to the Department of Applied Mathematics of the Institute of Mathematics of the USSR Academy of Sciences in Moscow. The department (from 1966, the Institute of Applied Mathematics) carried out calculations for the Soviet nuclear weapons and rocket programs. These calculations were cross-checked against those obtained at the First Computer Center of the Ministry of Defense, where the computer expert Anatolii Kitov directed research and development. In 1955, after Stalin's death, Kitov and Liapunov, together with Sergei Sobolev, a leading mathematician of the nuclear weapons program, published an article in the journal Problems of Philosophy openly refuting the ideological accusations against cybernetics, which effectively legitimized research in the field.

As the cybernetics movement grew, it absorbed a variety of mathematical models and computer applications, spawning "cybernetic biology," "cybernetic physiology," "cybernetic linguistics," "cybernetic economics," and many other fields.


In the upper left corner of the slide above is a Soviet electronic digital computer, the MESM, which was used for defense calculations. Below it is Aleksei Liapunov, one of the pioneers of Soviet cybernetics, pointing with a teaching stick at a huge table outlining various cybernetic applications. In the lower left corner is a photograph of a meeting, with Liapunov on the left and Norbert Wiener in the center. In 1960, Wiener attended a conference in Moscow and was received as a celebrity. In the upper right corner, Wiener stands next to a huge portrait of Lenin. Party leaders had become interested in computer technology and the prospects it opened for a socialist economy.

In the lower right corner, the Soviet leader Leonid Brezhnev examines a typing device connected to a Soviet computer, a scene from the 1970s. By then, Soviet public attitudes toward the "thinking machine" had swung in the opposite direction. The Soviet press began to praise the intelligence of computers, describing them as universal magic tools for solving any problem. Articles with titles like "The Thinking Machine" and "Almost Science Fiction" sprang up in newspapers and popular magazines. Journalists hastened to dismiss the earlier ideological criticisms, claiming that the harmful side effects applied only to capitalist societies:

If in the capitalist world the introduction of "thinking" machines means rising unemployment, intensified exploitation of workers, and fear of the future, then in a socialist society, by liberating people from hard, tedious work, machines will give us the opportunity to focus on what is sublime and joyful: thinking, creating, and above all creating new "thinking" machines.

In this way, the creation of thinking machines, artificial intelligence, was legitimized in the USSR. In 1961 the new Program of the Communist Party of the Soviet Union declared: "Cybernetics, electronic computers, and control systems will be widely applied in production processes in industry, construction, and transport, in scientific research, in planning and design, and in accounting and management." The Soviet press began to call computers "machines of communism."

Despite the media hype, the Soviet government showed little interest in supporting AI research. The leaders of the cybernetics movement did not aspire to artificial intelligence; they were simply trying to cast the computer as an efficient tool, not an autonomous agent. Admiral Axel Berg, chairman of the Scientific Council on Cybernetics of the USSR Academy of Sciences, declared openly that electronic computers "will increasingly help humans, but will never replace them and will never think." Computers remained in short supply, and administrators frowned on programmers who tried to divert precious computing resources to pondering problems of machine intelligence.

Soviet skepticism toward artificial intelligence was embedded in the very language. The phrase "thinking machine" was always placed in quotation marks to emphasize that it was figurative. The term "artificial intelligence" remained controversial, and researchers avoided it, preferring neutral-sounding labels such as "control psychology," "the study of information processes," or "heuristic programming."

In 1964, the mathematician Dmitrii Pospelov and the psychologist Veniamin Pushkin convened a regular seminar of computer experts and psychologists interested in artificial intelligence at the Moscow Power Engineering Institute, naming their field "psychonics." The psychonics group directly challenged the Newell-Simon paradigm and proposed an alternative approach.

The term "imitation" was formed through analogy bionics. Biomimicry experts want to mimic the "design" of living organisms in engineered systems, while Pospelov and Pushkin are eager to use psychological knowledge to build intelligent computers. Pushkin conducted several eye-motion tracking studies on chess players and concluded that each player built a different mental model of the position of chess pieces on the board, rather than looking for solutions in a preset problem space. He asserts that the human problem space is not originally a tree, and that the process of finding solutions involves creating a new problem space, rather than "pruning useless branches" as Simon Newell proposed the labyrinth model.

Soviet AI experts disliked the maze model not because it was inefficient but because it clashed with their cultural intuitions. Even those who did not know the origins of the General Problem Solver associated maze search with the workings of a "bureaucratic mechanism." While some followed the logic of Newell and Simon, asserting that "humans think by exhaustive search," many others proposed alternative models, for example, treating thinking as a chain of associations.


Pushkin and Pospelov conceptualized thinking as reflection on a problem rather than a search. In their view, the current situation and the goal are often described at different levels. In chess, for example, the initial state is the position of particular pieces on the board, while the goal state, checkmate, requires a higher-level description: a king under attack with no escape. A human chess player must travel back and forth between low-level and high-level descriptions, that is, build and manipulate mediating models of the situation. As the figure in the middle shows, the situation can be depicted at different levels. Pushkin and Pospelov argued that the basic intelligent operation is situational modeling, not maze search: "Of all the existing words and concepts used to describe creative thinking, the most appropriate and precise is the Russian word soobrazhenie (consideration/imagination)... The solution reflects the situation, on the basis of an image or a model of its elements."

For Pospelov and Pushkin, human creativity shows itself in abandoning the old maze, reconceptualizing the problem, and constructing a new problem space. For example, one cannot build four equilateral triangles out of six matches as long as one keeps looking for a solution on a flat surface. Constructing a new solution space, that is, moving into three dimensions and forming a tetrahedron, solves the problem.

Whereas Newell and Simon started from a ready-made problem structure, Pushkin and Pospelov suggested that constructing the problem is itself a crucial intelligent step in finding a solution: building a full model of the situation matters more than a powerful search algorithm. They proposed a "semantic language" for formally describing situations at various levels of generality and developed a system for building relational models of situations. Pospelov and his team implemented this approach in computer systems for controlling loading and other industrial operations in seaports, systems that combined technical and human factors.
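As a rough sketch of what relational situational modeling might look like in code (a toy of my own devising, not Pospelov's actual seaport system), the situation is stored as a set of relations over objects, and re-description rules derive higher-level relations instead of searching a fixed maze of moves:

```python
# A toy relational situational model: the situation is a set of
# relations over objects, and higher-level descriptions are derived
# by rules rather than found by searching a maze of moves.
from typing import Set, Tuple

Relation = Tuple[str, str, str]  # (predicate, subject, object)

def derive(facts: Set[Relation]) -> Set[Relation]:
    """Apply re-description rules until no new relations appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        # Illustrative rule: a crane holding cargo while positioned
        # over a ship yields the higher-level relation "loading".
        for (p1, crane, cargo) in facts:
            if p1 == "holds":
                for (p2, c2, ship) in facts:
                    if p2 == "over" and c2 == crane:
                        new.add(("loading", cargo, ship))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = {("holds", "crane1", "container7"), ("over", "crane1", "shipA")}
print(derive(facts))  # adds ('loading', 'container7', 'shipA')
```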

Pospelov and Pushkin's critique of the maze model echoed a perception widespread in Soviet culture: that choice among given alternatives is a limitation on creativity. To intellectuals in the Soviet Union and Eastern Europe, the maze of rigid choices offered by the state seemed far too confining. Some, like Pospelov and Pushkin, openly pointed out the limits of choice and set about creating new problem spaces.

Soviet intellectuals developed sophisticated tactics of their own. Recent research on the Soviet intelligentsia has broken down stereotypes formed during the Cold War. Mathematicians and computer experts working on defense projects, like priests in the temple of the almighty god "computer," created an autonomous territory for the intelligentsia in the temperature-controlled, restricted-access rooms housing the first generation of huge computers. The mathematicians Izrail Gelfand and Mikhail Tsetlin, who worked on defense problems at the Institute of Applied Mathematics, used the latitude their privileged position afforded to pursue research on the central nervous system.

In 1958, Gelfand and Tsetlin organized a regular informal seminar on mathematical models in physiology. Neurophysiologists had traditionally assumed that the various centers of the central nervous system coordinate their activity through a complex system of interconnections. This assumption puzzled the mathematicians: in a large system, the number of connections grows so fast that any mathematical model becomes hopelessly complex. Tsetlin and Gelfand proposed instead a model in which each node treats the activity of all the other nodes as changes in its environment. They showed that a node need not act directly on other nodes; it can simply observe changes in the environment and follow a simple adaptive algorithm, minimizing its own interaction with the environment. The result is purposeful behavior of the system as a whole, in this case minimizing the system's interaction with its environment. In this model, purposeful behavior of the whole does not require complex subsystems. The behavior of each unit is very simple: it tries to avoid interaction rather than build elaborate networks of coordination. Gelfand and Tsetlin called this adaptive mechanism the "principle of least interaction":

Each subsystem is constantly solving its own "particular," "personal" problem, that of minimizing its interaction with the medium; the complexity of a subsystem therefore does not depend on the complexity of the whole system... Our mathematical models allow us to imagine (to some extent) the interaction of nerve centers without having to postulate a complex system of connections coordinating their activity.
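A small simulation can convey the flavor of the principle. This is my own illustration, not Gelfand and Tsetlin's mathematics: "interaction" is crudely defined as the product of a node's activity and the aggregate activity of the rest, and each node adapts only to reduce its own interaction, yet total interaction in the system falls:

```python
import random

# Each node sees only an aggregate "environment" signal and adapts to
# reduce its own interaction with it; no node addresses any other node
# directly, yet total interaction in the system decreases.
random.seed(0)
n = 20
activity = [random.uniform(-1, 1) for _ in range(n)]

def interaction(i):
    env = sum(activity) - activity[i]   # the rest of the system
    return abs(activity[i] * env)       # node i's "interaction"

for step in range(200):
    i = random.randrange(n)             # one node adapts at a time
    saved = activity[i]
    activity[i] += random.uniform(-0.2, 0.2)
    if interaction(i) > abs(saved * (sum(activity) - activity[i])):
        activity[i] = saved             # keep a change only if it
                                        # lowers local interaction
    if step % 50 == 0:
        print(step, sum(interaction(j) for j in range(n)))
```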

This distinctive definition of purposeful behavior, minimizing the interaction between a system and its environment, apparently grew out of Soviet intellectuals' desire to preserve maximal intellectual autonomy. Tsetlin argued that his neural model had the virtue of depersonalized control: the system does not need to tell each node what to do; it self-organizes by granting its elements freedom under only the most general conditions. In a February 1965 lecture at the Physiological Society, Tsetlin drew an explicit contrast between free and forced labor to highlight the advantages of self-organization:

Convict labor is more expensive than free labor, even though the convict is fed and clothed far worse and works no less. The point is not only that a prisoner works less efficiently, but that he must be fed, clothed, and guarded at the employer's expense. With a free person things are different: ... my manager ... does not have to think about when it is time for me to change my shoes or sheets, or how to bring up my children.

Murray Eden, a biophysicist at the Massachusetts Institute of Technology, once remarked: "One wonders whether cultural or social differences explain why Tsetlin chose to study the phenomenon of cooperation through 'expedient' behavior, while American game theory focuses on competition among players." Strictly speaking, Tsetlin's model was no socialist mathematical ideal. It reflected the peculiar position of intellectuals in the Soviet system: the "phenomenon of cooperation" arose from individuals' efforts to escape control by the environment or by other individuals. Eden's intuition about the social and cultural roots of different approaches to game theory nonetheless deserves deeper exploration.

In 1926, the Hungarian-born American mathematician John von Neumann developed an axiomatic formalization of the two-person zero-sum game with a finite number of "strategies" (complete plans of play). It rests on a Western notion of social interaction as competition: a contest of self-interested players who calculate rationally while warily watching their opponents.

Von Neumann proved the minimax theorem, which asserts the existence of an optimal "mixed," or randomized, strategy that minimizes the maximum loss and guarantees each player the "value of the game." He argued that the minimax strategy captured something fundamental about human rationality: "Any event, given the external conditions and the participants in the situation (provided the latter are acting of their own free will), may be regarded as a game of strategy if one looks at the effect it has on the participants."
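In modern notation (a standard formulation rather than von Neumann's original), the theorem says that for a two-person zero-sum game with payoff matrix A, over mixed strategies x and y (probability distributions over the two players' pure strategies),

\max_x \min_y \; x^{T} A \, y \;=\; \min_y \max_x \; x^{T} A \, y \;=\; v,

where v is the value of the game: the payoff the maximizing player can guarantee no matter how the opponent plays.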

Von Neumann's biographer Steve Heims traces this formalism to a worldview in which the world is full of ruthless competitors, each treating all other players as cunning adversaries:

His temperament had been shaped by the harsh political realities he experienced in Hungary. The recommended style of playing the "game" of economic life, with its emphasis on prudence and on calculating expected outcomes, all this utilitarian stress aptly expresses the characteristic ideal of the middle class in capitalist society.

In 1944, von Neumann and his collaborator Oskar Morgenstern, an Austrian-American economist, extended the original framework of game theory to economic problems in their co-authored Theory of Games and Economic Behavior. They explicitly challenged the deterministic decision-making prized by neoclassical economics and presented the "solution" of an economic game as a probabilistic "stable set" of possible distributions of gains among the players. As the historian Philip Mirowski puts it, they saw mixed strategies as "manifestations of the randomness of thought itself" and effectively turned the minimax strategy into "the epitome of abstract rationality." Mirowski further notes that von Neumann and Morgenstern believed game theory could "simulate the behavior of any opponent and therefore serve as a general theory of rationality," and that in their work "the line between game theory and artificial intelligence is often blurred."

Amid all the indeterminism embraced by von Neumann and Morgenstern, one thing remained fixed: the rules of the game. Fixed rules not only made game theory's highly formal results possible; they were also the backbone of its conception of rationality: the world is too complex for deterministic analysis, but it still follows rules, so a mind equipped with randomness can still calculate an optimal set of strategies.

American defense analysts asserted that "the significance of game theory as a decision-making tool is that it frees us from having to guess the intentions of our adversaries." While guessing seemed to American analysts the very antithesis of rational problem solving, it was often the only option open to a shrewd Soviet decision-maker.

In the Soviet context, the outcome of a debate over the validity of a scientific theory often depended on the participants' skill at playing the game, a game complicated by the uncertainty of the rewards and penalties attached to particular strategies.

The fundamental uncertainty of the Soviet social game is reflected in Tsetlin's theory of collective games of automata. An automaton here is a mathematical model of a finite-state machine that changes its state according to a transition function and the current input. Tsetlin interpreted an automaton as an agent placed in an environment that randomly rewards or penalizes particular actions. Unlike von Neumann's classical games, Tsetlin's games confront automata with a world of uncertainty. He wrote:

It is worth noting that the premise of the games of automata discussed here differs from the accepted view in game theory. The latter usually assumes that the game is defined by a system of payoff functions known to the players in advance... We consider interesting the games of finite automata that have no prior information about the game and passively adjust their strategy in the course of one repeated play after another.

In Tsetlin's games, "the players have almost no information about the game. They do not know the number of players, they do not know what is happening at any given moment, and they do not even know what kind of game they are actually playing." Tsetlin colloquially likened his model, an agent operating in an environment with unknown and changing rules, to "a little animal in the big world." His friend Nikolai Bernstein, a cybernetics-minded neurophysiologist, used a similar metaphor to describe the fundamental uncertainty of mental activity: "We can say, for example, that an organism is constantly playing a game with its environment, a game with no stated rules, in which it is unknown what moves the other side has planned."

Tsetlin found that in a changing environment, where the probabilities of reward and punishment shift over time, the most successful automata are those with few states. In other words, if the rules of the game keep changing, an automaton gains nothing from a long memory of its own history. The more dynamic the environment, the shorter the optimal depth of the automaton's "memory."
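This finding is easy to reproduce in a toy simulation (my own sketch of a standard two-action learning automaton with memory depth n; the penalty probabilities and the switching schedule are arbitrary assumptions, not Tsetlin's parameters). In a static environment, deeper memory wins; once the environment swaps its rules every fifty steps, shallow automata come out ahead:

```python
import random

# States 0..n-1 select action 0; states n..2n-1 select action 1.
# Reward pushes the state deeper into the current action's half;
# penalty pushes it toward the boundary and eventually flips actions.

def run(depth, penalty_prob, steps=10000, switch_every=None):
    state, rewards = 0, 0
    p = list(penalty_prob)            # penalty probability per action
    for t in range(steps):
        if switch_every and t > 0 and t % switch_every == 0:
            p.reverse()               # the environment changes its rules
        action = 0 if state < depth else 1
        if random.random() < p[action]:          # penalized
            state += 1 if action == 0 else -1    # drift toward boundary
        else:                                    # rewarded
            rewards += 1
            state += -1 if action == 0 else 1    # drift deeper
            state = max(0, min(2 * depth - 1, state))
    return rewards / steps

random.seed(1)
for depth in (1, 2, 5, 20):
    static = run(depth, [0.2, 0.8])
    shifting = run(depth, [0.2, 0.8], switch_every=50)
    print(f"depth={depth:2d}  static={static:.3f}  shifting={shifting:.3f}")
```

A deep automaton locks onto the better action in the static world but takes dozens of penalized steps to cross its memory states after each rule change; a shallow one adapts almost immediately.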

In his study of collective "distribution games," Tsetlin commented unabashedly on the economic strategies of individuals of that era. He first set up a game in which a group of automata compete for resources (rewards) by choosing different strategies. He designed automata that know nothing about the relative advantages of the strategies but eventually settle on the best one by reacting to the payoffs from the environment. Tsetlin then showed that the automata's average payoff might increase if they played a Soviet-style version of the game: a game with a "common fund," in which the gains and losses of all the automata are pooled and then redistributed evenly among them. The catch is that the common fund masks the link between individual contribution and return and therefore demands a greater memory capacity of the individual automata. He concluded that automata can profit from the introduction of a common fund only once their memory reaches a certain level of sophistication; below that threshold, the common fund lowers the average payoff.

Tsetlin thus modeled a piece of official ideology mathematically, calculating the precise memory (the "level of consciousness") needed to find the best strategy in a game with a common fund. His colleagues translated his findings into general principles of human thinking and behavior. Viktor Varshavskii and Dmitrii Pospelov interpreted memory as a general measure of intelligence, associating a person's "intellectual level" with the ability to find the best strategy in a game whose gains and losses are not tied to one's immediate actions but are generated at a higher organizational level. Liberal-minded intellectuals knew all too well what it meant to play a game whose rules are unknown or constantly changing.

Two metaphors capture the key cultural differences in thought and behavior reflected in American and Soviet AI systems. The maze of life, in which we must find the right path, became the central metaphor of American AI. It recalls B. F. Skinner's behaviorist experiments with rats running T-shaped mazes, and the American colloquialism "rat race" for relentless competition. In 1950, Claude Shannon built a mechanical mouse that searched a maze for a piece of metal "cheese." Herbert Simon's study of administrative behavior pointed to maze-running rats as a model: "The simplified model of human decision-making is patterned on the behavior of a rat seeking food in a psychology laboratory maze." Simon insisted that a rat of limited knowledge and intelligence reflects the limits of human reason better than the assumption of a godlike, omniscient, perfect rationality: "We do not need a godlike, but a rat-like, chooser."

For Soviet AI experts, the central metaphor of decision-making was not a search through a fixed maze but the flight of a moth, charting its trajectory through turbulent air. The Soviet AI researchers Viktor Varshavskii and Dmitrii Pospelov described a system simulating the behavior of bats preying on moths. When a bat comes too close for the moth to fly away, the moth begins "wild flying":

"Wild flying" is a series of wing-foldings and sudden drops, sharp turns, loops, and steep dives. In other words, the moth's chaotic trajectory makes it hard for the bat to predict where the moth will be at the next moment. We should mention that in the experiments this maneuver saved the moth's life in as many as 70 percent of cases.

The image of a moth flying chaotically to escape a predator must have felt intimately familiar to Soviet researchers: it was a vivid portrait of their own attempts to preserve the autonomy of their minds.

Both American and Soviet AI experts sought universal, timeless mechanisms of thought and behavior. Their generalizations, however, were shaped and conditioned by the cultural contexts in which they lived. The exemplars that American and Soviet scientists held up were in fact culturally specific patterns of social organization and decision-making. Reaching for universality, AI models delivered exactly the opposite: culturally specific models of the mind.

Without anyone intending it, scientific research often bears national traits. Systems of cultural symbols are embodied in scientific thought just as they are in literature or art. In simulating the human mind, AI systems faithfully reflect its rational and irrational patterns, individual creativity and social stereotypes, the human and the humane.
