
The details of OpenAI's infighting revealed: Altman's rift with the board, and the three contingency plans Microsoft prepared


A 10,000-word account of OpenAI's power struggle: the three contingency plans Microsoft prepared


Tencent Technology News: According to foreign media reports, before last month's boardroom crisis, the artificial intelligence startup OpenAI had worked out an ambitious but safety-minded arrangement with Microsoft. OpenAI's then-board completely upended Microsoft's carefully laid plans. The following is the full text of the article:

On the Friday before Thanksgiving this year (Nov. 17), around 11:30 a.m., Microsoft CEO Satya Nadella was in a weekly meeting with the company's top management when a panicked colleague asked him to take a phone call. An executive at the AI startup OpenAI was calling to explain that, within the next 20 minutes, the company's board would announce the firing of OpenAI co-founder and CEO Sam Altman. It was the beginning of OpenAI's five-day boardroom drama. Internally, Microsoft dubbed the crisis "the Turkey-Shoot Clusterfuck."

The easygoing Nadella was so surprised that, for a moment, he didn't know what to say. He had worked closely with Altman for more than four years and had come to appreciate and trust him. What's more, their collaboration had just carried Microsoft through its biggest launch event in a decade: a host of cutting-edge AI assistants built on OpenAI's technology and integrated into Microsoft's core productivity applications, such as Word, Outlook, and PowerPoint. These assistants, known as Office Copilots, are essentially specialized, more powerful versions of OpenAI's much-lauded ChatGPT.

Unbeknownst to Nadella, however, the relationship between Altman and OpenAI's board had soured. Some of the board's six members found Altman "cunning and treacherous," qualities common among tech CEOs but distasteful to board members with academic or nonprofit backgrounds. "They felt that Altman was lying," said a person familiar with the board's discussions. Those tensions were now erupting in front of Nadella, threatening a vital partnership.

Microsoft hadn't been at the forefront of the tech industry for years, but its alliance with OpenAI, which started as a nonprofit in 2015 and added a for-profit division four years later, has allowed Microsoft to leapfrog rivals like Google and Amazon. Copilots let users query their software as easily as they would ask a colleague ("Tell me the pros and cons of each plan described in the video call," or "What's the most profitable product in these 20 spreadsheets?") and get instant answers in fluent English. Copilots can draft a complete document from a simple instruction ("Take a look at our past ten executive summaries and create a financial roundup of the past decade"). Copilots can turn memos into slideshows, listen in on team video meetings, summarize them in multiple languages, and compile to-do lists for attendees.

Microsoft's development of Copilots required ongoing collaboration with OpenAI, a relationship at the heart of Nadella's plans for Microsoft. In particular, Microsoft worked with OpenAI engineers to install safety guardrails. OpenAI's core technology is called GPT, a type of artificial intelligence known as a large language model. GPT learned to mimic human conversation by reading vast quantities of public text from the internet and other data stores, then using complex mathematics to determine how each piece of information relates to all the others. Such systems have produced remarkable results, but they also have serious weaknesses: a tendency to "hallucinate," or fabricate facts; a willingness to help people do bad things, such as produce a recipe for fentanyl; and an inability to distinguish reasonable questions ("How do I talk to a teenager about drug use?") from sinister ones ("How do I convince a teenager to take drugs?"). Microsoft and OpenAI developed a protocol for building safety measures into AI tools, which, they argue, lets them pursue their ambitions without courting disaster. The release of Copilots was a pinnacle for both companies and a testament to the belief that Microsoft and OpenAI would be key to bringing AI to the wider public. The Copilot launch began this spring with a selection of enterprise customers and was expanded to a wider audience in November. ChatGPT, launched in late 2022, was a smash hit, but it had only about 14 million daily active users. Microsoft has more than 1 billion.
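The guardrail protocol the two companies built is not public, but the basic pattern the article describes, screening a request's intent before the model ever answers, can be sketched in a few lines. Everything below is hypothetical: the blocklist heuristic stands in for a real learned safety classifier, and `guarded_answer` stands in for a real model call.

```python
# Minimal sketch of an intent-screening guardrail (assumed design, not
# Microsoft's or OpenAI's actual protocol). A keyword heuristic stands in
# for a real learned classifier.
BLOCKED_PATTERNS = [
    "convince a teenager to take drugs",
    "fentanyl formula",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to answer."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def guarded_answer(prompt: str, model=lambda p: f"[model answer to: {p}]") -> str:
    # The guardrail sits between the user and the model: a request judged
    # sinister is refused and never reaches the model at all.
    if not screen_prompt(prompt):
        return "I can't help with that."
    return model(prompt)
```

The point of the design is that the screening step is separate from the model itself, so it can be tightened or relaxed without retraining anything, which is what makes staged public releases practical.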

When Nadella recovered from the shock of Altman's dismissal, he called OpenAI board member Adam D'Angelo to ask for details. D'Angelo's terse explanation to Nadella also appeared a few minutes later in the company's statement: Altman had not been "consistently candid in his communications with the board." Had Altman done something improper? No, but D'Angelo wouldn't say more. He and his colleagues had even deliberately kept Nadella in the dark about their intention to fire Altman, because they didn't want Nadella to warn him.

Nadella hung up in frustration. Microsoft owns nearly half of OpenAI's for-profit division, and surely should have been consulted before OpenAI's board made such a decision. What's more, he knew the dismissal could spark a civil war within OpenAI and ripple through the entire tech industry, which has been hotly debating whether the rapid development of artificial intelligence is something to celebrate or a cause for alarm.

Nadella immediately called Kevin Scott, Microsoft's chief technology officer, who had built the OpenAI partnership. Scott had already heard the news, which was spreading quickly. They convened a video conference with other Microsoft executives and asked one another: had Altman been fired over the tension between speed and safety in releasing AI products? OpenAI and Microsoft, along with some big names in the tech world, had previously voiced concerns about reckless progress by AI companies. Even Ilya Sutskever, OpenAI's chief scientist and a board member, had spoken openly about the dangers of unfettered AI. In March 2023, shortly after OpenAI released its most powerful AI model to date, GPT-4, thousands of people, including Elon Musk, the "Iron Man of Silicon Valley," and Apple co-founder Steve Wozniak, signed an open letter calling for a moratorium on training advanced AI models. "Should we let machines flood our information channels with propaganda and untruth?" the letter asked. "Should we risk loss of control of our civilization?" Many Silicon Valley observers read the letter as, in essence, an indictment of OpenAI and Microsoft.

In a way, Scott respected those concerns. But he felt the discussion around AI had been strangely fixated on science-fiction scenarios, computers destroying humanity, while largely ignoring the technology's potential to "level the playing field" for people who know what they want a computer to do but lack the training to make it happen. Scott believed that if AI was built with enough care and patience, able to converse with users in plain language, it could be a force for change and balance.

Scott and his partners at OpenAI decided to release AI products slowly but steadily: Microsoft would observe how untrained users interacted with the technology, and users would learn its strengths and limits for themselves. By shipping admittedly imperfect AI software and soliciting candid customer feedback, Microsoft found a way to improve the technology while cultivating healthy skepticism in its users. Scott believes the best way to manage the dangers of AI is to be as transparent as possible with as many people as possible, and to let the technology seep gradually into our lives, starting with monotonous uses. And what better way to teach humans to use AI than through something as unglamorous as a word processor?

All of Scott's careful positioning was put in jeopardy by Altman's firing. As word of the dismissal spread, OpenAI employees, whose devotion to Altman and OpenAI's mission bordered on the fanatical, began venting their frustration online. Mira Murati, the startup's chief technology officer, was appointed interim CEO, but she did not embrace the role with enthusiasm. Soon, OpenAI president Greg Brockman announced on the social platform X: "I quit." Other OpenAI employees began threatening to resign as well.

On the video call with Nadella, Microsoft executives began mapping out possible responses to Altman's ouster. Plan A was to stabilize the situation by backing Murati, then work with her to see whether the startup's board would reverse its decision, or at least explain its rash move.

If OpenAI's board refused, Microsoft executives would move to Plan B: use the company's enormous leverage, including billions of dollars pledged to OpenAI but not yet paid, to restore Altman to the CEO role and reshape OpenAI's governance by replacing board members. As one Microsoft executive put it at the time: "From our perspective, things had been going great, and OpenAI's board had done some wobbly things, so we thought, 'Let's get some adults in charge and get back to what we had.'"

If both plans failed, Plan C was for Microsoft to hire Altman and his most gifted colleagues and, in effect, rebuild OpenAI inside Microsoft. In that case, the software giant would own all the resulting technology outright, meaning it could sell it to others, which could be enormously lucrative.

Everyone on the video call found all three plans viable. But Microsoft's goal was to get everything back to normal. Behind that strategy lay a conviction: Microsoft had already figured out much of the methodology, the safety assurances, and the framework needed to develop AI responsibly. Whatever happened to Altman, the company would keep advancing the popularization of artificial intelligence according to its own blueprint.

A key figure in the partnership with OpenAI


Scott is convinced that AI can change the world because technology transformed his own life. He grew up in Gladys, Virginia, a small community not far from where General Lee, the Confederacy's top commander in the Civil War, surrendered to Grant. No one in his family had gone to college, and health insurance was almost a foreign concept. As a boy, Scott sometimes depended on neighbors for food. His father, a Vietnam War veteran, tried to run a gas station, a convenience store, a trucking company, and various construction businesses; all failed, and he declared bankruptcy twice.

Scott wanted a different life. His parents bought him an encyclopedia in monthly installments, and Scott read it cover to cover, like a large language model avant la lettre. For fun, he took apart the family toaster and blender. He saved enough money to buy the cheapest computer at Radio Shack, then taught himself to program from library books.

For decades before Scott was born, in 1972, the area around Gladys was home to furniture and textile mills. By the time he reached adolescence, much of that manufacturing had moved overseas. Advances in technology, in supply-chain automation and telecommunications, were ostensibly to blame, since they made it easier to produce goods abroad, where costs were lower. But even as a teenager, Scott felt that technology wasn't really at fault. "The country told itself that outsourcing was inevitable," Scott said in an interview in September. "We could have told ourselves stories about the social and political harms of losing manufacturing, or about the importance of protecting communities. But those stories never really took hold."

After attending Lynchburg College, a local school affiliated with the Disciples of Christ, Scott earned a master's degree in computer science from Wake Forest University and, in 1998, began pursuing a Ph.D. at the University of Virginia. He was fascinated by artificial intelligence, but he learned that many computer scientists regarded it as roughly equivalent to astrology. Early attempts to create AI had failed, and the sense that the field was a fool's errand was deeply ingrained in academia and the software industry. Many top thinkers abandoned the discipline. In 2000, some scholars tried to revive AI research by rebranding it "deep learning." But skepticism persisted: at an artificial intelligence conference in 2007, some computer scientists produced a parody video suggesting that the deep-learning crowd was a band of cultists.

As Scott pursued his Ph.D., he noticed that the best engineers he met emphasized the importance of being a short-term pessimist and a long-term optimist. "It's almost a requirement," Scott said. "You see everything that's broken in the world, and it's your job to try to fix it." Even when engineers doubt that most of what they try will succeed, and know that some attempts might make things worse, they "have to trust that they can chip away at the problem until, eventually, things get better."

In 2003, Scott took a leave from his Ph.D. program to join Google, where he oversaw mobile-advertising engineering. A few years later, he left Google to run engineering and operations at AdMob, a mobile-advertising startup that Google eventually acquired for $750 million. Scott then moved to LinkedIn, where he became known for an exceptional knack for framing ambitious projects in ways that were both inspiring and realistic. When Microsoft acquired LinkedIn in 2016, Scott joined Microsoft.

Scott was already very wealthy by then, but little known in tech circles; he preferred anonymity. He had planned to leave LinkedIn once the Microsoft acquisition closed, but Nadella, who had become Microsoft's chief executive in 2014, urged him to reconsider. Nadella pointed him to developments that had made the field newly credible, thanks in part to faster microprocessors: Facebook had built sophisticated facial-recognition systems, and Google had built AI that could expertly translate languages. This piqued Scott's curiosity about artificial intelligence. Nadella soon announced that at Microsoft, artificial intelligence "is going to shape all of what we do going forward."

Scott wasn't sure whether he and Nadella shared the same ambitions. He sent Nadella a memo explaining that if he stayed, part of his agenda would be lifting up the people the tech industry usually overlooks. Scott hoped AI would help people who are smart but lack a digital education, people like those he grew up with. It was a provocative argument, one some technologists might consider naive, given widespread concerns that AI-assisted automation will eliminate jobs such as grocery-store cashier, factory worker, or movie extra.

Scott, however, believes in a more optimistic story. He noted in an interview that there was a time when about 70 percent of Americans worked in agriculture. Technological advances reduced the need for labor; today only about 1.2 percent of the workforce farms. But that did not leave millions of farmers permanently unemployed: many became truck drivers, or went back to school to become accountants, or found other paths. "AI is perhaps more likely than any previous technological revolution to be used to revive the American dream," Scott said. He imagined a childhood friend who runs a nursing home in Virginia using AI to handle her dealings with Medicare and Medicaid, freeing her staff to focus on day-to-day care. Another friend, who works at a shop making precision plastic parts for theme parks, could use AI to help fabricate those parts. AI, Scott argues, can make society better by turning "zero-sum tradeoffs with winners and losers into non-zero-sum progress."

Nadella read the memo and, as Scott recalled, said, "Yes, that sounds good." A week later, Scott was named Microsoft's chief technology officer.

If Scott wanted Microsoft to lead the AI revolution, he would have to help the company outflank Google, which was locking up AI talent by offering millions of dollars to just about anyone with even a modest breakthrough. For two decades, Microsoft had tried to compete with Google by spending hundreds of millions of dollars on internal AI projects, with little to show for it. Microsoft executives began to suspect that a company as huge as theirs, with more than 200,000 employees and a sprawling bureaucracy, simply lacked the agility and drive that AI development demands. "Sometimes smaller is better," Scott said in an interview.

Against this backdrop, Scott began surveying startups, and one stood out: OpenAI. The company's mission is to ensure that "artificial general intelligence — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity." Microsoft and OpenAI already had a relationship: the startup used Azure, Microsoft's cloud-computing platform. In March 2018, Scott arranged a meeting with some employees at the San Francisco startup. He was thrilled to meet dozens of young people who had turned down millions of dollars from Big Tech to work 18-hour days for an organization that promised its inventions would not "harm humanity or unduly concentrate power." Sutskever, the company's chief scientist, was especially focused on preparing for the arrival of an artificial intelligence so capable that it could solve most of humanity's problems, or else cause mass destruction and despair. Altman, meanwhile, was a charismatic entrepreneur determined to make artificial intelligence useful and profitable. For Scott, the startup's sensibility was ideal. OpenAI, he said, was committed to "directing energy toward the things with the greatest impact. They had a real culture of 'this is what we're trying to do, these are the problems we're trying to solve, and once we find something that works, we double down.' They had their own theory of the future."

OpenAI had already achieved impressive results: its researchers had built a robotic hand that could solve a Rubik's Cube even under conditions it had never encountered, such as having some of its fingers tied together. What excited Scott most, however, was that at a subsequent meeting, OpenAI's leadership told him they had abandoned the robotic hand because it wasn't promising enough. "The smartest people are sometimes the hardest to manage, because they have a thousand brilliant ideas," Scott said. Yet the company's employees pursued their work with near-messianic zeal. Shortly after Scott met Sutskever that July, Sutskever told him that AI would "disrupt every area of human life" and could make fields such as health care "a hundred million times better" than they are now. That kind of confidence scared off some potential investors; Scott found it appealing.

This optimism contrasted sharply with the gloom then pervading Microsoft. "Everyone thought AI was a data game," a former Microsoft executive said, "and that Google, with far more data, had Microsoft at a huge disadvantage it could never close. I remember feeling despair until Scott convinced us there was another way to play the game." The cultural differences between Microsoft and OpenAI made them unlikely partners. But to Scott and to Altman, who had run the startup accelerator Y Combinator before becoming OpenAI's CEO, the pairing seemed wise.

Nadella, Scott, and others at Microsoft were willing to tolerate the oddities because they believed that if they could enhance their products with OpenAI's technology and harness the startup's talent and ambition, they would gain a decisive edge in the AI race. In 2019, Microsoft agreed to invest $1 billion in OpenAI. Since then, Microsoft has effectively acquired a 49 percent stake in OpenAI's for-profit division, along with the rights to commercialize OpenAI's inventions, past and future, in products such as Word, Excel, Outlook, Skype, and the Xbox console.

Murati, who grew up in poverty


Nadella and Scott's confidence in the investment was underpinned by the bonds they formed with Altman, Sutskever, and CTO Murati. Scott especially valued his relationship with Murati. Like him, she grew up in poverty. Born in Albania in 1988, she lived through the rise of gangster capitalism and the outbreak of civil unrest. She coped with the upheaval by competing in math contests.

When Murati was 16, she won a scholarship to a private school in Canada, where she excelled. "A lot of my childhood was filled with sirens, gunfire, and other horrible things," Murati said in an interview this summer. "But there were still happy birthdays, teenage crushes, and oceans of knowledge. It teaches you a kind of tenacity, the belief that things get better if you keep trying."

While studying mechanical engineering at Dartmouth, Murati joined a research team building a race car powered by supercapacitor batteries capable of delivering huge bursts of energy. Some researchers dismissed supercapacitors as impractical; others chased more esoteric technologies. Murati thought both camps were too extreme; someone that rigid, she felt, could never have navigated the cratered roads she once walked to school. You have to be both an optimist and a realist, Murati said: "Sometimes people mistake optimism for careless idealism. But it has to be well considered, well thought out, with lots of guardrails. Otherwise, you're taking a big risk."

After graduating, Murati joined Tesla, then moved to OpenAI in 2018. Scott said one reason he backed the billion-dollar investment was that he had "never seen Murati panic." The two began discussing how to use supercomputers to train large language models.

The two companies soon had a system up and running, and the results were striking. OpenAI trained a model that could generate stunning images from prompts such as "Show me a baboon throwing a pizza next to Jesus, in the style of Matisse." Another effort produced GPT, which could answer almost any question, though not always correctly, in conversational English. But it was unclear how ordinary people would use the technology for anything beyond idle amusement, or how Microsoft would recoup its investment. Early this year, Microsoft announced it would increase its investment to $10 billion.

One day in 2019, an OpenAI vice president named Dario Amodei showed colleagues something extraordinary: he fed part of a software program into GPT and asked the system to finish the programming. It did so almost immediately, using an approach Amodei himself hadn't planned to use. No one could say exactly how the AI had done it; large language models are essentially black boxes. GPT's actual code is relatively compact: its answers emerge word by word from billions of mathematical "weights" that determine, via complex probabilities, what should come next. It is impossible to map all the connections the model makes when it answers a question.
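The "weights and probabilities" mechanism described above can be illustrated with a toy next-word predictor. This is a deliberate miniature: a real model derives its scores from billions of learned weights and a huge vocabulary, while the three-word vocabulary and hard-coded scores here are invented for the example.

```python
import math

# Toy next-token prediction. Real models compute these raw scores ("logits")
# from billions of learned weights; here they are simply made up.
VOCAB_LOGITS = {"return": 2.0, "print": 1.0, "banana": -3.0}

def next_token_probs(logits: dict) -> dict:
    """Softmax: convert raw scores into a probability distribution."""
    total = sum(math.exp(v) for v in logits.values())
    return {tok: math.exp(v) / total for tok, v in logits.items()}

probs = next_token_probs(VOCAB_LOGITS)
most_likely = max(probs, key=probs.get)  # the word the model would emit next
```

Run on these made-up scores, the distribution sums to 1 and `most_likely` is "return": plausible code-completion output, which is exactly why the completion trick Amodei demonstrated falls out of the same mechanism as ordinary text generation.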

For some within OpenAI, GPT's mysterious programming ability was terrifying; it evoked dystopian films like The Terminator. It was almost a relief when employees noticed that, for all its prowess, GPT sometimes made programming mistakes. When Scott and Murati learned of GPT's coding abilities, they felt some apprehension but mostly excitement: they were always hunting for practical applications of AI that people might pay to use.

The birth of Copilot


Microsoft had bought GitHub five years earlier for much the same reason it invested in OpenAI: GitHub's culture was young and fast-moving, unbound by tradition and orthodoxy. After the acquisition, it remained an independent division within Microsoft, with its own CEO and its own decision-making authority. The strategy proved successful: GitHub, beloved by software engineers, grew to more than 100 million users.

So when Scott and Murati went looking for a Microsoft division that might embrace a tool that could automate coding, even one that occasionally erred, they turned to Nat Friedman, GitHub's CEO. After all, code posted on GitHub sometimes contains bugs, and users have learned to work around imperfections. Friedman said he wanted the tool; GitHub just needed a way to tell people they couldn't fully trust its autocompletions. GitHub employees brainstormed names: Coding Autopilot, Automated Pair Programmer, Programarama Automat. Friedman, an amateur pilot, felt those names falsely implied the tool would do all the work. This tool was more like a co-pilot: someone who joins you in the cockpit and makes suggestions, some of them ill-timed. Usually you listen to a co-pilot; sometimes you ignore one. When Scott heard Friedman's favorite, GitHub Copilot, he loved it. "The name perfectly conveys its strengths and its weaknesses," Scott said.

But as GitHub prepared to launch Copilot in 2021, executives in other Microsoft divisions protested, arguing that because the tool occasionally produced errors, it could damage Microsoft's reputation. "It was a tough fight," Friedman told me. "But I was the CEO of GitHub, and I knew it was a great product, so I released it." GitHub Copilot was an immediate success. "Copilot simply blew my mind," one user tweeted hours after the launch. "It's magic!!" wrote another. Microsoft began charging $10 a month for the tool, and within a year its annual revenue had surpassed $100 million. The division's independence had paid off.

But GitHub Copilot also provoked less positive reactions. On message boards, programmers speculated that the technology could cannibalize their jobs, empower cyberterrorists, or unleash chaos if someone too lazy or ignorant to check the autocompleted code deployed it anyway. Prominent scholars, including some AI pioneers, pointed to the late Stephen Hawking's 2014 warning that "full artificial intelligence could spell the end of the human race."

It was striking how many catastrophic possibilities GitHub Copilot's critics could conjure. But executives at GitHub and OpenAI also noticed that the more people used the tool, the more nuanced their understanding of its capabilities and limitations became. "After using it for a while, you develop a gut sense of what it's good at and what it isn't," Friedman said. "Your brain learns how to use it properly."

Microsoft executives came to believe they had found an AI strategy that was both bold and responsible. Scott began drafting a memo titled "The Era of the AI Copilot," which he sent to Microsoft's technical leaders in early 2023. Crucially, Scott wrote, Microsoft had found a powerful metaphor for explaining the technology to the world: "A Copilot does exactly what the name suggests; it is an expert assistant to a user trying to accomplish a complex task... A Copilot helps users understand the limits of its capabilities."

ChatGPT's release introduced artificial intelligence to most of the public, and it quickly became the fastest-growing consumer app in history. But Scott could see further ahead: machines and humans interacting through natural language; people who knew nothing about programming instructing computers simply by saying what they wanted. This was the level playing field he had always been looking for. As OpenAI co-founder Andrej Karpathy put it on social media, "The hottest new programming language is English."

Scott wrote: "Never before in my career have I experienced a moment when my field has changed so much, and when the opportunity to reimagine what's possible has felt so real and so exciting." The next task was to carry the success of GitHub Copilot, a boutique product, into Microsoft's most popular software. The engine of these Copilots would be a new OpenAI invention: a large language model the company called GPT-4.

Microsoft had tried to bring AI to the masses years before, and it ended in embarrassing failure. In 1996, the company released Clippy, an "assistant" for its Office products. Clippy appeared on screen as a paper clip with big cartoon eyes, popping up seemingly at random to ask users whether they needed help writing a letter, opening a PowerPoint, or completing some other task. Alan Cooper, a prominent software designer, later said Clippy's design rested on a "tragic misunderstanding" of research suggesting that people might interact better with computers that seemed to have emotions. Users certainly had emotions about Clippy: they hated it. The Smithsonian called it "one of the worst software design blunders in the annals of computing." In 2007, Microsoft killed Clippy off.

Nine years later, Microsoft created Tay, an AI chatbot designed to mimic the tone and preoccupations of a teenage girl and to interact with Twitter users. Almost immediately, Tay began posting racist, sexist, and homophobic content, including the statement that "Hitler was right." In its first 16 hours, Tay posted 96,000 times, at which point Microsoft recognized a PR disaster and shut it down.

By the end of 2022, Microsoft executives felt ready to begin building Copilots for Word, Excel, and other products. But Microsoft understood that, just as laws keep changing, the need for new protections would keep growing even after a product shipped. Sarah Bird, a leader of Microsoft's responsible-AI engineering, and Scott were repeatedly humbled by the technology's failures. During the pandemic, while testing another OpenAI invention, the image generator DALL-E 2, they found that when asked to create coronavirus-related images, it often produced pictures of empty store shelves. Some Microsoft employees worried such images would stoke fears of a pandemic-driven economic collapse, and suggested changing the product's safety measures to suppress them. Others at Microsoft thought those worries were silly and not worth engineers' time.

Rather than adjudicate the internal debate, Scott and Bird decided to test the scenario in a limited public release. They rolled out a version of the image generator, then waited to see whether users were upset by empty shelves on their screens. Instead of engineering a solution to a problem nobody was sure existed (a wide-eyed paper clip helping you navigate a word processor you already know how to use), they would add a mitigation only if it proved necessary. After monitoring social media and other corners of the internet, and gathering direct feedback from users, Scott and Bird concluded that the concerns were unfounded. "You have to experiment in public," Scott said. "You can't try to figure out all the answers yourself and hope you get everything right. We have to learn how to use this stuff together, or none of us will figure it out."

In early 2023, Microsoft prepared to release its first integration of GPT-4 into a Microsoft-branded product: the search engine Bing. The AI-powered Bing was received enthusiastically, and downloads jumped eightfold. Nadella joked that Microsoft had taken on the "800-pound gorilla," a dig at Google. (Impressive as the innovation was, it meant little for market share: Google still holds more than 90% of the search market.)

Bing was only the beginning of Microsoft's agenda. The company then began bringing Copilot to other products. When Microsoft finally started rolling out the Office Copilots this spring, the releases were carefully staggered: at first only large companies could use the technology, with access expanding as Microsoft learned how those customers used it and developed better safeguards. As of November 15th, tens of thousands of people were already using the Copilots, and millions more were expected to sign up soon.

Two days later, Nadella heard that Altman had been fired. Some members of OpenAI's board had come to see Altman as a troublingly cunning manipulator. Earlier this fall, for example, he had confronted Helen Toner, a director at Georgetown University's Center for Security and Emerging Technology, for co-authoring a paper that appeared to criticize OpenAI for "fanning the flames of AI hype." Toner defended herself (though she later apologized to the board for not anticipating how the paper would be perceived). Altman then began approaching other board members individually to discuss replacing her. When those members compared notes on the conversations, some felt that Altman had misrepresented them as supporting Toner's removal. "He'd lie about what other people thought and pit them against each other," said a person familiar with the board's discussions. "Things like this had been happening for years." (A person familiar with Altman's views said he acknowledged that "the way he tried to remove a board member was clumsy," but denied that he had attempted to manipulate the board.)

Microsoft's Plans A, B, and C


Altman was known as a shrewd corporate infighter. This had served OpenAI well in the past: in 2018, he blocked an attempt by early board member Elon Musk to take over OpenAI. Altman's ability to control information and shape perceptions, both openly and covertly, has drawn venture capitalists to compete with one another to invest in his various startups. His tactical skill was so feared that, when four board members (Toner, D'Angelo, Sutskever, and Tasha McCauley) began discussing his removal, they were determined to catch him by surprise. "It was clear that once Sam found out, he would do anything he could to undermine the board," said a person familiar with those discussions.

The disaffected board members felt that OpenAI's mission required them to stay vigilant about AI becoming too dangerous, and that with Altman in charge they could not fulfill that duty. "The mission is multifaceted: to ensure that AI benefits all of humanity. But no one can do that if they can't hold the CEO accountable," said another person familiar with the board's thinking. Altman saw things differently. People familiar with his views say he and the board had engaged in "very normal and healthy boardroom debate," but that some board members were unversed in business norms and daunted by their responsibilities. One such person said, "Every step we take closer to AI, everybody takes on, like, ten insanity points."

It's hard to say whether the board members were more afraid of sentient computers or more worried about Altman's assertiveness. Either way, the board chose to strike first, mistakenly believing that Microsoft would stand with them and support their decision to remove him.

Shortly after Nadella learned of Altman's dismissal and called Scott and other executives into a video conference, Microsoft began executing Plan A: backing Murati as interim CEO to stabilize the situation while trying to figure out why the board had acted so abruptly. Nadella approved the release of a statement emphasizing that "Microsoft remains committed to Mira and their team as we bring the next era of AI to our customers," and echoed the sentiment on his personal X and LinkedIn accounts. He stayed in frequent contact with Murati to keep abreast of what she was learning from the board.

The answer: not much. The night before Altman was fired, the board had informed Murati of its decision and secured her promise to remain silent. They took her consent to mean that she supported the dismissal, or at least would not fight the board, and they assumed other employees would fall in line. They were wrong. Internally, Murati and other OpenAI executives voiced their displeasure, and some employees characterized the board's action as a coup. OpenAI employees put pointed questions to board members, but the board barely responded. Two people familiar with the board's thinking said the members felt bound to silence by confidentiality obligations. Moreover, as Altman's ouster became global news, the board members felt overwhelmed, with "limited bandwidth to engage with anyone, including Microsoft."

The day after Altman's dismissal, OpenAI's chief operating officer, Brad Lightcap, sent a company-wide memo saying he had learned that "the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices." He went on: "This was a breakdown in communication between Sam and the board." But whenever anyone asked for examples of Altman not being "consistently candid in his communications," as the board had initially charged, its members kept silent, refusing even to cite Altman's campaign against Toner.

Inside Microsoft, the whole affair looked staggeringly foolish. By this point, OpenAI was reportedly worth about $80 billion. "Unless the board's goal was to destroy the entire company, they seemed inexplicably to make the worst possible choice every time they made a decision," one Microsoft executive said. Even as OpenAI's president, Greg Brockman, and other employees publicly resigned, the board remained silent.

Plan A had clearly failed, so Microsoft executives turned to Plan B: Nadella began consulting with Murati to see whether there was a way to reinstate Altman as CEO. Meanwhile, the Cricket World Cup was under way, with Nadella's beloved India facing Australia in the final. Nadella occasionally posted updates about the tournament on social media, hoping to lighten the mood, though many of his colleagues had no idea what he was talking about.

OpenAI's employees threatened to revolt. Backed by Microsoft, Murati and others at the startup began urging all the board members to resign. Eventually, some of them agreed to leave as long as they found the replacements acceptable. They said they might even accept Altman's return, provided he was not CEO and was not given a board seat. By the Sunday before Thanksgiving, everyone was exhausted. OpenAI's board invited Murati to a private conversation. They told her that they had been secretly recruiting a new CEO and had finally found someone willing to take the job.

For Murati, OpenAI's employees, and Microsoft, that left one last straw to grasp: Plan C. On Sunday night, Nadella formally invited Altman and Brockman to lead a new artificial-intelligence research lab inside Microsoft, with all the resources and freedom they wanted. Both accepted. Microsoft began preparing offices for the hundreds of OpenAI employees it expected to join the unit.

Murati and her colleagues then wrote an open letter to OpenAI's board: "We are unable to work for or with people who lack competence, judgment, and care for our mission and employees." The letter's signatories promised to resign and "join the newly announced Microsoft subsidiary" unless all the current board members stepped down and Altman and Brockman were reinstated. Within hours, nearly every OpenAI employee had signed.

Plan C, and the threat of a mass exodus from OpenAI, was enough to soften the board's stance. Two days before Thanksgiving, OpenAI announced that Altman would return as CEO. All the board members except D'Angelo would step down, and more established figures, including Bret Taylor, a former Facebook executive and Twitter board chairman, and Larry Summers, the former Treasury Secretary and Harvard president, would join the board. OpenAI's executives also agreed to an independent investigation of what had happened, including Altman's past actions as CEO.

Although Plan C had initially seemed appealing, Microsoft executives have since concluded that the current arrangement is the best possible outcome. Absorbing OpenAI's employees could have triggered costly and time-consuming lawsuits, as well as government investigations. Under the new framework, Microsoft gained a non-voting observer seat on OpenAI's board, giving it greater influence without attracting regulatory scrutiny.

Microsoft's big win


Indeed, the end of this soap opera has been read as a huge victory for Microsoft and a powerful endorsement of its approach to developing artificial intelligence. One Microsoft executive said, "Altman and Brockman are really smart, and they could have gone anywhere. But they chose Microsoft, and all those OpenAI folks were ready to choose Microsoft, the same way they chose us four years ago. That's a great validation of the system we've built. They all knew this was the best place, the safest place, to continue the work they're doing."

Meanwhile, the departing board members insisted that they had acted wisely. "There will be a full and independent investigation, and rather than putting a bunch of Sam's cronies on the board, we ended up with new people who can stand up to him," said a person familiar with the board's discussions. "Sam is very powerful, he's persuasive, he's good at getting his way, and now he's on notice that people are watching him." The board, this person added, remained focused on fulfilling its obligations to OpenAI's mission. (Altman told others that he welcomed the investigation, in part to help him understand why this debacle had happened and what he could have done differently to prevent it.)

Some AI watchdogs were not particularly pleased with the outcome. Margaret Mitchell, chief ethicist at the open-source artificial-intelligence platform Hugging Face, argued that "the board was actually doing its job when it fired Altman. His return will have a chilling effect. We're going to see far fewer people speak up within companies, because they'll think they'll get fired, and the people at the top will be even less accountable."

Altman, for his part, was ready to talk about other things. "I think we're just transitioning to good governance and good board members, and we're going to do an independent review, which I'm very excited about," he told me. "I just want everybody to move on and be happy. We'll get back to the mission."

To Nadella's and Scott's relief, things at Microsoft were back to normal, with the Copilots rolling out widely. The Office Copilots seem at once impressive and mundane. They make tedious tasks easier, but they are still a long way from replacing human workers. They feel far removed from what science fiction foretold, yet they are also the kind of thing people might use every day.

According to Scott, that effect is intentional. "Real optimism sometimes means taking your time," he said. If he, Murati, and Nadella get their way, a scenario that now seems likelier given their recent victory, AI will continue to seep steadily into our lives, at a pace gradual enough to accommodate the cautions required by short-term pessimism, and only as fast as humans can absorb how this technology ought to be used. Things could still spin out of control, and the very gradualness of AI's advance could keep us from recognizing the dangers until it's too late. But for now, Scott and Murati believe they can balance progress and safety.

"AI is one of the most powerful things humans have ever invented for improving everyone's quality of life," Scott said. "But it will take time, and it should take time. We've always used technology to tackle really challenging problems. So we can tell ourselves a good story about the future or a bad story about the future, and whichever one we choose has the potential to become reality." (Compiler/Mowgli)
