
OpenAI CEO: GPT-4 isn't perfect, but it is still impressive and already powers Microsoft's new Bing

Tencent Technology News, March 15: Artificial intelligence research company OpenAI on Tuesday announced GPT-4, the latest version of its large language model. Its arrival is expected to set off fierce competition among technology companies in the field of generative artificial intelligence.

OpenAI says GPT-4 is more accurate, creative and collaborative than its predecessor. Microsoft, which has invested more than $10 billion in OpenAI, says the new version of the AI model is already powering its Bing search engine. GPT-4 will be available to OpenAI's paid ChatGPT Plus subscribers, and developers can sign up to use it through the company's API to build applications.
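For developers who do get access, a basic request to the model might look like the minimal Python sketch below. It assumes the openai Python package (version 1.0 or later), an OPENAI_API_KEY environment variable and the "gpt-4" model name; the exact client interface is OpenAI's to define and may differ from this illustration.

```python
# Minimal sketch, not OpenAI's documented example: a plain text request to GPT-4
# via the openai Python package (>= 1.0). Assumes OPENAI_API_KEY is set in the
# environment; the model name and interface details may differ in practice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the main risks listed in this earnings report: ..."},
    ],
)

print(response.choices[0].message.content)
```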

OpenAI said Tuesday that the tool is "40 percent more likely to produce a correct response than GPT-3.5" in its internal evaluations. The new model produces fewer wrong answers, strays off topic less often, is less willing to discuss forbidden subjects, and even outperforms many humans on standardized tests. In addition, the new version can handle both text and image inputs: users can submit a picture along with a related question and ask GPT-4 to describe the image or answer the question.
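The article does not show what such a multimodal request looks like. As a rough illustration only, and assuming a chat-style API that accepts mixed text and image content (image input was not yet available through OpenAI's public API when GPT-4 launched), a picture-plus-question query might be sketched like this:

```python
# Hypothetical sketch of a text-plus-image query. Assumes the openai Python
# package (>= 1.0) and a vision-capable "gpt-4" model; at launch, image input
# was limited to selected partners, so treat this purely as illustration.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name for illustration
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this picture?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```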

OpenAI says Morgan Stanley is using GPT-4 to organize data, while electronic payments company Stripe is testing whether it can help fight fraud. Other GPT-4 customers include language learning company Duolingo, the nonprofit educational organization Khan Academy and the government of Iceland. In addition, Be My Eyes, a company that builds tools for blind and low-vision people, will use GPT-4 to power a virtual volunteer service that lets users send images to an AI assistant, which answers questions and provides visual assistance.

OpenAI co-founder and president Greg Brockman said: "We do have a system that is actually very capable of giving you new ideas and helping you understand things that you can't understand." The new version, he said, is better at finding specific information in company earnings reports or answering detailed questions about U.S. federal tax law, essentially by combing through "dense commercial-law jargon."

However, like GPT-3, GPT-4 cannot reason about current events, because most of its training data dates from before September 2021. In a January interview, OpenAI CEO Sam Altman tried to temper expectations, saying at the time: "The GPT-4 rumor mill is a ridiculous thing. I don't know where it all comes from. People are begging to be disappointed, and they will be." The company's chief technology officer, Mira Murati, likewise said in an interview earlier this month that "less hype would be beneficial."

After GPT-4's release, Altman tweeted: "It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it."

GPT-4 is a so-called large language model, an artificial intelligence system that analyzes massive amounts of text from the internet to learn how to generate human-like writing. In recent months, the technology has sparked both excitement and controversy. When OpenAI first released GPT-2 in 2019, it initially withheld part of the model for fear of malicious use. Researchers have noted that large language models sometimes stray off topic or lapse into inappropriate or racist remarks. They have also raised concerns about the carbon emissions associated with all the computing power needed to train and run these AI models.

OpenAI says it spent six months making the software safer. For example, the final version of GPT-4 is better at handling questions such as how to make a bomb or where to buy cheap cigarettes; in the latter case, it now adds a warning about the health effects of smoking alongside possible ways to save money on tobacco products. "GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts," OpenAI said in a blog post Tuesday, the last referring to prompts or questions crafted to trigger harmful behavior or subvert the system. "As society adopts these models, we encourage and promote transparency, user education, and wider AI literacy. We also aim to expand the avenues of input people have in shaping our models."

The company declined to provide specific technical details about GPT-4, including the size of the model. Brockman, the company's president, said OpenAI expects that future models will be developed on billion-dollar supercomputers and that some of the most advanced tools will carry risks. OpenAI wants to keep certain parts of its work secret to give the startup "some breathing room to really focus on safety and get it right."

This is a controversial approach in the field of artificial intelligence. Some other companies and experts argue that more open AI models would improve safety. OpenAI also said that while it keeps some details of model training secret, it is providing more information about what it is doing to eliminate bias and make its products more accountable. "We are actually being very transparent about the safety training phase," said Sandhini Agarwal, a policy researcher at OpenAI.

The launch is part of a flurry of AI announcements from OpenAI, its backer Microsoft, and emerging industry rivals. Many companies have released new chatbots, AI-powered search tools, and new ways to embed the technology into business software for salespeople and office workers. GPT-4, like OpenAI's other recent models, was trained on Microsoft's Azure cloud platform.

Google-backed Anthropic, a startup founded by former OpenAI executives, also announced Tuesday that it was releasing its Claude chatbot to business customers. Meanwhile, Google's parent company Alphabet said Tuesday that it was giving customers access to some of its language models. Microsoft is scheduled to discuss on Thursday how it plans to bring artificial intelligence capabilities to its office software.

The rise of generative AI models has also sparked debates over copyright and ownership, both regarding AI output that resembles existing content and regarding whether these systems should be allowed to train on other people's art, writing, and programming code. A number of individuals and businesses have already filed lawsuits against OpenAI, Microsoft and their competitors. (Mowgli)
