
Latest! OpenAI leaders share future product roadmap: GPT-4 will be faster and cheaper
Tencent Technology News, June 1 — Raza Habib, CEO of the machine-learning startup Humanloop, recently hosted a discussion on the future of artificial intelligence with Sam Altman, CEO of the AI research company OpenAI, and more than 20 other developers. Altman spoke candidly about OpenAI's product roadmap for the next two years, and also addressed OpenAI's mission and the social impact of AI.

Here are some of the key points from the interview:

1. OpenAI is heavily constrained by GPU shortages

A recurring theme throughout the discussion was that OpenAI's GPU supply is currently very limited, which has forced it to postpone many short-term plans. Customers' biggest complaints concern the reliability and speed of the API. Altman acknowledged the concern, explaining that most of the problem stems from the GPU shortage.

The 32k context window cannot yet be rolled out to more users. OpenAI has not overcome the associated technical challenges, so while context windows of 100k to 1M tokens look achievable soon (this year), anything larger will require research breakthroughs.

The fine-tuning API is also limited by GPU availability. OpenAI does not yet use parameter-efficient tuning methods such as Adapters or LoRA, so running and managing fine-tuned models requires substantial compute. Better fine-tuning support is coming, and OpenAI may even host a marketplace for community-contributed models.
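To see why parameter-efficient methods like LoRA matter here, a rough back-of-the-envelope sketch helps: LoRA freezes a pretrained weight matrix and trains only a low-rank update. The dimensions below are illustrative assumptions, not anything OpenAI has disclosed:

```python
# LoRA sketch: instead of updating a full weight matrix W (d_out x d_in)
# during fine-tuning, train only a rank-r update B @ A, leaving W frozen.
# Dimensions are illustrative; r is typically much smaller than d_in/d_out.

d_in, d_out, r = 4096, 4096, 8

full_params = d_out * d_in          # weights updated by full fine-tuning
lora_params = r * d_in + d_out * r  # weights updated by LoRA (A and B only)

print(f"full fine-tuning updates {full_params:,} weights per matrix")
print(f"LoRA updates only {lora_params:,} ({lora_params / full_params:.2%})")
```

Since only the small factors A and B are trained and stored per customer, serving many fine-tuned variants of one base model becomes far cheaper, which is why its absence makes fine-tuning GPU-hungry.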

Dedicated capacity is also limited by GPU availability. OpenAI offers dedicated capacity, giving customers a private copy of the model, but to use the service customers must be willing to commit on the order of $100,000.

2. OpenAI's near-term roadmap

Altman shared his provisional near-term roadmap for the OpenAI API.

In 2023:

Cheaper and faster GPT-4: This is OpenAI's top priority. OpenAI's overall goal is to drive the "cost of intelligence" as low as possible, so it will keep working to reduce API costs over time.

Longer context windows: In the near future, context windows could be as high as 1 million tokens.

Fine-tuning API: The fine-tuning API will be extended to the latest models, but its exact form will be shaped by what developers actually want.

Support for a stateful API: When you call the chat API today, you have to resend the same conversation history on every call, paying for the same tokens again and again. In the future, there will be a version of the API that remembers conversation history.
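To make the statelessness problem concrete, here is a small sketch of how request payloads grow when the full history must be resent each turn. The payload shape mimics the chat-completions message format, but no real API calls are made and the model name is a placeholder:

```python
# Each chat turn must carry the ENTIRE prior conversation, so the tokens
# you pay for grow with every exchange even though most of them repeat.

def build_request(history, user_message):
    """Build one chat request: full history plus the new user message."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": "gpt-4", "messages": messages}

history = []
payload_sizes = []
turns = [("Hi", "Hello!"), ("How are you?", "Fine."), ("Bye", "Goodbye!")]
for question, answer in turns:
    request = build_request(history, question)
    payload_sizes.append(len(request["messages"]))
    # Simulate the assistant's reply being appended for the next turn.
    history = request["messages"] + [{"role": "assistant", "content": answer}]

print(payload_sizes)  # message counts grow linearly: 1, 3, 5, ...
```

A stateful API would replace the ever-growing `messages` list with something like a session identifier, so previously sent tokens would not need to be retransmitted or re-billed.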

In 2024:

Multimodality: This was demonstrated at GPT-4's launch, but it cannot be extended to everyone until more GPUs come online.

3. Plugins lack product-market fit and may not come to the API soon

Many developers are interested in accessing ChatGPT plugins through the API, but Altman said he does not expect plugins to be released there anytime soon. Aside from browsing, plugin usage suggests they do not yet have product-market fit (PMF). Altman noted that many people think they want their apps inside ChatGPT, but what they really want is ChatGPT inside their apps.

4. OpenAI will avoid competing with its customers, except with ChatGPT

Many developers said they are nervous about building products on the OpenAI API because OpenAI might eventually release products that compete with them. Altman responded that OpenAI will not release more products beyond ChatGPT. Many great platform companies have a killer app of their own, he said, and ChatGPT will let OpenAI improve its API by being a customer of its own product. ChatGPT's vision is to be a super-intelligent work assistant, but there are plenty of other GPT use cases that OpenAI won't touch.

5. Regulation is necessary, but open source is equally important

While Altman called for regulation of future models, he does not think existing models are dangerous, and he believes regulating or banning them would be a big mistake. He emphasized the importance of open source and said OpenAI is considering open-sourcing GPT-3. Part of the reason it hasn't done so yet is that he doubts how many individuals and companies are capable of hosting and serving large language models.

6. The law of scaling still holds

Many recent articles have claimed that "the era of giant AI models is over," but Altman said this misrepresents what he meant.

OpenAI's internal data suggest that the scaling laws for model performance still hold: making models larger will continue to improve performance. But the previous rate of scaling cannot be maintained, because OpenAI has already scaled its models up by a factor of millions in just a few years. That doesn't mean OpenAI will stop scaling models, only that they are likely to double or triple in size each year rather than jump by orders of magnitude.
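The scaling laws referenced here are often summarized, in the form popularized in the public literature (Kaplan et al., 2020), as a power law in parameter count. This is a standard textbook formulation, not OpenAI's internal data:

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
```

where $L$ is the model's test loss, $N$ is the number of (non-embedding) parameters, and $N_c$ and $\alpha_N$ are empirically fitted constants. The key property is that loss keeps falling smoothly as $N$ grows, which is what "the law of scaling still holds" refers to.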

The fact that the scaling laws continue to hold has important implications for the timeline of artificial general intelligence (AGI). The scaling hypothesis is that we may already have most of the pieces needed to build AGI, and that most of the remaining work is taking existing methods and scaling them to larger models and larger datasets. If the era of scaling were over, we should expect AGI to be much further away. The fact that the scaling laws continue to hold strongly suggests the timeline is getting shorter. (Golden Deer)