
OpenAI introduces more enterprise-grade AI capabilities for API customers to compete with Meta's Llama 3

Author: Webmaster's Home (ChinaZ.com)

Highlights:

* OpenAI has announced the expansion of its enterprise-grade capabilities, giving API customers new tools designed to enhance security, improve administrative control, and manage costs more effectively.

* The upgrades include Private Link, multi-factor authentication (MFA), an improved Assistants API, and a new Projects feature for managing and partitioning access to specific tasks and workloads.

* These new features show that OpenAI is focused on giving enterprises an out-of-the-box, "plug-and-play" experience to meet the challenge from open-source models such as Meta's Llama 3 and Mistral.

Webmaster's Home (ChinaZ.com), April 24 news: While Meta's new Llama 3 has quickly topped the list of the most widely used and customized large language models (LLMs), OpenAI, the rival that pioneered the generative AI era, is answering the competition by introducing new enterprise-grade features for customers who build and program on top of its GPT-4 Turbo LLM and other models.


OpenAI today announced the expansion of its enterprise-grade capabilities for API customers, further enriching its Assistants API and introducing a number of new tools designed to strengthen security and administrative control, as well as to manage costs more effectively.

Private Link and enhanced security features

One of the major security upgrades is the introduction of Private Link, a secure method that enables direct communication between Microsoft's Azure cloud services and OpenAI, which OpenAI says helps "reduce the exposure of customer data and queries sent through APIs to the open internet."
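For illustration, here is a minimal sketch of what this looks like from the application side, assuming private DNS inside an Azure virtual network resolves the API hostname to the Private Link endpoint; the endpoint and routing setup live in Azure networking, not in code, so the client itself is essentially unchanged.

```python
from openai import OpenAI

# Assumption for this sketch: the app runs inside an Azure VNet where a
# Private Link endpoint has been provisioned, and private DNS resolves the
# API hostname to that endpoint's private IP instead of the public internet.
client = OpenAI(
    api_key="sk-...",                      # organization or project API key
    base_url="https://api.openai.com/v1",  # resolved privately inside the VNet
)

# A normal API call; only name resolution and network routing differ.
resp = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Hello over Private Link"}],
)
print(resp.choices[0].message.content)
```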

The move complements the existing security stack, which includes SOC 2 Type II certification, single sign-on (SSO), AES-256 encryption of data at rest, TLS 1.2 encryption in transit, and role-based access control.

In addition, OpenAI has introduced native multi-factor authentication (MFA) to strengthen access controls to meet growing compliance needs.

For healthcare companies that require HIPAA compliance, OpenAI continues to offer a business associate agreement as well as a zero data retention policy for eligible API customers.


The upgraded Assistants API can handle 500 times more files

One of OpenAI's lesser-known but most important enterprise products is its Assistants API, which lets businesses deploy the custom fine-tuned models they have trained and pull in specific documents via retrieval-augmented generation (RAG) to power conversational assistants inside their own applications.
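As a rough sketch of how such an assistant is created with the official Python SDK (using the beta Assistants v2 endpoints; the instructions are illustrative, and any supported model such as gpt-4-turbo or a fine-tuned variant could be used):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create an assistant with the file_search tool enabled so it can retrieve
# answers from documents attached via vector stores.
assistant = client.beta.assistants.create(
    name="Support Assistant",
    instructions="Answer questions using only the attached product documentation.",
    model="gpt-4-turbo",
    tools=[{"type": "file_search"}],
)
print(assistant.id)
```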

For example, e-commerce company Klarna boasted earlier this year that its AI assistant, built with the OpenAI Assistants API, was handling the work of 700 full-time human agents, cutting repeat queries by 25% and reducing resolution time by roughly 82% (from 11 minutes to 2 minutes).

OpenAI has now upgraded the Assistants API with enhanced file retrieval, including a new "file_search" tool that can handle up to 10,000 files per assistant.

That is a 500x increase over the previous 20-file limit, and it comes with additional capabilities such as parallel queries, improved reranking, and query rewriting.
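A minimal sketch of feeding documents into that retrieval layer with the Python SDK, using the beta vector-store endpoints; the file paths and assistant ID below are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Create a vector store and batch-upload documents into it.
vector_store = client.beta.vector_stores.create(name="product-docs")

paths = ["docs/manual.pdf", "docs/faq.md"]  # hypothetical local files
file_streams = [open(p, "rb") for p in paths]

# upload_and_poll blocks until every file is processed (chunked and embedded).
batch = client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=vector_store.id,
    files=file_streams,
)
print(batch.status, batch.file_counts)

# Attach the store to an existing assistant so file_search can query it.
client.beta.assistants.update(
    assistant_id="asst_...",  # placeholder assistant ID
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```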

In addition, the API now supports streaming of real-time conversational responses, meaning models such as GPT-4 Turbo and GPT-3.5 Turbo return tokens as they are generated instead of making users wait for the full response.
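A short sketch of consuming such a streamed run with the Python SDK's event-handler pattern; the assistant ID is a placeholder.

```python
from openai import OpenAI, AssistantEventHandler

client = OpenAI()

class PrintHandler(AssistantEventHandler):
    def on_text_delta(self, delta, snapshot):
        # Each text delta arrives as soon as the model emits it.
        print(delta.value, end="", flush=True)

thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Summarize the attached manual."}]
)

with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id="asst_...",       # placeholder assistant ID
    event_handler=PrintHandler(),
) as stream:
    stream.until_done()
```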

The API also adds new "vector_store" objects for better file management and offers more fine-grained control over token usage to help keep costs in check.
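For example, per-run token caps can be set when a run is created; the IDs below are placeholders and the limits are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Cap how many tokens a single run may spend on prompt and completion.
run = client.beta.threads.runs.create(
    thread_id="thread_...",    # placeholder thread ID
    assistant_id="asst_...",   # placeholder assistant ID
    max_prompt_tokens=2000,    # limits context pulled into the run
    max_completion_tokens=500, # limits generated output for the run
)
print(run.status)
```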


Projects lets businesses control and partition access to specific tasks and workloads

A new feature called "Projects" improves administrative oversight by allowing organizations to manage roles and API keys at the project level.

The feature allows enterprise customers to set permission scopes, control which models are available, and set usage-based limits to avoid unexpected costs, improvements that promise to greatly simplify project management.

In essence, customers can scope a fine-tuned model, or even a standard one, to a specific set of tasks or documents and grant particular people access to each.

So if one team in your company works with public-facing documents and another works with confidential or internal ones, you can assign each team its own project in OpenAI's API. Both teams can then use AI models on their documents without mixing the two sets or exposing the confidential material.
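A minimal sketch of that separation on the client side, assuming each team has its own project-scoped API key; the key prefixes and project IDs below are illustrative, not real values.

```python
from openai import OpenAI

# Two clients, each bound to its own project, so the public-docs team and
# the internal-docs team never share models, files, rate limits, or spend.
public_docs_client = OpenAI(
    api_key="sk-proj-PUBLIC...",   # key scoped to the public-docs project
    project="proj_public_docs",    # hypothetical project ID
)

internal_docs_client = OpenAI(
    api_key="sk-proj-INTERNAL...", # key scoped to the internal-docs project
    project="proj_internal_docs",  # hypothetical project ID
)

# Each call is billed and rate-limited against its own project.
resp = public_docs_client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "What is our public return policy?"}],
)
print(resp.choices[0].message.content)
```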

Official blog: https://openai.com/blog/more-enterprise-grade-features-for-api-customers
