
Amazon Bedrock Introduces New Capabilities to Help Tens of Thousands of Customers Build and Scale Secure Generative AI Applications

Author: Bitsusha

The new proprietary model import feature makes it easier for customers to bring their own models into Amazon Bedrock and take full advantage of its capabilities

New model evaluation capabilities help customers select the best model from a broad range of fully managed models, which now include the new RAG-optimized Amazon Titan Text Embeddings V2 and the latest models from Cohere and Meta

Amazon Bedrock's Guardrails uses industry-leading technology to help customers effectively implement security measures tailored to their application needs and responsible AI policies.

BEIJING, April 23, 2024 /PRNewswire/ -- Today, Amazon Web Services announced new capabilities for Amazon Bedrock, giving customers a simpler, faster, and more secure way to develop advanced generative artificial intelligence (AI) applications. Tens of thousands of customers have chosen Amazon Bedrock as the core foundation of their generative AI strategy. Amazon Bedrock provides easy access to leading foundation models (FMs) from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, Amazon, and others, while addressing the functionality and enterprise-grade security requirements needed to develop and deploy generative AI applications. Amazon Bedrock delivers these models to customers as a fully managed service, so customers don't have to worry about the underlying infrastructure, ensuring seamless deployment, scalability, and continuous optimization of applications. The new capabilities allow customers to run their own fully managed models on Amazon Bedrock, simplify the process of selecting the best model for a given use case, and make it easier to implement protections for generative AI applications, while also expanding the choice of models. For more information or to get started with Amazon Bedrock, please visit.

From fast-rising startups to security-conscious large enterprises and government agencies, organizations around the world are using Amazon Bedrock to spark innovation, increase productivity, and create new user experiences. For example, the New York Stock Exchange (NYSE) leverages Amazon Bedrock's broad selection of foundation models and advanced generative AI capabilities to process numerous regulatory filings and translate complex regulatory content into easy-to-understand language. Ryanair, Europe's largest airline, is improving service efficiency with Amazon Bedrock, which enables cabin crew to query country-specific regulations in real time or quickly extract key summaries from a vast number of manuals to ensure a smooth journey for passengers. Meanwhile, Netsmart, a technology provider focused on electronic health record (EHR) solutions for community healthcare providers, is working to reduce the burden of clinical documentation on healthcare workers. With a generative AI automation tool built on Amazon Bedrock, Netsmart aims to reduce the time spent managing personal health records by up to 50%, enabling its customers to accelerate the submission of patient reimbursement claims and improve the patient care experience.

"Amazon Bedrock is exploding in the growth of enterprise applications. Thousands of companies of all sizes and industries have chosen Amazon Bedrock as the core foundation of their generative AI strategy, dramatically accelerating and simplifying the transition from pilot to production. "

Dr. Swami Sivasubramanian, Vice President of AI and Data, Amazon Web Services

"Customers are enthusiastic about Amazon Bedrock because it not only provides enterprise-grade security and privacy, but also offers a broad selection of cutting-edge models that make it easier than ever to build generative AI applications." With today's new capabilities, we will continue to innovate rapidly to provide our customers with richer features and industry-leading models to further drive generative AI adoption at scale." "

The new proprietary model import feature allows customers to integrate their custom models into Amazon Bedrock, reducing operational costs and accelerating application development

On Amazon Bedrock, customers in industries such as healthcare and life sciences and financial services can easily access a wide range of leading foundation models from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon, as well as customize publicly available models with their own data to support industry-specific use cases. When enterprises need to build models from their own data, they often rely on services like Amazon SageMaker, which provides industry-leading model training capabilities to train models from scratch or to customize existing publicly available models such as Llama, Mistral, and Flan-T5. Since its launch in 2017, Amazon SageMaker has become a platform for building and training world-class foundation models, including Falcon 180B, a publicly available model with a very large number of parameters. At the same time, customers want to be able to combine their own custom models with the advanced generative AI tools built into Amazon Bedrock, such as Knowledge Bases, Guardrails, Agents, and model evaluation, without having to develop these capabilities themselves.

With the new Amazon Bedrock proprietary model import feature, enterprises can now import their own custom models into Amazon Bedrock and access them as a fully managed API, bringing an unprecedented experience to building generative AI applications. With just a few clicks, customers can integrate models they developed using Amazon SageMaker or other tools into the Amazon Bedrock platform. Once a model passes the automated validation process, customers can access their custom model just like any other model on the platform, while enjoying all the benefits of Amazon Bedrock: seamless scalability, robust application protections, adherence to responsible AI principles, augmentation of model knowledge with Retrieval-Augmented Generation (RAG), easy creation of agents for multi-step tasks, and fine-tuning to continuously train and optimize models, all without having to manage the underlying infrastructure. This new feature lets enterprises access both Amazon Bedrock's models and their own custom models through the same API. The proprietary model import feature is now in preview and supports three of the most popular open model architectures: Flan-T5, Llama, and Mistral, with support for more models planned in the future.
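As a rough sketch of what that workflow could look like with the AWS SDK for Python (boto3): the job name, model name, IAM role, S3 path, and request body below are hypothetical placeholders, and the exact request fields should be verified against the current model-import API reference.

import json
import boto3

# Control-plane client for managing Bedrock resources, plus a runtime
# client for invoking models. The region is illustrative.
bedrock = boto3.client("bedrock", region_name="us-east-1")
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Start an import job for custom model weights stored in S3.
# Job name, model name, role ARN, and S3 URI are placeholders.
job = bedrock.create_model_import_job(
    jobName="my-model-import-job",
    importedModelName="my-fine-tuned-llama",
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
    modelDataSource={"s3DataSource": {"s3Uri": "s3://my-bucket/llama-weights/"}},
)
print("Import job ARN:", job["jobArn"])

# After the job completes and validation passes, the imported model is
# invoked through the same runtime API as any built-in Bedrock model.
# The request body schema depends on the imported model's architecture.
response = runtime.invoke_model(
    modelId="arn:aws:bedrock:us-east-1:123456789012:imported-model/my-fine-tuned-llama",
    body=json.dumps({"prompt": "Summarize this support ticket.", "max_gen_len": 256}),
)
print(json.loads(response["body"].read()))

The key point is the final call: once imported, the custom model sits behind the same invocation API as every built-in Bedrock model.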

The Model Evaluation feature helps customers evaluate, compare, and select the best model for their application

Amazon Bedrock offers the industry's broadest selection of leading models to meet enterprises' needs for price, performance, and capabilities, whether they want dedicated or shared access to models. A critical step in building a generative AI application is finding the right model, and choosing the best model for a specific use case requires customers to strike a delicate balance between accuracy and performance. With new models arriving constantly, businesses can spend a great deal of time analyzing how each one fits their use case, which slows the pace at which they can deliver generative AI experiences to users. The Model Evaluation feature, now generally available, is the fastest way to analyze and compare models on Amazon Bedrock, reducing the time it takes to evaluate models from weeks to hours so that businesses can launch new applications faster and improve the user experience. Customers can start an evaluation right away by selecting predefined criteria such as accuracy and robustness and uploading their own datasets and prompts, or by choosing from built-in, publicly available datasets. For subjective criteria or content that requires nuanced judgment, Amazon Bedrock makes it easy to incorporate human review into the workflow and evaluate models against use-case-specific metrics such as relevance, style, and brand voice. Once set up, Amazon Bedrock runs the evaluations and generates reports that make it easy for customers to understand how each model performs on key metrics and quickly select the best one for their use case.
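A minimal sketch of starting an automated evaluation with boto3, assuming the bedrock client's create_evaluation_job operation: the job name, IAM role, dataset, S3 locations, model identifier, and metric names are placeholders, and the nested configuration shape should be checked against the current API reference.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Kick off an automated evaluation that scores one candidate model on a
# question-answering dataset using built-in accuracy and robustness metrics.
# All names, ARNs, and S3 URIs are hypothetical.
job = bedrock.create_evaluation_job(
    jobName="qa-model-comparison",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {
                        "name": "SupportFAQ",
                        "datasetLocation": {"s3Uri": "s3://my-bucket/eval/faq.jsonl"},
                    },
                    "metricNames": ["Builtin.Accuracy", "Builtin.Robustness"],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [{"bedrockModel": {"modelIdentifier": "meta.llama3-8b-instruct-v1:0"}}]
    },
    outputDataConfig={"s3Uri": "s3://my-bucket/eval/results/"},
)
print("Evaluation job ARN:", job["jobArn"])

Running one such job per candidate model and comparing the generated reports is one way to narrow the field before adding a human-review workflow for subjective criteria.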

With Amazon Bedrock Guardrails, customers can easily implement cutting-edge protections that remove personal and sensitive information, profanity, and specific words, and block harmful content

Businesses need to implement generative AI in a secure, trusted, and responsible way. Many models use built-in controls to filter out undesirable and harmful content, but most customers want to tailor their generative AI applications further so that the results they generate are more relevant, follow responsible AI principles, and comply with company policies. The Amazon Bedrock Guardrails feature is now generally available, providing industry-leading safeguards on top of the native capabilities of the underlying models and helping customers block up to 85% of harmful content. Guardrails is the only solution from a top-tier cloud provider that lets customers combine built-in and custom protections in a single service, and it works with all large language models (LLMs) in Amazon Bedrock as well as fine-tuned models. To create a guardrail, customers simply provide a natural language description defining the topics to avoid in the context of their application. They can also set thresholds to filter content such as hate speech, insults, sexual language, or violent prompts, and configure filters to remove personal and sensitive information, profanity, or specific blocked words. By providing a consistent user experience and standardized safety and privacy controls across generative AI applications, Amazon Bedrock Guardrails enables customers to innovate quickly and securely.
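A minimal sketch of creating and applying a guardrail with boto3: the guardrail name, topic definition, filter choices, messaging, and model identifier are illustrative, and the field names follow the create_guardrail and invoke_model request shapes as publicly documented at the time of writing.

import json
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Create a guardrail that denies an off-limits topic (described in natural
# language), filters harmful-content categories, and anonymizes email
# addresses. All names and wording are illustrative.
guardrail = bedrock.create_guardrail(
    name="customer-support-guardrail",
    description="Protections for a customer support assistant",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Investment advice",
                "definition": "Recommendations about specific stocks, funds, or other investments.",
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't help with that request.",
)

# Apply the guardrail to any supported model by passing its identifier
# and version along with the runtime invocation.
response = runtime.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",
    guardrailIdentifier=guardrail["guardrailId"],
    guardrailVersion="DRAFT",
    body=json.dumps({"prompt": "Which stock should I buy?", "max_gen_len": 128}),
)
print(json.loads(response["body"].read()))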

More model choices: Amazon Titan Text Embeddings V2, the generally available Amazon Titan Image Generator, and the latest models from Cohere and Meta

Amazon Bedrock's proprietary Amazon Titan models are created and pre-trained by Amazon Web Services on large, diverse datasets, designed for a variety of use cases, and built with responsible AI capabilities. Amazon Bedrock continues to expand the Amazon Titan family to give customers more choice and flexibility. Amazon Titan Text Embeddings V2 is optimized for RAG use cases, making it ideal for tasks such as information retrieval, question-answering chatbots, and personalized recommendations. Many businesses have adopted RAG, a popular model customization technique that improves the results a foundation model generates by connecting it to knowledge sources, but running these operations can consume significant compute and storage resources. The upcoming Amazon Titan Text Embeddings V2 model reduces storage and compute costs while improving accuracy: by offering flexible embedding sizes, it reduces storage requirements to as little as a quarter of their original size and significantly lowers operating costs, while retaining 97% of the accuracy for RAG use cases and outperforming other leading models.
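For illustration, a sketch of requesting a compact embedding with boto3: the model identifier amazon.titan-embed-text-v2:0 and the dimensions and normalize fields reflect the publicly documented Titan Text Embeddings V2 request schema, but treat them as assumptions to verify.

import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request a compact 256-dimension embedding instead of the default size;
# smaller vectors shrink the vector-store footprint for RAG workloads.
body = {
    "inputText": "How do I reset my router to factory settings?",
    "dimensions": 256,   # flexible output size (e.g. 256, 512, 1024)
    "normalize": True,   # unit-length vectors simplify cosine similarity
}

response = runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",  # assumed model identifier
    body=json.dumps(body),
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # 256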

Amazon Titan Image Generator, now generally available, gives customers in industries such as advertising, e-commerce, and media and entertainment a low-cost way to generate professional-grade images, or to enhance and edit existing images, using only natural language prompts. Amazon Titan Image Generator also embeds an invisible watermark in every image it generates to help identify AI-generated images, promoting the safe, secure, and transparent development of AI technology and helping to reduce the spread of misinformation. Amazon Titan Image Generator can also check images for the presence of this watermark, helping customers confirm whether an image was generated by the model.
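A minimal text-to-image sketch with boto3: the prompt, generation settings, and model identifier are illustrative and follow the publicly documented Titan Image Generator request schema at the time of writing.

import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Text-to-image request; prompt text and generation settings are
# illustrative placeholders.
body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "A studio photo of a ceramic coffee mug on a wooden table"},
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 1024,
        "width": 1024,
        "cfgScale": 8.0,
    },
}

response = runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",  # assumed model identifier
    body=json.dumps(body),
)
images = json.loads(response["body"].read())["images"]

# Each image is returned base64-encoded; decode and save the first one.
with open("mug.png", "wb") as f:
    f.write(base64.b64decode(images[0]))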

In addition, the Meta Llama 3 foundation models are now generally available on Amazon Bedrock, and Cohere's Command R and Command R+ models are coming soon. Llama 3 is designed for developers, researchers, and businesses to build, experiment with, and responsibly scale generative AI projects. The Llama 3 models are a family of pre-trained and fine-tuned large language models suited to a wide range of use cases, especially tasks such as text summarization and classification, sentiment analysis, language translation, and code generation. Cohere's Command R and Command R+ are cutting-edge foundation models that enable customers to build enterprise-grade generative AI applications with advanced RAG capabilities in 10 languages, helping them expand their businesses around the world.
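As a sketch of what invoking Llama 3 on Amazon Bedrock can look like with boto3: the model identifier, prompt template, and inference parameters are assumptions based on Meta's published Llama 3 instruct format and Bedrock's documented request schema.

import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Wrap the user message in Llama 3's instruct prompt format (special
# header tokens per Meta's documentation), then invoke the model.
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
    "Classify the sentiment of this review: 'The seats were cramped but the crew was great.'"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)

response = runtime.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",  # assumed Bedrock model identifier
    body=json.dumps({
        "prompt": prompt,
        "max_gen_len": 128,
        "temperature": 0.2,
        "top_p": 0.9,
    }),
)
print(json.loads(response["body"].read())["generation"])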

Amazon Bedrock empowers customers and partners to succeed

Rufus, built by Amazon, is a shopping assistant with generative AI at its core, trained on the company's vast product catalog, customer reviews, community Q&As, and information from across the web. Rufus can answer customers' shopping questions, provide product comparisons, and make recommendations based on the context of the conversation. "In order to provide a superior conversational shopping experience in the Amazon store, we are committed to developing a state-of-the-art model for Rufus and expect it to deliver exceptional value to our customers."

Trishul Chilimbi, Vice President and Distinguished Scientist, Amazon Stores Foundational AI

"By leveraging Amazon Bedrock's custom model import capabilities, we were able to make Rufus' advanced underlying model available to in-house developers to access it as a fully managed API." Now, business teams from logistics to studios can use this model to build their own apps, and Amazon Bedrock streamlines the development process and helps all Amazon customers quickly develop new experiences. "

Dentsu is one of the world's leading providers of integrated marketing and technology services. "Over the past three months, we've used the preview version of the Amazon Titan Image Generator model to create a large number of photorealistic, professional-grade images from natural language prompts, primarily for product promotion and consistent brand identity."

James Thomas, Global Technical Director, Dentsu Creative

"Our creative team was impressed with the diverse content generated by the Titan Image Generator, which helped us create compelling imagery for our campaigns around the world. We look forward to experiencing the model's new watermark detection capabilities, which will increase transparency in AI-generated content and help us build stronger relationships of trust with our customers." "

Salesforce is a global leader in AI-powered customer relationship management (CRM), and its combination of "CRM + Data + AI + Trust" gives enterprises powerful customer connection capabilities. "AI is at the heart of our commitment to helping customers deliver personalized experiences in Salesforce apps built on Data Cloud data. To implement generative AI on our unified data platform, we are evaluating a variety of foundation models to ensure that the ones we select best meet the needs of our customers."

Kaushal, Senior Vice President of Product, Salesforce

"Amazon Bedrock is a key part of our open ecosystem model strategy, and this new model evaluation capability provides both automated and manual evaluation to accelerate our model comparison and selection. Now, we can evaluate models not only based on intuitive criteria, but also from more qualitative criteria such as friendliness, style, and brand relevance. This increased capability will make it easier and faster for us to operate our model for our customers. "
