Amazon and Anthropic announce a collaboration to train future foundation models and make them accessible to AWS customers.
Anthropic selects AWS as its primary cloud provider and will train and deploy its future foundation models on AWS Trainium and Inferentia chips, taking advantage of AWS’s high-performance, low-cost machine learning accelerators.
Anthropic deepens its commitment to AWS, making its future foundation models accessible to millions of developers and providing AWS customers early access to unique model customization and fine-tuning capabilities using their proprietary data, all through Amazon Bedrock.
Amazon and Anthropic announced a strategic collaboration that will bring together their respective industry-leading technology and expertise in safer generative artificial intelligence (AI) to accelerate the development of Anthropic’s future foundation models and make them widely accessible to AWS customers. As part of the expanded collaboration:
Amazon and Anthropic Collaboration Includes:
- Anthropic will use AWS Trainium and Inferentia chips to build, train, and deploy its future foundation models, benefitting from the price, performance, scale, and security of AWS. The two companies will also collaborate in the development of future Trainium and Inferentia technology.
- AWS will become Anthropic’s primary cloud provider for mission critical workloads, including safety research and future foundation model development. Anthropic plans to run the majority of its workloads on AWS, further providing Anthropic with the advanced technology of the world’s leading cloud provider.
- Anthropic will make a long-term commitment to provide AWS customers around the world with access to future generations of its foundation models via Amazon Bedrock, AWS’s fully managed service that provides secure access to the industry’s top foundation models. In addition, Anthropic will provide AWS customers with early access to unique features for model customization and fine-tuning capabilities.
- Amazon will invest up to $4 billion in Anthropic and have a minority ownership position in the company.
- Amazon developers and engineers will be able to build with Anthropic models via Amazon Bedrock so they can incorporate generative AI capabilities into their work, enhance existing applications, and create net-new customer experiences across Amazon’s businesses.
“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, Amazon CEO. “Customers are quite excited about Amazon Bedrock, AWS’s new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’s AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”
“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers. By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”
Foundation Model Claude:
An AWS customer since 2021, Anthropic has grown quickly into one of the world’s leading foundation model providers and a leading advocate for the responsible deployment of generative AI. Its foundation model, Claude, excels at a wide range of tasks, from sophisticated dialogue and creative content generation to complex reasoning and detailed instruction, while maintaining a high degree of reliability and predictability.
Its industry-leading 100,000-token context window can securely process extensive amounts of information across all industries, from manufacturing and aerospace to agriculture and consumer goods, as well as technical, domain-specific documents for industries such as finance, legal, and healthcare.
Customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable compared to other foundation models, so developers can get their desired output with less effort. Anthropic’s state-of-the-art model, Claude 2, scores above the 90th percentile on the GRE reading and writing exams, and similarly on quantitative reasoning.
This is the latest AWS generative AI announcement as the company continues to expand its unique offering at all three layers of the generative AI stack. At the bottom layer, AWS continues to offer compute instances from NVIDIA as well as AWS’s own custom silicon chips, AWS Trainium for AI training and AWS Inferentia for AI inference.
At the middle layer, AWS is focused on providing customers with the broadest selection of foundation models from multiple leading providers where customers can then customize those models, keep their own data private and secure, and seamlessly integrate with the rest of their AWS workloads—all of this is offered through AWS’s new service, Amazon Bedrock.
With this announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature within Amazon Bedrock. At the top layer, AWS offers generative AI applications and services for customers like Amazon CodeWhisperer, a powerful AI-powered coding companion, which recommends code snippets directly in the code editor, accelerating developer productivity as they code.
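For developers, access to Anthropic’s models goes through the Amazon Bedrock runtime API. The following is a minimal sketch, not an official example: it assumes the AWS SDK for Python (boto3), the us-east-1 Region, the anthropic.claude-v2 model identifier, and Claude’s text-completion request format; exact model IDs, Regions, and request schemas depend on your account and the Bedrock model catalog.

```python
import json
import boto3

# Illustrative sketch: invoking Claude 2 via the Amazon Bedrock runtime API.
# Region and model ID below are assumptions; verify them in your Bedrock console.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude's text-completion format expects alternating Human/Assistant turns in the prompt.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the key risks in this sample supply agreement.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # assumed model identifier
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream containing the model's JSON output.
result = json.loads(response["body"].read())
print(result["completion"])
```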
As part of this deeper collaboration, AWS and Anthropic are committing meaningful resources to help customers get started with Claude and Claude 2 on Amazon Bedrock, including through the AWS Generative AI Innovation Center, where teams of AI experts will help customers of all sizes develop new generative AI-powered applications to transform their organizations.
Customers accessing Anthropic’s current models via Amazon Bedrock are building generative AI-powered applications that help automate tasks such as producing market forecasts, developing research reports, enabling new drug discovery for healthcare, and personalizing education programs.
Enterprises already taking advantage of this advanced technology include Lonely Planet, a premier travel media company celebrated for its decades of travel content; Bridgewater Associates, a premier asset management firm for global institutional investors; and LexisNexis Legal & Professional, a top-tier global provider of information and analytics serving customers in more than 150 countries.
“We are developing a generative AI solution on AWS to help customers plan epic trips and create life-changing experiences with personalized travel itineraries,” said Chris Whyde, senior vice president of Engineering and Data Science at Lonely Planet. “By building with Claude 2 on Amazon Bedrock, we quickly created a scalable, secure AI platform that organizes our book content in minutes to deliver cohesive, highly accurate travel recommendations, reducing itinerary generation costs by nearly 80 percent.
Now we can re-package and personalize our content in various ways on our digital platforms, based on customer preference, all while highlighting trusted local voices—just like Lonely Planet has done for 50 years.”
“At Bridgewater, we believe the global economic machine can be understood, so we strive to build a fundamental, cause-and-effect understanding of markets and economies powered by cutting-edge technology,” said Greg Jensen, co-CIO at Bridgewater Associates.
“Working with the AWS Generative AI Innovation Center, we are using Amazon Bedrock and Anthropic’s Claude model to create a secure large language model-powered Investment Analyst Assistant that will be able to generate elaborate charts, compute financial indicators, and create summaries of the results, based on both minimal and complex instructions. This flexible solution will accelerate the more mundane, yet still involved, steps of our research process, enabling our analysts to spend more time on the difficult and differentiated aspects of understanding markets and economies.”
“We are working with AWS and Anthropic to host our custom, fine-tuned Anthropic Claude 2 model on Amazon Bedrock to support our strategy of rapidly delivering generative AI solutions at scale and with cutting-edge encryption, data privacy, and safe AI technology embedded in everything we do,” said Jeff Reihl, executive vice president and CTO at LexisNexis Legal & Professional. “Our new Lexis+ AI platform technology features conversational search, insightful summarization, and intelligent legal drafting capabilities, which enable lawyers to increase their efficiency, effectiveness, and productivity.”
Amazon and Anthropic are each engaged across a number of organizations to promote the responsible development and deployment of AI technologies, including the Organization for Economic Cooperation and Development (OECD) AI working groups, the Global Partnership on AI (GPAI), the Partnership on AI, the International Organization for Standardization (ISO), the National Institute of Standards and Technology (NIST), and the Responsible AI Institute.
In July, both Amazon and Anthropic joined President Biden and other industry leaders at the White House to show their support for a set of voluntary commitments to foster the safe, secure, responsible, and effective development of AI technology. These commitments are a continuation of work that both Amazon and Anthropic have been doing to support the safety, security, and responsible development and deployment of AI and will continue through this expanded collaboration.
About Anthropic
Anthropic is an AI safety and research company based in San Francisco. Anthropic’s flagship product is Claude, an AI assistant focused on being helpful, harmless, and honest.