Amazon Trusted AI Challenge

Event Notification Type: Call for Proposals
Contact: Michael Johnston, Maureen Murray
Submission Deadline: Sunday, 1 September 2024

Introducing the Amazon Trusted AI Challenge

University students will compete for cash prizes in a competition to advance the security of code-generating LLMs.

Amazon is announcing the Amazon Trusted AI Challenge, a global university competition to drive secure innovation in generative AI technology. This year’s challenge focuses on responsible AI, specifically large language model (LLM) coding security.

“We are focusing on advancing the capabilities of coding LLMs, exploring new techniques to automatically identify possible vulnerabilities and effectively secure these models,” said Rohit Prasad, senior vice president and head scientist, Amazon AGI. “The goal of the Amazon Trusted AI Challenge is to see how students' innovations can help forge a future where generative AI is consistently developed in a way that maintains trust, while highlighting effective methods for safeguarding LLMs against misuse to enhance their security.”

University students will compete in a tournament-style challenge as either model developer teams or red teams to enhance the AI user experience, prevent misuse, and enable users to build more secure code. Model developer teams will build security features into code-generating models, while red teams will develop automated techniques to test these models. Each round will allow teams to refine their models and techniques based on multi-turn interactions, identifying strengths and weaknesses.

Amazon will select up to 10 teams for the competition, which begins in November 2024 and runs through the academic year. Each selected team will receive $250,000 in sponsorship along with monthly AWS credits, and winning teams can earn a share of an additional $700,000 in cash prizes.

Advancements and opportunities in AI-assisted software development

The Amazon Trusted AI Challenge aims to enhance the safety, reliability, and trustworthiness of the LLMs powering AI-assisted software development tools. Generative AI coding assistants are demonstrating unprecedented capabilities, and their rapid adoption creates both the opportunity and the need to ensure their responsible and reliable use. The challenge seeks to inspire developers, scientists, and researchers to create solutions that strengthen AI-assisted coding tools' ability to protect users and systems.

Tournament structure

Through four tournaments and a live finals event, red teams will test model developer teams' AI models to uncover vulnerabilities and improve their security. Red teams will be ranked on their success in forcing models to breach their policies through automated conversational red-teaming. Model developer teams will create code-generating models to enhance security, identify threats, and prevent unintended behavior. They will be ranked on their ability to build and reinforce successful defenses through techniques such as fine-tuning and alignment. The goal is to discover innovative ways for LLM creators to mitigate risks and implement effective safety measures.

The top model developer team wins $250,000, with $100,000 for second place. The red team demonstrating the most effective vulnerability identification also wins $250,000, with $100,000 for second place.

For more information about the challenge, including rules and frequently asked questions, visit the Amazon Trusted AI Challenge landing page: https://www.amazon.science/trusted-ai-challenge