Software runs the world. Everything from IoT, medical devices, the power grid, smart cars, and voting apps has software behind it. Learn from the best of the best on exploiting software vulnerabilities and securing the software that is the foundation of our dynamic world.
Stop by RSAC Sandbox in Moscone South at RSA Conference starting Tuesday, May 7 at 4:30 PM through Thursday, May 9 at 2:30 PM to visit the AppSec Sandbox and participate in hands-on activities.
Day 1 - May 07, 2024
16:30
Hacking GPTs Using Prompt Manipulation
Large Language Models, also known as LLMs, have become an essential part of our daily work routine. OpenAI is a leading company in this field, having launched the first LLM, called ChatGPT, and constantly improving the model by adding new features. One such feature is GPTs, a customizable version of...
More info...Spot the Secrets: Finding The Valid Secrets Throughout Your Environments
Before you can deal with secrets sprawl, you first need to understand how deep the issue of plaintext secrets can be. Improperly stored and shared secrets goes beyond just the top layer of code that you put in production. It affects feature branches, old commits, logs, and communication and collabor...
More info...Spot the False Positive
Find the true positives out of 5 SQLi. You've got 18x18 inch game board, 5 cards, 5 code weaknesses, and a 5-minute sand timer, ready, set, go! You'll have 5 minutes to place the cards in the correct order and find the true positive(s). The winner? Whoever finds the solution in the shortest amount o...
More info...Day 2 - May 08, 2024
09:30
Capture the Container
In this session, we will dive into bloated containers, a pressing problem plaguing open source software supply chains. We will discuss this phenomena and demonstrate how to use scanners and the National Vulnerability Database to address bloat in your own containers. The bulk of this session will con...
More info...Hacking Developers’ Trust – Faking GitHub Contribution
Join us for a revealing exploration of open-source trust and its vulnerabilities. In this captivating activity, we will delve into the fascinating world of developer credibility and the unsettling phenomenon of faking GitHub contributions. With open source becoming an integral part of software devel...
More info...Test Your AppSec Knowledge—It's in the Cards
Pick 5 cards with random levels of difficulty. Answer questions ranging from true/false to multiple choice to spot the vulnerable code. Test your knowledge on risky deployment scenarios, rack up the points, and get to the top of the leaderboard to win!
More info...11:30
Open Source LLM Security Demo with Trivia
Join us for an LLM security demo with real-world examples and engage in a trivia game centered around it!
LLM security involves measures and techniques used to ensure the safety, privacy, and integrity of large language models like OpenAI's GPT models. We start with some attacks and stats on LLM...
More info...Spot the Secrets: Finding The Valid Secrets Throughout Your Environments
Before you can deal with secrets sprawl, you first need to understand how deep the issue of plaintext secrets can be. Improperly stored and shared secrets goes beyond just the top layer of code that you put in production. It affects feature branches, old commits, logs, and communication and collabor...
More info...Spot the False Positive
Find the true positives out of 5 SQLi. You've got 18x18 inch game board, 5 cards, 5 code weaknesses, and a 5-minute sand timer, ready, set, go! You'll have 5 minutes to place the cards in the correct order and find the true positive(s). The winner? Whoever finds the solution in the shortest amount o...
More info...13:30
Test Your AppSec Knowledge—It's in the Cards
Pick 5 cards with random levels of difficulty. Answer questions ranging from true/false to multiple choice to spot the vulnerable code. Test your knowledge on risky deployment scenarios, rack up the points, and get to the top of the leaderboard to win!
More info...Hacking GPTs Using Prompt Manipulation
Large Language Models, also known as LLMs, have become an essential part of our daily work routine. OpenAI is a leading company in this field, having launched the first LLM, called ChatGPT, and constantly improving the model by adding new features. One such feature is GPTs, a customizable version of...
More info...Hacking Developers’ Trust – Faking GitHub Contribution
Join us for a revealing exploration of open-source trust and its vulnerabilities. In this captivating activity, we will delve into the fascinating world of developer credibility and the unsettling phenomenon of faking GitHub contributions. With open source becoming an integral part of software devel...
More info...15:30
Capture the Container
In this session, we will dive into bloated containers, a pressing problem plaguing open source software supply chains. We will discuss this phenomena and demonstrate how to use scanners and the National Vulnerability Database to address bloat in your own containers. The bulk of this session will con...
More info...Spot the False Positive
Find the true positives out of 5 SQLi. You've got 18x18 inch game board, 5 cards, 5 code weaknesses, and a 5-minute sand timer, ready, set, go! You'll have 5 minutes to place the cards in the correct order and find the true positive(s). The winner? Whoever finds the solution in the shortest amount o...
More info...Spot the Secrets: Finding The Valid Secrets Throughout Your Environments
Before you can deal with secrets sprawl, you first need to understand how deep the issue of plaintext secrets can be. Improperly stored and shared secrets goes beyond just the top layer of code that you put in production. It affects feature branches, old commits, logs, and communication and collabor...
More info...Day 3 - May 09, 2024
09:30
Hacking GPTs Using Prompt Manipulation
Large Language Models, also known as LLMs, have become an essential part of our daily work routine. OpenAI is a leading company in this field, having launched the first LLM, called ChatGPT, and constantly improving the model by adding new features. One such feature is GPTs, a customizable version of...
More info...Hacking Developers’ Trust – Faking GitHub Contribution
Join us for a revealing exploration of open-source trust and its vulnerabilities. In this captivating activity, we will delve into the fascinating world of developer credibility and the unsettling phenomenon of faking GitHub contributions. With open source becoming an integral part of software devel...
More info...Spot the Secrets: Finding The Valid Secrets Throughout Your Environments
Before you can deal with secrets sprawl, you first need to understand how deep the issue of plaintext secrets can be. Improperly stored and shared secrets goes beyond just the top layer of code that you put in production. It affects feature branches, old commits, logs, and communication and collabor...
More info...12:00
Spot the False Positive
Find the true positives out of 5 SQLi. You've got 18x18 inch game board, 5 cards, 5 code weaknesses, and a 5-minute sand timer, ready, set, go! You'll have 5 minutes to place the cards in the correct order and find the true positive(s). The winner? Whoever finds the solution in the shortest amount o...
More info...Hacking Developers’ Trust – Faking GitHub Contribution
Join us for a revealing exploration of open-source trust and its vulnerabilities. In this captivating activity, we will delve into the fascinating world of developer credibility and the unsettling phenomenon of faking GitHub contributions. With open source becoming an integral part of software devel...
More info...Open Source LLM Security Demo with Trivia
Join us for an LLM security demo with real-world examples and engage in a trivia game centered around it!
LLM security involves measures and techniques used to ensure the safety, privacy, and integrity of large language models like OpenAI's GPT models. We start with some attacks and stats on LLM...
More info...