Securing AI Chatbots
AI chatbots are no longer just answering basic questions. They are increasingly connected to backend systems and given authority to perform tasks such as issuing refunds, updating customer records, and triggering complex, privileged workflows.
These expanding capabilities create significant value, but they also introduce meaningful risk. A system that can autonomously take action can take the wrong action. A system with access to sensitive data can expose it. If poorly designed, the same privilege that enables efficiency can also create real damage.
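To make the risk concrete, here is a minimal sketch of the tool-calling pattern such chatbots typically use. All names here are illustrative, not taken from the lab: the point is that the model's output chooses which privileged backend action runs, so anything that steers the model (such as prompt injection) also steers the dispatch.

```python
# Hypothetical tool-calling loop. The functions stand in for real backend
# operations that would run with service-level credentials.

def issue_refund(order_id: str, amount: float) -> str:
    # In production this would call a payments API.
    return f"Refunded ${amount:.2f} on order {order_id}"

def update_email(customer_id: str, new_email: str) -> str:
    # In production this would write to the customer database.
    return f"Email for customer {customer_id} changed to {new_email}"

TOOLS = {"issue_refund": issue_refund, "update_email": update_email}

def execute_tool_call(call: dict) -> str:
    # The model decides the tool name and arguments. If user input can
    # manipulate the model, it can manipulate this dispatch too.
    return TOOLS[call["name"]](**call["arguments"])

print(execute_tool_call({"name": "issue_refund",
                         "arguments": {"order_id": "A-1001", "amount": 25.0}}))
```

Nothing in this loop distinguishes a legitimate refund request from one the model was tricked into making; that distinction has to come from the surrounding architecture.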
In this lab, you can attack a deliberately vulnerable chatbot to build intuition for how these failures occur. You can then interact with the same chatbot implemented with stronger security patterns designed to mitigate those risks.
The goal is to develop a practical understanding of how chatbots can be exploited and to highlight the architectural patterns that allow them to operate more securely.
The Broken Chatbot
Below is a working support chatbot. It can look up orders, issue refunds, and update account information. There is no traditional login; users identify themselves by providing an order number or customer ID.
These design choices are intentional. They reflect patterns commonly used by support chatbots operating on production websites today.
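The identification scheme described above is worth pausing on. A sketch of what it implies, with illustrative names and data not taken from the lab itself:

```python
# Hypothetical lookup behind the chatbot. Because there is no authenticated
# session, the only "identity" is whatever identifier the user types in chat.

ORDERS = {
    "ORD-100": {"customer_id": "C-1", "email": "alice@example.com"},
    "ORD-200": {"customer_id": "C-2", "email": "bob@example.com"},
}

def lookup_order(order_id: str) -> dict:
    # No check that the requester actually owns this order: a classic
    # insecure direct object reference (IDOR). Guessing or enumerating
    # order numbers exposes other customers' records.
    return ORDERS.get(order_id, {})

# An attacker who knows or guesses another customer's order number
# retrieves that customer's data with no authentication at all.
print(lookup_order("ORD-200"))
```

Whether the deployed chatbot fails in exactly this way is something the exercise below asks you to find out.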
Your task is simple: interact with both chatbots and see what you can get away with. The first chatbot contains multiple embedded vulnerabilities. Try to uncover them: attempt to access data you should not see, and attempt to perform actions that should be restricted.
