The AI Behaviour Toolkit helps you understand how AI actually gets used in practice — and where behaviour introduces risk. It is designed for product teams, designers, data scientists, and risk or governance professionals working with AI systems in real-world environments.
Identify the behavioural patterns causing AI risk
The Signals Library helps you recognise when behaviour is introducing risk into AI workflows. Each signal outlines what to look for, the research questions to ask, and the kind of data needed to measure it. The library is organised into 14 behavioural signals, each helping your team distinguish normal usage from emerging risk.
Understand the consequences of high-risk behaviour
The Impact Assessment helps teams identify where actual behaviour may lead to poor outcomes. It connects how AI is used to specific failure modes, evaluating the impact on performance, compliance, and safety. This tool helps you design targeted responses to reduce behavioural risk before it scales.
Define how AI should be used
This tool brings together product design with risk and governance to articulate what safe behaviour looks like, and the thresholds beyond which behaviour becomes risky. It helps teams explore key risk scenarios to create a shared understanding of how to recognise risky behaviour and which controls to use to nudge it back towards safety.
Validate behaviour in the real world
The Experiment Lab provides a structured way to design and run experiments that test how AI is actually used. Teams capture key data, observe real-life behaviour, and interpret their findings to identify and manage behavioural risk. By linking real-world actions to known risk signals, you get a clear view of what works in practice, not just in theory.
Complete workshop templates, with examples and facilitator guides
A library of printable Behavioural Risk Signal cards
Access to the open source community on GitHub
Most AI risk frameworks focus on models, data, and systems. The AI Behaviour Toolkit focuses on how people actually use AI—because this is where many risks emerge in practice.
The AI Behaviour Toolkit is free for internal use, adaptation, and sharing. It is distributed under a Creative Commons Attribution‑NonCommercial‑ShareAlike 4.0 licence. This means you can download it, adapt it, and share it freely, as long as you credit Tiny Designer and don't use it for commercial purposes. Just don't sell it on to others or claim it as your own work.
We encourage teams to test, remix, and build on the toolkit. Every adaptation helps the community improve its understanding of behavioural AI risk. If you develop new tools, signals, or examples, share them back to help strengthen the open‑source ecosystem.