Training doesn't create adoption; behaviour does. This example shows how an AI Adoption Lab would test real-time guidance, nudges, and reminders to validate the hypothesis that they help people remember, choose, and rely on AI support in the moments that matter most.
The lab will focus on learning about key behaviours through observation:
Are choices clear and understood?
Is AI used how and when it was intended?
Do people continue to use AI during times of stress?
Motor insurance businesses typically operate with a network of approved repairers, alongside additional considerations such as location, availability, parts supply, and specialist services. During outbound complaints calls, agents must juggle compliance requirements, lengthy delays, and new information revealed by the customer — often under pressure and time constraints.
In this environment, it’s easy for key considerations to be overlooked. Well-timed reminders can help surface the right information at the right moment, increasing the likelihood that approved repairers are selected and reducing avoidable repair delays and costs. The challenge for the AI Adoption Lab is to validate solutions that influence behaviour without removing agent autonomy — prompting better decisions while ensuring agents remain in control.
Early designs will focus on contextual prompts aligned to the information available at the time. If parts were scarce for a particular vehicle, agents would be reminded to check parts availability with the repairer. In rural locations, prompts would highlight tow costs and distance considerations.
Clear actions should be built into the nudges — allowing agents to proceed with booking, defer the decision, or flag recommendations as inappropriate. This will make it easy to act, while also generating feedback when suggestions don't fit the situation.
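A minimal sketch of how these contextual nudges and their built-in actions might be modelled is shown below. The claim fields, rule conditions, and action names are illustrative assumptions for this example, not the lab's actual design.

```python
# Illustrative sketch only: claim fields, rule conditions, and action names
# are assumptions for this example, not the lab's real implementation.
from dataclasses import dataclass


@dataclass
class ClaimContext:
    vehicle_model: str
    location_type: str     # e.g. "urban" or "rural"
    parts_scarce: bool
    tow_distance_km: float


@dataclass
class Nudge:
    message: str
    # Every nudge carries the same clear actions described above.
    actions: tuple = ("proceed_with_booking", "defer_decision", "flag_as_inappropriate")


def contextual_nudges(claim: ClaimContext) -> list:
    """Return prompts aligned to the information available at the time."""
    nudges = []
    if claim.parts_scarce:
        nudges.append(Nudge(
            f"Parts supply is limited for the {claim.vehicle_model}: "
            "check availability with the approved repairer before booking."
        ))
    if claim.location_type == "rural":
        nudges.append(Nudge(
            f"Rural location (~{claim.tow_distance_km:.0f} km tow): "
            "consider tow cost and distance before selecting a repairer."
        ))
    return nudges


def record_response(nudge: Nudge, action: str, reason: str = "") -> dict:
    """Capture the agent's choice so unsuitable suggestions generate feedback."""
    if action not in nudge.actions:
        raise ValueError(f"Unknown action: {action}")
    return {"nudge": nudge.message, "action": action, "reason": reason}
```

Keeping the same three actions on every nudge is a deliberate choice in this sketch: it keeps the interaction lightweight for agents while every "flag as inappropriate" response becomes a feedback signal for refining the rules.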
The lab should run a series of experiments using a small number of test claims across a larger pool of agents. This allows behaviours and choices to be compared across groups, rather than relying on anecdotal feedback from one or two users.
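As a rough illustration, the group comparison could be as simple as computing approved-repairer selection rates for nudged and control agents; the outcome record format below is an assumption for this sketch.

```python
# Hypothetical comparison, assuming each test-claim outcome is recorded as
# (agent_id, group, approved_repairer_selected). Field names are illustrative.
from collections import defaultdict


def approved_selection_rate(outcomes: list) -> dict:
    """Compare approved-repairer selection rates between experiment groups."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for _agent_id, group, approved in outcomes:
        total[group] += 1
        selected[group] += int(approved)
    return {group: selected[group] / total[group] for group in total}


# Example: the nudged group picks an approved repairer more often than control.
rates = approved_selection_rate([
    ("a1", "nudged", True), ("a2", "nudged", True), ("a3", "nudged", False),
    ("a4", "control", True), ("a5", "control", False), ("a6", "control", False),
])
# rates ≈ {"nudged": 0.67, "control": 0.33}
```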
As nudges are refined, the pilot will expand to cover additional scenarios, with ongoing comparisons between agents working with and without nudging. Four consistent themes will likely emerge:
Explaining the Choices
Do agents understand why a recommendation is being made? A/B tests explore explanation styles, levels of detail, and confidence cues to increase acceptance without slowing decisions.
Behavioural Change
Do nudges prompt the intended actions? Data shows which actions were taken, from which screens, and whether agents followed the prompt or worked around it (see the event sketch after these themes).
Flagging Errors
What happens when the AI is wrong? Experiments track when incorrect recommendations are followed, when they are flagged, and the reasons agents give.
Seeking Further Information
When do agents need more context before acting? Different claim profiles reveal when additional information is sought, when it is not, and what increases agent confidence in complex cases.
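One way the lab might capture the data behind these themes is a simple per-nudge event record: which screen the nudge appeared on, which explanation variant was shown, whether the agent followed, worked around, deferred, or flagged it, and whether more information was requested first. The field and value names below are assumptions for this sketch, not a defined telemetry schema.

```python
# Illustrative event schema for the four themes; field and value names are
# assumptions for this sketch, not the lab's actual telemetry.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class NudgeEvent:
    agent_id: str
    claim_id: str
    screen: str                   # where the nudge was shown
    explanation_variant: str      # A/B arm for explanation style, detail, confidence cue
    action: str                   # "followed", "worked_around", "deferred", "flagged"
    flag_reason: str = ""         # free-text reason when a recommendation is flagged
    info_requested: bool = False  # agent sought more context before acting
    timestamp: str = ""

    def to_record(self) -> dict:
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()
        return asdict(self)


# Example: an agent flags a recommendation as unsuitable from the booking screen.
event = NudgeEvent(
    agent_id="a17", claim_id="c-042", screen="booking",
    explanation_variant="detailed_with_confidence",
    action="flagged", flag_reason="customer already en route to a local garage",
)
print(event.to_record())
```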
AI adoption isn’t a one-time decision — it’s a habit. Behavioural nudges help AI tools:
show up at the right moment
reduce cognitive load
become part of everyday work
When designed and tested properly, nudges remove the need for constant training or enforcement. That’s how AI moves from “available” to “used”.