Insurance Technology Diary
Episode 44: Fun with Flags
by Guillaume Bonnissent

England’s Locomotive Act of 1865 was intended to safeguard life in an age of rapid technological change. It limited all vehicles with mechanical propulsion (“road locomotives”) to the walking pace of four miles an hour in rural areas, and down to half that in cities. Whenever a locomotive was in motion, a man had to “precede such Locomotive on Foot by not less than Sixty Yards, and shall carry a Red Flag constantly displayed.” The progress of motoring technology was restricted by human limitations.
I’ve seen it argued that requiring people to check AI outputs is a similar limitation. This argument is obviously flawed, for two reasons. First, in 1865 an element of caution was sensible: the new technology was unfamiliar and less than perfectly reliable. The same is true of AI today. Just as crashes and explosions happened then, hallucinations happen now.
Second, rules are made for breaking – or at least for repealing. The red flag requirement was relaxed in 1878 and abolished outright in 1896. Lawmakers became convinced that motorised vehicles would be common and safer, and so no longer needed such drastic restriction. In its place, finer tools (like registration and licensing) were adopted. Similarly, if human oversight of AI becomes pointless, it will soon be replaced by something better.
But for now it is not. In 2025 our relationship with AI – like our relationship with steam engines in 1865 – is at a very early, experimental stage (except, of course, where AI has already been in use for many years; alas, those early applications have typically gone under the radar of everyone but the techies).
The fact is that AI, like the road locomotive, is a very wide category. The latter includes tools as different as the steam engine and the e-bike. Similarly, Generative AI (the category to which LLMs belong) and Narrow AI (used, for example, to control self-driving road locomotives) are very different tools. My very first tech product incorporated AI, but back then it wasn’t a selling point, so we didn’t flag it (so to speak).
A blog post this week by Ethan Mollick, Co-Director of the Generative AI Lab at Wharton (whose LinkedIn meta-tag begins “Ethan Mollick is a Genius”), tries to reconcile three facts about workplace AI with a fourth, as follows:
- AI boosts work performance.
- A large percentage of people are using AI at work.
- More transformational gains are available with today’s AI systems than most [people] currently realize.
- These gains are not being captured by companies.
Mollick goes on to offer this slightly obvious reconciliation: “AI use that boosts individual performance does not naturally translate to improving organizational performance.” He then gives an interesting, if long-winded, explanation of how process and corporate culture must be specifically aligned to make that happen.
I don’t particularly disagree with any of what he says in the post (entitled Making AI Work: Leadership, Lab, and Crowd – read it), but I believe our genius has missed some big factors in his four points.
To support fact number one, he reports that knowledge workers in Denmark say AI halved their working time for two out of five tasks, and that American workers say AI reduces 90-minute tasks to 30 minutes. For fact two, apparently “65% of marketers, 64% of journalists, and 30% of lawyers… [say they have] used AI at work.” Overall, the number was 40% of workers in April 2025.
But when people say they use AI, they usually mean they log on to ChatGPT or a similar LLM platform, and use it to generate a first draft or the like. This may make them more productive, but the time they’ve saved, and how they use it, might be neither recognised by, nor beneficial to, their employer. Further, as Mollick points out, “many workers are hiding their AI use, often for good reason, while others remain unsure how to effectively apply AI to their tasks, despite initial training.”
More importantly, online LLMs are only one of many applications of AI. A great many other, super-beneficial types and uses of AI are possible; even some 20-year-old technology platforms use machine learning. LLMs are neither the original form of AI nor the best – they are just the flashiest and most accessible. If AI is running in the background on your D&O Underwriting Workbench, for example, you may be blissfully unaware of it, whilst happily enjoying the productivity boost (since you no longer must read 600-page 10-Ks), because you think AI is ChatGPT.
The LLM versions of AI may make employees more efficient, but they won’t replace them any time soon. We still need the red flag. And as Mollick says, the companies that provide these tools don’t really know how they’re going to be used, and they don’t know your industry in particular. These tools are like tractors: the manufacturer does not know (or care) what you intend to pull, or where you want to go.
That’s okay. We’re only at the beginning, the red flag stage, of AI, at least in some of its forms. The boundaries of the possible are expanding rapidly. We already know how to use AI for the little things; now we need to think big. Companies don’t need to ask “what has AI done for me lately?” Instead they should ask whether AI can achieve the unimaginable, then set their engineers to it.
To make the unthinkable real, we simply need to understand interlinked processes, identify the pain points, and apply the right tools to suit the specifics of the situation. By thinking wider, we will achieve more – with no red flags.