Guillaume Bonnissent’s Insurance Technology Diary
Episode 72: Hero worship

Having sons means I watch superhero films. After decades of character-building action on page and screen, some fictional fighters have highly developed, complex characters. We can learn a lot from them about life, love, and technology for insurance.
Consider Steve Rogers, the soldier who becomes Captain America. As he prepares for battle, he typically asks only a single question: “What’s the right move?” Rogers wants it clear-cut. He’s answer-oriented, and he doesn’t want to have to use his own judgement.
Contrast Tony Stark. When he’s preparing to fight the good fight as Iron Man, he tends to ask a whole bunch of questions to get a pretty clear picture of what lies ahead. He considers multiple possible scenarios before he decides on a course of action.
Unfortunately when it comes to Large Language Models, the superheroes of AI, users tend to be more like Captain America than Iron Man.
This week I finally got around to reading an article called “Probabilistic Multi-Variant Reasoning: Turning Fluent LLM Answers Into Weighted Options.” I confess the title had put me off, and the only reason it stayed in my inbox was the friendlier subtitle, “Human Guided AI Collaboration,” which sounds much more like my kind of lunchtime reading.
The article says that we often place too much responsibility on AI. We use it like an answers machine. We write a prompt, maybe play around with it a little, then stop when we like the answer. It’s the Steve Rogers approach.
The author, a chap named Alan Nekhom, says that instead we should treat AI like a scenario generator. We should ask it for multiple answers to the same question. We shouldn’t just ask and then do what we’re told, like a soldier. We should ask for options, then make a choice based on our own strengths and experience, like a successful modern industrialist.
We should all be more like Tony Stark. Make our Large Language Models work a little harder, then do some work ourselves. Ask for multiple opinions, garner our AI’s opinion about which of its outputs it ‘thinks’ is best, then use our human judgement to make our own choice. It’s a neat and nifty approach.
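Nekhom’s pattern can be sketched in a few lines of Python. The `ask_llm` callable below is a placeholder for whichever chat API you use, and the function names and prompts are my own illustration, not from the article; the point is the shape of the workflow: generate several candidate answers, ask the model for its own advisory ranking, then let a human decide.

```python
from typing import Callable, List

def generate_options(ask_llm: Callable[[str], str], prompt: str, n: int = 3) -> List[str]:
    # Ask the same question several times to collect candidate answers,
    # varying the framing slightly so the model doesn't simply repeat itself.
    return [
        ask_llm(f"Give answer {i + 1} of {n}, distinct from the others: {prompt}")
        for i in range(n)
    ]

def rank_options(ask_llm: Callable[[str], str], options: List[str]) -> str:
    # Ask the model which of its own outputs it 'thinks' is best.
    # The ranking is advisory only -- the human makes the final call.
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return ask_llm(f"Which of these answers is strongest, and why?\n{numbered}")
```

The deliberate design choice is that neither function picks a winner: the model proposes and comments, and the choice stays with the person running it.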
I saw another article by a different AI pundit this week. An AI developer named Matt Shumer wrote on X that AI is so good now that it can do all the work he used to do. That, he suggests, is why AI is so much better with every new release: the big providers use AI to build their new AI engines. It’s the Terminator scenario where the machines keep building better machines until they can make the puny humans redundant.
Shumer says not only is AI doing his job now, but it’s better at it than he ever was. He suggests that pretty soon AI will be better than us at law, finance, medicine, accounting, consulting, writing, design, analysis, and customer service. For him, AI is no superhero. It’s the Green Goblin, a villain intent on mayhem and the destruction of order.
Shumer is right: AI can do amazing things. It can write clean, working code. Yet we at Quotech always have to spend time tweaking the outputs. We always ask for options. We work the AI to get the right result, rather than wandering off to make coffee whilst it does our jobs for us. And it happily and consistently produces those options.
Nekhom is right, but he’s describing a problem with the way LLMs are used, not with the tools themselves. AI is great at coding, but not so great at building solutions that make people’s jobs easier. That goal still demands an expert human who uses AI coding tools to develop alternatives. Human-guided AI collaboration is required.
With that collaboration in place, the reality of AI sits somewhere between Nekhom’s answer machine and Shumer’s human replacement. My conclusion that both writers are correct has practical implications for underwriters using LLMs in their day-to-day speciality re/insurance underwriting processes.
Think about your junior underwriters. You constantly check on them. When they prepare a quote, you don’t simply take their rate and send it to the broker. They’re not answer machines. You ask endless questions: Have you considered this? Have you thought of this? What about doing it this way?
You must treat LLMs like junior employees. Assume they don’t know what they’re doing. Ask them questions. Treat them like scenario generators. And don’t simply take their output and rely upon it. They’re not superheroes. But, like superheroes, they can go rogue. You must have controls in place. You need people in the loop at every stage. You need human-guided AI collaboration.
But LLMs aren’t workplace supervillains, either. They’re the opposite of the Green Goblin. They won’t replace your junior underwriters. Instead they will make the youngsters more efficient. Increasingly their role will be to use and question AI tools to get a clear picture of every risk. A more efficient junior makes a more efficient you.
* Like every Insurance Technology Diary entry about AI, this one is accurate only to the best of my knowledge at the time of writing. The pace of AI progress is so great that I cannot guarantee it remains so now that it’s finished, let alone when you read it.
Guillaume Bonnissent is CEO of Quotech.
