
Episode 55: Don’t start from here

Guillaume Bonnissent’s Insurance Technology Diary

Years ago I worked a stint as a PA underwriter at a big Lloyd’s syndicate. As I stepped into the class, one of the first things I was given was the agency’s rating scale for UK football players. The base price to be charged increased with the age of the player, but was capped at 32 years. For reasons that were never explained, goalkeepers paid more for cover. If the club was located in the north, the rate was higher because the ground was deemed colder there, and injuries therefore more likely.

I was instructed to start with the base price, then modify the rate for each individual’s specific circumstances. When I questioned the logic of all those assumptions, I received the answer that seems to be encoded in the London insurance market’s DNA: “It’s how we’ve always done it.”
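To make the approach concrete, here is a minimal sketch of the kind of rule-based scale I was handed. Every figure, loading and threshold below is invented for illustration; it is not the syndicate’s actual rating.

```python
# Illustrative sketch of a legacy rule-based PA rating scale.
# All numbers are invented for illustration only.

def legacy_rate(age: int, is_goalkeeper: bool, northern_club: bool) -> float:
    """Return an annual rate built from fixed, unquestioned assumptions."""
    base = 0.50 + 0.05 * (min(age, 32) - 18)   # base price rises with age, capped at 32
    if is_goalkeeper:
        base *= 1.25                            # goalkeepers deemed riskier
    if northern_club:
        base *= 1.10                            # "colder grounds" loading for northern clubs
    return round(base, 3)

# A 34-year-old goalkeeper at a northern club is priced exactly as if
# he were 32: the scale encodes the assumptions, not the loss data.
print(legacy_rate(34, is_goalkeeper=True, northern_club=True))
```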

Loss data for the book later fell into my hands. I visited the actuaries’ office at the back of the building, and asked them to compare the rating scale against the actual loss experience. Outcomes had less correlation with the scale than “bee stings to the speed limit,” I was told. It was like the old joke about giving directions, the actuary said. “If you want to get to risk-based pricing, don’t start from here.”

This all came to mind when I read a recent article by veteran insurance journalist Paul Carroll. He recalled how IBM’s programmers built the Deep Blue supercomputer to disarm chess masters, and how Google DeepMind trained AlphaGo to stop Go champions in their tracks. More interesting, though, was how his article went on to reveal an even better way to get AI to beat the incumbents.

When teaching those early AIs to beat humans at games, their programmers logically trained them on everything we know about strategies for winning chess or Go. But these first-generation man-beating technoplayers were later routed by their own successors in machine-versus-machine showdowns.

The secret? Tell the AI almost nothing. It turns out that when we don’t feed the models everything we think we know, they do even better. A DeepMind model given nothing beyond the bare rules of Go – no human games, no received strategy – thrashed its better-informed predecessor 100 games to nil.

So much for games. DeepMind was recently put to the test of predicting hurricane tracks, and proved very much up to the task. Just as with game play, the forecasts improved when the human input was stripped back. Rather than being told to draw on everything we think we know about the physics of hurricanes, the model did best when it learned from historical hurricane data alone.

Google deployed its DeepMind ‘Weather Lab’ model to predict the track and intensity of Hurricane Erin, the first really big storm of the current season. According to Ars Technica journalist Eric Berger (another old hand), the model “not only beat the ‘official’ track forecast from the National Hurricane Center, but also bested a number of physics-based models that make global forecasts as well as hurricane-specific models.” In other words, AI does better when we don’t feed it limiting assumptions.

There’s a powerful lesson here. All too often in our sector, when we set out to construct a new data management platform or build an underwriting system, we begin by operating within a set of parameters based on what we think we already know. These are usually determined by how we have always done things. We immediately lay out boundaries within which our new tech has to function – natural enough, but obviously limiting.

An example is the bordereau. By its very nature, this uniquely insurance-market document is a list of things. We very often set out to use technology to improve the way we compile the list, but DeepMind and Deep Blue would virtually scoff at this approach. They would, no doubt, find a super-efficient way to supply each risk or loss report directly to the required place, in real time and in a hyper-flexible format. Or, just maybe, they’d come up with an even more efficient solution. The less we tell them how, the more they can imagine.
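To put the contrast in concrete terms, here is a minimal sketch of the two approaches. The record fields and the publish step are hypothetical, chosen only to illustrate the shift from a compiled list to record-by-record delivery.

```python
# Sketch of the contrast: compiling a periodic bordereau list versus
# sending each record onward the moment it exists. Field names and the
# publish target are hypothetical.

from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class RiskRecord:
    policy_ref: str
    insured: str
    inception: date
    premium: float

def compile_bordereau(records: list[RiskRecord]) -> list[dict]:
    """Traditional approach: batch everything into one periodic list."""
    return [asdict(r) for r in records]

def publish(record: RiskRecord) -> None:
    """Alternative approach: deliver each record as soon as it is created."""
    # In practice this might be an API call or a message on a queue.
    print(f"sent {record.policy_ref} in real time")

new_risk = RiskRecord("POL-001", "Example FC", date(2025, 7, 1), 12_500.0)
month_end_list = compile_bordereau([new_risk])   # the list-of-things approach
publish(new_risk)                                # the record-by-record approach
```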

We multiply the boundaries by working within the confines of legacy systems. They automatically limit what our platforms can do by forcing them down to the lowest common denominator of our existing systems. The work-around is incremental implementation of new systems at team level, rather than the ten-or-so-year big-bang approach to technological renewal.

That also gives, say, the marine team the chance to work without the assumptions that suit, say, the PA people. It’s an approach to the technological revolution that means you can start from right where you are standing, and let the tech operate on an empty field.

Guillaume Bonnissent is Chief Executive of Quotech.