Insurance Data Diary
No. 3, October 2022
Insurance data and technology commentary and news from Quotech Founder Guillaume Bonnissent.
Clear evidence of perils
Swiss firm PERILS provides natural catastrophe exposure and event loss data for seven perils in 21 European countries. The aptly named organisation has just expanded its coverage to include flood events. At the same time, PERILS announced it has raised the severity threshold for its reporting on extratropical windstorms by 150%, from €200 million to €500 million. The change is driven, PERILS said, by “market growth and claims inflation over the last 13 years.”
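As a quick sanity check on the figures above, the jump from €200 million to €500 million does indeed correspond to a 150% increase:

```python
# Verifying the PERILS threshold change reported above.
old_threshold = 200_000_000  # euros - previous windstorm reporting threshold
new_threshold = 500_000_000  # euros - new threshold

increase_pct = (new_threshold - old_threshold) / old_threshold * 100
print(f"Threshold increase: {increase_pct:.0f}%")  # prints "Threshold increase: 150%"
```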
Each of the paired announcements delivers an interesting message. The first underscores a fundamental reality about data (for insurance or for any sector). It must be collected and compiled before it is useful. That much is obvious, but the fact that comprehensive European flood data is only now being compiled by PERILS shows how far we have left to go.
Data must come first. Without it, we cannot achieve the efficiencies promised by technology for insurance. The multiple sources from which it can be obtained range from brokers to third-party providers like PERILS to resellers like Quotech, but the best data remains a company’s own. It is therefore remarkable that many insurers, MGAs, and brokers have yet to categorise and store their own data so that it is accessible to potential users in practical, efficient ways – functions which neatly define the role of insurance technology companies.
That takes us to the second (and also obvious) message from PERILS: valuable insights can be gleaned from data. The decision to ditch its old €200 million baseline for windstorm losses shows this. Clearly the company has decided, based on the numbers, that a €200 million loss can now be considered attritional. But PERILS can have done so only after its accumulated data was categorised, stored, and made accessible in a simple way to the individuals who need it for analysis.
If we were able to look deeper into the relevant PERILS datasets we would no doubt see why. With the right technology in place to categorise and store the PERILS data, and to make it accessible to users in a practical way, we could generate powerful knowledge with impact reaching from aggregation to reinsurance attachment points. In short, data is valuable when it has been collected, parsed, and presented. That’s the role of insurtechs.
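The "categorise, store, make accessible" chain described above can be illustrated with a minimal sketch. The event records and field names below are invented for illustration – they are not the PERILS data model – but they show how trivial a threshold question becomes once loss data is held in a consistent structure:

```python
# Hypothetical, simplified loss-event records (not the real PERILS schema).
ATTRITIONAL_THRESHOLD = 500_000_000  # euros - PERILS' new windstorm reporting threshold

events = [
    {"peril": "windstorm", "country": "DE", "loss_eur": 180_000_000},
    {"peril": "windstorm", "country": "FR", "loss_eur": 750_000_000},
    {"peril": "flood",     "country": "AT", "loss_eur": 320_000_000},
]

# Once events are categorised and stored consistently, asking
# "which windstorm losses still clear the reporting threshold?" is one line.
reportable = [
    e for e in events
    if e["peril"] == "windstorm" and e["loss_eur"] >= ATTRITIONAL_THRESHOLD
]
print([e["country"] for e in reportable])  # prints "['FR']"
```

The point is not the code itself but the precondition: none of this analysis is possible until the raw data has been collected, parsed, and presented in a usable form.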
A.I. proposals foiled
In the previous edition of Insurance Data Diary I reported briefly on the UK’s ongoing consultation on the use of Artificial Intelligence. The document’s authors propose a “proportionate, light-touch and forward-looking” regulatory framework, and invite input. The august Forum of Insurance Lawyers (FOIL) has now weighed in, through an article in Insurance Edge.
The solicitors, it seems, don’t like the idea of principles-based regulation governing the use of AI. That would leave decisions in the unguided hands of regulators, which FOIL believes is inadequate. “Guidelines are not law,” FOIL says. “The mere fact that a Regulator may seek to prohibit a particular practice by the imposition of rules and edicts does not mean that the practice in question is automatically contrary to law.”
We should not be surprised at lawyers arguing for more laws, but the interest comes at the conclusion of FOIL’s article. “The proposal arguably provides a timely opportunity for AI-utilising companies to audit and offset legal and financial risk with sound cyber security and data management frameworks,” the lawyers declare as an aside. This is an utterly sensible agenda which should be adopted no matter what form the regulation of AI in insurance ultimately takes.
For more of my thoughts on AI in insurance, see my latest Quotech blog. A link is below.
Elsewhere in the news…
Some data stories that caught my eye:
Data management is getting worse, according to the second annual Data Health Barometer survey by Talend. Their poll of about 900 companies in multiple sectors found that 99% recognise data as crucial to success, but 97% face challenges in using data effectively. The survey recorded a ten-point decline year on year in respondents’ satisfaction with data timeliness, accuracy, consistency, accessibility, and completeness.
Data faces a climate challenge, according to a recent article in the tech magazine Wired. The problem is heat. Apparently this summer’s heatwave overwhelmed some data farms’ air conditioning systems, which were designed for an earlier, cooler climate. “The data [used to calibrate cooling] is historical, and represents a time when temperatures in the UK didn’t hit 40 degrees Celsius,” the magazine reports. That threshold, as we all remember, was smashed this year. Sadly, though, predicting the future with old data is a familiar problem.