Cat model pioneer cautions insurers on over-reliance on models

Insurers are relying too heavily on catastrophe models, says Karen Clark, a pioneer of catastrophe modeling. Two years ago, Clark was honoured with an award certificate for the Nobel Peace Prize for her work on severe weather events, which contributed to the Intergovernmental Panel on Climate Change (IPCC).

While Clark concedes the importance of complex models in helping to provide a consistent decision-making framework and “expert quantification of key system variables,” she is concerned that “model precision can be confused with accuracy.”

Why the Reliance on Cat Models
This confusion occurs when the industry relies on, and emphasizes, the probable maximum loss—and this leads to inaccurate assessments, said Clark.

“The models are still approximations. They are subject to significant uncertainty, and provide estimates, not answers.”

Despite industry awareness of the limitations of these models, relying on the quantification they provide appears to have become the industry standard.

According to Clark, a large part of the problem arose when rating agencies got involved.

“Pressure from rating agencies to look at the models drove a lot of this over-reliance on point estimates, which come out of the models.” As a result, a company’s required capital is measured against the metrics from the cat models, explained Clark.

Mike Mangini, vice president and catastrophe manager at Chubb Insurance, considers models an important part of doing business—because of what they offer, and because of industry expectations—but acknowledges that there are limitations.

“[Cat modeling] is a great internal tool [that] gives us an insight into the risk we are taking on,” explained Mangini. However, he is quick to concede that models cannot predict everything.

“They are a necessary tool recognized throughout the industry as a gauge,” explained Mangini. As a result, reinsurers, rating agencies and even cat bond investors will often examine the potential risks held by insurers based on the models. “On top of that state and federal governments all require [carriers to] submit model output,” said Mangini.

Why Cat Models Are Required
Clark introduced her predictive models in the 1980s, when “companies were just using rudimentary underwriting formulas multiplying premiums by factors, and this wasn’t sufficient.”

These first models brought together three important components: hazard, engineering and exposure data. They showed that the industry was underestimating the loss potential by a factor of 10, said Clark. By the early 1990s almost every insurer and reinsurer had adopted predictive modeling.
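The way those three components fit together can be pictured with a small sketch. The Python fragment below is purely illustrative: the damage curve, wind speeds and dollar amounts are invented assumptions, not figures from Clark’s or any vendor’s actual model, but they show how a hazard intensity, an engineering damage relationship and exposure data combine into a loss estimate.

```python
# Illustrative only: hazard intensity -> damage ratio (engineering) -> loss on exposure.
def estimate_event_loss(hazard_intensity, exposure_value, damage_curve):
    """Return an estimated loss for one event at one insured location."""
    damage_ratio = damage_curve(hazard_intensity)   # engineering component
    return damage_ratio * exposure_value            # applied to exposure data

# A crude, hypothetical step function relating wind speed (mph) to the
# fraction of building value expected to be lost.
def simple_wind_damage_curve(wind_speed_mph):
    if wind_speed_mph < 75:
        return 0.0
    if wind_speed_mph < 110:
        return 0.05
    if wind_speed_mph < 140:
        return 0.25
    return 0.60

# Example: a 130 mph event striking $2,000,000 of insured value.
loss = estimate_event_loss(130, 2_000_000, simple_wind_damage_curve)
print(f"Estimated loss: ${loss:,.0f}")   # -> Estimated loss: $500,000
```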

While the models improved upon the basic formulas that they replaced, they are not a panacea for planning and assessing the impact of catastrophes and severe weather patterns, explained Clark.

“Accuracy should not even be in our vocabulary when we talk about these models,” said Clark. This is because the models (and their creators) are constrained by the data.

Data Constraints Hurt Cat Models
“Models are a great tool, but we can’t use them as a magic black box that spits out the answer,” explained Clark. “We need to get back to a happy medium. Every company should know what the models say about the loss potential for their portfolio, but no company should believe what the model says.”

Claire Souch, vice president of model management at Risk Management Solutions (RMS), a leading catastrophe modeling company, agreed with Clark on the uses and limitations of modeling.

“The main purpose of a cat model is to produce a view of risk that is based on a scientific assessment that extends way beyond most companies’ loss history.”

Souch explained that as a cat modeling company, RMS uses a “mixture of scientific assessment, [and] external researchers and academics around the world to build a view of what the distribution is of all possible events that could strike a particular region, and then, for each of those events, a probability of occurrence.”
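A rough sketch of the structure Souch describes: a catalogue of simulated events, each carrying a loss and an annual probability of occurrence, is turned into an exceedance-probability curve from which point estimates such as a probable maximum loss are read off. The event set and numbers below are invented for illustration, not output from RMS or any other modeler.

```python
# Hypothetical event set: (loss to the portfolio, annual probability of occurrence).
event_set = [
    (5_000_000,   0.020),
    (20_000_000,  0.008),
    (75_000_000,  0.003),
    (250_000_000, 0.001),
]

def exceedance_probability(threshold, events):
    """Approximate annual probability that losses meet or exceed `threshold`."""
    return sum(p for loss, p in events if loss >= threshold)

# Printing the curve makes the point about point estimates: a "1-in-100-year"
# loss is a single threshold read from this whole distribution, not an answer.
for threshold in sorted(loss for loss, _ in event_set):
    print(threshold, exceedance_probability(threshold, event_set))
```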

The limitation with this set-up is the number, consistency and accuracy of records available for each geographic region.

“The U.S. and Canada have really good historical records, but in other parts of the world that is not going to be the case,” said Souch. However, the best source of historical severe weather records is Europe, where modelers found an account describing an enormous windstorm that struck during the 15th century.

Because of these limitations, RMS does forewarn clients of the dangers of over-relying on the models. “There are things that happen that are [simply] not in the model,” said Souch.

The most recent example was with Hurricane Katrina in 2005. “Katrina, and its impact on the gulf coast, created a situation the modelers were not prepared to handle,” explained Mangini.

The real problem was not the severity of the storm, but the extended amount of time that elapsed between the hurricane’s landfall and the ability of residents and business owners to return to their property for assessment and clean-up. “That lag of weeks and months led to losses that were amplified.” Those amplified losses were not part of the cat model predictions.

Insurers’ Role in Cat Model Predictions
Mangini also believes insurers have a valuable role to play in the cat modeling process.

“Before you even start looking at the loss that is being estimated by the model, start with the data you are actually putting into the model,” Mangini said. “Do you have good insurance to value on the risks? Do you know the construction? The occupancy? The number of stories [of a building]? The year [it was] built? If you don’t have that basic information then you haven’t created a foundation and without that solid foundation, anything the model produces and all the science [behind it] will be diluted.”
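As a rough illustration of the input check Mangini describes, the sketch below flags exposure records that are missing the basics he lists. The field names and the example record are hypothetical assumptions chosen for illustration, not any carrier’s actual schema.

```python
from dataclasses import dataclass, fields
from typing import Optional

# Hypothetical exposure record covering the fields Mangini mentions.
@dataclass
class ExposureRecord:
    insured_value: Optional[float] = None   # insurance to value
    construction: Optional[str] = None      # e.g. masonry, wood frame
    occupancy: Optional[str] = None         # e.g. office, hospital
    num_stories: Optional[int] = None
    year_built: Optional[int] = None

def missing_fields(record: ExposureRecord) -> list[str]:
    """Return the names of any unpopulated exposure fields."""
    return [f.name for f in fields(record) if getattr(record, f.name) is None]

# Example: a record with only value and construction filled in.
record = ExposureRecord(insured_value=2_000_000, construction="masonry")
print(missing_fields(record))   # -> ['occupancy', 'num_stories', 'year_built']
```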

“There are uncertainties and so one should not look at the model output as a final answer,” Mangini stated. “One should think of it as guidance as to how we approach the catastrophe model question. The bottom line is that it is a model, not an end all.”

Chris Brophy, a director at LECG and a certified public accountant with 16 years’ experience assisting policyholders with complex insurance claims, agrees with Clark’s assessment of the industry’s over-reliance on predictive models, but also believes that, despite their limitations, these models are the best the industry has at the moment.

Need for Industry Specific Data
Brophy believes that more accurate data and more differentiation are the key to improving the models’ accuracy. For Brophy, this accuracy would also be enhanced by taking into consideration the intricacies of each specific industry sector.

Brophy came to this conclusion after working with a series of hospitals that were affected by Hurricane Katrina. While he acknowledges that all businesses suffered losses from the hurricane, hospital losses were particularly significant. The key reason: unlike other businesses, hospitals don’t close, said Brophy.

Souch agrees with Brophy. “When we work with clients we work with them to understand how their specific book of business will be compared with the average book of risk. Knowing, for example, information about specific buildings might alter the loss adjustment [calculation].”

Yet, despite all the talk about the need for more information to develop more accurate models, Clark is not optimistic. “We have so few data points, so little information, there is not much modelers can do to make them more accurate. We can certainly fine-tune the models and improve them, but even if we do that, there is still this huge amount of uncertainty.”

Originally published in Canadian Insurance Business Magazine in August 2009 and co-written by Alex Vizer