Weather Modeling Pioneer Karen Clark Slams Use of Short Term Models

Yep, it’s official, ladies and gents: short term modeling doesn’t work too well according to Karen Clark:

Karen Clark & Company, independent experts in catastrophe risk, catastrophe models, and catastrophe risk management, today released a report on the performance of near term hurricane models. The report finds the models, designed to predict insured losses in the U.S. from Atlantic hurricanes for the five-year period ending in 2010, significantly overestimated these losses for the cumulative 2006 through 2008 hurricane seasons.

Slabbed readers can obtain a PDF of the report here. The press release continues:

Near term models were introduced in 2006 by the three major catastrophe modelers: AIR Worldwide (AIR), EQECAT and Risk Management Solutions (RMS). AIR initially predicted an overall annualized increase in hurricane losses of 40 percent above the long term average, but later lowered that figure to 16 percent in 2007. EQECAT predicted increases of between 35 and 37 percent, and RMS consistently predicted an overall increase of 40 percent above the long term average.

Assuming long term average annual hurricane losses of $10 billion for each year, these figures translate into cumulative insured losses for 2006 through 2008 of $37.2 billion, $40.8 billion, and $42 billion respectively, for the AIR, EQECAT and RMS models. The actual cumulative losses were $13.3 billion, far lower than the model predictions, and more than 50% below the long term cumulative average of $30 billion.
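
To see where those cumulative figures come from, here is a quick back-of-the-envelope sketch in Python. This is our reconstruction, not part of the report: the $10 billion baseline is stated in the release, while the year-by-year uplift factors (AIR's 40 percent applied to 2006 with 16 percent thereafter, and the 36 percent EQECAT midpoint) are assumptions inferred from the quoted percentages.

```python
# Back-of-the-envelope check of the cumulative loss figures quoted above.
# The $10 billion long term average is stated in the release; the year-by-year
# uplift factors are our inference from the quoted percentages.

LONG_TERM_AVG = 10.0  # long term average annual hurricane loss, $ billions

# Assumed uplift above the long term average for 2006, 2007 and 2008.
uplifts = {
    "AIR":    [0.40, 0.16, 0.16],  # 40% initially, lowered to 16% from 2007
    "EQECAT": [0.36, 0.36, 0.36],  # midpoint of the 35-37% range
    "RMS":    [0.40, 0.40, 0.40],  # consistently 40%
}

for model, yearly in uplifts.items():
    cumulative = sum(LONG_TERM_AVG * (1 + u) for u in yearly)
    print(f"{model}: ${cumulative:.1f} billion predicted for 2006-2008")

# Prints $37.2B (AIR), $40.8B (EQECAT) and $42.0B (RMS), versus $13.3B actual
# and a $30B long term cumulative average (3 x $10B).
```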

“With the close of the 2008 hurricane season, and three years into the application of near term hurricane models, it is a good time to evaluate the models’ performance,” said Karen Clark, President and CEO of Karen Clark & Company. “While it is still too early to make definitive conclusions about the near term models, with insured losses significantly below average for the cumulative 2006 through 2008 seasons, initial indications are there is too much uncertainty around year-to-year hurricane activity and insured losses to make credible short term predictions.”

Catastrophe models were introduced to the insurance industry in the late 1980s. By utilizing many decades of historical data, the models gave insurance companies better estimates of what could happen and, more specifically, the probabilities of losses of different sizes on specific portfolios of insured properties. The destructive 2004 and 2005 hurricane seasons were catalysts for introducing the near term models. Use of these models, which are based on short term assessments of hurricane frequencies, by insurance and reinsurance companies was a radical departure from the way in which catastrophe average annual losses (AALs) and probable maximum losses (PMLs) are typically derived.
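
For readers unfamiliar with the jargon, AALs and PMLs are typically read off a simulated distribution of annual losses rather than off a few recent seasons. The toy sketch below illustrates the idea only; it is not any vendor's model, and the event frequency, severity distribution and return period used here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy catastrophe model: simulate many hypothetical years of hurricane losses
# against a portfolio. All parameters below are made up for illustration.
N_YEARS = 100_000            # number of simulated years ("scenarios")
EVENT_RATE = 1.7             # average landfalling events per year (Poisson)
MU, SIGMA = 0.0, 2.0         # lognormal severity parameters, losses in $ billions

annual_losses = np.zeros(N_YEARS)
n_events = rng.poisson(EVENT_RATE, size=N_YEARS)
for year, k in enumerate(n_events):
    if k:
        annual_losses[year] = rng.lognormal(mean=MU, sigma=SIGMA, size=k).sum()

aal = annual_losses.mean()                    # Average Annual Loss (AAL)
pml_100 = np.percentile(annual_losses, 99.0)  # 1-in-100-year PML (99th percentile)

print(f"AAL: ${aal:.2f}B   100-year PML: ${pml_100:.2f}B")
```

The point is that these metrics summarize a full distribution of thousands of simulated years; they are not a forecast of what any particular upcoming season will do.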

According to the Karen Clark & Company report, in order for insured losses to reach 40 percent above average for the five year period, in line with the highest model predictions, the next two years will have to be similar to 2004, or there will have to be another Hurricane Katrina.
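
(Our arithmetic, using the release's $10 billion long term average: 40 percent above average over five years is about 5 × $14 billion = $70 billion. With only $13.3 billion of insured losses for 2006 through 2008, the 2009 and 2010 seasons would together have to produce on the order of $57 billion, roughly $28 billion per year, hence the comparison to 2004 and Katrina.)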

The report notes that hurricane activity is influenced by many climatological factors, many of which are known to scientists but some of which are not. There are complicated feedback mechanisms in the atmosphere that cannot be quantified precisely even by the most sophisticated and powerful climate models. The report recommends that insurers, reinsurers and regulators evaluate the efficacy of the near term hurricane models in light of this uncertainty.

“Standard, long term catastrophe models are characterized by a high degree of uncertainty, and short term assumptions on frequency and severity only magnify this uncertainty and the volatility in the loss estimates,” noted Ms. Clark. “While computer models are valuable decision-making tools, they can lead to bad business decisions when not used correctly. Model users frequently forget that all models are based on simplifying assumptions, and therefore all models are wrong. Models attempt to replicate reality, but they are not reality.”

And this snippet from the report itself:

Given that in many coastal areas the catastrophe loss cost is the most significant component of the property premium, we need to ask if property owners should be subjected to this increased uncertainty and volatility in determining insurance rates. As stated earlier, an original objective of the catastrophe models was to bring more stability to insurance markets by providing a more credible view of risk than short term experience. Other business decisions relying too heavily on short term predictions can also be disruptive to effective business strategies.

Of course, if we knew there was a long term trend in either hurricane frequency and/or severity, and the trends could be credibly quantified, that information should be captured in premium calculations and other risk decisions taken by insurance companies. But hurricane activity can change markedly year to year, as the past several seasons illustrate. Two or three active seasons in a row, even those as extreme as 2004 and 2005, do not necessarily indicate a continuous trend, particularly for hurricane landfalls and insured losses.

Conclusions

Three years into the application of near term hurricane models, the model predictions have not performed well. While all three major catastrophe modeling companies predicted significantly elevated hurricane activity and losses for the period 2006 through 2010, two of the past three years have been below average. Catastrophe models are designed to simulate thousands of potential scenarios of what could happen to an insurance company – not what will happen in any given year or short time period. While catastrophe models, used appropriately, can provide credible estimates of a company’s potential loss experience, the models are not able to predict where, when or how big actual events will be. While a definitive conclusion on the near term hurricane models cannot yet be made, early indications are that a five year period is too short for hurricane loss estimation.

Finally, here is the link to the National Underwriter story which broke this news. Thanks, Sam!

sop

5 thoughts on “Weather Modeling Pioneer Karen Clark Slams Use of Short Term Models”

  1. I don’t think anyone would dispute Ms. Clark on the need for methods to assess outcomes. Insurers have used plenty of such methods previously without misusing the modeling data.

    In fact I’m drawn back to the Slabberator and the Cat Intensity Control. It seems to me this has become a case of the industry not liking the outputs of their financial risk modeling, so they change the inputs to use short term (and unreliable) weather models and go to FLOIR with another 50%+ rate increase, despite dropping customers in Florida as quickly as they can.

    Is it any wonder Nassim Taleb has little use for the misuse of math theory in these areas? And of course the problem is not with the math; rather, it is with the people who misuse the math.

    sop

  2. I think the insurance people confused issues of global warming with short term variations in weather patterns. They would be better off looking at issues of short term supply and demand in the reinsurance market. That seems to be Mr. Buffett’s method, and he seems to have done well in the short term. And Ms. Carpenter mentions this issue in her paper. She notes that a number of years after a disaster, insurers come crowding back into a market and drive the profit back out.

    I should note that Buffett at least claims that he watches the upper loss limits very carefully when he sells his reinsurance. But I still suspect he is playing with fire. When you’re 80-something years old you’re not likely to get burned, but as Greenspan has shown, events can turn a reputation around very quickly.
