Assessing The Assessment: Cyber Risk Scoring in the Ransomware Era

Corvus's cybersecurity score was built more than two years ago. Has time, and the rise of ransomware, affected its accuracy?

  • The absence of a long historical record of cyber claims data led Corvus to develop a cybersecurity risk score

  • Whether or not the score would remain accurate and predictive over time, as trends in cybercrime shifted, was an open question

  • Now, an analysis shows the correlation between the Corvus Score and claims over two years, a span of time that includes the sharp rise of ransomware

Underwriting Unconventionally

Cyber risk is notorious within insurance circles for being difficult to underwrite. 

We’ve noted on many occasions that historical claims data is of limited usefulness for cyber underwriting: the purchase of cyber insurance wasn’t widespread until very recently, and few insured organizations mean few claims. Limited uptake can also introduce a variety of biases into what data does exist. Compounding the challenge, trends and attack methods have shifted so much in response to changes in technology that today’s risk landscape already looks much different than 2015’s, or even 2018’s.

But in our early diagnosis of the challenge of cyber underwriting, we were wrong about one thing. And we are happy to have been.

If you’ve worked with Corvus before, you know that our solution to this lack of reliable historical data includes capturing much more information about the current state of risk, from numerous sources, and matching it with a comprehensive set of data about each policyholder that’s gathered by our IT security scan.

Even with up-to-the-moment data, though, this approach still relies upon some assumptions about risk that have been gathered from real-world events (i.e., historical data). So while we have a much richer array of data points than a traditional underwriter, as well as proxies for digital activity and risk, how we interpret those signals is in part based on a record of past events.

From the outset, we understood that there were some unknowable aspects to this approach. How deeply could we trust this scant historical data to predict events in the future? Would the score we developed to encapsulate risk remain as accurate in the future as it was when it was created?

Based on everything we could know, we were confident in the durability of our model, but knew that only time would tell. Now, for the first time we’re excited to share some of our findings that relate our Corvus Score to actual claims data.  

The findings: let’s dig in 

We were right that historical data for cyber underwriting was unproven, and our skepticism at the time was healthy. But as it turns out, we were wrong to be so suspicious of its potential usefulness over time. The base assumptions behind Smart Cyber’s unconventional underwriting work, even as trends and attack methods change.

First, a little background on our scoring model. 

For each account we underwrite, we generate a Corvus Score based on the outcome of our proprietary IT security scan. Corvus Scores are built to align roughly with letter grades. We’d expect an organization with a D grade (a score in the 60s) to be more likely to have a claim than one with a score in the 90s. We might also guess that claims would be more severe when scores are lower, since weaker defenses can lead to more financially devastating attacks (not just a higher chance of an attack).

After more than two years of running a real-world "experiment," the central idea behind the score has proven true. When we compare policyholders' original scores to the frequency of claims and severity of losses they've experienced, significant correlations are evident. (We run scans frequently for our policyholders, but in this case we examined only the original scores generated upon quoting the policy, since these are the scores that would have impacted the underwriting decisions.)


[GRAPH] Claim Rate vs. Origin Score

Frequency of Claims: Percentage of Clients with a claim per score grouping* 

[GRAPH] Average Loss per Policy vs. Origin Score

Average Loss: Average incurred loss per policy by score grouping*

*Each score grouping shown here contains an equal number of policies, which explains variation in the score spreads represented on the x axis.   
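The equal-count grouping described in the footnote is a standard quantile-binning analysis. As a minimal sketch of how such an analysis could be run (using entirely synthetic, illustrative data; the column names, score distribution, and claim probabilities here are assumptions, not Corvus's actual data or pipeline):

```python
import numpy as np
import pandas as pd

# Hypothetical policy data: one security score per policy, plus a flag
# for whether that policy had a claim. Purely illustrative numbers.
rng = np.random.default_rng(0)
scores = rng.uniform(55, 100, size=1000)
# Assume (for illustration) that lower scores carry higher claim probability.
claims = rng.random(1000) < (100 - scores) / 100

df = pd.DataFrame({"score": scores, "claim": claims})

# Equal-count groupings: qcut puts the same number of policies in each
# decile, so the score ranges (bin edges) vary -- as the footnote notes.
df["decile"] = pd.qcut(df["score"], q=10)

# Claim rate per grouping: percentage of policies with a claim.
claim_rate = df.groupby("decile", observed=True)["claim"].mean()
print(claim_rate)
```

With any dataset where risk falls as score rises, the lowest-scoring decile would show a markedly higher claim rate than the highest, mirroring the pattern in the charts above.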


Comparing scores of 90 and above against all lower scores (the majority of which fall between 80 and 89), policyholders in the 90+ group see a 50% reduction in the likelihood of any claim, and a 25% reduction in average losses per policy.

Likewise, comparing the lowest-scoring decile to the highest-scoring decile, we see a 10x difference in likelihood of a claim. 

Even slicing the data to segment by the size of the policy (total premium dollars), the overall trend holds up. For instance, the largest risks, which pay the highest premiums, see higher rates of claims as a group than smaller policies do, but within that group the original Corvus Score still accurately predicts the likelihood of claims and average loss.

The bottom line is that the correlations are significant. We hope this gives brokers working with Corvus added confidence in presenting our scores as predictive of risk. Paired with our IT security recommendations, this can help convince clients to take action on improving their Corvus Score in order to reduce the risk of a cyberattack.


One reason we think these findings are of particular interest right now can be summed up with one word: ransomware. 

Ransomware has caused a great deal of hand-wringing in the cyber insurance market. After more than a year of intense ransomware activity (a major uptick in attacks was observed around mid-2019 and has been sustained since), insurers are seeing a troubling trend in claims. Assumptions about cyber risk that were formed before this wave are leading to unbalanced books of business.

Corvus is not immune to these claims (although we have reduced the frequency of ransomware events by detecting exposed Remote Desktop Protocol, or RDP, services). Notably, though, our score has remained as predictive as ever. The ransomware wave simply doesn’t show up in the data. In fact, the correlations are slightly stronger now than they were prior to ransomware’s rise.

To us, this means that the security criteria we believed were predictive of cyber risk when we launched Smart Cyber Insurance in 2018 have proven resilient. The fact that the model was able to absorb a major shift in practices by cyber criminals is encouraging as we gaze into the unknowable future of cybersecurity. It demonstrates that cyber criminals continue to focus on the same types of IT security weaknesses that the Corvus Scan targets, despite their evolving tactics. We’ll continue to be vigilant and incorporate the best, newest data we can to improve our model, but we also know that our assumptions won’t necessarily be made irrelevant by the changing tides.

Perhaps we were too critical of the historical record available to us for cyber underwriting. It is short, but mighty.
