
02.03.21

Chris Hedenberg

(Re) Building a Ransomware Risk Score for the Future

 


Re-developing this score provided an opportunity to rethink how we build scoring mechanisms. 

As ransomware rose to become the single biggest driver of cyber insurance claims in 2020, we felt that this aspect of cyber risk deserved more detailed reporting for brokers and policyholders. So we got to work. We decided to re-create one aspect of our overall cyber risk score, adding more detail and providing a separate report page in Smart Cyber quotes. You can read about the specifics of the score here.
 

As the Data Science team embarked on this project, we knew our biggest challenge was improving our ability to deal with newer or otherwise hard-to-quantify attack vectors. Over time, our original scoring model has proven resilient in properly assigning risk, as we recently found in a long-term analysis of claims and scores. But the process of integrating new information into the score involves a lot of work and creates uncertainty about how it will affect scores broadly. Developing this score provided an opportunity to rethink how we build scoring mechanisms.

Future-proofing the score

Our existing cyber risk model is based on numerous sources of real-world data that link events (cyber incidents) with information about the IT systems that experienced them. Vulnerabilities that appear frequently in our data, either because they have been around for a long time or because they are very common, are assigned a weight by the model based on their link to reported incidents. That weighting is updated based on the continued frequency and severity of incidents. Whenever we run a scan on an organization, we can rely on the model to tell us how much risk most vulnerabilities pose if we find them on that organization’s IT systems.
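To make the idea concrete, here is a minimal Python sketch of how a weight of this kind could be derived from linked incident data. The records, field names, and formula are invented for illustration; this is not our production model.

```python
# Illustrative only: derive a per-vulnerability weight from hypothetical
# records that link a scan finding to any incident on that system.
from collections import defaultdict

# "severity" is a hypothetical normalized loss measure in [0, 1].
observations = [
    {"vuln": "open_rdp", "incident": True,  "severity": 0.8},
    {"vuln": "open_rdp", "incident": False, "severity": 0.0},
    {"vuln": "old_tls",  "incident": False, "severity": 0.0},
    {"vuln": "old_tls",  "incident": True,  "severity": 0.3},
]

def vulnerability_weights(records):
    """Weight = incident rate among systems with the finding, scaled by
    the average severity of those incidents. Purely illustrative."""
    counts = defaultdict(lambda: {"n": 0, "hits": 0, "sev": 0.0})
    for r in records:
        c = counts[r["vuln"]]
        c["n"] += 1
        if r["incident"]:
            c["hits"] += 1
            c["sev"] += r["severity"]
    return {
        v: (c["hits"] / c["n"]) * (c["sev"] / c["hits"] if c["hits"] else 0.0)
        for v, c in counts.items()
    }

print(vulnerability_weights(observations))  # {'open_rdp': 0.4, 'old_tls': 0.15}
```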

But what happens when a new vulnerability arrives on the scene, or we want to account for a known risk factor that is not easy to see in the data we have? 

From a data science perspective, this creates a quandary. We might know that expert security researchers are saying such vulnerabilities are a major problem. We might see reports in the news of attacks that have the hallmarks of the vulnerability. But compared with the rest of the inputs in the model, there’s a relatively tiny amount of actual data to support a scoring mechanism. Without lots of data, it’s hard to do data science. 

Hard, but not impossible. 

As we developed the ransomware risk score, we had a real-world experience like the one just described. As we observed the rise in ransomware attacks, we knew that exploits of BlueKeep, a known security vulnerability, were relatively rare in our data set — both because it was only about a year old and because a single vulnerability will always be less common than security factors that are more universal. (Corvus began alerting policyholders to the presence of BlueKeep on their systems over a year ago.)

But just because it was rarer didn’t mean it was any safer for the companies that did harbor the vulnerability. We knew from security experts that it posed a serious risk. So if we were going to develop a more accurate score of ransomware risk, we needed a way to properly rate the significance of something like BlueKeep, and also to future-proof the score by building it in a way that lets us add new vulnerabilities as they emerge.

To accomplish this we turned to a Bayesian model, which lets us incorporate assumptions from experts about the risk posed by any new vulnerability. We run the model with these expert assumptions expressed as a range of possible degrees of impact for each variable. As we gather more real-world data about these risks, the model adjusts the weighting implied by the original expert assumption, and eventually the impact rating is backed by as much data as the longstanding vulnerabilities.
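As a simplified sketch of the updating mechanics (not the model we run in production), a conjugate Beta-Binomial example shows the idea: the expert’s assumed range becomes a prior over a vulnerability’s incident rate, and observed data shifts and narrows it. The prior parameters and counts below are invented.

```python
# Expert assumption as a prior, gradually overridden by real-world data.
from scipy import stats

# Expert belief: "systems with this finding see incidents roughly 10-40%
# of the time." A Beta(3, 9) prior puts most of its mass in that range.
prior = stats.beta(3, 9)

def posterior(prior_a, prior_b, incidents, clean):
    """Conjugate Beta-Binomial update: data shifts and narrows the prior."""
    return stats.beta(prior_a + incidents, prior_b + clean)

early = posterior(3, 9, incidents=2, clean=10)      # little data: prior dominates
mature = posterior(3, 9, incidents=150, clean=850)  # lots of data: evidence dominates

for name, d in [("prior", prior), ("early", early), ("mature", mature)]:
    lo, hi = d.interval(0.9)
    print(f"{name:6s} mean={d.mean():.3f}  90% interval=({lo:.3f}, {hi:.3f})")
```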

Side note: a big part of this project that we’re not digging into here was solving the computational challenges that come with assessing the final impact of the variables in this kind of model. Our team’s use of leading computational software, such as Python’s PyTorch package for machine learning, together with efficient sampling and variational inference techniques, allowed us to optimize the model’s performance and speed.
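For readers curious what that looks like in code, here is a hedged PyTorch sketch of mean-field variational inference for a Bayesian logistic regression with expert-informed Gaussian priors on each factor’s weight. All priors, data, and names are invented; the production model is considerably richer.

```python
# Toy mean-field VI in PyTorch: expert priors on risk-factor weights,
# refined by (synthetic) claims data. Illustrative, not production code.
import torch

torch.manual_seed(0)

# Rows are scanned organizations; columns are hypothetical findings
# (e.g. [bluekeep_present, open_rdp]); y = 1 if a ransomware claim occurred.
X = torch.tensor([[1., 1.], [1., 0.], [0., 1.], [0., 0.]] * 50)
y = torch.tensor([1., 1., 0., 0.] * 50)

# Expert priors: assumed mean effect and uncertainty per factor.
prior_mu = torch.tensor([1.5, 1.0])
prior_sigma = torch.tensor([1.0, 1.0])

# Variational parameters of q(w) = Normal(mu, softplus(rho)).
mu = torch.zeros(2, requires_grad=True)
rho = torch.full((2,), -1.0, requires_grad=True)
opt = torch.optim.Adam([mu, rho], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    sigma = torch.nn.functional.softplus(rho)
    # Reparameterized sample from the approximate posterior.
    w = mu + sigma * torch.randn(2)
    logits = X @ w
    log_lik = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, y, reduction="sum")
    # Analytic KL(q || prior) between diagonal Gaussians.
    kl = (torch.log(prior_sigma / sigma)
          + (sigma**2 + (mu - prior_mu)**2) / (2 * prior_sigma**2) - 0.5).sum()
    loss = kl - log_lik  # negative ELBO
    loss.backward()
    opt.step()

print("posterior means:", mu.detach(),
      "stddevs:", torch.nn.functional.softplus(rho).detach())
```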

Experts, Meet Data

Thanks to the new model, we were able to quantify the heightened risk of BlueKeep-based attacks. Most encouragingly, we now have a process by which we can integrate new risk factors that arise. 

In fact, just a couple of weeks after launching the ransomware risk score we got to put this future-proofing to the test. Following the massive SolarWinds breach, we were able to add new assumptions to the model about certain vulnerabilities associated with that hacking event, given the increased likelihood of criminals exploiting them. This way, we don’t have to wait until a slew of adverse events happen — we can anticipate that they are on the way, and score risk accordingly. The scores you see in a Smart Cyber quote now adjust for these risks.
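Continuing the Beta-Binomial sketch above, adding a new risk factor amounts to registering an expert prior for it before any claims data exists. The factor names and prior values here are hypothetical.

```python
# A new factor enters with an expert-assigned prior; the score is driven
# entirely by that prior until observations arrive. Values are hypothetical.
expert_priors = {
    "bluekeep": (3, 9),          # established assumption, now data-backed
    "solarwinds_orion": (4, 8),  # added right after the breach news broke
}

# With zero observed incidents so far, posterior(4, 8, 0, 0) is just
# Beta(4, 8); it tightens automatically as real-world data accumulates.
```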

Extra Credit: Easier Explanations for Brokers and Policyholders

An added benefit of the Bayesian linear approach is that it’s easier to reveal how any given factor affects the score. The earlier iteration of the scoring model involved complex interactions between factors driven by ensemble methods in machine learning. That meant one client who fixed a vulnerability might see little change in their overall score, while another client who fixed the same vulnerability saw a dramatic improvement. We could only make an educated guess as to why these discrepancies occurred.

With a Bayesian linear model, we can know with some certainty that when a client closes down a risky RDP port, for instance, their score will change by a predictable amount. This opens up opportunities to further improve the user experience of how our score is displayed and used by clients, by exposing more of its inner workings.
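To see why, here is a toy sketch: in a linear model the score decomposes into additive per-factor contributions, so the effect of remediating any single finding is directly readable. The weights below are hypothetical stand-ins for posterior means from a model like the one sketched earlier.

```python
# Additive decomposition of a linear risk score. Weights are hypothetical.
factor_weights = {"open_rdp": 1.4, "bluekeep": 1.1, "old_tls": 0.3}

def score_breakdown(findings):
    """Return the total risk contribution and each factor's share."""
    parts = {f: factor_weights[f] for f in findings}
    return sum(parts.values()), parts

before, parts = score_breakdown(["open_rdp", "old_tls"])
after, _ = score_breakdown(["old_tls"])  # same client after closing RDP
print(f"closing open_rdp: {before:.1f} -> {after:.1f} (delta {before - after:.1f})")
print("per-factor contributions:", parts)
```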

We’re excited to share more from the Data Science team soon — and if you’re a data scientist who likes the idea of applying advanced methods to make the world safer by reducing cyber risk, check out our open positions at www.corvusinsurance.com/careers.
