(Re) Building a Ransomware Risk Score for the Future
Re-developing this score provided an opportunity to rethink how we build scoring mechanisms.
As the Data Science team embarked on this project, we knew our biggest challenge was improving our ability to deal with newer or otherwise hard-to-quantify attack vectors. Over time, our original scoring model has proven resilient in properly assigning risk, as we recently found in a long-term analysis of claims and scores. But the process of integrating new information into the score involves a lot of work and creates uncertainty about how it will impact scores broadly.
Future-proofing the score
Our existing cyber risk model is based on numerous sources of real-world data that link events (cyber incidents) with information about the IT systems that experienced them. Vulnerabilities that appear frequently in our data, either because they have been around for a long time or because they are very common, are assigned a weight by the model based on their link to reported incidents. That weighting is updated as we observe the continued frequency and severity of incidents. Whenever we scan an organization, we can rely on the model to tell us how much risk most vulnerabilities pose if we find them on the organization's IT systems.
But what happens when a new vulnerability arrives on the scene, or we want to account for a known risk factor that is not easy to see in the data we have?
From a data science perspective, this creates a quandary. We might know that expert security researchers are saying such vulnerabilities are a major problem. We might see reports in the news of attacks that have the hallmarks of the vulnerability. But compared with the rest of the inputs in the model, there’s a relatively tiny amount of actual data to support a scoring mechanism. Without lots of data, it’s hard to do data science.
Hard, but not impossible.
As we developed the ransomware risk score, we had a real-world experience like the one just described. As we observed the rise in ransomware attacks, we knew that exploits of BlueKeep, a known security vulnerability, were relatively rare in our data set — both because it was only about a year old and because a single vulnerability will always be less common than more universal security factors. (Corvus began alerting policyholders to the presence of BlueKeep on their systems over a year ago.)
But just because it was rarer doesn't mean it was any safer for the companies that did harbor the vulnerability. We knew from security experts that it posed a serious risk. So if we were going to develop a more accurate ransomware risk score, we needed a way to properly rate the significance of something like BlueKeep, and to future-proof the score by building it in a way that lets us add new vulnerabilities as they emerge.
To accomplish this we turned to a Bayesian model, which lets us incorporate assumptions from experts about the risk posed by any new vulnerability. We run the model with these expert assumptions expressed as a range of possible degrees of impact for each variable. As we gather more real-world data about these risks, the model tweaks the weighting assigned by the original expert assumption, until eventually the impact rating is backed by as much data as that for longstanding vulnerabilities.
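To give a flavor of the idea (a toy sketch, not Corvus's actual model), a single risk factor's weight can be treated as a Gaussian coefficient whose prior mean and variance encode the expert assumption. With Gaussian noise the posterior has a closed form, so the weight starts at the expert's estimate and shifts toward the data as incidents accumulate. All names and numbers below are hypothetical.

```python
import numpy as np

def update_weight(prior_mean, prior_var, x, y, noise_var=1.0):
    """Closed-form Bayesian update for a single coefficient w in y ~ w*x + noise.

    prior_mean/prior_var encode the expert's assumption about the factor's
    impact; x records the factor across observed incidents, y the outcomes.
    With no data, the posterior equals the prior; with lots of data, the
    likelihood dominates.
    """
    precision = 1.0 / prior_var + np.dot(x, x) / noise_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + np.dot(x, y) / noise_var)
    return post_mean, post_var

# Expert assumption: the new vulnerability carries weight w ~ N(2, 1).
# With no incident data yet, the score relies entirely on that assumption:
m, v = update_weight(2.0, 1.0, np.array([]), np.array([]))

# As incident data arrives, the data pulls the weight toward its true value
# (here the simulated true weight is 3.0) and shrinks the uncertainty:
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 3.0 * x + rng.normal(size=500)
m_data, v_data = update_weight(2.0, 1.0, x, y)
```

The appeal of this formulation is exactly the behavior described above: a brand-new vulnerability is scored on expert judgment alone, and no manual intervention is needed as evidence accumulates.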
Side note: a big part of this project that we're not digging into here was solving the computational challenges that come with assessing the final impact of the variables in this kind of model. Our team's use of leading computational software, such as Python's PyTorch package for machine learning, together with efficient approximate sampling via variational inference, allowed us to optimize the model's performance and speed.
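To sketch what variational inference looks like in miniature (again purely illustrative, not the production model): instead of computing the posterior exactly, we posit a Gaussian approximation q(w) = N(mu, s²) and nudge mu and log s by stochastic gradient ascent on a single-sample ELBO estimate, using the reparameterization trick. The model, data, and hyperparameters here are all invented for illustration.

```python
import numpy as np

# Toy model: y ~ w*x + N(0, 1), prior w ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.5 * x + rng.normal(size=200)

# Variational parameters of q(w) = N(mu, s^2).
mu, log_s = 0.0, np.log(0.1)
lr = 0.005
for _ in range(3000):
    s = np.exp(log_s)
    eps = rng.normal()
    w = mu + s * eps                      # reparameterization trick
    dloglik_dw = np.dot(x, y - w * x)     # gradient of the Gaussian log-likelihood
    # Closed-form gradients of KL(q || prior) for N(mu, s^2) vs. N(0, 1).
    dkl_dmu, dkl_dlogs = mu, s * s - 1.0
    grad_mu = np.clip(dloglik_dw - dkl_dmu, -100.0, 100.0)
    grad_logs = np.clip(dloglik_dw * s * eps - dkl_dlogs, -100.0, 100.0)
    mu += lr * grad_mu                    # stochastic ascent on the ELBO
    log_s += lr * grad_logs
```

In practice a framework like PyTorch computes these gradients automatically and scales the same recipe to many correlated factors at once; the clipped manual gradients above just keep the toy loop stable and dependency-free.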
Experts, Meet Data
Thanks to the new model, we were able to quantify the heightened risk of BlueKeep-based attacks. Most encouragingly, we now have a process by which we can integrate new risk factors that arise.
In fact, just a couple of weeks after launching the ransomware risk score we got to put this future-proofing to the test. Following the massive SolarWinds breach, we were able to add new assumptions to the model about certain vulnerabilities associated with that hacking event, knowing the increased likelihood of criminals exploiting them. This way, we don't have to wait for a slew of adverse events to happen — we can anticipate that they are on the way and score risk accordingly. The scores you see in a Smart Cyber quote now adjust for these risks.
Extra Credit: Easier Explanations for Brokers and Policyholders
An added benefit of the Bayesian linear approach is that it's easier to show how any given factor affects the score. The earlier iteration of the scoring model involved complex interactions between factors, driven by ensemble machine learning methods. That meant one client who fixed a vulnerability might see little change in their overall score, while another client who fixed the same vulnerability saw a dramatic improvement. We could only make an educated guess as to why such discrepancies occurred.
With a Bayesian linear model, we know with some certainty that a client who closes down a risky RDP port, for instance, will see a specific change in score. This opens up opportunities to improve the user experience of how our score is displayed and used by clients, by exposing more of its inner workings.
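A toy example of why the linear form is easier to explain (the factor names and weights below are made up, not Corvus's actual values): in an additive score, each factor contributes a fixed amount, so fixing a given issue moves every client's score by the same, predictable amount.

```python
# Hypothetical, illustrative factor weights; not actual scoring values.
weights = {"open_rdp_port": -12.0, "bluekeep": -20.0, "weak_email_auth": -5.0}

def score(factors, base=100.0):
    """Additive score: each present risk factor shifts the base by its weight."""
    return base + sum(weights[f] for f in factors)

before = score(["open_rdp_port", "bluekeep"])   # 68.0
after = score(["bluekeep"])                     # 80.0: the RDP port is now closed
improvement = after - before                    # 12.0, exactly -weights["open_rdp_port"]
```

With an ensemble model, by contrast, the effect of removing one factor depends on every other factor present, which is what made the earlier score hard to explain client by client.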
We’re excited to share more from the Data Science team soon — and if you’re a data scientist who likes the idea of applying advanced methods to make the world safer by reducing cyber risk, check out our open positions at www.corvusinsurance.com/careers.