Capital Requirements for Cyber Risk and Cyber Risk Insurance: An Analysis of Solvency II, the U.S. Risk-Based Capital Standards, and the Swiss Solvency Test

Pages 370-392 | Published online: 30 Oct 2019
 

Abstract

Cyber risk is becoming more significant for insurance companies in both underwriting and operational risk terms, but the characteristics of cyber risk are still not yet well understood. We contribute to the literature by analyzing the role of cyber risk in insurance regulation frameworks. The aggregated cyber risk exposure of an insurer is estimated by fitting different marginal distributions and dependence models to historical cyber losses. This aggregated cyber exposure allows us to derive the insurer’s survival probability and compare it with the goals of regulatory frameworks, such as the U.S. Risk Based Capital (RBC) or Solvency II (SII). Our findings indicate that regulatory models underestimate the potential risks associated with cyber threats. This is especially true for small cyber insurance portfolios, which are predominant in practice today. Regulatory models should be adapted to account for the heavy tails and dependence structure specific to cyber risks, instead of assuming “one size fits all.”
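To make the approach concrete, the following is a minimal Monte Carlo sketch (not the authors' code) of the exercise the abstract describes: simulate aggregate annual cyber losses for a small portfolio and check how often a standard-formula-style capital level would cover them. The portfolio size, loss frequency, severity parameters, loss ratio, and risk factor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_policies, p_loss, n_sims = 50, 0.05, 50_000   # assumed portfolio and frequency

# Assumed heavy-tailed severity (lognormal); the article instead fits several
# marginal models to historical cyber losses.
n_claims = rng.binomial(n_policies, p_loss, size=n_sims)
agg = np.array([rng.lognormal(0.5, 1.8, size=k).sum() for k in n_claims])

premiums = agg.mean() / 0.6        # volume v = E[X] / loss ratio b (assumed b)
scr = premiums * 0.14 * 3          # stylized standard-formula capital charge
print(f"simulated survival probability: {np.mean(agg <= premiums + scr):.4f}")
```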

ACKNOWLEDGMENTS

The authors are grateful for feedback and comments from Kwangmin Jung, Christian Kubitza, Pingyi Lou, Joan Schmit, Shaun Shuxun Wang, Steven Weissbart, Jan Wirfs, and the participants of the 2016 annual meeting of the American Risk and Insurance Association, the 2016 annual meeting of the German Finance Association, the 2017 insurance research seminar at the University of St. Gallen, and the 2017 Insurance Risk and Finance Research Conference at NTU Singapore.

SUPPLEMENTAL MATERIAL

Supplemental material for this article can be accessed on the publisher’s website at www.tandfonline.com/uaaj.

Notes

1 See National Association of Insurance Commissioners (NAIC 2016b), Betterley (Citation2015), Swiss Re (Citation2015), Advisen (Citation2015), and Marsh (Citation2016). The premium volume for the year 2015 was U.S.$3.5 billion globally, with U.S.$3 billion from the United States alone. The market is expected to grow to U.S.$18 billion by the year 2025 (see Swiss Re Citation2015). KPMG (2017) estimated that cyber risk insurance will become the biggest nonlife insurance business line within the next 20 years.

2 Only recently have regulators in the United States and the United Kingdom recognized the increasing importance of cyber risk. For example, the NAIC designed the cybersecurity and identity theft coverage supplement, which was first filed in 2016 with the 2015 annual statements (see NAIC 2016b). The U.K. regulator is developing a qualitative cyber risk assessment in the context of the own risk and solvency assessment (see Prudential Regulation Authority [PRA] 2016).

3 Probably due to the lack of data, most research about cyber risk does not focus directly on the monetary loss. Instead, proxy variables are used, such as number of records breached (Maillart and Sornette Citation2010; Edwards, Hofmeyr, and Forrest Citation2016), number of attacks (Böhme and Kataria Citation2006), or number of system failures (Mukhopadhyay et al. Citation2013).

4 We attribute this lack of data to a relatively short history of observed cyber losses and to the changing technological environment that might render past observations irrelevant. Small risk pools limit the benefit of diversification and amplify the lack of data. Moreover, the extent to which general property and casualty policies cover cyber losses (so-called silent covers) is not clear.

5 See Eling, Schmeiser, and Schmit (Citation2007), Holzmüller (Citation2009), and Braun, Schmeiser, and Schreiber (Citation2015). The intention of this article is to analyze the regulatory treatment of explicitly written cyber insurance policies (affirmative covers) and of cyber-related operational risk within the insurance firm. Beyond these two aspects there is the problem of silent cyber risk covers in existing policies (nonaffirmative covers), which we discuss in the conclusion but which are not the main focus of this analysis.

6 Our article complements a number of recent papers that analyze the suitability of regulatory frameworks, especially Solvency II, from different perspectives. Aas et al. (Citation2017), for example, test different models for the interest rate to derive best estimate liabilities under Solvency II. Christiansen and Fahrenwaldt (Citation2016) extend the one period (1 year) view, usually taken by the regulator, to a time-continuous framework. Their model incorporates risk factors that are based on multivariate time-continuous stochastic processes. How scenarios can be used to gauge and complement regulatory frameworks is shown in Christiansen et al. (Citation2016). They suggest a procedure to derive worst-case scenarios from a set of parameters and relate it to the Solvency II standard formula and internal models. Their worst-case scenarios indicate that the Solvency II standard formula underestimates risk for life insurance. Finally, other emerging risks may also have consequences for Solvency II. For example, Shao, Sherris, and Fong (Citation2017) assess the Solvency II capital requirement for long-term care insurance.

7 For example, health insurers might have a much larger volume of transactions with the customer compared to property and casualty and life insurers.

8 For example, Wirfs (Citation2016) considers portfolios of 50 policies and Eling and Wirfs (Citation2016) vary the portfolio size from 50 to 500 based on expert judgment. An additional clue is given by the report on the cybersecurity insurance coverage supplement published by the National Association of Insurance Commissioners (NAIC 2016d; Insurance Information Institute Citation2017). This report shows that the median portfolio size for stand-alone cyber policies is around 50.

9 The Pareto index is here defined as α=1/ξ, with ξ being the shape parameter for the generalized Pareto distribution; for a formal definition of the Pareto index α see, e.g., Neslehová et al. (Citation2006). Eling and Wirfs (Citation2019) also find that cyber risk is slightly less heavy-tailed than noncyber operational risk (Pareto index of 0.61). In fact, Pareto indexes lower than 1 are a reliable finding for operational risks in empirical analyses (see, e.g., Neslehová et al. Citation2006).
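As a hedged illustration of this definition, one can fit a generalized Pareto distribution to threshold exceedances and report α = 1/ξ; the synthetic losses below merely stand in for the article's data.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
losses = genpareto.rvs(c=1.5, scale=2.0, size=5_000, random_state=rng)  # true xi = 1.5

threshold = np.quantile(losses, 0.90)
excesses = losses[losses > threshold] - threshold

xi, _, scale = genpareto.fit(excesses, floc=0.0)  # location fixed at zero
print(f"shape xi = {xi:.2f}, Pareto index alpha = 1/xi = {1/xi:.2f}")
```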

10 Wheatley et al. (Citation2016) find that the Pareto index decreased from 0.57 in 2007 to 0.37 in 2015, indicating that the cyber loss distribution has become more extreme. However, a similar analysis conducted by Eling and Wirfs (Citation2019) with a broader dataset does not find a comparable time effect.

11 A distribution is heavy-tailed (infinite second and higher moments) if the Pareto index is α<2 and extremely heavy-tailed (infinite first and higher moments) if α<1 (see Ibragimov and Prokhorov Citation2016).
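A quick numerical illustration (not from the article) of what α < 1 implies: running sample means of Pareto draws fail to settle when the mean is infinite, while they converge for α > 2.

```python
import numpy as np

rng = np.random.default_rng(1)
for alpha in (0.6, 2.5):                                # extremely heavy vs. light tail
    x = rng.uniform(size=1_000_000) ** (-1.0 / alpha)   # Pareto(alpha) draws, x >= 1
    means = np.cumsum(x) / np.arange(1, x.size + 1)     # running sample means
    print(f"alpha={alpha}: mean at n=1e4 is {means[9_999]:.2f}, "
          f"at n=1e6 is {means[-1]:.2f}")
```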

12 Note that the correlation only exists for heavy-tailed distributions if the support is limited (cover limit or limited liability).

13 In the appendices we conduct the same analysis as in the main text for three different datasets. In Appendix A we consider a subset of the SAS OpRisk data used in the main text (used in Biener et al. 2015). Appendix B presents the analyses for a frequently considered data breach dataset (e.g., Maillart and Sornette Citation2010; Edwards et al. Citation2016) provided by the Privacy Rights Clearinghouse (PRC 2016). Appendix A also reports results for the more recent period of 2006–2014. All three datasets lead to the same qualitative conclusions as those presented in the main text.

14 This frequency might still be too high, because it represents the frequency contingent on a company having a cyber loss event within the observed 19 years. As in many other empirical studies, we cannot observe the full sample including all companies that never had a cyber loss event. As an alternative approach to overcome this problem, we estimate the frequency for the sample of all companies listed in the S&P 500. We found 130 events for those 500 companies over a sample period of 10 years, leading to a frequency value of 2.60%. In Appendix C we show that our findings also hold for this alternative value of the frequency parameter. Moreover, it has been suggested that the frequency of cyber incidents is increasing. Thus, we also report in Appendix C the results for a frequency of 15%, which leads to qualitatively identical results.

15 We start with the Poisson and later also consider the negative binomial distribution, which Eling and Wirfs (Citation2019) find to be the best fit for the loss frequency of cyber risks.
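The choice between the two frequency models can be checked with a simple moment comparison, sketched below on synthetic counts (the Poisson forces variance = mean, whereas the negative binomial accommodates overdispersion).

```python
import numpy as np
from scipy.stats import nbinom, poisson

rng = np.random.default_rng(2)
counts = rng.negative_binomial(n=2, p=0.3, size=1_000)  # overdispersed toy counts

m, v = counts.mean(), counts.var(ddof=1)
print(f"mean={m:.2f}, variance={v:.2f} (Poisson would require equality)")

# Method-of-moments negative binomial fit (valid when v > m).
p_hat = m / v
n_hat = m * p_hat / (1.0 - p_hat)
print(f"log-likelihood: NB={nbinom.logpmf(counts, n_hat, p_hat).sum():.1f}, "
      f"Poisson={poisson.logpmf(counts, m).sum():.1f}")
```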

16 The authors use the bootstrap goodness-of-fit test suggested by Villaseñor-Alva and González-Estrada (Citation2009) to find the optimal threshold. Note that if the threshold is higher, fewer observations are available for the estimation. As a result, the variance of the estimated parameters increases while the Pickands–Balkema–de Haan theorem ensures that the model’s bias decreases (see, e.g., Scarrott and MacDonald Citation2012). Therefore, choosing the optimal threshold is a trade-off between the model’s bias and variance.
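A simpler heuristic than the cited bootstrap test, sketched here on synthetic data, makes the trade-off visible: refit the GPD shape over a grid of thresholds and look for the lowest threshold above which the estimate stabilizes while the sample of exceedances shrinks.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
losses = genpareto.rvs(c=0.8, scale=1.0, size=5_000, random_state=rng)

for q in (0.80, 0.90, 0.95, 0.99):          # candidate threshold quantiles
    u = np.quantile(losses, q)
    exc = losses[losses > u] - u            # exceedances above the threshold
    xi, _, _ = genpareto.fit(exc, floc=0.0)
    print(f"threshold quantile {q:.2f}: n={exc.size:4d}, xi_hat={xi:.2f}")
```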

17 Biener et al. (Citation2015b) show that U.S.$50 million is a typical deductible used in many cyber insurance policies. Moreover, as discussed in the preceding, 50 policies is a typical size for cyber insurance portfolios.

18 Parameter risk describes the deviation of our estimated parameters from their true unknown values. The estimated parameter is drawn from a sampling distribution. Combining a Poisson distribution with a gamma(α,β) sampling distribution would yield the NB distribution (see, e.g., Wang Citation1998).
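The mixture statement can be verified numerically; the sketch below (parameter values assumed) compares the empirical distribution of a Poisson-gamma mixture with the matching negative binomial pmf.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(4)
alpha, beta = 2.0, 0.5                       # gamma shape and rate (assumed)

lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=200_000)
mixed = rng.poisson(lam)                     # Poisson with gamma-distributed rate

k = np.arange(6)
empirical = np.array([(mixed == i).mean() for i in k])
theoretical = nbinom.pmf(k, alpha, beta / (1.0 + beta))  # NB(r=alpha, p=beta/(1+beta))
print(np.round(empirical, 4), np.round(theoretical, 4), sep="\n")
```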

19 EIOPA (2014) justifies neglecting reputational risk by the lack of historical data on that risk category.

20 Here we derive the risk factor for the premiums as s_p = v_p (a_p · s_p,l + (1 − a_p) · s_p,nl), where s_p,l is the given factor for life premium risk and s_p,nl the risk factor for nonlife premiums. Assuming that the premiums in our dataset stem equally from life and nonlife business, i.e., a_p = 0.5 (see Statista Citation2017c), we can derive s_p = 0.035. In a similar way we find that the reserve risk factor is s_r = 0.017.

21 For example, in the SAS dataset we use here, 1579 cyber losses are documented, but also 24,962 other operational loss events, for which the operational risk charge scr_op would also be needed.

22 The regulatory capital models allow for discounts when risks are ceded to a reinsurance company. However, the discounts on capital requirements are partially compensated by the counterparty risk of the reinsurance company. Future research thus has to analyze the potential benefits of reinsurance for managing cyber risks.

23 Until the quantitative impact study QIS5 the required capital was defined as scr = VaR_0.995(Z) − v. This formula assumes a lognormal distribution (EC 2010), and the capital requirement can be interpreted as the unexpected risk exceeding the expected loss. Equation (19) does not make an explicit distributional assumption, but approximates the old formula.
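Under these assumptions the old rule can be reproduced in a few lines; the sketch below takes σ = 0.14 and v = 52.8 from note 27 and shows that the 3σ factor of Equation (19) approximates the lognormal quantile.

```python
import numpy as np
from scipy.stats import norm

v, sigma = 52.8, 0.14                        # volume and std. factor from note 27
s2 = np.log(1.0 + sigma**2)                  # log-variance for E[Z] = v, CV = sigma
var995 = v * np.exp(-0.5 * s2 + norm.ppf(0.995) * np.sqrt(s2))
print(f"VaR_0.995 = {var995:.1f}, scr = VaR - v = {var995 - v:.1f}")
# cf. the approximation 52.8 * 0.14 * 3 = 22.2 used in Equation (19) / note 27
```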

24 We thus focus on written premiums and neglect the reserves here. The premiums are calculated as b^(−1) · E[X], that is, expected losses divided by the loss ratio b.

25 In Appendix E we show results for different choices of b; as the loss ratio increases, the survival probability decreases.

26 With this we implicitly assume a constant premium volume and a linear claims settlement.

27 To clarify the actual calculations we again consider the example of a cover limit of 50 and a portfolio size of 50 (see Table 2). Then E[X] = 31.7 and b = 0.6, and thus v = 31.7/0.6 = 52.8 and scr = 52.8 · 0.14 · 3 = 22.2. Evaluating the distribution function in Equation (11) at rc = 52.8 + 22.2 = 75 would yield a probability of about 83% that X is smaller than 75.
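Spelled out as code (values as given in this note; the final 83% requires the fitted severity model from the article and is not reproduced here):

```python
EX = 31.7            # expected aggregate loss E[X] from Table 2
b = 0.6              # assumed loss ratio
v = EX / b           # volume measure (written premiums)
scr = v * 0.14 * 3   # capital charge per Equation (19)
rc = v + scr         # total resources evaluated in Equation (11)
print(f"v = {v:.1f}, scr = {scr:.1f}, rc = {rc:.1f}")  # 52.8, 22.2, 75.0
```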

28 In Appendix F we also show the results for a different cover limit of U.S.$200 million instead of U.S.$50 million.

29 We bootstrap 200 samples from the original dataset with replacement and show the 95% confidence interval.
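A minimal sketch of this procedure (with a placeholder statistic and synthetic data, since the original losses are proprietary):

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.lognormal(0.5, 1.8, size=1_500)   # placeholder for the loss sample

# 200 bootstrap resamples with replacement; here the resampled statistic is
# the sample mean, whereas the article recomputes the survival probability.
stats = [rng.choice(data, size=data.size, replace=True).mean()
         for _ in range(200)]
lo, hi = np.quantile(stats, [0.025, 0.975])
print(f"95% bootstrap confidence interval: [{lo:.2f}, {hi:.2f}]")
```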

30 Without a cover limit the PoT would lead to a lower survival probability than the lognormal, because of the tail events. With a cover limit the effect of the Pareto distribution in the tail is, however, reduced, so that the two lines more or less correspond to each other.

31 Note that with ρ = 0 the negative binomial distribution reduces to the Poisson, so the Poisson and negative binomial survival probabilities are congruent. Somewhat counterintuitive is the observation that the line for a correlation of ρ = 1 is located above the one for ρ = 0.1. This difference can be explained by the way the modeling of correlations in the negative binomial distribution changes the density function: a negative binomial distribution with ρ = 1 is heavier tailed than one with ρ = 0.1. Because SII evaluates the distribution at a relatively low confidence level, ps(ρ = 1) > ps(ρ = 0.1).

32 We refer to Appendix H for a more detailed comparison with Ibragimov et al. (Citation2008). We extend their finding of a nondiversification trap arising under extremely heavy-tailed distributions in two ways. First, we show that it applies to cyber risk, which is indeed characterized by very heavy tails. Second, we extend the nondiversification trap to the specific perspective taken by the risk management department or the regulator. Because these actors care only about the survival (or default) probability, we show that it might be optimal for them to stick with a small undiversified portfolio, since the transition to the optimally diversified portfolio is too risky.

33 As a simplification, we assume a nominal parity exchange rate between U.S.$ and CHF. Appendix I shows that changing the threshold to U.S.$1 million yields a lower survival probability; the survival probability the regulator aims for would then not be achieved anywhere in the size–cover limit space analyzed here. Additionally, for increasing portfolio sizes the survival level does not converge to the regulator's target.

34 Since the premium and reserve risk have a similar risk structure and SST assumes no correlation between the two, the same risk factors are used.

35 The framework allows for an alternative aggregation procedure that cannot be applied in this context.

36 Note that the survival curve for the Clayton copula (bottom right) is similar to the one with 0.2 linear correlation (bottom left). This differs from the result for the Solvency II model. The reason is that the quantiles evaluated under Solvency II are much higher than under the SST, and because the Clayton copula exaggerates the correlation in the outermost tail, Solvency II is more strongly affected by nonlinear dependence models.
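The tail behavior driving this explanation can be illustrated numerically: the sketch below (θ chosen arbitrarily) compares joint tail probabilities of a Clayton copula with those of a Gaussian copula calibrated to the same Kendall's tau; the gap widens as the quantile becomes more extreme.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n, theta = 1_000_000, 2.0

# Clayton sampling via the Marshall-Olkin algorithm: U_i = (1 + E_i/V)^(-1/theta)
V = rng.gamma(shape=1.0 / theta, size=n)
E = rng.exponential(size=(n, 2))
U_clayton = (1.0 + E / V[:, None]) ** (-1.0 / theta)

# Gaussian copula with matching Kendall tau = theta/(theta + 2) = 0.5
rho = np.sin(np.pi * 0.5 / 2.0)              # tau -> Pearson rho for the Gaussian
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
U_gauss = norm.cdf(z)

for q in (0.05, 0.01, 0.001):                # deeper and deeper joint tail
    pc = np.mean((U_clayton < q).all(axis=1)) / q
    pg = np.mean((U_gauss < q).all(axis=1)) / q
    print(f"q={q}: P(both < q)/q  Clayton={pc:.3f}  Gaussian={pg:.3f}")
```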
