
Claim reserving for insurance contracts in line with the International Financial Reporting Standards 17: a new paid-incurred chain approach to risk adjustments


This study considers the risk management of insurance policies in line with the implementation of the new International Financial Reporting Standards 17. It applies the paid-incurred chain method to model the future unpaid losses by combining the information channels of both the incurred claims and paid losses. We propose the recovery of the empirical distribution of the outstanding claims liabilities associated with a group of contracts via moment-based density approximation. We determine the risk measures and adjustments that are compliant with the new standard using the Monte Carlo simulation method and approximated distributions. The historical data on the aggregate Ontario automobile insurance claims over a 15-year period are analyzed to examine the appropriateness and accuracy of our approach.


The International Accounting Standards Board (IASB) is responsible for setting accounting standards for the countries that are a part of the Organisation for Economic Co-operation and Development. One of its standards, known as the International Financial Reporting Standards (IFRS) 17, addresses insurance contracts (see International Accounting Standards Board 2017). It is considered to be a fundamental overhaul of insurers’ accounting practices with a significant impact on large parts of the industry, such as equity valuation, operating infrastructure, product pricing models, and business strategies. The Canadian Accounting Standards Board will adopt IFRS 17 without modification, effective January 1, 2023. The preparation for implementing the new standard has engendered substantial efforts and upgraded systems, processes, and controls. The Building Block Approach (BBA) is the general measurement model for liabilities in insurance contracts as considered in IFRS 17. It establishes a comprehensive and coherent framework that provides information on the various features of insurance contracts. Under the BBA, the value of insurance contracts is measured as the sum of four blocks described as follows:

  • Block 1: Sum of the future cash flows that relate directly to the fulfillment of contractual obligations;

  • Block 2: The time value of money, reflected by discounting the future cash flows;

  • Block 3: Risk adjustment (RA) representing the compensation for bearing uncertainty with regard to cash flows;

  • Block 4: Contractual service margin (CSM) representing the unearned profit that will be recognized as a profit or loss when services are provided.

In this study, we address the evaluation of the RA (Block 3) of insurance policies by proposing an efficient and accurate RA estimation method under IFRS 17. It should be noted that IFRS 17 does not provide specific prescriptions for RA calculations or mandate a particular method to determine RAs. Although it does not restrict the use of any technique, it requires RAs to reflect the compensation that the entity should receive for bearing the uncertainty with regard to the amount and timing of cash flows arising from non-financial risks. RAs can be computed using various measures of risk, justified by the principles and concepts of probability theory and mathematical statistics.

In this context, we consider the claims development triangle that will be the basis for our RA calculation. It is specifically used for forecasting future claims and estimating outstanding claims liabilities. Under the IFRS specification, Lindholm et al. (2020) used this method in the calculation of the CSM, valuation of a portfolio of insurance contracts, and allocation of the aggregate liability value to different groups of policies. Conversely, our study focuses on the characterization of the distribution of outstanding claims liabilities and the computation of the ensuing RA. Some methods for estimating outstanding liabilities include the chain-ladder, Bornhuetter–Ferguson, and frequency-severity methods; an overview of such methods is presented in, for example, Taylor (2012) and Wüthrich and Merz (2008). Models for the claims supporting the abovementioned methods include the log-normal model (Barnett and Zehnwirth 2000), gamma model (Zehnwirth 1994; Mack 1991), distribution-free approach (Mack 1993), and over-dispersed model (Renshaw and Verrall 1998). In our case, we develop and customize the paid-incurred chain (PIC) model proposed in Merz and Wüthrich (2010) to predict the future claims, using Ontario’s automobile claims data.

To evaluate the RAs of insurance policies, the distribution of the outstanding claims liabilities needs to be determined. The general approach is to obtain the empirical cumulative distribution function (CDF) via Monte Carlo (MC) simulation. The distribution could be approximated using the normal or log-normal distributions. Here, we also employ the moment-based density method propounded in Provost (2005) as an alternative. For example, this method was applied to the risk measurement and management of insurance policies, including guaranteed minimum benefits in Gao et al. (2017) and Zhao et al. (2018) as well as aggregate losses in Jin et al. (2016).

Herein, we first calculate the outstanding claims liabilities of various groups of policies under a novel PIC method. Our method of calculation is an extension of the approach used in Happ and Wüthrich (2013) for a multi-line setting with a dependency structure. In accordance with the requirements of IFRS 17, we apply four RA calculation methods to determine the RAs based on the distribution of outstanding claims liabilities. As the explicit expression of the distribution of such liabilities is rarely obtainable, the moment-based density approximation is proposed to recover their empirical distribution. Our proposed methodology is subsequently implemented on the historical data of incurred claims and paid losses. Although our application focuses only on the auto insurance sector, we develop an approach with overarching principles applicable to the analysis of other insurance data.

The departure of our contributions from the existing literature on the computation of the RAs of claims liabilities is highlighted through the following: (1) we provide a new PIC approach to model and predict the future unpaid losses based on the information on both incurred claims and paid losses data in practice; (2) in contrast to the methods used in Hertig (1985) and Gogol (1993), our methodology combines their principal elements by introducing the dependency between paid losses and incurred claims; (3) the moment-based density method is used as an efficient alternative to approximate the distribution of claims reserves; (4) finally, to our knowledge, this study is the first to elaborate on the entire process of estimating RAs for auto insurance claims involving different groups of policies to comply with the IFRS 17 implementation.

The remainder of the paper is organized as follows: in “Model description” section, the PIC method is explained, and the claims reserves are computed; four RA computation methods are described in “Risk adjustment calculation” section; in “Numerical illustration” section, numerical illustrations are discussed based on historical data and the moment-based density approximation is employed to approximate the claims reserves’ distribution; and finally, the concluding remarks are presented in “Conclusion” section.

Model description

The claims development triangle is an essential method to organize data for actuarial analyses and claims reserving. The triangle (see Table 1) depicts how the claims in each accident year for the same line of business (LOB) develop to achieve their ultimate value. This organization of data greatly facilitates the comparison of the development history experienced in an accident year. In this method, we consider two types of triangles: (1) incurred claims, which refer to the reported claims amounts and (2) paid losses, which refer to the payments made for the claims. The paid losses should be less than or equal to the incurred claims in the same development year. However, they will have the same values ultimately, that is, all the reported claims must be paid off at the end of the lifetime of the policies.

Table 1 Illustration of the claims development triangle

The fundamental ideas of the PIC method can be found in Merz and Wüthrich (2010). It is designed for analyzing the data on both incurred claims and paid losses. We denote the accident year by i (\(0\le i \le J\)) and the development year by j (\(0\le j \le J\)). Here, J refers to the largest development year. We assume that there are n LOBs, where l (\(1\le l \le n\)) is the \(l^{th}\) LOB. Let \(I_{i,j,l}\) be the cumulative incurred claims in accident year i after j development years for the \(l^{th}\) LOB and \(P_{i,j,l}\) be the corresponding cumulative paid losses. We further assume that all the claims are settled after J years, that is, \(P_{i,J,l}=I_{i,J,l}\). It is worth noting that the assumption of \(P_{i,j,l} \le I_{i,j,l}\) is not considered in Merz and Wüthrich (2010). However, after investigating the constraint via numerical experiments, we find that the probability of \(P_{i,j,l}> I_{i,j,l}\) is extremely small. Furthermore, our focus lies on the prediction and forecast of the outstanding loss liabilities rather than the incurred losses. Therefore, the possible breaches of \(P_{i,j,l}> I_{i,j,l}\) are innocuous in our modeling framework. Table 1 graphically portrays the implementation structure of this method for the same LOB l . Write

$$\begin{aligned} {\mathfrak {B}}_{j}^{P}:= & {} \{P_{i,s,l}:0\le i \le J,0\le s\le \max (j-i,0),1\le l \le n\}, \\ {\mathfrak {B}}_{j}^{I}:= & {} \{I_{i,s,l}:0\le i \le J,0\le s\le \max (j-i,0),1\le l \le n\} \end{aligned}$$


$$\begin{aligned} {\mathfrak {B}}_{j}:={\mathfrak {B}}_{j}^{P}\cup {\mathfrak {B}}_{j}^{I} \end{aligned}$$

for the sets of paid losses data, incurred claims data, and joint data information up to development year j, respectively.

To model the dependence between paid and incurred data, we introduce a covariance matrix \({\varvec{V}}\) based on the idea in Happ et al. (2012) and extend their work to a more general case of n LOBs. The matrix \({\varvec{V}}\) is set as any positive definite matrix that may incorporate managerial experience and judgment. Pinning down \({\varvec{V}}\) suitably reduces computational complexity. Given an accident year i for n LOBs, we consider a vector of random variables \({\varvec{x}}_{i}=(\xi _{i,1,1},~\zeta _{i,1,1},~\ldots ,~\xi _{i,1,n}, ~\zeta _{i,1,n},~\ldots ,~\xi _{i,J,1},~\zeta _{i,J,1},~\ldots ,~\xi _{i,J,n},~\zeta _{i,J,n})^{\top }\), following the multivariate normal distribution with mean \({\varvec{u}}\) and covariance matrix \({\varvec{V}}\), where \(\top \) is a matrix transpose. The cumulative paid losses \(P_{i,j,l}\) are determined by the recursion

$$\begin{aligned} P_{i,j,l}=P_{i,j-1,l}e^{\xi _{i,j,l}} \end{aligned}$$

with an initial value \(P_{i,0,l}=e^{\xi _{i,0,l}}.\) For the cumulative incurred claims \(I_{i,j,l}\), we have a backward recursion

$$\begin{aligned} I_{i,j-1,l}=I_{i,j,l}e^{{-\zeta _{i,j,l}}} \end{aligned}$$

with an initial value \(I_{i,J,l}=P_{i,J,l}\), where \(e^{{-\zeta _{i,j,l}}}\) are called the link ratios. Following Happ and Wüthrich (2013), we assume that there is a correlation coefficient between the incurred claims for a specific year and the paid losses after s years in the same LOB given by

$$\begin{aligned} \rho _{s,l}=\text{ Cor }(\xi _{i,j+s,l},\zeta _{i,j,l}). \end{aligned}$$

The correlation coefficients corresponding to the paid losses and incurred claims in different LOBs are defined as

$$\begin{aligned} \rho _{s,(l,l^{\prime })}=\text{ Cor }(\xi _{i,j+s,l},\zeta _{i,j,l^{\prime }}),~\rho ^{\xi }_{s,(l,l^{\prime })}=\text{ Cor }(\xi _{i,j+s,l},\xi _{i,j,l^{\prime }}),~\text{ and }~\rho ^{\zeta }_{s,(l,l^{\prime })}=\text{ Cor }(\zeta _{i,j+s,l},\zeta _{i,j,l^{\prime }}), \end{aligned}$$

where \(\rho _{s,(l,l^{\prime })}\) denotes the correlation coefficients between the incurred claims in the LOB l for a specific year and the paid losses in the LOB \(l^{\prime }\) after s years; \(\rho ^{\xi }_{s,(l,l^{\prime })}\) denotes the correlation coefficients between the paid losses in the LOB l for a specific year and the paid losses in the LOB \(l^{\prime }\) after s years; \(\rho ^{\zeta }_{s,(l,l^{\prime })}\) denotes the correlation coefficients between the incurred claims in the LOB l for a specific year and the incurred claims in the LOB \(l^{\prime }\) after s years. All the remaining entries in \({\varvec{V}}\) are assigned a value of 0. The covariance matrix indicates that the incurred claim increments \(\zeta _{i,j,l}\) correlate with the paid loss increments \(\xi _{i,j+s,l}\) in the succeeding years. Such dependencies are also extended to other LOBs, which are observed by the correlation between \(\xi _{i,j+s,l}\) and \(\zeta _{i,j,l^{\prime }}\).
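To make the dependence structure concrete, the following minimal sketch assembles \({\varvec{V}}\) for a single LOB with \(J=3\) development years; the values of \(\sigma_j\), \(\tau_j\), and the lag correlations \(\rho_s\) are purely hypothetical, and the variable ordering \((\xi_1,\zeta_1,\ldots,\xi_J,\zeta_J)\) follows \({\varvec{x}}_i\) above.

```python
import numpy as np

# Hypothetical inputs for a single LOB (n = 1) with J = 3 development years.
J = 3
sigma = np.array([0.30, 0.20, 0.10])   # sd of the paid log-increments xi_j
tau   = np.array([0.25, 0.15, 0.08])   # sd of the incurred log-increments zeta_j
rho   = {0: 0.5, 1: 0.3}               # Cor(xi_{j+s}, zeta_j) for lags s = 0, 1

# Variables ordered as (xi_1, zeta_1, ..., xi_J, zeta_J), matching x_i above;
# 0-based positions 2*j hold xi and 2*j + 1 hold zeta.
V = np.zeros((2 * J, 2 * J))
for j in range(J):
    V[2 * j, 2 * j] = sigma[j] ** 2                # Var(xi_j)
    V[2 * j + 1, 2 * j + 1] = tau[j] ** 2          # Var(zeta_j)
    for s, r in rho.items():
        if j + s < J:
            cov = r * sigma[j + s] * tau[j]        # Cov(xi_{j+s}, zeta_j)
            V[2 * (j + s), 2 * j + 1] = cov
            V[2 * j + 1, 2 * (j + s)] = cov

# Sanity check: V must be symmetric positive definite to be a covariance matrix.
assert np.allclose(V, V.T) and np.linalg.eigvalsh(V).min() > 0
```

Any candidate \({\varvec{V}}\) built from managerial judgment should pass the same positive definiteness check before use.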

Conditional on the sets \({\mathfrak {B}}_{j}\) and given the distribution assumptions on \(\xi _{i,j,l}\) and \(\zeta _{i,j,l}\), we analyze the joint distribution of \(\left( \log P_{i,j+1,1},\ldots \log P_{i,j+1,n},\ldots ,\log P_{i,J,1},\ldots ,\log P_{i,J,n}\right) ^{\top }\). For ease of calculation, we consider

$$\begin{aligned} {\varvec{y}}_{i}=\left( \log P_{i,1,1},\log I_{i,1,1},\ldots ,\log P_{i,1,n},\log I_{i,1,n},\ldots ,\log P_{i,J,1},\log I_{i,J,1},\ldots ,\log P_{i,J,n},\log I_{i,J,n}\right) ^{\top }. \end{aligned}$$

Let \({\varvec{B}}\) be an invertible matrix determined via logarithmic transformation, such that \({\varvec{y}}_{i}={\varvec{B}}{\varvec{x}}_{i}\). Thus,

$$\begin{aligned} {\varvec{y}}_{i} \sim N\left( \varvec{\mu },\varvec{\Sigma }\right) , \end{aligned}$$

where \(\varvec{\mu }={\varvec{B}}{\varvec{u}}\) and \(\varvec{\Sigma }={\varvec{B}}{\varvec{V}}{\varvec{B}}^{\top }\).
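The role of \({\varvec{B}}\) can be illustrated with a toy example (one LOB, \(J=2\), hypothetical \({\varvec{u}}\) and \({\varvec{V}}\)). To keep the illustrative \({\varvec{B}}\) invertible, the duplicated coordinate \(\log I_J=\log P_J\) is replaced by \(\log I_0\), and the initial level \(\log P_0\) is normalized to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# One LOB, J = 2 (hypothetical sizes); x = (xi_1, zeta_1, xi_2, zeta_2) and
# y = (log P_1, log P_2, log I_1, log I_0), with log P_0 normalized to zero.
B = np.array([
    [1.0,  0.0, 0.0,  0.0],   # log P_1 = xi_1
    [1.0,  0.0, 1.0,  0.0],   # log P_2 = xi_1 + xi_2
    [1.0,  0.0, 1.0, -1.0],   # log I_1 = log P_2 - zeta_2  (since I_2 = P_2)
    [1.0, -1.0, 1.0, -1.0],   # log I_0 = log I_1 - zeta_1
])

u = np.array([0.8, 0.4, 0.3, 0.2])        # hypothetical mean of x
V = np.diag([0.09, 0.06, 0.04, 0.03])     # hypothetical covariance of x

mu    = B @ u                             # mean of y
Sigma = B @ V @ B.T                       # covariance of y

# Monte Carlo check that the sample mean of y = B x matches mu.
x = rng.multivariate_normal(u, V, size=200_000)
y = x @ B.T
```

Because \({\varvec{y}}_i\) is a linear map of a Gaussian vector, its distribution is fully determined by \(({\varvec{B}}{\varvec{u}},{\varvec{B}}{\varvec{V}}{\varvec{B}}^{\top })\).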

Furthermore, we define

$$\begin{aligned} {\varvec{y}}_{i,j}^{(1)}= \left( \log P_{i,1,1},\log I_{i,1,1},\ldots ,\log P_{i,1,n},\log I_{i,1,n},\ldots ,\log P_{i,j,1},\log I_{i,j,1},\ldots ,\log P_{i,j,n},\log I_{i,j,n}\right) ^{\top }, \end{aligned}$$


$$\begin{aligned} {\varvec{y}}_{i,j}^{(2)}=\left( \log P_{i,j+1,1},\log I_{i,j+1,1},\ldots ,\log P_{i,j+1,n},\log I_{i,j+1,n},\ldots ,\log P_{i,J,1},\log I_{i,J,1},\ldots ,\log P_{i,J,n},\log I_{i,J,n}\right) ^{\top }. \end{aligned}$$

This means \({\varvec{y}}_{i}=({\varvec{y}}_{i,j}^{(1)},{\varvec{y}}_{i,j}^{(2)})^{\top }\). Thus, the mean and covariance matrix can be partitioned in the same way as follows:

$$\begin{aligned} \varvec{\mu }=(\varvec{\mu }_{(1)},\varvec{\mu }_{(2)})^{\top }, \end{aligned}$$


$$\begin{aligned} \varvec{\Sigma }=\left[ \begin{array}{cc} \varvec{\Sigma }_{(1)(1)} &{} \varvec{\Sigma }_{(1)(2)}\\ \varvec{\Sigma }_{(2)(1)} &{} \varvec{\Sigma }_{(2)(2)}\end{array}\right] . \end{aligned}$$

Following the Schur complement concept (see Zhang (2006), for example), we have

$$\begin{aligned} {\varvec{y}}_{i,j}^{(2)} | {\varvec{y}}_{i,j}^{(1)} \sim N\left( \bar{\varvec{\mu }},\bar{\varvec{\Sigma }}\right) , \end{aligned}$$


$$\begin{aligned} \bar{\varvec{\mu }}=\varvec{\mu }_{(2)}+\varvec{\Sigma }_{(2)(1)}\varvec{\Sigma }_{(1)(1)}^{-1} ({\varvec{y}}_{(1)}-\varvec{\mu }_{(1)}) \end{aligned}$$


$$\begin{aligned} \bar{\varvec{\Sigma }}=\varvec{\Sigma }_{(2)(2)}-\varvec{\Sigma }_{(2)(1)}\varvec{\Sigma }_{(1)(1)}^{-1}\varvec{\Sigma }_{(1)(2)}. \end{aligned}$$
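The conditional mean and covariance above follow the standard Gaussian conditioning formulas; a minimal numerical sketch, using generic hypothetical values for \(\varvec{\mu }\), \(\varvec{\Sigma }\), and the observed block, is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Conditioning a Gaussian vector via the Schur complement (hypothetical sizes):
# y = (y1, y2), with the first k components observed.
d, k = 5, 2
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)     # a generic positive definite covariance
mu = rng.standard_normal(d)

S11, S12 = Sigma[:k, :k], Sigma[:k, k:]
S21, S22 = Sigma[k:, :k], Sigma[k:, k:]

y1 = rng.standard_normal(k)         # a hypothetical observed block y^{(1)}
mu_bar    = mu[k:] + S21 @ np.linalg.solve(S11, y1 - mu[:k])
Sigma_bar = S22 - S21 @ np.linalg.solve(S11, S12)
```

Note that \(\bar{\varvec{\Sigma }}\) does not depend on the observed values, only on the partition of \(\varvec{\Sigma }\).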

Suppose \(\bar{\varvec{\mu }}^{\prime }\) and \(\bar{\varvec{\Sigma }}^{\prime }\) are the respective projections of \(\bar{\varvec{\mu }}\) and \(\bar{\varvec{\Sigma }}\) onto the space that contains \(\left( \log P_{i,j+1,1},\ldots \log P_{i,j+1,n},\ldots ,\log P_{i,J,1},\ldots ,\log P_{i,J,n}\right) ^{\top }\). The entries in \(\bar{\varvec{\mu }}^{\prime }\) and \(\bar{\varvec{\Sigma }}^{\prime }\) then correspond to the mean vector and covariance matrix of \(\log P_{i,j,l}\) for \(0\le i \le J\), \(0\le j \le J\) and \(1\le l \le n\). Consistent with the assumption propounded in Merz and Wüthrich (2010), the vector \(\left( \log P_{i,j+1,1},\ldots \log P_{i,j+1,n},\ldots ,\log P_{i,J,1},\ldots ,\log P_{i,J,n}\right) ^{\top }\) follows a multivariate normal distribution with parameters \(\bar{\varvec{\mu }}^{\prime }\) and \(\bar{\varvec{\Sigma }}^{\prime }\), that is,

$$\begin{aligned} \left( \log P_{i,j+1,1},\ldots \log P_{i,j+1,n},\ldots ,\log P_{i,J,1},\ldots ,\log P_{i,J,n}\right) ^{\top } \sim N\left( \bar{\varvec{\mu }}^{\prime },\bar{\varvec{\Sigma }}^{\prime } \right) . \end{aligned}$$

We denote the paid losses of the \(l^{th}\) LOB for the accident year i and during the \(j^{th}\) development year by \(C_{i,j,l}\). Thus,

$$\begin{aligned} C_{i,j,l}=P_{i,j,l}-P_{i,j-1,l}. \end{aligned}$$

As we are interested in the future unpaid losses, we require the distribution of \(C_{i,j,l}\). The total paid losses in calendar year k are then given by

$$\begin{aligned} C_{k,l}=\sum _{i+j=k}C_{i,j,l}. \end{aligned}$$

At time J, the total discounted future unpaid losses is given by

$$\begin{aligned} R_{J,l}=\sum _{k=1}^{J}\frac{C_{J+k,l}}{(1+r_{k})^{k}}, \end{aligned}$$

where \(r_{k}\) is the discount rate at time k. Thus, \(R_{J,l}\) denotes the time-J outstanding loss liabilities of the claims arising from accident years 0 to J. It is also the best estimate of the liability for the incurred claims related to past service up to time J. Hence, the total outstanding claims liability across all LOBs is

$$\begin{aligned} R_{J}=\sum _{l=1}^{n}R_{J,l}. \end{aligned}$$
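The chain of computations from cumulative paid losses to the discounted reserve can be sketched as follows, using a hypothetical \(3\times 3\) paid triangle for one LOB and a flat discount rate:

```python
import numpy as np

# Hypothetical cumulative paid losses P[i, j] for one LOB,
# accident years i = 0..J and development years j = 0..J, with J = 2.
J = 2
P = np.array([[100., 150., 170.],
              [110., 160., 185.],
              [120., 175., 200.]])

# Incremental payments C[i, j] = P[i, j] - P[i, j-1]  (with C[i, 0] = P[i, 0]).
C = np.diff(P, axis=1, prepend=0.0)

# Calendar-year totals C_k = sum over i + j = k, and the discounted reserve
# R_J = sum_{k=1}^{J} C_{J+k} / (1 + r)^k over the future diagonals.
r = 0.024
C_cal = {k: sum(C[i, k - i] for i in range(J + 1) if 0 <= k - i <= J)
         for k in range(2 * J + 1)}
R_J = sum(C_cal[J + k] / (1 + r) ** k for k in range(1, J + 1))
```

In the actual application, the lower-right entries of the triangle are simulated rather than observed; the arithmetic of the discounting step is identical.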

Risk adjustment calculation

Notably, IFRS 17 mandates that RAs must be calculated based on the discounted fulfillment cash flows over the term of the policies. As this new standard does not designate any estimation technique in determining the RA, we adopt four methods outlined in an RA document by the International Actuarial Association (2018): value at risk (VaR), conditional tail expectation (CTE), Wang transform (WT), and cost of capital (CoC).

VaR is the loss level that the loss random variable exceeds with probability \(1-\alpha \), where \(\alpha \) is the confidence level (CL). Therefore, if S is the loss random variable, then the \(100\alpha \%\) VaR, denoted by \(\text{ VaR}_{\alpha }(S)\), is given by

$$\begin{aligned} \text{ VaR}_{\alpha }(S)=\inf \{s:{\mathbb {P}}(S\le s)\ge \alpha \}. \end{aligned}$$

The RA is equal to the difference between the VaR and the corresponding probability-weighted expected value. The drawback of VaR is that it disregards the extreme values beyond the CL, which could lead to an underestimation of risk.

The CTE’s advantage over VaR is its concentration on the tail of the loss distribution; it is also a coherent risk measure and an alternative quantile-based technique. Specifically, the CTE is the expected loss given that the loss exceeds \(\text{ VaR}_{\alpha }(S)\). Therefore,

$$\begin{aligned} \text{ CTE}_{\alpha }(S)={\mathbb {E}}[S|S>\text{ VaR}_{\alpha }(S)]. \end{aligned}$$

The RA is then calculated as the difference between the CTE and the corresponding probability-weighted expected value.
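Both quantile-based RAs can be computed directly from a simulated reserve sample. The sketch below uses a hypothetical lognormal sample in place of the simulated outstanding liabilities:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical simulated reserve sample standing in for the MC output.
S = rng.lognormal(mean=10.0, sigma=0.3, size=100_000)
alpha = 0.75                                  # confidence level

var_alpha = np.quantile(S, alpha)             # VaR_alpha(S): the alpha-quantile
cte_alpha = S[S > var_alpha].mean()           # E[S | S > VaR_alpha(S)]

ra_var = var_alpha - S.mean()                 # RA under VaR
ra_cte = cte_alpha - S.mean()                 # RA under CTE
```

Since the CTE averages only the losses beyond the VaR, the CTE-based RA is always at least as large as the VaR-based RA at the same confidence level.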

Another coherent risk measure that we include is the WT, which is typified as a distortion risk measure; see Wang (2000, 2002). Under the WT, the probability distribution is adjusted according to risk preferences: lower adjusted probabilities are assigned to favorable outcomes, while higher adjusted probabilities are assigned to unfavorable outcomes. Miccolis and Heppen (2010) chose a WT with calibrated parameters to provide RAs as per IFRS 17. A general expression for the distortion risk measure for a non-negative loss random variable S is given by

$$\begin{aligned} g_{\chi }(s)=\int _{0}^{\infty }\chi \left( 1-F_{S}(s)\right) \text{ d }s, \end{aligned}$$

where \(F_{S}(s)\) is the CDF of S and \(\chi (x)\) is a distortion function \(\chi :[0,1]\rightarrow [0,1]\), which is a non-decreasing function with \(\chi (0)=0\) and \(\chi (1)=1\). The distortion function for WT is given by

$$\begin{aligned} \chi (x)=\Phi \left( \Phi ^{-1}(x)+\Phi ^{-1}(\eta )\right) , \end{aligned}$$

where \(\eta \) is a risk aversion parameter; the larger the value of \(\eta \), the greater the risk aversion. The RA is calculated as the difference between the expected value under the adjusted (distorted) probabilities and the expected value under the original probabilities.
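On a finite simulated sample, the distortion integral reduces to a reweighting of the order statistics. The following sketch evaluates the WT measure on a hypothetical lognormal sample; note that \(\eta =0.5\) gives \(\Phi ^{-1}(\eta )=0\) and recovers the ordinary mean.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

def wang_measure(sample, eta):
    """Wang-transform distortion measure on an empirical sample: order
    statistic s_(i) receives mass chi(1 - (i-1)/n) - chi(1 - i/n), where
    chi(x) = Phi(Phi^{-1}(x) + Phi^{-1}(eta))."""
    s = np.sort(sample)
    n = len(s)
    shift = nd.inv_cdf(eta)
    chi = np.array([nd.cdf(nd.inv_cdf(q) + shift) if 0.0 < q < 1.0 else q
                    for q in 1.0 - np.arange(n + 1) / n])
    weights = chi[:-1] - chi[1:]    # distorted mass per order statistic; sums to 1
    return float(s @ weights)

rng = np.random.default_rng(4)
S = rng.lognormal(10.0, 0.3, 50_000)    # hypothetical simulated reserves
ra_wang = wang_measure(S, eta=0.75) - S.mean()
```

For \(\eta >0.5\) the distorted weights shift mass toward the largest losses, so the measure (and hence the RA) exceeds the plain mean.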

Under the concept of CoC, an entity will determine its risk preference based on its selection of a capital amount appropriate for the risks that are relevant to IFRS 17’s measurement objectives. Among the various approaches in determining capital requirements, the assigned capital amount used to compute the CoC adopted in this paper is the difference between (a) the amount calculated under the probability distribution associated with the selected CL and (b) the amount using the probability weighted expected value.

RA is computed as the present value of the future cost of capital associated with relevant cash flows. This is expressed as

$$\begin{aligned} \text{ risk } \text{ adjustment }=\sum _{i=1}^{n}\frac{c_{i}A_{i}}{(1+r_{i})^{i}}, \end{aligned}$$

where \(A_{i}\) is the assigned capital amount for period ending at time i, \(c_{i}\) is the CoC’s rate at time i, and \(r_{i}\) is the discount rate at time i.
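With hypothetical capital amounts and flat CoC and discount rates, the present-value formula above is a short computation:

```python
# Cost-of-capital RA: present value of the capital charges (hypothetical inputs).
A = [120.0, 90.0, 50.0]        # assigned capital for periods ending at i = 1, 2, 3
c = 0.06                       # cost-of-capital rate, held flat for simplicity
r = 0.024                      # flat discount rate

ra_coc = sum(c * A_i / (1 + r) ** i for i, A_i in enumerate(A, start=1))
```

In practice, \(c_{i}\), \(r_{i}\), and the capital amounts would vary by period; the flat rates here are purely illustrative.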

Choosing an appropriate \(c_{i}\) is challenging because it must meet the standard’s specific requirements. The rate should reflect a rate of return such that the entity is indifferent between insurance contract liabilities with uncertain cash flows and liabilities with fixed cash flows. Additionally, the CoC rate should consider the entity’s risk preference and experience. A general approach to quantifying an RA based on the CoC technique is described in Meyers (2017); it incorporates stochastic path dependencies, given that the capital amount changes as time progresses, and is applied to the claims development triangles of the unpaid losses.

Numerical illustration

In this section, we present empirical demonstrations using incurred claims and paid losses data on Ontario automobile insurance policies from 2002 to 2016 inclusive. We study three groups of policies: bodily injury, direct compensation, and accident benefit. This implies that we have six different data sets: (1) bodily injury incurred claims, (2) bodily injury paid losses, (3) direct compensation incurred claims, (4) direct compensation paid losses, (5) accident benefit incurred claims, and (6) accident benefit paid losses. The data sets were compiled by the General Insurance Statistical Agency.

Before performing the numerical implementation under our model setting, it is crucial to verify the assumption of the chain-ladder data structure with dependencies. All the data sets in this empirical illustration are displayed as claims development triangles with a chain-ladder structure. For instance, Tables 2 and 3 display the incurred claims and paid losses of the direct compensation policy in the form of a triangle. Furthermore, we examine the residuals to ensure the existence of a chain-ladder data structure by employing the time series and lag plots, and the turning point test (Bhattacharyya 1984; Montgomery et al. 2015; Salvidio et al. 2016). Our findings reveal that the residuals are independent and identically distributed (IID), and the data conform with the proposed structure. To validate the dependency assumptions, we compute the correlation coefficients from \({\varvec{V}}\) following Happ and Wüthrich (2013). Table 4 shows the correlation coefficients between the incurred claims for a specific year and the paid losses after s years for all the three groups of policies. Table 5 reports the correlation coefficients between the incurred claims in one LOB for a specific year and the paid losses in another LOB after s years. For instance, \(\rho _{s,(1,2)}\) denotes the correlation coefficient between the incurred claims in business line 1 for a specific year and the paid losses in business line 2 after s years. Table 6 displays the correlation coefficients between the paid losses in one LOB for a specific year and the paid losses in another LOB after s years, and the correlation coefficients between the incurred claims in one LOB for a specific year and the incurred claims in another LOB after s years. Our findings show that the dependency structure exists in the data sets. Hence, the implementation of our model to the chain-ladder type data with dependency assumptions is well-justified.

Table 2 Observed cumulative paid losses \(P_{i,j}\) of the direct compensation policy
Table 3 Observed cumulative incurred losses \(I_{i,j}\) of the direct compensation policy
Table 4 Correlation between the incurred claims and paid losses in the same LOB
Table 5 Correlation between the incurred claims in one LOB and the paid losses in another LOB
Table 6 Correlation between different LOBs

These three types of insurance policies have various features and show different trends over the development years. In this illustration, we have \(n=3\) and \(J=14\); that is, we assume that all claims are settled after 14 years. Moreover, the horizon (latest observed calendar) year is assumed to be 2016, and we forecast the future claims after 2016. The estimates of the parameters \(\phi _{j,l},\sigma _{j,l},\varphi _{j,l},\tau _{j,l}\), namely the means and standard deviations of the paid and incurred log development ratios, for \(0\le j \le J\) and \(1\le l \le n\) are obtained by using the maximum likelihood method. The formulae for the estimators are given by

$$\begin{aligned} {\hat{\phi }}_{j,l}=\frac{1}{J+1-j}\sum _{i=1}^{J+1-j} \log \frac{P_{i,j,l}}{P_{i,j-1,l}},~~~{\hat{\sigma }}_{j,l}^{2} =\frac{1}{J-j}\sum _{i=1}^{J+1-j}\left( \log \frac{P_{i,j,l}}{P_{i,j-1,l}}-{\hat{\phi }}_{j,l}\right) ^{2}, \end{aligned}$$


$$\begin{aligned} {\hat{\varphi }}_{j,l}=\frac{1}{J+1-j}\sum _{i=1}^{J+1-j} \log \frac{I_{i,j,l}}{I_{i,j-1,l}},~~~{\hat{\tau }}_{j,l}^{2} =\frac{1}{J-j}\sum _{i=1}^{J+1-j} \left( \log \frac{I_{i,j,l}}{I_{i,j-1,l}}-{\hat{\varphi }}_{j,l}\right) ^{2}. \end{aligned}$$

As there are insufficient data points to estimate \(\sigma _{14,l}\) and \(\tau _{14,l}\), we extrapolate their values. Tables 7 and 8 display the estimates of these parameters.

Table 7 Parameter estimates recovered from paid losses data and their corresponding standard errors (SEs)
Table 8 Parameter estimates recovered from incurred claims data and their corresponding standard errors (SEs)
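The estimators above can be computed column by column from an observed triangle. The sketch below uses a small hypothetical paid triangle, with `np.nan` marking the unobserved lower-right cells:

```python
import numpy as np

# Hypothetical cumulative paid triangle for one LOB (J = 3); np.nan marks the
# unobserved lower-right part of the triangle.
P = np.array([[100., 150., 170., 180.],
              [110., 160., 185., np.nan],
              [120., 175., np.nan, np.nan],
              [130., np.nan, np.nan, np.nan]])

J = P.shape[1] - 1
phi_hat, sigma_hat = {}, {}
for j in range(1, J + 1):
    # Observed log development ratios for column j (J + 1 - j accident years).
    ratios = np.log(P[:J + 1 - j, j] / P[:J + 1 - j, j - 1])
    phi_hat[j] = ratios.mean()                     # MLE of phi_j
    # Variance estimate with divisor (#observations - 1), as in the text;
    # the last column has a single observation, so its sd must be extrapolated.
    sigma_hat[j] = ratios.std(ddof=1) if len(ratios) > 1 else np.nan
```

The `nan` for the final development year mirrors the situation in the text, where \(\sigma _{14,l}\) and \(\tau _{14,l}\) cannot be estimated and are extrapolated instead.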

To verify that our data set for the numerical illustration satisfies the normality assumption in “Model description” section, we perform the Kolmogorov–Smirnov (KS), Shapiro–Wilk (SW), and Jarque–Bera (JB) tests on the observations \(\displaystyle \log \frac{P_{i,j,l}}{P_{i,j-1,l}}\) and \(\displaystyle \log \frac{I_{i,j,l}}{I_{i,j-1,l}}\) covering the first three accident years. The KS test is a nonparametric test of the null hypothesis that the CDF of the data is equal to the hypothesized CDF; it can be used to assess normality when the hypothesized CDF is normal. The SW test is also a normality test, but it is designed for small sample sizes. The JB test is a goodness-of-fit test of whether the sample data follow a normal distribution. The results of the three statistical tests are shown in Table 9. The high p-values indicate that we cannot reject the null hypothesis of a normal CDF. Therefore, it is reasonable to adopt the normality assumption in our modeling approach.

Table 9 p-values for the KS, SW, and JB normality tests
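All three tests are available in `scipy.stats`; the following sketch applies them to hypothetical log development ratios standing in for the observed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical log development ratios log(P_{i,j} / P_{i,j-1}) for one column.
log_ratios = rng.normal(loc=0.35, scale=0.05, size=30)

# Standardize for the KS test, since its null must fully specify the CDF.
z = (log_ratios - log_ratios.mean()) / log_ratios.std(ddof=1)
p_ks = stats.kstest(z, 'norm').pvalue        # Kolmogorov-Smirnov
p_sw = stats.shapiro(log_ratios).pvalue      # Shapiro-Wilk (small samples)
p_jb = stats.jarque_bera(log_ratios).pvalue  # Jarque-Bera
# High p-values mean normality is not rejected at the usual 5% level.
```

Note that applying the KS test with parameters estimated from the same data makes it conservative; the SW test is usually preferred for samples of this size.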

To compute the risk measures, the distribution of reserves for future unpaid losses is necessary. Note that the uncertainty of the predicted unpaid losses comes from both the uncertainty of the future process and that of the parameter estimates; both must therefore be considered to determine the distribution. We employ the nonparametric bootstrap method to quantify the parameter uncertainty. Bootstrapping is a simulation-based statistical technique used to measure the accuracy of sample estimates. As statistics calculated from samples can be used to estimate population parameters, we employ the bootstrap distribution to describe the behavior of the quantities being estimated, such as SEs and CLs. Bootstrapping is a powerful technique in stochastic claims reserving (e.g. Gao 2018; England et al. 2019; Jeong et al. 2021). Considering the dependence structure in our model setting, we employ a modified bootstrapping approach based on the stationary bootstrap method posited in Politis and Romano (1994). This approach is applicable to models with dependent data through a moving block bootstrap technique. The bootstrapped observations under the moving block bootstrap are generally non-stationary; the modified method circumvents this problem by resampling blocks of random lengths. Based on this bootstrapping technique, Lin and Verrall (2010) developed an adapted bootstrap approach to deal with dependent data containing both paid and incurred claims information. We follow their bootstrapping procedure to tackle dependent data across multiple LOBs.

Suppose \(\{P_{i,j,l}, I_{i,j,l}:0<i\le J, 0<j\le J-i,1\le l \le n\}\) is an observed data set. We consider the log ratio of the data, that is, \(\{\xi _{i,j,l}=\log \frac{P_{i,j,l}}{P_{i,j-1,l}}, \zeta _{i,j,l}=\log \frac{I_{i,j,l}}{I_{i,j-1,l}}:0<i\le J, 0<j\le J-i,1\le l \le n\}\). Recall that \(\phi _{j,l}\) and \(\sigma _{j,l}\) are the respective mean and standard deviation of \(\xi _{i,j,l}\) in year j, while \(\varphi _{j,l}\) and \(\tau _{j,l}\) are the respective mean and standard deviation of \( \zeta _{i,j,l}\) in year j. Consider the residuals

$$\begin{aligned} D_{i,j,l}=\frac{\xi _{i,j,l}-{\widehat{\phi }}_{j,l}}{{\widehat{\sigma }}_{j,l}}~~~\text{ and }~~~E_{i,j,l}=\frac{\zeta _{i,j,l}-{\widehat{\varphi }}_{j,l}}{{\widehat{\tau }}_{j,l}}, \end{aligned}$$

where \({\widehat{\phi }}_{j,l}\), \({\widehat{\sigma }}_{j,l}\), \({\widehat{\varphi }}_{j,l}\), and \({\widehat{\tau }}_{j,l}\) are the estimates of \(\phi _{j,l}\), \(\sigma _{j,l}\), \(\varphi _{j,l}\) and \(\tau _{j,l}\), respectively. Similar to the work of Lin and Verrall (2010), the residuals are grouped as \(U_{i,j}=\{D_{i,j,l},E_{i,j,l}:1\le l \le n\}\). We now follow and apply the stationary bootstrap method. For each i, starting from \(j=1\), let \(U_{i^{*}_{i},j^{*}_{1}}\) denote the residuals randomly drawn from \(\{U_{i,j}\}\). The new observations are given by

$$\begin{aligned} \xi ^{*}_{i,1,l}={\widehat{\phi }}_{1,l}+{\widehat{\sigma }}_{1,l}D_{i^{*}_{i},j^{*}_{1},l}~~~\text{ and }~~~\zeta ^{*}_{i,1,l}={\widehat{\varphi }}_{1,l}+{\widehat{\tau }}_{1,l}E_{i^{*}_{i},j^{*}_{1},l}. \end{aligned}$$

For \(j+1\), we pick the residuals \(U_{i^{*}_{i},j^{*}_{j+1}}\) at random from the original residual sets \(\{U_{i,j}\}\) with a probability p, i.e.,

$$\begin{aligned} \xi ^{*}_{i,j,l}={\widehat{\phi }}_{j,l}+{\widehat{\sigma }}_{j,l}D_{i^{*}_{i},j^{*}_{j+1},l}~~~\text{ and }~~~\zeta ^{*}_{i,j,l}={\widehat{\varphi }}_{j,l}+{\widehat{\tau }}_{j,l}E_{i^{*}_{i},j^{*}_{j+1},l}, \end{aligned}$$

and then we draw \(U_{i^{*}_{i},j^{*}_{j}+1}\) with a probability \(1-p\), i.e.,

$$\begin{aligned} \xi ^{*}_{i,j,l}={\widehat{\phi }}_{j,l}+{\widehat{\sigma }}_{j,l}D_{i^{*}_{i},j^{*}_{j}+1,l}~~~\text{ and }~~~\zeta ^{*}_{i,j,l}={\widehat{\varphi }}_{j,l}+{\widehat{\tau }}_{j,l}E_{i^{*}_{i},j^{*}_{j}+1,l}. \end{aligned}$$

As argued in Politis and Romano (1994), the probability p is chosen as \(p=1/b\), where b is the expected block size. New parameter estimates for each corresponding distribution are then determined from these bootstrapped observations.
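A simplified one-dimensional version of the stationary bootstrap (restart probability \(p=1/b\), wrapping at the series end) can be sketched as follows; the residual series is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

def stationary_bootstrap(residuals, size, b=3):
    """Politis-Romano stationary bootstrap: blocks of geometric length with
    mean b, so p = 1/b is the probability of starting a new block (a
    simplified 1-D sketch of the scheme described in the text)."""
    n = len(residuals)
    p = 1.0 / b
    out = np.empty(size)
    idx = rng.integers(n)                 # start of the first block
    for t in range(size):
        out[t] = residuals[idx]
        if rng.random() < p:
            idx = rng.integers(n)         # start a new block (probability p)
        else:
            idx = (idx + 1) % n           # continue the block, wrapping at the end

    return out

res = rng.standard_normal(40)             # hypothetical standardized residuals
sample = stationary_bootstrap(res, size=40)
```

Resampling blocks of geometric random length keeps the bootstrapped series stationary while preserving the local dependence of the residuals.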

Using the parameter estimates, the MC simulation method is conducted to forecast future unpaid losses and estimate the outstanding claims liabilities via the following procedure:

  1. Resample the data by using the bootstrap method to estimate the parameters.

  2. For each i (\(0\le i\le J\)), generate values of \(P_{i,j,l}\) (\(i+1 \le j \le J\), \(1 \le l \le n\)) through equation (1) using the new estimated parameter values.

  3. Calculate \(C_{i,j,l}\), \(C_{J+l,l}\), \(R_{J,l}\) and \(R_{J}\) using equations (2), (3), (4), and (5), respectively.

  4. Repeat steps 1–3 N times.
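The four steps can be outlined as a driver loop. Here `bootstrap` and `forecast_paid` are hypothetical callables standing in for the resampling of step 1 and the PIC forecasting equations (1)–(5), and discrete annual discounting at a constant rate r is assumed; this is a sketch of the procedure, not the authors' code.

```python
def simulate_reserves(data, J, n_types, r, N, bootstrap, forecast_paid, seed=0):
    """Sketch of the MC procedure: for each replicate, re-estimate the
    parameters on a bootstrap resample, forecast the unpaid losses, and
    discount them to obtain one draw of the outstanding claims liability.
    """
    import random
    rng = random.Random(seed)
    draws = []
    for _ in range(N):
        params = bootstrap(data, rng)                    # step 1: new estimates
        unpaid = forecast_paid(params, J, n_types, rng)  # steps 2-3: C_{J+l}
        # step 3 (cont.): discount the cash flow paid l years ahead
        R_J = sum(c / (1.0 + r) ** l for l, c in enumerate(unpaid, start=1))
        draws.append(R_J)
    return draws
```

Repeating this N times (step 4) yields an empirical sample of \(R_{J}\) from which the risk measures below are computed.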

Considering that \(J=14\) and the horizon year is 2016, we forecast the unpaid losses from 2017 to 2031. These unpaid losses correspond to insurance contracts with accident years from 2002 to 2016. IFRS 17 directs that the discount rate applied to the estimates of the future cash flows should (1) reflect the cash-flow and liquidity characteristics of the insurance contracts and (2) be consistent with currently observed market prices. IFRS 17 does not specify any technique to determine the discount rate; the “top-down” and “bottom-up” methods are suggested by KPMG (2017). For ease of calculation, we assume a constant rate \(r=0.024\) in this example.

We implemented the MC simulation with 1,000,000 replicates. The histograms of the unpaid losses for bodily injury type policies in different years (\(C_{J+l}\)) are displayed in Fig. 1. The means and standard deviations of the unpaid losses in different years are shown in Table 10, with those of the total discounted unpaid losses (i.e., the outstanding claims liabilities) in the last row. From Table 10, the direct compensation policy has much lower means and standard deviations, consistent with the fact that its claims are almost always reported and paid within the first few years. The accident benefit policy has the highest volatilities, suggesting that its future cash flows are more uncertain than those of the other two policies. The histograms of the discounted total unpaid losses (\(R_J\)) are portrayed in Fig. 2.

Fig. 1

Histogram of unpaid losses (Bodily injury)

Table 10 Mean and standard deviation for unpaid losses in future years

Note that it is impossible to obtain the exact distribution of the outstanding claims liabilities, so an approximation must be employed; the most commonly used one is the empirical CDF. Parametric approximations, such as the normal and log-normal distributions, could also be employed. The moment-based density approximation introduced in Provost (2005) is conceptually simple and computationally efficient, produces quite accurate results, and provides useful functional representations of the density functions of interest. The resulting density approximants are mathematically equivalent to those obtained by employing orthogonal polynomials, yet this semiparametric technique eliminates some of the complications associated with the use of orthogonal polynomials while still yielding comparable approximants. Furthermore, compared with the normal power approximation, the moment-based density approximation is more accurate. These considerations motivate our adoption of the moment-based density approximation method. Under this method, a density function whose first moments are known is approximated by the product of (1) a base density, whose tail behavior is congruent with that of the distribution to be approximated, and (2) a polynomial of degree q. The parameters of the base density are determined by matching the moments of the loss random variable.

The choice of the base function depends on the loss distribution. Based on a preliminary examination of the distribution of the claims liabilities, we adopt the normal distribution for the approximation. In particular, the base function \(\Psi (x)\) with parameters \(\theta \) and c is given by

$$\begin{aligned} \Psi (x)=\frac{1}{\sqrt{2\pi }\,c}e^{-\frac{(x-\theta )^{2}}{2c^{2}}}. \end{aligned}$$

Let the moments of the standardized version of the random variable X (i.e., \((X-\theta )/c\)) be \(\alpha _{X}(i)\) for \(i=0,1,\ldots ,q\). As the theoretical moments of X are unavailable, we use the sample moments obtained from the simulated data. Denote the moments of the standardized base function by \(m_{X}(i)\) for \(i=0,1,\ldots ,2q\). The parameters \(\theta \) and c of \(\Psi (x)\) are given by the sample mean and sample standard deviation of the data, respectively.

The density \(f_{X}(x)\) of X is approximated as

$$\begin{aligned} f_{q}(x)=\Psi (x)\sum _{i=0}^{q}k_{i}\left( \frac{x-\theta }{c}\right) ^{i}; \end{aligned}$$

the \(k_{i}\text {'s},~i=0,\ldots ,q\) are polynomial coefficients and are determined via

$$\begin{aligned} (k_{0},k_{1},\ldots ,k_{q})^{\top }={\mathbf {M}}^{-1}(\alpha _{X}(0),\alpha _{X}(1),\ldots ,\alpha _{X}(q))^{\top }, \end{aligned}$$

where \({\mathbf {M}}\) is a \((q+1)\times (q+1)\) symmetric matrix whose \((i+1)^{\text{ th }}\) row is \((m_{X}(i),m_{X}(i+1),\ldots ,m_{X}(i+q))\).

Consequently, the approximated density of X is given by

$$\begin{aligned} f_{q}(x)=\frac{1}{\sqrt{2\pi }\,c}e^{-\frac{(x-\theta )^{2}}{2c^{2}}}\sum _{i=0}^{q}k_{i}\left( \frac{x-\theta }{c}\right) ^{i}. \end{aligned}$$
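A self-contained sketch of the moment-based approximant with a normal base density might look as follows. The standard normal moments equal \((i-1)!!\) for even i and 0 for odd i, and the linear system \({\mathbf {M}}k=\alpha \) is solved by elementary Gaussian elimination; function names are illustrative, not the authors' code.

```python
import math
from statistics import mean, stdev

def normal_moment(i):
    """i-th moment of the standard normal: 0 for odd i, (i-1)!! for even i."""
    if i % 2 == 1:
        return 0.0
    m = 1.0
    for k in range(i - 1, 0, -2):
        m *= k
    return m

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting (Ax = b)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def moment_based_density(sample, q):
    """Return a degree-q moment-based approximant f_q for the sample density.

    The base is the normal density with the sample mean theta and standard
    deviation c; the coefficients k_i solve M k = alpha, where alpha holds
    the sample moments of the standardized data and M is the Hankel matrix
    of standard normal moments.
    """
    theta, c = mean(sample), stdev(sample)
    z = [(x - theta) / c for x in sample]
    alpha = [mean(v ** i for v in z) for i in range(q + 1)]
    M = [[normal_moment(i + j) for j in range(q + 1)] for i in range(q + 1)]
    k = solve(M, alpha)
    def f_q(x):
        u = (x - theta) / c
        base = math.exp(-u * u / 2.0) / (math.sqrt(2.0 * math.pi) * c)
        return base * sum(k_i * u ** i for i, k_i in enumerate(k))
    return f_q
```

By construction the first row of \({\mathbf {M}}k=\alpha \) forces the approximant to integrate to one.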
Fig. 2

Density of total unpaid losses

Fig. 3

CDF of total unpaid losses

Fig. 4

Differences between the CDFs of approximations and empirical CDF

The approximated CDF of X can be obtained from equation (6). As an explicit formula for the quantile function is unavailable, quantiles can be computed numerically (e.g., by Newton's method). We consider the normal and log-normal approximations as benchmarks. The approximated densities for the total unpaid losses based on these three methods are displayed in Fig. 2. The corresponding CDFs are shown in Fig. 3, and the moment-based approximation is the closest to the empirical CDF. Additionally, Fig. 4 illustrates the differences between the corresponding CDF approximations of total unpaid losses and the empirical CDF. We observe that the moment-based density approximation generates the smallest deviations and clearly outperforms the other two alternatives.
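Newton's method for the quantile amounts to iterating \(x \leftarrow x-(F(x)-p)/f(x)\). A sketch, using the standard normal CDF as a stand-in for the approximated distribution:

```python
import math

def quantile_newton(p, cdf, pdf, x0, tol=1e-10, max_iter=100):
    """Solve F(x) = p by Newton's method: x <- x - (F(x) - p) / f(x).

    `cdf` and `pdf` are the approximated distribution and density
    functions; `x0` is a starting point (e.g., the sample mean).
    """
    x = x0
    for _ in range(max_iter):
        step = (cdf(x) - p) / pdf(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Standard normal pdf/CDF used here only as a stand-in for f_q and F_q
phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

For example, `quantile_newton(0.95, Phi, phi, 0.0)` recovers the 95% quantile of the stand-in distribution.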

The KS test is used to assess the goodness of fit of the approximations. The results of the test are provided in Table 11, showing that the moment-based density approximation outperforms the two other approximations. More specifically, from the p-values, we cannot reject the null hypothesis that the simulated total unpaid losses follow the moment-based approximated distribution, whereas the null hypothesis is rejected for the other two approximations.
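The KS statistic underlying Table 11 is the largest gap between the empirical CDF and a candidate CDF; it can be computed directly as below (p-values then follow from the Kolmogorov distribution). This is a generic sketch, not tied to the paper's data.

```python
def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of `sample`
    and a candidate CDF: the maximum over both one-sided gaps at each
    order statistic."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs(f - i / n), abs((i + 1) / n - f))
    return d
```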

Table 11 Results of the KS test under different methods

RAs of the three policy types are computed using the VaR, CTE, WT, and CoC metrics based on the simulation results, with the normal and log-normal approximations serving as benchmarks. We select three CLs for the VaR and CTE: 90%, 95%, and 99%. The three risk-aversion parameter values for the WT are 0.1, 0.05, and 0.01. The yearly capital amounts under the CoC method are determined using the VaR, and the CoC rate is set at 0.08. The results are shown in Tables 12, 13, 14 and 15; the percentages refer to the ratios of the RAs to the probability-weighted expected values. In particular, the risk-margin percentages are obtained using the baseline amounts displayed in the last row of Table 10, namely the respective means of 7,976,162, 177,179, and 7,183,122 for the bodily injury, direct compensation, and accident benefit policies.
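The VaR, CTE, and WT computations on a simulated sample can be sketched as follows; the empirical-quantile conventions and the bisection-based normal inverse are illustrative choices, not the authors' exact implementation. In each case the RA is the risk measure minus the probability-weighted expected value.

```python
import math

def _Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def _Phi_inv(u, lo=-10.0, hi=10.0):
    # bisection inverse of the standard normal CDF, adequate for u in (0, 1)
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if _Phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def var(sample, cl):
    """VaR: the empirical cl-quantile of the simulated losses."""
    xs = sorted(sample)
    return xs[max(0, math.ceil(cl * len(xs)) - 1)]

def cte(sample, cl):
    """CTE: the average of the losses at or beyond the cl-quantile."""
    xs = sorted(sample)
    tail = xs[max(0, math.ceil(cl * len(xs)) - 1):]
    return sum(tail) / len(tail)

def wang_mean(sample, lam):
    """Mean under the Wang transform g(u) = Phi(Phi^{-1}(u) + lam),
    applied to the empirical survival function."""
    xs = sorted(sample)
    n = len(xs)
    g = lambda u: 0.0 if u <= 0 else 1.0 if u >= 1 else _Phi(_Phi_inv(u) + lam)
    return sum(x * (g((n - i) / n) - g((n - i - 1) / n))
               for i, x in enumerate(xs))

def risk_adjustment(sample, measure, **kw):
    """RA = risk measure minus the probability-weighted expected value."""
    return measure(sample, **kw) - sum(sample) / len(sample)
```

Because the CTE averages the tail beyond the cl-quantile, `risk_adjustment(sample, cte, cl=...)` is never below the corresponding VaR-based RA, matching the ordering observed in Tables 12 and 13.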

The moment-based density method yields results closer to those obtained from the empirical CDF than the normal and log-normal approximations do. Note that the RAs under the CTE method have the highest values because the CTE accounts for potential losses beyond the CL, whereas the RAs under the CoC method have much lower values.

Although IFRS 17 does not prescribe a method for determining RAs, the CLs must be disclosed. Therefore, if the WT or CoC method is used to calculate the RAs, the equivalent risk-aversion factor or the CL consistent with the VaR method must be reported.

Table 12 Risk adjustments using VaR
Table 13 Risk adjustments using CTE
Table 14 Risk adjustments using WT
Table 15 Risk adjustments using CoC

Table 16 displays the aggregated future cash flows in different years. It should be noted that the sums of the RAs computed under the four methods for the three business types (see Tables 12, 13, 14, and 15) are higher than the aggregated RAs computed through the corresponding four techniques (cf. Table 17). For instance, as shown in Table 12, the RAs for the bodily injury, direct compensation, and accident benefit types are 281,256, 11,421, and 353,028, respectively, calculated using the VaR with the moment-based approximation under a 90\(\%\) CL. The sum of these RAs is 645,705, which is higher than the corresponding aggregated RA of 586,476 presented in Table 17. The same conclusion holds in all the other cases. This implies that when all the policy types are considered together, the aggregated RA is lower than the sum of the RAs calculated separately for each line of business.
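The diversification effect noted above can be reproduced on synthetic data: for two imperfectly dependent lines (here independent, for simplicity), the VaR-based RA of the aggregated portfolio falls below the sum of the stand-alone RAs. The normal losses below are illustrative only, not the GISA data analyzed in the paper.

```python
import random

def ra_var(sample, cl=0.90):
    """Stand-alone RA under VaR: the cl-quantile minus the mean."""
    xs = sorted(sample)
    return xs[int(cl * len(xs))] - sum(xs) / len(xs)

# Two hypothetical independent lines of business with normal losses
rng = random.Random(7)
a = [rng.gauss(100.0, 20.0) for _ in range(100000)]
b = [rng.gauss(100.0, 20.0) for _ in range(100000)]
total = [x + y for x, y in zip(a, b)]

standalone = ra_var(a) + ra_var(b)   # sum of separately computed RAs
aggregated = ra_var(total)           # RA of the aggregated portfolio
```

For independent normal lines the aggregated 90% RA scales with \(\sqrt{2}\) times one line's standard deviation rather than twice, so `aggregated` is markedly below `standalone`.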

Table 16 The mean and standard deviation of aggregated unpaid losses
Table 17 Aggregated RA


This study proposed an efficient and accurate methodology and elaborated on the entire process of estimating RAs for claims according to the requirements of IFRS 17. The PIC model was adopted to describe the development of unpaid losses based on historical information with the data on incurred claims and paid losses. The MC simulation method was employed to generate samples of future unpaid losses. To approximate the distribution of claims reserves, the moment-based approximation method was used. Risk adjustments were computed in four different ways from the approximated distribution, and the bootstrap method was applied to obtain the distribution of the RAs.

The methodology and procedures described in the study could be used as implementation guidelines for the computation of the RAs of insurance companies' claims. We observe some limitations of this study, which need to be examined further. For instance, the constraint \(P_{i,j,l} \le I_{i,j,l}\) is not strictly assumed in the modeling framework of Merz and Wüthrich (2010), and the effect of possible breaches on forecasting accuracy is worth exploring in the future. Furthermore, the introduction of a dependency structure might result in over-parametrization. This issue can be investigated with the use of statistical techniques, such as the heuristic estimation proposed by Avanzi et al. (2018), which deserves a thorough separate analysis.

Future studies could extend the models and methods considered herein in many ways. The independence assumption on the link ratios could be relaxed by assuming, for instance, that they follow a multivariate skew-normal distribution (see Pigeon et al. 2014). A prior distribution of the model parameters could be considered, leading to the Bayesian PIC method as alluded to in Merz and Wüthrich (2010). When several types of policies are considered, risk dependencies and aggregation techniques should be treated with an appropriate approach; a copula method could be used to model the dependence structure and calculate the aggregated risks for all the policies (e.g., Shi and Frees 2011; Peter et al. 2014; Abdallah et al. 2015; Jeong and Dey 2020). In addition, the RA measurement only reflects the uncertainties originating from the non-financial risks associated with insurance contracts. For consolidated risk management, it would be worthwhile to develop a risk assessment framework integrating our proposed technique with the decision-making approaches employed in studies on financial risk evaluation and analysis (e.g., Kou et al. 2014, 2021a, b; Zha et al. 2021). It should also be noted that we only considered the historical data on incurred claims and paid losses. The reliability of the estimation results would increase if other useful information, including earned premium, earned exposure, and reserves, could be incorporated, subject to the accessibility of such data; this additional information may also decrease the variability of the model parameter estimates and improve the performance of future cash-flow forecasting. Finally, the computed RAs are based solely on future unpaid losses. When practically available, other relevant future cash flows that satisfy the requirements of IFRS 17 could be embedded in the calculation of RAs for an insurance company.

Availability of data and materials

The data used for the analysis and reflected in the research findings of this study are presented in the tables. Complementary data that support the findings of this study can be obtained from the General Insurance Statistical Agency (GISA); however, restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. The authors are willing to provide the data upon request and with the permission of GISA.



Abbreviations

BBA: Building block approach

CL: Confidence level

CDF: Cumulative distribution function

CSM: Contractual service margin

CTE: Conditional tail expectation

IASB: International Accounting Standards Board

IFRS: International Financial Reporting Standards

LOB: Line of business

PIC: Paid-incurred chain

VaR: Value at risk

RA: Risk adjustment

SE: Standard error

WT: Wang transform


  1. Abdallah A, Boucher J-P, Cossette H (2015) Modeling dependence between loss triangles with hierarchical Archimedean copulas. ASTIN Bull J IAA 45(3):577–599

  2. Avanzi B, Taylor G, Wong B (2018) Common shock models for claim arrays. ASTIN Bull J IAA 48(3):1109–1136

  3. Barnett G, Zehnwirth B (2000) Best estimates for reserves. Proc Casualty Actuar Soc 87:245–321

  4. Bhattacharyya GK (1984) Tests of randomness against trend or serial correlations. Handb Stat 4:89–111

  5. England PD, Verrall RJ, Wüthrich MV (2019) On the lifetime and one-year views of reserve risk, with application to IFRS 17 and Solvency II risk margins. Insur Math Econom 85:74–88

  6. Gao G (2018) Bayesian claims reserving methods in non-life insurance with Stan: an introduction. Springer, New York

  7. Gao H, Mamon R, Liu X (2017) Risk measurement of a guaranteed annuity option under a stochastic modelling framework. Math Comput Simul 132:100–119

  8. Gogol D (1993) Using expected loss ratios in reserving. Insur Math Econom 12(3):297–299

  9. Happ S, Wüthrich MV (2013) Paid-incurred chain reserving method with dependence modeling. ASTIN Bull J IAA 43(1):1–20

  10. Happ S, Merz M, Wüthrich M (2012) Claims development result in the paid-incurred chain reserving method. Insur Math Econom 51(1):66–72

  11. Hertig J (1985) A statistical approach to IBNR reserves in marine reinsurance. ASTIN Bull J IAA 15(2):171–183

  12. International Accounting Standards Board (2017) IFRS 17 insurance contracts. IFRS Foundation, London

  13. International Actuarial Association (2018) Risk adjustments for insurance contracts under IFRS 17. Ontario

  14. Jeong H, Dey DK (2020) Application of vine copula for multi-line insurance reserving. Risks 8(4):111

  15. Jeong H, Chang H, Valdez EA (2021) A non-convex regularization approach for stable estimation of loss development factors. Scand Actuar J.

  16. Jin T, Provost S, Ren J (2016) Moment-based density approximations for aggregate losses. Scand Actuar J 2016(3):216–245

  17. Kou G, Peng Y, Wang G (2014) Evaluation of clustering algorithms for financial risk analysis using MCDM methods. Inf Sci 275:1–12

  18. Kou G, Akdeniz ÖO, Dinçer H, Yüksel S (2021a) Fintech investments in European banks: a hybrid IT2 fuzzy multidimensional decision-making approach. Financ Innov 7(1):1–28

  19. Kou G, Xu Y, Peng Y, Shen F, Chen Y, Chang K, Kou S (2021b) Bankruptcy prediction for SMEs using transactional data and two-stage multiobjective feature selection. Decis Support Syst 140:113429

  20. KPMG (2017) IFRS 17 insurance contracts first impressions. Accessed 12 Apr 2018

  21. Lin H, Verrall RJ (2010) Bootstrap estimation of the predictive distributions of reserves using paid and incurred claims. Variance 4(2):121–135

  22. Lindholm M, Lindskog F, Palmborg L (2020) Financial position and performance in IFRS 17. Available at SSRN 3567745 (last version: 11 May 2020)

  23. Mack T (1991) A simple parametric model for rating automobile insurance or estimating IBNR claims reserves. ASTIN Bull J IAA 21(1):93–109

  24. Mack T (1993) Distribution-free calculation of the standard error of chain ladder reserve estimates. ASTIN Bull J IAA 23(2):213–225

  25. Merz M, Wüthrich M (2010) Paid-incurred chain claims reserving method. Insur Math Econom 46(3):568–579

  26. Meyers G (2017) A cost of capital risk margin formula for non-life insurance liabilities. In: Casualty Actuarial Society e-forum

  27. Miccolis R, Heppen D (2010) A practical approach to risk margins in the measurement of insurance liabilities for property and casualty (general insurance) under developing international financial reporting standards. In: International Congress of Actuaries, Cape Town, South Africa

  28. Montgomery DC, Jennings CL, Kulahci M (2015) Introduction to time series analysis and forecasting. Wiley, New York

  29. Peter GW, Dong AX, Kohn R (2014) A copula based Bayesian approach for paid-incurred claims models for non-life insurance reserving. Insur Math Econom 59:258–278

  30. Pigeon M, Antonio K, Denuit M (2014) Individual loss reserving using paid-incurred data. Insur Math Econom 58:121–131

  31. Politis DN, Romano JP (1994) The stationary bootstrap. J Am Stat Assoc 89(428):1303–1313

  32. Provost S (2005) Moment-based density approximants. Math J 9(4):727–756

  33. Renshaw A, Verrall R (1998) A stochastic model underlying the chain-ladder technique. Br Actuar J 4(4):903–923

  34. Salvidio S, Oneto F, Ottonello D, Pastorino MV (2016) Lagged influence of North Atlantic Oscillation on population dynamics of a Mediterranean terrestrial salamander. Int J Biometeorol 60(4):475–480

  35. Shi P, Frees EW (2011) Dependent loss reserving using copulas. ASTIN Bull J IAA 41(2):449–486

  36. Taylor G (2012) Loss reserving: an actuarial perspective. Springer, New York

  37. Wang S (2000) A class of distortion operators for pricing financial and insurance risks. J Risk Insur 67(1):15–36

  38. Wang S (2002) A universal framework for pricing financial and insurance risks. ASTIN Bull J IAA 32(2):213–234

  39. Wüthrich MV, Merz M (2008) Stochastic claims reserving methods in insurance. Wiley, New York

  40. Zehnwirth B (1994) Probabilistic development factor models with applications to loss reserve variability, prediction intervals and risk based capital. Casualty Actuarial Society Forum 2:447–606

  41. Zha Q, Kou G, Zhang H, Liang H, Chen X, Li CC, Dong Y (2021) Opinion dynamics in finance and business: a literature review and research opportunities. Financ Innov 6(1):1–22

  42. Zhang F (2006) The Schur complement and its applications. Springer, New York

  43. Zhao Y, Mamon R, Gao H (2018) A two-decrement model for the valuation and risk measurement of a guaranteed annuity option. Econom Stat 8:231–249



This work is supported by the MITACS Accelerate Grant (IT12339) in partnership with J. S. Cheng & Partners (JSCP) Inc, Toronto, the Ministry of Science and Technology of China through H. Xiong's Foreign Young Talents Program (QN20200017001), and the China Postdoctoral Science Foundation (2020M672913). Y. Zhao expresses his sincere appreciation for the hospitality of JSCP, where certain parts of this paper were conceptualised during an internship. Finally, we thank Dr Serge Provost for his valuable comments on the moment-based method, which gave clarity in tailoring it to our modelling setup.


This study was funded by the MITACS Accelerate Grant-Award Number IT12339, the Foreign Young Talents Program of the Ministry of Science and Technology of China (QN20200017001), and the China Postdoctoral Science Foundation (2020M672913).

Author information




All authors in the current work have contributed equally. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Heng Xiong.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

Reprints and Permissions

About this article


Cite this article

Zhao, Y., Mamon, R. & Xiong, H. Claim reserving for insurance contracts in line with the International Financial Reporting Standards 17: a new paid-incurred chain approach to risk adjustments. Financ Innov 7, 71 (2021).



  • Incurred claims
  • Paid losses
  • Paid-incurred chain model
  • Moment-based density approximation
  • Risk measures
  • International Financial Reporting Standards 17