Claim reserving for insurance contracts in line with the International Financial Reporting Standards 17: a new paid-incurred chain approach to risk adjustments

This study considers the risk management of insurance policies in line with the implementation of the new International Financial Reporting Standards 17. It applies the paid-incurred chain method to model the future unpaid losses by combining the information channels of both the incurred claims and paid losses. We propose the recovery of the empirical distribution of the outstanding claims liabilities associated with a group of contracts via moment-based density approximation. We determine the risk measures and adjustments that are compliant with the new standard using the Monte Carlo simulation method and approximated distributions. The historical data on the aggregate Ontario automobile insurance claims over a 15-year period are analyzed to examine the appropriateness and accuracy of our approach.

Under the building block approach (BBA), the value of insurance contracts is measured as the sum of four blocks described as follows:
• Block 1: Estimates of the future cash flows that relate directly to the fulfillment of contractual obligations;
• Block 2: An adjustment for the time value of the future cash flows (discounting);
• Block 3: Risk adjustment (RA), representing the compensation for bearing uncertainty with regard to cash flows;
• Block 4: Contractual service margin (CSM), representing the unearned profit that will be recognized as profit or loss when services are provided.
In this study, we address the evaluation of the RA (Block 3) of insurance policies by proposing an efficient and accurate method in RA estimation under IFRS 17. It should be noted that IFRS 17 does not provide specific prescriptions for RA calculations or mandate a particular method to determine RAs. Although it does not restrict the use of any technique, it requires RAs to reflect the compensation that the entity should receive for bearing the uncertainty with regard to the amount and timing of cash flows, which arises from non-financial risks. RAs can be computed using various measures of risk, justified by the principles and concepts in probability theory and statistical mathematics.
In this context, we consider the claims development triangle that will be the basis for our RA calculation. It is specifically used for forecasting future claims and estimating outstanding claims liabilities. Under the IFRS specification, Lindholm et al. (2020) used this method in the calculation of the CSM, valuation of a portfolio of insurance contracts, and allocation of the aggregate liability value to different groups of policies. Conversely, our study focuses on the characterization of the distribution of outstanding claims liabilities and the computation of the ensuing RA. Some methods for estimating outstanding liabilities include the chain-ladder, Bornhuetter-Ferguson, and frequency-severity methods; an overview of such methods is presented in, e.g., Taylor (2012) and Wüthrich and Merz (2008). Models for the claims supporting the abovementioned methods include the log-normal model (Barnett and Zehnwirth 2000), gamma model (Zehnwirth 1994; Mack 1991), distribution-free approach (Mack 1993), and over-dispersed model (Renshaw and Verrall 1998). In our case, we develop and customize the paid-incurred chain (PIC) model proposed in Merz and Wüthrich (2010) to predict the future claims, using Ontario's automobile claims data.
To evaluate the RAs of insurance policies, the distribution of the outstanding claims liabilities needs to be determined. The general approach is to obtain the empirical cumulative distribution function (CDF) via Monte-Carlo (MC) simulation. The distribution could be approximated using the normal or log-normal distributions. Here, we also employ the moment-based density method propounded in Provost (2005) as an alternative. For example, this method was applied to the risk measurement and management of insurance policies, including guaranteed minimum benefits in Gao et al. (2017) and Zhao et al. (2018) as well as aggregate losses in Jin et al. (2016).
Herein, we first calculate the outstanding claims liabilities of various groups of policies under a novel PIC method. Our method of calculation is an extension of the approach used in Happ and Wüthrich (2013) for a multi-line setting with a dependency structure.
In accordance with the requirements of IFRS 17, we apply four RA calculation methods to determine the RAs based on the distribution of outstanding claims liabilities. As the explicit expression of the distribution of such liabilities is rarely obtainable, the moment-based density approximation is proposed to recover their empirical distribution. Our proposed methodology is subsequently implemented on the historical data of incurred claims and paid losses. Although our application focuses only on the auto insurance sector, we develop an approach with overarching principles applicable to the analysis of other insurance data.
The departure of our contributions from the existing literature on the computation of the RAs of claims liabilities is highlighted through the following: (1) we provide a new PIC approach to model and predict the future unpaid losses based on the information on both incurred claims and paid losses data in practice; (2) in contrast to the methods used in Hertig (1985) and Gogol (1993), our methodology combines their principal elements by introducing the dependency between paid losses and incurred claims; (3) the moment-based density method is used as an efficient alternative to approximate the distribution of claims reserves; (4) finally, to our knowledge, this study is the first to elaborate on the entire process of estimating RAs for auto insurance claims involving different groups of policies to comply with the IFRS 17 implementation.
The remainder of the paper is organized as follows: in "Model description" section, the PIC method is explained, and the claims reserves are computed; four RA computation methods are described in "Risk adjustment calculation" section; in "Numerical illustration" section, numerical illustrations are discussed based on historical data and the moment-based density approximation is employed to approximate the claims reserves' distribution; and finally, the concluding remarks are presented in "Conclusion" section.

Model description
The claims development triangle is an essential method to organize data for actuarial analyses and claims reserving. The triangle (see Table 1) depicts how the claims in each accident year for the same line of business (LOB) develop to achieve their ultimate value. This organization of data greatly facilitates the comparison of the development history experienced in an accident year. In this method, we consider two types of triangles: (1) incurred claims, which refer to the reported claims amounts and (2) paid losses, which refer to the payments made for the claims. The paid losses should be less than or equal to the incurred claims in the same development year. However, they will have the same values ultimately, that is, all the reported claims must be paid off at the end of the lifetime of the policies.
The fundamental ideas of the PIC method can be found in Merz and Wüthrich (2010). It is designed for analyzing the data on both incurred claims and paid losses. We denote the accident year by i ( 0 ≤ i ≤ J ) and the development year by j ( 0 ≤ j ≤ J ). Here, J refers to the largest development year. We assume that there are n LOBs, where l ( 1 ≤ l ≤ n ) is the l th LOB. Let I_{i,j,l} be the cumulative incurred claims in accident year i after j development years for the l th LOB and P_{i,j,l} be the corresponding cumulative paid losses. We further assume that all the claims are settled after J years, that is, P_{i,J,l} = I_{i,J,l}. It is worth noting that the assumption of P_{i,j,l} ≤ I_{i,j,l} is not considered in Merz and Wüthrich (2010). However, after investigating the constraint via numerical experiments, we find that the probability of P_{i,j,l} > I_{i,j,l} is extremely small. Furthermore, our focus lies on the prediction and forecast of the outstanding loss liabilities rather than the incurred losses. Therefore, the possible breaches of P_{i,j,l} > I_{i,j,l} are innocuous in our modeling framework. Table 1 graphically portrays the implementation structure of this method for the same LOB l. Write P_j, I_j, and B_j = P_j ∪ I_j for the sets of paid losses data, incurred claims data, and joint data information up to development year j, respectively.
To model the dependence between paid and incurred data, we introduce a covariance matrix V based on the idea in Happ et al. (2012) and extend their work to a more general case of n LOBs. The matrix V is set as any positive definite matrix that may incorporate managerial experience and judgment. Pinning down V suitably reduces computational complexity. Given an accident year i for n LOBs, we consider a vector of random variables x_i = (ξ_{i,1,1}, ζ_{i,1,1}, . . . , ξ_{i,1,n}, ζ_{i,1,n}, . . . , ξ_{i,J,1}, ζ_{i,J,1}, . . . , ξ_{i,J,n}, ζ_{i,J,n})^⊤, following the multivariate normal distribution with mean μ and covariance matrix V, where ⊤ denotes the matrix transpose. The cumulative paid losses P_{i,j,l} are determined by the forward recursion

P_{i,j,l} = P_{i,j−1,l} e^{ξ_{i,j,l}}, 1 ≤ j ≤ J,

with an initial value P_{i,0,l} = e^{ξ_{i,0,l}}. For the cumulative incurred claims I_{i,j,l}, we have the backward recursion

I_{i,j−1,l} = I_{i,j,l} e^{−ζ_{i,j,l}}, 1 ≤ j ≤ J,

with an initial value I_{i,J,l} = P_{i,J,l}, where the factors e^{−ζ_{i,j,l}} are called the link ratios. Following Happ and Wüthrich (2013), we assume that there is a correlation coefficient between the incurred claims for a specific year and the paid losses after s years in the same LOB given by ρ_{s,l} = Cor(ξ_{i,j+s,l}, ζ_{i,j,l}).
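The two recursions above can be sketched numerically. The following is a minimal single-LOB illustration with hypothetical parameter values (the means and standard deviations of the log increments are invented for demonstration, not taken from the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setting: one LOB, J = 4 development years.
J = 4
phi = np.array([10.0, 0.8, 0.4, 0.2, 0.1])   # hypothetical means of xi_{i,j}
sigma = np.full(J + 1, 0.05)                 # hypothetical std devs of xi_{i,j}

# Forward recursion for cumulative paid losses:
#   P_{i,0} = exp(xi_{i,0}),  P_{i,j} = P_{i,j-1} * exp(xi_{i,j})
xi = rng.normal(phi, sigma)
P = np.exp(np.cumsum(xi))                    # P[j] = cumulative paid after j dev years

# Backward recursion for cumulative incurred claims:
#   I_{i,J} = P_{i,J},  I_{i,j-1} = I_{i,j} * exp(-zeta_{i,j})
zeta = rng.normal(0.05, 0.02, size=J)        # hypothetical incurred log link ratios
I = np.empty(J + 1)
I[J] = P[J]
for j in range(J, 0, -1):
    I[j - 1] = I[j] * np.exp(-zeta[j - 1])
```

Note that the backward recursion enforces the boundary condition I_{i,J,l} = P_{i,J,l} by construction, which is the PIC model's requirement that all claims are ultimately settled.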
The correlation coefficients corresponding to the paid losses and incurred claims in different LOBs are defined as

ρ_{s,(l,l′)} = Cor(ξ_{i,j+s,l′}, ζ_{i,j,l}), ρ^ξ_{s,(l,l′)} = Cor(ξ_{i,j+s,l′}, ξ_{i,j,l}), ρ^ζ_{s,(l,l′)} = Cor(ζ_{i,j+s,l′}, ζ_{i,j,l}),

where ρ_{s,(l,l′)} denotes the correlation coefficient between the incurred claims in LOB l for a specific year and the paid losses in LOB l′ after s years; ρ^ξ_{s,(l,l′)} denotes the correlation coefficient between the paid losses in LOB l for a specific year and the paid losses in LOB l′ after s years; and ρ^ζ_{s,(l,l′)} denotes the correlation coefficient between the incurred claims in LOB l for a specific year and the incurred claims in LOB l′ after s years. All the remaining entries in V are assigned a value of 0. The covariance matrix indicates that the incurred claim increments ζ_{i,j,l} correlate with the paid loss increments ξ_{i,j+s,l} in the succeeding years. Such dependencies are also extended to other LOBs, as captured by the correlation between ξ_{i,j+s,l} and ζ_{i,j,l′}.
Conditional on the sets B_j and given the distribution assumptions on ξ_{i,j,l} and ζ_{i,j,l}, we analyze the joint distribution of (log P_{i,j+1,1}, . . . , log P_{i,j+1,n}, . . . , log P_{i,J,1}, . . . , log P_{i,J,n})^⊤.
We have x_i ∼ N(μ, Σ), where μ and Σ are assembled from the component means and the covariance matrix V. Suppose μ′ and Σ′ are the respective projection vector and matrix of μ and Σ onto the space that contains (log P_{i,j+1,1}, . . . , log P_{i,J,n})^⊤. The entries in μ′ and Σ′ then correspond to the mean and covariance, respectively, of log P_{i,j,l} for 0 ≤ i ≤ J, 0 ≤ j ≤ J and 1 ≤ l ≤ n. Consistent with the assumption propounded in Merz and Wüthrich (2010), the vector (log P_{i,j+1,1}, . . . , log P_{i,J,n})^⊤ follows a multivariate normal distribution with parameters μ′ and Σ′, that is,

(log P_{i,j+1,1}, . . . , log P_{i,J,n})^⊤ | B_j ∼ N(μ′, Σ′). (1)

We denote the incremental paid losses of the l th LOB for the accident year i and during the j th development year by C_{i,j,l}. Thus,

C_{i,j,l} = P_{i,j,l} − P_{i,j−1,l}, with C_{i,0,l} = P_{i,0,l}. (2)

As we are interested in the future unpaid losses, we require the distribution of C_{i,j,l}. Now, the total paid losses of LOB l in calendar year J + k is given by

C_{J+k,l} = Σ_{i+j=J+k} C_{i,j,l}. (3)

At time J, the total discounted future unpaid losses of LOB l is given by

R_{J,l} = Σ_{k=1}^{J} C_{J+k,l} / (1 + r_k)^k, (4)

where r_k is the discount rate at time k. Therefore, R_{J,l} denotes the time-J outstanding loss liabilities of the claims, considering accident years 0 to J. It is also the best estimate of liability for the incurred claims related to the past service up to J. Hence, the total outstanding liabilities of the claims of all LOBs is

R_J = Σ_{l=1}^{n} R_{J,l}. (5)
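The aggregation and discounting steps can be sketched as follows. This is a minimal single-LOB example with a small hypothetical matrix of incremental paid losses C[i, j] (invented numbers) and a flat discount rate; the future cells at time J are those on the lower-right of the triangle, i.e., with i + j > J:

```python
import numpy as np

J = 3
r = 0.024                                    # flat discount rate assumption
# Hypothetical incremental paid losses C[i, j], accident year i, dev year j.
C = np.array([[5., 3., 1., 0.5],
              [6., 3., 1., 0.6],
              [5., 4., 2., 0.4],
              [7., 3., 1., 0.5]])

# Total paid losses in calendar year J + k:  C_{J+k} = sum_{i+j=J+k} C[i, j]
C_future_year = np.array([
    sum(C[i, j] for i in range(J + 1) for j in range(J + 1) if i + j == J + k)
    for k in range(1, J + 1)
])

# Discounted outstanding liabilities:  R_J = sum_k C_{J+k} / (1 + r)^k
R_J = sum(C_future_year[k - 1] / (1 + r) ** k for k in range(1, J + 1))
```

With the numbers above, the calendar-year sums are 5.6, 1.4, and 0.5, and R_J is their discounted total.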

Risk adjustment calculation
Notably, IFRS 17 mandates that RAs must be calculated based on the discounted fulfillment cash flows over the term of the policies. As this new standard does not designate any estimation technique in determining the RA, we adopt four methods outlined in an RA document by the International Actuarial Association (2018): value at risk (VaR), conditional tail expectation (CTE), Wang transform (WT), and cost of capital (CoC). VaR is the loss level that the loss random variable exceeds with probability 1 − α, where α is the confidence level (CL). Therefore, if S is the loss random variable, then the 100α% VaR, denoted by VaR_α(S), is given by

VaR_α(S) = inf{s ∈ ℝ : F_S(s) ≥ α},

where F_S is the CDF of S. The RA is equal to the difference between the VaR and the corresponding probability-weighted expected value. The drawback of VaR is that it disregards the extreme values beyond the CL, and this could lead to an underestimation of risk measures.
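The VaR-based RA can be estimated directly from a simulated liability sample. A minimal sketch, with a hypothetical log-normal sample standing in for the simulation output:

```python
import numpy as np

# Hypothetical simulated outstanding-liability sample (stand-in for MC output).
rng = np.random.default_rng(0)
S = rng.lognormal(mean=10.0, sigma=0.3, size=100_000)

def var_ra(S, alpha):
    """RA under VaR: the alpha-quantile minus the probability-weighted mean."""
    return np.quantile(S, alpha) - S.mean()

ra_95 = var_ra(S, 0.95)
```

Since quantiles are monotone in the CL, the RA increases with α, which is why the CLs used must be disclosed.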
Furthermore, CTE's advantage over VaR is its primary concentration on the tail of the loss distribution; it is also a coherent risk measure and an alternative quantile technique. Specifically, CTE is the expected loss given that the loss is greater than VaR_α(S). Therefore,

CTE_α(S) = E[S | S > VaR_α(S)].

The RA is then calculated as the difference between the CTE and the corresponding probability-weighted expected value.
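The CTE-based RA follows the same sample-based pattern; a minimal sketch with a hypothetical log-normal sample:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.lognormal(mean=10.0, sigma=0.3, size=100_000)   # hypothetical liabilities

def cte_ra(S, alpha):
    """RA under CTE: mean loss beyond VaR_alpha minus the probability-weighted mean."""
    v = np.quantile(S, alpha)
    return S[S > v].mean() - S.mean()
```

Because the tail mean always exceeds the quantile itself, the CTE-based RA is larger than the VaR-based RA at the same CL, which is consistent with the results reported later.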
Another coherent risk measure that we include is the WT, which is typified as a distortion risk measure; see Wang (2000, 2002). Under the WT, the probability distribution is adjusted to reflect risk preferences: lower adjusted probabilities are assigned to favorable outcomes, while higher adjusted probabilities are assigned to unfavorable outcomes. A WT was chosen in Miccolis and Heppen (2010) to provide RAs as per IFRS 17 by calibrating its parameters. A general expression for the distortion risk measure of a non-negative loss random variable S is given by

ρ_χ(S) = ∫_0^∞ χ(1 − F_S(s)) ds,

where F_S(s) is the CDF of S and χ : [0, 1] → [0, 1] is a distortion function, that is, a non-decreasing function with χ(0) = 0 and χ(1) = 1. The distortion function for the WT is given by

χ(x) = Φ(Φ^{−1}(x) − Φ^{−1}(η)),

where Φ is the standard normal CDF and η is a risk aversion parameter. The higher the η, the less is the risk aversion. The RA is calculated as the difference between the probability-weighted expected value using the adjusted probabilities and the expected value under the original probability.
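The distorted expectation can be computed from a simulated sample by reweighting the order statistics. This is a sketch under the parameterization χ(x) = Φ(Φ^{−1}(x) − Φ^{−1}(η)), which is our reading of the description above (higher η, less risk aversion) and should be treated as an assumption; the sample is again a hypothetical stand-in:

```python
import numpy as np
from statistics import NormalDist

ND = NormalDist()  # standard normal CDF and quantile function

def wang_ra(S, eta):
    """RA under the Wang transform for a sample of non-negative losses S.

    Assumed distortion: chi(x) = Phi(Phi^{-1}(x) - Phi^{-1}(eta)).
    The distorted expectation is a weighted sum of the order statistics,
    with weight chi((n-i)/n) - chi((n-i-1)/n) on the i-th smallest value."""
    s = np.sort(np.asarray(S, dtype=float))
    n = len(s)
    shift = -ND.inv_cdf(eta)   # e.g. eta = 0.05 gives shift ~ 1.645

    def chi(x):
        if x <= 0.0:
            return 0.0
        if x >= 1.0:
            return 1.0
        return ND.cdf(ND.inv_cdf(x) + shift)

    w = np.array([chi((n - i) / n) - chi((n - i - 1) / n) for i in range(n)])
    return float(np.dot(s, w) - s.mean())

rng = np.random.default_rng(6)
S = rng.lognormal(mean=10.0, sigma=0.3, size=20_000)   # hypothetical liabilities
```

The weights telescope to 1, so the reweighted sum is a proper (distorted) expectation; subtracting the ordinary mean yields the RA.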
Under the CoC concept, an entity determines its risk preference through its selection of a capital amount appropriate for the risks relevant to IFRS 17's measurement objectives. Among the various approaches to determining capital requirements, the assigned capital amount used to compute the CoC in this paper is the difference between (a) the amount calculated under the probability distribution associated with the selected CL and (b) the probability-weighted expected value.
The RA is computed as the present value of the future cost of capital associated with the relevant cash flows. This is expressed as

RA = Σ_i A_i c_i / (1 + r_i)^i,

where A_i is the assigned capital amount for the period ending at time i, c_i is the CoC rate at time i, and r_i is the discount rate at time i.
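The present-value formula above can be sketched directly; the capital amounts below are hypothetical, while the CoC rate of 0.08 and discount rate of 0.024 follow the values used later in the paper:

```python
import numpy as np

# Hypothetical assigned capital amounts A_i for periods ending at i = 1..4.
A = np.array([100.0, 80.0, 55.0, 30.0])
c = np.full(4, 0.08)     # CoC rate (value used in the paper's illustration)
r = np.full(4, 0.024)    # flat discount rate assumption

# RA = sum_i A_i * c_i / (1 + r_i)^i
RA = sum(A[i - 1] * c[i - 1] / (1 + r[i - 1]) ** i for i in range(1, 5))
```

In the full method, each A_i would itself be derived from the liability distribution (e.g., as a VaR in excess of the probability-weighted mean), so the CoC RA inherits the run-off pattern of the required capital.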
The challenge in choosing an appropriate c_i lies in meeting the specific requirement objectives. The rate should reflect a return that leaves the entity indifferent between insurance contract liabilities with uncertain cash flows and liabilities with fixed cash flows. Additionally, the CoC rate should reflect the entity's risk preference and experience. A general approach to quantifying an RA based on the CoC technique is described in Meyers (2017); it incorporates stochastic path dependencies, given that the capital amount changes as time progresses, and is applied to the claims development triangles of the unpaid losses.

Numerical illustration
In this section, we present empirical demonstrations using incurred claims and paid losses data on Ontario automobile insurance policies from 2002 to 2016 inclusive. We study three different groups of policies: bodily injury, direct compensation, and accident benefit. This implies that we have six different data sets: (1) bodily injury incurred claims, (2) bodily injury paid losses, (3) direct compensation incurred claims, (4) direct compensation paid losses, (5) accident benefit incurred claims, and (6) accident benefit paid losses. The data sets were compiled by the General Insurance Statistical Agency.
Before performing the numerical implementation under our model setting, it is crucial to verify the assumption of the chain-ladder data structure with dependencies. All the data sets in this empirical illustration are displayed as claims development triangles with a chain-ladder structure. For instance, Tables 2 and 3 display the incurred claims and paid losses of the direct compensation policy in the form of a triangle. Furthermore, we examine the residuals to verify the existence of a chain-ladder data structure by employing the time series and lag plots, and the turning point test (Bhattacharyya 1984; Montgomery et al. 2015; Salvidio et al. 2016). Our findings reveal that the residuals are independent and identically distributed (IID), and the data conform with the proposed structure. To validate the dependency assumptions, we compute the correlation coefficients from V following Happ and Wüthrich (2013). Table 4 shows the correlation coefficients between the incurred claims for a specific year and the paid losses after s years for all three groups of policies. Table 5 reports the correlation coefficients between the incurred claims in one LOB for a specific year and the paid losses in another LOB after s years. For instance, ρ_{s,(1,2)} denotes the correlation coefficient between the incurred claims in business line 1 for a specific year and the paid losses in business line 2 after s years. Table 6 displays the correlation coefficients between the paid losses in one LOB for a specific year and the paid losses in another LOB after s years, and the correlation coefficients between the incurred claims in one LOB for a specific year and the incurred claims in another LOB after s years. Our findings show that the dependency structure exists in the data sets. Hence, the implementation of our model to the chain-ladder type data with dependency assumptions is well-justified.
These three types of insurance policies have various features and show different trends over the development years. In this illustration, we have n = 3 and J = 14, i.e., we assume that all claims are settled after 14 years. Moreover, the most recent calendar year is 2016, and we must forecast the future claims after 2016. The estimates of the parameters φ_{j,l}, σ_{j,l}, ϕ_{j,l}, τ_{j,l} for 0 ≤ j ≤ J and 1 ≤ l ≤ n are obtained by using the maximum likelihood method. The formulae for the estimators are given by

φ̂_{j,l} = (1 / (J − j + 1)) Σ_{i=0}^{J−j} ξ_{i,j,l}, σ̂²_{j,l} = (1 / (J − j + 1)) Σ_{i=0}^{J−j} (ξ_{i,j,l} − φ̂_{j,l})²,

and

ϕ̂_{j,l} = (1 / (J − j + 1)) Σ_{i=0}^{J−j} ζ_{i,j,l}, τ̂²_{j,l} = (1 / (J − j + 1)) Σ_{i=0}^{J−j} (ζ_{i,j,l} − ϕ̂_{j,l})²,

where ξ_{i,j,l} = log(P_{i,j,l}/P_{i,j−1,l}) and ζ_{i,j,l} = log(I_{i,j,l}/I_{i,j−1,l}). As there are insufficient data points to estimate σ_{14,l} and τ_{14,l}, we extrapolate their values. Tables 7 and 8 display the estimates of these parameters.
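The estimators amount to columnwise means and standard deviations of the observed log development ratios in a triangle. A minimal single-LOB sketch on a small hypothetical triangle (invented numbers; `np.nan` marks the unobserved future cells):

```python
import numpy as np

# Hypothetical 4x4 cumulative paid-loss triangle; rows are accident years,
# columns are development years; np.nan marks unobserved (future) cells.
P = np.array([[100., 150., 170., 175.],
              [110., 160., 185., np.nan],
              [105., 158., np.nan, np.nan],
              [120., np.nan, np.nan, np.nan]])

# Log development ratios: xi_{i,j} = log(P_{i,j} / P_{i,j-1}).
xi = np.log(P[:, 1:] / P[:, :-1])

# For each development year j, average over the observed accident years.
phi_hat = np.nanmean(xi, axis=0)       # ML estimate of the mean phi_j
sigma_hat = np.nanstd(xi, axis=0)      # ML estimate of sigma_j (1/m divisor)
```

The analogous computation on the incurred triangle yields ϕ̂ and τ̂; the last development column has a single observation, which is why σ̂ and τ̂ for the final year must be extrapolated rather than estimated.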
To verify that our data set for the numerical illustration satisfies the normality assumption in "Model description" section, we perform the Kolmogorov-Smirnov (KS), Shapiro-Wilk (SW), and Jarque-Bera (JB) tests on the observations log(P_{i,j,l}/P_{i,j−1,l}) and log(I_{i,j,l}/I_{i,j−1,l}) covering the first three accident years. The KS test is a nonparametric test of the null hypothesis that the CDF of the data equals a hypothesized CDF; it can be used to assess the normality of empirical data when the hypothesized CDF is a normal distribution. The SW test is also a normality test, but it is designed for small sample sizes. The JB test is a goodness-of-fit test of whether the sample data follow a normal distribution. The results of the three statistical tests are shown in Table 9. The high p-values indicate that we cannot reject the null hypothesis of a normal CDF. Therefore, it is reasonable to adopt the normality assumption in our modeling approach.
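As an illustration of one of these tests, the JB statistic combines sample skewness and excess kurtosis, and under the null of normality it is asymptotically χ²(2), so large values indicate departure from normality. A minimal sketch on synthetic data (not the paper's triangles):

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera statistic: JB = m/6 * (S^2 + (K - 3)^2 / 4),
    where S is the sample skewness and K the sample kurtosis."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    d = x - x.mean()
    s2 = (d ** 2).mean()
    skew = (d ** 3).mean() / s2 ** 1.5
    kurt = (d ** 4).mean() / s2 ** 2
    return m / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

rng = np.random.default_rng(2)
jb_normal = jarque_bera(rng.normal(size=5_000))      # small for normal data
jb_skewed = jarque_bera(rng.lognormal(size=5_000))   # large for skewed data
```

In practice, the p-value is read from the χ²(2) distribution; the high p-values in Table 9 correspond to small JB statistics.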
To compute the risk measures, the distribution of reserves for future unpaid losses is necessary. Note that the uncertainty of the predicted unpaid losses comes from both the uncertainty of the future process and that of the parameter estimates; both must be considered to determine the distribution. We employ the nonparametric bootstrap method to estimate the parameter uncertainty. Bootstrapping is a simulation-based statistical analysis that can be used to measure the accuracy of sample estimates. As statistics calculated from samples can be used to estimate population parameters, we employ the bootstrap distribution to describe the behavior of the quantities being estimated, such as standard errors and confidence levels. Bootstrapping is a powerful technique in stochastic claims reserving (e.g. Gao 2018; England et al. 2019; Jeong et al. 2021). Considering the dependence structure in our model setting, we employ a modified bootstrapping approach based on the stationary bootstrap method posited in Politis and Romano (1994). This approach is applicable to models with dependent data by using a moving block bootstrap technique. The bootstrapped observations under the ordinary moving block bootstrap are generally non-stationary; the modified method circumvents this problem by resampling blocks of random lengths. Based on this bootstrapping technique, Lin and Verrall (2010) developed an adapted bootstrap approach to deal with dependent data containing both paid and incurred claims information. We follow their bootstrap procedure to tackle dependent data across multiple LOBs. Suppose {P_{i,j,l}, I_{i,j,l} : 0 < i ≤ J, 0 < j ≤ J − i, 1 ≤ l ≤ n} is an observed data set. We consider the log ratios of the data, that is,

ξ_{i,j,l} = log(P_{i,j,l}/P_{i,j−1,l}) and ζ_{i,j,l} = log(I_{i,j,l}/I_{i,j−1,l}).

Recall that φ_{j,l} and σ_{j,l} are the respective mean and standard deviation of ξ_{i,j,l} in year j, while ϕ_{j,l} and τ_{j,l} are the respective mean and standard deviation of ζ_{i,j,l} in year j.
Consider the residuals

D_{i,j,l} = (ξ_{i,j,l} − φ̂_{j,l}) / σ̂_{j,l} and E_{i,j,l} = (ζ_{i,j,l} − ϕ̂_{j,l}) / τ̂_{j,l},

where φ̂_{j,l}, σ̂_{j,l}, ϕ̂_{j,l} and τ̂_{j,l} are the estimates of φ_{j,l}, σ_{j,l}, ϕ_{j,l} and τ_{j,l}, respectively. Similar to the work of Lin and Verrall (2010), the residuals are grouped as U_{i,j} = {D_{i,j,l}, E_{i,j,l} : 1 ≤ l ≤ n}. We now apply the stationary bootstrap method. For each i, starting from j = 1, let U_{i*_1, j*_1} denote a residual group drawn at random from {U_{i,j}}. The new observations are given by

ξ̃_{i,1,l} = φ̂_{1,l} + σ̂_{1,l} D_{i*_1, j*_1, l} and ζ̃_{i,1,l} = ϕ̂_{1,l} + τ̂_{1,l} E_{i*_1, j*_1, l}.

For j + 1, with probability p we pick a residual group U_{i*_{j+1}, j*_{j+1}} at random from the original residual sets {U_{i,j}}, and with probability 1 − p we continue the current block by taking U_{i*_j, j*_j + 1}. As argued in Politis and Romano (1994), the probability p is chosen as p = 1/b, where b is the expected block size. The new estimates of the parameters for each corresponding distribution are then determined from these new observations.
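The block-continuation rule above can be sketched for a single one-dimensional residual series; the wrap-around indexing is the standard device of Politis and Romano (1994) for handling block ends:

```python
import numpy as np

def stationary_bootstrap(series, p, rng):
    """Stationary bootstrap of Politis and Romano (1994) for a 1-D series:
    with probability 1 - p continue the current block (take the next
    observation, wrapping around); with probability p restart at a
    uniformly chosen index. p = 1/b gives an expected block length of b."""
    n = len(series)
    out = np.empty(n)
    idx = rng.integers(n)
    for t in range(n):
        out[t] = series[idx]
        if rng.random() < p:
            idx = rng.integers(n)      # start a new block
        else:
            idx = (idx + 1) % n        # continue the current block
    return out

rng = np.random.default_rng(3)
resampled = stationary_bootstrap(np.arange(10.0), p=0.25, rng=rng)
```

In the paper's setting, the resampled objects are the grouped residuals U_{i,j} rather than scalars, but the random-block mechanics are the same.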
Using the parameter estimates, the MC simulation method is conducted to forecast future unpaid losses and estimate the outstanding claims liabilities via the following procedure:
1. Resample the data by using the bootstrap method to estimate the parameters.
2. For each i ( 0 ≤ i ≤ J ), generate values of P_{i,j,l} ( i + 1 ≤ j ≤ J , 1 ≤ l ≤ n ) through equation (1) using the new estimated parameter values.
3. Calculate C_{i,j,l}, C_{J+k,l}, R_{J,l} and R_J using equations (2), (3), (4), and (5), respectively.
4. Repeat steps 1-3 N times.
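A skeleton of this loop for a single LOB is sketched below. All parameters are hypothetical, and the bootstrap re-estimation of step 1 is stood in for by a simple random perturbation of the parameters, purely to mimic parameter uncertainty:

```python
import numpy as np

rng = np.random.default_rng(4)
J, r, N = 3, 0.024, 2_000
phi = np.array([4.6, 0.4, 0.2, 0.1])      # hypothetical log-increment means
sigma = np.full(J + 1, 0.05)              # hypothetical log-increment std devs

R_samples = np.empty(N)
for it in range(N):
    # Step 1 (simplified): perturb parameters to mimic bootstrap re-estimation.
    phi_b = phi + rng.normal(0.0, 0.01, size=J + 1)
    R = 0.0
    for i in range(1, J + 1):             # accident years with open development
        # Step 2: simulate a full paid-loss path for accident year i.
        P = np.exp(np.cumsum(rng.normal(phi_b, sigma)))
        C = np.diff(P, prepend=0.0)       # incremental paid losses
        # Step 3: discount the future cells (development years j > J - i).
        for j in range(J - i + 1, J + 1):
            k = i + j - J                 # payment falls in calendar year J + k
            R += C[j] / (1 + r) ** k
    R_samples[it] = R                     # step 4: collect one replicate

empirical_sample = np.sort(R_samples)     # basis for the empirical CDF
```

The resulting sample of R values is the input to the risk measures of the previous section and to the density approximations below.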
Considering that J = 14 and the most recent calendar year is 2016, we forecast the unpaid losses from 2017 to 2031. These unpaid losses correspond to insurance contracts with accident years from 2002 to 2016. The directive from IFRS 17 states that the discount rate applied to the estimates of the future cash flows should (1) reflect the cash-flow and liquidity characteristics of the insurance contracts and (2) be consistent with currently observed market prices. IFRS 17 does not specify any technique to determine the discount rate; the "top-down" and "bottom-up" methods are suggested by KPMG (2017). For ease of calculation, we assume r is constant with r = 0.024 in this example. We implemented the MC simulation with 1,000,000 replicates. The histograms of unpaid losses for bodily injury type policies in different years ( C_{J+k} ) are displayed in Fig. 1. The means and standard deviations of the unpaid losses in different years are shown in Table 10; the means and standard deviations of the total discounted unpaid losses (i.e., outstanding claims liabilities) are shown in the last row. From Table 10, the direct compensation policy has much lower means and standard deviations. This is consistent with the fact that the claims are almost always reported and paid in the first few years under this policy type. The accident benefit policy has the highest volatilities, suggesting that the future cash flows for this policy are more uncertain than those of the other two policies. The histograms of the discounted total unpaid losses ( R_J ) are portrayed in Fig. 2.
Note that it is impossible to obtain the exact distribution of the outstanding claims liabilities. Therefore, approximation methods must be employed, and the most commonly used one is the empirical CDF. Some parametric approximations, including the normal and log-normal distributions, could also be employed. The moment-based density approximation introduced in Provost (2005) is not only conceptually simple and computationally efficient but also produces quite accurate approximation results while providing useful functional representations of the density functions of interest. The resulting density approximants are mathematically equivalent to those obtained by employing orthogonal polynomials. However, this conceptually simple semiparametric technique eliminates some of the complications associated with the use of orthogonal polynomials but still yields comparable density approximants. Furthermore, compared with the normal power approximation, the moment-based density approximation provides more accurate results. The abovementioned arguments motivate our adoption of the moment-based density approximation method. Under this method, the exact density function, with a sufficient number of known moments, can be approximated by the product of (1) a base density, whose tail behavior is congruent to that of the distribution to be approximated, and (2) a polynomial of degree q. The parameters of the base density can be determined by matching the moments of the loss random variable and the approximated density. The choice of the base function depends on the loss distribution. Based on a preliminary examination of the distribution of claims liabilities, we adopt the normal distribution for the approximation.
In particular, the base function ψ(x) with parameters θ and c is given by

ψ(x) = (1 / (θ √(2π))) exp(−(x − c)² / (2θ²)),

that is, a normal density with mean c and standard deviation θ. Let the moments of the standardized version of the random variable X (i.e., (X − c)/θ ) be α_X(i) for i = 0, 1, . . . , q. As it is impossible to obtain the theoretical moments of X, we use the sample moments obtained from the data. Denote the moments of the standardized base function by m_X(i) for i = 0, 1, . . . , 2q. The parameters θ and c of ψ(x) can be determined by calculating the sample mean and sample standard deviation of the data.
The density f_X(x) of X is approximated as

f̂_X(x) = ψ(x) Σ_{i=0}^{q} k_i ((x − c)/θ)^i, (6)

where the k_i's, i = 0, . . . , q, are polynomial coefficients determined via

(k_0, k_1, . . . , k_q)^⊤ = M^{−1} (α_X(0), α_X(1), . . . , α_X(q))^⊤,

where M is a (q + 1) × (q + 1) symmetric matrix whose (i + 1) th row is (m_X(i), m_X(i + 1), . . . , m_X(i + q)). The approximated CDF of X can be obtained from equation (6). As an explicit formula for the quantile function is unavailable, quantiles can be calculated numerically (e.g., using Newton's method). We consider the normal and log-normal approximations as benchmarks. The approximated densities for the total unpaid losses based on these three methods are displayed in Fig. 2. The corresponding CDFs are shown in Fig. 3, where the moment-based approximation is the closest to the empirical CDF. Additionally, Fig. 4 illustrates the differences between the corresponding CDF approximations of the total unpaid losses and the empirical CDF. We observe that the moment-based density approximation generates the smallest deviations and manifestly outperforms the other two alternatives. The KS test is used to assess the goodness of fit of the approximations. The results of the test are provided in Table 11, showing that the moment-based density approximation outperforms the other two approximations. More specifically, from the p-values, we cannot reject the null hypothesis that the moment-based approximant agrees with the empirical density, while the same cannot be said for the other two approximations.
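The coefficient computation with a normal base can be sketched as follows. This is a minimal implementation under the stated assumptions (normal base, maximum likelihood scale via the sample mean and standard deviation); the moments m_X(i) of the standardized normal base are 0 for odd i and the double factorial (i − 1)!! for even i:

```python
import numpy as np

def moment_based_density(sample, q=4):
    """Sketch of the moment-based density approximation with a normal base.

    Standardize the data with the sample mean c and std dev theta, compute
    sample moments alpha, and solve M k = alpha for the polynomial
    coefficients, where M[i, j] = m(i + j) are standard normal moments.
    Returns a callable density approximant on the original scale."""
    x = np.asarray(sample, dtype=float)
    c, theta = x.mean(), x.std()
    z = (x - c) / theta

    def m(i):
        # moments of the standard normal: 1, 0, 1, 0, 3, 0, 15, ...
        if i == 0:
            return 1.0
        if i % 2:
            return 0.0
        return float(np.prod(np.arange(i - 1, 0, -2)))   # (i - 1)!!

    alpha = np.array([np.mean(z ** i) for i in range(q + 1)])
    M = np.array([[m(i + j) for j in range(q + 1)] for i in range(q + 1)])
    k = np.linalg.solve(M, alpha)

    def density(x0):
        z0 = (np.asarray(x0, dtype=float) - c) / theta
        base = np.exp(-z0 ** 2 / 2.0) / (theta * np.sqrt(2.0 * np.pi))
        return base * np.polyval(k[::-1], z0)
    return density
```

Because the first row of M holds the base moments, the constraint (M k)[0] = α_X(0) = 1 ensures the approximant integrates to one; for exactly normal data the solution collapses to k ≈ (1, 0, . . . , 0), recovering the base density itself.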
The RAs of the three policy types are computed using the VaR, CTE, WT, and CoC metrics based on the simulation results. The normal and log-normal approximations serve as benchmarks. We select three CLs for VaR and CTE: 90%, 95%, and 99%. The three risk aversion parameter values for the WT are 0.1, 0.05, and 0.01. The capital amounts in each year under the CoC method are determined using the VaR, and the CoC rate is set at 0.08. The results are shown in Tables 12, 13, 14 and 15; the percentages refer to the ratios of the RAs to the probability-weighted expected values. In particular, the risk-margin percentages are obtained using the baseline amounts displayed in the last row of Table 10, namely the respective means of 7,976,162, 177,179 and 7,183,122 for the bodily injury, direct compensation, and accident benefit policies. The results obtained from the moment-based density method are better than those from the normal and log-normal approximations, because the differences between the results from the empirical CDF and those from the moment-based density method are much smaller than the corresponding differences for the other two methods. Note that the RAs under the CTE method have the highest values because CTE considers the potential loss beyond the CL, whereas the RAs under the CoC method have much lower values. It has to be noted that, notwithstanding IFRS 17 not indicating the method to determine RAs, the CLs are required to be disclosed. Therefore, if the WT or CoC method is used to calculate the RAs, the equivalent risk aversion factor or the CL consistent with the VaR method must be revealed. Table 16 displays the aggregated future cash flows in different years. It should be noted that the sums of the RAs computed under the four methods for the three business types (see Tables 12, 13, 14, and 15) are higher than the aggregated RAs computed through the corresponding four techniques (cf. Table 17), reflecting a diversification effect.
For instance, as shown in Table 12, the RAs for the bodily injury, direct compensation, and accident benefit types are 281,256, 11,421, and 353,028, respectively, calculated using the VaR with the moment-based approximation.

Conclusion
This study proposed an efficient and accurate methodology and elaborated on the entire process of estimating RAs for claims according to the requirements of IFRS 17. The PIC model was adopted to describe the development of unpaid losses based on the historical data on incurred claims and paid losses. The MC simulation method was employed to generate samples of future unpaid losses, and the moment-based approximation method was used to approximate the distribution of claims reserves. RAs were computed in four different ways using the approximated distribution, with the bootstrap method applied to obtain the distribution of RAs. The methodology and procedures described in this study could serve as implementation guidelines for the computation of the RAs of insurance companies' claims. We observe some limitations of this study, which need to be examined further. For instance, the constraint P_{i,j,l} ≤ I_{i,j,l} is not strictly imposed in the modeling framework of Merz and Wüthrich (2010), and the effect of possible breaches on forecasting accuracy is worth exploring in the future. Furthermore, the introduction of a dependency structure might result in over-parametrization. This issue could be investigated further with the use of statistical techniques, such as the heuristic estimation proposed by Avanzi et al. (2018), and deserves a thorough analysis in its own right.
Future studies could extend the models and methods considered herein in many ways. The independence assumption on the link ratios could be relaxed; for instance, they could be assumed to follow a multivariate skew normal distribution (see Pigeon et al. 2014). A prior distribution of the model parameters could be considered, leading to the Bayesian PIC method as alluded to in Merz and Wüthrich (2010). When considering several types of policies, risk dependencies and aggregation techniques should be treated with the help of an appropriate approach; a copula method could be used to model the dependence structure and calculate the aggregated risks for all the policies (e.g. Shi and Frees 2011; Peter et al. 2014; Abdallah et al. 2015; Jeong and Dey 2020). In addition, the RA measurement only reflects the uncertainties originating from the non-financial risks associated with insurance contracts. To conduct consolidated risk management, it would be worth analyzing a well-developed risk assessment framework that integrates our proposed technique with the decision-making approaches employed in studies on financial risk evaluation and analysis (e.g. Kou et al. 2014, 2021a; Zha et al. 2021). It should be noted that we only considered the historical data on incurred claims and paid losses. The reliability of the estimation results would increase if we could incorporate other useful information, including earned premium, earned exposure, and reserves, subject to the accessibility of such data. This additional information may also decrease the variability of the model parameter estimates and improve the performance of future cash-flow forecasting. Finally, the computed RAs are based solely on future unpaid losses. When practically available, other relevant future cash flows that satisfy the requirements of IFRS 17 could be embedded in the calculation of RAs for an insurance company.