Estimation of default and pricing for invoice trading (P2B) on crowdlending platforms
Financial Innovation volume 10, Article number: 109 (2024)
Abstract
This study developed several machine learning models to predict defaults in the invoice-trading peer-to-business (P2B) market. Using techniques such as logistic regression, conditional inference trees, random forests, support vector machines, and neural networks, the prediction of the default rate was evaluated. The results showed that these techniques can effectively improve the detection of defaults by up to 56% while maintaining levels of specificity above 70%. Unlike other studies on the same topic, this study used sampling techniques to address class imbalance and different time periods for the training and test datasets to ensure intertemporal validation and realistic predictions. For the first time, default explainability in the invoice-trading market was studied by examining the impact of macroeconomic factors and invoice characteristics. The findings highlighted that gross domestic product, exports, trade type, and trade bands are significant factors that explain defaults. Furthermore, the pricing mechanisms of P2B platforms were evaluated using the observed and implicit probabilities of default to analyze the price risk adjustment. The results showed that prices reflect a significantly higher implicit probability of default than the observed default rate, which in turn suggests that underlying factors exist besides the borrowers' probability of default.
Introduction
Crowdlending has emerged primarily in the last decade and is set to remain a growing topic in FinTech-related studies (Liu et al. 2020). Online invoice trading is a subfield of crowdlending, a digital market that has experienced exponential growth (Ziegler et al. 2017) and helps businesses finance their working capital. It is a niche segment of the broader P2B market and consists of the financial discounting of an invoice via a platform (lender) in exchange for the payment of commissions or fees. Usually, invoices are financed by many investors (crowdlenders), and the platform analyzes the risk of the transaction, establishing a rating and a price. The pricing mechanism can be decided in an auction style or by fixed prices set by the platforms. Most of these platforms have evolved towards a fixed-price regime (Dorfleitner et al. 2017). Estimating the probability of default is essential for investors and for the pricing mechanism of securities (Carmichael 2014). The increased availability of open-source datasets and advances in new techniques have boosted interest in default prediction (Turiel and Aste 2020). Unlike peer-to-peer (P2P) platforms, which provide a great deal of information about debtors, the main invoicing platform (Kriya) offers more limited information, complicating external default evaluation. Additionally, invoice trading has essential characteristics, mainly its dedication to short-term financing, that differentiate it from other lending operations in terms of risk.
Most research on crowdlending has focused on P2P financing (Carmichael 2014; Serrano-Cinca et al. 2015; Zhu et al. 2019), whereas few studies have addressed the P2B market. Therefore, the first objective of this study is to help investors determine the probability of default (PD) by developing models that use publicly available information. To the best of our knowledge, only Dorfleitner et al. (2017) have published a study that focused on estimating the default of an invoice trading platform (Kriya), using logit and Tobit models. Unlike Dorfleitner et al. (2017), our approach focuses on rectifying the imbalance of classes using sampling techniques that allow for the correct classification of defaults with unbiased models. Any model that does not consider this problem is biased towards the majority class (Kotsiantis and Pintelas 2004; Bastani et al. 2019), making it unsuitable for default prediction. Furthermore, greater robustness of the models was provided by ensuring intertemporal validation. This enabled the generation of realistic predictions for future test samples. To the best of our knowledge, this study represents a pioneering application of machine-learning methodologies in the invoice-lending market. We compared the performance of logistic regression, conditional inference trees, conditional inference forests, random forests, support vector machines, and neural networks for predicting defaults, similar to Li et al. (2020) for credit ratings. This study is also the first to examine the influence of new macroeconomic factors and invoice characteristics on the default rate in the invoice trading market. By presenting this innovative approach, a new set of variables that are significant determinants for predicting default was identified: gross domestic product (GDP), exports, trade type, and trade band. The current findings demonstrated that machine-learning techniques can improve default detection by over 56%.
Furthermore, the use of sampling techniques to address the imbalance between the two classes produced good results in the detection of loan defaults, improving correct default prediction by more than 50% compared with scenarios without sampling.
Price setting by the platform, using the implicit and observed probabilities of default (IPD and OPD, respectively) to analyze the overcharging of sellers, is also discussed for the first time in this study. As such, this study assessed whether the platform's prices correspond to the credit risk inherent in transactions in the context of a diversified retail portfolio. If the charges are not adjusted to the risk of the transactions, this indicates either that companies are willing to pay a high interest rate because they are unable to obtain competitive financing from other financial sources, or that they greatly value the flexibility of these platforms. The results revealed a significant distortion between the OPD and IPD.
Sect. "Literature review" provides a literature review on the structure of this study. Sect. "Dataset and descriptive analysis" provides a descriptive analysis and details of the data preprocessing performed. Sect. "Predicting invoicelending default" discusses the determinants used to predict default using logistic regression, including an evaluation comparing the predictive capabilities of the model across various sampling techniques. It also compares the performance of different machine learning alternatives in terms of predicting defaults. Sect. "Pricing mechanism with implicit probability of default" examines the platform’s creditpricing method using IPD and OPD. Finally, Sect. "Discussion of results" discusses the findings and conclusions are presented at the end of this paper.
Literature review
P2P lending has attracted considerable attention from the academic community over the past decade. Most research on this topic has used logistic regression to predict the probability of default and/or the profitability of loans. Most of this research was performed using publicly available data from the Lending Club platform, with the remainder obtained from a small set of other platforms.
Early research was conducted by Carmichael (2014), who predicted the probability of default and expected returns in P2P lending. The former was estimated using a dynamic logistic regression, in which the log of income, recent credit inquiries, loan purpose, loan amount, credit score, and subgrade were the most significant explanatory factors. Variables gathered from borrowers' loan descriptions, such as whether the description lacked complete sentences or claimed that the author was creditworthy, were also significant in explaining default. The full model performed better than the final Lending Club subgrade, while the model without the subgrade as an explanatory variable performed similarly to it. To estimate expected returns, the probability of early repayment and the principal repaid given default were calculated. The first was modelled with a dynamic logistic regression using the same regressors as for default, whereas the second was estimated with ordinary least squares. The expected return on the lowest-risk subgrade (A1) loans was 5%. This increased steadily to a maximum of 11% for mid-risk loans (D2) and then decreased to 10% for the highest-risk loans (E5). Similarly, Li et al. (2016) estimated the probability of prepayment in addition to the probability of default. They used multivariate logistic regression incorporating macroeconomic factors and found that the factors explaining default and prepayment were very similar. Specifically, loan features, macroeconomic factors, and most borrower characteristics were significant. Their results showed that high interest rates are associated not only with higher probabilities of default but also with higher probabilities of prepayment, as borrowers do not want to bear the associated costs. Serrano-Cinca et al. (2015) studied the determinants of default in P2P lending.
Using hypothesis tests and survival analysis, they first determined the most significant factors for estimating default, namely, loan purpose, annual income, current housing situation, credit history, and indebtedness. Subsequently, several logistic regressions were performed to predict the probability of default based on the previously determined factors. Their results showed a clear relationship between the Lending Club subgrade and the probability of default, with the subgrade being the variable with the highest predictive capability. Furthermore, interest rates appeared to depend on the grade assigned: the higher the interest rate, the higher the probability of default. These results are similar to those of Möllenkamp (2017), who used binary logistic regression to investigate the determinants of P2P loan performance across different credit grades. The results showed a positive relationship between credit grade and loan performance, wherein a higher credit grade was related to a lower probability of default. Credit grade was the most influential factor for loan performance. Loan amount and annual income were also significant predictors, whereas all other variables lost significance in forecasting. Regarding the determinants of default, Avgeri and Psillaki (2023) explored borrower-related and macroeconomic factors in the US P2P market using logistic regression. Their study suggests that the higher the percentage change in the house price, consumer sentiment, and S&P 500 indices, the lower the delinquency. Unemployment and GDP also affected the default rate. Nigmonov et al. (2022) used a probit model and found, for the same market, that the higher the interest rates and inflation, the higher the probability of default. Moreover, the effect of interest rates on default was significantly stronger for loans with lower ratings.
Other authors have attempted to predict the probability of default using a different set of methodologies. For example, Zhu et al. (2019) compared a random forest model with other statistical techniques and discovered that the former outperforms decision trees, logistic regression, and support vector machines, with an outstanding 98% level of accuracy. They used the synthetic minority oversampling technique (SMOTE) to solve the imbalance problem in the sample. Similarly, Malekipirbazari and Aksakalli (2015) compared different statistical techniques—such as random forests, support vector machines, logistic regression, and the nearest neighbor algorithm—to predict the default rate in P2P lending. Their results showed that random forests outperformed Fair Isaac Corporation (FICO) credit scores, the Lending Club subgrade, and other methodologies for identifying good borrowers, with an accuracy level of 78%. Moreover, based on their findings, although this technique was highly suitable for identifying good borrowers, misclassifications existed for some borrowers who were erroneously deemed bad. Borrower status was also studied by Fu (2017), who combined random forests with neural networks and compared the performance of each model alone and in combination, where the probability of default was estimated by one model and then given to the other for re-estimation. Preprocessing was also performed, whereby the data were first normalized and old observations were discarded. This combined technique, along with data preprocessing, considerably improved accuracy and outperformed the Lending Club subgrade. The highest accuracy was achieved by first using a neural network and then random forests. A similar method was used by Kim and Cho (2018), who proposed a deep dense convolutional network (DenseNet) for default prediction in P2P lending.
Their findings revealed that this model could achieve a relatively high level of accuracy (79.6%) and reduce overfitting compared with other convolutional neural networks.
Furthermore, Turiel and Aste (2020) developed several artificial intelligence models to predict loan rejection and estimate the probability of default, showing that artificial intelligence can increase accuracy in default risk prediction by up to 70%. They also proposed separating the small-business subset to increase the performance of default predictions. Ko et al. (2022) proposed a wide range of prediction models to mitigate the risks of default and asymmetric information on P2P lending platforms, stating that LightGBM outperformed the other methodologies, with a model accuracy of 68.57% and a revenue improvement of 23.8 million US dollars. They argue that the Lending Club, despite being the largest P2P lending platform in the USA, still has a high rate of default, which proves its ineffectiveness in classifying debt. Similarly, Muslim et al. (2022) applied an improved LightGBM based on swarm algorithms to predict default rates on P2P platforms. Their study indicated that performance increased after feature selection using a swarm algorithm, with LightGBM + ACO achieving the highest level of accuracy (95.64%).
In terms of profitability, the work of Serrano-Cinca and Gutiérrez-Nieto (2016) is worth mentioning; they used multivariate regression along with a decision tree model (CHAID) to develop a profit scoring system. The internal rate of return (IRR) of each loan was used as a profitability measure. Using exploratory data analysis and multivariate linear regression, they found that the explanatory factors for predicting default and profitability differed. As in other studies, the Lending Club subgrade was found to be significant. Although the loan purpose and housing characteristics were generally significant, they seemed to be more significant in predicting profitability. Moreover, nonlinear (inverted or U-shaped) relationships existed between the internal rate of return and its factors. With respect to profitability, the decision tree achieved a mean internal rate of return of 5.98%, outperforming the Lending Club's mean of 3.92%. Their study also suggested that nonlinear data mining techniques can be useful for developing profit-scoring systems.
Similarly, Guo et al. (2016) predicted the expected return on a loan using an instance-based model (IOM), considering the investment decision on a P2P platform as a portfolio optimization problem with boundary constraints. They compared the results of this instance-based model with those of two rating-based models, the restricted Boltzmann machine (RBM) and RBM+, and showed that the IOM outperformed them in terms of prediction accuracy and investment performance. The probability of default was estimated using logistic regression and then fed to the IOM to calculate the expected returns. Similarly, Bastani et al. (2019) proposed a two-stage system for credit and profit scoring. In Stage 1, they attempted to identify non-default loans, which were then moved to Stage 2, where profitability was estimated using the internal rate of return. Google's wide and deep learning approach was used to build the predictive models in both stages. This model can achieve both memorization and generalization, avoiding the overgeneralization frequently observed in deep learning modelling techniques. The factors predicting default and the internal rate of return were similar to each other and to those studied by Serrano-Cinca et al. (2015) and Serrano-Cinca and Gutiérrez-Nieto (2016), respectively. Their study tested different statistical techniques to address the imbalance problem of the sample, namely, random undersampling, random oversampling, and SMOTE; the last method appeared to perform consistently better than the other two. Their results indicated that the proposed scoring approach outperformed existing credit and profit scoring approaches, and the combination of wide and deep learning with the SMOTE method achieved the highest performance.
Elliott and Curet (1999) devised the first framework for invoice discounting, a generic term for financing solutions that use invoices as collateral for loans. They suggested using an inductive algorithm such as case-based reasoning (CBR) and noted a lack of knowledge regarding invoice-discounting cases. To the best of our knowledge, few similar studies have been published to date. Dorfleitner et al. (2017) estimated default events in the online invoice trading market using logit models. Their study suggests that interest rates, duration, and advance rates are determinant factors in predicting default, where the higher the gross yield and the longer the time to maturity, the higher the probability of default and the loss rate. However, the advance rate is negatively related to default, reflecting the platform's ability to assess sellers' creditworthiness. This implies that the larger and more creditworthy a company is, the more easily it can obtain larger loans. They also concluded that risk can be effectively reduced by diversifying the portfolio of invoices and that investors prefer to deal with higher risk through higher interest rates rather than lower advance rates. Furthermore, the default rate and average net returns are lower in a fixed-price regime than under an auction pricing mechanism. Zhang and Thomas (2015) considered the merits of including economic variables in a logistic regression-based credit scorecard in an invoice-discounting context. By doing so, they aimed to directly estimate a short-term, dynamic version of the probability of default (i.e., the Point-in-Time (PIT) probability of default). Perko (2017) also adapted the definition of "probability of default" as a more short-term and fine-grained concept, in which the outcome is predicted over a fixed timeframe of 30 days in advance rather than in overdue days.
Other authors have extensively evaluated credit scoring systems and the estimation of default in commercial credit. However, this study focuses on the estimation of the probability of default and pricing optimization on crowdlending platforms; those works are therefore not reviewed here. A summary of related research can be found in Table 9 in the Appendix, which provides additional information regarding the accuracy of the models and the samples used.
Dataset and descriptive analysis
Institutional background and data
We used publicly available data from the crowdlending platform Kriya,^{Footnote 1} whose headquarters are in the United Kingdom (UK). The company was formerly known as MarketInvoice and was later rebranded as MarketFinance before being given its current name; it specializes in factoring and P2B. According to the information provided on their website, as of November 2023, they had funded around 3.4 billion pounds sterling (GBP) to small and medium-sized enterprises (SMEs), with an expected net yield of between 4 and 6% and a loan collection rate of 78.01%. After data cleansing, the dataset contained 46,761 observations and 26 variables, with loans from March 2011 to December 2019. Some original variables—such as trade payment, trade settlement, and delinquency dates—were dropped because considering them for predictive purposes was not sensible. This was also the case for the currency variables; this study focused on GBP, given the low number of observations in other currencies. Meanwhile, the advance date was subtracted from the trade expected pay date, creating the variable "days", which represents the maturity of the transaction. In addition, the following macroeconomic variables were tested for model fitting: GDP, exports, import price index, producer price index, consumer price index, money supply (M1), US/GBP spot exchange rate, and EUR/GBP spot exchange rate. However, after feature selection, only the first three showed significance and were considered for further analysis. The information provided by the platform was rather limited, since no variables existed regarding financial grade, income, or solvency; financial information about the borrower, seller, or protection provider was missing.
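The construction of the "days" variable described above can be sketched as follows (a minimal illustration; the function name and example dates are hypothetical, not drawn from the Kriya data):

```python
from datetime import date

def maturity_days(advance_date: date, expected_pay_date: date) -> int:
    """Maturity of a transaction in days: expected pay date minus advance date."""
    return (expected_pay_date - advance_date).days

# hypothetical invoice advanced on 1 March 2019, expected to be paid on 25 April 2019
print(maturity_days(date(2019, 3, 1), date(2019, 4, 25)))  # 55 days
```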
As such, it was necessary to study the extent to which the available information allowed for an analysis of the probability of default and, consequently, whether investors could accurately assess risk, as they can on other crowdlending platforms that do not restrict this type of information.
Table 10 lists all the variables used in the present study. In addition to the invoice variables (advance rate, annualized gross yield, total face value, days) sourced from the Kriya database, a set of macroeconomic variables (GDP, exports, and import price index) were considered for the model fitting, while the remaining variables were only used in the descriptive analysis.
Descriptive analysis
A summary of the variables is presented in Tables 1 and 2. Considering the mean values, a typical invoice was funded at a 76% rate, with an annualized gross yield of 11% and a maturity of 55 days. The platform's advance rate ranged from 2.5% to the total value of the invoice, with yields reaching as high as 49% and maturities of over a year in some cases. The invoice amount varied from a few thousand to several million GBP, indicating that the platform financed a wide range of businesses in terms of income. The total crystallized losses were very low, with a mean absolute value of 108 GBP, mainly because of the low default rate of 1.7%. Regarding the macroeconomic variables, the UK had a steady mean GDP growth rate of 0.89%. Exports experienced a slightly higher growth rate of 1.86%, and the import price index did not change significantly in terms of mean values. Regarding the categorical variables, the most common trade operation was the standard discount (80.9%), wherein the platform discounted a single invoice with conditions negotiated with the seller, followed by the entire ledger (12.8%), wherein sellers had a special agreement with the platform to discount their entire portfolio of invoices. This is probably why some of the funded invoices had very low face values and maturities. Most loans were traded in Bands 5 (23.7%), 4 (15.3%), and 1 (13.3%), which may be due to the industry in which the borrower operates. In addition, 88% of these loans were in "repaid" status and only 11.4% were repurchased, while other statuses were virtually nonexistent.
Table 3 presents the correlation matrices between variables. In general, the data did not exhibit multicollinearity. Exports and the import price index were the most noticeably correlated pair of variables: when the import price index increases, companies' production costs increase, which in turn affects their competitiveness and reduces their export levels, creating a negative correlation. Maturity was also negatively correlated with the advance rate and the annualized gross yield. In the first case, the platform may have been trying to reduce the risk involved in a higher fund rate by providing a lower maturity period, limiting investors' exposure to it. In the second case, because high interest rates over longer maturities result in greater costs for the seller, many of these loans were repaid as soon as possible, sometimes before the expected payment date; invoices with long maturities and low interest rates thus arise mainly because they are repaid earlier than expected. Loans repaid over longer timeframes, in contrast, usually carry lower interest rates to compensate for the costs incurred, and financially healthy companies can assume longer maturities when they are not overcharged interest for doing so, creating this negative relationship. The platform's correct credit risk assessment is also supported by the negative correlation between the advance rate and the annualized gross yield, whereby higher fund rates are granted at lower interest rates to financially healthy companies, and vice versa.
Regarding outliers, the dataset was primarily affected in the invoice variables, as shown in Fig. 1, which reveals several extreme outliers. These had to be removed from the training dataset because logistic regression can be very sensitive to them; removing them ensured that no bias was generated during model fitting. Nonetheless, the test dataset remained untouched, so realistic predictions could be made for any type of data point. Consequently, 10% top and bottom winsorization of the training data was performed before model fitting.
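The 10% top-and-bottom winsorization step can be sketched as follows (a minimal sketch with illustrative yield values; the percentile-clipping convention via NumPy is an assumption about the implementation):

```python
import numpy as np

def winsorize_10(x):
    """Clip values below the 10th percentile and above the 90th percentile
    (10% top and bottom winsorization)."""
    lo, hi = np.percentile(x, [10, 90])
    return np.clip(x, lo, hi)

# illustrative annualized gross yields (%), including an extreme outlier of 49%
yields = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 20.0, 49.0])
print(winsorize_10(yields))  # extreme values pulled in to the 10th/90th percentiles
```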
Preprocessing and sampling
First, the data were cleaned by rejecting missing or irrelevant observations. Loans pending repayment were discarded. Therefore, only those that had been completed were considered because they had been fully paid, partially recovered, or completely lost. Two levels^{Footnote 2} from the categorical variable trade type were excluded because they had very low representation in the dataset. Given the divergence in variable scales, we standardized the variables by subtracting their mean and dividing by their standard deviation.
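The standardization step described above can be sketched as (a minimal sketch with illustrative maturity values):

```python
import numpy as np

def standardize(x):
    """Z-score standardization: subtract the mean and divide by the standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# illustrative maturities in days; after standardization, the mean is 0 and the std is 1
z = standardize([55, 30, 90, 45])
print(z.mean(), z.std())
```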
Following these adjustments, feature selection was performed using the information gain ratio and chi-squared filters. Given a training set S that is partitioned into V subsets \(S_{1}, \ldots, S_{V}\) according to the V different values of feature X, the mutual information of feature X and class Y was defined based on Kotsiantis and Pintelas (2004, p. 50) as

\[ IG(Y, X) = H(Y) - \sum_{v = 1}^{V} \frac{\left| S_{v} \right|}{\left| S \right|} H\left( Y \mid S_{v} \right), \]

where \(H(\cdot)\) denotes entropy.
However, the information-gain filter is strongly biased toward features with many distinct values. This can be corrected using the following calculation, which represents the potential information obtained by partitioning S into V subsets:

\[ SI(X) = - \sum_{v = 1}^{V} \frac{\left| S_{v} \right|}{\left| S \right|} \log_{2} \frac{\left| S_{v} \right|}{\left| S \right|}. \]
The information gain ratio expresses the proportion of information gained by the partition:

\[ GR(Y, X) = \frac{IG(Y, X)}{SI(X)}. \]
The chi-square filter, \(X_{c}^{2}\), measures the divergence of the feature distribution by comparing observed and expected values:

\[ X_{c}^{2} = \sum_{i} \frac{\left( O_{i} - E_{i} \right)^{2}}{E_{i}}, \]

where c is the number of degrees of freedom, \(O_{i}\) is the observed value, and \(E_{i}\) is the expected value. The statistics were compared against a chi-square table.
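The chi-square statistic underlying this filter can be computed directly from observed and expected counts (a minimal sketch; the counts below are illustrative, not taken from the dataset):

```python
def chi_square_stat(observed, expected):
    """X^2 = sum over cells of (O_i - E_i)^2 / E_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical default counts across two trade types vs. the counts
# expected if trade type and default were independent
print(chi_square_stat([30, 70], [50, 50]))  # 16.0
```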
All the metrics above reported similar results. The variables that were not significant were excluded from the analysis. Those that were finally included in the model development are shown in Fig. 2.
The data were split into training and test sets at a 75–25% ratio, respectively. The test dataset contained only loans from 2019, whereas the training dataset gathered a wider selection of older data to ensure intertemporal validation of the models (Lau 1987; Serrano-Cinca et al. 2015). Outliers were handled in the training dataset only, by winsorizing the top and bottom 10% of the invoice variables, as mentioned previously. The target variable of this study represented loan status and had two classes: "Yes" if the loan was in default and "No" if it was not. The positive class, also known as the minority class, represented 1.7% of the total loans, while the negative class, also known as the majority class, accounted for 98.3%.
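The intertemporal split can be sketched as follows (a minimal sketch with hypothetical origination years; in the study, the test set holds only 2019 loans and the split also respects the 75–25% ratio):

```python
def intertemporal_split(loans, test_year=2019):
    """Train on loans originated before `test_year` and test on loans from
    `test_year`, so models are always evaluated on later data than they saw."""
    train = [loan for loan in loans if loan["year"] < test_year]
    test = [loan for loan in loans if loan["year"] >= test_year]
    return train, test

# hypothetical origination years
loans = [{"year": y} for y in (2015, 2017, 2018, 2019, 2019)]
train, test = intertemporal_split(loans)
print(len(train), len(test))  # 3 2
```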
The dataset suffered from a severe class-imbalance problem that would have biased the predictive models towards the majority class (Bastani et al. 2019). "A classifier derived from an imbalanced dataset typically has a low error rate for the majority class and an unacceptable error rate for the minority class" (Kotsiantis and Pintelas 2004, p. 48). Moreover, in this case, the misclassification cost for the minority class was higher than that for the majority class. That is, the cost of accepting a defaulting loan was higher than that of rejecting a non-defaulting loan, because the former entails an actual loss, whereas the latter implies only an opportunity cost. To resolve the class distribution issue, three well-known sampling techniques—random undersampling, random oversampling, and SMOTE—were applied and compared.
The random undersampling technique balances the classes by randomly eliminating observations from the majority class. This technique can eliminate potentially useful information from the analysis, whereas in random oversampling, the replication of observations in the minority class increases the likelihood of overfitting (Kotsiantis and Pintelas 2004). In both techniques, a parameter controls the final ratio of the classes. Finally, in the synthetic minority oversampling technique, new observations are not replicated but are synthetically derived from the original observations in the minority class. These are created along an imaginary segment connecting each minority-class observation with its K nearest neighbors (KNN), where KNN computes the distance between the observation under consideration and all other observations and selects the K observations at the smallest distances. This study involved both categorical and numerical data; thus, the Gower distance (Gower 1971) was used, estimated by considering two individuals, \(i\) and \(j\), compared on a character \(k\). An \(S_{ijk}\) score of zero was assigned when they differed, and a positive fraction or unity when they had some degree of similarity. The possibility of making comparisons can be represented by a quantity, \(\delta_{ijk}\), which is 1 when character \(k\) can be compared for \(i\) and \(j\) and 0 otherwise. The similarity between \(i\) and \(j\), \(S_{ij}\), can be expressed as the average score obtained over all possible comparisons:

\[ S_{ij} = \frac{\sum_{k} S_{ijk}\, \delta_{ijk}}{\sum_{k} \delta_{ijk}}. \]
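The Gower similarity for mixed data can be sketched as follows (a minimal sketch; the feature values, the numeric ranges, and the use of `None` to mark a non-comparable character are illustrative assumptions):

```python
def gower_similarity(a, b, numeric_ranges):
    """Average Gower score S_ij = sum_k(S_ijk * delta_ijk) / sum_k(delta_ijk):
    S_ijk is 1 - |a_k - b_k| / range_k for numeric characters and 1/0 for
    matching/differing categories; delta_ijk is 0 when a character is missing."""
    num = den = 0.0
    for k, (x, y) in enumerate(zip(a, b)):
        if x is None or y is None:        # delta_ijk = 0: not comparable
            continue
        if k in numeric_ranges:           # numeric character
            num += 1 - abs(x - y) / numeric_ranges[k]
        else:                             # categorical character
            num += 1.0 if x == y else 0.0
        den += 1
    return num / den

# two hypothetical invoices described by (trade type, maturity in days),
# assuming maturities span a range of 100 days
print(gower_similarity(("standard", 40), ("standard", 60), {1: 100}))  # 0.9
```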
After obtaining the K nearest neighbors, the distance between the observation under consideration and one of its nearest neighbors is multiplied by a random number between 0 and 1, which is added to the original observation (Chawla et al. 2002). This "causes the selection of a random point along the line segment between two specific features" (Chawla et al. 2002, p. 328).
The equation below (Zhu et al. 2019) represents the SMOTE calculation, where \(x_{i}\) is the original observation, \(x_{n}\) is one of its K nearest neighbors, \(R \in \left[ 0,1 \right]\) is a random number, and \(x_{new}\) is the resulting artificial observation:

\[ x_{new} = x_{i} + R \times \left( x_{n} - x_{i} \right). \]
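The interpolation step can be sketched for numeric features as follows (a minimal sketch; the feature vectors are hypothetical, and categorical features, which SMOTE handles differently, are omitted):

```python
import random

def smote_sample(x_i, x_n, rng=random):
    """x_new = x_i + R * (x_n - x_i): a random point on the segment between a
    minority-class observation and one of its nearest neighbors, R ~ U(0, 1)."""
    r = rng.random()
    return [a + r * (b - a) for a, b in zip(x_i, x_n)]

random.seed(0)
# hypothetical (advance rate, annualized gross yield) pairs for two defaulted loans
new = smote_sample([0.70, 10.0], [0.80, 14.0])
print(new)  # each feature lies between the two original observations
```

Because a single random factor R is shared across features, the synthetic point lies on the straight segment between the two originals rather than anywhere in the bounding box.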
To select the optimal ratio between the minority and majority classes, multiple scenarios were tested using logistic regression as a reference and the F1-score as the maximization target. Specifically, logistic regression with SMOTE was evaluated using the five nearest neighbors at a rate of 29 with different balancing rates (ranging from 1 to 50%). The F1-score peaked at balancing rates of approximately 10% and 36%, although the latter reported considerably higher sensitivity.
Typically, in an imbalanced dataset, accuracy is not a desirable metric for model comparison (Bastani et al. 2019). Therefore, this study focused on precision^{Footnote 3} and sensitivity^{Footnote 4}; the former penalizes the misclassification of negatives, whereas the latter penalizes the misclassification of positives. The F1-score is the harmonic mean of both metrics and gives them equal importance, which motivated its use for model parameter optimization and the final selection between models:

\[ Precision = \frac{TP}{TP + FP}, \quad Sensitivity = \frac{TP}{TP + FN}, \quad F1 = \frac{2 \times Precision \times Sensitivity}{Precision + Sensitivity}, \]
where TP is the number of true positives, FP the number of false positives, and FN the number of false negatives.
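These three metrics can be computed directly from confusion-matrix counts (a minimal sketch; the counts are illustrative, not results from this study):

```python
def precision(tp, fp):
    return tp / (tp + fp)

def sensitivity(tp, fn):
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and sensitivity."""
    p, s = precision(tp, fp), sensitivity(tp, fn)
    return 2 * p * s / (p + s)

# hypothetical counts: 40 true positives, 10 false positives, 60 false negatives
print(precision(40, 10), sensitivity(40, 60), round(f1_score(40, 10, 60), 3))
```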
For easier comparison with other studies, other commonly used metrics were also provided (specificity,^{Footnote 5} accuracy, and McFadden’s R^{2}):

$$Specificity = \frac{TN}{TN + FP},\quad Accuracy = \frac{TP + TN}{TP + TN + FP + FN},\quad R_{McFadden}^{2} = 1 - \frac{{\ln \left( {L_{c} } \right)}}{{\ln \left( {L_{null} } \right)}}$$

where TN is the number of true negatives, \(L_{c}\) is the (maximized) likelihood value from the current fitted model, and \(L_{null}\) is the corresponding value for the null model (with only an intercept and no covariates).
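All of these confusion-matrix metrics can be computed with a small helper. The counts below are hypothetical, chosen only to mirror the orders of magnitude discussed later (around 56% sensitivity and 76% specificity); they are not the paper's actual confusion matrix.

```python
def classification_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics as defined in the text."""
    precision = tp / (tp + fp)                    # positive predictive value
    sensitivity = tp / (tp + fn)                  # recall / true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

# Hypothetical counts, not results from the study.
m = classification_metrics(tp=56, fp=24, tn=76, fn=44)
```

With these counts, accuracy (0.66) looks acceptable even though barely over half of the positives are caught, which is exactly why the F1-score is preferred for model selection here.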
As previously mentioned, different balancing rates were considered using logistic regression and SMOTE (using the five nearest neighbors at a rate of 29). The best results for F1-score and sensitivity were obtained at a balancing rate of 36%. To provide a fair comparison between the undersampling and oversampling techniques, we used rates of 0.345 and 29 for the former and latter, respectively. At these rates, the three sampling techniques yielded three datasets, all with a minority-class ratio of 36%. These datasets were used to evaluate the performance of the sampling techniques with the logistic regression (Table 5). After the entire preprocessing, the final training dataset contained 1,830 observations with the undersampling technique and 53,064 observations with the oversampling techniques. The experimental design is illustrated in Fig. 3.
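As a sketch of how a dataset can be rebalanced to a 36% minority ratio by random undersampling; the class counts are made up for illustration and do not match the paper's dataset sizes.

```python
import random

def undersample(majority, minority, target_ratio, rng):
    """Drop majority-class rows at random until the minority class makes up
    `target_ratio` of the resulting dataset."""
    # Solve minority_n / (minority_n + keep_n) = target_ratio for keep_n.
    keep_n = round(len(minority) * (1 - target_ratio) / target_ratio)
    kept = rng.sample(majority, min(keep_n, len(majority)))
    return kept + minority

rng = random.Random(42)
majority = [("paid", k) for k in range(1000)]    # made-up performing loans
minority = [("default", k) for k in range(63)]   # made-up defaults
balanced = undersample(majority, minority, target_ratio=0.36, rng=rng)
ratio = len(minority) / len(balanced)            # close to 0.36
```

Oversampling approaches (random replication or SMOTE) instead grow the minority class until the same target ratio is reached, leaving the majority class intact.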
Predicting invoice-lending default
Logistic regression
This section analyzes the determinants of default using a binary logistic regression model, which relates the predictor variables, \(X = \left\{ {x_{1} \cdots x_{i} } \right\}\), to the categorical response variable, \(Y = \left\{ {0,1} \right\}\). Predicted scores (\(\hat{\pi }\)) and observed probabilities (Y) can then be compared using the maximization of a log-likelihood (LL) function.
If:

$$\pi = P\left( {Y = 1 \mid X} \right)$$

then:

$$\pi = \frac{{e^{{\beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{i} x_{i} }} }}{{1 + e^{{\beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{i} x_{i} }} }}$$

By logit transformation:

$$logit\left( \pi \right) = \ln \left( {\frac{\pi }{1 - \pi }} \right)$$

By algebraic manipulation:

$$\ln \left( {\frac{\pi }{1 - \pi }} \right) = \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{i} x_{i}$$

Maximum log-likelihood estimation:

$$LL = \mathop \sum \limits_{k = 1}^{n} \left[ {y_{k} \ln \left( {\hat{\pi }_{k} } \right) + \left( {1 - y_{k} } \right)\ln \left( {1 - \hat{\pi }_{k} } \right)} \right]$$
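The log-likelihood being maximized can be evaluated directly. This minimal sketch uses toy data and an arbitrary coefficient vector, not the fitted model reported in Table 4.

```python
import math

def log_likelihood(y, x, beta):
    """LL = sum_k [ y_k * ln(pi_k) + (1 - y_k) * ln(1 - pi_k) ], where
    pi_k = 1 / (1 + exp(-(beta_0 + beta_1 * x_k1 + ...)))."""
    ll = 0.0
    for y_k, row in zip(y, x):
        eta = beta[0] + sum(b * v for b, v in zip(beta[1:], row))
        pi = 1.0 / (1.0 + math.exp(-eta))          # logistic link
        ll += y_k * math.log(pi) + (1 - y_k) * math.log(1 - pi)
    return ll

# Toy data; with all coefficients at zero, every pi_k = 0.5 and the
# log-likelihood reduces to n * ln(0.5).
y = [0, 1, 1, 0]
x = [[1.2], [0.7], [2.0], [0.1]]
ll_null = log_likelihood(y, x, beta=[0.0, 0.0])
```

A fitting routine searches for the `beta` that maximizes this quantity; the same function evaluated at the null model gives the \(L_{null}\) term used in McFadden's R².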
Table 4 presents the coefficient estimates, standard errors, and significance of the logistic model. Unstandardized coefficients were used to study the direct effects of the variables on default status and standardized coefficients to rank the predictors. These results show that annualized gross yield, advance rate, and maturity were significant in determining default, in line with Turiel and Aste (2020) for the P2P market and Dorfleitner et al. (2017) for the P2B market. Gross yield has a positive effect on the probability of default, consistent with the notion that borrowers who accept higher interest rates reveal lower creditworthiness (Stiglitz and Weiss 1981; Dorfleitner et al. 2017). Furthermore, a negative relationship exists between the advance rate and default, as also found by Dorfleitner et al. (2017), showing that the advance rate can be an effective mechanism to dissuade default among debtors the platform considers riskier. The nominal value was also negatively related to default; thus, lower face-value invoices were more likely to be unpaid. As in the previous case, the platform correctly assesses risk by granting higher capital to companies that are less likely to default, thereby creating a negative relationship. However, maturity is positively linked, increasing the probability of default in long-term transactions, similar to Dorfleitner et al. (2017). Trade type is highly significant in determining default when the transaction is standard or multi-debtor, with a strong positive relationship between these types of transactions and defaults. Furthermore, GDP and exports have positive and significant relationships with default. This might seem counterintuitive at first because it associates positive economic cycles with a higher rate of default.
Nonetheless, because lending standards usually fluctuate with the economic cycle, they may loosen in stable cycles, making it easier to access credit even for borrowers with higher probabilities of default. As for the trade bands, band 10 was highly significant, while bands 4 and 9 were only significant at the 10% level. Considering the standardized coefficients, the following variables were ranked as the most important based on the size of their effect: trade type (multi-debtor and standard), trade band 10, total face value, annualized gross yield, and maturity.
For the model predictions, Table 5 shows the results in terms of performance metrics with a 36% balanced minority-class ratio. Better models were achieved with greater balance at class distributions below 50%, whereas increasing the minority ratio further can bias the results towards very high sensitivity rates at the cost of poor specificity. Because the data were randomly sampled, the mean of 200 iterations was calculated.
In general, a high level of overall accuracy was obtained when no sampling techniques were used; however, the resulting sensitivity rate was not acceptable. A clear difference existed between the results obtained with and without sampling techniques, especially in terms of sensitivity, where the former correctly classified 50% more loan defaults. All sampling techniques reported similar results, although SMOTE slightly outperformed the others on almost all metrics, in accordance with Bastani et al. (2019). It correctly classified 56% of the loans in default (NPLs), while preserving its capacity to correctly classify 76% of the performing loans. A relatively good area under the curve (AUC^{Footnote 6}) score of 73.2% was achieved using SMOTE: higher than that of any of the alternative models used, as shown in Table 5 and Fig. 5.
Predicting default with machine learning alternatives
This section analyzes the performance of alternative models in predicting the default rate in the invoice-lending segment. Our objective was to evaluate the predictive performance of alternatives to logistic regression, namely conditional inference trees, random forests, support vector machines, and neural networks. Logistic regression reported a higher F1-score on the SMOTE dataset, so this dataset was used for fitting the alternative models. All models were validated using the same data split as in the logistic regression (75% training, 25% testing); however, the package used for modelling^{Footnote 7} (Bischl et al. 2016) allowed for tenfold cross-validation within the training dataset, which we used to validate the models. This procedure splits the training dataset into ten folds, where each fold is iteratively held out and used to validate the performance of the models against it. Hyperparameter optimization was performed using a random search, with the objective of maximizing the F1-score (see Table for the hyperparameter selection).
Conditional inference tree
Decision trees are nonparametric supervised learning algorithms that do not require special assumptions, making them versatile. Their main characteristic is a feature space, which is recursively partitioned by grouping observations that have similar response values (Strobl et al. 2009). In other words, they attempt to describe the conditional distribution of the response variable Y, given a set of m covariates X, by partitioning the feature space of the covariates into r disjoint nodes \(B_{1} , \ldots ,B_{r}\), where \(X = \mathop \cup \limits_{k = 1}^{r} B_{k}\) (Hothorn et al. 2006).
The algorithm selects the variable from the covariate vector X with the strongest association with Y, searches for the best split point, and splits the variable into disjoint nodes. As the splitting process continues, the level of purity^{Footnote 8} is observed: “In each node, the variable that is most strongly associated with the response variable (i.e., that produces the highest impurity reduction or the lowest p value) is selected for the next split” (Strobl et al. 2009, p. 327). For conditional inference trees, p values were used instead of entropy measures. In this study, the association between Y and \(X_{j}\), where \(j = 1, \ldots ,m\), was measured using Bonferroni-adjusted tests, which formulate a global hypothesis of independence in terms of m partial hypotheses, \(H_{0}^{j} :D\left( {Y \mid X_{j} } \right) = D\left( Y \right)\). The variable with the lowest p value was selected. When insufficient evidence existed to reject \(H_{0}\) at a prespecified level α, the recursion was halted (Hothorn et al. 2006). Permutation tests were performed because the distribution \(D\left( {Y \mid X_{j} } \right)\) is usually unknown. The split-point A* was obtained with a test statistic maximized over all possible subsets A (Hothorn et al. 2006), which measured the discrepancy between {\(Y_{i} \mid X_{ji} \in A\)} and {\(Y_{i} \mid X_{ji} \notin A\)}.
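The permutation-test idea behind this variable selection can be illustrated with a deliberately simplified statistic (an absolute difference in group means, rather than the full framework of Hothorn et al. 2006); the data are synthetic.

```python
import random

def perm_pvalue(y, x, n_perm=2000, seed=7):
    """Permutation p-value for the association between a binary response y
    and a numeric covariate x, using |mean(x | y=1) - mean(x | y=0)| as
    the test statistic."""
    def stat(labels):
        g1 = [v for lab, v in zip(labels, x) if lab == 1]
        g0 = [v for lab, v in zip(labels, x) if lab == 0]
        return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

    rng = random.Random(seed)
    observed = stat(y)
    y_perm = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)               # break the y-x association
        if stat(y_perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)      # add-one correction

# Synthetic covariates: x_strong separates the classes, x_noise does not.
y = [1] * 10 + [0] * 10
x_strong = [5.0] * 10 + [1.0] * 10
noise_rng = random.Random(1)
x_noise = [noise_rng.random() for _ in range(20)]
p_strong = perm_pvalue(y, x_strong)       # very small p-value
p_noise = perm_pvalue(y, x_noise)         # much larger p-value
```

The covariate with the smallest p-value would be chosen for the next split; when no adjusted p-value falls below α, the recursion stops.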
Random forest
“A random forest is a classifier consisting of a collection of tree-structured classifiers {\(h\left( {{\mathbf{x}},{\Theta }_{k} } \right), k = 1, \ldots\)}, where {\(\Theta_{k}\)} are independent identically distributed random vectors and each tree casts a unit vote for the most popular class at input x” (Breiman 1996, p. 6). Random forests try to predict the response variable with a predictor \(\varphi \left( {x,{\mathcal{L}}} \right)\), where x is the input vector and \({\mathcal{L}}\) is a learning dataset. To obtain a better predictor, repeated bootstrap samples \({\mathcal{L}}^{B}\) are taken from the learning dataset, each consisting of N cases drawn at random with replacement, forming a sequence of predictors \(\varphi \left( {x,{\mathcal{L}}^{{\mathcal{B}}} } \right)\) (Breiman 1996, p. 123). Each predictor forms a decision tree that votes for the most popular class (mode) at input x. This procedure is known as bootstrap aggregation or bagging. In addition, random forests use feature bagging, in which a random subset of the features x is selected for each tree built. Decision trees are usually correlated in the presence of strong predictors, thus feature bagging can reduce this correlation, making random forests more accurate. Conditional random forests are a special type of random forest wherein the trees used for bagging are conditional inference trees.
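The bootstrap-aggregation step can be sketched as follows. For brevity, each "tree" here is replaced by a trivial classifier that predicts the majority label of its own bootstrap sample; a real random forest grows a full decision tree on each replicate. The data are made up.

```python
import random

def bootstrap_sample(data, rng):
    """One bootstrap replicate: N cases drawn at random with replacement."""
    return [rng.choice(data) for _ in data]

def majority_vote(labels):
    """Return the most popular class among the votes cast."""
    return max(set(labels), key=labels.count)

rng = random.Random(0)
# Made-up learning dataset of loan outcomes.
data = ["default"] * 2 + ["paid"] * 8

# Each stand-in "tree" casts a unit vote: the majority label of its own
# bootstrap sample. The ensemble aggregates the 25 unit votes.
votes = [majority_vote(bootstrap_sample(data, rng)) for _ in range(25)]
prediction = majority_vote(votes)
```

Feature bagging would additionally restrict each replicate to a random subset of the covariates, decorrelating the individual trees.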
Support vector machine
In support vector machines, the input vectors are nonlinearly mapped to a highdimensional feature space, where a hyperplane or a set of hyperplanes is constructed to provide a linear decision function with a maximal margin between the vectors of the two classes (Cortes and Vapnik 1995).
Using a set of labelled training patterns:

$$\left( {y_{1} ,x_{1} } \right), \ldots ,\left( {y_{l} ,x_{l} } \right),\quad y_{i} \in \left\{ { - 1,1} \right\}$$

which are linearly separable if there exists a vector w and scalar b such that:

$$y_{i} \left( {{\mathbf{w}} \cdot x_{i} + b} \right) \ge 1,\quad i = 1, \ldots ,l$$

The optimal hyperplane that separates the data with maximal margin is given by:

$${\mathbf{w}}_{0} \cdot x + b_{0} = 0,\quad {\mathbf{w}}_{0} = \mathop \sum \limits_{i = 1}^{l} y_{i} a_{i}^{0} x_{i}$$

where \(a_{i}^{0} > 0\) and \({\Lambda }_{0}^{T} = \left( {a_{1}^{0} \cdots a_{l}^{0} } \right)\) form a vector of parameters, and the Lagrangian problem that needs to be minimized (with respect to w and b, and maximized with respect to \(a_{i} \ge 0\)) is:

$$L\left( {{\mathbf{w}},b,{\Lambda }} \right) = \frac{1}{2}\left\| {\mathbf{w}} \right\|^{2} - \mathop \sum \limits_{i = 1}^{l} a_{i} \left[ {y_{i} \left( {{\mathbf{w}} \cdot x_{i} + b} \right) - 1} \right]$$
A kernel function is often used to transform the feature space when the data cannot be split by a hyperplane without error, which occurs when points overlap.
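A common choice for such a transformation is the Gaussian (RBF) kernel, sketched below; the inputs and the `gamma` value are arbitrary illustrations.

```python
import math

def rbf_kernel(u, v, gamma=0.5):
    """Gaussian (RBF) kernel K(u, v) = exp(-gamma * ||u - v||^2), which
    corresponds to an implicit mapping into a high-dimensional feature
    space where overlapping classes may become linearly separable."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq_dist)

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0])  # identical inputs -> 1.0
k_far = rbf_kernel([1.0, 2.0], [4.0, 6.0])   # distant inputs -> near 0.0
```

The kernel replaces every dot product \(x_{i} \cdot x_{j}\) in the dual problem, so the hyperplane is never computed in the high-dimensional space explicitly.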
Artificial neural networks
The basic elements of an artificial neural network are the nodes that represent the neurons of the biological brain. These nodes are connected and receive stimulation signals from the input variables, not directly but through weights and activation functions. The neuron output signal \(O\) is expressed by the following relationship (Abraham 2005):

$$O = f\left( {net} \right)$$
where \({\varvec{w}} = \left\{ {{\varvec{w}}_{1} \cdots {\varvec{w}}_{{\varvec{n}}} } \right\}\) is the weight vector, \({\varvec{x}} = \left\{ {x_{1} \cdots x_{n} } \right\}\) is the input vector, and the function \(f\left( {net} \right)\) is referred to as an activation (transfer) function. The variable net is defined as the scalar product of the weight and input vectors:

$$net = {\varvec{w}}^{T} {\varvec{x}} = w_{1} x_{1} + \cdots + w_{n} x_{n}$$

where T is the transpose of the matrix and the output value O is computed as

$$O = f\left( {net} \right) = \left\{ {\begin{array}{*{20}l} 1 \hfill & {if\;{\varvec{w}}^{T} {\varvec{x}} \ge {\uptheta }} \hfill \\ 0 \hfill & {otherwise} \hfill \\ \end{array} } \right.$$
where \({\uptheta }\) is the threshold level. If the activation function reaches the threshold level, the signal is transmitted to the connected node. Figure 4 illustrates the general structure of an artificial neural network.
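A single threshold neuron of this kind can be sketched as follows; the weights, inputs, and threshold are illustrative, and a logistic activation is used to match the network described next.

```python
import math

def neuron_output(x, w, theta):
    """Threshold neuron: net = w^T x; the node fires (O = 1) when the
    logistic activation f(net) reaches the threshold theta."""
    net = sum(w_i * x_i for w_i, x_i in zip(w, x))
    f_net = 1.0 / (1.0 + math.exp(-net))   # logistic activation function
    return 1 if f_net >= theta else 0

x = [1.0, 2.0]                                        # input signals
o_fire = neuron_output(x, w=[0.8, 0.5], theta=0.6)    # net = 1.8, f ~ 0.86
o_idle = neuron_output(x, w=[-0.8, -0.5], theta=0.6)  # net = -1.8, f ~ 0.14
```

In a full network, the fired signal becomes an input to the nodes of the next layer, and training adjusts the weights to reduce the error function.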
Our network architecture was similar to that of the extreme-learning machine used by Pang et al. (2021), differing only in the learning method. Specifically, there were nine input variables (Fig. 2) with their respective weights, one hidden layer, and one output node (default). A logistic activation function was used, with resilient backpropagation with weight backtracking as the learning algorithm and cross-entropy as the error function.
Benchmark predictions
Table 6 presents the results for the performance measures. Apart from the neural network, the alternative models did not show any skill, with an AUC score below or slightly above 50% (Fig. 5). Using the DeLong test to compare the significance of differences between the AUC scores, we developed the following two-sided contrast based on DeLong et al. (1988):

$$H_{0} :AUC_{p} = AUC_{k} \quad vs.\quad H_{1} :AUC_{p} \ne AUC_{k}$$
where p and k represent the AUC of the two models. Table 7 presents the estimates and significance levels for the scores. Our results showed that the AUC score of the logistic regression differed significantly from those of all the other models. The neural network also showed significant differences at the 5% and 10% levels; however, in its contrast with the logistic regression, the difference favored the latter at the 5% level. The other methods did not show statistically significant differences.
The conditional random forest and SVM lacked sensitivity to the benefit of greater specificity, whereas the conditional inference tree and random forest had acceptable sensitivity rates but poor specificity. The neural network was a more balanced model, with a greater F1-score and AUC percentage. Indeed, it achieved the greatest F1-score of all the models assessed, including logistic regression, and its specificity rate and overall accuracy were greater than those of the logistic regression. However, its AUC was slightly lower, and it predicted 20% fewer defaulted loans. All models suffered some misclassifications; of particular importance, some non-default loans were classified as defaults, which negatively impacted specificity rates. This was due to the trade-off between sensitivity and specificity that affected all the models. However, the decrease in specificity rates is more than compensated for by the greater detection of defaulted loans.
Pricing mechanism with implicit probability of default
This section evaluates the pricing of the platform by comparing the implicit and observed probabilities of default. Many factors can affect invoice prices; however, only the platform knows the extent to which each is considered. The Discussion section revisits this debate. The price paid by the seller is assumed to exclusively represent the borrower’s probability of default, thus the valuation of an invoice is based on the expected payoff and can be expressed as

$$P = \left( {1 - PD} \right) \times C \times \left( {1 + i} \right) + PD \times r \times C$$
where \({\text{P}}\) is the price, \({\text{i}}\) is the interest rate, \({\text{r}}\) is the recovery rate, \({\text{C}}\) is the total advance amount of the invoice, and \({\text{PD}}\) is the probability of default.
By algebraic manipulation, the implicit probability of default in the price is estimated as

$$PD = \frac{{C\left( {1 + i} \right) - P}}{{C\left( {1 + i - r} \right)}}$$
This is the borrower’s probability of default, intrinsic to the price charged to the seller.
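Assuming the expected-payoff identity \(P = (1 - PD) \cdot C \cdot (1 + i) + PD \cdot r \cdot C\), the implicit probability of default can be recovered by inverting it, as sketched below. The numbers are illustrative, not figures from the platform.

```python
def implicit_pd(price, c, i, r):
    """Invert the assumed expected-payoff valuation
    P = (1 - PD) * C * (1 + i) + PD * r * C
    to recover the probability of default implicit in the price."""
    return (c * (1 + i) - price) / (c * ((1 + i) - r))

# Illustrative values: 100 advanced at 10% interest with a 40% recovery
# rate, and an invoice valued at 104.
pd_hat = implicit_pd(price=104.0, c=100.0, i=0.10, r=0.40)  # 6/70 ~ 0.086
```

As a sanity check, a price equal to the full repayment \(C(1 + i)\) implies a zero implicit probability of default.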
Table 8 compares the implicit and observed probabilities of default. The implicit default shows a slight uptrend and is consistently higher than the observed default for each year. This trend remains even as the observed probability of default declines in later years; notably, the implicit probability of default does not decrease in the same way. Explanations for this declining observed probability of default may include greater experience in loan selection, which may have led to a higher proportion of nonperforming loans gradually being rejected; the increase in business volume, because the higher the number of observations, the closer the default rate converges towards a lower level (law of large numbers); and the positive economic cycle experienced during this period (2011–2019). These results may indicate a low correlation between the implicit probability of default and observed default, with a Pearson correlation coefficient of 0.31. The Mann–Whitney–Wilcoxon test was used to compare the two medians, given that the samples did not follow a normal distribution and were not paired. The medians, \(\theta_{1}\) and \(\theta_{2}\), of the two samples were compared as follows:

$$H_{0} :\theta_{1} = \theta_{2} \quad vs.\quad H_{1} :\theta_{1} \ne \theta_{2}$$
The statistics were given by:

$$U_{1} = R_{1} - \frac{{n_{1} \left( {n_{1} + 1} \right)}}{2},\quad U_{2} = R_{2} - \frac{{n_{2} \left( {n_{2} + 1} \right)}}{2},\quad U = \min \left( {U_{1} ,U_{2} } \right)$$

where \({\text{n}}_{1}\) and \({\text{n}}_{2}\) are the sample sizes; \({\text{R}}_{1}\) and \({\text{R}}_{2}\) are the rank sums; and \({\text{U}}_{1}\) and \({\text{U}}_{2}\) are the statistics of samples 1 and 2, respectively.
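The rank-sum computation can be sketched as follows (simplified to assume no tied values); the two samples are hypothetical default-rate series, not the paper's data.

```python
def mann_whitney_u(sample1, sample2):
    """U statistics from rank sums: U_g = R_g - n_g * (n_g + 1) / 2.
    Simplified: assumes no tied values across the pooled data."""
    pooled = sorted(sample1 + sample2)
    rank = {v: pos + 1 for pos, v in enumerate(pooled)}  # rank 1 = smallest
    r1 = sum(rank[v] for v in sample1)
    r2 = sum(rank[v] for v in sample2)
    n1, n2 = len(sample1), len(sample2)
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = r2 - n2 * (n2 + 1) / 2
    assert u1 + u2 == n1 * n2        # identity linking the two statistics
    return u1, u2

observed = [0.02, 0.03, 0.05, 0.04]  # hypothetical observed default rates
implicit = [0.09, 0.12, 0.11, 0.15]  # hypothetical implicit default rates
u1, u2 = mann_whitney_u(observed, implicit)
```

Because every hypothetical observed rate lies below every implicit rate, `u1` is 0 and `u2` equals \(n_{1} n_{2}\), the most extreme separation the statistic can register.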
The Mann–Whitney–Wilcoxon test reported a p-value of 4.1e−05, which led us to reject the null hypothesis of equal medians between samples at the 1% significance level. Computing the mean difference between the two probabilities of default, the implicit default was four times higher than the observed default. There is evidence to consider this difference significant, especially if the platform did not consider its own probability of default or the probability of default of the seller, or if some of these transactions were guaranteed by insurance. Given that the borrower’s probability of default is the most relevant factor to consider when pricing an invoice and the implicit default did not decrease in the same fashion as the observed default, these results may indicate that sellers pay a premium not realistically adjusted to the real probability of default. An explanation for paying such a premium may be that many of these borrowers must deal with credit rationing (Cowling 2010; Lee et al. 2015) in traditional credit markets, either by not having access to credit or having it at an even higher interest rate. In addition, the high flexibility of this lending method, which can remotely finance an invoice within a few hours, could explain why a higher price is charged. Hence, the platform may take advantage of this situation by charging a premium based on market conditions rather than on operational risk, leading to an imbalance between price and actual risk.
Discussion of results
Research on the prediction of default for invoice-trading platforms is a novel topic (Dorfleitner et al. 2017), which is investigated in this study. However, unlike previous papers, the current study combined artificial intelligence techniques and sampling to enhance the accuracy and effectiveness of predictions. The current results show that these techniques can effectively improve the detection of defaults by up to 56% while maintaining levels of specificity above 70%. Not addressing the problem of an imbalanced dataset results in models with very high overall accuracy and specificity rates but unacceptable sensitivity rates. Therefore, applying sampling techniques increased the robustness of our models, since the artificial data aided the learning process, which in turn improved the reaction to unseen data. Once intertemporal validation was ensured, another layer of robustness was introduced by using predictions made with a 1-year lag. Feature selection in the preprocessing stage allowed for the construction of models with relevant factors determining default, creating less complex models that are easier to interpret, less prone to overfitting, and more robust. Of the models evaluated, the neural network was the best in terms of the F1-score, whereas the logistic regression had a similar score with 20% more sensitivity, which could be more appropriate from an investor’s perspective. Bearing this in mind, limited disclosure on the Kriya platform prevents the advantages of more advanced techniques from being exploited and asymmetric information from being reduced. This negatively affects the performance of any model trying to predict default probability and limits the possibility of investors correctly assessing credit risk. Disclosing the borrower’s and seller’s financial information on the invoice, as well as information on the protection provider, is essential to permit a complete assessment of transaction risk.
Furthermore, the current study of the platform’s credit pricing with an implicit probability of default indicates a consistent discrepancy between the observed and implicit probabilities of default. The price charged to the sellers represented a probability of default four times higher than the observed probability of default, and the differences between the two were statistically significant. Consequently, this study assessed whether the invoice price was realistically adjusted to the borrower’s probability of default, or whether the platform was overcharging sellers for other reasons. Notably, standard valuation models for invoice lending usually consider the borrower’s probability of default as the only relevant event (Nava et al. 2019). This is a simplification of reality, since the price the seller pays reflects not only that but also the probability of default of the seller (Nava et al. 2019), the value of a flexible method (where an invoice can be sold in just a few hours; Dziuba 2018), and situations wherein sellers with lower creditworthiness may encounter credit rationing or higher interest rates in traditional commercial credit (Li 2016). The price also reflects the operational risk of this form of financing (Liang et al. 2022). Factors contributing to the apparent discrepancy may include risks associated with invoice factoring such as fraud, the authenticity of sold invoices (fraudulent activities from debtors), and/or operational issues impacting the platform's reliability and investors’ confidence. This uncertainty may prompt investors to demand higher interest rates to offset the perceived risks. Other factors, such as the economic cycle or fierce competition between platforms, may also increase a platform’s probability of default (Yoon et al. 2019), thereby influencing the pricing mechanism.
This study had two practical implications. First, although the neural network model exhibited a high F1-score, logistic regression demonstrated comparable overall performance with a 20% higher sensitivity. From an investor’s standpoint, prioritizing sensitivity, which entails identifying true-positive cases, may be more appropriate because it mitigates the risk of overlooking potential defaults. When selecting a predictive model, investors should carefully evaluate the trade-off between specificity and sensitivity. Second, the invoice-trading P2B market may appeal to small businesses that have limited collateral or lack a credit history and that have trouble being granted traditional bank loans. However, during the examined period, the discrepancy between the observed and implicit probabilities may indicate an inefficiency in the market, likely due to limited competition, resulting in small and medium enterprises (SMEs) being financed at higher interest rates than what their actual risk profile suggests. Investors should make informed decisions regarding their investments in online invoice trading platforms, remaining aware of the disparity between the observed and implicit probabilities of default in the pricing mechanism of these platforms.
Conclusions
This study developed several machine-learning models to predict default in the invoice-trading P2B market. We used publicly available data from the crowdlending platform Kriya and estimated several techniques such as logistic regression, conditional inference trees, random forests, support vector machines, and neural networks. The current findings demonstrate that implementing these techniques leads to a substantial enhancement in the detection of defaults, achieving an improvement of up to 56%. Remarkably, these improvements were achieved while maintaining specificity levels above 70%. Furthermore, these results were obtained despite the limited information provided by the platform. Neither the borrower’s nor the invoice seller’s financial information could be included, which is a limitation of this study. Solvency ratios, insurance, and collateral information are variables that can significantly increase model performance; hence, greater information disclosure, in line with what peer-to-peer lending platforms offer, is desirable to increase transparency and correctly assess risk.
In addition, this study examined the platform’s credit pricing using the implicit probability of default. We discovered a consistent disparity between the observed and implicit default probabilities. Notably, the price charged to sellers reflects a significantly higher probability of default than the observed probability. Several factors could explain this difference, in addition to the borrowers’ risk of default, such as inclusion in the pricing mechanism of the platform’s and seller’s probabilities of default, the flexibility of this form of financing, or credit rationing. However, if the borrower’s probability of default is considered the main factor when pricing an invoice, this may indicate that sellers pay a premium that does not accurately reflect the actual risk involved.
The current study focused on the problem of estimating defaults in the P2B invoice market, which is considered short-term lending. This approach can also be applied to medium- and long-term lending in P2B. Other methodologies proven effective in related studies include LightGBM with and without swarming techniques (Ko et al. 2022; Muslim et al. 2022) and learning-to-rank methodologies. Future studies in this area should shift their focus from the borrower’s probability of default to the platform’s risk of default, as in Ahelegbey et al. (2019), Chen et al. (2022), and Liang et al. (2022). In addition, niche segments in well-studied P2P markets, such as equity and real estate, can be explored.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Notes
More information at: https://kriya.co.
Jr. Supply Chain Finance and Purchase Order.
Precision or Positive predictive value (PPV) is the number of true positive results divided by all positive results (including misclassifications).
Sensitivity or recall is the proportion of positives that are correctly classified (true positive rate).
Specificity is the proportion of negatives that are correctly classified (true negative rate).
The AUC is a measure of a model's ability to distinguish between classes, where a higher score indicates better predictive performance in classification.
R’s mlr package.
The number of observations with a majority for a response class that becomes isolated.
Abbreviations
ACO: Ant colony optimization
AGY: Annualized gross yield
AI: Artificial intelligence
AR: Advance rate
AUC: Area under the curve
CBR: Case-based reasoning
ECAI: European External Credit Assessment Institution
FICO: Fair Isaac Corporation
FN: False negative
FP: False positive
GDP: Gross domestic product
IOM: Instance-based model
IPD: Implicit probability of default
IPI: Import price index
IRR: Internal rate of return
KNN: K nearest neighbors
ML: Machine learning
NPLs: Non-performing loans
OPD: Observed probability of default
P2B: Peer-to-business
P2P: Peer-to-peer
PD: Probability of default
PIT: Point-in-time
RBM: Restricted Boltzmann machine
SMEs: Small and medium-sized enterprises
SMOTE: Synthetic minority oversampling technique
TFV: Total face value
TN: True negative
TP: True positive
UK: United Kingdom
US: United States of America
References
Ahelegbey D, Giudici P, Hadji-Misheva B (2019) Latent factor models for credit scoring in P2P systems. Physica A 522:112–121. https://doi.org/10.1016/j.physa.2019.01.130
Avgeri E, Psillaki M (2023) Factors determining default in P2P lending. J Econ Stud. https://doi.org/10.1108/JES0720230376
Bastani K, Asgari E, Namavari H (2019) Wide and deep learning for peertopeer lending. Expert Syst Appl 134:209–224. https://doi.org/10.1016/j.eswa.2019.05.042
Bischl B, Lang M, Kotthoff L, Schiffner J, Richter J, Studerus E, Casalicchio G, Jones Z (2016) mlr: Machine Learning in R. J Mach Learn Res 17(170):1–5
Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140. https://doi.org/10.1007/bf00058655
Carmichael D (2014) Modeling default for peertopeer loans. Soc Sci Res Netw. https://doi.org/10.2139/ssrn.2529240
Chawla N, Bowyer K, Hall L, Kegelmeyer W (2002) SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 16:321–357. https://doi.org/10.1613/jair.953
Chen X, Chong Z, Giudici P, Huang B (2022) Network centrality effects in peer to peer lending. Physica A Stat Mech Appl. https://doi.org/10.1016/j.physa.2022.127546
Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297. https://doi.org/10.1023/a:1022627411411
Cowling M (2010) The role of loan guarantee schemes in alleviating credit rationing in the UK. J Financ Stab 6(1):36–44. https://doi.org/10.1016/j.jfs.2009.05.007
Cumming D, Zhang Y (2016) Are crowdfunding platforms active and effective intermediaries? Soc Sci Res Netw. https://doi.org/10.2139/ssrn.2882026
DeLong E, DeLong D, Clarke-Pearson D (1988) Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. https://doi.org/10.2307/2531595
Dorfleitner G, Rad J, Weber M (2017) Pricing in the online invoice trading market: first empirical evidence. Econ Lett 161:56–61. https://doi.org/10.1016/j.econlet.2017.09.020
Dziuba D (2018) Crowdfunding platforms in invoice trading as alternative financial markets. Roczniki Kolegium Analiz Ekonomicznych/Szkoła Główna Handlowa 49:455–464
Elliott J, Curet O (1999) Invoice discounting: a strategic analysis using case-based reasoning. Appl Innov Expert Syst VI. https://doi.org/10.1007/9781447105756_15
Faraway J (2016) Extending the linear model with R. In Chapman and Hall/CRC eBooks. https://doi.org/10.1201/9781315382722
Fu Y (2017) Combination of random forests and neural networks in social lending. J Financ Risk Manag 06(04):418–426. https://doi.org/10.4236/jfrm.2017.64030
Gower J (1971) A general coefficient of similarity and some of its properties. Biometrics 27(4):857. https://doi.org/10.2307/2528823
Grosan C, Abraham A (2005) Artificial neural networks. Intell Syst Ref Lib 17:281–323. https://doi.org/10.1007/9783642210044_12
Guo Y, Zhou W, Luo C, Liu C, Xiong H (2016) Instancebased credit risk assessment for investment decisions in P2P lending. Eur J Oper Res 249(2):417–426. https://doi.org/10.1016/j.ejor.2015.05.050
Hothorn T, Hornik K, Zeileis A (2006) Unbiased recursive partitioning: a conditional inference framework. J Comput Graph Stat 15(3):651–674. https://doi.org/10.1198/106186006x133933
Kim J, Cho S (2018) Deep dense convolutional networks for repayment prediction in peer-to-peer lending. Adv Intell Syst Comput 771:134–144. https://doi.org/10.1007/9783319941202_13
Ko P, Lin P, Do H, Huang Y (2022) P2P lending default prediction based on AI and statistical models. Entropy. https://doi.org/10.3390/e24060801
Kotsiantis S, Pintelas P (2004) Mixture of expert agents for handling imbalanced data sets. Ann Math Comput Teleinform 1:46–55
Lau A (1987) A fivestate financial distress prediction model. J Account Res 25(1):127. https://doi.org/10.2307/2491262
Lee N, Sameen H, Cowling M (2015) Access to finance for innovative SMEs since the financial crisis. Res Policy 44(2):370–380. https://doi.org/10.1016/j.respol.2014.09.008
Li J (2016) Going online? The motive of firms to borrow from the crowd. https://www.federalreserve.gov/conferences/files/goingonlinethemotiveoffirmstoborrowfromthecrowd.pdf
Li J, Mirza N, Rahat B, Xiong D (2020) Machine learning and credit ratings prediction in the age of fourth industrial revolution. Technol Forecast Soc Chang 161:120309. https://doi.org/10.1016/j.techfore.2020.120309
Li Z, Yao X, Wen Q, Yang W (2016) Prepayment and default of consumer loans in online lending. Soc Sci Res Netw. https://doi.org/10.2139/ssrn.2740858
Liang K, Zhang C, Jiang C (2022) Analyzing default risk among P2P platforms based on the LASSTACK method by considering multidimensional signals under specific economic contexts. Electron Commer Res 22(1):77–111. https://doi.org/10.1007/s10660021095059
Liu J, Li X, Wang S (2020) What have we learnt from 10 years of fintech research? A scientometric analysis. Technol Forecast Soc Chang 155:120022. https://doi.org/10.1016/j.techfore.2020.120022
Malekipirbazari M, Aksakalli V (2015) Risk assessment in social lending via random forests. Expert Syst Appl 42(10):4621–4631. https://doi.org/10.1016/j.eswa.2015.02.001
Möllenkamp N (2017) Determinants of loan performance in P2P lending. In: 9th IBA Bachelor Thesis conference, 1–4. https://journals.open.tudelft.nl/sure/article/view/2551/2808
Muslim M, Dasril Y, Saman M, Ifriza Y (2022) An improved light gradient boosting machine algorithm based on swarm algorithms for predicting loan default of peer-to-peer lending. Indonesian J Electr Eng Comput Sci 28(2):1002–1011
Nava I, Cuccio D, Giada L, Nordio C (2019) A simple factoring pricing model. Soc Sci Res Netw. https://doi.org/10.2139/ssrn.3428749
Nigmonov A, Shams S, Alam K (2022) Macroeconomic determinants of loan defaults: evidence from the US peer-to-peer lending market. Res Int Bus Finance. https://doi.org/10.1016/j.ribaf.2021.101516
Osborne J (2008) Best practices in quantitative methods. SAGE Publications, Inc. eBooks. https://doi.org/10.4135/9781412995627
Pang S, Hou X, Xia L (2021) Borrowers’ credit quality scoring model and applications, with default discriminant analysis based on the extreme learning machine. Technol Forecast Soc Chang 165:120462. https://doi.org/10.1016/j.techfore.2020.120462
Perko I (2017) Behaviour-based short-term invoice probability of default evaluation. Eur J Oper Res 257(3):1045–1054. https://doi.org/10.1016/j.ejor.2016.08.039
Serrano-Cinca C, Gutiérrez-Nieto B (2016) The use of profit scoring as an alternative to credit scoring systems in peer-to-peer (P2P) lending. Decis Support Syst 89:113–122. https://doi.org/10.1016/j.dss.2016.06.014
Serrano-Cinca C, Gutiérrez-Nieto B, López-Palacios L (2015) Determinants of default in P2P lending. PLoS ONE 10(10):e0139427. https://doi.org/10.1371/journal.pone.0139427
Stiglitz J, Weiss A (1981) Credit rationing in markets with imperfect information. Am Econ Rev 71(3):393–410
Strobl C, Malley J, Tutz G (2009) An introduction to recursive partitioning: rationale, application, and characteristics of classification and regression trees, bagging, and random forests. Psychol Methods 14(4):323–348. https://doi.org/10.1037/a0016973
Turiel J, Aste T (2020) Peer-to-peer loan acceptance and default prediction with artificial intelligence. R Soc Open Sci 7(6):191649. https://doi.org/10.1098/rsos.191649
Yoon Y, Li Y, Feng Y (2019) Factors affecting platform default risk in online peer-to-peer (P2P) lending business: an empirical study using Chinese online P2P platform data. Electron Commer Res 19:131–158. https://doi.org/10.1007/s10660-018-9291-1
Zhang J, Thomas L (2015) The effect of introducing economic variables into credit scorecards: an example from invoice discounting. J Risk Model Validation 9(1):57–78
Zhu L, Qiu D, Ergu D, Ying C, Liu K (2019) A study on predicting loan default based on the random forest algorithm. Procedia Comput Sci 162:503–513. https://doi.org/10.1016/j.procs.2019.12.017
Ziegler T, Shneor R, Garvey K, Wenzlaff K, Yerolemou N, Hao R, Zhang B (2017) Expanding horizons: the 3rd European alternative finance industry report. Soc Sci Res Netw. https://doi.org/10.2139/ssrn.3106911
Acknowledgements
Not applicable.
Funding
We acknowledge the funding provided by the Galician Regional Government [ED431C 2020/18], co-funded by the European Regional Development Fund (ERDF/FEDER), for the period 2020–2023.
Author information
Contributions
CMC contributed to the development of the models, the statistical programming, the analysis of the data, and the writing of the paper. LOG and PDS contributed to the design of the work and the analysis of the data. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Corrales, C.M., González, L.A.O. & Santomil, P.D. Estimation of default and pricing for invoice trading (P2B) on crowdlending platforms. Financ Innov 10, 109 (2024). https://doi.org/10.1186/s40854-024-00632-4