From: Detecting DeFi securities violations from token smart contract code
Study | Method | Features | Performance |
---|---|---|---|
Chen et al. (2021a) | Semantically-aware classifier that includes “a heuristic-guided symbolic execution technique” | Code-based | Precision: 100% Recall: 100% F1: 100% |
Fan et al. (2021) | “Anti-leakage” model based on ordered boosting | Code-based | Precision: 95% Recall: 96% F1: 96% |
Hu and Xu (2021) | Deep learning model | Code-based | Precision: 96.3% Recall: 97.8% F1: 97.1% |
Hu et al. (2021) | Long short-term memory (LSTM) neural network | Transaction-based | Precision: 88.2%–96.9%, Recall: 81.6%–97.7%, F1: 85%–96.7%, varying by contract type |
Wang et al. (2021) | Long short-term memory (LSTM) neural network | Code- and transaction-based | Precision: 97% Recall: 96% F1: 96% |
Liu et al. (2022) | Heterogeneous graph transformer networks | Code- and transaction-based | F1: 78%–82% (fraudulent contracts) and 87%–89% (normal contracts), varying by classification task |
Zhang et al. (2021) | LightGBM | Code- and transaction-based | Precision: 96.7% Recall: 96.7% F1: 96.7% |
Chen et al. (2018) | XGBoost | Code- and transaction-based | Precision: 94% Recall: 81% F1: 86% |
Jung et al. (2019) | Decision trees, random forest, stochastic gradient descent | Code- and transaction-based | Precision: 90%–98%, Recall: 80%–96%, F1: 84%–96%, varying by model |
Chen et al. (2019) | Random forest | Code- and transaction-based | Precision: 64%–95%, Recall: 20%–73%, F1: 30%–82%, varying by feature set |
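Several of the studies above (e.g., Chen et al. 2019; Jung et al. 2019) follow the same general pipeline: extract numeric code- and transaction-based features per contract, train a tree-ensemble classifier, and report precision/recall/F1. The sketch below illustrates that pipeline only; the feature matrix is synthetic and the setup is not any study's actual implementation.

```python
# Illustrative sketch only: a random-forest fraud classifier on hypothetical
# code-/transaction-based features. Data and feature semantics are invented;
# this mirrors the general setup of the surveyed studies, not their code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Stand-in features: e.g., opcode frequencies (code-based) and
# transaction-volume statistics (transaction-based).
X = rng.normal(size=(n, 6))
# Synthetic labels with a planted signal so the classifier has something to learn.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
p, r, f1, _ = precision_recall_fscore_support(
    y_te, clf.predict(X_te), average="binary"
)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

Reported metrics in the table were obtained on real labeled contract datasets; on synthetic data like this the absolute numbers are meaningless and serve only to show how the three metrics are computed.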