claim_frequency

GLM, Neural Network and Gradient Boosting for Insurance Pricing, Part 1: Claim Frequency

What are the benefits of neural networks for motor tariffing? To answer this question, claim frequencies are modeled and predicted for a large French motor third-party liability insurance portfolio. In a first, classical approach, generalized linear models (GLMs) as well as their mixed-model cousins (GLMMs) are used. This approach is then extended to deep artificial neural networks, and the novel combined actuarial neural network (CANN) is used in the implementation of Schelldorfer and Wüthrich (2019). Subsequently, decision-tree-based model ensembles (eXtreme Gradient Boosting, "XGBoost") are applied and the resulting models are examined. In addition, questions of tariff structure and model stability are considered, and cross-validation is carried out. It is shown that deep neural networks as well as decision-tree-based model ensembles can be used at least to improve the classical models. Furthermore, the XGBoost models prove to be the superior forecasting models when the tariff system is taken into account.
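As a minimal sketch of the classical baseline, a Poisson GLM for claim frequency can be fitted with exposure weighting. Everything here is an assumption for illustration and not taken from this repository: the feature names (`driv_age`, `veh_power`), the synthetic portfolio, and the use of scikit-learn's `PoissonRegressor` (fitting the observed frequency `claims / exposure` with `sample_weight=exposure`, which is equivalent to a log-exposure offset in a Poisson GLM).

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Tiny synthetic portfolio (illustrative only, not the French MTPL data)
rng = np.random.default_rng(0)
n = 5000
driv_age = rng.integers(18, 90, n).astype(float)
veh_power = rng.integers(4, 15, n).astype(float)
exposure = rng.uniform(0.1, 1.0, n)  # policy years at risk

# True annual claim frequency depends on vehicle power in this toy example
lam = np.exp(-2.5 + 0.03 * veh_power)
claims = rng.poisson(lam * exposure)

# Model the per-year frequency; weighting by exposure reproduces the
# classical Poisson GLM with a log(exposure) offset
X = np.column_stack([driv_age, veh_power])
freq = claims / exposure
glm = PoissonRegressor(alpha=1e-4, max_iter=1000).fit(
    X, freq, sample_weight=exposure
)
pred = glm.predict(X)  # predicted claims per unit of exposure
```

The same frequency-with-exposure-weight setup carries over to the boosted-tree models, e.g. via XGBoost's Poisson objective, which is one reason the model families in the study are directly comparable.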


The German Association of Actuaries (Deutsche Aktuarvereinigung e.V., DAV) is the professional representation of all actuaries in Germany. It was founded in 1993 and today has more than 5,400 members. More than 700 members serve on a voluntary basis in thirteen committees and in over 60 working groups.

These repositories have been created by the committees and working groups and serve as an aid for our members and other interested persons, supporting their work with machine learning methods and data science topics in an actuarial context.

Please note that the repositories provided on GitHub are published by the DAV. The content of linked websites is the sole responsibility of their operators. The DAV is not responsible for the code and data hosted on Kaggle.com and referred to in the repositories; these reflect the individual opinions of the respective Kaggle users.