IRTG 1792 DP 2020-014
Cross-Fitting and Averaging for Machine Learning Estimation of Heterogeneous
Treatment Effects
Daniel Jacob
Abstract:
We investigate the finite sample performance of sample splitting, cross-fitting
and averaging for the estimation of the conditional average treatment effect.
Recently proposed methods, so-called meta-learners, make use of machine
learning to estimate different nuisance functions and hence allow for fewer
restrictions on the underlying structure of the data. To limit the potential
overfitting bias that may result from using machine learning methods,
cross-fitting estimators have been proposed. These involve splitting the data
into different folds to reduce bias and averaging over the folds to restore
efficiency. To the best of our knowledge, it is not yet clear exactly how the
data should be split and averaged. We conduct a Monte Carlo study with
different data generating processes and consider twelve estimators that vary
in their sample-splitting, cross-fitting and averaging procedures. We
investigate the performance of each estimator independently for four different
meta-learners: the doubly-robust learner, the R-learner, the T-learner and the
X-learner. We find that the performance of all meta-learners depends heavily
on the splitting and averaging procedure. Among the sample-splitting
estimators, the best performance in terms of mean squared error (MSE) is
achieved by applying cross-fitting and taking the median over multiple
sample-splitting iterations. Some meta-learners exhibit high variance when the
lasso is included in the set of ML methods. Excluding the lasso decreases the
variance and leads to robust and at least competitive results.
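
As a rough illustration of the cross-fitting-plus-median procedure described
above, the sketch below shows a T-learner-style CATE estimate with K-fold
cross-fitting, aggregated by taking the point-wise median over several random
sample-splitting seeds. This is not the paper's implementation: the choice of
the T-learner, random forests as the ML method, two folds, and five splitting
seeds are assumptions made purely for the example.

```python
# Illustrative sketch only -- not the paper's code. T-learner, random forests,
# K=2 folds and S=5 splitting seeds are assumptions for this example.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def t_learner_crossfit(X, y, d, n_folds=2, seed=0):
    """One cross-fitted T-learner CATE estimate: outcome models are fit on the
    training folds and the treatment effect is predicted on the held-out fold."""
    tau_hat = np.empty(len(y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        mu1 = RandomForestRegressor(random_state=seed).fit(
            X[train][d[train] == 1], y[train][d[train] == 1])
        mu0 = RandomForestRegressor(random_state=seed).fit(
            X[train][d[train] == 0], y[train][d[train] == 0])
        tau_hat[test] = mu1.predict(X[test]) - mu0.predict(X[test])
    return tau_hat

def median_over_splits(X, y, d, n_splits=5, n_folds=2):
    """Repeat the cross-fitting over several random sample splits and take the
    point-wise median, mirroring the best-performing procedure described above."""
    estimates = [t_learner_crossfit(X, y, d, n_folds, seed=s)
                 for s in range(n_splits)]
    return np.median(np.vstack(estimates), axis=0)

# Toy usage on simulated data where the true CATE equals the first covariate
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))
d = rng.binomial(1, 0.5, size=2000)
y = X[:, 0] * d + X[:, 1] + rng.normal(size=2000)
cate = median_over_splits(X, y, d)
print("MSE against true CATE:", np.mean((cate - X[:, 0]) ** 2))
```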
Keywords:
causal inference, sample splitting, cross-fitting, sample averaging, machine
learning, simulation study
JEL Classification:
C01, C14, C31, C63