Litmeyer, Marie-Louise; Hennemann, Stefan
2024-11-29 (2024)
https://jlupub.ub.uni-giessen.de/handle/jlupub/19976
https://doi.org/10.22029/jlupub-19331

Abstract: In the context of regional science and migration studies, gravity and radiation models are typically used to estimate human spatial mobility of all kinds. These formal models are incorporated into regression models along with covariates to better represent region-specific aspects. Often, the relationships between dependent and independent variables are non-linear and subject to complex spatial interactions and multicollinearity. To address some of these model-related obstacles and to arrive at better predictions, we introduce the machine-learning algorithm XGBoost to the estimation of spatial interactions and provide useful statistics and visual representations for model evaluation and for the evaluation and interpretation of the independent variables. The suggested methods are applied to the spatial mobility of high-school graduates enrolling in higher-education institutions in Germany at the county level. We show that machine-learning techniques can deliver explainable results that compare to traditional regression modeling. In addition to typically high model fits, variable-based indicators such as Shapley Additive Explanations (SHAP) values provide significant additional information on the differentiated and non-linear effects of the variable values. For instance, we provide evidence that the initial study-location choice is not related to the quality of local labor markets in general, as there are both strong positive and strong negative effects of local academic employment rates on the migration decision.
When controlling for about 28 covariates, the attractiveness of the study location itself is the most important single factor of influence, followed by the classical distance-related variables travel time (gravitation) and regional opportunities (radiation). We show that machine-learning methods can be transparent, interpretable, and explainable when employed with adequate domain knowledge and flanked by additional calculations and visualizations related to the model evaluation.

Language: en
License: Attribution 4.0 International (CC BY 4.0)
DDC: 910
Title: Forecasting first-year student mobility using explainable machine learning techniques