Predictions from machine learning algorithms often support decision-making in industrial processes. However, complex models can be difficult to interpret, obscuring how their predictions are produced. Understanding how a classifier reaches its recommendations helps human experts grasp the underlying phenomenon and develop better data-driven solutions. This study therefore uses Shapley additive explanations (SHAP) to explain the classifier's predictions and to select the most relevant features. The experiments employ extreme gradient boosting to evaluate temporal, spectral, and wavelet features extracted from three-phase induction motor current signals. The proposed approach reduces the number of attributes without degrading performance, reveals how each feature influences the model over a wide range of voltage unbalances and load torques, and detects early inter-turn short circuits with a severity of 1%. The results show that combining the intelligent model with Shapley explanations improves stator winding fault diagnosis under these challenging operating conditions.
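
As a rough illustration of the workflow summarized above, the sketch below shows how SHAP values from a tree explainer can rank features of an XGBoost classifier and prune the attribute set. It is not the study's actual pipeline: the feature names, the synthetic data, and the top-k cutoff are assumptions made only for the example.

```python
# Minimal sketch of SHAP-based feature selection for an XGBoost fault classifier.
# Feature names, data, and the top-k cutoff are illustrative placeholders.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Placeholder matrix standing in for temporal, spectral, and wavelet features
# extracted from three-phase motor current signals (hypothetical names).
feature_names = ["rms", "kurtosis", "skewness", "spectral_energy",
                 "thd", "wavelet_d1", "wavelet_d2", "wavelet_d3"]
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)  # 0 = healthy, 1 = inter-turn short circuit

# Train the gradient-boosting classifier on the full feature set.
model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

# Explain predictions with TreeSHAP and rank features by mean |SHAP| value.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # shape: (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)
ranking = np.argsort(importance)[::-1]

# Keep only the top-k attributes (k chosen arbitrarily here) and retrain.
k = 4
selected = [feature_names[i] for i in ranking[:k]]
model_reduced = XGBClassifier(n_estimators=200, max_depth=4)
model_reduced.fit(X[:, ranking[:k]], y)
print("Selected features:", selected)
```

In this sketch, the mean absolute SHAP value serves both purposes mentioned in the abstract: it explains how each feature pushes individual predictions toward the healthy or faulty class, and it provides a ranking from which a smaller attribute set can be retained without retraining on uninformative inputs.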