Can I refit my training data with the subset of features identified by variable importance?

Is it bad practice to refit a model on the training data using a subset of features identified by variable importance? I tried this and got improved accuracy on my training data and similar accuracy on the test data, but I wasn't sure whether this is good practice.

One always has to guard against outcome motivation: modeling to achieve some desired result. p-hacking is the poster child for this. Feature selection, however, differs in principle. It's more like the choice among statistical tests that address the same question in different ways (such as with or without a normality assumption). Also, using a subset of features reduces the likelihood of over-fitting.

The ultimate question is a judgment call:

What information am I losing by dropping this feature?
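
To make the procedure concrete, here is a minimal sketch of importance-based feature selection done the safe way: rank features using the training split only, refit on the subset, and compare on the untouched test set. It assumes a scikit-learn random forest; the synthetic dataset, the `top_k` cutoff, and the model choice are all illustrative, not a prescription.

```python
# Sketch: importance-based feature selection with a held-out test set.
# Assumes scikit-learn; dataset and top_k cutoff are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit on the full feature set and rank features by importance,
# using the training split only so no test information leaks in.
full_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
top_k = 10  # illustrative cutoff; in practice tune via cross-validation
keep = np.argsort(full_model.feature_importances_)[::-1][:top_k]

# Refit on the selected subset and compare on the held-out test set.
subset_model = RandomForestClassifier(random_state=0).fit(
    X_train[:, keep], y_train)
print("full-set test accuracy:",
      accuracy_score(y_test, full_model.predict(X_test)))
print("subset test accuracy:  ",
      accuracy_score(y_test, subset_model.predict(X_test[:, keep])))
```

The key design point is that the test set plays no role in choosing the subset; it is only used once, at the end, to check that the reduced model generalizes as well as the full one.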

