The Shapiro-Wilk test has a null hypothesis that the data are drawn from a normally distributed population. The alternative hypothesis is that they are drawn from a population with some other distribution.
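As a minimal sketch (assuming SciPy is available; the random sample below is purely illustrative, not your data), the test reports the W statistic together with a p-value:

```python
import numpy as np
from scipy import stats

# Illustrative data only: 100 draws from a standard normal distribution.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=100)

# Shapiro-Wilk returns the W statistic and the p-value for the
# null hypothesis that the sample comes from a normal population.
w_stat, p_value = stats.shapiro(sample)
print(f"W = {w_stat:.4f}, p-value = {p_value:.4f}")
```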
Whenever you run a statistical test and check a metric such as the p-value, the starting point is deciding on a threshold decision boundary: if the outcome crosses it, the null hypothesis is rejected; otherwise you fail to reject it.
For the p-value, the probability of seeing a result at least this extreme when the null hypothesis is true, the conventional threshold is $\alpha = 0.05$. (I call this the laugh test.) Where greater confidence is needed, that might be dropped to 0.01 or even 0.001. In any event, after setting the threshold it's bad form to change it in order to claim that the result is significant (statistics' most unfortunate turn of phrase).
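A sketch of that decision rule, with $\alpha$ fixed before the p-value is inspected (the helper name `decide` and the example p-values are hypothetical):

```python
ALPHA = 0.05  # chosen in advance; drop to 0.01 or 0.001 when more confidence is required

def decide(p_value: float, alpha: float = ALPHA) -> str:
    """Compare a p-value against a pre-set threshold; never adjust alpha afterwards."""
    if p_value < alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(0.03))  # reject the null hypothesis
print(decide(0.20))  # fail to reject the null hypothesis
```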
In your example, it makes sense to look for a p-value < 0.001789 if you had previously selected $\alpha = 0.001$ or less, but not otherwise. It's been said that

> If you torture the data long enough, it will confess to anything.

and contortions to find the lowest possible p-value are termed p-hacking, a blot upon the escutcheon.