How do you deal with mixed/imperfect/terrible data?

I'm sure there's no "correct" answer to this, but I'd like to hear opinions from some of you with more experience in data science and statistics.

We've done a study where the resulting data fit the description in the title perfectly. This is of course partly due to bad planning and unforeseen problems, but also due to the intrinsic nature of what we're studying. Our main problem is that we have two populations: one got a single test, the other got two tests, and we'd like to include both in the same analysis. Another problem is that we have no true reference standard; rather, we're just comparing proportions of totals.

Reading articles from similar studies, they might mention things like "used a more conservative p-value of 0.01 to account for repeated measures", GEE, bootstraps, glmer, or McNemar's test, or they've simply ignored the issue altogether. My field is radiology, so these are just meant as examples of a more general problem.
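To make one of those name-dropped methods concrete: McNemar's test applies when the same subjects got both tests (so, in a setup like yours, only to the arm that received two tests), and it only needs the counts of discordant pairs. A minimal stdlib-only sketch, with made-up counts purely for illustration:

```python
import math

def mcnemar(b, c):
    """McNemar's test for paired binary outcomes.

    b = pairs where test 1 is positive and test 2 is negative,
    c = pairs where test 1 is negative and test 2 is positive.
    Returns (chi-square statistic with continuity correction, p-value).
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: 15 discordant pairs one way, 5 the other.
chi2, p = mcnemar(15, 5)  # chi2 = 4.05, p ≈ 0.044
```

The point is that the test ignores the concordant pairs entirely; it asks only whether the two tests disagree symmetrically. For small discordant counts an exact binomial version is usually preferred over this chi-square approximation.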

I'm not looking for specific answers to my problems, just a general discussion and reasoning around what you would do in situations with messy data like this.

  • Why do you choose one method over another?
  • What's the consequence of choosing one method over another?
  • How should the aspiring data scientist/statistician deal with this?
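Since the question mentions both bootstraps and "comparing proportions of totals", here is what the simplest version of that might look like: a percentile bootstrap interval for the difference of two proportions, in plain Python. The counts are hypothetical, and this is a sketch of the idea rather than a recommendation:

```python
import random

def bootstrap_prop_diff(x1, n1, x2, n2, n_boot=10_000, seed=0):
    """Percentile bootstrap 95% CI for the difference of two proportions.

    x1/n1 and x2/n2 are successes/totals in each population. Resampling
    Bernoulli draws with p = x/n is equivalent to resampling the binary
    observations with replacement within each group.
    """
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        p1 = sum(rng.random() < x1 / n1 for _ in range(n1)) / n1
        p2 = sum(rng.random() < x2 / n2 for _ in range(n2)) / n2
        diffs.append(p1 - p2)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Hypothetical counts: 40/100 positives vs 25/100 positives.
lo95, hi95 = bootstrap_prop_diff(40, 100, 25, 100)
```

One consequence worth noting: this simple version treats every observation as independent, so it quietly ignores exactly the repeated-measures problem in the arm that got two tests. A cluster bootstrap (resampling subjects, keeping both of a subject's tests together) would be the honest variant there.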

I'm not a real statistician, but maybe my comment can kickstart the discussion.

I can't speak to the problem in full generality, but methinks the two populations should first be compared on the one test they have in common, e.g. via ANOVA. If they are reasonably consistent at the laugh-test α = 0.05, I would look for other commonalities. If there are any, I'd build a model to test one population against the other stepwise, using training, test and validation sets if n is sufficient.

More than that your impostor saith not.