I think you're either making some mistakes, or using the wrong terminology to describe things. I'll try to explain below.
From your updated information, you have several observations on an element, and a single expected record. You want to check whether the observations are consistent with that record, but to account for sampling fluctuations, you accept observations falling in a close neighbourhood (an interval) of the expected record. Right?
This means you're testing for mean, and not for variance.
If you go through the theory of the t-test (in any book or resource you prefer), you'll find that it's the test for the normal mean problem with unknown variance. For example, you test $H_0: \mu = \mu_0$, where $\mu$ is the mean of the true population the observations come from, $\mu_0$ is a known constant, and you check whether the true mean is indeed that value or not (in a two-tailed test).
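In R, such a two-tailed one-sample t-test is a one-liner with `t.test`. Here is a minimal sketch; the data vector `obs` and the expected record `mu0` below are made-up placeholders, not your actual data.

```r
# Hypothetical data: several observations on one element
obs <- c(10.2, 9.8, 10.5, 10.1, 9.9)
mu0 <- 10   # the single expected record (known constant)

# Two-tailed one-sample t-test of H_0: mu = mu0
res <- t.test(obs, mu = mu0, alternative = "two.sided")

res$statistic   # the t statistic
res$p.value     # reject H_0 at level alpha if this falls below alpha
```

With these invented numbers the p-value is large, so there is no evidence against the expected record.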
For the time being, I assume that your observations are normal. There's no justification for this assumption in the information you provided, but without it you can't use the t-test at all.
If you end up rejecting the null hypothesis in a t-test, that means the observations do not come from a normal distribution with the prespecified mean (i.e. $\mu_0$) in the null hypothesis. It provides no information at all about correlation. So this comment is wrong:
Now, let's talk about multiple comparisons.
When you test a null hypothesis, you do it at a prespecified level $\alpha$, such that the probability of wrongly rejecting the null hypothesis doesn't exceed $\alpha$. If you carry out multiple tests (say $n$ many) at the same level, then the overall level increases. I'm not giving the proof, as I'm too lazy to type three to four lines in LaTeX. You can find it here, but I strongly suggest that you try it yourself first, as it's really simple using Boole's inequality.
The above link from Wikipedia explains it all, but just for completeness, I'll briefly mention the steps for Bonferroni Correction:
- The overall level is controlled at level $n\alpha$ (note that this is level, not size) if you test $n$ many hypotheses simultaneously at level $\alpha$.
- Test the individual hypotheses at level $\frac{\alpha}{n}$, so that the total level is controlled at $\alpha$, the desired level of significance.
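The second step above can be sketched in a couple of lines of R; the five p-values here are invented purely for illustration.

```r
alpha <- 0.05
pvals <- c(0.001, 0.020, 0.015, 0.30, 0.004)  # hypothetical raw p-values
n <- length(pvals)

# Bonferroni: test each individual hypothesis at level alpha / n,
# so the overall (family-wise) level is controlled at alpha
reject <- pvals < alpha / n
reject
```

Note that the middle two p-values would be rejected at the unadjusted level $0.05$, but survive the correction at $0.05 / 5 = 0.01$.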
As you've said you're very new to statistics, I guess this will be enough for you, and you don't need to look into other multiple comparison techniques. But if you wish, feel free to explore.
Actually, what I've said above has nothing to do with R. You can test the different elements individually at the adjusted level, and then combine the inferences.
If you don't want to do that by hand, you can pass the p-values you already have to the `p.adjust` function.
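For example, with the same invented p-values as above, `p.adjust` with `method = "bonferroni"` multiplies each p-value by $n$ (capping at 1), which is equivalent to comparing the raw p-values against $\alpha / n$:

```r
pvals <- c(0.001, 0.020, 0.015, 0.30, 0.004)  # hypothetical raw p-values

adj <- p.adjust(pvals, method = "bonferroni")
adj          # each raw p-value multiplied by 5, capped at 1
adj < 0.05   # same rejections as comparing pvals against 0.05 / 5
```

`p.adjust` also offers less conservative methods (e.g. `"holm"`, `"BH"`) if you ever need them.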
Hope this helps.