Compare variance of values in two datasets

Hi all,

I have data from a scientific instrument which in a particular scenario outputs values for a series of detected chemical elements. I'll call this dataset A.
I have a list of expected values for this kind of reading, dataset B, again with a value for each chemical element.

What I'd like to do is compare dataset A with B, to get a sense of whether the variance between values in A and B is consistent overall. In other words, they may not be equal, but the values in A might be 5% more than those in B, +/- 2% overall.

So in the analysis, the value for Fe in A must be compared with Fe in B, Zn in A compared with Zn in B, etc.

Can anyone suggest the best way to go about this in RStudio?

I have done one-sample, two-tailed t-tests for individual elements, where I compared a series of results for an element (e.g. Fe) against the fixed expected value. These mostly rejected the null hypothesis for equality. So I now need to see if there is an overall correspondence between the datasets, looking at all elements.

Thanks!

Welcome to the community!

From the description, it is not really clear to me what exactly you have and what you want to achieve.

From your description, what I've understood is that for a particular element you have some number t of observations, and corresponding to these observations there are t expected results. Now, for each of these t cases, you want to check whether the observations are within a predefined band of their expected counterparts. Then you want to repeat this for n distinct elements. Is this understanding correct?

If so, then you've said that you've already tested each of the individual elements; how did you do that? It seems to me that your null hypotheses are not point nulls but intervals. Are you using monotone likelihood ratio tests?

But I guess that's not important; any justifiable test will do. Having checked all elements separately, you've found a rejection in each case, and now you want to test them simultaneously, right? So it's just a multiple comparison problem.

In that case, there are several standard options. You can use any of the available methods, like the Bonferroni correction, Sidak's method, Holm's step-down procedure, Hochberg's step-up procedure, etc., to control the family-wise error rate. If you wish to control the false discovery rate instead, you can use the Benjamini-Hochberg step-up method.
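
In R, most of these adjustments are available through the built-in p.adjust function. A minimal sketch, assuming p_raw is a named vector holding the unadjusted p-values from your individual tests (the numbers are purely illustrative):

# illustrative p-values only, one per element
p_raw <- c(Fe = 0.012, Mg = 0.048, Sr = 0.003, Rb = 0.200)

p.adjust(p_raw, method = "bonferroni")   # Bonferroni correction (family-wise error rate)
p.adjust(p_raw, method = "holm")         # Holm's step-down procedure
p.adjust(p_raw, method = "hochberg")     # Hochberg's step-up procedure
p.adjust(p_raw, method = "BH")           # Benjamini-Hochberg (false discovery rate)

Sidak's method is not one of p.adjust's built-in options, but its adjusted p-values are simply 1 - (1 - p_raw)^length(p_raw).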

I hope this helps, but I'm not at all confident that I understood the problem.

Hi,
Thanks for your answer. It is useful already in that I was not familiar with any of the methods you mentioned, as I have only basic knowledge of statistics. So I will investigate those further.

I'm not sure how to clarify the scenario further, but I will try. You're pretty close anyway.
For each reading the instrument produces a spreadsheet with a column for each element, and a value for each, e.g.:

Fe     Mg     Sr      Rb      etc.
0.1    0.4    0.02    0.006

I have another list of the same elements with expected values.

The t-tests I have already done are per element. I use results from multiple readings, so for example I take 3 results for Fe, put them in a vector and compare them with the single expected value:

test <- t.test(vector_FE, mu=expected_FE, conf.level = 0.95)

These individual element tests do not show a correlation, so yes, I want to make a multiple comparison instead to gauge how similar my results are overall to the expected results. A visualisation would be a bonus.

This may be a common problem - I don't know, as this is a new field to me.

As for monotone likelihood ratio tests - I'm afraid I don't know, I'm not familiar with that terminology.

Thanks.

I think you're either making some mistakes or using the wrong terminology to describe things. I'll try to explain below.

From your updated information, you have several observations on an element and a single expected record. You want to check whether the observations are the same as that record or not, but to account for sampling fluctuations you accept observations falling in a close neighbourhood (an interval) of the expected record. Right?

This means you're testing for mean, and not for variance.

If you go through the theory of the t-test (from any book or resource you prefer), you'll find that it's the test for the normal mean problem with unknown variance. For example, you test H_0: \mu = \mu_0, where \mu is the mean of the true population the observations come from and \mu_0 is a known constant, and you're checking whether the true mean is indeed that value or not (in a two-tailed test).
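
For reference, the test statistic behind this is \frac{\bar{x} - \mu_0}{s / \sqrt{m}}, where \bar{x} is the sample mean, s the sample standard deviation and m the number of observations; under H_0 it follows a t distribution with m - 1 degrees of freedom, and that is the distribution t.test uses to compute the p-value.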

For the time being, I assume that your observations are normal. There's absolutely no justification for this assumption in the information you provided, but without it you can't use the t-test.

If you ended up rejecting the null hypothesis in a t-test, that means that the observations do not come from the normal distribution with the prespecified mean (i.e. \mu_0) in the null hypothesis. It provides no information at all regarding correlation. So this comment is wrong:

"These individual element tests do not show a correlation"

Now, let's talk about multiple comparisons.

When you test a null hypothesis, you do it at a prespecified level \alpha, such that the probability of wrongly rejecting the null hypothesis doesn't exceed \alpha. If you run multiple tests (say n of them) at the same level, then the overall level increases. I'm not giving the proof, as I'm too lazy to type three to four lines of LaTeX. You can find it here, but I strongly suggest that you try it yourself first, as it's really simple using Boole's inequality.

The above link from Wikipedia explains it all, but just for completeness, I'll briefly mention the steps for Bonferroni Correction:

  1. If you test n hypotheses simultaneously, each at level \alpha, then the overall level is controlled at n\alpha (note that this is the level, not the size).
  2. So instead test the individual hypotheses at level \frac{\alpha}{n} each, so that the total level is controlled at \alpha, the desired level of significance (see the small illustration below).
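
As a tiny numerical illustration of these two steps (assuming, purely hypothetically, four elements tested at an overall level of 0.05):

alpha <- 0.05   # desired overall level of significance
n     <- 4      # hypothetical number of elements (hypotheses) tested
alpha / n       # 0.0125: run each individual t-test at this level,
                # i.e. reject only when that element's p-value is below 0.0125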

As you've said that you're very new to statistics, I'd guess this will be enough for you and you won't need to check the other multiple comparison techniques. But if you wish, feel free to explore.

Actually, what I've said above has nothing to do with R. You can test the different elements individually at the adjusted level and then combine the inferences from those tests.

If you don't want to do that by hand, you can feed the p-values that you already have into the p.adjust function.
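
Putting it together with your per-element t-tests, here's a rough sketch; the data frame readings and the vector expected are hypothetical stand-ins for your spreadsheet of readings and your list of expected values:

# hypothetical stand-ins for your data: one column per element, one row per reading
readings <- data.frame(Fe = c(0.10, 0.11, 0.09),
                       Mg = c(0.41, 0.39, 0.40),
                       Sr = c(0.021, 0.019, 0.020))
expected <- c(Fe = 0.1, Mg = 0.4, Sr = 0.02)

# one-sample, two-sided t-test per element, keeping only the p-values
p_raw <- sapply(names(expected), function(el)
  t.test(readings[[el]], mu = expected[[el]])$p.value)

# adjust for multiple comparisons, then compare against the overall level
p_adj <- p.adjust(p_raw, method = "bonferroni")
p_adj < 0.05   # TRUE means that element is still rejected after adjustment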

Hope this helps.


Thanks for your detailed reply, it's really helpful.

You've understood the scenario correctly; it seems I've just been using some terms inappropriately in this context.
I did do Shapiro-Wilk tests for normality before running the t-tests.
I'll look into the Bonferroni Correction approach.
