BrawStats: educational research cycle simulator - Shiny Contest Submission

Authors: Roger Watt, Elizabeth Collins

Abstract: A research simulator – gain an experiential understanding of statistics. Set research design parameters on the left, run simulations, and see the results on the right. Experiment with the controls to explore the consequences of design choices.

Full Description: Learning statistics is often a hurdle for students in the social sciences. We identified three basic issues: (i) the mathematical language is unfamiliar; (ii) the procedures are lengthy and easy to get wrong; and (iii) the subject can appear as a million unconnected facts to be mastered rather than a single coherent whole. On top of this, data analysis is often taught, or experienced by learners, in isolation - without much reference to data-generating processes such as research design.

We therefore set out to build an app that would provide students with easy access to the whole system of research in a single screen – from identifying possible variables through to inferences from samples. BrawStats is the result. It is an app that takes users very quickly and easily through the research process of devising a hypothesis, creating a research design, obtaining samples of data that match the hypothesis and design, and seeing the results of simple statistical analysis. Very quickly, students can become confident in all of the basics, such as the different variable types, the different types of sampling, and the consequent choice of statistical test.

BrawStats uses the concept of effect-size (specifically the normalized effect-size, r) throughout as the common currency. Users specify their hypothesis as a set of effect-sizes between variables. Results are given as effect-sizes with standard errors and confidence intervals; NHST is applied afterwards, on top of the effect-sizes.

Users very quickly appreciate that the sampling process is the fundamental limitation – they see how variable the results can be, and how much uncertainty there is in what a single result can say about a population. This doesn’t need to be taught – it is visible. Users specify a population effect-size of r=0.3 and see a sample effect-size of 0.42 (and next time 0.27, and so on).
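The same experience can be reproduced outside the app. The sketch below is not BrawStats code; it is a plain bivariate-normal simulation (our own illustrative choice of mechanism) showing how sample correlations scatter around a population effect-size of r = 0.3 with n = 42:

```python
import math
import random
import statistics

def sample_r(rho, n, rng):
    """Draw n bivariate-normal pairs with population correlation rho
    and return the sample Pearson correlation."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        # construct y so that corr(x, y) = rho in the population
        y = rho * x + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(1)
rs = [sample_r(0.3, 42, rng) for _ in range(10)]
# ten sample effect-sizes, each different, scattered around 0.3
print([round(r, 2) for r in rs])
```

Each run of `sample_r` plays the role of one study; the spread of the ten values is the uncertainty that BrawStats makes visible.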

BrawStats is designed to encourage and facilitate experimentation. Just by experimenting, students can find out for themselves how sample size and sampling method affect that uncertainty. For example, given 42 participants, how large are the benefits of using them in a within-participants design compared to a between-participants one? They can then see that there is much more data in the within case (each participant gives two data points, not one), so they can go back and halve the number of participants in the within case. There is still a benefit, although it is much smaller.
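The within/between comparison can also be checked against the textbook formulas for the standard error of a condition difference. This sketch assumes unit score variance and a within-participant correlation of 0.7 – both are our illustrative assumptions, not values taken from the app:

```python
import math

def se_between(n_per_group):
    """SE of the difference of two independent group means (unit variance)."""
    return math.sqrt(2 / n_per_group)

def se_within(n_participants, rho):
    """SE of the mean within-participant difference, with score correlation rho."""
    return math.sqrt(2 * (1 - rho) / n_participants)

rho = 0.7  # assumed within-participant correlation
print(round(se_between(21), 3))      # 42 participants split into two groups
print(round(se_within(42, rho), 3))  # the same 42 participants, within design
print(round(se_within(21, rho), 3))  # within design with half as many
```

With these assumptions the within design at 21 participants still has a smaller standard error than the between design at 42 (21 per group), but the advantage over the full within design has shrunk – the pattern students discover in the app.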

There are two steps beyond this basic usage. First, users can run multiple simulations and see the distributions of outcomes. Doing this, students are frequently quite shocked at the high variability of p-values that inevitably results from a fixed population and design (e.g. when power is 50%). Students will readily grasp the idea that increasing the sample size doesn’t remove this variability, but displaces it towards smaller p-values, meaning more chance of a significant result.
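The p-value variability is easy to demonstrate independently of the app. The sketch below uses a plain two-sided z-test (our simplification, not the app's machinery), with the effect size tuned so that power is roughly 50%:

```python
import math
import random
import statistics

def p_value(mu, n, rng):
    """Two-sided z-test p-value for H0: mean = 0, with known sd = 1."""
    xbar = statistics.fmean(rng.gauss(mu, 1) for _ in range(n))
    z = xbar * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

rng = random.Random(2)
n = 25
mu = 1.96 / math.sqrt(n)   # non-centrality of 1.96 gives power of about 50%
ps = [p_value(mu, n, rng) for _ in range(2000)]
sig = sum(p < 0.05 for p in ps) / len(ps)
print(f"fraction significant: {sig:.2f}")   # close to 0.50
print(f"p-values range from {min(ps):.4f} to {max(ps):.2f}")
```

Even though the population and design are fixed, the 2000 simulated studies produce p-values spanning essentially the whole (0, 1) range – the fact that students find so striking.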

In the second step, called Explore, students can choose one of the hypothesis or design parameters (such as variable type or sample size) and run simulations to see how the outcomes vary with the different options for that choice. Those outcomes can be presented in various ways, including the probability of a significant result, which essentially does power analysis in front of their eyes. Users are able to introduce non-independence into the sampling process and find out for themselves why that is a problem. They can add outliers, skew, and heteroscedasticity.
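What Explore does with the probability of a significant result can be sketched as a simulation-based power sweep. As before, this is a minimal stand-in using a two-sided z-test with an assumed effect of 0.4 sd units, not the app's own code:

```python
import math
import random
import statistics

def power_estimate(mu, n, reps, rng):
    """Fraction of simulated two-sided z-tests (H0: mean 0, sd 1) with p < .05."""
    crit = 1.959964  # two-sided 5% critical value
    hits = sum(
        abs(statistics.fmean(rng.gauss(mu, 1) for _ in range(n))) * math.sqrt(n) > crit
        for _ in range(reps)
    )
    return hits / reps

rng = random.Random(3)
sizes = (10, 25, 50, 100)
powers = [power_estimate(0.4, n, 2000, rng) for n in sizes]
for n, pw in zip(sizes, powers):
    print(f"n = {n:3d}: estimated power {pw:.2f}")
```

Sweeping the sample size and plotting the fraction of significant results is exactly a power analysis done by brute force, which is what students watch happen on screen.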

BrawStats has three key features that are responses to the problems faced by learners that were identified above:

  1. The language adopted throughout is entirely graphical. Reading diagrams and graphs is much more comfortable for students. The diagrams and graphs are designed (i) to show everything that is relevant and (ii) to be clear and clean. Students see a graphical representation of their hypothesis, of the underlying population, and of the prediction they are making. If they edit a variable and change its type from Interval to Categorical, then all the diagrams change to reflect this change in design.
  2. It is not possible to make an error: everything that can be done is valid. Students cannot accidentally do a t-test between a column of weights and a column of lengths. Admittedly, they can specify a variable with a negative standard deviation.
  3. The app shows the whole process simultaneously as a single whole. As changes are made to one part of the process, these are propagated throughout the screen immediately. Change the IV from Interval to Categorical and the test is automatically switched from a correlation to a t-test (or ANOVA, as appropriate).
Finally, the app embodies the important observation that data doesn’t just happen. The data in the app is always only a product of applying a particular design (involving some important and far-reaching choices) to a specific population. Since the app only allows users to set the design and the population, students are encouraged to understand that these choices determine the most appropriate analysis and that (at this level at least) there is no choice about which analysis/test to use – that choice was already made at the design stage.

Keywords: Teaching Statistics, Psychology, Simulation, Research Design
Shiny app:
Repo: https://github.com/rjwatt42/BrawStats
RStudio Cloud: Posit Cloud

