We've developed internally a large and fairly complicated Shiny app whose output fully relies on connections made to our DB. The app has reached a state of maturity where it makes sense to run automatic tests before each deployment. However, I wanted to get your opinion on using shinytest (or testing some other way) for an app whose outputs (and inputs too) constantly vary over time.
What I mean by that is: when a user opens the main page, they must enter a time range for which certain inputs are rendered and visualized. The problem starts here: if the user performs the same search two weeks from now, the outputs won't necessarily be the same, so the reference point for testing will constantly change. It may also be that in four weeks' time it won't even be possible to enter the same values in the date selection, as it is time-restricted.
Is there a way I can pass static testing inputs to shinytest instead of making DB calls, or what is the best way to approach this? Can you please share your experiences?
I've seen a couple of options used successfully in cases like this:
If you want a full "integration test" that has your Shiny app hitting a database, you could deploy a "staging" or "testing" copy of your app (ideally on the same platform/infrastructure that your production app runs on) -- the only difference being that it's pointed at a test database that has a static set of data in it. That way, you can get deterministic results every time you try to open the app.
If you're not interested in exercising the database connectivity, you could abstract your data store in your test environment to just use e.g. a local CSV file. Again, that would give you static, deterministic results. You could use e.g. an environment variable that says ENVIRONMENT=production or ENVIRONMENT=testing and then use Sys.getenv() in your R code to determine whether you connect to a database to pull data or just open a local file. Using a package like dplyr can make these kinds of abstractions easier.
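To make that concrete, here's a minimal sketch of the environment-variable switch. The variable name `ENVIRONMENT`, the fixture path, the DSN, and the table name are all assumptions you'd adapt to your setup:

```r
# Sketch: pick the data source based on an environment variable.
# ENVIRONMENT, the fixture path, DSN, and table name are placeholders.
get_data <- function() {
  if (identical(Sys.getenv("ENVIRONMENT", "production"), "testing")) {
    # Static fixture file -> deterministic results for shinytest
    read.csv("tests/fixtures/static_data.csv")
  } else {
    con <- DBI::dbConnect(odbc::odbc(), dsn = "prod_dsn")
    on.exit(DBI::dbDisconnect(con))
    DBI::dbReadTable(con, "measurements")
  }
}
```

Your server code then calls `get_data()` everywhere instead of querying the DB directly, and the shinytest run simply sets `Sys.setenv(ENVIRONMENT = "testing")` before starting the app.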
Thank you for your prompt answer! OK, that makes a lot of sense. I think replicating the tables in a separate DB schema and then specifying which of them to connect to can easily be done with the config package. Thanks again!
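For anyone finding this later, a sketch of that config-package approach (the schema names and keys here are illustrative, not from the original app). A `config.yml` next to the app might look like:

```r
# config.yml (illustrative):
#   default:
#     db_schema: "production"
#   testing:
#     db_schema: "test_fixtures"
#
# config::get() reads the R_CONFIG_ACTIVE environment variable to
# decide which block applies, falling back to "default".
cfg <- config::get()
schema <- cfg$db_schema  # "test_fixtures" when R_CONFIG_ACTIVE=testing
```

So the test run just sets `R_CONFIG_ACTIVE=testing` before launching the app, and all queries go to the replicated, static schema.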