Working with Shiny in industry

We use Shiny Server Pro to create small (but sometimes complex) Shiny apps for monitoring smelting furnaces.
We have 9 plants with a total of 21 furnaces, each of which has 10-15 apps in daily use.
Each plant has its own database, but the data is (mostly) standardised across plants.
Both existing and new apps are developed continuously, averaging 5-8 new apps a year (some of which go into daily use, others of which are removed due to lack of use).

To update and distribute these applications, we create one repository per application, containing the Shiny app code wrapped as an R package. Each app is invoked by a function call from the package it is wrapped in, i.e. `apppackagename::run_app()`. We use config files to alter specific settings in the app. These are typically the plant name and furnace number, which ensure the right data collection and UI, but also other settings that account for smaller differences between the furnaces. With version updates, the package is re-installed on the server, which affects all app.R files that call the package. The directory layout looks like this, where each app.R file contains a single function call to the right package, e.g. `app1packagename::run_app()`:

	|-- plant1
	|	|-- furnace1
	|	|-- furnace2
	|	|-- furnace3
	|-- plant2
	|-- plant3
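For illustration, a minimal sketch of what such a config-driven entry point could look like inside one of the app packages. The `config.yml` path and the `plant`/`furnace` field names are assumptions for the example, not our exact setup:

```r
# Sketch of run_app() inside an app package. Each furnace directory
# would hold an app.R with only `apppackagename::run_app()` plus a
# config.yml describing that furnace (field names are hypothetical).
run_app <- function(config_path = "config.yml") {
  # Read the per-furnace configuration (plant name, furnace number, ...)
  cfg <- yaml::read_yaml(config_path)

  ui <- shiny::fluidPage(
    # The UI shows which furnace this app instance is for
    shiny::titlePanel(sprintf("%s - furnace %s", cfg$plant, cfg$furnace))
  )

  server <- function(input, output, session) {
    # cfg$plant / cfg$furnace would select the right plant database here
  }

  shiny::shinyApp(ui, server)
}
```

With this shape, re-installing the package on the server updates every furnace's app at once, while each app.R stays a one-line shim.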

There are many benefits to this way of coding and packaging Shiny apps:

  1. One app codebase can serve a large number of furnaces with different configurations and data sources.
  2. Code changes are made in one place.
  3. Changes are easily installed on the server.
  4. All furnaces using the app automatically get the newest version.
  5. Feedback from users, bug fixes, and new ideas can quickly be rolled out and tested in production.
  6. Users do not have to choose plant and furnace before getting data, saving them time.
  7. The UI is configured to show which furnace the open app is for.

The main reason for packaging apps this way: we spend little time coding a new app, publishing it as soon as we believe it has a minimum of value. Most of the development is driven by user feedback and by observing how the app is used, and changes are rolled out continuously. This can be done with very limited coding resources.

The main drawback of this strategy is the lack of version control at the app level. All apps on the server use the same version of each CRAN package, so we keep our fingers crossed that old apps will still work after updates to newer CRAN package versions.
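One partial mitigation we have looked at (a sketch of a possible workflow, not something we run today) is per-app dependency pinning with renv, so that each app repository records the exact package versions it was developed against:

```r
# In an app package repository: pin the CRAN versions this app
# was developed against, so server-wide updates for other apps
# cannot silently break it.
renv::init()      # create a project-local library for this app
renv::snapshot()  # write the current package versions to renv.lock

# On the server (or in CI), recreate exactly those versions:
renv::restore()
```

This trades a single shared library for one library per app, which costs disk space and some installation time but removes the shared-version risk.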

We are considering switching to RStudio Connect to gain some of the benefits of that platform, e.g. containerisation, ease of publishing, and support for Python applications. However, we are afraid of losing the development efficiency we have had so far.

The question is: what strategy should we use in RStudio Connect?
Is there an efficient way to distribute one app to multiple furnaces with different configurations without duplicating parts of the code?

Can you tell me your organization's name? You probably have a customer success rep here at RStudio; I'm happy to help connect you if you share your org name.
