Shiny Developer Series - Episode 2 - Follow-up Thread - Colin Fay on `golem` and Effective Shiny Development Methods

Following up with questions from the Shiny Developer series here. If you missed the live webinar, you can view the recording at :+1:

For videos, resources, and more information about the full series, check out the Shiny Developer Series Website

Some questions from the webinar:

Q - Is it hard to switch mid-production to golem if you started without it in the beginning?

Q - Do you use packrat or renv alongside golem when building a shiny app to version all packages used in the project?

Q - I have some issues with Out of Memory problems with my Shiny-App, any best practices suggestions?


Q Is it hard to switch mid-production to golem if you started without it in the beginning?

It's definitely not hard. If you think of a "classical" Shiny App, there is a ui.R, a server.R, and maybe a global.R. The first two files can be put (almost) as-is into {golem}'s functions inside R/app_ui.R & R/app_server.R. The global.R is a file where you usually find data and/or functions, and as {golem} is built on top of a package structure, you can put data and functions just as you would have done in any package.
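A minimal sketch of that migration (the file names follow {golem}'s defaults; the UI and server bodies are placeholders for your existing code):

```r
# R/app_ui.R -- wrap the content of your former ui.R in a function
app_ui <- function(request) {
  shiny::fluidPage(
    shiny::titlePanel("My app")
    # ... the rest of your former ui.R goes here ...
  )
}

# R/app_server.R -- wrap the content of your former server.R in a function
app_server <- function(input, output, session) {
  # ... the body of your former server.R goes here ...
}
```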


Q - Do you use packrat or renv alongside golem when building a shiny app to version all packages used in the project?

You can indeed use {packrat} or {renv} with {golem}. A golem is a Shiny App built on top of a package, so you can definitely use any classical tools for Shiny and for package management that you are used to using in production.
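For example, with {renv} (a sketch; run these from the root of your golem project):

```r
# Initialise a project-local library and lockfile
renv::init()

# ... develop your app, install packages as usual ...

# Record the exact package versions used by the project
renv::snapshot()

# On another machine (or in a Docker image), restore them
renv::restore()
```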

Q - I have some issues with Out of Memory problems with my Shiny-App, any best practices suggestions?

It's kind of hard to tell without having a deeper look into how your Shiny App is built and why you encounter these errors.

It can be linked to one dataset you're operating on being too large for your R session and in that case I would suggest backing your Shiny App with a database (SQL if you're working with tabular data, or maybe MongoDB if you're working with unstructured data).

It might also be linked to a mistake in the way reactive values / values in general are handled in the app. For example, in an app I recently had to optimise, I found a reactive implementation that created a lot of duplication, leading to the app asking for more memory than was actually needed.

But if that's linked to handling one big object in the application, either choose a database backend, or work with tools for on-disk memory usage.
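As an illustration of the on-disk approach (a sketch, not from the original answer, assuming tabular data stored as Parquet and the {arrow} package), the filtering is pushed down to disk and only the result is pulled into the R session:

```r
library(arrow)
library(dplyr)

# Open the dataset lazily: nothing is read into memory yet
ds <- open_dataset("data/my_big_dataset")  # hypothetical path to a Parquet directory

# Filters and selections happen on disk; only the collected
# result occupies memory in the Shiny session
small <- ds |>
  filter(year == 2020) |>
  select(id, value) |>
  collect()
```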


Golem with all its helper functions looks great. As I slowly try to figure out how to pass things in and out of modules, I'd like to ask a question about the (I suppose) easy bits:

If I build an app within a package with Golem, what is the best way to load data in the app? I mean, the read_csv calls and such I would otherwise put in global.r or at the very beginning of the app.r file.

While there may be many options, I'd be particularly interested in the following two use cases:

  • when the data is part of the package itself (e.g. a demo used to showcase something about the package), it seems quite straightforward: a global.r in the inst/app/ subfolder, and from there a relative path to the data inside the package. I don't see particular problems with this.

  • but when the data is on the host computer/server... how can someone who uses the package feed their own data into the app, or how do I pass it in the final call in the Dockerfile when I deploy it? It could also be located in some predefined location relative to the working directory... but I'm still unsure what is the best way to go about it.

Hey @giocomai,
thanks for this question.

I think the cleaner way to do it is not through a global.R — if you have data to put inside your app, you can add it just as any data in a package. An app built with {golem} is a package, so anything you know about packages applies there :slight_smile:

So basically, what we commonly do is :

  • Create a data-raw folder with usethis::use_data_raw()
  • In the newly created folder, you'll find (with the newest version of usethis) a script that starts with:

## code to prepare `DATASET` dataset goes here


So just do the read csv, some manipulation if needed, and launch the last line. It will add DATASET as a dataset for your package, which can then be used inside your app function.
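A filled-in data-raw/DATASET.R might look like this (a sketch; the source file path and the cleaning steps are placeholders):

```r
## code to prepare `DATASET` dataset goes here
library(readr)
library(dplyr)

DATASET <- read_csv("data-raw/raw_data.csv") |>  # hypothetical source file
  filter(!is.na(value)) |>
  mutate(date = as.Date(date))

# The last line, generated by usethis, saves the object into data/
usethis::use_data(DATASET, overwrite = TRUE)
```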

For more info about data in a package :

If the data is not on your computer, I think there are several use cases :

  • data is on a database — you can create the connection in the app_server() function. Here is an example :
app_server <- function(input, output, session) {

  impala <- reactiveValues()
  mongo  <- reactiveValues()

  # connect_base(), get_products(), get_lot() and connect_mongo()
  # are helper functions defined elsewhere in the package
  impala$con      <- connect_base("Cloudera ODBC Driver for Impala 64-bit")
  impala$products <- get_products(impala$con)
  impala$lots     <- get_lot(impala$con)
  impala$ok       <- TRUE

  mongo$db     <- connect_mongo()
  mongo$has_db <- nrow(mongo$db$find('{}')) > 0
}
  • data is sent by the user: then there is shiny::fileInput() and friends, which create an uploader in the UI.

  • data is to be passed as an argument to run_app(), and is located in the package directory. Put the data in the inst folder of your golem, then you can use system.file("data.csv", package = "myapp") once the package is either installed or launched with pkgload::load_all(). For example, system.file(package = "stats") retrieves the path of the {stats} package on the local machine.
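For the user-upload case, a minimal sketch (assuming a CSV upload; the input id is arbitrary):

```r
library(shiny)

ui <- fluidPage(
  # fileInput() renders the upload widget in the UI
  fileInput("user_file", "Upload a CSV"),
  tableOutput("preview")
)

server <- function(input, output, session) {
  output$preview <- renderTable({
    req(input$user_file)  # wait until a file has been uploaded
    # datapath points to the temporary copy of the uploaded file
    utils::read.csv(input$user_file$datapath)
  })
}

shinyApp(ui, server)
```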

More on that :

By the way, the most recent version of {golem} has a new, unified way to pass arguments to the run_app() function. You can read more about that here :
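A sketch of that unified pattern (golem::with_golem_options() stores the arguments, and golem::get_golem_options() retrieves them anywhere in the app; the `data` argument here is just an example):

```r
# In R/run_app.R: extra arguments to run_app() are stored as golem options
run_app <- function(data = NULL, ...) {
  golem::with_golem_options(
    app = shiny::shinyApp(ui = app_ui, server = app_server),
    golem_opts = list(data = data, ...)
  )
}

# In R/app_server.R: retrieve the value passed at launch
app_server <- function(input, output, session) {
  my_data <- golem::get_golem_options("data")
  # ... use my_data in the app ...
}

# Launch with user-supplied data:
# run_app(data = my_df)
```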

Let me know if this answers your questions :slight_smile:


Thanks for the detailed reply!

As for data included in the package, you're right, it makes sense to go the usethis::use_data way, document and export, as one would do with other data included in the package. I'm taking this on board.

As for passing data to the shiny app, your last point and the functions described in your thoughtful post is what really nails it for me.

What I used to do earlier in order to be able to retrieve arguments that could be defined at launch of a shiny app that was part of a package was basically to include a wrapper function with a call to shinyApp() and all the code of the app inside it. Like this:

run_my_app <- function(data, option_x = FALSE, option_y = TRUE) {
  shiny::shinyApp(
    ui = fluidPage(
      # all ui code
    ),
    server = function(input, output, session) {
      # all server code
    }
  )
}
Arguments could still be retrieved inside the shinyApp(). Needless to say, this "everything and the kitchen sink" approach made reading and changing the code a pain. In principle, with proper use of modules, it could still turn out to be decent, I suppose.

But the new get_golem_options() approach seems to deal with this in a much nicer way. I'll give it a try soon, but it really looks like it does what I need.

Thank you so much again for the thoughtful reply and for all the work on golem!

I read the replies but I'm not sure it solves my issue.

My data is on my computer and will be updated by the user inputs. A simple example would be a form that captures some information given by the user. I already have some historical data stored somewhere locally (or it could be empty at the beginning), and as users fill the form, the data gets updated.

I like to use a global.R so that the data is loaded into memory when starting the Shiny server and is shared globally among all users; every time a user inputs new data, the object in memory gets updated, and the stored file is updated as well (so that if I restart the Shiny server, nothing gets lost). I use a SQLite database for this.

Your solution above "data is on a database — you can create the connection in the app_server() function." would work. But what bothers me is that the data is loaded every time a new session is created. As the dataset grows, this can lead to low performance at startup.

What would be the golem-friendly solution in this case?



This topic was automatically closed 54 days after the last reply. New replies are no longer allowed.