Shiny App does not find object after deployment.

I have moved my Shiny app code into an app.R document and added a global.R file so that my files load only once, to save memory per this suggestion. The app loads and functions correctly locally, and it does deploy without memory issues.

But I am now receiving the following error regarding the object 'combined_sample':

Error in value[3L] : object 'combined_sample' not found Calls: local ... tryCatch -> tryCatchList -> tryCatchOne -> Execution halted

I tried inserting the 'corpus_train' code into the app.R file just above the Shiny UI, as suggested in (Data object not found when deploying shiny app).
The attached image is a screen grab of the files that I uploaded for the deployment.

How do I get Shiny to find this object after deployment to the Shiny servers?

MUCH appreciation in advance for the help of the Community!



library(shiny)
library(sbo)

# Read the raw text sources
blog <- readLines("en_US.blogs.txt", warn = FALSE, encoding = "UTF-8")
twit <- readLines("en_US.twitter.txt", warn = FALSE, encoding = "UTF-8")
news <- readLines("", warn = FALSE, encoding = "UTF-8")

# Take a 5% random sample of each source
twit_sample <- sample(twit, length(twit) * .05)
news_sample <- sample(news, length(news) * .05)
blog_sample <- sample(blog, length(blog) * .05)

combined_sample <- c(twit_sample, blog_sample, news_sample)
combined_sample <- iconv(combined_sample, "UTF-8", "ASCII", sub = "")

corpus_train <- sbo_predictor(object = combined_sample,
                              N = 3,                         # order of the n-gram model
                              dict = target ~ 0.75,          # dictionary covering 75% of word frequency
                              .preprocess = sbo::preprocess, # preprocessing transformation
                              EOS = ".?!:;",                 # end-of-sentence tokens
                              lambda = 0.4,
                              L = 3L,                        # number of predictions per input
                              filtered = "<UNK>")



ui <- fluidPage(
    headerPanel("Predictive Text App"),
    textInput("text", label = h3("Text input"), value = "Enter text..."),
    h4("Predicted Words:"),
    textOutput("result_output"),
    h6("This app generates three predicted next words based on the text you input.
        The prediction algorithm relies on word frequencies in the English twitter,
        blogs, and news datasets at:"),
    h6("Created April 2021 as part of my Capstone project for the
        Data Science Specialization provided by Johns Hopkins University and Coursera.
        All code can be located on GitHub at:")
)

corpus_train <- sbo_predictor(object = combined_sample,
                              N = 3,
                              dict = target ~ 0.75,
                              .preprocess = sbo::preprocess,
                              EOS = ".?!:;",
                              lambda = 0.4,
                              L = 3L,
                              filtered = "<UNK>")

server <- function(input, output) {
    output$result_output <- renderText({
        # Return the model's suggested next words for the typed text
        predict(corpus_train, input$text)
    })
}
# Run the application 
shinyApp(ui = ui, server = server)


Hi, welcome!

To help us help you, could you please prepare a reproducible example (reprex) illustrating your issue? Please have a look at these resources to see how to create one for a Shiny app.
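For reference, a Shiny reprex is just a complete, self-contained app.R that anyone can run with a single copy-paste. A minimal sketch (with a stand-in for the prediction step, so it runs without the corpus or the sbo package) might look like:

```r
library(shiny)

ui <- fluidPage(
    textInput("text", "Text input", value = "Enter text..."),
    textOutput("result_output")
)

server <- function(input, output) {
    # Stand-in for the real prediction logic, so the app runs anywhere
    output$result_output <- renderText({
        paste("You typed:", input$text)
    })
}

shinyApp(ui = ui, server = server)
```

Swapping the stand-in for the real model call usually makes it clear whether the problem is in the app logic or in how the model object is created and loaded.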

To help the community reproduce my issue, I have created a GitHub repository to house my question, all of my code files, and a link to the source data (the data to create the corpus is too large to upload to my GitHub).

Github Reprex

Please let me know if there is anything else I can provide to help solve my mystery.

Much appreciated!

I'm afraid this reprex is not ideal: if the data is too large to load as part of it, then I'm disinclined to manually download it. You have not indicated that the size of the data is in any way part of the issue, so you are advised to reduce your data down to a small amount and include it in your reprex.
A key priority of a reprex should be to reduce friction for the people you are seeking help from, so as to maximise your chances of receiving help.
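One way to do that (a sketch, standing in for the full text files) is to hard-code a handful of lines directly in the reprex instead of reading the real corpus:

```r
# Instead of readLines() on the full corpus files, embed a tiny fixed
# sample so the reprex runs without any downloads
combined_sample <- c(
    "this is a tiny stand in for the twitter sample",
    "a few short lines are enough to exercise the code path",
    "the goal is that anyone can run the app with one copy paste"
)
```

For real (non-invented) data, `dput(head(combined_sample, 20))` prints a snippet of the actual object that can be pasted into the reprex verbatim.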

Thank you for the assessment of the reprex; I will take that into account. In the meantime, I am making progress on this Shiny issue through another forum that is able to work with the established reprex.

I will circle back here with the resolution as it becomes known.

The issue causing the object to not be found was twofold, and I found the solution here, in the documentation for the sbo package: SBO package documentation. See the 'Out of memory' section for further details and examples. While the issue was not related to memory usage, the solution was found in that section of the package documentation.

  1. The corpus used for predicting the next word should first be created using the sbo_predtable function.

corpus_train <- sbo_predtable(object = combined_sample_final,
                              N = 3,
                              dict = target ~ 0.75,
                              .preprocess = sbo::preprocess,
                              EOS = ".?!:;",
                              lambda = 0.4,
                              L = 3L,
                              filtered = "<UNK>")


  2. Then the sbo_predtable object should be saved as an .rda file.

save(corpus_train, file = "corpus_train_save.rda" )

  3. Then, in the global.R file, load the .rda file and create the predictor with the sbo_predictor function.

load("corpus_train_save.rda")
corpus_train <- sbo_predictor(corpus_train)

The app then deploys without issue.
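Putting the three steps together, the relevant pieces might look like this (a sketch using the file names from the steps above; the heavy sbo_predtable build runs once offline, and only the saved table ships with the app):

```r
# ---- build_model.R: run once, locally, before deployment ----
library(sbo)

# sbo_predtable produces a plain, serializable table of n-gram counts
corpus_train <- sbo_predtable(object = combined_sample_final,
                              N = 3,
                              dict = target ~ 0.75,
                              .preprocess = sbo::preprocess,
                              EOS = ".?!:;",
                              lambda = 0.4,
                              L = 3L,
                              filtered = "<UNK>")
save(corpus_train, file = "corpus_train_save.rda")

# ---- global.R: bundled and run on the Shiny server ----
library(sbo)

load("corpus_train_save.rda")               # restores the sbo_predtable
corpus_train <- sbo_predictor(corpus_train) # rebuild the predictor in this session
```

The key point from the 'Out of memory' section is that a sbo_predictor object only lives within the R session that created it, whereas a sbo_predtable can be saved, shipped with the app, and turned back into a predictor at startup.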