Docker image build takes more than 4 hours because of package installation

I am working on a project to deploy my application with ShinyProxy, but I cannot build the image because the packages used by my Shiny application take too long to install. The build starts, but once package installation begins it takes so long that I end up cancelling it.
Please point me in the right direction, thanks in advance.
This is my Dockerfile:

FROM openanalytics/r-base

MAINTAINER Tobias Verbeke

# system libraries of general use
RUN apt-get update && apt-get install -y \
    sudo \
    pandoc \
    pandoc-citeproc \
    libcurl4-gnutls-dev \
    libcairo2-dev \
    libxt-dev \
    libssl-dev \
    libssh2-1-dev

# system library dependency for the euler app (Rmpfr needs the MPFR headers)
RUN apt-get update && apt-get install -y \
    libmpfr-dev

# basic shiny functionality
RUN R -e "install.packages(c('shiny','rmarkdown','plotly','shinyalert','shinyWidgets','tmap','zoo','tidyverse','questionr','tinytex','data.table','shinycssloaders','sf','readxl'), repos='https://cloud.r-project.org/')"

# install dependencies of the euler app
RUN R -e "install.packages('Rmpfr', repos='https://cloud.r-project.org/')"

# copy the app to the image
RUN mkdir /root/emop
COPY emop /root/emop

COPY Rprofile.site /usr/lib/R/etc/


CMD ["R", "-e", "shiny::runApp('/root/emop')"]

Docker lets you create intermediate images, if you write the Dockerfile the right way.

This isn't so much an R/Shiny problem as a question of how you are defining your starting image.

You need to create an image with all your base installs on it, and then create your final image with your personal scripts. That is, do all your slow RUN commands in a base Docker image. Maybe that takes an hour, but you only do it once. You can then create a final image using just a FROM reference and a COPY, so each time you update your R script the rebuild takes a minute or less.
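For example, splitting your Dockerfile in two could look like this (a sketch — `my-shiny-base` is a hypothetical tag name, adjust it and the package list to your setup). First, the base image with all the slow installs:

```dockerfile
# base.Dockerfile — build once: docker build -f base.Dockerfile -t my-shiny-base .
FROM openanalytics/r-base

# system libraries, including libmpfr-dev for Rmpfr
RUN apt-get update && apt-get install -y \
    sudo pandoc pandoc-citeproc libcurl4-gnutls-dev \
    libcairo2-dev libxt-dev libssl-dev libssh2-1-dev libmpfr-dev \
 && rm -rf /var/lib/apt/lists/*

# all the slow package compilation happens here, one time
RUN R -e "install.packages(c('shiny','rmarkdown','plotly','tidyverse','Rmpfr'), repos='https://cloud.r-project.org/')"
```

Then the app image, which rebuilds in seconds because it only copies your scripts:

```dockerfile
# app.Dockerfile — rebuilds quickly whenever the app changes
FROM my-shiny-base

RUN mkdir /root/emop
COPY emop /root/emop

CMD ["R", "-e", "shiny::runApp('/root/emop')"]
```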

Assuming you aren't resource/network constrained, yes, these packages individually take a long time: each one must be downloaded and compiled. Alternatively, there is a way to save the binary results of the compilation. That is, instead of running "install.packages" in the Docker build every time, you run the installs once on your desktop, or some other suitable VM, and then COPY the installed libraries into the Docker image. You'll have to research the location of the finished binaries (check .libPaths() in R). Some people do use a Docker image to build the binaries and then write them out to the local disk. This gets you a copy of the binaries Docker would build, saved locally for your actual Docker image. It also lets you avoid installing gcc/dev tools in your deployment image.
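One way to sketch the "build the binaries in Docker, keep only the results" idea is a multi-stage build. This is a hedged example: the library path `/usr/local/lib/R/site-library` is common for this kind of base image, but verify it with `.libPaths()` inside your container, and note the runtime image still needs the non-dev shared libraries (e.g. the MPFR runtime for Rmpfr).

```dockerfile
# Stage 1: compile the packages (this stage carries the dev toolchain)
FROM openanalytics/r-base AS builder
RUN apt-get update && apt-get install -y \
    libcurl4-gnutls-dev libssl-dev libmpfr-dev
RUN R -e "install.packages(c('shiny','Rmpfr'), repos='https://cloud.r-project.org/')"

# Stage 2: runtime image without gcc/dev headers
FROM openanalytics/r-base
# runtime shared libraries only (assumption: libmpfr6 is the runtime package name on this base)
RUN apt-get update && apt-get install -y libmpfr6 \
 && rm -rf /var/lib/apt/lists/*
# copy only the compiled package binaries out of the builder stage
COPY --from=builder /usr/local/lib/R/site-library /usr/local/lib/R/site-library
```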

Sorry I can't help you more — I haven't done Docker in a year, but this is pretty straightforward.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.