I'm writing a thesis that contains a large number of ggplot images.
I typically render one section at a time from an R Markdown notebook, save it as a PDF, and then copy the generated \LaTeX code, together with the generated images, into my main thesis \LaTeX document.
Each figure I generate with ggplot is saved as an individual PDF file, and each data point seems to be stored as a separate element in that PDF. My graphs contain a lot of data points, so even a single compiled R Markdown notebook takes a minute or two to render in any common PDF viewer.
I guess one way to solve this problem would be to have knitr render my ggplot images as PNGs rather than PDFs, but I haven't been able to find a way to do that. Does anyone have any ideas?
According to this question, you can control which formats knitr renders figures to using the dev option (set either per chunk or, as in the linked question, as a default). You might also find the caching options helpful if you're rebuilding your whole thesis for things like, say, correcting typos.
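For example, here's a minimal sketch of a setup chunk that makes PNG the default figure device and turns on caching (the dpi value is an assumption; tune it to your needs):

```r
# In the notebook's setup chunk: emit figures as PNG instead of PDF,
# and cache chunk results so typo-only rebuilds skip re-running chunks.
knitr::opts_chunk$set(dev = "png", dpi = 300, cache = TRUE)

# Or override per chunk, via the chunk header:
# ```{r big-scatter, dev = "png", dpi = 300}
```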
You may want to consider whether aggregating the data points, e.g. with geom_hex(), would be helpful, particularly if data points are very close together or overplot one another.
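A minimal sketch of that kind of aggregation, with made-up data standing in for yours (geom_hex() needs the hexbin package installed):

```r
library(ggplot2)

# Made-up data: a million points, as in a large scatter plot.
df <- data.frame(x = rnorm(1e6), y = rnorm(1e6))

# geom_point() would write one PDF element per point; geom_hex()
# collapses them into at most a few thousand hexagonal bins.
ggplot(df, aes(x, y)) + geom_hex(bins = 60)
```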
(1) use the ggsave() function from ggplot2 to save each figure as a PNG file, then include the saved-on-disk images in your document (see the sketch after this list).
(2) try the ggrastr package (https://github.com/VPetukhov/ggrastr), which tackles this problem by rasterizing layers with large numbers of plotted points (with geom_point_rast(), for example).
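Minimal sketches of both approaches, assuming a large scatter plot; the file names, sizes, and dpi values are illustrative:

```r
library(ggplot2)

# A made-up large scatter plot standing in for one of your figures.
df <- data.frame(x = rnorm(1e6), y = rnorm(1e6))
p <- ggplot(df, aes(x, y)) + geom_point(size = 0.1)

# (1) Save as PNG and \includegraphics{figure-1.png} from the thesis.
ggsave("figure-1.png", p, width = 6, height = 4, dpi = 300)

# (2) Rasterize only the point layer with ggrastr; axes, labels, and
# text stay as vectors, so the PDF stays sharp but loads quickly.
library(ggrastr)
p_rast <- ggplot(df, aes(x, y)) +
  geom_point_rast(size = 0.1, raster.dpi = 300)
ggsave("figure-1-rast.pdf", p_rast, width = 6, height = 4)
```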
In the end, I saved all my output both as HTML (turning OFF the option to generate standalone files) and as tex/PDF. That gave me the .tex file I needed, and the HTML output also left me with a folder of all the images generated as PNGs. Then I could simply find-replace through the .tex output to make it refer to the generated PNGs.
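That find-replace can be scripted; here's a rough sketch, assuming the .tex references figures via \includegraphics{...pdf} and the HTML build produced PNGs with matching basenames (the file name is hypothetical):

```r
# Point this at the knitr-generated .tex file (name is made up).
tex <- readLines("chapter-3.tex")

# Swap .pdf figure references for the matching .png files,
# assuming the PNGs share the same basenames.
tex <- gsub("\\.pdf}", ".png}", tex)

writeLines(tex, "chapter-3.tex")
```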