I usually move the parts that take most of the time or most of the script (like long code for custom plots) into functions in a separate script or scripts.
Because I have to run the code on several datasets, each of which has its own folder, I save the output of each analysis into that dataset's folder, using an ID for each dataset.
In the interactive scripts the data is loaded the same way, via the ID, so my file starts with something like:
ID <- 'aa1'
analysis <- 'gamm'
## Reference for the directory containing the processed data
floc <- paste('~/data/',analysis,'/',ID, sep = '' )
## Reading Data
Data <- readRDS(paste(floc,'/',ID,'_',analysis,'.RDS', sep = ''))
....
This way I avoid working in a workspace where very 'expensive' data lives (expensive computationally speaking: from weeks to months of dedicated desktop CPU time), and I can access the files very easily (I just need to state which dataset and analysis I want). It may look a bit tedious, but it is easy to code and very easy to use, and it guarantees that I have one and only one file with the desired processed dataset, in one and only one folder in use (I keep copies, but never access them from R). I process/analyse the different datasets with a bash script that calls the R scripts, so if I modify something it is applied to all datasets.
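For the batch runs, the bash script only needs to pass the ID and the analysis name to each R script; inside R you can pick them up with commandArgs(). Here is a minimal sketch of how that could look; the script name, argument order and the *_result.RDS file name are my own assumptions for illustration, not part of my actual scripts:
## called from bash as, e.g.,  Rscript analyse.R aa1 gamm
args     <- commandArgs(trailingOnly = TRUE)
ID       <- args[1]
analysis <- args[2]
## Build the dataset/analysis folder and load the processed data
floc <- paste('~/data/', analysis, '/', ID, sep = '')
Data <- readRDS(paste(floc, '/', ID, '_', analysis, '.RDS', sep = ''))
## ... run the analysis, then save the result back into the same folder ...
## saveRDS(result, paste(floc, '/', ID, '_', analysis, '_result.RDS', sep = ''))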
You will get an error if the file does not exist. Alternatively, if you work interactively most of the time, you can add some code that checks whether the file has been created (and, if not, creates it) with the function file_test, like:
if (file_test('-f', 'yourfile')) {
  Data <- readRDS('yourfile')
} else {
  ## analyse your data / generate 'yourfile' here, e.g.
  ## Data <- <your analysis>
  ## saveRDS(Data, 'yourfile')
}
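If you use that check in several scripts, you can wrap it in a small helper. This is just a sketch, and load_or_make() is a hypothetical name, not an existing function:
load_or_make <- function(file, make) {
  ## Read the cached RDS if it exists; otherwise build it, save it and return it
  if (file_test('-f', file)) {
    readRDS(file)
  } else {
    Data <- make()
    saveRDS(Data, file)
    Data
  }
}
## e.g.  Data <- load_or_make('yourfile', function() { ...analyse your data... })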
What problems do you have with the RDS files?
cheers