At the moment we store the reproducible result of a statistical analysis (e.g. an LMM) as an R object that contains the input data, parameters, model and output.
Current situation:
We use the serializeJSON() function from the jsonlite package to create a very verbose JSON file, which can be converted back into an R object with unserializeJSON().
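A minimal sketch of that round trip, using a hypothetical, simplified stand-in for our result object (the real object would also contain the fitted model itself):

```r
library(jsonlite)

# Hypothetical, simplified stand-in for the analysis result
result <- list(
  input_data = mtcars,
  parameters = list(formula = "mpg ~ wt + cyl", method = "ML"),
  estimates  = coef(lm(mpg ~ wt + cyl, data = mtcars))
)

# Type-preserving but very verbose JSON
json <- serializeJSON(result, pretty = TRUE)
writeLines(json, "result.json")

# Convert back to an (almost identical) R object
restored <- unserializeJSON(paste(readLines("result.json"), collapse = "\n"))
all.equal(restored, result)
```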
Alternative situation:
Store the complex object as an .RData file and create plumber APIs so that non-R applications can extract the data they need.
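A minimal sketch of such an API, assuming a hypothetical file "result.RData" that contains the `result` list from the example above (the endpoint names are made up):

```r
# plumber.R
library(plumber)

load("result.RData")  # restores `result` into the environment

#* Return the estimated coefficients as JSON
#* @get /estimates
function() {
  result$estimates
}

#* Return the input data as JSON
#* @get /data
function() {
  result$input_data
}
```

Running `plumber::plumb("plumber.R")$run(port = 8000)` would then let a non-R client fetch, for example, `GET http://localhost:8000/estimates`.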
In situation 1 the stored JSON file can be used directly by other, non-R applications. But is that approach sustainable, and which solution offers the best compatibility with future R versions, more complex objects and larger objects?
Could anyone share some experience or ideas about best practice here?