I'm after any general advice anyone can offer on managing and maintaining the many different adjustments I have to make to a dataset.
What I'm dealing with in the real world is a SAS process (that I'm migrating to R) that has a DATA step changing a whole bunch of values, for example:
if Id in (list of Ids) then nature = "new value";
if Account_Id in (list of Ids) then limit = "new value";
if type = 'example_type' and data_source in (list of sources) then do;
    amount1 = new_amount;
    amount2 = new_amount;
end;
etc. etc.
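For context, a direct translation of those rules into R could look something like the sketch below (using dplyr; the column names id, account_id, type and data_source, the lookup lists and new_amount are all placeholders for the real ones, and I'm assuming nature/limit are character columns and the amounts are numeric):

library(dplyr)

# placeholder lists and values standing in for the real ones
ids_to_fix      <- c("ID1", "ID2")
accounts_to_fix <- c("ACC10", "ACC11")
sources_to_fix  <- c("source_a", "source_b")
new_amount      <- 123.45

adjusted <- input_data %>%
  mutate(
    nature  = if_else(id %in% ids_to_fix, "new value", nature),
    limit   = if_else(account_id %in% accounts_to_fix, "new value", limit),
    amount1 = if_else(type == "example_type" & data_source %in% sources_to_fix,
                      new_amount, amount1),
    amount2 = if_else(type == "example_type" & data_source %in% sources_to_fix,
                      new_amount, amount2)
  )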
The thing is, there are lots of them.
I think I would rather have the adjustments stored in a database, with an accompanying reason/comment for each adjustment, an effective date, an expiration date and some other useful info, instead of just continually adding lines of code to a code file.
And then have an R process/function that reads that file/database and generates the statements or pieces of code to apply to the input dataset, checking whether each adjustment is still effective.
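To make that concrete, here's a rough sketch of what I have in mind (the table columns, the ";"-separated key list and the function name apply_adjustments are all just illustrative, not a finished design):

library(dplyr)
library(readr)

# hypothetical adjustments table, e.g. a CSV or a database table, with
# illustrative columns: key_field, key_values, target_field, new_value,
# reason, effective_date, expiration_date (dates parsed as Date)
adjustments <- read_csv("adjustments.csv", show_col_types = FALSE)

apply_adjustments <- function(data, adjustments, as_of = Sys.Date()) {
  # keep only the adjustments that are effective on the run date
  active <- adjustments %>%
    filter(effective_date <= as_of,
           is.na(expiration_date) | expiration_date >= as_of)

  # apply each adjustment: match on the key column, overwrite the target column
  for (i in seq_len(nrow(active))) {
    adj  <- active[i, ]
    keys <- strsplit(adj$key_values, ";")[[1]]   # e.g. "ID1;ID2;ID3"
    rows <- data[[adj$key_field]] %in% keys
    data[rows, adj$target_field] <- adj$new_value
  }
  data
}

adjusted <- apply_adjustments(input_data, adjustments)

One wrinkle I can already see is mixed types (numeric vs character targets would probably need separate handling or a type column), and I also wondered whether something like dplyr::rows_update() would fit if the adjustments can be expressed as replacement rows keyed on an ID.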
Is this sensible? Should I just write out the individual adjustments as regular R code with # comments and keep updating the main R script as things change? Are there other ways of managing something like this?
TIA