Using profvis on my app shows that the bottleneck is the code below, which is the entirety of my server file. My app is extremely large, so I have my server code broken up into a collection of individual files (all named "xx.R"). The code below finds every file with the .R extension in my server folder and sources it. I thought this was clever, but perhaps not. It has been a nice way to manage the code, and it "works" in the sense that all of the code is read in properly.
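In essence, the code does this (the folder name is simplified for this post):

```r
# Find every file ending in .R in the server folder and source each one.
server_files <- list.files(
  path       = "server",   # folder name simplified for this post
  pattern    = "\\.R$",    # any file with the .R extension
  full.names = TRUE
)
sapply(server_files, source)
```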
Do you know which of the two steps is the limiting factor? I'm not sure how much you can speed up listing the files in the first step, but you may get a speed-up in the second step with something like the {future.apply} package, sourcing the files in parallel with future_sapply() as a drop-in replacement.
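For example, you could time the two steps separately, and the parallel version would look roughly like this (folder name assumed; note the caveat in the comments about where source() evaluates):

```r
# Step 1: list the files.
t_list <- system.time(
  server_files <- list.files("server", pattern = "\\.R$", full.names = TRUE)
)

# Step 2: source them.
t_source <- system.time(sapply(server_files, source))

t_list
t_source

# The parallel suggestion, roughly. Caveat: with multisession
# workers, source() evaluates each file in a separate R process,
# so objects defined purely by side effect stay in that worker;
# this is only a true drop-in if the files return their results
# or are otherwise self-contained.
library(future.apply)
plan(multisession)
future_sapply(server_files, source)
```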
It seems to me that you are interpreting the bottleneck to be all of your executable code, which is simply a misunderstanding...
I suspect your approach to sourcing your code is obfuscating the profiling results, so it's harder for you to tell which of your functions your app spends the majority of its time in.
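For example, something like this should get you per-file attribution (assuming you launch the app with runApp(); the path is just a placeholder):

```r
library(shiny)
library(profvis)

# Keep source references so profvis can attribute time to the
# functions defined in each sourced file, rather than lumping
# everything under the sapply() call.
options(keep.source = TRUE)

profvis({
  runApp("path/to/app")  # assumed app location
})
```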
This is a good point. Interestingly enough, rather than using the code in my original post, I can source each file individually and directly. While conceptually the two approaches are the same, there appears to be some underlying difference, because sourcing directly is much faster than the sapply() approach. I'll need to tear this apart more, but thanks for the hints here.
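For anyone following along, here is roughly how I plan to time the two approaches side by side (the folder name is illustrative):

```r
server_files <- list.files("server", pattern = "\\.R$", full.names = TRUE)

# Approach 1: sourcing through sapply(), as in my original code.
t_sapply <- system.time(sapply(server_files, source))

# Approach 2: sourcing each file directly, written here as a
# plain loop instead of separate source("xx.R") calls.
t_direct <- system.time(for (f in server_files) source(f))

t_sapply
t_direct
```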