Hello. I have an R-CMD-check GitHub Actions workflow for my package. The workflow checks my package on macOS, Windows, and three versions of Ubuntu. Everything currently passes (yay!) except for "ubuntu-latest (devel)". Here is the most recent failed run: Corrected flipped arrangement of some 2D confidence regions · tripartio/ale@87047fa.
As far as I can tell from the build log, the ubuntu-latest (devel) runner fails because the {future} parallelization package decides that the objects exported for my package's tests exceed its allowed size. None of the other builds (including the other two Ubuntu builds) report this error. It seems to me that either the {future} installation on ubuntu-latest (devel) grossly overestimates the storage those objects need, or the other builds quietly allow more than 4 GB of storage for {future} parallelization. I really don't know, but I think an error on the ubuntu-latest (devel) side is more likely. (Perhaps there's a memory leak somewhere?) I find it unlikely that my package is asking for such huge amounts of memory.
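(For context, here is how I understand the mechanism: {future} serializes the globals it ships to each worker and aborts if their total size exceeds the option future.globals.maxSize, which defaults to 500 MiB. A minimal sketch of how that cap is raised, assuming a standard multisession plan:)

```r
library(future)
plan(multisession)  # parallel workers that receive serialized globals

# {future} refuses to launch a future whose exported globals exceed the
# option 'future.globals.maxSize' (500 MiB by default). Raising the cap:
options(future.globals.maxSize = 4 * 1024^3)  # allow up to 4 GiB
```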
Could anyone please give me their assessment of this problem and what I might do to get ubuntu-latest (devel) to pass?
Following your suggestion, I set options(future.globals.maxSize = +Inf) in the two vignettes that were failing. But then, instead of failing outright, ubuntu-latest (devel) keeps running until it fails (probably due to a time-out) at 45 minutes. I have rerun it and it fails the same way with no error messages, even after turning on the debug log: Removed limit for future.globals.maxSize · tripartio/ale@409ceac. (For reference, before I increased future.globals.maxSize, that process failed at around 26 minutes.) With this new time-out failure and no indicative error messages, I'm really at a loss for how to proceed, so I have undone this change.
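For reference, this is roughly what the setup chunk of each failing vignette contained (now reverted); a finite ceiling, which I have not tried, might at least make an oversized export fail fast instead of grinding until the time-out:

```r
# What I tried (now reverted): disable the size check entirely.
options(future.globals.maxSize = +Inf)

# Untried alternative: a finite ceiling above the 500 MiB default, so an
# oversized export errors quickly rather than running for 45 minutes.
# The 2 GiB figure is an arbitrary example, not a recommendation.
# options(future.globals.maxSize = 2 * 1024^3)
```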
You suggested that I "check the size and content of the global variables which are passed to the launched process". I'm not sure how to do this in detail, but the output from the run log puzzles me, making me wonder if there might be a problem on the ubuntu-latest (devel) server side. Here's the latest: Revert "Removed limit for future.globals.maxSize" · tripartio/ale@06afdbf.
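Here is my best guess at how to do that inspection locally, assuming future::getGlobalsAndPackages() is the appropriate tool; the model, data, and call below are hypothetical stand-ins, not my package's actual code:

```r
library(future)

# Hypothetical stand-ins for the kind of objects my vignettes use:
model <- lm(mpg ~ wt, data = mtcars)
data  <- mtcars

# Ask {future} which globals it would export for a given expression:
gp <- getGlobalsAndPackages(quote(predict(model, newdata = data)))
names(gp$globals)   # the global variables that would be shipped

# Approximate each global's exported size by its serialized length,
# which is roughly what is compared against future.globals.maxSize:
sizes <- vapply(gp$globals, function(g) length(serialize(g, NULL)), numeric(1))
sort(sizes, decreasing = TRUE) / 1024^2  # sizes in MiB, largest first
```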
Concerning the size of the objects, please note this portion of the error messages. The numbers of exported objects are not unusual, but what I find very odd is the massive reported size of the three largest functions:
"The three largest globals are 'abort' (99.45 MiB of class 'function'), 'action_dots' (99.42 MiB of class 'function') and 'as_label' (99.41 MiB of class 'function')"
"The three largest globals are 'abort' (83.30 MiB of class 'function'), 'abort_context' (83.29 MiB of class 'function') and 'trace_back' (83.29 MiB of class 'function')".
As far as I can tell, these are all rlang functions; I have no idea why they would be so big. As I mentioned in my first message, I wonder if there might be a memory leak or something like that on the ubuntu-latest (devel) server. I cannot imagine that the other servers, which passed my tests without a problem, quietly tolerate 4 GB of future globals being passed around; I doubt that whatever crashes the ubuntu-latest (devel) process is merely tolerated on the others.
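For what it's worth, while puzzling over this I did find one mechanism (a toy example of my own below, not code from my package) by which a tiny function can report a huge size: when a closure is serialized, its enclosing environment travels with it. I don't know whether this is what is happening with the rlang functions on ubuntu-latest (devel), but it would at least make ~100 MiB functions physically possible:

```r
make_f <- function() {
  big <- rnorm(1e7)   # ~76 MiB of doubles captured in the closure environment
  function(x) x + 1   # the function body itself is tiny
}
f <- make_f()

# object.size() does not follow the environment, but serialization (which
# is what a {future} worker actually receives) includes it:
format(object.size(f), units = "MiB")  # tiny
length(serialize(f, NULL)) / 1024^2    # roughly 76 MiB
```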