I'm running reverse dependency checks, which involves downloading a large number of packages (dependencies of reverse dependencies) in a batch. This workflow worked fine until recently.
After roughly 2,000 Ubuntu binary packages have been downloaded from P3M, pak::pkg_install() starts reporting "Failed to download" errors. I tried switching IP addresses and varying the number of download threads (10 and 50), but the failures consistently begin around the same threshold.
Could this be related to new rate-limiting rules on P3M? If so, are there documented limits I can follow to stay within them? I'm happy to adjust my workflow, for example by downloading in smaller batches or adding delays between batches.
Minimal reproducible example:
# install the Rust toolchain (revdeprun is installed via cargo below)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# build tools needed to compile packages
sudo apt-get update && sudo apt-get install -y build-essential
cargo install revdeprun
# raise the open-file limit for the parallel downloads
ulimit -n 10240
revdeprun https://github.com/Rdatatable/data.table.git
This downloads 4,000+ packages from P3M; the failures begin at around 2,000 packages.
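In case it helps to see what I mean by "smaller batches with delays", here is a minimal sketch of the workaround I have in mind. The function name, the batch size of 500, and the 60-second delay are all my own guesses, not documented P3M limits; I'd happily tune them to whatever values you recommend.

```python
import time

def install_in_batches(pkgs, install, batch_size=500, delay_s=60):
    """Split a package list into batches and pause between them.

    batch_size and delay_s are placeholder values, not documented
    P3M limits. `install` is whatever performs the actual download,
    e.g. shelling out to R and calling pak::pkg_install(batch).
    """
    batches = [pkgs[i:i + batch_size] for i in range(0, len(pkgs), batch_size)]
    for i, batch in enumerate(batches):
        install(batch)
        if i < len(batches) - 1:  # no need to sleep after the last batch
            time.sleep(delay_s)
    return len(batches)
```

If there is a documented requests-per-minute limit instead, I could replace the fixed delay with backoff on HTTP 429 responses.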