Help with download.file: how to skip over empty URLs and save using destfile

This is incorrect: since [i] is inside the quotes, it becomes part of the literal string "[i]" rather than indexing with the value of i. You probably want to generate the destfile name at each loop iteration:

# the path is abbreviated here; i comes from your loop, so each file gets its own name
temp.destfile.name <- paste0("/Users/.../document_number_", i)

download.file(temp.doc.name, temp.destfile.name)

Another thing: if you're on Windows, you have to download the file in binary mode:

download.file(temp.doc.name, temp.destfile.name, mode = "wb")

You can also switch to a package like {httr2}, which is more modern when it comes to downloading things (though in your case the download requests are pretty simple, so it might not be necessary).
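For instance, a minimal sketch with {httr2}, assuming temp.doc.name and temp.destfile.name are built as above:

library(httr2)

# req_perform(path = ...) streams the response body straight to disk
request(temp.doc.name) |>
  req_perform(path = temp.destfile.name)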

I downloaded one random document and it was 10 kB. So for 120,000 files, that's about 1.2 GB (assuming all the files are a similar size). I'd say that's small enough to just save in a directory along with your script. But if your goal is to run some text-extraction code, there might not be any point in keeping the original files anyway.

One possibility could be to download all the files from one year (storing them in tempdir()), read them with {officer} or another suitable package, and save the extracted content as an rds or qs file along with your script. That way you have the full text available for future use, and I expect it will take less space (you will probably have to play with a few files first to see what information you actually want to keep and whether it really ends up smaller).
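As a minimal sketch of that idea, assuming the documents are .docx files and reusing the base.url, document.name.1, document.extension, and document.numbers variables from your script (read_one_document and the output file name are placeholders):

library(officer)

# Hypothetical helper: download one document to tempdir(),
# extract its text with {officer}, and return it
read_one_document <- function(current_document_number) {
  temp.doc.name <- paste0(base.url,
                          document.name.1,
                          current_document_number,
                          document.extension)
  temp.destfile.name <- file.path(tempdir(),
                                  paste0("document_", current_document_number, ".docx"))
  download.file(temp.doc.name, temp.destfile.name, mode = "wb")

  # docx_summary() returns one row per document element; keep the text column
  docx_summary(read_docx(temp.destfile.name))$text
}

all.texts <- lapply(document.numbers, read_one_document)
names(all.texts) <- document.numbers

# "documents_one_year.rds" is a placeholder name
saveRDS(all.texts, "documents_one_year.rds")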

Also, one small detail on the for loop:

for (i in 1:length(document.numbers)) {

In that case you don't really care about i; you only care about the document number. So you can save a few characters by looping over the values directly:

for (current_document_number in document.numbers) {
  temp.doc.name <- paste0(base.url,
                          document.name.1,
                          current_document_number,
                          document.extension)
  print(temp.doc.name)
}

By the way, you also have a trailing space at the end of base.url that will create problems.
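If it's easier than editing the string by hand, you can strip it programmatically with base R:

base.url <- trimws(base.url)  # removes leading and trailing whitespace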