Download "files" that are a list of links

I have a long list of links. Each one is for a NetCDF file. If I put a link in my browser, a file automatically starts downloading, but my browser doesn't go anywhere.

What are these: links or files? How do I read them in R?

When I try RCurl::getURL(), I get

Error in nc_open trying to open file <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<title>302 Found</title>
<p>The document has moved <a href=" .. etc

I have all the links in a file called "myfiles.dat". I'm hoping to move ahead and learn purrr with this set.
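Setting the authentication question aside for a moment, a minimal sketch of the purrr workflow might look like this, assuming myfiles.dat holds one URL per line (the variable names here are placeholders):

```r
library(purrr)

# One URL per line, assumed
links <- readLines("myfiles.dat")

# Derive a local file name from each link and download it
walk(links, function(lk) {
  download.file(lk, destfile = basename(lk), mode = "wb")
})
```

`walk()` is used instead of `map()` because downloading is done for its side effect; there is nothing useful to collect.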

Looks like things have moved:

Given the oauth in the link, even if you get there, you'll probably need to log in or provide an access token.

Can you provide a list of example links?

Here is a full link:

If I copy and paste it into my browser, a file still downloads.

@alistaire, you're right. I do need to log in to get this information. Now I understand that RCurl won't work without the login. They have some short instructions on how to download all the files using Unix. It looks like I will probably have to contact them to see if there is a way around it, since I am not a Unix user.

If you use Windows, the Windows Subsystem for Linux will let you run anything you could need.

If you run MacOS or Linux, they're built on top of Unix, and so are ready to go.

In all likelihood, you could do this all directly from R with httr, but it may still take some work.
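A hedged sketch of what the httr version might look like; the URL, file name, and credentials are all placeholders, and a site that redirects through a login service may need more than this:

```r
library(httr)

resp <- GET(
  "https://example.com/path/to/file.nc",   # placeholder: one of your links
  authenticate("myID", "myPWD"),           # placeholder credentials
  write_disk("file.nc", overwrite = TRUE)  # stream the body straight to disk
)
stop_for_status(resp)                      # error out on a 4xx/5xx response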


Thanks. The data are so close. . .yet so far away.

NetCDFs are popular with climate scientists and almost nobody else :sweat_smile: If you need advice on getting started with whichever product this is (eg. accessing it), I can ask around the office and see if someone's used it before.

The ncdf4 R package works well with NetCDFs, as does purrr (in fact, I * cough * just wrote a blog post on using purrr with file formats like NetCDF :wink: ). Someone's also working on a package called tidync to make dealing with NetCDFs easier still, but I'm not sure how far along it is.
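Once the files are local, the ncdf4-plus-purrr combination mentioned above might look something like this sketch; the variable name "sst" is a placeholder for whatever the files actually contain:

```r
library(ncdf4)
library(purrr)

files <- list.files(pattern = "\\.nc$")

# Open each file, pull out one variable, and close the handle again;
# "sst" is a placeholder variable name
values <- map(files, function(f) {
  nc <- nc_open(f)
  on.exit(nc_close(nc))
  ncvar_get(nc, "sst")
})
```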


This blog post looks incredibly helpful! Thank you!

As for accessing the data, Scott Chamberlain has been working on it and sounds like there might be an interface for this dataset: .


I see this when I try and access that file:

r <- httr::GET("")
#> Response []
#>   Date: 2018-01-18 23:36
#>   Status: 401
#>   Content-Type: text/html; charset=utf-8
#>   Size: 27 B
#> HTTP Basic: Access denied.

This suggests that you've logged into the site in your browser and it's probably using cookies to remember you.

It might be possible to automate the log-in and download process with rvest, but if you haven't done any webscraping before it's going to be quite a lot of work (and I don't think there's a good single resource where you can learn the basics).
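For what it's worth, the rough shape of an rvest login, using the current rvest session API; the login URL and the form field names here are assumptions and would need to be checked against the actual page:

```r
library(rvest)

# Placeholder login page; inspect the real form to find field names
login <- session("https://urs.earthdata.nasa.gov/home")
form  <- html_form(login)[[1]]
form  <- html_form_set(form, username = "myID", password = "myPWD")
login <- session_submit(login, form)

# The session object now carries the cookies, so following a
# download link with session_jump_to() should be authenticated
```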


ncdf4 will now read this THREDDS/DAP source directly, so I'd try using raster::raster() on the link, e.g.
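Something along these lines; the OPeNDAP URL and the variable name are placeholders:

```r
library(raster)

# Placeholder THREDDS/OPeNDAP endpoint and variable name
r <- raster("https://example.com/thredds/dodsC/dataset.nc",
            varname = "sst")
plot(r)
```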

This topic is fraught, though: lots of options, lots of piecemeal history, and lots of confusion.


I've tried this, but I cannot access the final *.nc file:

lk <- ""

r <- httr::GET(lk,
               httr::authenticate("myID", "myPWD"),
               # file name is a placeholder for the real *.nc name
               httr::write_disk("~/Desktop/file.nc", overwrite = TRUE))

# or this with curl

h <- curl::new_handle()
curl::handle_setopt(h,
                    httpauth = 1,
                    userpwd = "myID:myPWD")

resp <- curl::curl_fetch_memory(lk, handle = h)

Where "myID" and "myPWD" are my earthdata ID.

I suggest using wget; you just need to follow these steps.
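For reference, one reading of the Earthdata wget recipe (check their current docs, since the details may change); the credentials go in ~/.netrc so wget can answer the login redirect, and myfiles.dat is the link list from earlier in the thread:

```shell
# Put this line in ~/.netrc (chmod 600 ~/.netrc), with your own credentials:
#
#   machine urs.earthdata.nasa.gov login myID password myPWD
#
# Then download every link listed in myfiles.dat:
wget --load-cookies ~/.urs_cookies --save-cookies ~/.urs_cookies \
     --keep-session-cookies --auth-no-challenge=on \
     --content-disposition -i myfiles.dat
```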

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.