Accessing a list of URLs (read.csv) from a table

Hi there,

I am trying to import lots of variables at once using R. I have my variables (a list of URLs, each pointing to a .csv file download) and have categorised these according to their variable type, location, etc.

I have downloaded these individually before using read.csv with a URL, but wanted to know whether it is possible to read the URLs from the table and import the .csv files in one go, and/or specify which ones to download (i.e. download only the .csv files where Required = Y)?

Any help would be greatly appreciated.

Kind regards,

Dan

You can filter your list of variables and iterate over it with something like this:

library(tidyverse)

# keep only the rows you want, then create one object per row:
# assign() makes a variable named after Name, holding the data read from URL_String
your_list %>% 
    filter(Required == "Y") %>% 
    walk2(.x = .$Name,
          .y = .$URL_String,
          .f = ~ assign(x = .x,
                        value = read.csv(.y),
                        envir = globalenv()
                        )
          )

Obviously, I can't test this specific code since you haven't provided a reproducible example.
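For reference, here is a minimal sketch of the structure I am assuming for your_list; the column names Name, URL_String and Required are guesses, so adjust them to whatever your table actually uses:

library(tibble)

# hypothetical layout of the URL table (the names and URLs here are made up)
your_list <- tribble(
    ~Name,        ~URL_String,                          ~Required,
    "rainfall",   "https://example.com/rainfall.csv",   "Y",
    "soil_temp",  "https://example.com/soil_temp.csv",  "Y",
    "site_notes", "https://example.com/site_notes.csv", "N"
)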

Thank you so much for your quick reply. I have attached a copy of the table.

The table is here: NGIF_Data_URLs.csv.

NGIF_Data_URLs <- read_csv("https://www.dropbox.com/s/m009sczic4bj7sj/NGIF_Data_URLs.csv?dl=1")

I was able to import 17 variables but got the error "Error in read.table(file = file, header = header, sep = sep, quote = quote, : no lines available in input" midway through running the process.

This suggests one of the files does not contain any data; I can't test it myself since those URLs require an SSH key.
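If you want the import to keep going past an empty file instead of stopping midway, you could wrap read.csv with purrr::possibly() and skip anything that fails to read. This is just a sketch, assuming the same Name, URL_String and Required columns as in the code above:

library(tidyverse)

# possibly() returns NULL instead of throwing an error when a file can't be read
safe_read <- possibly(read.csv, otherwise = NULL)

required <- NGIF_Data_URLs %>% 
    filter(Required == "Y")

walk2(required$Name, required$URL_String,
      ~ {
          result <- safe_read(.y)
          if (is.null(result)) {
              message("Skipped (no data): ", .x)   # flags which download was empty
          } else {
              assign(.x, result, envir = globalenv())
          }
      })

That should also tell you which of the files is the problem one.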
