Web scraping from a data frame of URLs

Hi there,
I want to import a txt file that has a list of URLs, extract something from each one, and save the extracted text in a CSV file, but I get stuck.

Importing the txt works fine, but when I try to iterate over each row I only extract from the first one:


for (i in seq(list_url)) {
  text <- read_html(list_url$url[i]) %>%
    html_nodes("tr~ tr+ tr strong") %>%
    html_text()
}

Any solutions?

Hi @edgardo888,
If your URLs are held in a list like this

list_url <- list(url = c("google.com", "amazon.com", "facebook.com"))

then seq(list_url) returns 1, not 1 2 3, because the list has only one top-level element (url), so your loop body runs exactly once.
Try seq(list_url$url) (or seq_along(list_url$url)) instead.
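A minimal sketch of the difference, plus one hedged way to collect every page's result and write it to a CSV. The selector is the one from your question, and the example URLs here are placeholders, so treat the scraping part as untested (real URLs also need a scheme like https://):

```r
library(rvest)  # also provides %>%

list_url <- list(url = c("google.com", "amazon.com", "facebook.com"))

seq(list_url)            # 1       -- counts the list's elements (just `url`)
seq_along(list_url$url)  # 1 2 3   -- one index per URL

# Collect each page's text in a list, then flatten and write out.
results <- vector("list", length(list_url$url))
for (i in seq_along(list_url$url)) {
  results[[i]] <- read_html(list_url$url[i]) %>%
    html_nodes("tr~ tr+ tr strong") %>%
    html_text()
}

# One row per extracted string, tagged with its source URL.
out <- data.frame(
  url  = rep(list_url$url, lengths(results)),
  text = unlist(results)
)
write.csv(out, "extract.csv", row.names = FALSE)
```

Note that your original loop would have overwritten `text` on every pass even with the right sequence, which is why the sketch stores each iteration's result in `results[[i]]`.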

