Automation of script problems


I have an interesting one here. The piece of code below runs fine within the RStudio IDE.
However, when I run it through an automated script, it has problems.

The reprex below highlights the errors. But when I just run the script in RStudio, none of these errors appear. I don't know what to do in order to have the script run both through automation and directly in RStudio.

Reddit_URL <- ""

Reddit_Health_Page = read_html(Reddit_URL)
#> Error in read_html(Reddit_URL): could not find function "read_html"

Topics = Reddit_Health_Page %>% html_nodes("#t3_oqrcdg ._eYtD2XCVieq6emjKBH3m ") %>% html_text()
#> Error in Reddit_Health_Page %>% html_nodes("#t3_oqrcdg ._eYtD2XCVieq6emjKBH3m ") %>% : could not find function "%>%"

Dates = Reddit_Health_Page %>% html_nodes("#t3_orhjln ._3jOxDPIQ0KaOWpzvSQo-1s ") %>% html_text()
#> Error in Reddit_Health_Page %>% html_nodes("#t3_orhjln ._3jOxDPIQ0KaOWpzvSQo-1s") %>% : could not find function "%>%"

Updates_Timed <- data.frame(Topics,Dates)
#> Error in data.frame(Topics, Dates): object 'Topics' not found
Created on 2021-07-26 by the reprex package (v2.0.0)

How do you run the automated script?
You don't mention any library() statements. Are these in your RStudio or project startup?

Hey Hans

So, I am currently attempting to run the automated script via an RStudio add-in
(Addins > Schedule R script on Windows), which leads to a menu where the script is selected.
My assumption is that once the parameters are set up there, the script should run on the schedule you set.
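(For reference: add-ins of this kind on Windows usually come from the taskscheduleR package, which can also be called directly. A minimal sketch, assuming that package is the one behind the add-in; the task name and script path below are hypothetical placeholders.)

```r
# Sketch: schedule a script with taskscheduleR instead of the add-in menu.
# Assumes the add-in is provided by the taskscheduleR package (Windows only).
library(taskscheduleR)

taskscheduler_create(
  taskname  = "reddit_scrape",               # hypothetical task name
  rscript   = "C:/scripts/reddit_scrape.R",  # hypothetical path to your script
  schedule  = "DAILY",
  starttime = "08:00"
)
```

Either way, the scheduled run starts a fresh R session, so the script itself must load every package it uses.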

Thank you.

Hello @jack3 ,
I am afraid I have nothing to add to my previous messages.
Your script, as far as I can see, has no library() statements, so ...
Maybe one of our fellow forum members knows the add-in.

I think this is the key to the issue: it seems you are not including library() calls in your script. That is why it only works interactively, in a session where you have already loaded the packages, and not when you schedule the script, which runs in a clean session where no package is loaded.
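Concretely, the script would start with the library() call below (rvest provides read_html(), html_nodes(), and html_text(), and also re-exports the %>% pipe). This is only a sketch using the placeholders from your reprex, so it will still fail until a real URL is supplied:

```r
# A scheduled run starts in a clean session, so packages cannot be
# inherited from your interactive RStudio session -- load them here.
library(rvest)   # read_html(), html_nodes(), html_text(); re-exports %>%

Reddit_URL <- ""   # placeholder, as in the reprex

Reddit_Health_Page <- read_html(Reddit_URL)

Topics <- Reddit_Health_Page %>%
  html_nodes("#t3_oqrcdg ._eYtD2XCVieq6emjKBH3m") %>%
  html_text()

Dates <- Reddit_Health_Page %>%
  html_nodes("#t3_orhjln ._3jOxDPIQ0KaOWpzvSQo-1s") %>%
  html_text()

Updates_Timed <- data.frame(Topics, Dates)
```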

Thank you. I previously tried load() instead of library() and thought it would lead to the same outcome.
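(For reference, those two base R functions do unrelated things: load() restores R objects that were saved with save(), while library() attaches an installed package so its functions can be found. A small self-contained illustration:)

```r
# load() vs library() in base R

f <- tempfile(fileext = ".RData")

x <- 1:3
save(x, file = f)   # save() writes R objects to a .RData file
rm(x)

load(f)             # load() restores those saved objects into the workspace
identical(x, 1:3)

# library(), by contrast, attaches an installed package so its functions
# (e.g. rvest::read_html) become available -- which is what a scheduled
# scraping script needs at the top.
```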

I am experiencing a new problem with the scrape: the nodes that are coded become stale once the website being scraped changes its content.

How do I code it so that when a particular item is removed, the next item that takes its place can still be picked up by the scrape? I am trying to automate the scrape and don't want to continually adjust it to fit the node changes that occur.
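One common approach is to stop selecting individual posts by their generated ids (like #t3_oqrcdg, which names one specific post) and instead select by structure or stable attributes, so every current item matches. A sketch against a toy HTML fragment, since the real Reddit markup isn't shown here; the class and attribute names below are illustrative assumptions, not Reddit's actual ones:

```r
library(rvest)   # also re-exports %>%

# Toy stand-in for the live page (illustrative markup only)
page <- minimal_html('
  <div class="Post"><h3>Topic A</h3><span data-testid="post_timestamp">1 day ago</span></div>
  <div class="Post"><h3>Topic B</h3><span data-testid="post_timestamp">2 days ago</span></div>
')

# Select by repeated structure / semantic attributes, not per-post ids,
# so new posts that replace old ones are still matched.
topics <- page %>% html_nodes(".Post h3") %>% html_text()
dates  <- page %>% html_nodes('[data-testid="post_timestamp"]') %>% html_text()

data.frame(Topics = topics, Dates = dates)
```

The auto-generated class hashes (like ._eYtD2XCVieq6emjKBH3m) can also change when the site redeploys, so the fewer of them the selectors depend on, the longer the scrape survives unattended.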

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.

If you have a query related to it or one of the replies, start a new topic and refer back with a link.