I would like to run the arules::apriori algorithm in parallel. Can RMarkdown help me separate the iterations of the algorithm onto separate processors? Currently, I have to wait for the run on the first category to finish before the loop moves on to the next category.
This feels like a perfect algorithm to run in parallel, but I don't know how to do it, and I am wondering whether RMarkdown could help automate the parallelization. Thank you.
To clarify your question: do you expect a single call to arules::apriori() to run in parallel (spanning multiple cores), or do you want to run individual executions of arules::apriori() in parallel?
If it is the former, that sounds like a feature request you can raise on GitHub with the package maintainer.
If it is the latter, there are various tools out there that can do this. Packages that come to mind are doMC, which registers a multicore parallel backend so you can run parallel foreach loops or any of the mc*apply functions. Alternatively, you could use furrr as a parallel counterpart to purrr for a more functional programming style. A sketch of the foreach route follows below.
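Here is a minimal, hedged sketch of the foreach approach. It is not taken from your code: it assumes you already have a named list trans_by_cat of arules transactions objects, one per category, and the support/confidence thresholds are placeholders. It also uses doParallel rather than doMC so it works on Windows too.

```r
library(arules)
library(foreach)
library(doParallel)

# Register a parallel backend (doParallel used here as a cross-platform
# stand-in for doMC); leave one core free for the main session.
cl <- parallel::makeCluster(parallel::detectCores() - 1)
doParallel::registerDoParallel(cl)

# trans_by_cat is assumed to be a named list of `transactions` objects,
# one element per category; the supp/conf values below are placeholders.
rules_by_cat <- foreach(tr = trans_by_cat, .packages = "arules") %dopar% {
  apriori(tr, parameter = list(supp = 0.01, conf = 0.5))
}
names(rules_by_cat) <- names(trans_by_cat)

parallel::stopCluster(cl)
```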
Any of the above is independent of your use of RMarkdown, but RMarkdown does not hinder it either.
Thank you, this answered my question. I was attempting the latter option and was able to accomplish it with foreach. Execution time is 6x faster now. I will research the purrr solution as well, since I want to stay in the tidyverse! Thanks again.
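In case it helps anyone landing here later, a hedged sketch of the furrr route mentioned above, using the same hypothetical trans_by_cat list and placeholder thresholds as the foreach example:

```r
library(arules)
library(future)
library(furrr)

# Set up a multisession plan so each category is mined in a separate R process
plan(multisession, workers = parallel::detectCores() - 1)

# future_map() is the parallel counterpart to purrr::map()
rules_by_cat <- future_map(
  trans_by_cat,
  ~ apriori(.x, parameter = list(supp = 0.01, conf = 0.5))
)
```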