I want to find out from you fine folks why this might be the case, or rather get your opinions on it.
To me it seems like a bias thing. I've created immense value for my company through Shiny and Leaflet, all through R.
I taught myself R and kept reading, growing, and developing my R skills, and simply put, I made an app way quicker than the Python or Scala devs could. In that way I saved the company close to millions, just by surfacing the data, no fancy ML or AI stuff yet.
So one thing I heard was that the memory management in R is poor? I don't know enough to say whether that's the case. Is it something about the way R stores variables, or something else?
Also, for loops in R are supposedly slow. Well, I mean, aren't for loops in general just slow? I've messed around a little in Python, and there you map functions over data too (instead of writing for loops), and that blazes through it. But are they slow in particular for R?
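For example, here's roughly what I mean by the two styles, in R (just a toy squaring example):

```r
xs <- 1:10

# What I'd write coming from a loop mindset:
out <- numeric(length(xs))
for (i in seq_along(xs)) {
  out[i] <- xs[i]^2
}

# The "map a function over it" style, like Python's map():
out2 <- vapply(xs, function(x) x^2, numeric(1))

# Though in R you can usually just vectorise the whole thing:
out3 <- xs^2
```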
Guess I just want a good all-round discussion; any resources or reading I can do on this would be greatly appreciated.
See Chapter 10 of Wickham's Advanced R for the memory management and loop issues. R does a decent job of releasing objects when they're deleted, but the OS doesn't always take the memory back. Loops proliferate copies, and there are more idiomatic approaches besides. R also provides hooks to delegate performance-sensitive functions to compiled C++ code.
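To make the "loops proliferate copies" point concrete, here's a minimal sketch: growing a result inside a loop copies the vector over and over, while pre-allocating avoids that. The Rcpp bit at the end is one of the C++ hooks I mean (it assumes you have the Rcpp package and a compiler toolchain installed; sum_cpp is just an illustrative name):

```r
n <- 1e5

# Growing the result inside the loop: R copies the vector on (nearly)
# every iteration, which is where much of the bad reputation comes from.
grow <- function(n) {
  x <- numeric(0)
  for (i in 1:n) x <- c(x, i)
  x
}

# Pre-allocating the result avoids the repeated copies.
prealloc <- function(n) {
  x <- numeric(n)
  for (i in 1:n) x[i] <- i
  x
}

system.time(grow(n))      # noticeably slower
system.time(prealloc(n))  # near-instant

# One of the C++ hooks: compile a small C++ function from R via Rcpp.
library(Rcpp)
cppFunction('
double sum_cpp(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); ++i) total += x[i];
  return total;
}')
sum_cpp(as.numeric(1:10))  # 55
```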
R is never going to be (at least within my time horizon) a petabyte-scale production language. It's a superb prototyping language that enforces a degree of user rigor.
If history had taken a strange turn and Haskell, or another functional programming language, had assumed the position of C++ and Java, we would hear far fewer complaints. Most software engineers write in procedural/imperative languages and find Python a much easier translation.
This might be more basic than you're after, but the complaint that loops are slow can sometimes come from programmers used to other languages looping over each row of a dataframe or vector out of habit, rather than realising that functions in R are (generally) vectorised. Some then immediately complain and blame R without discovering their mistake.
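A minimal illustration of what I mean (the column names are just made up for the example):

```r
df <- data.frame(price = runif(1e5),
                 qty   = sample(1:10, 1e5, replace = TRUE))

# Row by row, the habit carried over from other languages:
total <- numeric(nrow(df))
for (i in seq_len(nrow(df))) {
  total[i] <- df$price[i] * df$qty[i]
}

# Idiomatic R: the arithmetic is already vectorised over whole columns.
total2 <- df$price * df$qty

all.equal(total, total2)  # TRUE, but the second version is far faster
```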
I can't agree more. The mainstream OO paradigm (C++, Java, etc.) still rules the software world, and R's CLOS-inspired object systems (S3, S4), with their function-centric multiple dispatch, are just too alien for the usual enterprise developer. Add lazy evaluation (with caveats!), the mind-boggling NSE (especially hard to implement reliably in your own functions/libraries), plus the weird formula DSL, and an IT department's allergy to R is easy to understand.
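For anyone who hasn't seen it, a minimal sketch of that function-centric multiple dispatch in S4 (the classes and the generic here are made up purely for illustration); note how methods belong to the generic function, not to a class, which looks nothing like the usual method-on-object OO:

```r
# Two toy classes.
setClass("Sparse", slots = c(x = "numeric"))
setClass("Dense",  slots = c(x = "numeric"))

setGeneric("combine", function(a, b) standardGeneric("combine"))

# Dispatch happens on the types of *both* arguments:
setMethod("combine", signature("Sparse", "Dense"),
          function(a, b) "sparse/dense method")
setMethod("combine", signature("Dense", "Dense"),
          function(a, b) "dense/dense method")

combine(new("Sparse", x = 1), new("Dense", x = 2))  # "sparse/dense method"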
They would not list any of the reasons above, though, and instead reach for the tired "R is slow" trope, when in fact, as far as raw execution speed goes, R is approximately as slow (or as fast) as Python. Both languages are dynamically typed (so they need to determine the actual type of every operand at runtime), both compile to byte code, both are effectively single-threaded, and both (either directly or via libraries) strive to delegate as much as possible to C++, Fortran, etc. to alleviate these issues.
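You can see the byte-code side of that claim directly in R. Since R 3.4 the JIT byte-compiles functions on first use by default, so the explicit compiler::cmpfun call here is mostly for demonstration:

```r
f <- function(n) {
  s <- 0
  for (i in seq_len(n)) s <- s + i
  s
}

fc <- compiler::cmpfun(f)   # explicit byte compilation
fc(10)                      # 55
print(fc)                   # note the <bytecode: ...> tag
compiler::disassemble(fc)   # the byte code itself
```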
Almost all Python-vs-R benchmarks that show Python outperforming R reveal ignorance (sometimes intentional) of standard R idioms (vectorization, etc.) or of packages well known to R users.
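A sketch of the usual pattern: the "benchmark" times an element-by-element loop in R and compares it against idiomatic NumPy, instead of comparing like for like (timings below will vary by machine):

```r
x <- runif(1e7)

# The version you often see quoted against Python:
naive <- function(x) {
  s <- 0
  for (v in x) s <- s + v * v
  s
}

# The version an R user would actually write:
idiomatic <- function(x) sum(x * x)

system.time(naive(x))      # much slower
system.time(idiomatic(x))  # delegates to compiled code, comparable to NumPy
```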