RStudio uses high CPU on VMware

Hi All,

I have noticed that if you are running RStudio on VMware, it uses a lot of CPU. Is there some tuning that needs to be done so it is not so resource hungry in a VDI environment? Thanks for your help.


Hi Natalie, welcome to the R community!

In general, resource consumption scales with usage, so I suspect that may be all you are seeing.

To better understand what really is going on:

  • How do you determine the high CPU usage?
  • Is the system under high load even when no one is using RStudio? (That could indicate "hung" processes or an issue within the OS.)
  • Can you check the processes running in the virtual machine with the top command? The topmost lines show the processes with the highest CPU usage; either take a screenshot or write down the names of any processes using more than 10% CPU.
  • Also run free -h to check memory consumption and report back here.
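The checks above can be bundled into a one-shot snapshot, as a sketch assuming a Linux guest (the field numbers in stock top output can vary between versions; on a Windows VDI, Task Manager gives the equivalent view):

```shell
# One-shot snapshot of CPU-hungry processes: run top in batch mode for a
# single iteration and keep lines whose %CPU column (field 9 in stock
# top output) is above 10%, printing the usage and the command name.
top -b -n 1 | awk 'NR > 7 && $9 > 10 {print $9 "% CPU  " $12}'

# Human-readable memory overview.
free -h
```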

Hi Michael,
thanks for responding to my post. We have RSession.exe running at 50% CPU and RStudio.exe at 20%. I don't know why it does this for this user. The processes stay at those levels the whole time the user is in the application; it doesn't appear to be hung at all.

The VMs have 4 CPUs and 16 GB of RAM. Memory usage for RSession.exe is 1,567,476 KB, whereas RStudio.exe is at 164,308 KB.

Hope this helps.


Hi Natalie,

Based on your data I don't think there is anything wrong with your setup or the usage in general: memory usage is about 10% of the available RAM, and CPU usage is no more than 70% (and depending on how it was measured, that may be 70% of a single core, which would be under 20% of the whole VM).
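As a quick sketch of the arithmetic behind that estimate (assuming Task Manager reports CPU per core, and using the figures quoted above):

```shell
# Rough utilisation check for a 4-vCPU / 16 GB VM, using the process
# figures reported in the thread (RSession.exe: 50% CPU / 1,567,476 KB;
# RStudio.exe: 20% CPU / 164,308 KB).
awk 'BEGIN {
  cores  = 4
  ram_kb = 16 * 1024 * 1024                      # 16 GB in KB

  total_cpu = (50 + 20) / cores                  # per-core % spread over 4 cores
  mem_pct   = (1567476 + 164308) / ram_kb * 100  # both processes vs. total RAM

  printf "total CPU: about %.1f%% of the VM\n", total_cpu
  printf "memory:    about %.1f%% of 16 GB\n", mem_pct
}'
# → total CPU: about 17.5% of the VM
# → memory:    about 10.3% of 16 GB
```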

I can only speculate about what the user is doing: perhaps they are simply working with a lot of data, or doing some heavy-duty data processing via the local launcher. Maybe you can talk to that user and find out what they are working on, just as an information-gathering exercise. For now, though, I would not tell them off for using the system.

I also understand that using VMs usually implies some level of thin provisioning and oversubscription of hardware resources, based on the assumption that not all VMs will need their full resources at the same time. That holds for most use cases (e.g. database and web servers), but for tools that can be, and apparently are being, used for number crunching the situation is vastly different: there it is normal for CPUs to run at 100% for long stretches, which in the case of a database server would likely indicate a problem such as a hung process.

At a previous company I maintained an HPC cluster, and it took us a few months to convince the IT team monitoring it that 100% CPU usage is a sign of good utilisation. They had been trained in the database and web server world and thought 100% CPU usage meant something was wrong; they even sent out emails asking users to kill their jobs.

I hope this explains things; for now I think there is nothing wrong with the hardware utilisation, beyond perhaps having that chat with the user.

Happy to discuss further as needed.


