server kicking users off under heavy use

Hi all,
We're looking for some advice on how to set up the server to handle a large app. We have a large (xxxlarge) app on the "Professional" level plan.

When we have 6-10 users hitting the app simultaneously, users start getting kicked off and having to reload.
Reloading is a huge issue because loading the data for the app takes almost a minute on IO when there is only one user (load time is about 20 seconds running locally on a six-core 2018 MacBook Pro). With multiple users, the load time can spike to several minutes.

There is also a lag when displays are being generated on IO.

When running locally, most of the plots generate with very little lag. On the server, the lag is just noticeable with only one user, but grows to 5-10 seconds with 7-10 users. When 6-7 users hit processing-intensive displays, some of the users get kicked off and have to reload.

Optimization efforts to date:
Data is loaded into the global environment and a conditional only allows data loading if a data-load-initiated-flag has not already been triggered. Once in the app, most of the analytics are live because of the large number of options for grouping/filtering the data. Settings:
Our current settings are shown below. The app is xxxlarge with multiple files ranging from 13m to 1.5m rows (the largest are loaded as fst files smaller ones are rds). We are near the maximum upload size for IO.
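For context, the load-once pattern described above looks roughly like this in our global.R (file and object names here are placeholders, not the real ones):

```r
# global.R -- evaluated once per R worker process on the server, so
# objects created here are shared by every session on that worker.
# "data/large_table.fst" and "data/small_table.rds" are placeholder paths.
if (!exists("data_loaded")) {
  big_tbl   <- fst::read_fst("data/large_table.fst")  # largest files as fst
  small_tbl <- readRDS("data/small_table.rds")        # smaller files as rds
  data_loaded <- TRUE                                 # flag: skip on reruns
}
```

Note this only saves a reload within a single worker process; each new worker still pays the full load cost once.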

Is there a way to set the server to eliminate (or at least reduce) the number of times the server kicks people off?

It would be really great to know why this is happening. Is there a way to see why the server kicked people off? The server seemed to kick some people off while leaving others on. I'm not sure what to look for in the logs; I don't see any indication of when users were kicked off, so it's hard to troubleshoot.

Any thoughts or advice would be greatly appreciated.

Hi Steve. Sorry to hear you're having trouble.

I'm not sure what the issue is based on your description, but I'm concerned that we might be running into memory limits on the container.

As a paying customer, you get access to our professional support team. Do you mind sending this report in so our team can take a look at it?


Hi Jeff,
Thanks for the reply. You are correct about running into memory limits: when we had users in, I could see the memory usage bump up to 8 GB and then plateau.
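For anyone hitting the same wall: a quick sanity check we can now run locally is summing `object.size()` over the loaded tables to estimate the footprint before deploying (here `mtcars` stands in for the real tables; the names are placeholders):

```r
# Rough local estimate of the app's in-memory data footprint.
# mtcars is a stand-in for the tables loaded in global.R.
tables <- list(big_tbl = mtcars, small_tbl = mtcars)
total_bytes <- sum(sapply(tables, object.size))
cat(sprintf("Approximate data footprint: %.4f GB\n", total_bytes / 1024^3))
```

Keep in mind copies made during live grouping/filtering can push actual usage well above this baseline.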

We did connect with the support folks, and they sent some useful documentation that I hadn't seen; it helped a lot in understanding what the settings actually do. I basically cut the worker processes and the max connections way down. That limits the number of users we can serve (we ultimately need to serve 5,000 employees; it's a fairly large school district), so we're looking at building this one out on Azure, where we can customize our containers a little more and also enable moving data in and out of a database.
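The shape we have in mind for the database version is roughly the sketch below: an in-memory SQLite database stands in for the Azure-hosted one (requires the DBI, dplyr, dbplyr, and RSQLite packages), and the point is that the grouping/filtering is translated to SQL, so a worker only ever holds the rows a user actually asks for:

```r
library(DBI)
library(dplyr)

# Stand-in for the Azure database; mtcars stands in for our real tables.
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "big_table", mtcars)

# dplyr/dbplyr translate this pipeline to SQL; only the filtered
# result is pulled into R memory by collect().
result <- tbl(con, "big_table") |>
  filter(cyl == 6) |>
  collect()

DBI::dbDisconnect(con)
```

That should keep per-worker memory roughly flat regardless of how large the underlying tables grow.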

Thanks for your help.