Issue description
There is no issue when only one pod is created.
With more than one pod:
- the first page load is extremely long;
- requests frequently time out;
- Shiny Server logs a warning: websocket timeout (see the note below).
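Shiny keeps a long-lived websocket open for each session, so every request from the same browser has to keep reaching the same pod; when connections are spread across pods, the websocket drops and Shiny Server reports the timeout above. For comparison only, one simple form of stickiness is Service-level ClientIP affinity; this is just a sketch (hypothetical Service name) and not the cookie-based stickiness configured in the manifest below:

apiVersion: v1
kind: Service
metadata:
  name: shiny-clientip-demo          # hypothetical name, for illustration only
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP          # pin each client IP to one pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800          # default affinity timeout (3 hours)
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3838
  selector:
    app: session-affinity-demo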
Environment:
One master node and two worker nodes in an AWS VPC.
A public IP address is associated with the master node.
Ubuntu 18.04
Docker: 19.03.2
Kubernetes: v1.16.0
Kubernetes configuration file:
---
apiVersion: v1
kind: Service
metadata:
  name: shiny-server-service
  annotations:
    # DigitalOcean load-balancer annotations for cookie-based sticky sessions
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
  labels:
    app: shiny-service-label
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3838        # Shiny Server listens on 3838 inside the container
  selector:
    app: session-affinity-demo
  externalIPs:
    - 10.0.0.41
  externalTrafficPolicy: Local
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: session-affinity-demo
  labels:
    app: session-affinity-demo
spec:
  replicas: 20
  selector:
    matchLabels:
      app: session-affinity-demo
  template:
    metadata:
      labels:
        app: session-affinity-demo
    spec:
      containers:
      - name: session-affinity-demo
        image: docker.epi-interactive.com/image_name_here
        ports:
        - containerPort: 3838
          protocol: TCP
      imagePullSecrets:
      - name: regcred
      # This is necessary for sticky-sessions because it can
      # only consistently route to the same nodes, not pods.
      affinity:
        podAntiAffinity:
          # required anti-affinity on the node hostname schedules at most
          # one replica per node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: session-affinity-demo
            # topologyKey must be a node label key; kubernetes.io/hostname
            # is the standard per-node key
            topologyKey: kubernetes.io/hostname
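With only two worker nodes, the required anti-affinity above allows at most one replica per node, so most of the 20 replicas are likely to stay Pending. A softer variant that merely prefers spreading across nodes (a sketch only, reusing the same labels, as a drop-in replacement for the affinity block above) would look like:

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: session-affinity-demo
              topologyKey: kubernetes.io/hostname   # standard per-node label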