If your pusher services are crashing, a possible fix is to increase the pod's CPU and memory limits. In this example we increased the CPU limit from 250m (millicores) to 1 (one full core) and the memory limit from 75Mi to 100Mi. The right values depend on your deployment, but in any case you should increase both limits at the same time.
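For reference, the resulting limits for the pusher container would look roughly like this (a sketch only; your container's existing resources block and starting values may differ):

resources:
  limits:
    cpu: "1"        # raised from 250m
    memory: 100Mi   # raised from 75Mi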
To apply the increase, follow these steps:
kubectl get po -A | grep pusher
The output will identify the pusher pod and the namespace it runs in.
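For example (the pod-name suffix, restart count, and age below are illustrative):

<namespace>   pusher-service-664f5846dd-x7k2p   1/1   Running   6   51d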
Next, run:
kubectl get po pusher-service-<pod id> -n <namespace> -o yaml
The output should look something like this:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 100.96.21.32/32
    cni.projectcalico.org/podIPs: 100.96.21.32/32
    kubernetes.io/psp: domino-restricted
  creationTimestamp: "2021-04-05T01:05:32Z"
  generateName: pusher-service-664f5846dd-
  labels:
    app.kubernetes.io/instance: pusher-service
    app.kubernetes.io/name: pusher-service
    pod-template-hash: 664f5846dd
    rabbitmq-ha-client: "true"
  name: pusher-service-<id>
  namespace: <namespace>
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: pusher-service-664f5846dd
    uid: 1ac9c32a-6888-49c9-8a05-55605c8722b1
  resourceVersion: "13363893"
In ownerReferences, the kind field says ReplicaSet, which means the pod is managed by a ReplicaSet, and a ReplicaSet is in turn usually owned by a Deployment. To confirm which one, run:
kubectl get deployment -A | grep -i pusher
<namespace> pusher-service 1/1 1 1 51d
We see it is managed by a Deployment, which can be edited with the following command:
kubectl edit deployment pusher-service -n <namespace>
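This opens the Deployment manifest in an editor. Locate the resources block for the pusher container under spec.template.spec.containers (the container name is typically pusher-service, but verify it in your manifest) and raise the cpu and memory limits to the values discussed above. Save and exit; Kubernetes then rolls out new pusher pods with the updated limits. You can watch the rollout with:

kubectl rollout status deployment pusher-service -n <namespace>

and re-run the earlier kubectl get po ... -o yaml command to confirm the new limits took effect.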