Hi @marksweb. The other thing to measure, beyond memory and CPU, is response time: when does it start getting slower? For simple request-response traffic, Nginx can log the upstream response time. For long-lived requests, you may need to add some custom timing hooks at key places.
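For the simple case, Nginx's `$upstream_response_time` variable in a custom `log_format` gives you the per-request upstream timing. For hooks inside the ASGI stack itself, a minimal sketch of a timing wrapper (the middleware name and the `print` reporting are mine, not part of Daphne; swap in your metrics client):

```python
import time


class TimingMiddleware:
    """Hypothetical ASGI middleware that reports how long each scope takes."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        start = time.monotonic()
        try:
            await self.app(scope, receive, send)
        finally:
            elapsed = time.monotonic() - start
            # Replace print() with your metrics client (StatsD, Prometheus, ...).
            print(f"{scope['type']} {scope.get('path', '')} took {elapsed:.3f}s")
```

For long-lived websocket scopes this reports connection lifetime rather than latency, so you'd likely want finer-grained timers around individual consumer handlers instead.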
I've got a GKE cluster running Nginx and daphne workloads to provide websockets for some chat functionality. (I've not defined `ASGI_THREADS`, so it's a very default setup.)

What scaling metrics do people use for their daphne/channels infrastructure? Currently my cluster is set up to scale on CPU, but daphne/channels don't really use much CPU. The pod resources I'm running are:
One potential fix might be to reduce the CPU requests and lower the scaling threshold, but that's not really how scaling should work: it's just guessing at when capacity will be needed rather than reacting to a signal that reflects actual load.
The Django side of things is simple, set up using Redis:
I've been wondering if there are custom metrics that I might be able to gather, similar to getting the queue lengths from celery.
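One hedged sketch of such a metric, assuming channels_redis keeps per-channel backlogs in Redis lists under an `asgi:`-style key prefix (an assumption about its internal key layout; verify it against the channels_redis version you actually run before scaling on it):

```python
def backlog_key(channel, prefix="asgi"):
    """Build the Redis key for a channel's message list.

    ASSUMPTION: channels_redis stores waiting messages in Redis lists keyed
    as "<prefix>:<channel>"; confirm against your installed version.
    """
    return f"{prefix}:{channel}"


def channel_backlog(client, channel):
    """Number of messages waiting on one channel (any redis-py-like client)."""
    return client.llen(backlog_key(channel))


if __name__ == "__main__":
    import redis  # illustrative usage; requires a reachable Redis server

    print(channel_backlog(redis.Redis(), "chat.message"))
```

A small sidecar exposing this number (e.g. through a Prometheus exporter) would let the HPA scale on message backlog rather than CPU, much like scaling celery workers on queue length.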