Describe the bug
At random points in time, all clients (Desktop and CLI) connected through the Kubernetes worker lose connectivity and are unable to connect. After roughly 5 minutes, connectivity returns and clients can reach the requested targets again. This happens for all targets and all clients.
On the Desktop client, I can't seem to find any relevant logs. On the CLI client, here are the logs:
error fetching connection to send session teardown request to worker: Error dialing the worker: failed to WebSocket dial: failed to send handshake request: Get "http://public-k8s-worker:9202/v1/proxy": context deadline exceeded
This keeps repeating until the connection/session comes back online. While this is happening, polling /worker-info on the Kubernetes worker still reports "READY" for the GRPC upstream connection state.
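To help narrow this down during an outage, a small reachability probe against the same URL the CLI reports in its error can show whether the worker's proxy listener answers at all before a deadline. This is only a rough diagnostic sketch, not part of Boundary; the address and path are taken verbatim from the error above, and a hung connection should surface as the same "context deadline exceeded".

```go
// Reachability probe for the worker proxy listener, using the address from
// the CLI error ("public-k8s-worker:9202"). It only checks that an HTTP GET
// to /v1/proxy completes before a deadline; it does not perform Boundary's
// actual WebSocket handshake.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Short deadline so a hung connection fails with
	// "context deadline exceeded", matching the CLI error.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://public-k8s-worker:9202/v1/proxy", nil)
	if err != nil {
		fmt.Println("building request:", err)
		return
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// During the outage this is expected to time out.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("worker responded, status:", resp.Status)
}
```

Running this from the same network location as a failing client, alongside a poll of /worker-info on the worker, would show whether the proxy listener stops answering even while the worker reports READY upstream.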
To Reproduce
The issue occurs at random; there is no known interval, and the frequency is not uniform.
Expected behavior
Open sessions should not be interrupted.
Additional context
Worker version: v0.15.4
Controller version: v0.14.5
CLI version: 0.14.3
3 controllers in HA setup and one Kubernetes worker