How to get a (async) sentinel connection pool working correctly #3433
OK so after a fair bit of experimentation, I think this is roughly the way to go:
This seems to handle failovers and restores reasonably gracefully, and doesn't close the connection. There are an awful lot of subtle gotchas here, though:
I'm hoping the maintainers can take this feedback on board at some point and provide the necessary fixes/hooks/documentation so that this can be done more cleanly client-side.
Version: 5.0.1
Platform: Python 3.12 on Linux
Description: The end goal is to reliably get a writable or read-only connection, efficiently from a pool, from a set of Redis servers running Sentinel on top. If we just had a single server we could simply do:
But we want reliability in case the server fails. So we connect to sentinel:
and the docs then advise using .slave_for() or .master_for(). However, watching tcpdump, these establish a new connection to the target server each time, rather than using any sort of long-lived pooling. Looking at the code, for example https://github.com/redis/redis-py/blob/master/redis/asyncio/sentinel.py#L349-L351 (the non-async and slave variants are the same), it seems a new connection pool is generated on each call to master_for/slave_for. A naive fix would be to have a function like
.master_for_pool() which returns a pool on the master, from which we can pull new connections, but this wouldn't handle sentinel failovers correctly. So I think there's a need for some sort of SentinelPool abstraction over the top of the Sentinel layer, which would maintain connection pools to all of the servers in the cluster and dish those connections out to the caller according to the availability of each server and the state of sentinel on the cluster.
I may well be missing something obvious here, but from looking at the docs/code I cannot see a simple way to achieve this, which I'd have thought was the purpose of sentinel?