Make heavy loaded optimization configurable #2 #211
base: master
Conversation
Force-pushed from 950f27f to c894f1f
- return c.streams.NumStreams/2 > c.AvailableStreams()
+ return c.StreamsInUse() > c.session.cfg.HeavyLoadedConnectionThreshold
This part was already configurable through the MaxRequestsPerConn parameter in ClusterConfig.
I think we should either stick with the old parameter or remove it. Having two will be confusing.
Also, this PR currently changes the default (previously it was 32768 / 2, now it is 512). Why this number, and why should the default be changed?
The old way does not allow the user to both switch early and have lots of requests in flight at the same time.
MaxRequestsPerConn used to control not only the maximum number of requests in flight, but also the watermark after which a connection is considered under high load. This kind of coupling is never good.
Besides that, before this PR the driver considered a connection heavily loaded only after reaching 16k requests in flight, which is a HUGE number; it is safe to say that this feature never worked before.
Having a separate configuration for the heavy-load watermark lets the user allow the driver a large number of in-flight requests and at the same time switch to underutilized connections early.
> The old way does not allow the user to both switch early and have lots of requests in flight at the same time. MaxRequestsPerConn used to control not only the maximum number of requests in flight, but also the watermark after which a connection is considered under high load. This kind of coupling is never good.
> Besides that, before this PR the driver considered a connection heavily loaded only after reaching 16k requests in flight, which is a HUGE number; it is safe to say that this feature never worked before.

This number was configurable by the user, so it's not really safe to say this.

> Having a separate configuration for the heavy-load watermark lets the user allow the driver a large number of in-flight requests and at the same time switch to underutilized connections early.

If switching out early is the goal (which also means disabling shard awareness early!!) then I agree that HeavyLoadedConnectionThreshold makes sense.
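For illustration, a minimal sketch of the decoupled configuration discussed in this thread, assuming the new ClusterConfig fields land as shown in this PR's diff (the import path, host address, and concrete values are assumptions, not part of the PR):

```go
package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1")

	// Allow a large number of concurrent in-flight requests per connection.
	cluster.MaxRequestsPerConn = 16384

	// Treat a connection as heavily loaded much earlier, so the driver can
	// start preferring less utilized connections long before the hard limit.
	// Field names follow this PR; exact defaults may differ.
	cluster.HeavyLoadedConnectionThreshold = 512
	cluster.HeavyLoadedSwitchConnectionPercentage = 20

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
}
```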
scylla.go (Outdated)
- if alternative == nil || alternative.AvailableStreams()*120 > c.AvailableStreams()*100 {
-     return c
- } else {
+ if alternative != nil && alternative.StreamsInUse()*100 >= c.StreamsInUse()*(100-c.session.cfg.HeavyLoadedSwitchConnectionPercentage) {
Shouldn't it be < ?
Right now you are returning the alternative if it's more loaded than the original connection, not less.
It would be good to have some test to verify that the inequality is correct.
Both done: added tests and fixed the code.
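For illustration, a table-driven sketch of what such a test could look like, using a hypothetical standalone helper that mirrors the corrected comparison instead of the driver's internal Conn type (helper and package names are made up for the example):

```go
package heavyload

import "testing"

// shouldSwitchConnection is a hypothetical extraction of the corrected check:
// switch to the alternative only if its in-flight load is at most
// (100 - switchPercentage)% of the current connection's load.
func shouldSwitchConnection(altInUse, curInUse, switchPercentage int) bool {
	return altInUse*100 <= curInUse*(100-switchPercentage)
}

func TestShouldSwitchConnection(t *testing.T) {
	cases := []struct {
		name               string
		altInUse, curInUse int
		switchPercentage   int
		want               bool
	}{
		{"alternative far less loaded", 10, 100, 20, true},
		{"alternative equally loaded", 100, 100, 20, false},
		{"alternative more loaded", 150, 100, 20, false},
		{"alternative exactly at the margin", 80, 100, 20, true},
		{"alternative just over the margin", 81, 100, 20, false},
	}
	for _, tc := range cases {
		if got := shouldSwitchConnection(tc.altInUse, tc.curInUse, tc.switchPercentage); got != tc.want {
			t.Errorf("%s: got %v, want %v", tc.name, got, tc.want)
		}
	}
}
```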
@dkropachev Do you plan to continue work on this PR?
Sure, I am going to work on it this week.
Force-pushed from c894f1f to 71fba10
Force-pushed from 71fba10 to 43504ff
Force-pushed from 43504ff to 0834b9f
@sylwiaszunejko, @Lorak-mmk, it is done, please take a look.
Force-pushed from 0834b9f to 014cb35
It looks good to me, but maybe @Lorak-mmk has some comments?
I'll try to respond here soon, after reviewing scylladb/scylla-rust-driver#1061 and scylladb/scylla-rust-driver#1065.
cluster.go (Outdated)
if len(cfg.Hosts) == 0 {
	return ErrNoHosts
}

if cfg.Authenticator != nil && cfg.AuthProvider != nil {
	return errors.New("Can't use both Authenticator and AuthProvider in cluster config.")
}
I don't see how this change is related to this commit / PR.
Please put it in a separate commit at least.
cluster.go (Outdated)
if cfg.HeavyLoadedSwitchConnectionPercentage > 100 || cfg.HeavyLoadedSwitchConnectionPercentage < 0 {
	return fmt.Errorf("HeavyLoadedSwitchConnectionPercentage must be between 0 and 100, got %d", cfg.HeavyLoadedSwitchConnectionPercentage)
}

if cfg.HeavyLoadedConnectionThreshold < 0 {
	return fmt.Errorf("HeavyLoadedConnectionThreshold must be greater than or equal to 0, got %d", cfg.HeavyLoadedConnectionThreshold)
}

return nil
}
You should also validate that MaxRequestsPerConn >= HeavyLoadedConnectionThreshold, to prevent weird configurations.
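A minimal sketch of the suggested check, written as a hypothetical standalone helper (in the PR it would presumably sit next to the other validations in cluster.go; names follow the config fields above):

```go
package heavyload

import "fmt"

// validateHeavyLoadSettings sketches the suggested constraint: the heavy-load
// watermark must not exceed the per-connection request limit.
func validateHeavyLoadSettings(maxRequestsPerConn, heavyLoadedConnectionThreshold int) error {
	if maxRequestsPerConn < heavyLoadedConnectionThreshold {
		return fmt.Errorf("MaxRequestsPerConn (%d) must be >= HeavyLoadedConnectionThreshold (%d)",
			maxRequestsPerConn, heavyLoadedConnectionThreshold)
	}
	return nil
}
```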
The Scylla Go Driver has a capability to avoid sending requests to an overloaded shard, sending the request on a different connection (to the same node) instead. This change makes it possible to customize the parameters that determine when this behavior kicks in.
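As a rough mental model (a hypothetical simplification, not the driver's actual code), the two parameters interact like this: the threshold decides when a connection counts as heavily loaded, and the percentage decides how much less loaded an alternative must be before the driver gives up shard awareness and switches to it.

```go
package main

import "fmt"

// pickConnection models the configurable behavior: below the threshold the
// shard-aware connection is kept; above it, the driver switches to an
// alternative only if the alternative is sufficiently less loaded.
func pickConnection(threshold, switchPercentage, curInUse, altInUse int) string {
	if curInUse <= threshold {
		return "current" // not heavily loaded, keep shard awareness
	}
	if altInUse*100 <= curInUse*(100-switchPercentage) {
		return "alternative"
	}
	return "current"
}

func main() {
	fmt.Println(pickConnection(512, 20, 600, 100)) // alternative
	fmt.Println(pickConnection(512, 20, 300, 100)) // current (below threshold)
	fmt.Println(pickConnection(512, 20, 600, 590)) // current (alternative not much better)
}
```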
Force-pushed from 014cb35 to 5c22331
It is updated #114
Due to #114 being stuck with no progress