High CPU due to lock contention on _queue in DedicatedThreadPoolPipeScheduler.Schedule #44
Comments
I had some success mitigating this by creating a new SocketManager for every 5 multiplexers. That seemed to reduce the lock contention. There are still perf issues after this mitigation, but they are no longer related to lock contention. Is this the recommended approach (create more socket managers)?
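A minimal sketch of that mitigation, assuming the StackExchange.Redis 2.x API (the `SocketManager(string name)` constructor and the `ConfigurationOptions.SocketManager` property); the group size of five and the `CreateMultiplexers` helper name are just illustrative:

```csharp
using StackExchange.Redis;

// Sketch: share one SocketManager (and its dedicated IO workers) per
// group of five multiplexers instead of letting every multiplexer use
// the single shared manager.
static ConnectionMultiplexer[] CreateMultiplexers(string configuration, int count)
{
    var connections = new ConnectionMultiplexer[count];
    SocketManager manager = null;

    for (int i = 0; i < count; i++)
    {
        if (i % 5 == 0)
        {
            // New dedicated scheduler for the next five connections.
            manager = new SocketManager($"redis-group-{i / 5}");
        }

        var options = ConfigurationOptions.Parse(configuration);
        options.SocketManager = manager;
        connections[i] = ConnectionMultiplexer.Connect(options);
    }

    return connections;
}
```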
@mgravell I got another query about this issue, and it seems the root cause is that lock contention spikes under high load, and that contention then makes the CPU spikes even worse and blocks threads. Given that the DedicatedThreadPoolPipeScheduler is already designed to fall back gracefully to the thread pool when no dedicated workers are available, do you think it would help if it used Monitor.TryEnter instead of lock() in the Enqueue implementation, so that it could also fall back to the thread pool under lock contention?
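Purely as an illustration of that suggestion (this is not the library's code: the real `DedicatedThreadPoolPipeScheduler` lives in Pipelines.Sockets.Unofficial and its internals differ, and the type and field names below are invented), a scheduler could try the lock without blocking and hand contended work straight to the shared thread pool:

```csharp
using System;
using System.Collections.Generic;
using System.IO.Pipelines;
using System.Threading;

// Hypothetical sketch of the proposal; dedicated worker threads that
// drain _queue are omitted.
public sealed class ContentionAwareScheduler : PipeScheduler
{
    private readonly object _syncLock = new object();
    private readonly Queue<(Action<object> Action, object State)> _queue
        = new Queue<(Action<object>, object)>();

    public override void Schedule(Action<object> action, object state)
    {
        // Try to take the lock without blocking; if another thread holds
        // it, don't wait.
        if (Monitor.TryEnter(_syncLock))
        {
            try
            {
                _queue.Enqueue((action, state));
                Monitor.Pulse(_syncLock); // wake a dedicated worker
                return;
            }
            finally
            {
                Monitor.Exit(_syncLock);
            }
        }

        // Contended: fall back to the shared thread pool instead of spinning.
        ThreadPool.QueueUserWorkItem(boxed =>
        {
            var (a, s) = ((Action<object>, object))boxed;
            a(s);
        }, (action, state));
    }
}
```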
Has there been any traction on this issue? We're also observing this with 48 multiplexers, and moving to 24 seems to have significantly reduced the locking and high CPU.
I'm currently in the process of overhauling the entire IO core. This isn't an explicit aim, but a side-effect will almost certainly be to remove this completely. Probably a few weeks until an early preview is publicly available.
Awesome, @mgravell
What conditions would we expect this to happen under? Just purely load, or could misconfiguration cause it too? We see this at high load as well (.NET Framework), and although we have a lot of multiplexers to share the load, they all seem to use the shared scheduler. Would a good solution be, as @doanair mentioned, to use multiple SocketManagers to spread the load?
Under high load, our application is seeing very high CPU spikes due to lock contention in the StackExchange.Redis library. I tracked it down to a lock in this method: DedicatedThreadPoolPipeScheduler.Schedule
This is the stack trace:
LockContentionStackTrace.txt
--
|+ system.io.pipelines!Pipe.FlushAsync
||+ system.io.pipelines!System.IO.Pipelines.Pipe+DefaultPipeWriter.FlushAsync(value class System.Threading.CancellationToken)
|| + stackexchange.redis!PhysicalConnection.FlushAsync
|| |+ stackexchange.redis!PhysicalBridge.WriteMessageTakingWriteLockAsync
|| | + stackexchange.redis!PhysicalBridge.TryWriteAsync
|| | + stackexchange.redis!ServerEndPoint.TryWriteAsync
|| | + stackexchange.redis!ConnectionMultiplexer.ExecuteAsyncImpl
|| | + stackexchange.redis!RedisBase.ExecuteAsync
|| | + OTHER <<clr!InstantiatingMethodStubWorker>>
|| | + stackexchange.redis!RedisDatabase.HashGetAllAsync
|| | + stackexchange.redis!RedisDatabase.ScriptEvaluateAsync
I am using several (20-30) ConnectionMultiplexer objects in a round-robin pool to accommodate the amount of data the service is requesting.
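For context, such a round-robin pool is along these lines (the `MultiplexerPool` type and its members are illustrative, not part of the library):

```csharp
using System.Threading;
using StackExchange.Redis;

// Illustrative round-robin pool over several multiplexers.
public sealed class MultiplexerPool
{
    private readonly ConnectionMultiplexer[] _connections;
    private int _next = -1;

    public MultiplexerPool(string configuration, int size)
    {
        _connections = new ConnectionMultiplexer[size];
        for (int i = 0; i < size; i++)
        {
            _connections[i] = ConnectionMultiplexer.Connect(
                ConfigurationOptions.Parse(configuration));
        }
    }

    public IDatabase GetDatabase()
    {
        // Rotate across the pooled multiplexers; the unsigned cast keeps
        // the index valid even after the counter overflows.
        int index = (int)((uint)Interlocked.Increment(ref _next) % (uint)_connections.Length);
        return _connections[index].GetDatabase();
    }
}
```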