High memory consumption #94
Comments
@vcunat could you please elaborate more on how it may cause constant memory growth? A 5-minute TTL seems harmless to me.
No, the growth itself does sound like a bug. Reducing the TTL will make the resolver do more work, etc., but otherwise it's probably just a coincidence that it triggers that bug/growth. I just wanted to point out that swapping the lines is basically the same as not changing the TTL limit.
Thanks for pointing that out. It was not obvious to me.
I see two plausible options:
I'll definitely investigate your suggestions. Thanks for sharing. 🙇‍♂️ @vcunat But I am still puzzled by the fact that using such a simple setting as …
Cache size is unrelated; that's always exactly a 100 MiB file, mapped to memory (according to your config).
I mean, the cache file will be part of the RAM usage that you see, but it has that hard upper limit. |
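For context, a sketch of how that hard cap is usually expressed in the kresd Lua config; the introspection properties are taken from the upstream docs and may differ by version, so treat them as assumptions:

```lua
cache.size = 100 * MB         -- hard upper limit: a 100 MiB LMDB file, mapped into memory
print(cache.current_size)     -- configured maximum size in bytes
print(cache.current_storage)  -- backing store URI, e.g. 'lmdb:///var/cache/knot-resolver'
```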
Hi,

I encountered some very strange behavior with knot resolver. For some reason my config causes the `kresd` process to bloat linearly (~10 MB / hour) and eat hundreds of megabytes of memory even without any load. But when I set `max_ttl` before opening the cache file, the problem disappears and the memory footprint stays at ~17 MB (see the sketch just below for the two orderings). Here is the Dockerfile I used.

I would be grateful for any ideas and debug suggestions.
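A minimal sketch of the two orderings being discussed, assuming a standard kresd Lua config (the 100 MiB cache size and the 5-minute `max_ttl` come from this thread; the LMDB path is only a placeholder):

```lua
-- Sketch A: the problematic ordering from this report — the cache is opened
-- first, then the TTL cap is applied.
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')  -- 100 MiB, memory-mapped LMDB file
cache.max_ttl(5 * 60)                                    -- cap cached TTLs at 5 minutes

-- Sketch B: the ordering where the footprint stayed at ~17 MB — max_ttl set
-- *before* the cache is opened. As noted in the comments above, this is
-- effectively the same as leaving the TTL limit at its default.
-- cache.max_ttl(5 * 60)
-- cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')
```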
UPD: Apparently, the lower `max_ttl` is set, the quicker RAM is consumed. Calling `cache.clear()` does nothing. Running `kres-cache-gc` does nothing.
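If it helps to double-check whether clearing has any visible effect, a small sketch using kresd's approximate record counter (assuming `cache.count()` is available in this version):

```lua
print(cache.count())   -- approximate number of cached records before clearing
cache.clear()          -- the call reported above as having no effect
print(cache.count())   -- compare afterwards; a real clear should drop this towards zero
```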