proxy_cache_use_stale-like directive #38
Comments
@lloydzhou This relies on the cache backend storage (like Redis) to implement expiration. If you have ideas on how to implement this "use stale" feature in ngx_srcache, then feel free to submit a pull request. Thanks!
@agentzh
We need to update the redis module to add an "X-Expire" header.
@lloydzhou Maybe you can just use a little bit of Lua for such extra logic in your srcache_fetch subrequest. No need to change existing nginx C modules as far as I can see :)
- Cache in Redis, and put the expire time into an "X-Expire" header.
- Get the cached content from Redis using redis2-nginx-module:
  - parse the result, and set the expire time into the "X-Expire" header (see the sketch after this list).
- Store content into Redis using redis2-nginx-module.
- TODO
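A rough sketch of what the fetch side of this plan could look like. It uses lua-resty-redis (rather than parsing a raw redis2-nginx-module reply) purely to keep the example short; the location name, `key` argument and Redis address are assumptions, and only the "X-Expire" convention comes from this discussion.

```nginx
location = /redis_fetch {
    internal;

    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1s

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "redis connect failed: ", err)
            return ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
        end

        local key = ngx.unescape_uri(ngx.var.arg_key)
        local body = red:get(key)
        if not body or body == ngx.null then
            return ngx.exit(ngx.HTTP_NOT_FOUND)  -- srcache treats non-200 as a miss
        end

        -- expose the remaining TTL so the main request can tell whether the
        -- entry is still fresh or "stale but possibly usable"
        local ttl = red:ttl(key)
        if ttl and ttl >= 0 then
            ngx.header["X-Expire"] = ttl
        end

        red:set_keepalive(10000, 100)
        ngx.print(body)
    }
}
```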
@agentzh
@lloydzhou Nope, not really. That directive runs in the header filter phase. It's already too late to serve anything else in that phase. The ngx_proxy module does implement the use-stale cache thing directly in the underlying upstream mechanism. The best bet for us to redo it in Lua is to rely on the upcoming balancer_by_lua* directive in ngx_lua, which also runs directly inside the stock upstream facility.
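For reference, a bare-bones sketch of the balancer_by_lua* approach mentioned above, using the ngx.balancer API from lua-resty-core (the upstream name and peer address are placeholders, and the actual "use stale" decision logic is only hinted at):

```nginx
upstream backend {
    # an invalid placeholder address; the real peer is chosen by the Lua code
    server 0.0.0.1;

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        -- a "use stale" scheme could inspect balancer.get_last_failure()
        -- here and pick a fallback peer (e.g. a cache server) on errors
        local ok, err = balancer.set_current_peer("127.0.0.1", 8080)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
}
```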
@agentzh
@lloydzhou cool :) We have something similar with memcached at CloudFlare. In addition, we have a secondary caching layer based on lua_shared_dict ;)
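Purely illustrative: the kind of lua_shared_dict-based extra caching layer mentioned above, sitting in front of the Redis/memcached level. The dictionary name, size, TTL and backend address are all assumptions.

```nginx
http {
    lua_shared_dict page_cache 50m;

    server {
        listen 80;

        location / {
            content_by_lua_block {
                local cache = ngx.shared.page_cache
                local key = ngx.var.uri

                local body = cache:get(key)
                if not body then
                    -- miss in the local dictionary: fall back to the
                    -- slower cache / upstream via a subrequest
                    local res = ngx.location.capture("/fallback" .. key)
                    if res.status ~= 200 then
                        return ngx.exit(res.status)
                    end
                    body = res.body
                    cache:set(key, body, 60)  -- keep a local copy for 60s
                end

                ngx.print(body)
            }
        }

        location /fallback/ {
            internal;
            rewrite ^/fallback(.*)$ $1 break;
            proxy_pass http://127.0.0.1:8080;  # assumed backend / next cache level
        }
    }
}
```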
@agentzh
@agentzh
@agentzh I came here looking for a similar solution. Any chance this will get merged into
@lloydzhou May I know what the latest update on this is? Are you still using https://github.com/lloydzhou/lua-resty-cache or https://gist.github.com/lloydzhou/d1dfc41f56866c4b82a6 ?
@rahul286 I am using https://github.com/lloydzhou/lua-resty-cache in production.
Here is my solution to this problem: combine srcache with nginx's proxy_cache to set up two-level caching. Here is the snippet:
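A minimal sketch of such a two-level setup (not necessarily the exact original snippet: the cache zone name, TTLs, Redis address and backend are assumptions, and /redis_fetch can be the Lua fetch location sketched earlier in this thread):

```nginx
# in the http{} block
proxy_cache_path /var/cache/nginx keys_zone=level2:10m max_size=1g inactive=60m;

server {
    listen 80;

    # level-1 store: write the response into Redis with an expire time
    # (same pattern as the redis2-nginx-module store example in the
    # srcache-nginx-module README)
    location = /redis_store {
        internal;

        set_unescape_uri $exptime $arg_exptime;
        set_unescape_uri $key $arg_key;

        redis2_query set $key $echo_request_body;
        redis2_query expire $key $exptime;
        redis2_pass 127.0.0.1:6379;
    }

    location / {
        set $key $uri$args;
        set_escape_uri $escaped_key $key;

        # level 1: Redis via srcache
        srcache_fetch GET /redis_fetch key=$escaped_key;
        srcache_store PUT /redis_store key=$escaped_key&exptime=120;

        # level 2: nginx's own disk cache, which still holds a (possibly
        # stale) copy after the Redis key has expired
        proxy_cache level2;
        proxy_cache_valid 200 2m;
        proxy_cache_use_stale error timeout updating http_500 http_502;

        proxy_pass http://127.0.0.1:8080;  # assumed backend
    }
}
```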
If we add '$srcache_fetch_status $srcache_store_status $upstream_cache_status' to the access log format, we can see for each request whether it was served from Redis, from the proxy cache, or from the backend.
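For reference, an access log carrying those variables might be configured like this (the "cachelog" format name is made up for the example):

```nginx
log_format cachelog '$remote_addr [$time_local] "$request" $status '
                    'srcache_fetch=$srcache_fetch_status '
                    'srcache_store=$srcache_store_status '
                    'proxy_cache=$upstream_cache_status';

access_log logs/access.log cachelog;
```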
The main benefit of the above config: even after the Redis keys have expired, nginx's proxy_cache still holds a local (possibly stale) copy, which proxy_cache_use_stale can serve while a fresh response is fetched from the backend.
Now we just need a way to send the stale Link preload/preconnect headers as a "103 Early Hints" response, instead of sending the whole stale response, while waiting for the fresh response to become available.
The proxy_cache_use_stale directive determines in which cases a stale cached response can be used when an error occurs during communication with the proxied server.
But when I use srcache + Redis, Redis always deletes the expired keys, so there is nothing stale left to serve...
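For comparison, this is roughly how the stock ngx_proxy behaviour described above is configured (the cache zone, path and backend address are placeholders):

```nginx
# in the http{} block
proxy_cache_path /var/cache/nginx keys_zone=my_zone:10m max_size=1g;

server {
    location / {
        proxy_cache my_zone;
        proxy_cache_valid 200 5m;

        # serve a stale cached response when talking to the upstream fails
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;

        proxy_pass http://127.0.0.1:8080;  # assumed backend
    }
}
```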