
Fix flakes in ./topdown/cache #7188

Open
wants to merge 1 commit into main
Conversation

evankanderson
Contributor

@evankanderson commented Nov 22, 2024

Why the changes in this PR are needed?

In #7180, I got a few test failures in untouched code in ./topdown/cache. Running the tests with go test -count 200 ./topdown/cache (which takes around 400s), I got 5 failures across a few different tests.

The underlying cause seems to be the timing of the ticker relative to the tests. The tests create a cache with a 1s expiry scan (which starts a timer to tick at 1s, then re-creates the timer to run 1s later, and so on), then immediately go on to create objects with a 1s expiry. The tests then sleep for 2s and check whether the items have been evicted. Unfortunately, the following timing can happen (a runnable sketch follows the list):

  • Ticker is created at +0.000s
  • Cache item is created, expiring at +1.001s
  • Ticker ticks for cache cleanup at +1.000s, the item is still valid
  • Cache cleanup completes at +1.001s, and the next tick is scheduled for +2.001s
  • Check code runs at +2.000s, and finds the item not yet evicted
  • Ticker ticks at +2.001s, and cleans up cache items after the test has failed
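
Here is a minimal, self-contained sketch of that racy pattern (illustrative names and types, not the actual test code):

package main

import (
    "fmt"
    "sync"
    "time"
)

type entry struct{ expires time.Time }

type cache struct {
    mu    sync.Mutex
    items map[string]entry
}

func (c *cache) cleanStale() {
    c.mu.Lock()
    defer c.mu.Unlock()
    now := time.Now()
    for k, e := range c.items {
        if now.After(e.expires) {
            delete(c.items, k)
        }
    }
}

func main() {
    c := &cache{items: map[string]entry{}}

    // Ticker created at +0.000s; first tick at ~+1.000s.
    t := time.NewTicker(time.Second)
    go func() {
        for {
            <-t.C
            t.Stop()
            c.cleanStale()
            // Mirrors the current code: the next tick is scheduled only
            // after the scan completes, so the schedule drifts later.
            t = time.NewTicker(time.Second)
        }
    }()

    // Item created a hair after the ticker, expiring at ~+1.001s,
    // so the +1.000s tick still sees it as valid.
    c.mu.Lock()
    c.items["x"] = entry{expires: time.Now().Add(time.Second)}
    c.mu.Unlock()

    // The check at ~+2.000s races the drifted ~+2.001s tick and can
    // observe the entry before it is evicted: a flaky failure.
    time.Sleep(2 * time.Second)
    c.mu.Lock()
    _, stillThere := c.items["x"]
    c.mu.Unlock()
    fmt.Println("evicted:", !stillThere)
}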

What are the changes in this PR?

I'm willing to pare this down to just the timing changes, but I tried to improve the codebase a bit along the way. 😁

  • It may be slightly surprising to users that items can remain in the cache for up to stale_entry_eviction_period_seconds after their expiration. I made the ticker tick twice per interval, halving the "items remain after expiry" visibility window.
  • The current code re-creates and re-schedules the ticker after each cache scan. This generates some extra work in exchange for a possible pacing benefit: if a cleanup pass takes most of the stale_entry_eviction_period_seconds duration, the next pass isn't scheduled until stale_entry_eviction_period_seconds after the previous one completes. On the other hand, this also means items can expire even later than expected. This change maintains a fixed ticker cadence regardless of the size of the cache (the fixed-cadence loop is sketched after this list). I suspect any OPA instance where cache clearing approaches stale_entry_eviction_period_seconds is going to be pretty unhappy regardless of the pacing.
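
For contrast, a minimal sketch of the fixed-cadence loop described in the second bullet (illustrative names, not the actual patch):

package cache // illustrative package name

import "time"

// runFixedCadence keeps the cleanup on a fixed tick schedule, independent
// of how long each clean() pass takes. Go's time.Ticker drops ticks
// rather than queueing them if clean() overruns the period.
func runFixedCadence(period time.Duration, clean func(), stop <-chan struct{}) {
    t := time.NewTicker(period)
    defer t.Stop()
    for {
        select {
        case <-t.C:
            clean()
        case <-stop:
            return
        }
    }
}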

To reduce test execution time and make higher go test -count N runs practical, I reduced the execution time of ./topdown/cache from 2.2s to around 1.2s by tweaking the various expirations and sleeps.

Notes to assist PR review:

My gut feel is that the additional cost of running the cache scan twice as frequently will be mostly offset by not needing to clear and re-create Ticker objects (and the associated channels, etc). Also, hopefully, this cost should be in the noise for most installations.
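
That intuition could be sanity-checked with a micro-benchmark along these lines (hypothetical, not part of the PR; it times only the ticker-churn half of the trade-off):

package cache

import (
    "testing"
    "time"
)

// BenchmarkTickerChurn measures the per-cycle cost of the create/stop
// pattern the current code pays on every cleanup pass.
func BenchmarkTickerChurn(b *testing.B) {
    for i := 0; i < b.N; i++ {
        t := time.NewTicker(time.Hour)
        t.Stop()
    }
}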

A more conservative change would be simply to increase the 2 * time.Second sleeps to 2.2 seconds; in any case, I'd preserve the extra comments, as the code is slightly subtle.

(A third, rejected option would be to check the expiry in unsafeGet, and pretend the object was not in the cache as soon as it expired. This would add a time fetch to each unsafeGet, which seemed more expensive than necessary.)
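
For reference, the rejected option would have looked roughly like this (unsafeGet is the function named above, but the types and body here are an illustrative sketch):

package cache // illustrative

import "time"

type entry struct {
    value   interface{}
    expires time.Time
}

type cache struct {
    items map[string]entry
}

// unsafeGet with the rejected expiry check: an expired entry is
// reported as absent, at the cost of a time.Now() call on every lookup.
// (The caller is assumed to hold the cache lock, hence "unsafe".)
func (c *cache) unsafeGet(k string) (interface{}, bool) {
    e, ok := c.items[k]
    if !ok || time.Now().After(e.expires) {
        return nil, false
    }
    return e.value, true
}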

Further comments:

This passed successfully on a 500-count run, as well as several smaller runs (MacBook M2, moderate other load):

$ go test -count 500 ./topdown/cache
ok      github.com/open-policy-agent/opa/topdown/cache  550.795s

@ashutosh-narkar
Member

@evankanderson thanks for looking into this. I think it's fine to fix the tests in this change and open a separate issue with more discussion around stale_entry_eviction_period_seconds. @rudrakhp may have some thoughts on that. I don't quite understand the change made in the cache and why we should make that change just to make the tests work.

@evankanderson
Contributor Author

As I said, I may have gotten overzealous in the cleanup!

I was confused at first about why an entry with a 1s lifespan was still alive 2s later, but this is existing behavior, so I'm willing to leave it as-is and have the tests run a bit longer (about 0.5s).

The main impact is on folks running the tests hundreds of times, which is admittedly not a common use-case.

@ashutosh-narkar
Member

@rudrakhp any thoughts about the tests?

@rudrakhp
Contributor

rudrakhp commented Nov 23, 2024

@evankanderson thanks for the catch! We can choose to sleep for 3 * stale_entry_eviction_period_seconds in the tests, which will ensure at least one cleanup run after the cache entry expires. Or, if we want to be conservative and keep the tests fast for folks running them a large number of times, 2.3 * stale_entry_eviction_period_seconds also works.

My gut feel is that the additional cost of running the cache scan twice as frequently will be mostly offset by not needing to clear and re-create Ticker objects (and the associated channels, etc). Also, hopefully, this cost should be in the noise for most installations.

Irrespective of whether any items have expired, we do a linear scan of the cache in the cleanup routine. Unless I am missing something here, the cost of running the cleanup routine on each tick seems to be far greater than the cost of re-creating the ticker and its associated channel. I understand that the proposal is to run the routine twice as often to ensure stale entries are deleted, but I am not sure the correctness is worth what we lose in runtime performance.

To reduce test execution time and make higher go test -count N runs practical, I reduced the execution time of ./topdown/cache from 2.2s to around 1.2s by tweaking the various expirations and sleeps.

Agree that it takes a lot of time for the tests to run, as 1s is the minimum expiry time we can set today. From a runtime performance perspective, I don't see why anyone would want to schedule the (costly) cleanup routine more frequently. But we could accept floating-point numbers instead of integers for stale_entry_eviction_period_seconds, which would allow us to schedule more frequent cleanups in the tests.
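
For illustration, a fractional period would map onto a Go Duration like this (a sketch assuming a hypothetical float-valued field; the real config accepts only integers today):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Hypothetical: stale_entry_eviction_period_seconds parsed as a float.
    periodSeconds := 0.1
    t := time.NewTicker(time.Duration(periodSeconds * float64(time.Second))) // 100ms cadence
    defer t.Stop()
    <-t.C
    fmt.Println("first cleanup tick after", periodSeconds, "seconds")
}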

Like @ashutosh-narkar already mentioned, I don't think we should change the cache cleanup behaviour just for the tests, especially if it's something like invoking the cleanup twice as frequently, which users might not expect. We could brainstorm on this aspect in a separate PR, though.

Signed-off-by: Evan Anderson <evan@stacklok.com>
@evankanderson
Contributor Author

I removed the frequency change, and validated the new test timings for 500 runs.

iqCache.cleanStaleValues()
// The ticker is re-created after each scan, so the next scan is scheduled relative to the end of this one.
cleanupTicker = time.NewTicker(time.Duration(iqCache.staleEntryEvictionTimePeriodSeconds()) * time.Second)
Contributor
Are we ok with this in scenarios where the cleanStaleValues routine can take longer than staleEntryEvictionTimePeriodSeconds?

Contributor Author
I can switch this back, but since the lock is held throughout cleanStaleValues, the actual evaluation experience will certainly suffer either way.
