While integrating gdrivefs into a test suite, I noticed that it doesn't handle rate-limit errors with exponential backoff (https://developers.google.com/drive/api/v3/handle-errors#exponential-backoff). Would it make sense for me to contribute a similar decorator (perhaps without using funcy.retry, so no new dependency is introduced) to the example below? That way retries would be handled for all use cases, which matters a lot when making simultaneous requests (e.g. during CI with 10 workers): without retry logic the request just fails, but with it, one of the attempts succeeds.
Yes, I completely agree there should be proper retry logic with exponential backoff. This is done for s3fs and gcsfs, for example. dvc's implementation would be fine too (since it is meant for gdrive); but elsewhere we have found that the number of conditions that do need a retry easily grows with time!
https://github.com/iterative/dvc/blob/63f32936b20c23abc32e9dba1aba19ab5db804e9/tests/remotes/gdrive.py#L19-L47
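A minimal sketch of the kind of dependency-free decorator proposed above, using exponential backoff with jitter. The name `retry_with_backoff`, the jitter scheme, and the generic `errors` tuple are illustrative assumptions, not gdrivefs or dvc API; in practice `errors` would be narrowed to the specific rate-limit exceptions raised by the Drive client.

```python
import random
import time
from functools import wraps


def retry_with_backoff(tries=5, base_delay=1.0, errors=(Exception,)):
    """Retry the wrapped call with exponential backoff plus jitter.

    Sketch only: `errors` should be restricted to rate-limit
    exceptions (e.g. HTTP 403/429 from the Drive API) in real use.
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except errors:
                    if attempt == tries - 1:
                        raise  # out of retries, propagate the error
                    # sleep 2**attempt units plus random jitter,
                    # scaled by base_delay
                    time.sleep((2 ** attempt + random.random()) * base_delay)
        return wrapper
    return decorator
```

With this, wrapping each Drive call in `@retry_with_backoff(...)` would make concurrent CI workers back off and eventually succeed instead of failing on the first rate-limit response.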