Queue tasks #3
I'm supportive of this! Seeing as AccessHandles are supposed to be the fastest storage primitive ever exposed to the web, does it make sense to give the Access Handles API its own task source (in addition to one for storage more broadly)?
Yeah, I think that could be reasonable. (Though we should probably sort out the model in #28 first.)
Revisiting this... All methods on the SyncAccessHandle are now sync (as per #7), so the argument in my comment above for a separate task source for this API is somewhat less relevant (although we'd eventually like to support an async version of SyncAccessHandles that is similarly performant - see #41). It seems like we should have a new task source. @annevk, do you have a preference for whether it's for this API or for storage in general?
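For context, here's a minimal sketch of what the all-sync SyncAccessHandle surface looks like from a dedicated worker (the only context where it's exposed); error handling is omitted:

```ts
// Runs in a dedicated worker; OPFS access handles are worker-only.
async function demo(): Promise<void> {
  const root = await navigator.storage.getDirectory();
  const fileHandle = await root.getFileHandle("data.bin", { create: true });
  const handle = await fileHandle.createSyncAccessHandle();

  // Every call below is synchronous (per #7): no promises, no tasks.
  handle.write(new TextEncoder().encode("hello"), { at: 0 });
  handle.flush();
  console.log(handle.getSize()); // 5
  handle.close();
}
```

Only `createSyncAccessHandle()` itself is async, which is why the task-source question has narrowed to how that (and a possible future async variant) reports results.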
I think ideally we consider which tasks need to happen in order across storage specifications and which can go out-of-order. And then put those in separate sources?
Hmm, good point. Thinking about this a bit more... Writes using any storage API (including this one) can be rejected if the site is out of quota. So do all storage APIs need to use the same task source for writes, since they all interact with the Quota Management API? There are a couple of other [1, 2] storage-related task sources that I'm aware of, but they both queue operations that are read-only (i.e. don't affect quota).
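To make the quota interaction concrete, a small sketch using the Quota Management API's `navigator.storage.estimate()`; `hasRoomFor` and `bytesNeeded` are illustrative names, not anything from a spec:

```ts
// Sketch only: estimate() reports origin-wide usage and quota, which is
// why a write from *any* storage API can be rejected for quota reasons.
async function hasRoomFor(bytesNeeded: number): Promise<boolean> {
  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  return quota - usage >= bytesNeeded;
}
```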
That's a good point too. So, the tasks are for informing the event loop about the result of a storage operation. Write storage operations do need to happen in order (at least per site), but the tasks reporting success or failure back to JavaScript don't necessarily have to run in that same order. Still, we might as well stick to a single task source for that, I think.
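To illustrate what a single task source buys here, a toy TypeScript model (not spec text) of per-source FIFO ordering in an event loop:

```ts
type Task = () => void;

// Toy model: the loop may pick any task source each turn, but tasks
// queued on one source always run in the order they were queued.
class EventLoop {
  private sources = new Map<string, Task[]>();

  queueTask(source: string, task: Task): void {
    if (!this.sources.has(source)) this.sources.set(source, []);
    this.sources.get(source)!.push(task);
  }

  // One turn: choose a non-empty source and run its oldest task.
  runNext(): void {
    for (const queue of this.sources.values()) {
      const task = queue.shift();
      if (task) {
        task();
        return;
      }
    }
  }
}
```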
I was about to file an issue on whatwg/storage but stumbled upon whatwg/storage#89. I don't personally understand much of the discussion happening on that issue - is there more to it than defining a new task source for write operations in the Storage spec and then pointing all the storage API specs, including this one, to it?
It's better now that we have "queue a global task". We should probably define a wrapper around that called "queue a storage task" as well as the "storage task source" in the Storage Standard. Would you be willing to take that on? |
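What that wrapper amounts to, modeled loosely in TypeScript (`queueGlobalTask` stands in for HTML's "queue a global task" algorithm and is not a real web-platform API):

```ts
// Loose model of the spec algorithms; not runnable against any browser.
declare function queueGlobalTask(
  source: string,
  global: object,
  steps: () => void,
): void;

const storageTaskSource = "storage task source";

// "Queue a storage task" given a global and a series of steps:
// queue a global task on the storage task source with those steps.
function queueStorageTask(global: object, steps: () => void): void {
  queueGlobalTask(storageTaskSource, global, steps);
}
```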
Filed whatwg/storage#155 (which is erroring on something that doesn't seem to be my fault... :) )
Also make use of them internally. Additionally cross-reference more and adopt modern practices. Helps with whatwg/fs#3. Fixes #89.

Co-authored-by: Anne van Kesteren <annevk@annevk.nl>
The new task source was recently specified here: https://storage.spec.whatwg.org/#storage-task-source

Prompted by discussion specifically about FSA here: whatwg/fs#3

Bug: 1412596
Change-Id: I83639555a6f0ad886556ecfe82e96cd90a10993e
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/4220277
Code-Coverage: Findit <findit-for-me@appspot.gserviceaccount.com>
Reviewed-by: Daseul Lee <dslee@chromium.org>
Commit-Queue: Austin Sullivan <asully@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1101850}
This was addressed in #95. All promises are now rejected/resolved on the new storage task source (in this spec, at least. See WICG/file-system-access#420)
While preparing #1 I noticed that many promises are resolved and rejected from within "in parallel" steps. Instead, this has to be done from tasks (see the sketch below). https://html.spec.whatwg.org/#queue-a-global-task seems most appropriate here; I'm not entirely sure whether this needs a new task source. Perhaps we should have one for storage in general?
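A hedged sketch of the distinction being raised, reusing the toy names from above (`inParallel` models the spec's "in parallel" steps, i.e. work off the event loop; `performWrite` is a placeholder; none of these are real APIs):

```ts
declare function inParallel(steps: () => void): void;
declare function queueStorageTask(global: object, steps: () => void): void;
declare function performWrite(): boolean; // placeholder for the storage work

function doWrite(
  global: object,
  resolve: () => void,
  reject: (e: Error) => void,
): void {
  inParallel(() => {
    const ok = performWrite();
    // Wrong: calling resolve()/reject() right here would settle the
    // promise from outside the event loop.
    // Right: queue a task (on an appropriate task source) to do it.
    queueStorageTask(global, () => {
      if (ok) resolve();
      else reject(new Error("write failed"));
    });
  });
}
```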