increase shot limit for Aquila to 5k #370
Hello, thank you for this request. One reason we set it to 1000 is to limit accidental spending on Aquila. Can you provide more details about what you are trying to achieve, to help us re-evaluate this decision? Until then, you can try running a use case that requires many shots as a series of identical tasks with 1000 shots each, back-to-back. If you are looking to run 10k shots, the cost difference between 10 [tasks] x 1k [shots] and 1 [task] x 10k [shots] is around 3 USD, about 3% of the total cost. Would this solve your issue?
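For concreteness, here is a minimal sketch of that workaround using the Braket Python SDK. The `program` variable, the shot counts, and the region in the device ARN are illustrative assumptions, not details taken from this thread:

```python
# Minimal sketch: split one 10k-shot request into ten back-to-back 1k-shot
# tasks, as suggested above. Assumes `program` is an already-built
# braket.ahs.analog_hamiltonian_simulation.AnalogHamiltonianSimulation.
from braket.aws import AwsDevice

device = AwsDevice("arn:aws:braket:us-east-1::device/qpu/quera/Aquila")

MAX_SHOTS = 1_000      # current per-task limit discussed in this issue
TOTAL_SHOTS = 10_000   # desired statistics

tasks = [device.run(program, shots=MAX_SHOTS)
         for _ in range(TOTAL_SHOTS // MAX_SHOTS)]

# Each task gets its own ARN; these must be tracked to aggregate results later.
task_arns = [task.id for task in tasks]
```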
Hi Peter,
The Aquila has a relatively high probability of losing atoms. The probability of correctly placing an atom is only ~99%.
I currently work with a 60-atom lattice and I need no defects in the lattice to evaluate the results. When I fire 1k shots, only 65% of them give me the full lattice; I discard the rest.
So for this case my practical limit is ~650 usable shots per circuit. And the observable I calculate from these ~650 usable shots is the average domain-wall density for a 1D lattice, which has substantial statistical error. I need a few times more shots.
Yes, I can fire more 1k-shot jobs with the same 'circuit', but for my research I need to measure the average domain-wall density for ~20 different Hamiltonians, so I already have 20 different 'circuits'.
So last night I needed to fire 100+ jobs. For a larger problem (a larger M-atom grid) the fraction of usable shots will diminish further, as ~0.99^M. Aquila supports M <= 256.
With this low limit I'll be forced to submit 1000s of jobs for the larger lattice (see the sketch below).
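A back-of-the-envelope sketch of this scaling. The target of 2000 usable shots is an illustrative number, not one from the experiments above:

```python
# Back-of-the-envelope: how many 1k-shot tasks are needed to collect a
# target number of defect-free shots, assuming a ~0.99 per-atom fill
# probability. The 2000-shot target is illustrative.
import math

def tasks_needed(n_atoms, target_usable, p_fill=0.99, shots_per_task=1000):
    p_perfect = p_fill ** n_atoms               # chance a shot has zero defects
    usable_per_task = shots_per_task * p_perfect
    return math.ceil(target_usable / usable_per_task)

print(tasks_needed(60, 2000))    # -> 4   (~55% of shots usable)
print(tasks_needed(256, 2000))   # -> 27  (~7.6% of shots usable)
```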
It is interesting that you brought up the cost argument.
I'm more concerned with managing 100s of ARNs for all those jobs - it becomes a problem at scale.
Thanks
Jan
I could not resist: when you take your money from an ATM, are you concerned about the bank fee for it?
Thank you very much, Jan. I appreciate the difficulty of having to aggregate across hundreds or thousands of tasks, and certainly the 3% cost overhead is also not something to ignore. It is very useful for us to understand when and how these limits affect your work. Thank you for providing details.
To help us further evaluate the effects (and side effects) of increasing the limit, do you mind telling us how big an increase would make a substantial difference in your experience? In your initial message you mentioned 5-10k, but based on the N_qubit = 256 example (where the probability of a defect-free arrangement is around (0.99)^256 = 7.6%), it sounds like even an increase by a factor of 1/0.076 ≈ 13 would be useful. Is this correct? (I guess it comes down to exactly how many perfect shots you need to suppress the statistical fluctuation in the domain-wall density; see the sketch below.)
Thank you,
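A rough sketch of that statistical question, treating the observable as a simple shot average with per-shot spread sigma; the ~650-shot starting point comes from Jan's message above, but the error target is an illustrative assumption:

```python
# Rough sketch: the standard error of a shot-averaged observable scales as
# sigma / sqrt(N), so the perfect-shot count needed grows quadratically as
# the target error shrinks. Halving the error costs 4x the perfect shots.
import math

def shots_for_target_error(sigma, target_se):
    return math.ceil((sigma / target_se) ** 2)

current_se = 1.0 / math.sqrt(650)                   # ~650 usable shots today
print(shots_for_target_error(1.0, current_se / 2))  # ~2600 to halve the error
```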
Hi Peter,
the max-shots limit in software is not an exact science.
1k is too small. IBMQ has a limit of 32k shots/job. I think for QuEra you could set it at 10k shots max; this will give you a ~year before someone else asks to bump it up again.
You need to have a lot of $$$ to be able to afford to work on a quantum problem which requires O(1M) total shots.
Thanks
Jan
Hi Jan, Thank you. I've just shared the details you provided with our team. I will follow up on this issue. Please feel free to comment to ping me if I don't give an update in a week. Best,
Hey @balewski I'm on the product management team for Amazon Braket. Thanks again for opening this issue and being active in our developer community. We are speaking internally and with QuEra to discuss our options and to explore whether to add this to our future roadmap. In the interim, I'll add this to our internal feature request tracking and will post here if this feature is rolled out 😊
When I submit an Aquila job which requests more than 1k shots, it is rejected by Braket.
Can you please bump up this limit to 5k or 10k? I run problems which require higher statistics than 1k shots provide.
This is the Braket error I see: