How to set up an LSF-specific conf file for Caper #93
That link doesn't work. Does your LSF cluster have a wiki page?
Well, what I'm basically asking is what platform I should specify for this: caper init [PLATFORM]
I don't think my HPC has a wiki, but here is some description: https://labs.icahn.mssm.edu/minervalab/lsf-queues/
Caper doesn't currently support LSF. If I can get some detailed info about your cluster, I can look into it.
You may need to run Caper with the local backend, which means that Caper will not bsub tasks; it will run all tasks in the current shell. Log in on a compute node and then run:
caper run ATAC_WDL -i INPUT_JSON --singularity --max-concurrent-tasks 2
Use screen or nohup to keep the session alive. Or bsub the caper command line itself with very large resources. If you want to save resources on a compute node, then serialize all tasks by using --max-concurrent-tasks 1.
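A minimal sketch of the second option (bsub-ing the whole Caper run as one job); the WDL, input JSON, queue name, and resource values are placeholders, and exact flags and memory units vary by LSF site:

```sh
# Submit the entire Caper run as one long-running LSF job.
# Queue (-q), cores (-n), memory (-M) and walltime (-W) are placeholders;
# memory units for -M depend on the site's LSF_UNIT_FOR_LIMITS setting.
bsub -q long -n 8 -M 32000 -W 48:00 \
     -o caper_run.%J.out -e caper_run.%J.err \
     "caper run atac.wdl -i input.json --singularity --max-concurrent-tasks 2"
```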
Thanks Jin for the suggestion.
For practical reasons, deploying a pipeline such as the ENCODE ATAC-seq without the ability to submit jobs is of somewhat limited utility, unfortunately. It is especially relevant nowadays, when datasets may contain hundreds of samples.
Whenever you have the resources, I'd be happy to work with you to add LSF support to Caper, if needed.
Could you please advise what would be a good starting point for this?
Best,
German
Sorry for the late reply; currently we don't have a plan to add an LSF backend. If you are familiar with Cromwell backend configuration, see https://github.com/ENCODE-DCC/caper/blob/master/caper/cromwell_backend.py#L710. You need to modify the bash command lines under keys such as submit, kill, check-alive, and job-id-regex.
I'll probably be working on getting this running on LSF soon. In the meantime this might help; it's old, but the basic commands typically don't change that much: https://modelingguru.nasa.gov/docs/DOC-1040
It should look something like the sketch below.
Note: I have not tested this. I got rid of GPU because GPU use depends on the LSF implementation. However, @leepc12, how is "job-id-regex" grabbed in PBS? I'm not completely familiar with how job IDs are grabbed from PBS, so any insight on this would be much appreciated. It shouldn't be too difficult to construct a regex.
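A rough, untested sketch of the LSF command lines such a backend stanza would need; the ${...} placeholders follow the Cromwell-style variables used in Caper's other backends, and flags like -M and -W may need adjusting for a given site:

```sh
# submit: hand the task script to LSF
bsub -J ${job_name} -cwd ${cwd} \
     -o ${out} -e ${err} \
     -n ${cpu} -M ${memory_mb} -W ${time}:00 \
     /bin/bash ${script}
# bsub prints a line like: Job <12345> is submitted to queue <normal>.
# so job-id-regex would be something like: Job <(\d+)>

# check-alive: succeeds while LSF still knows about the job
bjobs ${job_id}

# kill
bkill ${job_id}
```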
Never mind, I went through the Cromwell docs and found the relevant section, so everything is fairly similar.
I am modifying the PBS backend at https://github.com/ENCODE-DCC/caper/blob/master/caper/cromwell_backend.py, but when I run Caper I still get an error saying it is using qsub.
For future reference, this is the LSF backend file that was run and that worked. Your cluster may not have a -G or -q flag, so adjust as needed. I also had to set up specific paths (the PATHS="" and LSF_DOCKER_VOLUMES="") for my LSF call. If your compute cluster or system is different, you'll want to take those out too. When you run Caper, just point it at the custom backend file, as in the sketch below.
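A sketch of the invocation, assuming the custom backend file is named lsf.backend.conf and that the Caper version in use accepts a --backend-file option; the environment variable is the site-specific Docker-volume setting mentioned above and should be dropped if your cluster doesn't use it:

```sh
# Site-specific: expose data paths to LSF's Docker wrapper (adjust or remove)
export LSF_DOCKER_VOLUMES="/my/data:/my/data"

# Run Caper with the custom LSF backend configuration
caper run atac.wdl -i input.json \
      --backend-file lsf.backend.conf \
      --singularity
```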
Hi everyone. I'm charged with standing up the ENCODE ATAC-seq pipeline in our environment, which is LSF, and I'm willing to take the baton across the finish line with a GitHub pull request so that LSF is supported out of the box for all users of Caper. @HenryCWong, you have done most of the legwork already. If I can test your changes locally, and everything works for the two of us at our two different sites, is there a way I can walk you through how to do a PR on GitHub, or... are you clear on how to do that? Do you have the time? It would be a shame for you not to get credit if it gets merged into the codebase.
@ernstki: Please let me make a dev PR for you, and you can pull it (you may need to reinstall Caper from that branch). All I need is a working custom backend file for LSF. If that works for you two, @ernstki and @HenryCWong, then I can merge it to master.
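A sketch of how one might install Caper from such a dev branch for testing; the branch name here is a placeholder, not an actual branch:

```sh
# Install Caper straight from a development branch (branch name is a placeholder)
pip install --upgrade "git+https://github.com/ENCODE-DCC/caper.git@dev-lsf"
```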
Hi, sorry for the late response, y'all. So do you still want me to take the PR, since @leepc12 is making a dev PR? I can get you the custom LSF backend file tomorrow. The one above should work, but I also haven't been in here in two months, so I'll double-check things.
@HenryCWong If you're willing to just commit your changes to your fork, I'm willing to cherry-pick that commit and do any remaining work to get it into a state that meets @leepc12's requirements. This way you'll get credit for the work you've done in the Git commit history for Caper, and you will be Internet Famous. I think we can discuss separately where @leepc12 wants custom backends to live.
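If it helps, a sketch of that cherry-pick workflow; the remote name and commit SHA are placeholders:

```sh
# Pull the LSF change from Henry's fork into a local clone of upstream Caper
git clone https://github.com/ENCODE-DCC/caper.git
cd caper
git remote add henrycwong https://github.com/HenryCWong/caper.git
git fetch henrycwong
git cherry-pick <commit-sha-of-the-lsf-backend-change>
```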
Thanks for the info. I forked the repo and made a commit here: https://github.com/HenryCWong/caper.
I opened my password manager to log in specifically to thank you for doing this. (I am another bioinformatician at Mount Sinai, on the same computing environment, who needed this fix.)
It seems IBM has been customizing specific LSF things for customers, so if it doesn't work for you and you need to run Caper with a custom backend, I can try to help out.
Thank you so much!
It looks like #148 (release 2.0.0) implements LSF support, so thanks @leepc12! Not sure if that's based on what @HenryCWong shared here or not, but it looks like this issue could be closed if v2.0.0 meets @ls233's requirements. The project I needed this for is nowhere near the stage where it's ready to submit jobs to a cluster anyway, so I wouldn't have been able to work on this for several weeks at least.
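If v2.0.0 does ship an LSF backend, getting started would presumably look something like this; the lsf platform keyword is an assumption, so check the Caper docs for the supported platform list:

```sh
# Upgrade to a Caper release with LSF support, then generate a default config
pip install --upgrade caper
caper init lsf    # 'lsf' keyword assumed; check the Caper docs for supported platforms
```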
Hi Jin,
I'm looking for the right value of the platform parameter to specify when initializing Caper on my HPC (Mount Sinai). My HPC uses the LSF system. I'm referring to section 2.3 of this manual: https://github.com/MoTrPAC/motrpac-atac-seq-pipeline.
Thanks,
German Nudelman, Ph.D.
Sr. Bioinformatics Developer/Analyst
Icahn School of Medicine at Mount Sinai