caper submit leader job but no children jobs on slurm HPC #101
Comments
Please post …
Hi, I encountered a similar error with Caper 2.0 / chip-seq-pipeline2 v2.0. I installed chip-seq-pipeline2 with conda and installed Caper in the encode-chip-seq-pipeline environment with pip.
Here is the actual sbatch command line used for submitting a job (in
Please check if these resource parameters work on your cluster:
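The actual parameter values referenced above are not preserved in this thread. As a hedged illustration only, the SLURM-related section of `~/.caper/default.conf` generally looks like the sketch below (key names as generated by `caper init slurm`; all values here are placeholders taken from the submission command in this post, not verified settings):

```ini
# ~/.caper/default.conf -- illustrative SLURM backend settings.
# Run `caper init slurm` to generate a real template for your cluster.
backend=slurm
slurm-partition=genomics
slurm-account=ls25
# Extra arguments passed to sbatch for every child job (assumed key name;
# a QOS would go here rather than on the leader's sbatch line):
slurm-extra-param=--qos=genomics
```

Values such as the partition and account must match what `sinfo`/`sacctmgr` report on your own cluster.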
Also, do not activate the Conda environment. If you want to use Conda, then use …
Thank you very much! Actually, I want to install a standalone version of Caper and chip-seq-pipeline2, so I tried to install Caper in another conda environment or a Singularity image...
Hi team,
I'm using the ENCODE chip-seq-pipeline2 and installed the conda environment for it.
I also edited `~/.caper/default.conf` as follows. Then I activated the conda environment and ran this command as per your manual:
sbatch -A ls25 -p genomics --qos=genomics -J chip-seq --export=ALL --mem 4G -t 4:00:00 --wrap 'caper run /home/fyan0011/ls25_scratch/feng.yan/software/chip-seq-pipeline2/chip.wdl -i template.json'
I noticed the `--qos` flag seems not to be used according to the logs; anyway, the leader job was submitted, but no child jobs were seen.
The SLURM output file showed that the jobs are
Could you help with this?
Thank you!
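For reference, the submission above follows a leader-job pattern: `sbatch --wrap` starts a single leader job, and Caper's Cromwell server inside it is expected to submit one child job per pipeline task. A minimal sketch of the same command, built up in pieces (the WDL path and input JSON are the ones from this post; the `sbatch` line is commented out because it only works on the HPC):

```shell
# The command the leader job will execute; paths are from this post.
WDL=/home/fyan0011/ls25_scratch/feng.yan/software/chip-seq-pipeline2/chip.wdl
CAPER_CMD="caper run $WDL -i template.json"
echo "$CAPER_CMD"

# On the cluster, the same string is handed to sbatch as the leader job:
# sbatch -A ls25 -p genomics -J chip-seq --export=ALL --mem 4G -t 4:00:00 \
#   --wrap "$CAPER_CMD"
```

If the leader runs but no child jobs appear, the leader's stdout file is the place to look for errors from Cromwell before it reaches the task-submission stage.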