Expected Behavior
I tried using easy-cluster to generate smaller representative subsets from UniRef50 (UniRef40, UniRef30, UniRef20, etc.).
Current Behavior
The clustering dies after a few steps with an out-of-memory error.
Steps to Reproduce (for bugs)
Please make sure to execute the reproduction steps with newly recreated and empty tmp folders.
MMseqs Output (for bugs)
Please make sure to also post the complete output of MMseqs. You can use gist.github.com for large output.
tmp/5521603387764364218/clu_tmp/3834389364932800683/cascaded_clustering.sh: line 102: 208445 Killed $RUNNER "$MMSEQS" prefilter "$INPUT" "$INPUT" "${TMP_PATH}/pref_step$STEP" ${TMP}
slurmstepd: error: Detected 1 oom_kill event in StepId=16746209.batch. Some of the step tasks have been OOM Killed.
Context
Providing context helps us come up with a solution and improve our documentation for the future.
#SBATCH --output=slurm_mlm.out
#SBATCH -e slurm_mlm.err
#SBATCH -p scavenger-gpu
#SBATCH --gres=gpu:1
#SBATCH --mem=200G
#SBATCH -c 32
mmseqs easy-cluster /hpc/group/naderilab/eleanor/prose/data/uniref50.fasta uniref40 tmp --min-seq-id 0.4 -c 0.8 --cov-mode 1 --split-memory-limit 160G --threads 12
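For completeness, here is the full batch script I submit, reassembled as a single file (the values mirror the SBATCH settings above; the idea that --split-memory-limit should sit well below --mem to leave headroom for MMseqs' other allocations is my assumption, not something from the documentation):

```shell
#!/bin/bash
#SBATCH --output=slurm_mlm.out
#SBATCH -e slurm_mlm.err
#SBATCH -p scavenger-gpu
#SBATCH --gres=gpu:1
#SBATCH --mem=200G
#SBATCH -c 32

# easy-cluster at 40% identity / 80% coverage. --split-memory-limit caps
# how much memory the prefilter uses for index splits, but the process
# allocates additional memory on top of that limit, so it is kept below
# the 200G Slurm allocation.
mmseqs easy-cluster /hpc/group/naderilab/eleanor/prose/data/uniref50.fasta \
    uniref40 tmp \
    --min-seq-id 0.4 -c 0.8 --cov-mode 1 \
    --split-memory-limit 160G --threads 12
```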
Your Environment
Include as many relevant details about the environment you experienced the bug in.
Git commit used (The string after "MMseqs Version:" when you execute MMseqs without any parameters):
Which MMseqs version was used (Statically-compiled, self-compiled, Homebrew, etc.):
For self-compiled and Homebrew: Compiler and Cmake versions used and their invocation:
Server specifications (especially CPU support for AVX2/SSE and amount of system memory):
Operating system and version:
The excessive memory consumption should be fixed in the git version of MMseqs2 and will be part of release 16, which we plan to publish in the next few days. It would be great if you could test whether this crash still happens; you can use the precompiled binaries at https://mmseqs.com/latest/
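To test the precompiled build, something like the following should work (a sketch assuming an AVX2-capable Linux host; the archive name follows the naming used on mmseqs.com, and an SSE4.1 build is also available for older CPUs):

```shell
# Fetch and unpack the latest static Linux build (AVX2 variant)
wget https://mmseqs.com/latest/mmseqs-linux-avx2.tar.gz
tar xzf mmseqs-linux-avx2.tar.gz

# Print the version/commit string to confirm which build is running,
# then re-run the clustering command with this binary instead of the
# installed one.
./mmseqs/bin/mmseqs version
```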