I ran 10 single-cell samples together using SCEVAN, but the program exceeded the available memory on our server (which has 1200 GB of memory). Can you please check if I processed the samples correctly and provide feedback on how to manage the memory usage?
Input: raw Read10X object. (I have already read your SCEVAN paper, which includes a filtering step. However, I am wondering whether it is possible to use an integrated matrix with doublets already removed. If the integrated matrix can be used as input, that might solve this problem.)
Hi @seoyounglee6445,
This issue is probably due to the mclapply function used for parallelisation and its approach to memory management. I will try to solve the problem in the next commit; for the moment, you can work around the issue by reducing the number of cores used in parallel.
Let me know at which step SCEVAN runs out of available memory, and whether reducing the number of cores solves the problem.
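A minimal sketch of the suggested workaround, reusing the call from the original post. Forked workers created by mclapply can each end up holding copies of large objects, so memory use grows with the number of cores; the value `par_cores = 8` below is an arbitrary illustrative choice (the original post used 60), not a recommendation from the SCEVAN authors:

```r
# Workaround sketch: rerun the same analysis with fewer parallel workers.
# mclapply forks one process per core, and each fork can carry its own copy
# of large objects, so peak memory roughly scales with par_cores.
# par_cores = 8 is a hypothetical value; tune it to your server.
results <- SCEVAN::multiSampleComparisonClonalCN(listCountMtx,
                                                 analysisName = "all",
                                                 organism = "human",
                                                 par_cores = 8)
```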
listCountMtx <- list(sample1 = sample1, sample2 = sample2, sample3 = sample3, ..., sample10 = sample10)
results <- SCEVAN::multiSampleComparisonClonalCN(listCountMtx, analysisName = "all", organism = "human" , par_cores = 60)