
MemoryError when fitting model.fit #34

Open
tatyana-perlova opened this issue Apr 3, 2023 · 10 comments

@tatyana-perlova

Thank you for a great tool! I tested the tutorial workflow on a subset of my data and it worked like a charm. Now I'm running it on the full dataset of 440k cells. When running model.fit(min_iter=10, max_iter=50), after 2.5 hours I got this memory error:

MemoryError                               Traceback (most recent call last)
Cell In[12], line 1
----> 1 model.fit(min_iter=10, max_iter=50)

File ~/anaconda3/envs/scanpy_env/lib/python3.8/site-packages/SEACells/core.py:574, in SEACells.fit(self, max_iter, min_iter, initial_archetypes)
    572 if max_iter < min_iter:
    573     raise ValueError("The maximum number of iterations specified is lower than the minimum number of iterations specified.")
--> 574 self._fit(max_iter=max_iter, min_iter=min_iter, initial_archetypes=initial_archetypes, initial_assignments=None)

File ~/anaconda3/envs/scanpy_env/lib/python3.8/site-packages/SEACells/core.py:531, in SEACells._fit(self, max_iter, min_iter, initial_archetypes, initial_assignments)
    519 def _fit(self, max_iter: int = 50, min_iter:int=10, initial_archetypes=None, initial_assignments=None):
    520     """
    521     Compute archetypes and loadings given kernel matrix K. Iteratively updates A and B matrices until maximum
    522     number of iterations or convergence has been achieved.
   (...)
    529 
    530     """
--> 531     self.initialize(initial_archetypes=initial_archetypes, initial_assignments=initial_assignments)
    533     converged = False
    534     n_iter = 0

File ~/anaconda3/envs/scanpy_env/lib/python3.8/site-packages/SEACells/core.py:181, in SEACells.initialize(self, initial_archetypes, initial_assignments)
    178 self.B_ = B
    180 # Create convergence threshold
--> 181 RSS = self.compute_RSS(A, B)
    182 self.RSS_iters.append(RSS)
    184 if self.convergence_threshold is None:

File ~/anaconda3/envs/scanpy_env/lib/python3.8/site-packages/SEACells/core.py:464, in SEACells.compute_RSS(self, A, B)
    461 if B is None:
    462     B = self.B_
--> 464 reconstruction = self.compute_reconstruction(A, B)
    465 return np.linalg.norm(self.kernel_matrix - reconstruction)

File ~/anaconda3/envs/scanpy_env/lib/python3.8/site-packages/SEACells/core.py:446, in SEACells.compute_reconstruction(self, A, B)
    444 if A is None or B is None:
    445     raise RuntimeError('Either assignment matrix A or archetype matrix B is None.')
--> 446 return (self.kernel_matrix.dot(B)).dot(A)

MemoryError: Unable to allocate 1.41 TiB for an array with shape (440911, 440911) and data type float64
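(For scale: the requested allocation corresponds to a dense n_cells x n_cells float64 array, as a quick back-of-the-envelope check confirms.)

```python
# Size of a dense 440,911 x 440,911 float64 array, matching the error above.
n_cells = 440_911
size_bytes = n_cells * n_cells * 8   # 8 bytes per float64 entry
print(size_bytes / 2**40)            # ~1.41 TiB
```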

Is it not using sparse matrices?

@sitarapersad
Collaborator

Hi Tatyana! Are you using the GPU version? The non-GPU version does use sparse matrices, so I will check on this ASAP for you!

@tatyana-perlova
Author

Thank you for getting back to me!
I'm using version 0.3.3. It's not using the GPU, though (model.gpu is False).

@sitarapersad
Collaborator

Hi Tatyana - thanks for catching this! I think that somehow the non-sparse matrix version ended up being the one pushed most recently. I will fix this ASAP and push the sparse version, hopefully by Friday at the latest, and will update this thread when it's done.

@sitarapersad
Collaborator

Hi - just providing an update on this. I misunderstood which matrix the sparsity error was coming from. The kernel matrix itself is sparse, but the A_ and B_ matrices are not necessarily sparse (although in practice they tend to be). It'll take a bit longer to check how a sparse version of those matrices would perform, but in the meantime my recommendation would be to split the cells into chunks to process: e.g. you can compute SEACells within each sample and then merge them, or compute SEACells within cell types / low-resolution Leiden clusters as a way to chunk it (a rough sketch follows below).
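A rough sketch of the per-cluster chunking idea, following the tutorial workflow. The "leiden" column, the ~75-cells-per-metacell ratio, the input file path, and the label-merging step are my own assumptions, not part of the package:

```python
import pandas as pd
import scanpy as sc
import SEACells

ad = sc.read_h5ad("full_dataset.h5ad")            # hypothetical input file
seacell_labels = []

# Run SEACells separately within each low-resolution cluster, then merge the labels.
for cluster in ad.obs["leiden"].unique():         # assumes a precomputed 'leiden' column
    sub = ad[ad.obs["leiden"] == cluster].copy()
    n_seacells = max(1, sub.n_obs // 75)          # assumed target of ~75 cells per metacell
    model = SEACells.core.SEACells(
        sub,
        build_kernel_on="X_pca",
        n_SEACells=n_seacells,
        n_waypoint_eigs=10,
        convergence_epsilon=1e-5,
    )
    model.construct_kernel_matrix()
    model.initialize_archetypes()
    model.fit(min_iter=10, max_iter=50)
    # The tutorial writes assignments to sub.obs["SEACell"]; prefix them with the
    # cluster name so labels stay unique after merging.
    seacell_labels.append(str(cluster) + "-" + sub.obs["SEACell"].astype(str))

ad.obs["SEACell"] = pd.concat(seacell_labels).reindex(ad.obs_names)
```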

@tatyana-perlova
Author

Hi, thanks a lot! Got it. I thought about analyzing it by cell type/cluster. The problem is that then the number of resulting metacells will be determined by the original number of cells per cell type, whereas I was hoping that with SEACells more heterogeneous cell types would have more metacells and vice versa.

@sitarapersad
Collaborator

sitarapersad commented Apr 14, 2023

Just updated the CPU version to use sparse matrices - the current issue is that it's a bit slower than the CPU version without sparse matrices on small inputs since computing the reconstruction error using scipy.sparse.linalg.norm is a decent amount slower than np.linalg.norm. Not totally sure how to solve this yet, but if you don't mind it being a bit slow you can use this version.

You can use this option by specifying use_sparse=True as an argument to the SEACells model initialisation.
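For example, something like this (only use_sparse is the new option described above; the other constructor arguments are placeholders taken from the tutorial workflow and should match whatever you already use):

```python
import SEACells

model = SEACells.core.SEACells(
    ad,
    build_kernel_on="X_pca",
    n_SEACells=n_SEACells,
    n_waypoint_eigs=10,
    use_sparse=True,   # new CPU option: keep matrices in scipy.sparse format
)
model.construct_kernel_matrix()
model.initialize_archetypes()
model.fit(min_iter=10, max_iter=50)
```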

WIP: Fully sparse version for GPU still in progress, will update when that one is complete!

@tatyana-perlova
Author

Great, thanks a lot! I'll try it out.

@ccruizm

ccruizm commented Jun 20, 2023

Good day! Any news on when the fully sparse version for GPU will be available? :)
Running into the same problem.

Randomly initialized A matrix.
---------------------------------------------------------------------------
OutOfMemoryError                          Traceback (most recent call last)
Cell In[13], line 1
----> 1 model.fit(min_iter=10, max_iter=200)

File /hpc/pmc_stunnenberg/cruiz/miniconda3/envs/seacells/lib/python3.9/site-packages/SEACells-0.3.3-py3.9.egg/SEACells/gpu.py:617, in SEACellsGPU.fit(self, max_iter, min_iter, initial_archetypes, initial_assignments)
    614 if max_iter < min_iter:
    615     raise ValueError(
    616         "The maximum number of iterations specified is lower than the minimum number of iterations specified.")
--> 617 self._fit(max_iter=max_iter,
    618           min_iter=min_iter,
    619           initial_archetypes=initial_archetypes,
    620           initial_assignments=initial_assignments)

File /hpc/pmc_stunnenberg/cruiz/miniconda3/envs/seacells/lib/python3.9/site-packages/SEACells-0.3.3-py3.9.egg/SEACells/gpu.py:571, in SEACellsGPU._fit(self, max_iter, min_iter, initial_archetypes, initial_assignments)
    559 def _fit(self, max_iter: int = 50, min_iter: int = 10, initial_archetypes=None, initial_assignments=None):
    560     """
    561     Internal method to compute archetypes and loadings given kernel matrix K.
    562     Iteratively updates A and B matrices until maximum number of iterations or convergence has been achieved.
   (...)
    569     :return: None
    570     """
--> 571     self.initialize(initial_archetypes=initial_archetypes, initial_assignments=initial_assignments)
    573     converged = False
    574     n_iter = 0

File /hpc/pmc_stunnenberg/cruiz/miniconda3/envs/seacells/lib/python3.9/site-packages/SEACells-0.3.3-py3.9.egg/SEACells/gpu.py:234, in SEACellsGPU.initialize(self, initial_archetypes, initial_assignments)
    232 self.A0 = A0
    233 A = self.A0.copy()
--> 234 A = self._updateA(B, A)
    236 self.A_ = A
    237 self.B_ = B

File /hpc/pmc_stunnenberg/cruiz/miniconda3/envs/seacells/lib/python3.9/site-packages/SEACells-0.3.3-py3.9.egg/SEACells/gpu.py:389, in SEACellsGPU._updateA(self, B, A_prev)
    385 A = A_prev
    387 t = 0  # current iteration (determine multiplicative update)
--> 389 Ag = cp.array(A)
    390 Bg = cp.array(B)
    391 Kg = cupyx.scipy.sparse.csc_matrix(self.K)

File /hpc/pmc_stunnenberg/cruiz/miniconda3/envs/seacells/lib/python3.9/site-packages/cupy/_creation/from_data.py:46, in array(obj, dtype, copy, order, subok, ndmin)
      7 def array(obj, dtype=None, copy=True, order='K', subok=False, ndmin=0):
      8     """Creates an array on the current device.
      9 
     10     This function currently does not support the ``subok`` option.
   (...)
     44 
     45     """
---> 46     return _core.array(obj, dtype, copy, order, subok, ndmin)

File cupy/_core/core.pyx:2382, in cupy._core.core.array()

File cupy/_core/core.pyx:2406, in cupy._core.core.array()

File cupy/_core/core.pyx:2538, in cupy._core.core._array_default()

File cupy/_core/core.pyx:136, in cupy._core.core.ndarray.__new__()

File cupy/_core/core.pyx:224, in cupy._core.core._ndarray_base._init()

File cupy/cuda/memory.pyx:742, in cupy.cuda.memory.alloc()

File cupy/cuda/memory.pyx:1419, in cupy.cuda.memory.MemoryPool.malloc()

File cupy/cuda/memory.pyx:1440, in cupy.cuda.memory.MemoryPool.malloc()

File cupy/cuda/memory.pyx:1120, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc()

File cupy/cuda/memory.pyx:1141, in cupy.cuda.memory.SingleDeviceMemoryPool._malloc()

File cupy/cuda/memory.pyx:1379, in cupy.cuda.memory.SingleDeviceMemoryPool._try_malloc()

OutOfMemoryError: Out of memory allocating 25,267,875,328 bytes (allocated so far: 0 bytes).

I'm not sure why, since it says it cannot allocate 23.5 GB and I am using a server with 500 GB RAM and an NVIDIA A100 80GB PCIe (MIG 2g.20gb).
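(One possible explanation, assuming the job runs inside the MIG slice rather than on the full card: a 2g.20gb MIG instance exposes only about 20 GB of device memory, which is less than the requested allocation, even though the full A100 has 80 GB.)

```python
# Convert the failed CuPy allocation to GiB; a 2g.20gb MIG slice exposes ~20 GB.
requested_bytes = 25_267_875_328
print(requested_bytes / 2**30)   # ~23.5 GiB, more than the MIG slice can provide
```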

I saved the model and can load it, but I would like to know if I could change the initialization parameters so I can set use_gpu=False and also try use_sparse=True, without re-running everything.

Thanks in advance!

@tatyana-perlova
Author

I think you will have to reinitialize the model, but then you can just assign the precomputed kernel matrix and archetypes from the saved model (something like the sketch below).
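An untested sketch of that idea. The attribute names (K, kernel_matrix, archetypes) are guesses based on the tracebacks and suggestions in this thread, not confirmed API, and the pickle path and save format are hypothetical:

```python
import pickle
import SEACells

# Load the previously saved GPU model (hypothetical path and save format).
with open("seacells_model_gpu.pkl", "rb") as f:
    old_model = pickle.load(f)

# Build a fresh CPU model with the options mentioned in this thread.
new_model = SEACells.core.SEACells(
    ad,
    build_kernel_on="X_pca",
    n_SEACells=n_SEACells,
    use_gpu=False,
    use_sparse=True,
)

# Reuse the expensive precomputed pieces instead of recomputing them.
new_model.kernel_matrix = old_model.K        # GPU class stores the kernel as K (see traceback)
new_model.fit(
    min_iter=10,
    max_iter=50,
    initial_archetypes=old_model.archetypes,  # hypothetical attribute with saved archetype indices
)
```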

@ccruizm

ccruizm commented Jun 28, 2023

Thanks @tatyana-perlova for your suggestion. In the end I re-ran the model, but it did not finish after five days (the time limit for any job on our HPC). How long did it take for you on your 440K dataset?

I am still waiting for an answer about the GPU implementation handling sparse matrices. I subset a population of interest (260K cells) and still could not run it using the GPU :(
