GPU Parallel Ensemble Simulations with generic linear algebra #110
Comments
Yes, I don't think it can generate generic linear algebra calls.
Did you try using StaticArrays?
I'm using very large matrices (usually 1500x1500), which is why I use sparse matrices. However, even with a 100x100 matrix the conversion takes a long time.
That makes no sense to solve with EnsembleGPUArray. Read the DiffEqGPU documentation on the two forms of GPU problems. You're using the form for 100 ODEs or fewer, not the one for 1000 ODEs or more. https://github.com/SciML/DiffEqGPU.jl#within-method-gpu-parallelism-with-direct-cuarray-usage is more applicable here.
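A minimal sketch of the within-method GPU parallelism the link above describes, assuming CUDA.jl and OrdinaryDiffEq.jl are installed; the operator `M` here is illustrative, standing in for whatever is built from `L`, `L_t`, `A`, and `w_l`:

```julia
using OrdinaryDiffEq, CUDA

# Illustrative operator and initial condition; `cu` moves the data to the GPU.
M  = cu(rand(Float32, 100, 100))
u0 = cu(rand(Float32, 100))

# Out-of-place RHS: the matrix-vector product runs on the GPU via CUBLAS.
f(u, p, t) = M * u

prob = ODEProblem(f, u0, (0f0, 1f0))
sol  = solve(prob, Tsit5())
```

Here a single large ODE is solved with all linear algebra on the device, rather than batching many small ODEs into one kernel as EnsembleGPUArray does.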
Do you mean that I need to convert
That would evolve the ODE in parallel on the GPU. You want to use EnsembleSerial unless you have multiple GPUs that you want to use at the same time.
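A hedged sketch of what that suggestion looks like in practice: the state lives on the GPU, so each trajectory already runs device-parallel, and the ensemble loop itself stays serial on the host. The RHS and `prob_func` here are assumptions for illustration:

```julia
using OrdinaryDiffEq, CUDA

M  = cu(rand(Float32, 100, 100))   # illustrative GPU operator
u0 = cu(rand(Float32, 100))
f(u, p, t) = M * u
prob = ODEProblem(f, u0, (0f0, 1f0))

# Each trajectory's linear algebra runs on the GPU; EnsembleSerial
# just iterates the trajectories one after another on the host.
eprob = EnsembleProblem(prob,
    prob_func = (prob, i, repeat) -> remake(prob; u0 = u0 .* Float32(i)))
sol = solve(eprob, Tsit5(), EnsembleSerial(), trajectories = 4)
```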
Ok, perfect. Thank you for your help. However, I have the following question. I'm now able to perform this ODE directly on the GPU, using the differential function
where this time
Why is there no way to implement this method for an
To GPU it like this, you'd want to expand it out to its scalar form with something like ModelingToolkit, in order to then do the GPU codegen. EnsembleGPUArray is all about kernel code generation, so it's not using CUDA kernels like those from CUSPARSE since it's really made to generate nonlinear kernels.
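A sketch of the expansion step mentioned above, using ModelingToolkit's `modelingtoolkitize`; the small linear system is illustrative, and the exact API can vary across ModelingToolkit versions:

```julia
using ModelingToolkit, OrdinaryDiffEq

# Small illustrative linear system. `modelingtoolkitize` traces the RHS
# symbolically, expanding the matrix-vector product into scalar equations,
# the form that EnsembleGPUArray's kernel code generation expects.
M = rand(4, 4)
f(u, p, t) = M * u
prob = ODEProblem(f, rand(4), (0.0, 1.0))

sys = modelingtoolkitize(prob)
scalar_prob = ODEProblem(sys, [], (0.0, 1.0))
```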
Hello,
I'm trying to solve an ODE with my GPU. I tested the example code present in the DiffEqGPU.jl repository, and it worked fine.
I have the following function to implement in the ODE problem
or equivalently
where `A` and `w_l` are two numbers, while `L` and `L_t` are two global matrices. Following the documentation, I used the following code to run a parallel ensemble:

where `METHOD!!` can be one of the ensemble methods, e.g. `EnsembleSerial()`, `EnsembleThreads()`, `EnsembleCPUArray()` or `EnsembleGPUArray()`.

The problem is divided into two cases:
- `L` and `L_t` are sparse matrices. In this case all the methods work except `EnsembleGPUArray()`, whose error is
- `L` and `L_t` are dense matrices. In this case not even `EnsembleCPUArray()` works, but the others do. The error for the GPU method should be the same, while the error for the CPU one is
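For reference, a hypothetical reconstruction of the ensemble setup described above; the RHS, parameters, and sizes are assumptions, since the original code block was not preserved in this thread:

```julia
using OrdinaryDiffEq, LinearAlgebra

# Assumed shapes: A and w_l are scalars, L and L_t are global matrices.
const L   = rand(4, 4)
const L_t = rand(4, 4)

function f!(du, u, p, t)
    A, w_l = p
    mul!(du, L, u)                      # du = L*u
    du .= A .* du .+ w_l .* (L_t * u)   # du = A*L*u + w_l*L_t*u (assumed form)
end

u0   = rand(4)
prob = ODEProblem(f!, u0, (0.0, 1.0), (1.0, 2.0))

# `METHOD!!` from the issue text is a placeholder; EnsembleThreads() shown here.
eprob = EnsembleProblem(prob,
    prob_func = (prob, i, repeat) -> remake(prob; u0 = rand(4)))
sol = solve(eprob, Tsit5(), EnsembleThreads(), trajectories = 8)
```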