For the CNF layer in the PointFlow model, I am using FFJORD from DiffEqFlux. Currently, I am having trouble matching the generated data to the original distribution after training a dummy CNF layer. I have discussed this with the author of the layer and will get back to this issue in some time. Additionally, the layer lacks support for batched input, which is also a serious problem: training on a single point cloud of 1000 points for 10 epochs takes ~10 hours. I tried fixing this for the forward pass, but Zygote gave a strange error.
With SciML/DiffEqFlux.jl#415 the training time is supposed to significantly improve. Might be a good idea to revisit this.
Here is a demo snippet:

```julia
using DiffEqFlux, Distributions, CUDA, Flux, OrdinaryDiffEq
using BenchmarkTools  # needed for @btime

nn = Chain(Dense(3, 32, relu), Dense(32, 3)) |> gpu
tspan = (0.0f0, 1.0f0)
ffjord = FFJORD(nn, tspan, Tsit5())

pc = randn(Float32, 3, 1000) |> gpu;  # a single point cloud sampled from a Gaussian
e = randn(Float32, size(pc)) |> gpu;

@btime CUDA.@sync $ffjord($pc, $ffjord.p, $e)
# 184.201 ms (538473 allocations: 14.75 MiB)

@btime CUDA.@sync gradient(p -> sum($ffjord($pc, p, $e)[1]), $ffjord.p)
# 6.324 s (10207728 allocations: 441.39 MiB)
```
The timings are on a GTX 1650 Ti. If you want to train on a batched point cloud, reshaping the 3 × P × N array into 3 × (P·N) should do it (a batch size of 1000 takes only 3 GB of GPU memory). Training would then take around ~70 s for 10 epochs (rather than 10 hours).
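The reshape workaround above can be sketched as follows. This is an illustrative sketch, not tested code from the thread; `P` and `N` are example values, and it relies on the CNF acting on each column (point) independently:

```julia
# Flatten N point clouds of P points each (a 3 × P × N array) into a single
# 3 × (P*N) matrix, so the unbatched FFJORD layer processes all clouds at once.
P, N = 1000, 8                     # points per cloud, number of clouds (example values)
batch = randn(Float32, 3, P, N)    # batched point clouds
flat = reshape(batch, 3, P * N)    # 3 × (P*N) matrix to feed to the CNF layer
# ... run the ffjord call from the snippet above on `flat`, then restore the batch shape:
unflat = reshape(flat, 3, P, N)
```

Since `reshape` in Julia returns a view sharing the underlying memory, the flatten/unflatten steps themselves cost nothing; the only assumption is that treating columns independently is semantically valid for this layer.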
PointFlow is a really interesting application of both point clouds and CNFs, and will serve as a great demo from a "marketing" perspective.
The current blocker for this is on the DiffEqFlux.jl side (SciML/DiffEqFlux.jl#342). Reposting what @nirmal-suthar pointed out on Julia Slack: