Performance improvements #6

Open
3 of 7 tasks
TheCodez opened this issue Nov 29, 2019 · 15 comments

@TheCodez (Owner) commented Nov 29, 2019:

  • Pointer aliasing
  • Optimize memory access
  • Block size tuning
  • Intrinsics
  • Particles AoS to SoA (see the layout sketch below this list)
  • Grid Cells AoS to SoA
  • Remove calls to cudaDeviceSynchronize when not necessary
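
For the two AoS-to-SoA items, a minimal layout sketch (the field names are illustrative, not the project's actual particle definition): in the SoA form, neighbouring threads that read the same field touch consecutive addresses, which the GPU can coalesce.

```cuda
// Array of Structures (AoS): the fields of one particle are adjacent in
// memory, so threads reading the same field access strided addresses.
struct Particle
{
    float pos_x, pos_y;
    float vel_x, vel_y;
    float weight;
};
Particle* particles_aos;  // particles_aos[i].weight

// Structure of Arrays (SoA): each field is a contiguous array, so
// thread i reading weight[i] produces coalesced accesses.
struct ParticlesSoA
{
    float* pos_x;
    float* pos_y;
    float* vel_x;
    float* vel_y;
    float* weight;
};
ParticlesSoA particles_soa;  // particles_soa.weight[i]
```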
@TheCodez (Owner, Author) commented Apr 8, 2020:

Conversation moved here from #19

@cbachhuber wrote:

Good question! I'm on an Nvidia Quadro P2000, driver 435.21. I think that performance-wise it's roughly equivalent to a GTX 1050. I see the following execution times of the demo on the current master branch:

| Parameter modification | Cycle time | Setup duration (until the for loop in main) | Overall execution time |
| --- | --- | --- | --- |
| none | 138-156ms | 44s | 47s |
| particle_count = 1*10e5 | 52-63ms | 12s | 13s |
| particle_count = 6*10e4 | 35-39ms | 6.8s | 8.4s |

I don't see a significant time difference between release and debug, tested with a few runs. I saw a strong influence of particle_count on the execution time of setupRandomStatesKernel when playing around with the code earlier. Is the same true for you? Which execution times do you see? Also, is such a high particle count necessary? When I used lower numbers (>5*10e4), I didn't see (subjectively) worse results.
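
(For context: a per-particle cuRAND state setup kernel typically looks like the sketch below; the names are illustrative, and this is not necessarily the repository's actual setupRandomStatesKernel. curand_init with a distinct sequence number per thread is comparatively expensive, which is why the setup cost grows strongly with particle_count.)

```cuda
#include <curand_kernel.h>

// Illustrative per-particle cuRAND setup: one curandState per particle,
// initialized with its own sequence number.
__global__ void setupRandomStatesKernel(curandState* states, unsigned long long seed, int particle_count)
{
    const int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < particle_count)
    {
        curand_init(seed, idx, 0, &states[idx]);
    }
}
```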

Since execution time is one of the main motivations for having this project in addition to mitkina/dogma, I would suggest documenting your and my cycle/iteration times somewhere quickly accessible (in the top-level readme, or in a file linked from it). What do you think?

@TheCodez wrote:

Thanks for the detailed benchmark. I’m seeing similar timings (slightly slower) using a GTX 1050 on my laptop.

I have an idea to improve the random setup time. Overall my goal is to improve the performance to get close to the timings in the paper, from which I'm still far away considering the number of particles they use. I hope that switching from AoS to SoA will give a ~3x performance boost.

Yes, the particle count might be too high for this basic scenario and grid size. I just tried setting the particle count about as high as my system supports (only 2 GB of video RAM). Actually, if you set the resolution to 0.1, you'll see that the particle count is too low for that grid size.

I will add your timings to the readme plus a comparison with the paper timings/particle count.

@cbachhuber wrote:

I'm looking forward to your setup time improvement!

For the paper, they use a GTX 980, which is more than twice as powerful as our GPUs. Therefore, if you achieve approximately double the cycle time reported in the paper, you should have an equally optimized algorithm, right? Of course there is still some way to go.

I see, so the particle count is well motivated 👍

Cool; I would also mention the GPU performance difference between the paper and our experiments.

@TheCodez (Owner, Author) commented Apr 8, 2020:

@cbachhuber I moved the conversation to this issue, instead of a closed PR :)

Commit 36232e0 should fix the long init times and also slightly improve runtime performance. Init times went from ~44s to ~1.5s with no visible degradation in quality. Calling a CUDA function before calling the DOGM init further reduces the time to about ~700ms.
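
(For illustration, the warm-up trick usually amounts to forcing CUDA's lazy context creation before anything is timed; a minimal sketch, with the DOGM call shown only as a hypothetical placeholder:)

```cpp
#include <cuda_runtime.h>

int main()
{
    // The very first CUDA runtime call creates the CUDA context, which can
    // take hundreds of milliseconds. Triggering it explicitly up front keeps
    // that cost out of the measured DOGM init time.
    cudaFree(nullptr);

    // dogm::DOGM grid_map(params);  // hypothetical: init now skips the context-creation cost
    // ... run the demo loop ...
    return 0;
}
```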

@cbachhuber (Contributor) commented:

> @cbachhuber I moved the conversation to this issue, instead of a closed PR :)

The one and only correct thing to do 😅 👍

> Commit 36232e0 should fix the long init times and also slightly improve runtime performance. Init times went from ~44s to ~1.5s with no visible degradation in quality. Calling a CUDA function before calling the DOGM init further reduces the time to about ~700ms.

Now that's amazing stuff! Thanks for improving so drastically in no time, I'm impressed! 😮 I can confirm these numbers; I now see ~740ms init time.

Also thanks for adding performance to the readme! 👍

@cbachhuber (Contributor) commented:

> no visible degradation in quality

Let me just ask this one off-topic question here: do you have plans for an objective performance measurement? That is, a module that compares the grid output to the ground-truth input, computes noise, etc.

I think this would be valuable to have; we could open another issue for that.
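
(A rough sketch of what such a module could start from, under the assumption of a flat per-cell occupancy representation; the function name is made up:)

```cpp
// Hypothetical evaluation helper: compare the estimated occupancy grid with
// the simulator's ground-truth occupancy and report the mean absolute error.
// The flat std::vector<float> grid representation is an assumption.
#include <cmath>
#include <cstddef>
#include <vector>

double meanAbsoluteOccupancyError(const std::vector<float>& estimated,
                                  const std::vector<float>& ground_truth)
{
    double error_sum = 0.0;
    for (std::size_t i = 0; i < estimated.size(); ++i)
    {
        error_sum += std::fabs(estimated[i] - ground_truth[i]);
    }
    return estimated.empty() ? 0.0 : error_sum / static_cast<double>(estimated.size());
}
```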

@TheCodez (Owner, Author) commented Apr 8, 2020:

I plan to add clustering to compare cluster velocities with the simulator ground truth at some point (not in the near future). This way we'd have a quantitative comparison of this project's performance. I'll add an issue to track this.

@TheCodez (Owner, Author) commented:

@cbachhuber (Contributor) commented:

I'm also not seeing a significant difference, unfortunately.

| Branch | Init time | Iteration time |
| --- | --- | --- |
| master | 1384ms | 132-174ms |
| reduce_thread_divergence | 1384ms | 136-185ms |
| vectorized_types | 1360ms | 137-202ms |

I always see an outlier iteration time during the second iteration. It is 40-60ms higher than the other values I see (see the maxima in the table above). What could be the reason for this?

What do you think about logging iteration time during execution, and printing min/mean/median/max at the end? Similar to what the precision evaluator is doing?
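
(A minimal sketch of that kind of logging; the per-iteration update call is only a placeholder:)

```cpp
// Collect per-iteration durations and print min/mean/median/max at the end.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

int main()
{
    const int num_iterations = 100;  // placeholder
    std::vector<double> times_ms;

    for (int i = 0; i < num_iterations; ++i)
    {
        const auto start = std::chrono::steady_clock::now();
        // grid_map.updateGrid(...);  // placeholder for one DOGM iteration
        const auto end = std::chrono::steady_clock::now();
        times_ms.push_back(std::chrono::duration<double, std::milli>(end - start).count());
    }

    std::sort(times_ms.begin(), times_ms.end());
    const double mean = std::accumulate(times_ms.begin(), times_ms.end(), 0.0) / times_ms.size();
    std::printf("min %.1f ms, mean %.1f ms, median %.1f ms, max %.1f ms\n",
                times_ms.front(), mean, times_ms[times_ms.size() / 2], times_ms.back());
    return 0;
}
```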

@TheCodez (Owner, Author) commented:

Thanks for testing, I'll take a look.

That's a pretty good idea 👍
I'm thinking of adding CI at some point. That way we could always see whether we're regressing on quality/performance with each change.

@cbachhuber (Contributor) commented:

+1 for CI! I'm just reading that GitHub hosts CI for free for open-source projects; I didn't expect that. I guess setting up CI will be interesting with the CUDA dependency; let's see. We can also take the first baby steps with CI once the first unit tests for the utils library are working.

Cool, I'm adding the time logging to my plans 👍

Repository owner deleted a comment from cbachhuber Apr 15, 2020
@TheCodez (Owner, Author) commented Apr 16, 2020:

@cbachhuber
(Your comment was duplicated, so I removed it for clarity.)

I added CI for Ubuntu. It's not compiling successfully at the moment because of some linker errors, e.g. `undefined reference to __glewBindVertexArray`. Any ideas how to fix that?

As the CI machines have no GPU, we can only compile the code, not run it, but I think this is fine.

@cbachhuber (Contributor) commented:

Thanks for removing it; I had a weird connection issue yesterday and actually clicked 'Comment' twice 😅

I don't know why this happens on CI only. I'm also playing around with CI at the moment (though I don't have much time today).

  • I see that you already tried to address the issues with static linking. This would also have been my first suggestion.
  • In general, I suggest replacing ubuntu-latest with ubuntu-18.04 for stability. I'm on 18.04 and it's running. Currently this should not make a difference, as the two are equivalent.

I will also try to get this to compile in the next few days; let's see who solves this first ;)

And I agree, compiling is enough for now.

@TheCodez (Owner, Author) commented:

Moved to #31 :)

@TheCodez (Owner, Author) commented Apr 20, 2020:

Using SoA instead of AoS particles reduced the runtime by around 50ms. Changing the grid cells to SoA gives a small improvement.

See https://github.com/TheCodez/dynamic-occupancy-grid-map/tree/soa

Big improvements coming soon 😄

cbachhuber mentioned this issue Apr 20, 2020
@cbachhuber (Contributor) commented:

Awesome, looking forward to that! 😃 👍

@TheCodez (Owner, Author) commented:

It might take some time; I'm thinking of using this abstraction https://asc.ziti.uni-heidelberg.de/sites/default/files/research/papers/public/St11ASX_CUDA.pdf to keep the AoS syntax.
But at least with the current implementation I know that SoA is the way to go :)
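
(The core idea of such an abstraction, sketched by hand rather than with the ASX machinery from the linked paper: keep SoA storage but return a lightweight proxy so call sites keep AoS-style syntax. The field names are illustrative.)

```cuda
// Minimal illustration of keeping AoS syntax over an SoA layout (a hand-rolled
// proxy, not the abstraction from the linked paper). The fields are
// illustrative, not the project's actual particle members.
struct ParticlesSoA
{
    float* pos_x;
    float* pos_y;
    float* weight;

    struct Ref
    {
        float& pos_x;
        float& pos_y;
        float& weight;
    };

    __host__ __device__ Ref operator[](int i) const
    {
        return Ref{pos_x[i], pos_y[i], weight[i]};
    }
};

// Usage keeps the AoS look while memory stays SoA:
//   particles[i].weight *= factor;
```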
