Benchmarking #183
I reran the script, restarted Julia on each run, and varied the batch size. I got the result below:
In general, for batch sizes 100-1000 Knet is about twice as fast, but at batch size 5000 both perform similarly. I normally use a small batch size and my model has many layers. Is ArrayFire optimized for large matrices but not for many small ones with basic matrix operations?
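For context, a minimal sketch of the kind of batch-size sweep described above, assuming a single dense matrix multiply as the workload; the layer sizes, the `@belapsed` loop, and copying the result back with `Array(...)` to force completion are illustrative choices, not the original script:

```julia
# Illustrative batch-size sweep (not the original benchmark script).
# Copying the result back to the host with Array(...) blocks until the
# queued GPU kernels finish, so the measurement includes the actual work.
using ArrayFire, BenchmarkTools

nin, nout = 784, 1024                      # hypothetical layer sizes
W = AFArray(randn(Float32, nout, nin))

for batchsize in (100, 500, 1000, 5000)
    x = AFArray(randn(Float32, nin, batchsize))
    t = @belapsed Array($W * $x)           # matmul + host copy, in seconds
    println("batchsize = $batchsize: ", round(t * 1e3, digits = 3), " ms")
end
```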
In my experience ArrayFire works best with large matrices and complicated kernels; only then does the JIT / async design pay off big time.
Thanks a lot. Just a side question: is there a pooling function (max/average pooling)? It is needed in convolutional neural networks.
Convolution is there; pooling is coming in arrayfire/arrayfire-ml#17.
I am not sure if this is the right place to ask, but I would like to check whether my benchmarking is correct. I am trying to measure the speed of AFArray and compare it with KnetArray from Knet.jl. Since AFArray is asynchronous, I am not sure whether BenchmarkTools.jl's @btime can measure it properly. Please comment:
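The benchmark script itself is not preserved above, but the asynchronous-timing concern can be illustrated. A minimal sketch, assuming both packages load in the same session and that copying the result to the host is enough to drain the GPU queue (if ArrayFire.jl also exposes a blocking `sync()` wrapper, calling that inside the timed expression would work as well):

```julia
# Minimal sketch of timing AFArray vs. KnetArray with BenchmarkTools.
# Both array types queue kernels asynchronously, so the timed expression
# copies the result back to the host; the copy blocks until the GPU work
# is actually done. A bare `@btime $A_af * $B_af` would mostly measure
# the kernel launch, not the computation.
using ArrayFire, Knet, BenchmarkTools

n = 1000
A_host, B_host = randn(Float32, n, n), randn(Float32, n, n)

A_af, B_af = AFArray(A_host), AFArray(B_host)
A_kn, B_kn = KnetArray(A_host), KnetArray(B_host)

@btime Array($A_af * $B_af)   # ArrayFire: host copy forces the queue to drain
@btime Array($A_kn * $B_kn)   # Knet: host copy waits on the CUDA stream
```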