just a question #1

Hi,

Have you considered making the learning incremental? Being able to teach it new things without starting from scratch.

From what I gather it's a technical challenge, but it's a sure way to have a solid future; surely Caffe would still be going strong if it had this.

Thanks for any insights.

Comments
Good question, but you need to take a look at the goals.

The 1st is to create a cuDNN alternative. Today there isn't such a thing; even libdnn, which was tightly coupled with Caffe, isn't developed any more.

The 2nd is to create an inference library with minimal dependencies. Today, if you want to do inference on a GPU, you need to bring in a vendor-specific toolkit like CUDA, clDNN, or MIOpen, and MIOpen doesn't even support all of AMD's own hardware: ROCm/ROCm#887. The closest thing today is nGraph + PlaidML, but their approach is problematic: they rely on auto-magic and can't achieve good performance.

Finally, the deep learning framework itself is merely a side effect of the library, since I need to be able to test all the operators.

Regarding Caffe: I wish it were still being developed... but it isn't. I contributed several patches in the past, but at this point nothing gets accepted or even reviewed, not even something as basic as the cuDNN 8 support I submitted not long ago: BVLC/caffe#7000. The author has moved on to Caffe2, which is now part of PyTorch.

So the end goal is the ability to use PyTorch or TensorFlow with dlprimitives as a backend for OpenCL devices, rather than trying to beat a dead horse (Caffe, Keras+PlaidML). However, I first need to prove that I have an actually working system with reasonable performance; that way it will be much easier to get into PyTorch/TF/MXNet or any other framework as a backend.
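For illustration only, here is a rough sketch of what driving PyTorch through such an out-of-tree OpenCL backend could eventually look like. The extension library name, the device string, and the assumption that dlprimitives ships a PyTorch extension at all are hypothetical; this is not an existing dlprimitives API.

```python
import torch

# Hypothetical: a dlprimitives-based PyTorch extension built as a shared
# library; the file name is an assumption for this sketch.
torch.ops.load_library("libpt_ocl.so")

# Out-of-tree accelerators plug into PyTorch through the "privateuseone"
# device type (extensions often expose it under a friendlier name).
dev = torch.device("privateuseone:0")

# Once such a backend is registered, ordinary PyTorch code runs unchanged;
# only the dispatch target moves to the OpenCL device.
model = torch.nn.Linear(128, 10).to(dev)
x = torch.randn(4, 128, device=dev)
y = model(x)  # forward pass dispatched to the OpenCL backend
```

The point of the backend approach is exactly this: user-facing framework code stays the same, and only the device it dispatches to changes.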
Adding a reference to the Keras/PlaidML issue: plaidml/plaidml#586
Hi, so you think nGraph + PlaidML is the best way to go right now to tackle incremental learning, or is there anything else worth looking into? If anyone can point me in the best direction, that would be great! I'm sure it's very complicated, but I have to at least look into such a worthwhile goal; any help from anyone sounds good to me!
nGraph is inference only.
What are you trying to achieve?
Sorry, does the 1st sentence of the 1st post not make sense? Add new things like car, bicycle, etc. on top of what is already learned.
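For context, the standard baseline for "adding new things on top of what is already learned" is transfer learning / fine-tuning: keep the already-trained feature extractor and train only a new classification head for the new classes. A minimal PyTorch-style sketch, assuming a pretrained torchvision model and a hypothetical data loader for the new classes:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network that has already learned general visual features.
model = models.resnet18(pretrained=True)

# Freeze the existing feature extractor so its learned weights stay intact.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the new set of classes
# (for example, the old classes plus "car" and "bicycle").
num_new_classes = 12  # hypothetical number, for illustration only
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Only the new head has trainable parameters, so training is incremental
# rather than from scratch.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# `new_class_loader` is a hypothetical DataLoader over images of the new classes.
# for images, labels in new_class_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

Truly class-incremental learning, where old classes must not be forgotten as new ones are added, is a harder research problem (catastrophic forgetting), but the sketch above is the baseline most frameworks support out of the box.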