Various discussions arising from examples and tutorials #71
Hello vajrabisj, thank you for your interest in my project. For efficient computing, please run the following before loading cl-waffe2:

# Ensure that OpenBLAS has been installed on your device
$ sudo apt install libblas

;; Declare the path where you've installed OpenBLAS so that CFFI can load the shared library correctly
(defparameter cl-user::*cl-waffe-config*
  `((:libblas "libblas.dylib")))

For details, please visit the documentation: https://hikettei.github.io/cl-waffe2/install/#openblas-backend
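Not from the thread, but as a minimal end-to-end sketch of that load order, assuming Quicklisp is installed; the OpenBLAS path below is only an example and should point at wherever your library actually lives:

;; Set the config *before* loading cl-waffe2 so the OpenBLAS backend
;; can resolve the shared library via CFFI.
(defparameter cl-user::*cl-waffe-config*
  `((:libblas "/usr/lib/x86_64-linux-gnu/libopenblas.so"))) ;; example path only

(ql:quickload :cl-waffe2) ;; load cl-waffe2 only after the config is in place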
I'm very happy to hear that! Currently, I'm working on reducing the compilation time.
It's worth developing a framework in CL for the evolving AI/ML world. Just a suggestion: since your code is still in development, would it be possible to add more examples, from simple to higher level, such as examples of how to use your framework to implement different kinds of neural networks, like the logic gates (XOR, etc.)? Also, for the inputs, normally we load input from files; could you show some examples of that as well? Thanks for the great work!
Yes, the lack of examples/documentation is exactly the problem! But I started this project only two months ago and am still developing fundamental features. In fact, here's an example package for training MNIST on MLP/CNN, and the data loaders are here, but I feel the APIs still have room to be improved. Anyway, I'm keen to enhance the documentation :) Thanks.
Speaking of which, cl-waffe2 can wrap a standard CL array as an AbstractTensor without copying:

;; depends on:
(use-package :cl-waffe2)
(use-package :cl-waffe2/vm.generic-tensor)

;; No copying
(change-facet
 (make-array `(10 10) :initial-element 1.0)
 :direction 'AbstractTensor)

{CPUTENSOR[float] :shape (10 10)
  ((1.0 1.0 1.0 ~ 1.0 1.0 1.0)
   (1.0 1.0 1.0 ~ 1.0 1.0 1.0)
   ...
   (1.0 1.0 1.0 ~ 1.0 1.0 1.0)
   (1.0 1.0 1.0 ~ 1.0 1.0 1.0))
  :facet :exist
  :requires-grad NIL
  :backward NIL}

That is, cl-waffe2 can also be used together with other great CL libraries such as numcl, numericals, etc.
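As a small follow-up sketch (not from the thread): once an array is wrapped, the tensor can be fed straight into cl-waffe2's lazy operators; !add, randn, and proceed are standard cl-waffe2 functions, and the variable name *wrapped* is only illustrative.

;; Wrap an existing CL array, combine it with another tensor lazily,
;; then force evaluation with proceed.
(defparameter *wrapped*
  (change-facet (make-array `(10 10) :initial-element 1.0)
                :direction 'AbstractTensor))

(proceed (!add *wrapped* (randn `(10 10)))) ;; returns the evaluated tensor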
Really appreciate your prompt reply! If I just have input like '((0 0) (0 1) (1 0) (1 1)), what is the simplest way to put it into waffe? :)
In the first place, for numerical computing, I think a simple array is a much better choice than a list, since a list isn't an appropriate data structure here and produces unnecessary copying. If you want to do the same thing, simply call:

CL-WAFFE2-REPL> (change-facet
                 #2A((0 0)
                     (1 0)
                     (0 1)
                     (1 1))
                 :direction 'AbstractTensor)

{CPUTENSOR[int32] :shape (4 2)
  ((0 0)
   (1 0)
   ...
   (0 1)
   (1 1))
  :facet :exist
  :requires-grad NIL
  :backward NIL}

I forgot to say: sparse tensor support and data casts aren't ready to use yet, so pass a float array instead, e.g. by making:

(make-array `(4 2)
            :initial-contents '((0.0 0.0) (0.0 1.0) (1.0 0.0) (1.0 1.0)))

Digging a little deeper: the combination of array type transformations is described here, and can be extended by overloading the corresponding generic function.
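Putting both hints together, a hedged sketch of preparing the XOR dataset as float tensors; the variable names *xor-x* and *xor-y* are only illustrative:

;; Inputs: the four XOR patterns as a 4x2 single-float array, wrapped without copying.
(defparameter *xor-x*
  (change-facet
   (make-array `(4 2)
               :element-type 'single-float
               :initial-contents '((0.0 0.0) (0.0 1.0) (1.0 0.0) (1.0 1.0)))
   :direction 'AbstractTensor))

;; Targets: the expected XOR outputs as a 4x1 array.
(defparameter *xor-y*
  (change-facet
   (make-array `(4 1)
               :element-type 'single-float
               :initial-contents '((0.0) (1.0) (1.0) (0.0)))
   :direction 'AbstractTensor))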
This is fantastic! Thanks a lot!
I have successfully set up a simple model and run the test! A few things still need your kind input:
If you're working with …, another option is that if you can access the trained model, this example would be more straightforward :). The procedure is a little complicated compared to other libraries, because cl-waffe2 needs to be compiled twice, generating specialised code for training and inference respectively. I'm trying to find ways to make this much easier to understand for everyone.
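To make the two-stage idea concrete, here is a hedged sketch rather than the library's canonical recipe; out stands for the lazy output tensor of a network you have already described:

;; build compiles the lazy graph into a reusable compiled composite.
(defparameter *compiled* (build out))

(forward *compiled*)  ;; run the compiled forward code
(backward *compiled*) ;; run the compiled backward code (only meaningful when
                      ;; the graph contains parameters that require gradients)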
Thank you for that! But as for Windows computers, I don't have a machine, so I can't test on it :< No OS-dependent code is used in cl-waffe2, and OpenBLAS is called via CFFI. We've confirmed that cl-waffe2 works in the following environments, thanks to @elderica.
I'm thinking I might need to write a tutorial as soon as possible. Which ones would you like to see?
That will be great. I think your framework looks great. When going through the current example code such as mlp.lisp, someone with a certain knowledge of ML/NN will find things like defsequence, deftrainer, and train quite clear. But the majority of people who may be interested in your code would like a step-by-step walkthrough to become familiar with every aspect of it, including printing intermediate and/or final results. My humble suggestion, taking reference from other ML/NN frameworks, is:
I am still analyzing your framework when I have time, because I am a Common Lisp lover and am also planning to apply a good framework to my other models. BTW, for the two examples you mentioned previously, i.e. MLP and CNN, I still cannot work out how to run the predict function and print out the result... but anyway, your framework is so far the most systematic and interesting one in the Common Lisp community, please keep it up! Well done!
Thank you for your valuable suggestions/feedback! In fact, cl-waffe2 is quite different from other existing libraries, so step-by-step examples will help others learn how to use it quickly. I'll put it on my TODO list :)
I'm sorry, but I still don't get what you mean by 'print'... does this work?
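As a hedged guess at what "printing the result" could look like in a REPL session (the computation itself is arbitrary; !add, randn, and proceed are standard cl-waffe2 operators):

;; Build a small lazy computation, force it with proceed, and print the
;; materialised tensor.
(print (proceed (!add (randn `(3 3)) (randn `(3 3)))))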
It's a great honour! The ultimate goal is to create a framework that is comparable to PyTorch and other large-scale Python libraries over the next few years, all in ANSI Common Lisp. The next issue to solve is performance, so I'm spending all my free time on this. Anyway, thank you for your feedback! Feel free to contact me or open an issue if there are any problems or suggestions.
Oh, I mean: after training, how can I test whether the model can correctly predict the result? Normally I would feed it new data and then print the prediction process and result.
I see. Validation is included in the train-and-valid-mlp function. Of course, the testing data is separated from the training data, so 0.9522 is exactly the accuracy of the trained model.
That's the difference between AbstractNode and Composite.

An AbstractNode is declared with defnode:

(defnode (AddNode-Revisit (myself)
            :where (A[~] B[~] -> A[~])
            :documentation "A <- A + B"
            :backward ((self dout x y)
                       (declare (ignore x y))
                       (values dout dout))))

and is a CLOS class which mainly holds this information: the shape declaration (:where), its documentation, and the definition of backward.
However, an AbstractNode does not itself define how forward is computed; the forward pass is supplied per device with define-impl:

(define-impl (AddNode-Revisit :device CPUTensor)
  :forward ((self x y)
            `(,@(expand-axpy-form x y)
              ,x)))

(forward (AddNode-Revisit) (randn `(3 3)) (randn `(3 3))) ;; to invoke the node

Here, the function expand-axpy-form calls the BLAS axpy function.

Composite, on the other hand, is used to describe a set of AbstractNodes and is defined with defmodel:

(defmodel (Softmax-Model (self)
            :where (X[~] -> [~])
            :on-call-> ((self x)
                        (declare (ignore self))
                        (let* ((x1 (!sub x (!mean x :axis 1 :keepdims t)))
                               (z (!sum (!exp x1) :axis 1 :keepdims t)))
                          (!div (!exp x1) z)))))

It was originally intended to be used as just a subroutine:

(call (Softmax-Model) (randn `(10 10)))
;; Still keeps lazy-evaluation
(proceed *) ;; to evaluate it

However, in addition, the macro define-composite-function compiles a Composite into a static function:

(define-composite-function (Softmax-Model) !softmax-static)

With this,

(!softmax-static (randn `(10 10))) ;; No need to call build/proceed
;; will directly return:
{JITCPUTENSOR[float] :shape (10 10) :named ChainTMP3533
((0.4604313 0.0073007448 0.10543401 ~ 0.087809734 0.031668983 0.028546946)
(0.10716986 0.022830745 0.07476129 ~ 0.24503188 0.2015392 0.07642471)
...
(0.015032278 0.028409397 0.12348003 ~ 0.05110904 0.18238431 0.08728184)
(0.28009242 0.0570261 0.2081007 ~ 0.01599786 0.064734206 0.083274655))
:facet :input
:requires-grad NIL
:backward NIL}

Note that …
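Tying the two halves of this explanation together, a hedged usage sketch assuming the AddNode-Revisit definitions above have been evaluated: invoking the node only builds a lazy tensor, so proceed is still needed to see a value.

;; forward on an AbstractNode instance returns a lazy tensor;
;; proceed compiles and evaluates it in place.
(proceed (forward (AddNode-Revisit) (randn `(3 3)) (randn `(3 3))))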
BTW, this issue seems really useful for those who are interested in my framework, so I pinned it :).
Thanks a lot for the detailed explanation, very helpful. So, in short, the difference between defnode and defmodel is the implementation of backward (regardless of the base implementation)? I mean, after define-impl and define-composite-function. Thanks.
Yes, exactly.
Just curious: why not simply put all of these (forward, backward, etc.) together in defnode? :)
I want to keep the definition of the node itself (its interface and backward) separate from device-specific code, which is what define-impl is for.
If one wants to implement a new operator (e.g. for a new device), it is enough to add another define-impl.
When running the examples, there are always errors like the following:

Couldn't find any implementation of MATMULNODE for (LISPTENSOR).
[Condition of type CL-WAFFE2/VM.NODES::NODE-NOT-FOUND]

How can this be addressed?

BTW, your waffe2 system looks great, please keep it up!