Go binding for the Pytorch C++ API. This is used by the Pytorch agent in MLModelScope to perform model inference in Go.
Download and install go-pytorch:
go get -v github.com/rai-project/go-pytorch
The binding requires Pytorch C++ (libtorch) and other Go packages.
The Pytorch C++ library is expected to be under /opt/libtorch.
To install Pytorch C++ on your system, you can
- download the pre-built binary from the Pytorch website: Choose Pytorch Build = Stable (1.3), Your OS = <fill>, Package = LibTorch, Language = C++ and CUDA = <fill>. Then download the cxx11 ABI version. Unzip the packaged directory and copy it to /opt/libtorch (or modify the corresponding CFLAGS and LDFLAGS paths if using a custom location). A sketch of these steps is shown after this list.
- build it from source: Refer to our scripts or the LIBRARY INSTALLATION section in the dockerfiles.
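For example, the pre-built route on a CPU-only Linux machine looks roughly like the following; the download URL here is illustrative, so copy the exact cxx11 ABI link the Pytorch website generates for your OS and CUDA selection.
# Illustrative URL -- use the link generated by the Pytorch website
wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.3.0.zip
unzip libtorch-cxx11-abi-shared-with-deps-1.3.0.zip
# The archive unzips into a libtorch/ directory; copy it to the expected location
sudo mkdir -p /opt/libtorch
sudo cp -r libtorch/* /opt/libtorch/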
- The default BLAS is OpenBLAS. The default OpenBLAS path for macOS is /usr/local/opt/openblas if installed through Homebrew (openblas is keg-only, which means it was not symlinked into /usr/local, because macOS provides BLAS and LAPACK in the Accelerate framework). A Homebrew example is shown after this list.
- The default Pytorch C++ installation path is /opt/libtorch for linux, darwin and ppc64le without PowerAI.
- The default CUDA path is /usr/local/cuda.
See lib.go for details.
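On macOS, for instance, OpenBLAS can be installed through Homebrew, which keeps the keg-only install at the path noted above:
brew install openblas
# keg-only: headers and libraries stay under /usr/local/opt/openblas
ls /usr/local/opt/openblas/lib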
If you get an error about not being able to write to /opt, then perform the following:
sudo mkdir -p /opt/libtorch
sudo chown -R `whoami` /opt/libtorch
If you are using Pytorch docker images or other library paths, change the CGO_CFLAGS, CGO_CXXFLAGS and CGO_LDFLAGS environment variables. Refer to Using cgo with the go command.
For example,
export CGO_CFLAGS="${CGO_CFLAGS} -I/tmp/libtorch/include"
export CGO_CXXFLAGS="${CGO_CXXFLAGS} -I/tmp/libtorch/include"
export CGO_LDFLAGS="${CGO_LDFLAGS} -L/tmp/libtorch/lib"
You can install the dependencies through go get.
cd $GOPATH/src/github.com/rai-project/go-pytorch
go get -u -v ./...
Or use Dep.
dep ensure -v
This installs the dependencies in vendor/.
Configure the linker environment variables, since the Pytorch C++ library is under a non-system directory. Place the following in either your ~/.bashrc or ~/.zshrc file.
Linux
export LIBRARY_PATH=$LIBRARY_PATH:/opt/libtorch/lib
export LD_LIBRARY_PATH=/opt/libtorch/lib:$LD_LIBRARY_PATH
macOS
export LIBRARY_PATH=$LIBRARY_PATH:/opt/libtorch/lib
export DYLD_LIBRARY_PATH=/opt/libtorch/lib:$DYLD_LIBRARY_PATH
Run go build to check that the dependencies are installed and the library paths are set up correctly.
On Linux, the default is to build with GPU support; if you don't have a GPU, run go build -tags nogpu instead of go build, as shown below.
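For example, to build with the default GPU support or without it:
go build
# on a machine without a GPU:
go build -tags nogpu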
Note: The CGO interface passes Go pointers to the C API, which the CGO runtime flags as an error. Disable the check by placing
export GODEBUG=cgocheck=0
in your ~/.bashrc or ~/.zshrc file, and then run either source ~/.bashrc or source ~/.zshrc.
Examples of using the Go Pytorch binding to do model inference are under the examples directory.
This example shows how to use the MLModelScope tracer to profile the inference.
Refer to Set up the external services to start the tracer.
Then run the example by
cd example/batch_mlmodelscope
go build
./batch_mlmodelscope
Now you can go to localhost:16686 to look at the trace of that inference.
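For example, to open the trace UI from a terminal (assuming the default tracer address above):
open http://localhost:16686      # macOS
xdg-open http://localhost:16686  # Linux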
This example shows how to use nvprof to profile the inference. You need a GPU and CUDA to run this example.
cd example/batch_nvprof
go build
nvprof --profile-from-start off ./batch_nvprof
Refer to the Profiler User's Guide for using nvprof.
Parts of the implementation have been borrowed from orktes/go-torch.