Releases: RidgeRun/r2inference
v0.5.3
v0.5.2
Release v0.5.2 includes:
- Fix intermittent segfault ("heisenbug") in the TensorFlow backend when configuring GPU memory usage with the default value.
v0.5.0
Introduced features:
- Add RAM usage property to the TensorFlow backend
- Check that the model quantization is float32 on TFLite
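The quantization check above amounts to validating the input tensor's data type before loading the model. A minimal self-contained sketch of that kind of guard (the enum and function names here are illustrative, not the r2inference or TFLite API):

```cpp
#include <cstdint>

// Hypothetical subset of TFLite tensor types, mirroring TfLiteType.
enum class TensorType : uint8_t { kFloat32, kUInt8, kInt8 };

// Accept only float32 models, rejecting quantized (uint8/int8) ones,
// which is the shape of the check v0.5.0 adds for the TFLite backend.
bool IsSupportedQuantization(TensorType input_type) {
  return input_type == TensorType::kFloat32;
}
```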
Known issues:
- NCSDK does not support multiple calls to the inference engine from the same thread. This causes the NCSDK backend to fail after the second start.
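Because the failure is tied to per-thread state in NCSDK, one workaround pattern is to run each start/stop cycle on a fresh thread. The sketch below simulates the limitation with a stand-in `Engine` class (not the real backend) to show the pattern:

```cpp
#include <thread>

// Simulated engine that mimics the NCSDK limitation: Start() succeeds
// only the first time it is called from a given thread (illustrative
// stand-in, not the actual NCSDK backend).
class Engine {
 public:
  bool Start() {
    thread_local int starts_on_this_thread = 0;
    return ++starts_on_this_thread == 1;
  }
};

// Workaround: run each session on a fresh thread so the per-thread
// state the engine depends on is always new.
bool RunSession(Engine& engine) {
  bool ok = false;
  std::thread worker([&] { ok = engine.Start(); });
  worker.join();
  return ok;
}
```

Calling `RunSession` repeatedly succeeds, while calling `Start` twice from the same thread reproduces the failure after the second start.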
v0.4.2
v0.4.1
v0.4.0
v0.3.0
Introduced features:
- Examples updated:
- Inception for NCSDK
- TinyYoloV2 for NCSDK
- Inception for TensorFlow
- Fixed TensorFlow batch-size test.
- Added LGPL copyright license.
- Supported platforms:
- Intel Movidius Neural Compute Stick (version 1)
- NVIDIA Jetson AGX Xavier
- x86 systems
- NVIDIA TX2
- i.MX8
Known issues:
- NCSDK does not support multiple calls to the inference engine from the same thread. This causes the NCSDK backend to fail after the second start.
v0.2.0
Introduced features:
- Support for the following platforms:
- Intel Movidius Neural Compute Stick (version 1)
- NVIDIA Jetson AGX Xavier
- x86 systems
- NVIDIA TX2
- i.MX8
- Compatibility handling for models declaring either a generic or a fixed batch size
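Handling both cases typically means resolving the model's declared batch dimension (a generic dimension is conventionally encoded as -1) against what the caller requests. A hedged sketch of that resolution logic, with illustrative names that are not the r2inference API:

```cpp
#include <cstdint>
#include <vector>

// Resolve the effective batch size for an input shape whose first
// dimension is either generic (<= 0, e.g. -1) or fixed. This mirrors
// the kind of compatibility handling v0.2.0 describes (sketch only).
int64_t ResolveBatchSize(const std::vector<int64_t>& input_shape,
                         int64_t requested_batch) {
  if (input_shape.empty()) return requested_batch;
  const int64_t declared = input_shape[0];
  // Generic batch dimension: honor the caller's request.
  if (declared <= 0) return requested_batch;
  // Fixed batch dimension: the model's declaration wins.
  return declared;
}
```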
Known issues:
- NCSDK does not support multiple calls to the inference engine from the same thread. This causes the NCSDK backend to fail after the second start.
v0.1.0
Introduced features:
- Support for the following architectures:
- GoogLeNet (Inception v4)
- Tiny Yolo v2
- Support for the following backends:
- NCSDK
- TensorFlow
- Support for the following platforms:
- Intel Movidius Neural Compute Stick (version 1)
- NVIDIA Jetson AGX Xavier
- x86 systems
Known issues:
- NCSDK does not support multiple calls to the inference engine from the same thread. This causes the NCSDK backend to fail after the second start.