This repository serves as a demo of our flexatar technology. The SFU is inherited from the original example and extended with server-side flexatar animation.
In this demo you can:
- create a simplified flexatar on your own
- try any flexatar in a WebRTC conference, where the SFU decodes the Opus audio track, feeds it to the animation core, builds an H.264 video track, and uses it as if it came from a user (see the sketch after this list)
- alternatively, try a demo of rendering with WebGL
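The server-side flow in the conference demo can be sketched roughly as follows. This is a minimal TypeScript illustration of the data flow only; the `OpusDecoder`, `AnimationCore`, `H264Encoder`, and `VideoProducer` interfaces are hypothetical placeholders, not the actual APIs used in this repository.

```ts
// Hypothetical interfaces standing in for the real components (assumption).
interface OpusDecoder {
  decode(packet: Uint8Array): Float32Array;   // Opus packet -> PCM samples
}

interface AnimationCore {
  animate(pcm: Float32Array): Uint8Array;     // PCM audio -> raw flexatar frame
}

interface H264Encoder {
  encode(frame: Uint8Array): Uint8Array;      // raw frame -> H.264 NAL units
}

interface VideoProducer {
  send(nalUnits: Uint8Array): void;           // inject the encoded frame into the SFU
}

// Wire one incoming Opus packet through the pipeline: audio in, video out.
function onOpusPacket(
  packet: Uint8Array,
  decoder: OpusDecoder,
  core: AnimationCore,
  encoder: H264Encoder,
  producer: VideoProducer,
): void {
  const pcm = decoder.decode(packet);       // 1. decode the Opus audio track
  const frame = core.animate(pcm);          // 2. drive the flexatar animation core
  const nalUnits = encoder.encode(frame);   // 3. build an H.264 video frame
  producer.send(nalUnits);                  // 4. publish it as if it came from a user
}
```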
To try this yourself, please visit our demo at .
The final goal is to offer , introducing a personalized WebRTC Virtual Webcam.