This demo showcases an end-to-end automated MLOps approach to model training and inference. It leverages Red Hat OpenShift Data Science (RHODS) along with other products from the Red Hat portfolio.
A data scientist sets up a Jupyter environment and develops a model. Once satisfied, they commit the code and open a pull request to merge it into the production branch. The pull request automatically triggers a data science pipeline. During the training process, the model is tagged and stored in a storage bucket. A model server serves this model: the new version is automatically deployed and is consumable through API requests, making it a scalable model that can be used for live inference or batch streaming. An example request is sketched below.
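For illustration, a served model could be queried over REST roughly as follows. This is a minimal sketch assuming a KServe-style v2 inference endpoint; the route, model name, and input tensor are hypothetical placeholders and depend on the deployed model.
curl -s -X POST https://<model-serving-route>/v2/models/<model-name>/infer \
  -H 'Content-Type: application/json' \
  -d '{"inputs": [{"name": "input-0", "shape": [1, 4], "datatype": "FP32", "data": [5.1, 3.5, 1.4, 0.2]}]}'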
A walkthrough and highlights can be found in this documentation.
The following procedure deploys all the demo components. If you want to deploy only specific components, see this documentation.
Install the operators:
oc apply -k ./manifests/operators/
Wait for the installations to complete. Confirm that all operators are ready.
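For example, you can check the installed operators through their ClusterServiceVersions; each should report the Succeeded phase:
oc get csv -A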
Deploy the data science cluster and the Knative instances by running:
oc apply -k ./manifests/operators-instances/
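As a sanity check, you can verify that the instances become ready. The commands below assume the default resources created by these manifests (a DataScienceCluster and a KnativeServing in the knative-serving namespace):
oc get datasciencecluster
oc get knativeserving -n knative-serving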
Deploy the demo instances:
helm template ./manifests/instances/core | oc apply -f -
oc kustomize ./manifests/instances/automated-pipelines/ --enable-helm | oc apply -f -
oc kustomize ./manifests/instances/streaming/ --enable-helm | oc apply -f -
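Finally, you can confirm that the demo workloads are running. The namespace below is a placeholder; use the project created by the manifests:
oc get pods -n <demo-namespace>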