NO-ISSUE: Add IBM _watsonx_ flavor to OpenShift AI operator #6996
base: master
Conversation
Currently, when we add the _OpenShift AI_ operator, we add it with all its dependencies and the default configuration. This is suitable for most users, but it brings in dependencies that aren't needed in all cases; _IBM watsonx_, for example, doesn't require many of them. To simplify that use case, this patch adds support for a new _flavor_ property, which can have the values `default` and `watsonx`. When the value is `watsonx`, the operator is installed without the _pipelines_, _serverless_ and _servicemesh_ dependencies, and with the following configuration:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    codeflare:
      managementState: Removed
    dashboard:
      managementState: Removed
    datasciencepipelines:
      managementState: Removed
    kserve:
      managementState: Managed
      defaultDeploymentMode: RawDeployment
      serving:
        managementState: Removed
        name: knative-serving
    kueue:
      managementState: Removed
    modelmeshserving:
      managementState: Removed
    ray:
      managementState: Removed
    trainingoperator:
      managementState: Managed
    trustyai:
      managementState: Removed
    workbenches:
      managementState: Removed
```

The assisted installer UI doesn't support the operator properties mechanism, so this needs to be done via the API. For example, the following Python script creates a new cluster using the _watsonx_ flavor:

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import json
import pathlib
import requests

# Details of the cluster:
name = "mycluster"
base_url = "https://api.openshift.com"
base_dns_domain = "mydomain"
openshift_version = "4.16"
cpu_architecture = "x86_64"

# Find the home directory, as we will take the pull secret and SSH key from there:
home_dir = pathlib.Path.home()

# Read the pull secret. To obtain your pull secret visit this page:
#
#   https://console.redhat.com/openshift/install/pull-secret
#
# Then save the result to a `pull.txt` file in your home directory.
with open(home_dir / "pull.txt", "r") as file:
    pull_secret = file.read().strip()

# Read the public SSH key:
with open(home_dir / ".ssh" / "id_rsa.pub", "r") as file:
    ssh_public_key = file.read().strip()

# Prepare the properties for the operator:
openshift_ai_properties = json.dumps({
    "flavor": "watsonx",
})

# Create the cluster:
response = requests.post(
    f"{base_url}/api/assisted-install/v2/clusters",
    json={
        "name": name,
        "openshift_version": openshift_version,
        "base_dns_domain": base_dns_domain,
        "cpu_architecture": cpu_architecture,
        "pull_secret": pull_secret,
        "ssh_public_key": ssh_public_key,
        "machine_networks": [
            {
                "cidr": "192.168.100.0/24",
            },
        ],
        "api_vips": [
            {
                "ip": "192.168.100.20",
            },
        ],
        "ingress_vips": [
            {
                "ip": "192.168.100.21",
            },
        ],
        "olm_operators": [
            {
                "name": "openshift-ai",
                "properties": openshift_ai_properties,
            },
        ],
    },
)
if response.status_code != 201:
    raise Exception(f"Failed to create cluster: {response.content}")
cluster = response.json()
cluster_id = cluster["id"]
print(f"cluster_id: {cluster_id}")

# Create the infrastructure environment:
response = requests.post(
    f"{base_url}/api/assisted-install/v2/infra-envs",
    json={
        "name": name,
        "cluster_id": cluster_id,
        "openshift_version": openshift_version,
        "cpu_architecture": cpu_architecture,
        "pull_secret": pull_secret,
        "ssh_authorized_key": ssh_public_key,
        "image_type": "full-iso",
    },
)
if response.status_code != 201:
    raise Exception(f"Failed to create infrastructure environment: {response.content}")
infra_env = response.json()
infra_env_id = infra_env["id"]
print(f"infra_env_id: {infra_env_id}")

# Print ISO URL:
iso_url = infra_env["download_url"]
print(f"iso_url: {iso_url}")
```

Related: https://issues.redhat.com/browse/MGMT-19056
Related: https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.11/html/installing_and_uninstalling_openshift_ai_self-managed/preparing-openshift-ai-for-ibm-cpd_prepare-openshift-ai-ibm-cpd#installing-openshift-data-science-operator-using-cli-ibm-cpd_prepare-openshift-ai-ibm-cpd

Signed-off-by: Juan Hernandez <juan.hernandez@redhat.com>
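For illustration only (not part of the patch): a cluster that is already registered but not yet installed could presumably be switched to the _watsonx_ flavor through the cluster update endpoint. This is an untested sketch, assuming `PATCH /api/assisted-install/v2/clusters/{cluster_id}` accepts `olm_operators` in its update parameters; `cluster_id` is a placeholder for the identifier printed by the script above:

```python
#!/usr/bin/env python3
# Sketch only: set the watsonx flavor on an existing, not yet installed
# cluster. Assumes the assisted-service v2 cluster update endpoint accepts
# "olm_operators"; cluster_id is a placeholder for the identifier printed
# by the creation script above.
import json
import requests

base_url = "https://api.openshift.com"
cluster_id = "REPLACE-WITH-CLUSTER-ID"

openshift_ai_properties = json.dumps({
    "flavor": "watsonx",
})

response = requests.patch(
    f"{base_url}/api/assisted-install/v2/clusters/{cluster_id}",
    json={
        "olm_operators": [
            {
                "name": "openshift-ai",
                "properties": openshift_ai_properties,
            },
        ],
    },
)
# The exact success code may vary; accept both common ones.
if response.status_code not in (200, 201):
    raise Exception(f"Failed to update cluster: {response.content}")
print(response.json().get("olm_operators"))
```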
@jhernand: This pull request explicitly references no jira issue.
/hold This is experimental.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jhernand.
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##           master    #6996       +/-   ##
===========================================
- Coverage   68.28%   56.79%   -11.50%
===========================================
  Files         271      172       -99
  Lines       38650    13823    -24827
===========================================
- Hits        26394     7851    -18543
+ Misses       9862     5258     -4604
+ Partials     2394      714     -1680
```
List all the issues related to this PR
https://issues.redhat.com/browse/MGMT-19056
How was this code tested?
Tested manually creating a cluster with the watsonx flavor enabled.
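For reference, a small sketch (not part of this patch) of how the resulting configuration could be double-checked on the installed cluster, assuming the `kubernetes` Python client and a kubeconfig pointing at that cluster; the resource name `default-dsc` comes from the configuration shown in the description above:

```python
#!/usr/bin/env python3
# Sketch only: inspect the DataScienceCluster produced by the watsonx flavor.
# Assumes the `kubernetes` Python client is installed and KUBECONFIG points
# at the installed cluster; `default-dsc` is the name used in the
# configuration shown in the description above.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# DataScienceCluster is a cluster-scoped custom resource.
dsc = api.get_cluster_custom_object(
    group="datasciencecluster.opendatahub.io",
    version="v1",
    plural="datascienceclusters",
    name="default-dsc",
)

# Print the management state of each component; with the watsonx flavor,
# kserve and trainingoperator should be "Managed" and the rest "Removed".
for component, spec in dsc["spec"]["components"].items():
    print(f"{component}: {spec.get('managementState')}")
```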