Currently, when we add the _OpenShift AI_ operator we install it with
all its dependencies and the default configuration. This is suitable
for most users, but it pulls in dependencies that aren't needed in all
cases: _IBM watsonx_, for example, doesn't require many of them. To
simplify that use case this patch adds support for a new _flavor_
property, which can have the values `default` and `watsonx`. When the
value is `watsonx` the operator is installed without the _pipelines_,
_serverless_ and _servicemesh_ dependencies, and with the following
configuration:
```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    codeflare:
      managementState: Removed
    dashboard:
      managementState: Removed
    datasciencepipelines:
      managementState: Removed
    kserve:
      managementState: Managed
      defaultDeploymentMode: RawDeployment
      serving:
        managementState: Removed
        name: knative-serving
    kueue:
      managementState: Removed
    modelmeshserving:
      managementState: Removed
    ray:
      managementState: Removed
    trainingoperator:
      managementState: Managed
    trustyai:
      managementState: Removed
    workbenches:
      managementState: Removed
```
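To make the semantics of the new property concrete, here is a minimal
Python sketch of how a client could build and validate the `properties`
payload before sending it to the API. The helper name
`operator_properties` and the client-side validation are illustrative
only, not part of the service:
```python
import json

# The two flavors supported by this patch:
VALID_FLAVORS = {"default", "watsonx"}

def operator_properties(flavor: str = "default") -> str:
    """Build the JSON string expected in the 'properties' field of an
    'olm_operators' entry, rejecting unknown flavors up front. This is
    a hypothetical client-side helper, not part of the assisted
    installer."""
    if flavor not in VALID_FLAVORS:
        raise ValueError(
            f"unsupported flavor {flavor!r}, "
            f"expected one of {sorted(VALID_FLAVORS)}"
        )
    return json.dumps({"flavor": flavor})

# For example, this produces the value used later in this message:
print(operator_properties("watsonx"))  # {"flavor": "watsonx"}
```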
The assisted installer UI doesn't support the operator properties
mechanism, so this needs to be done via the API. For example, the
following Python script creates a new cluster using the _watsonx_
flavor:
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import json
import pathlib

import requests

# Details of the cluster:
name = "mycluster"
base_url = "https://api.openshift.com"
base_dns_domain = "mydomain"
openshift_version = "4.16"
cpu_architecture = "x86_64"

# Find the home directory, as we will take the pull secret and SSH key
# from there:
home_dir = pathlib.Path.home()

# Read the pull secret. To obtain your pull secret visit this page:
#
# https://console.redhat.com/openshift/install/pull-secret
#
# Then save the result to a `pull.txt` file in your home directory.
with open(home_dir / "pull.txt", "r") as file:
    pull_secret = file.read().strip()

# Read the public SSH key:
with open(home_dir / ".ssh" / "id_rsa.pub", "r") as file:
    ssh_public_key = file.read().strip()

# Prepare the properties for the operator. Note that the API expects
# the 'properties' field to be a JSON document encoded as a string:
openshift_ai_properties = json.dumps({
    "flavor": "watsonx",
})

# Create the cluster:
response = requests.post(
    f"{base_url}/api/assisted-install/v2/clusters",
    json={
        "name": name,
        "openshift_version": openshift_version,
        "base_dns_domain": base_dns_domain,
        "cpu_architecture": cpu_architecture,
        "pull_secret": pull_secret,
        "ssh_public_key": ssh_public_key,
        "machine_networks": [
            {
                "cidr": "192.168.100.0/24",
            },
        ],
        "api_vips": [
            {
                "ip": "192.168.100.20",
            },
        ],
        "ingress_vips": [
            {
                "ip": "192.168.100.21",
            },
        ],
        "olm_operators": [
            {
                "name": "openshift-ai",
                "properties": openshift_ai_properties,
            },
        ],
    },
)
if response.status_code != 201:
    raise Exception(f"Failed to create cluster: {response.content}")
cluster = response.json()
cluster_id = cluster["id"]
print(f"cluster_id: {cluster_id}")

# Create the infrastructure environment:
response = requests.post(
    f"{base_url}/api/assisted-install/v2/infra-envs",
    json={
        "name": name,
        "cluster_id": cluster_id,
        "openshift_version": openshift_version,
        "cpu_architecture": cpu_architecture,
        "pull_secret": pull_secret,
        "ssh_authorized_key": ssh_public_key,
        "image_type": "full-iso",
    },
)
if response.status_code != 201:
    raise Exception(
        f"Failed to create infrastructure environment: {response.content}"
    )
infra_env = response.json()
infra_env_id = infra_env["id"]
print(f"infra_env_id: {infra_env_id}")

# Print the ISO URL:
iso_url = infra_env["download_url"]
print(f"iso_url: {iso_url}")
```
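After the cluster has been created, it can be fetched back to confirm
that the operator and its `flavor` property were registered. The
snippet below is a hedged follow-up that reuses `base_url`,
`cluster_id` and the `requests` import from the script above; the
`monitored_operators` field name is an assumption about the cluster
response schema:
```python
# Hedged follow-up to the script above: fetch the cluster and print the
# registered operators, to confirm the 'flavor' property was stored.
# The 'monitored_operators' field name is an assumption about the
# response schema.
response = requests.get(
    f"{base_url}/api/assisted-install/v2/clusters/{cluster_id}",
)
if response.status_code != 200:
    raise Exception(f"Failed to fetch cluster: {response.content}")
for operator in response.json().get("monitored_operators", []):
    print(f"operator: {operator['name']}, "
          f"properties: {operator.get('properties')}")
```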
Related: https://issues.redhat.com/browse/MGMT-19056
Related: https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.11/html/installing_and_uninstalling_openshift_ai_self-managed/preparing-openshift-ai-for-ibm-cpd_prepare-openshift-ai-ibm-cpd#installing-openshift-data-science-operator-using-cli-ibm-cpd_prepare-openshift-ai-ibm-cpd
Signed-off-by: Juan Hernandez <juan.hernandez@redhat.com>