
Do not create keystone endpoints when Glance is not available #633

Closed
wants to merge 1 commit

Conversation

@fmount (Contributor) commented Oct 11, 2024

Unlike services like CinderVolume, where replicas: 0 still leaves an available api and scheduler, if the default GlanceAPI has replicas: 0 - which is something suggested at day 1, when the storage backend is not yet available - a keystone endpoint is created anyway in the catalog.
This means that the Glance service can be discovered and the endpoint can be reached from an OpenShift point of view, but it returns a 503 if a request is made.
This behavior can be confusing for the human operator, who may think there's a problem with the service, not realizing that it still needs to be configured.

This patch fixes this problem by not creating the endpoints at all if no Glance replica is available.


Signed-off-by: Francesco Pantano <fpantano@redhat.com>

openshift-ci bot commented Oct 11, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fmount

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@stuggi (Contributor) commented Oct 11, 2024

maybe I read it wrong, but this will still leave Glance in the Setup complete state, right? Just the endpoint is not registered. So it won't point the end user to what is missing, and you just get a different error when using the openstack CLI to create/list images.

@fmount (Contributor, Author) commented Oct 11, 2024


Correct, it reconciles (like before) and reaches Setup complete, but it simply won't register the endpoints, so users don't get a 503 when they try `openstack image list`.

@fmount fmount requested review from abays and stuggi October 11, 2024 07:32
@stuggi (Contributor) commented Oct 11, 2024


ack, I guess we need to revisit what we discussed a couple of months back: whether we need a condition which reflects the actual functional state in addition to the Setup complete one

@fmount (Contributor, Author) commented Oct 11, 2024


Do you want me to hold this change? It doesn't change the problem or the current state of things, but I'm ok holding it in case we need to make it part of a broader discussion.

@stuggi (Contributor) commented Oct 11, 2024


No, if @dprince agrees that this is better than the current behavior, I'm fine to get it in, and we can follow up with the discussion on a general improvement to reflect that you got what you requested and the service is functional

@dprince (Contributor) commented Oct 11, 2024

Would we then remove it if replicas get scaled down to 0? My issue with replicas wasn't that the endpoint existed, it was that I expected the default glance-api replicas to be 1.

@fmount (Contributor, Author) commented Oct 11, 2024


Right, when replicas: 0 the endpoints are removed. This doesn't address the replicas 1 vs 0 question; it's more to avoid getting a 503 if we scale the service down (which might be misleading).

@fmount (Contributor, Author) commented Oct 14, 2024


@stuggi @dprince Note that Glance is a "special" component here: both the API and the underlying engine are up or down at the same time; you can't have the API up and running but not accepting data/image uploads.
If you explicitly set replicas: 0, the k8s Route returns a 503 because HAProxy has no backends. However, in theory, it shouldn't behave this way: there's no backend to forward requests to, and the client should be notified about that, instead of thinking that "there's a problem in the API".
Do you think this represents an anti-pattern in the openstack-k8s-operators?

@stuggi (Contributor) commented Oct 15, 2024

> @stuggi @dprince Note that Glance is a "special" component here: both the API and the underlying engine are up or down at the same time; you can't have the API up and running but not accepting data/image uploads. If you explicitly set replicas: 0, the k8s Route returns a 503 because HAProxy has no backends.

technically, I'd say the route behaves correctly, as it has no knowledge of what kind of service it serves. If there is no service running, it can just return the 503.

> However, in theory, it shouldn't behave this way, because there's no backend to forward requests to, and the client should be notified about that, instead of thinking that "there's a problem in the API". Do you think this represents an anti-pattern in the openstack-k8s-operators?

Returning information to the user on what is missing would be beneficial. Not registering the endpoint probably also does not point out that the config is missing, right? But as you said in the other discussion, to proceed e.g. with a ceph HCI deployment, at least right now, we first need a ctlplane deployment, then do the ceph deployment, and then go back to the ctlplane and configure it as a backend.
As an idea: what if, when the controller recognizes that there is no backend configured, we bring up a simple/custom httpd deployment which returns a static JSON for status/images reporting that no backend is configured/available? I did a simple test and it could look like this:

```
sh-5.1$ openstack image list
+--------------------------------------+-----------------------+--------+
| ID                                   | Name                  | Status |
+--------------------------------------+-----------------------+--------+
| 00000000-0000-0000-0000-000000000000 | NO BACKEND CONFIGURED | queued |
+--------------------------------------+-----------------------+--------+
```

@stuggi (Contributor) commented Oct 15, 2024


maybe then we could default to 1?

@fmount (Contributor, Author) commented Oct 15, 2024


> maybe then we could default to 1?

I'm not sure about this: my concern is that you're writing this data in the database, and I don't think we have a way to clean it up automatically when we redeploy (and rolling out a new config doesn't restart everything from scratch). Also, the human operator is allowed to mess up with the customServiceConfig interface, and it might happen that decommissioning a backend results in an inconsistent state as well.
We could .Delete the API, but I feel like we might risk overcomplicating the code, and I'm not sure we want to get into that business here (and actually, the goal is to start a process of simplifying the use cases and the operator from Epoxy+).
It's easy to detect from the operator that no backend is set; I'm just wondering if we can return a static response telling the human operator that they're not allowed to upload an image until a backend is configured (@konan-abhi knows more on that front).
I need to think more about this, but from your replies so far I would say the conditional endpoint creation is a NACK from the operators' perspective.

@stuggi (Contributor) commented Oct 15, 2024


I think it would just be wrong to create/delete the endpoint on a scale down to 0 and up again. And the response also does not point out that there is config missing.

What I did does not add anything to the DB; it's just a static response from the custom deployment. For this I was just using an nginx deployment which returned:

```
      location /v2/images {
        default_type application/json;
        return 200 '{"images": [{"owner_specified.openstack.md5": "", "owner_specified.openstack.sha256": "", "owner_specified.openstack.object": "images/cirros", "name": "NO BACKEND CONFIGURED", "disk_format": "qcow2", "container_format": "bare", "visibility": "shared", "size": null, "virtual_size": null, "status": "queued", "checksum": null, "protected": false, "min_ram": 0, "min_disk": 0, "owner": "a6d19bd37e9b4a998b04193f5564e033", "os_hidden": false, "os_hash_algo": null, "os_hash_value": null, "id": "00000000-0000-0000-0000-000000000000", "created_at": "2024-10-14T15:53:00Z", "updated_at": "2024-10-14T15:53:00Z", "locations": [], "tags": [], "self": "/v2/images/00000000-0000-0000-0000-000000000000", "file": "/v2/images/00000000-0000-0000-0000-000000000000/file", "schema": "/v2/schemas/image"}], "first": "/v2/images", "schema": "/v2/schemas/images"}';
      }

      location / {
        default_type application/json;
        return 200 '{"versions": [{"id": "v2.15", "status": "CURRENT", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.13", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.12", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.11", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.10", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.9", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.8", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.7", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.6", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.5", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.4", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.3", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.2", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.1", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}, {"id": "v2.0", "status": "SUPPORTED", "links": [{"rel": "self", "href": "https://glance-default-public-openstack.apps-crc.testing/v2/"}]}]}';
      }
```

@fmount (Contributor, Author) commented Oct 15, 2024


I agree that creating/deleting keystone endpoints might be seen as a workaround rather than a real fix, and the 503 is the default Ingress behavior, which does not depend on the application.
Thank you for your suggestions. I'm going to explore with the team the idea of not having replicas: 0 and instead reconfiguring httpd to always return a static response, and update this patch accordingly.

@dprince (Contributor) commented Oct 15, 2024

I think my preference might be to just always create the endpoint. Seems simpler that way

@fmount (Contributor, Author) commented Oct 15, 2024

Closing this change as we gathered two main things:

  1. it's ok to always create endpoints; removing them is not a good way to tell the openstack CLI that the service is not there
  2. we can use httpd to provide a static response when no explicit backend is configured: this would allow always setting replicas: 1 (or greater) and packing a message to tell the human operator that the backend must be configured.

We're going to follow up with a different patch. Thanks to everyone involved for the suggestions and feedback!

@fmount closed this Oct 15, 2024