doc(1.8.0): add automatic RWX expansion #1012
Conversation
A few small suggestions for your consideration. Thank you.
Signed-off-by: James Munson <james.munson@suse.com>
Force-pushed from a62f27c to 0c834d8
lgtm
```shell
kubectl -n longhorn-system delete pod <the share manager pod>
```
I know this relies on the RWX volume's failover feature. Can we document two methods (see the sketch after this list)?
- Scale down the workloads using the RWX volume.
- Delete the share-manager pod and let it be recreated. We need to mention why this works.
WDYT?
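For reference, a minimal sketch of the scale-down option, assuming the RWX volume is mounted by a Deployment named `my-app` in the `default` namespace (both names are placeholders, not part of this PR):

```shell
# Scale the workload down so no pod keeps the RWX volume mounted during expansion
kubectl -n default scale deployment my-app --replicas=0

# ...expand the volume while it is detached...

# Scale the workload back up afterwards
kubectl -n default scale deployment my-app --replicas=1
```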
I think the scale-down is covered by the Offline section below.
Are you asking that we make it clearer that these are alternatives by numbering them 1 and 2?
Which issue(s) this PR fixes:
Issue longhorn/longhorn#9736
What this PR does / why we need it:
Document changes to RWX online expansion sequence.
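As context for the documented sequence, online expansion of an RWX volume starts the same way as for RWO: by increasing the storage request on the PVC. A minimal sketch, assuming a PVC name, namespace, and target size that are placeholders only:

```shell
# Request a larger size on the RWX PVC; Longhorn then expands the volume
# and the share-manager export without detaching the workload
kubectl -n default patch pvc my-rwx-pvc \
  --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```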
Special notes for your reviewer:
Additional documentation or context: