diff --git a/docs/content/manual/release-specific/v1.6.0/test-replica-disk-soft-anti-affinity.md b/docs/content/manual/release-specific/v1.6.0/test-replica-disk-soft-anti-affinity.md
new file mode 100644
index 0000000000..42e9cf84dd
--- /dev/null
+++ b/docs/content/manual/release-specific/v1.6.0/test-replica-disk-soft-anti-affinity.md
@@ -0,0 +1,63 @@
+---
+title: Test Replica Disk Soft Anti-Affinity
+---
+
+## Related issue
+
+https://github.com/longhorn/longhorn/issues/3823
+
+## Test initial behavior of global Replica Disk Soft Anti-Affinity setting
+
+**Given** A newly created Longhorn cluster
+
+**Then** `Replica Disk Level Soft Anti-Affinity` shows as `false` in the UI
+
+*And* the `replica-disk-soft-anti-affinity` setting shows `false` with kubectl
+
+## Test initial behavior of global Replica Disk Soft Anti-Affinity setting after upgrade
+
+**Given** A newly upgraded Longhorn cluster
+
+**Then** `Replica Disk Level Soft Anti-Affinity` shows as `false` in the UI
+
+*And* the `replica-disk-soft-anti-affinity` setting shows `false` with kubectl
+
+## Test behavior of volume Replica Disk Soft Anti-Affinity setting
+
+**Given** A newly created Longhorn cluster
+
+**When** Create a volume
+
+**Then** The UI shows `Replica Disk Soft Anti Affinity: ignored` on the volume details page
+
+*And* `volume.spec.replicaDiskSoftAntiAffinity` shows `ignored` with kubectl
+
+**When** The `Update Replica Disk Soft Anti Affinity` operation is used on the UI volume details page to change
+`Replica Disk Soft Anti-Affinity` to `enabled`
+
+**Then** The UI shows `Replica Disk Soft Anti Affinity: enabled` on the volume details page
+
+*And* `volume.spec.replicaDiskSoftAntiAffinity` shows `enabled` with kubectl
+
+**When** The `Update Replica Disk Soft Anti Affinity` batch operation is used on the UI volumes page to change
+`Replica Disk Soft Anti-Affinity` to `disabled`
+
+**Then** The UI shows `Replica Disk Soft Anti Affinity: disabled` on the volume details page
+
+*And* `volume.spec.replicaDiskSoftAntiAffinity` shows `disabled` with kubectl
+
+## Test initial behavior of volume Replica Disk Soft Anti-Affinity setting after upgrade
+
+**Given** An outdated Longhorn cluster with at least one volume
+
+**When** The cluster is upgraded
+
+**Then** The UI shows `Replica Disk Soft Anti Affinity: ignored` on the volume details page
+
+*And* `volume.spec.replicaDiskSoftAntiAffinity` shows `ignored` with kubectl
+
+## Test effect of Replica Disk Soft Anti-Affinity on replica scheduling
+
+If an implementation of `test_global_disk_soft_anti_affinity` hasn't been merged yet, follow its skeleton manually
+
+If an implementation of `test_volume_disk_soft_anti_affinity` hasn't been merged yet, follow its skeleton manually
diff --git a/manager/integration/tests/test_scheduling.py b/manager/integration/tests/test_scheduling.py
index cbe2c1b2df..45aa5b6ccf 100644
--- a/manager/integration/tests/test_scheduling.py
+++ b/manager/integration/tests/test_scheduling.py
@@ -1722,3 +1722,102 @@ def finalizer():
 
     create_and_wait_statefulset(statefulset)
     wait_for_statefulset_pods_healthy(statefulset)
+
+
+@pytest.mark.skip(reason="TODO")
+def test_global_disk_soft_anti_affinity():  # NOQA
+    """
+    1. When Replica Disk Soft Anti-Affinity is false, it should be impossible
+       to schedule replicas to the same disk.
+    2. When Replica Disk Soft Anti-Affinity is true, it should be possible to
+       schedule replicas to the same disk.
+    3. Whether Replica Disk Soft Anti-Affinity is true or false, the
+       scheduler should prioritize scheduling replicas to different disks.
+
+    Given
+    - One node has three disks
+    - The three disks have very different sizes
+    - Only two disks are available for scheduling
+    - No other node is available for scheduling
+
+    When
+    - Global Replica Node Level Soft Anti-Affinity is true
+    - Global Replica Zone Level Soft Anti-Affinity is true
+    - Global Replica Disk Level Soft Anti-Affinity is false
+    - Create a volume with three replicas and a size such that all replicas
+      could fit on the largest disk and still leave it with the most available
+      space
+    - Attach the volume to the schedulable node
+
+    Then
+    - Verify the volume is in a degraded state
+    - Verify only two of the three replicas are healthy
+    - Verify the remaining replica doesn't have a spec.nodeID
+
+    When
+    - Change the global Replica Disk Level Soft Anti-Affinity to true
+
+    Then
+    - Verify the volume is in a healthy state
+    - Verify all three replicas are healthy (two replicas have the same
+      spec.diskID)
+
+    When
+    - Enable scheduling on the third disk
+    - Delete one of the two replicas with the same spec.diskID
+
+    Then
+    - Verify the volume is in a healthy state
+    - Verify all three replicas are healthy
+    - Verify all three replicas have a different spec.diskID
+    """
+    pass
+
+
+@pytest.mark.skip(reason="TODO")
+def test_volume_disk_soft_anti_affinity():  # NOQA
+    """
+    1. When Replica Disk Soft Anti-Affinity is disabled, it should be
+       impossible to schedule replicas to the same disk.
+    2. When Replica Disk Soft Anti-Affinity is enabled, it should be possible
+       to schedule replicas to the same disk.
+    3. Whether Replica Disk Soft Anti-Affinity is enabled or disabled,
+       the scheduler should prioritize scheduling replicas to different disks.
+
+    Given
+    - One node has three disks
+    - The three disks have very different sizes
+    - Only two disks are available for scheduling
+    - No other node is available for scheduling
+
+    When
+    - Global Replica Node Level Soft Anti-Affinity is true
+    - Global Replica Zone Level Soft Anti-Affinity is true
+    - Create a volume with three replicas, a size such that all replicas could
+      fit on the largest disk and still leave it with the most available space,
+      and spec.replicaDiskSoftAntiAffinity = disabled
+    - Attach the volume to the schedulable node
+
+    Then
+    - Verify the volume is in a degraded state
+    - Verify only two of the three replicas are healthy
+    - Verify the remaining replica doesn't have a spec.nodeID
+
+    When
+    - Change the volume's spec.replicaDiskSoftAntiAffinity to enabled
+
+    Then
+    - Verify the volume is in a healthy state
+    - Verify all three replicas are healthy (two replicas have the same
+      spec.diskID)
+
+    When
+    - Enable scheduling on the third disk
+    - Delete one of the two replicas with the same spec.diskID
+
+    Then
+    - Verify the volume is in a healthy state
+    - Verify all three replicas are healthy
+    - Verify all three replicas have a different spec.diskID
+    """
+    pass
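
Note (not part of the patch): both test functions above are intentionally left as skipped skeletons, and the manual doc says to follow them by hand until an implementation lands. As a rough illustration only, the sketch below shows how the core of the global-setting case might be driven from this suite. The helper names and field access (`update_setting`, `create_and_check_volume`, `wait_for_volume_degraded`, `wait_for_volume_healthy`, `get_self_host_id`, `replica.running`) follow patterns already used in `test_scheduling.py`/`common.py` but their exact signatures are assumptions, the disk-level setting name `replica-disk-soft-anti-affinity` is assumed from issue #3823, and the per-node disk preparation (three differently sized disks, only two schedulable) is omitted.

```python
# Sketch only -- not part of this patch. Helper names/signatures and the
# disk-level setting name are assumptions; check them against common.py.
from common import (  # NOQA
    get_self_host_id, update_setting, create_and_check_volume,
    wait_for_volume_degraded, wait_for_volume_healthy,
    SETTING_REPLICA_NODE_SOFT_ANTI_AFFINITY,
    SETTING_REPLICA_ZONE_SOFT_ANTI_AFFINITY,
)

# Assumed name of the setting introduced by longhorn/longhorn#3823.
SETTING_REPLICA_DISK_SOFT_ANTI_AFFINITY = "replica-disk-soft-anti-affinity"


def global_disk_soft_anti_affinity_sketch(client, volume_name):  # NOQA
    # Relax node/zone anti-affinity, keep disk anti-affinity hard.
    update_setting(client, SETTING_REPLICA_NODE_SOFT_ANTI_AFFINITY, "true")
    update_setting(client, SETTING_REPLICA_ZONE_SOFT_ANTI_AFFINITY, "true")
    update_setting(client, SETTING_REPLICA_DISK_SOFT_ANTI_AFFINITY, "false")

    # Precondition handled outside this sketch: the single schedulable node
    # exposes three differently sized disks, only two of them schedulable.

    # Three replicas but only two schedulable disks: one replica cannot be
    # placed while disk anti-affinity is hard, so the volume stays degraded.
    volume = create_and_check_volume(client, volume_name, num_of_replicas=3)
    volume.attach(hostId=get_self_host_id())
    volume = wait_for_volume_degraded(client, volume_name)
    assert len([r for r in volume.replicas if r.running]) == 2

    # Relaxing disk anti-affinity should let two replicas share a disk and
    # bring the volume back to healthy.
    update_setting(client, SETTING_REPLICA_DISK_SOFT_ANTI_AFFINITY, "true")
    wait_for_volume_healthy(client, volume_name)
```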