ci: refactor robot test folder structure and generate junit.xml for qase sync

Signed-off-by: Yang Chiu <yang.chiu@suse.com>
yangchiu authored and khushboo-rancher committed Aug 29, 2024
1 parent b49ab88 commit ec43ace
Showing 19 changed files with 228 additions and 317 deletions.
2 changes: 1 addition & 1 deletion e2e/run.sh
@@ -1,3 +1,3 @@
#!/bin/bash

robot -P ./libs -d /tmp/test-report "$@" ./tests
robot -x junit.xml -P ./libs -d /tmp/test-report "$@" ./tests
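
The only functional change to the runner is the added -x junit.xml option: Robot Framework's -x/--xunit flag writes an xUnit-compatible result file in addition to the normal reports, and because -d /tmp/test-report sets the output directory, the file lands at /tmp/test-report/junit.xml where the Qase sync can pick it up. A minimal sketch of how a CI job might use it (the importer step is a placeholder, not something this commit adds):

#!/bin/bash
# Run the suite; run.sh forwards any extra arguments to robot via "$@".
./e2e/run.sh
# xUnit report produced by `robot -x junit.xml -d /tmp/test-report`:
ls /tmp/test-report/junit.xml
# <hand junit.xml to the Qase importer of your choice here>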
@@ -1,7 +1,7 @@
*** Settings ***
Documentation Negative Test Cases
Test Tags negative
Test Tags negative cluster

Resource ../keywords/common.resource
Resource ../keywords/deployment.resource
File renamed without changes.
@@ -1,27 +1,60 @@
*** Settings ***
Documentation Node disconnection test
... https://github.com/longhorn/longhorn/issues/1545
Documentation Negative Test Cases
Test Tags manual_test_case
Test Tags negative

Resource ../keywords/common.resource
Resource ../keywords/storageclass.resource
Resource ../keywords/volume.resource
Resource ../keywords/setting.resource
Resource ../keywords/network.resource
Resource ../keywords/storageclass.resource
Resource ../keywords/statefulset.resource
Resource ../keywords/workload.resource
Resource ../keywords/common.resource
Resource ../keywords/network.resource
Resource ../keywords/setting.resource

Test Setup Set test environment
Test Teardown Cleanup test resources

*** Variables ***
${LOOP_COUNT} 3
${LOOP_COUNT} 1
${LATENCY_IN_MS} 0
${RETRY_COUNT} 300
${RETRY_INTERVAL} 1
${RWX_VOLUME_FAST_FAILOVER} false
${DATA_ENGINE} v1

*** Test Cases ***
Disconnect Volume Node Network While Workload Heavy Writing
Given Set setting rwx-volume-fast-failover to ${RWX_VOLUME_FAST_FAILOVER}
And Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Create statefulset 0 using RWO volume with longhorn-test storageclass
And Create statefulset 1 using RWX volume with longhorn-test storageclass
FOR ${i} IN RANGE ${LOOP_COUNT}
And Keep writing data to pod of statefulset 0
And Keep writing data to pod of statefulset 1
When Disconnect volume nodes network for 20 seconds statefulset 0 statefulset 1
And Wait for volume of statefulset 0 healthy
And Wait for volume of statefulset 1 healthy
And Wait for workloads pods stable statefulset 0 statefulset 1
Then Check statefulset 0 works
And Check statefulset 1 works
END

Disconnect Volume Node Network For More Than Pod Eviction Timeout While Workload Heavy Writing
Given Set setting rwx-volume-fast-failover to ${RWX_VOLUME_FAST_FAILOVER}
And Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Create statefulset 0 using RWO volume with longhorn-test storageclass
And Create statefulset 1 using RWX volume with longhorn-test storageclass
FOR ${i} IN RANGE ${LOOP_COUNT}
And Keep writing data to pod of statefulset 0
And Keep writing data to pod of statefulset 1
When Disconnect volume nodes network for 360 seconds statefulset 0 statefulset 1
And Wait for volume of statefulset 0 healthy
And Wait for volume of statefulset 1 healthy
And Wait for workloads pods stable statefulset 0 statefulset 1
Then Check statefulset 0 works
And Check statefulset 1 works
END

Node Disconnect And Keep Data Writing And No Replica On The Disconnected Node
[Documentation] -- Manual test plan --
... Disable auto-salvage.
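
The new suite defaults above (for example ${LOOP_COUNT} dropping from 3 to 1, plus the added ${RETRY_COUNT}, ${RETRY_INTERVAL}, ${RWX_VOLUME_FAST_FAILOVER} and ${DATA_ENGINE} variables) can be overridden per run, since run.sh passes extra arguments straight through to robot. A hedged example with illustrative values only:

# Run only the negative-tagged cases and raise the loop count; values are arbitrary.
./e2e/run.sh -i negative -v LOOP_COUNT:3 -v RETRY_COUNT:600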
File renamed without changes.
File renamed without changes.
138 changes: 137 additions & 1 deletion e2e/tests/node_reboot.robot → e2e/tests/negative/node_reboot.robot
@@ -69,7 +69,7 @@ Reboot Node One By One While Workload Heavy Writing
And Check statefulset 2 works
END

Power Off Node One By Once For More Than Pod Eviction Timeout While Workload Heavy Writing
Power Off Node One By One For More Than Pod Eviction Timeout While Workload Heavy Writing
[Tags] reboot
Given Set setting rwx-volume-fast-failover to ${RWX_VOLUME_FAST_FAILOVER}
And Create storageclass strict-local with numberOfReplicas=1 dataLocality=strict-local
@@ -255,6 +255,142 @@ Reboot Volume Node While Heavy Writing And Recurring Jobs Exist
And Check volume 2 works
END

Physical Node Reboot With Attached Deployment
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Create persistentvolumeclaim 0 using ${VOLUME_TYPE} volume with longhorn-test storageclass
And Create deployment 0 with persistentvolumeclaim 0
And Write 100 MB data to file data in deployment 0

And Reboot volume node of deployment 0
And Wait for deployment 0 pods stable
Then Check deployment 0 data in file data is intact

Physical Node Reboot With Attached Statefulset
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Create statefulset 0 using ${VOLUME_TYPE} volume with longhorn-test storageclass
And Write 100 MB data to file data in statefulset 0

And Reboot volume node of statefulset 0
And Wait for statefulset 0 pods stable
Then Check statefulset 0 data in file data is intact

Single Replica Node Down Deletion Policy do-nothing With RWO Volume Replica Locate On Replica Node
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Set setting node-down-pod-deletion-policy to do-nothing
When Create persistentvolumeclaim 0 using RWO volume with longhorn-test storageclass
And Create deployment 0 with persistentvolumeclaim 0
And Wait for volume of deployment 0 healthy
And Write 100 MB data to file data in deployment 0

# Delete replicas to have the volume with its only replica located on different nodes.
And Update volume of deployment 0 replica count to 1
And Delete replica of deployment 0 volume on replica node
And Delete replica of deployment 0 volume on volume node
And Power off volume node of deployment 0
Then Wait for volume of deployment 0 stuck in state attaching
And Wait for deployment 0 pod stuck in Terminating on the original node

When Power on off node
And Wait for deployment 0 pods stable
And Check deployment 0 pod is Running on another node
Then Check deployment 0 data in file data is intact

Single Replica Node Down Deletion Policy do-nothing With RWO Volume Replica Locate On Volume Node
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Set setting node-down-pod-deletion-policy to do-nothing
When Create persistentvolumeclaim 0 using RWO volume with longhorn-test storageclass
And Create deployment 0 with persistentvolumeclaim 0
And Wait for volume of deployment 0 healthy
And Write 100 MB data to file data in deployment 0

# Delete replicas to have the volume with its only replica located on the same node.
And Update volume of deployment 0 replica count to 1
And Delete replica of deployment 0 volume on all replica node
And Power off volume node of deployment 0
Then Wait for volume of deployment 0 faulted
And Wait for deployment 0 pod stuck in Terminating on the original node

When Power on off node
And Wait for deployment 0 pods stable
And Check deployment 0 pod is Running on the original node
Then Check deployment 0 data in file data is intact

Single Replica Node Down Deletion Policy delete-deployment-pod With RWO Volume Replica Locate On Replica Node
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Set setting node-down-pod-deletion-policy to delete-deployment-pod
When Create persistentvolumeclaim 0 using RWO volume with longhorn-test storageclass
And Create deployment 0 with persistentvolumeclaim 0
And Wait for volume of deployment 0 healthy
And Write 100 MB data to file data in deployment 0

# Delete replicas to have the volume with its only replica located on different nodes.
And Update volume of deployment 0 replica count to 1
And Delete replica of deployment 0 volume on replica node
And Delete replica of deployment 0 volume on volume node
And Power off volume node of deployment 0
Then Wait for volume of deployment 0 attaching

And Wait for deployment 0 pods stable
Then Check deployment 0 data in file data is intact
And Power on off node

Single Replica Node Down Deletion Policy delete-deployment-pod With RWO Volume Replica Locate On Volume Node
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Set setting node-down-pod-deletion-policy to delete-deployment-pod
When Create persistentvolumeclaim 0 using RWO volume with longhorn-test storageclass
And Create deployment 0 with persistentvolumeclaim 0
And Wait for volume of deployment 0 healthy
And Write 100 MB data to file data in deployment 0

# Delete replicas to have the volume with its only replica located on the same node
And Update volume of deployment 0 replica count to 1
And Delete replica of deployment 0 volume on all replica node
And Power off volume node of deployment 0
Then Wait for volume of deployment 0 faulted
And Wait for deployment 0 pod stuck in ContainerCreating on another node

When Power on off node
And Wait for deployment 0 pods stable
And Check deployment 0 pod is Running on the original node
Then Check deployment 0 data in file data is intact

Single Replica Node Down Deletion Policy delete-both-statefulset-and-deployment-pod With RWO Volume Replica Locate On Replica Node
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Set setting node-down-pod-deletion-policy to delete-both-statefulset-and-deployment-pod
When Create statefulset 0 using RWO volume with longhorn-test storageclass
And Wait for volume of statefulset 0 healthy
And Write 100 MB data to file data in statefulset 0

# Delete replicas to have the volume with its only replica located on different nodes.
And Update volume of statefulset 0 replica count to 1
And Delete replica of statefulset 0 volume on replica node
And Delete replica of statefulset 0 volume on volume node
And Power off volume node of statefulset 0
Then Wait for volume of statefulset 0 attaching

And Wait for statefulset 0 pods stable
Then Check statefulset 0 data in file data is intact
And Power on off node

Single Replica Node Down Deletion Policy delete-both-statefulset-and-deployment-pod With RWO Volume Replica Locate On Volume Node
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Set setting node-down-pod-deletion-policy to delete-both-statefulset-and-deployment-pod
When Create statefulset 0 using RWO volume with longhorn-test storageclass
And Wait for volume of statefulset 0 healthy
And Write 100 MB data to file data in statefulset 0

# Delete replicas to have the volume with its only replica located on the same node.
And Update volume of statefulset 0 replica count to 1
And Delete replica of statefulset 0 volume on all replica node
And Power off volume node of statefulset 0
Then Wait for volume of statefulset 0 faulted
And Wait for statefulset 0 pod stuck in ContainerCreating on another node

When Power on off node
And Wait for statefulset 0 pods stable
And Check statefulset 0 pod is Running on the original node
Then Check statefulset 0 data in file data is intact

Reboot Replica Node While Heavy Writing And Recurring Jobs Exist
[Tags] recurring_job
Given Create volume 0 with size=2Gi numberOfReplicas=1 dataEngine=${DATA_ENGINE}
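
The Single Replica Node Down cases above toggle the node-down-pod-deletion-policy setting through the suite's Set setting keyword. For reference, a manual equivalent is to patch the corresponding Longhorn Setting custom resource; this sketch assumes the Setting CR keeps its value in a top-level value field, so verify it against your Longhorn version:

# Assumed manual equivalent of `Set setting node-down-pod-deletion-policy to do-nothing`
kubectl -n longhorn-system patch settings.longhorn.io node-down-pod-deletion-policy \
  --type=merge -p '{"value":"do-nothing"}'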
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
56 changes: 0 additions & 56 deletions e2e/tests/network_disconnect.robot

This file was deleted.

45 changes: 45 additions & 0 deletions e2e/tests/regression/test_basic.robot
@@ -95,3 +95,48 @@ Test Snapshot
And Validate snapshot 2 is not in volume 0 snapshot list

And Check volume 0 data is data 1

Replica Rebuilding
[Documentation] -- Manual test plan --
... 1. Create and attach a volume.
... 2. Write a large amount of data to the volume.
... 3. Disable disk scheduling and the node scheduling for one replica.
... 4. Crash the replica process. Verify
... - the corresponding replica is not in a running state.
... - the volume robustness stays Degraded.
... 5. Enable the disk scheduling. Verify nothing changes.
... 6. Enable the node scheduling. Verify
... - the failed replica is reused by Longhorn.
... - the data content is correct after rebuilding.
... - volume r/w works fine.
...
... == Not implemented ==
... 7. Directly delete one replica via UI. Verify
... - a new replica will be replenished immediately.
... - the rebuilding progress in UI page looks good.
... - the data content is correct after rebuilding.
... - volume r/w works fine.
When Create volume 0 with size=10Gi numberOfReplicas=3 dataEngine=${DATA_ENGINE}
And Attach volume 0 to node 0
And Wait for volume 0 healthy

And Write 1 GB data to volume 0

And Disable node 1 scheduling
And Disable node 1 default disk

And Crash volume 0 replica process on node 1
Then Wait volume 0 replica on node 1 stopped
And Wait for volume 0 degraded

And Enable node 1 default disk
Then Check volume 0 replica on node 1 kept in stopped
And Check for volume 0 kept in degraded

And Enable node 1 scheduling
Then Wait until volume 0 replica rebuilding started on node 1
And Wait for volume 0 healthy
And Check volume 0 crashed replica reused on node 1

And Check volume 0 data is intact
And Check volume 0 works
File renamed without changes.
43 changes: 0 additions & 43 deletions e2e/tests/test_cases/physical_node_reboot.robot

This file was deleted.
