diff --git a/CHANGELOG.md b/CHANGELOG.md index ab3e71198..aa9be175c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,34 @@ +## 1.9.0 (May 26, 2023) +[Full Changelog](https://github.com/nutanix/terraform-provider-nutanix/compare/feat/1.8.1...feat/1.9.0) + +**New Feature:** +- Add support for new Karbon features. [\#290](https://github.com/nutanix/terraform-provider-nutanix/issues/290) + + New Resource : + - nutanix_karbon_worker_nodepool + +**Implemented enhancements:** +- Adding timeouts in "nutanix_karbon_cluster" resource. [\#563](https://github.com/nutanix/terraform-provider-nutanix/pull/563) +- Vlan with 0 vlan_id should be supported in subnet resource. [\#562](https://github.com/nutanix/terraform-provider-nutanix/pull/562) +- Adding contributing doc and code of conduct in provider. [\#603](https://github.com/nutanix/terraform-provider-nutanix/pull/603) +- Schema Validation for NDB database provision when profiles are required or optional. [\#591](https://github.com/nutanix/terraform-provider-nutanix/issues/591) + +**Fixed bugs:** +- Intermittent "context deadline exceeded" errors on "nutanix_karbon_cluster" resource. [\#544](https://github.com/nutanix/terraform-provider-nutanix/issues/544) +- Resource "nutanix_subnet" fails when creating a managed IPAM network using a VLAN that overlaps with existing network. [\#543](https://github.com/nutanix/terraform-provider-nutanix/issues/543) +- In NDB database resource, Required profile values for provisioning a database with registered dbserver or with new dbserver should be properly listed. [#\604](https://github.com/nutanix/terraform-provider-nutanix/issues/604) + +**Closed issues:** +- Typo in documentation guest_customization_sysrep_custom_key_values. [\#495](https://github.com/nutanix/terraform-provider-nutanix/issues/495) +- Documentation about subnet_type for nutanix_subnet is missing. [\#506](https://github.com/nutanix/terraform-provider-nutanix/issues/506) +- parent_reference misspelled. [\#507](https://github.com/nutanix/terraform-provider-nutanix/issues/507) +- availability_zone_reference not returning in nutanix_clusters. [\#573](https://github.com/nutanix/terraform-provider-nutanix/issues/573) + +**Merged pull requests:** +- Add information about how to import virtual machine to state. [\#500](https://github.com/nutanix/terraform-provider-nutanix/pull/500) +- Removed UUID field from documentation of nutanix address group. [\#462](https://github.com/nutanix/terraform-provider-nutanix/pull/462) + + ## 1.8.1 (April 18, 2023) [Full Changelog](https://github.com/nutanix/terraform-provider-nutanix/compare/feat/1.8.0-ga...feat/1.8.1) diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 000000000..e1afa760f --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,76 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as +contributors and maintainers pledge to making participation in our project and +our community a harassment-free experience for everyone, regardless of age, body +size, disability, ethnicity, sex characteristics, gender identity and expression, +level of experience, education, socio-economic status, nationality, personal +appearance, race, religion, or sexual identity and orientation. 
+ +## Our Standards + +Examples of behavior that contributes to creating a positive environment +include: + +- Using welcoming and inclusive language +- Being respectful of differing viewpoints and experiences +- Gracefully accepting constructive criticism +- Focusing on what is best for the community +- Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +- The use of sexualized language or imagery and unwelcome sexual attention or + advances +- Trolling, insulting/derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or electronic + address, without explicit permission +- Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable +behavior and are expected to take appropriate and fair corrective action in +response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or +reject comments, commits, code, wiki edits, issues, and other contributions +that are not aligned to this Code of Conduct, or to ban temporarily or +permanently any contributor for other behaviors that they deem inappropriate, +threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces +when an individual is representing the project or its community. Examples of +representing a project or community include using an official project e-mail +address, posting via an official social media account, or acting as an appointed +representative at an online or offline event. Representation of a project may be +further defined and clarified by project maintainers. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported by contacting the project team. All +complaints will be reviewed and investigated and will result in a response that +is deemed necessary and appropriate to the circumstances. The project team is +obligated to maintain confidentiality with regard to the reporter of an incident. +Further details of specific enforcement policies may be posted separately. + +Project maintainers who do not follow or enforce the Code of Conduct in good +faith may face temporary or permanent repercussions as determined by other +members of the project's leadership. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, +available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html + +[homepage]: https://www.contributor-covenant.org + +For answers to common questions about this code of conduct, see +https://www.contributor-covenant.org/faq \ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 124e8cf3e..4a9fc9798 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,3 +1,193 @@ +## Contributing to the provider + +Thank you for your interest in contributing to the Nutanix provider. We welcome your contributions. Here you'll find information to help you get started with provider development. 
+
+
+## Cloning the Project
+
+First, you will want to clone the repository into your working directory:
+
+```shell
+git clone git@github.com:nutanix/terraform-provider-nutanix.git
+```
+
+## Running the Build
+
+After the clone has been completed, you can enter the provider directory and build the provider.
+
+```shell
+cd terraform-provider-nutanix
+make build
+```
+
+## Developing the Provider
+
+**NOTE:** Before you start work on a feature, please make sure to check the [issue tracker](https://github.com/nutanix/terraform-provider-nutanix/issues) and existing [pull requests](https://github.com/nutanix/terraform-provider-nutanix/pulls) to ensure that work is not being duplicated. For further clarification, you can also ask in a new issue.
+
+
+If you wish to work on the provider, you'll first need [Go][go-website] installed on your machine.
+
+[go-website]: https://golang.org/
+[gopath]: http://golang.org/doc/code.html#GOPATH
+
+
+## Building Provider
+
+### Installing the Local Plugin
+
+*Note:* manual provider installation is needed only for manual testing of a custom-built Nutanix provider plugin.
+
+The manual installation process differs depending on the Terraform version. Run the `terraform version` command to determine the version of your Terraform installation.
+
+1. Create `/registry.terraform.io/nutanixtemp/nutanix/1.99.99/darwin_amd64/` directories under:
+
+   * `~/.terraform.d/plugins` (Mac and Linux)
+
+   ```sh
+   mkdir -p ~/.terraform.d/plugins/registry.terraform.io/nutanixtemp/nutanix/1.99.99/darwin_amd64/
+   ```
+
+2. Build the **binary file**.
+   ```sh
+   go build -o bin/terraform-provider-nutanix_macosx-v1.99.99
+   ```
+
+3. Copy the Nutanix provider **binary file** there.
+
+   ```sh
+   cp bin/terraform-provider-nutanix_macosx-v1.99.99 ~/.terraform.d/plugins/registry.terraform.io/nutanixtemp/nutanix/1.99.99/darwin_amd64/terraform-provider-nutanix_v1.99.99
+   cp bin/terraform-provider-nutanix_macosx-v1.99.99 ~/.terraform.d/plugins/terraform-provider-nutanix_v1.99.99
+   ```
+
+4. In every Terraform template directory that uses the Nutanix provider, include the following `terraform.tf` file *(in addition to other Terraform files)*
+
+   ```hcl
+   terraform {
+     required_providers {
+       nutanix = {
+         source  = "nutanixtemp/nutanix"
+         version = "1.99.99"
+       }
+     }
+   }
+   ```
+
+5. **Done!**
+
+   The local Nutanix provider plugin will be used after `terraform init` is executed in the Terraform template directory.
+
+
+### Running tests of provider
+
+For running unit tests:
+```sh
+make test
+```
+
+For running integration tests:
+
+1. Add environment variables for setup-related details:
+```sh
+export NUTANIX_USERNAME=""
+export NUTANIX_PASSWORD=""
+export NUTANIX_INSECURE=true
+export NUTANIX_PORT=9440
+export NUTANIX_ENDPOINT=""
+export NUTANIX_STORAGE_CONTAINER=""
+export FOUNDATION_ENDPOINT=""
+export FOUNDATION_PORT=8000
+export NOS_IMAGE_TEST_URL=""
+export NDB_ENDPOINT=""
+export NDB_USERNAME=""
+export NDB_PASSWORD=""
+```
+
+2. Some tests need setup-related constants for resource creation, so add/replace details in test_config.json (for PC tests) and test_foundation_config.json (for Foundation, Foundation Central, and NDB tests).
+
+3. To run all tests:
+```sh
+make testacc
+```
+
+4. To run specific tests:
+```sh
+export TESTARGS='-run=TestAccNutanixPbr_WithSourceExternalDestinationNetwork'
+make testacc
+```
+
+5. To run a collection of tests:
+```sh
+export TESTARGS='-run=TestAccNutanixPbr*'
+make testacc
+```
+
+### Common Issues using the development binary
+
+Terraform downloads the released binary instead of the development one.
+
+Follow these steps to make Terraform use the development binary:
+
+1. Copy the development terraform binary into the root folder of the project (i.e. where your main.tf is); it should be named `terraform-provider-nutanix`
+2. Remove the entire “.terraform” directory.
+   ```sh
+   rm -rf .terraform/
+   ```
+
+3. Run the following commands in the same folder where you have copied the development terraform binary.
+   ```sh
+   terraform init -upgrade
+   terraform providers -version
+   ```
+
+4. You should see the version reported as “nutanix (unversioned)”
+5. Then run your main.tf
+
+
+## Steps to raise a Pull Request
+
+1. Create a GitHub issue with the following details.
+    * **Title** should contain one of the following
+        - [Feat] Develop terraform resource for \
+        - [Imprv] Modify terraform resource to support \
+        - [Bug] Fix \ bug in \
+
+    * **Templates** for raising issues are already defined. Refer below when creating any issue.
+        - [Issue:Bug Report](https://github.com/nutanix/terraform-provider-nutanix/issues/new?assignees=&labels=&projects=&template=bug_report.md&title=)
+        - [Issue:Feature Request](https://github.com/nutanix/terraform-provider-nutanix/issues/new?assignees=&labels=&projects=&template=feature_request.md&title=)
+
+2. Create one of the following git branches from the `master` branch. Use the `issue#`.
+    * `feat/_issue#`
+    * `imprv/issue#`
+    * `bug/issue#`
+
+3. Please use code comments on the Pull Request.
+
+4. If a reviewer commented on your PR or asked for changes, please mark the discussion as resolved after you make the suggested changes. PRs with unresolved issues should not be merged.
+5. Tests are mandatory for each PR except documentation changes.
+6. Ensure 85% code coverage on the pull request. Pull requests with less than 85% coverage will be rejected.
+7. Link the pull request with the associated issue.
+8. Once the PR is merged, close the issue.
+
+## Additional Resources
+
+We've got a handful of resources outside of this repository that will help users understand the interactions between Terraform and Nutanix.
+
+* YouTube
+  _ Overview Video: [](https://www.youtube.com/watch?v=V8_Lu1mxV6g)
+  _ Working with images: [](https://www.youtube.com/watch?v=IW0eQevZ73I)
+* Nutanix GitHub
+  _ [](https://github.com/nutanix/terraform-provider-nutanix)
+  _ Private repo until code goes upstream
+* Jon’s GitHub
+  _ [](https://github.com/JonKohler/ThisOldCloud/tree/master/Terraform-Nutanix)
+  _ Contains sample TF’s and PDFs from the youtube videos
+* Slack channel \* User community slack channel is available on nutanix.slack.com. Email terraform@nutanix.com to gain entry.
+
 # Nutanix Contributor License Agreement
 
 By submitting a pull request or otherwise contributing to the project, you agree to the following terms and conditions. You reserve all right and title in your contributions.
@@ -9,4 +199,4 @@ You hereby grant Nutanix and to recipients of software distributed by Nutanix, a
 You represent that your contributions are your original creation, and that you are legally entitled to grant the above license. If your contributions include other third party code, you will include complete details on any third party licenses or restrictions associated with your contributions.
 
 ## Notifications
-You will notify Nutanix if you become aware that the above representations are inaccurate.
+You will notify Nutanix if you become aware that the above representations are inaccurate.
\ No newline at end of file diff --git a/README.md b/README.md index f274a6a16..439433ad1 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ Terraform provider plugin to integrate with Nutanix Enterprise Cloud -NOTE: The latest version of the Nutanix provider is [v1.8.1](https://github.com/nutanix/terraform-provider-nutanix/releases/tag/v1.8.1) +NOTE: The latest version of the Nutanix provider is [v1.9.0](https://github.com/nutanix/terraform-provider-nutanix/releases/tag/v1.9.0) Modules based on Terraform Nutanix Provider can be found here : [Modules](https://github.com/nutanix/terraform-provider-nutanix/tree/master/modules) ## Build, Quality Status @@ -46,6 +46,8 @@ The Terraform Nutanix provider is designed to work with Nutanix Prism Central an > For the 1.7.1 release of the provider it will have N-2 compatibility with the Prism Central APIs. This release was tested against Prism Central versions pc2022.6, pc2022.4.0.1 and pc2022.1.0.2. +> For the 1.9.0 release of the provider it will have N-1 compatibility with the Prism Central APIs. This release was tested against Prism Central versions pc2022.9 and pc2023.1.0.1. + ### note With v1.6.1 release of flow networking feature in provider, IAMv2 setups would be mandate. Also, there is known issue for access_control_policies resource where update would be failing. We are continuously tracking the issue internally. @@ -210,6 +212,7 @@ From foundation getting released in 1.5.0-beta, provider configuration will acco * nutanix_ndb_cluster * nutanix_ndb_maintenance_task * nutanix_ndb_maintenance_window +* nutanix_karbon_worker_nodepool ## Data Sources @@ -289,167 +292,10 @@ From foundation getting released in 1.5.0-beta, provider configuration will acco * nutanix_ndb_maintenance_windows * nutanix_ndb_network_available_ips -## Quick Install - -### Install Dependencies - -* [Terraform](https://www.terraform.io/downloads.html) 0.12+ - -### For developing or build from source - - -* [Go](https://golang.org/doc/install) 1.12+ (to build the provider plugin) - - -### Building/Developing Provider - -We recomment to use Go 1.12+ to be able to use `go modules` - -```sh -$ git clone https://github.com/nutanix/terraform-provider-nutanix.git -``` - -Enter the provider directory and build the provider - -```sh -$ make tools -$ make build -``` - -This will create a binary file `terraform-provider-nutanix` you can copy to your terraform specific project. - -Alternative build: with our demo - -```sh -$ make tools -$ go build -o examples/terraform-provider-nutanix -$ cd examples -$ terraform init #to try out our demo -``` - -If you need multi-OS binaries such as Linux, macOS, Windows. Run the following command. - -```sh -$ make tools -$ make cibuild -``` - -This command will create a `pkg/` directory with all the binaries for the most popular OS. - -### Running tests of provider - -For running unit tests: -```sh -make test -``` - -For running integration tests: - -1. Add environment variables for setup related details: -```ssh -export NUTANIX_USERNAME="" -export NUTANIX_PASSWORD="" -export NUTANIX_INSECURE=true -export NUTANIX_PORT=9440 -export NUTANIX_ENDPOINT="" -export NUTANIX_STORAGE_CONTAINER="" -export FOUNDATION_ENDPOINT="" -export FOUNDATION_PORT=8000 -export NOS_IMAGE_TEST_URL="" -``` - -2. Some tests need setup related constants for resource creation. So add/replace details in test_config.json (for pc tests) and test_foundation_config.json (for foundation and foundation central tests) - -3. To run all tests: -```ssh -make testacc -``` - -4. 
To run specific tests: -```ssh -export TESTARGS='-run=TestAccNutanixPbr_WithSourceExternalDestinationNetwork' -make testacc -``` - -5. To run collection of tests: -``` ssh -export TESTARGS='-run=TestAccNutanixPbr*' -make testacc -``` - -### Common Issues using the development binary. - -Terraform download the released binary instead developent one. - -Just follow this steps to get the development binary: - -1. Copy the development terraform binary in the root folder of the project (i.e. where your main.tf is), this should be named `terraform-provider-nutanix` -2. Remove the entire “.terraform” directory. - ```sh - rm -rf .terraform/ - ``` - -3. Run the following command in the same folder where you have copied the development terraform binary. - ```sh - terraform init -upgrade - terraform providers -version - ``` - -4. You should see version as “nutanix (unversioned)” -5. Then run your main.tf - -## Release it - -1. Install `goreleaser` tool: - - ```bash - go get -v github.com/goreleaser/goreleaser - cd $GOPATH/src/github.com/goreleaser/goreleaser - go install - ``` - - Alternatively you can download a latest release from [goreleaser Releases Page](https://github.com/goreleaser/goreleaser/releases) - -1. Clean up folder `(builds)` if exists - -1. Make sure that the repository state is clean: - - ```bash - git status - ``` - -1. Tag the release: - - ```bash - git tag v1.1.0 - ``` - -1. Run `goreleaser`: - - ```bash - cd (TODO: go dir) - goreleaser --skip-publish v1.1.0 - ``` - -1. Check builds inside `(TODO: build dir)` directory. - -1. Publish release tag to GitHub: - - ```bash - git push origin v1.1.0 - ``` -## Additional Resources +## Developing the provider -We've got a handful of resources outside of this repository that will help users understand the interactions between terraform and Nutanix +The Nutanix Provider for Terraform is the work of many contributors. We appreciate your help! -* YouTube - _ Overview Video: [](https://www.youtube.com/watch?v=V8_Lu1mxV6g) - _ Working with images: [](https://www.youtube.com/watch?v=IW0eQevZ73I) -* Nutanix GitHub - _ [](https://github.com/nutanix/terraform-provider-nutanix) - _ Private repo until code goes upstream -* Jon’s GitHub - _ [](https://github.com/JonKohler/ThisOldCloud/tree/master/Terraform-Nutanix) - _ Contains sample TF’s and PDFs from the youtube videos -* Slack channel \* User community slack channel is available on nutanix.slack.com. Email terraform@nutanix.com to gain entry. 
+* [Contribution Guidelines](./CONTRIBUTING.md) +* [Code of Conduct](./CODE_OF_CONDUCT.md) diff --git a/client/karbon/karbon_cluster_service.go b/client/karbon/karbon_cluster_service.go index 18243f50f..ad5f10896 100644 --- a/client/karbon/karbon_cluster_service.go +++ b/client/karbon/karbon_cluster_service.go @@ -29,6 +29,11 @@ type ClusterService interface { ListPrivateRegistries(karbonClusterName string) (*PrivateRegistryListResponse, error) AddPrivateRegistry(karbonClusterName string, createRequest PrivateRegistryOperationIntentInput) (*PrivateRegistryResponse, error) DeletePrivateRegistry(karbonClusterName string, privateRegistryName string) (*PrivateRegistryOperationResponse, error) + // worker nodes + AddWorkerNodePool(karbonClusterName string, addPoolRequest *ClusterNodePool) (*ClusterActionResponse, error) + RemoveWorkerNodePool(karbonClusterName, karbonNodepoolName string, removeWorkerPool *RemoveWorkerNodeRequest) (*ClusterActionResponse, error) + DeleteWorkerNodePool(karbonClusterName, workerNodepoolName string) (*ClusterActionResponse, error) + UpdateWorkerNodeLables(karbonClusterName, workerNodepoolName string, body *UpdateWorkerNodeLabels) (*ClusterActionResponse, error) } // karbon 2.1 @@ -201,3 +206,59 @@ func (op ClusterOperations) ScaleDownKarbonCluster(karbonClusterName, karbonNode return karbonClusterActionResponse, op.client.Do(ctx, req, karbonClusterActionResponse) } + +func (op ClusterOperations) AddWorkerNodePool(karbonClusterName string, addPoolRequest *ClusterNodePool) (*ClusterActionResponse, error) { + ctx := context.TODO() + + path := fmt.Sprintf("/v1-alpha.1/k8s/clusters/%s/add-node-pool", karbonClusterName) + req, err := op.client.NewRequest(ctx, http.MethodPost, path, addPoolRequest) + karbonClusterActionResponse := new(ClusterActionResponse) + + if err != nil { + return nil, err + } + + return karbonClusterActionResponse, op.client.Do(ctx, req, karbonClusterActionResponse) +} + +func (op ClusterOperations) RemoveWorkerNodePool(karbonClusterName, karbonNodepoolName string, removeWorkerPool *RemoveWorkerNodeRequest) (*ClusterActionResponse, error) { + ctx := context.TODO() + + path := fmt.Sprintf("/v1-alpha.1/k8s/clusters/%s/node-pools/%s/remove-nodes", karbonClusterName, karbonNodepoolName) + req, err := op.client.NewRequest(ctx, http.MethodPost, path, removeWorkerPool) + karbonClusterActionResponse := new(ClusterActionResponse) + + if err != nil { + return nil, err + } + + return karbonClusterActionResponse, op.client.Do(ctx, req, karbonClusterActionResponse) +} + +func (op ClusterOperations) DeleteWorkerNodePool(karbonClusterName, workerNodepoolName string) (*ClusterActionResponse, error) { + ctx := context.TODO() + + path := fmt.Sprintf("/v1-beta.1/k8s/clusters/%s/node-pools/%s", karbonClusterName, workerNodepoolName) + req, err := op.client.NewRequest(ctx, http.MethodDelete, path, "") + karbonClusterActionResponse := new(ClusterActionResponse) + + if err != nil { + return nil, err + } + + return karbonClusterActionResponse, op.client.Do(ctx, req, karbonClusterActionResponse) +} + +func (op ClusterOperations) UpdateWorkerNodeLables(karbonClusterName, workerNodePoolName string, body *UpdateWorkerNodeLabels) (*ClusterActionResponse, error) { + ctx := context.TODO() + + path := fmt.Sprintf("/v1-alpha.1/k8s/clusters/%s/node-pools/%s/update-labels", karbonClusterName, workerNodePoolName) + req, err := op.client.NewRequest(ctx, http.MethodPost, path, body) + karbonClusterActionResponse := new(ClusterActionResponse) + + if err != nil { + return nil, err + } + + 
return karbonClusterActionResponse, op.client.Do(ctx, req, karbonClusterActionResponse) +} diff --git a/client/karbon/karbon_cluster_structs.go b/client/karbon/karbon_cluster_structs.go index dd01337a7..1427d5be9 100644 --- a/client/karbon/karbon_cluster_structs.go +++ b/client/karbon/karbon_cluster_structs.go @@ -58,6 +58,7 @@ type ClusterNodePool struct { NodeOSVersion *string `json:"node_os_version" mapstructure:"node_os_version, omitempty"` NumInstances *int64 `json:"num_instances" mapstructure:"num_instances, omitempty"` Nodes *[]ClusterNodeIntentResponse `json:"nodes" mapstructure:"nodes, omitempty"` + Labels map[string]string `json:"labels,omitempty" mapstructure:"labels,omitempty"` } type ClusterNodeIntentResponse struct { @@ -139,6 +140,7 @@ type ClusterNodePoolAHVConfig struct { MemoryMib int64 `json:"memory_mib" mapstructure:"memory_mib, omitempty"` NetworkUUID string `json:"network_uuid" mapstructure:"network_uuid, omitempty"` PrismElementClusterUUID string `json:"prism_element_cluster_uuid" mapstructure:"prism_element_cluster_uuid, omitempty"` + IscsiNetworkUUID string `json:"iscsi_network_uuid" mapstructure:"iscsi_network_uuid"` } type ClusterStorageClassConfigIntentInput struct { @@ -204,3 +206,12 @@ type ClusterScaleDownIntentInput struct { Count int64 `json:"count" mapstructure:"count, omitempty"` NodeList []*string `json:"node_list" mapstructure:"node_list, omitempty"` } + +type RemoveWorkerNodeRequest struct { + NodeList []*string `json:"node_list,omitempty"` +} + +type UpdateWorkerNodeLabels struct { + AddLabel map[string]string `json:"add_labels,omitempty"` + RemoveLabel []string `json:"remove_labels,omitempty"` +} diff --git a/nutanix/common_schema_validation.go b/nutanix/common_schema_validation.go index 752762509..043bcbb1d 100644 --- a/nutanix/common_schema_validation.go +++ b/nutanix/common_schema_validation.go @@ -6,19 +6,33 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" ) -var requiredResourceFields map[string][]string = map[string][]string{ - "era_provision_database": {"databasetype", "dbparameterprofileid", "timemachineinfo", "nodes"}, +var requiredResourceFields map[string]map[string][]string = map[string]map[string][]string{ + "ndb_provision_database": { + "createdbserver": {"databasetype", "softwareprofileid", "softwareprofileversionid", "computeprofileid", + "networkprofileid", "dbparameterprofileid", "nxclusterid", "sshpublickey", "timemachineinfo", "nodes"}, + "registerdbserver": {"databasetype", "dbparameterprofileid", "timemachineinfo", "nodes"}}, } func schemaValidation(resourceName string, d *schema.ResourceData) error { var diagMap []string if vals, ok := requiredResourceFields[resourceName]; ok { - for _, attr := range vals { - if _, ok := d.GetOk(attr); !ok { - diagMap = append(diagMap, attr) + if dbVal, ok := d.GetOkExists("createdbserver"); ok { + if dbVal.(bool) { + createVals := vals["createdbserver"] + for _, attr := range createVals { + if _, ok := d.GetOk(attr); !ok { + diagMap = append(diagMap, attr) + } + } + } else { + registerVals := vals["registerdbserver"] + for _, attr := range registerVals { + if _, ok := d.GetOk(attr); !ok { + diagMap = append(diagMap, attr) + } + } } } - if diagMap != nil { return fmt.Errorf("missing required fields are %s for %s", diagMap, resourceName) } diff --git a/nutanix/data_source_nutanix_karbon_cluster_kubeconfig_test.go b/nutanix/data_source_nutanix_karbon_cluster_kubeconfig_test.go index 04bf2855f..5f6821744 100644 --- 
a/nutanix/data_source_nutanix_karbon_cluster_kubeconfig_test.go +++ b/nutanix/data_source_nutanix_karbon_cluster_kubeconfig_test.go @@ -1,7 +1,6 @@ package nutanix import ( - "fmt" "testing" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" @@ -9,21 +8,21 @@ import ( ) func TestAccKarbonClusterKubeConfigDataSource_basic(t *testing.T) { - t.Skip() r := acctest.RandInt() subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + KubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccKarbonClusterKubeConfigDataSourceConfig(subnetName, r, defaultContainter, 1), + Config: testAccKarbonClusterKubeConfigDataSourceConfig(subnetName, r, defaultContainter, 1, KubernetesVersion), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "data.nutanix_karbon_cluster_kubeconfig.config", "id"), resource.TestCheckResourceAttr( - "data.nutanix_karbon_cluster_kubeconfig.config", "karbon_cluster_name", fmt.Sprintf("test-karbon-%d", r)), + "data.nutanix_karbon_cluster_kubeconfig.config", "name", "test-karbon"), ), }, }, @@ -34,35 +33,40 @@ func TestAccKarbonClusterKubeConfigDataSource_basicByName(t *testing.T) { r := acctest.RandInt() subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + KubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccKarbonClusterKubeConfigDataSourceConfigByName(subnetName, r, defaultContainter, 1), + Config: testAccKarbonClusterKubeConfigDataSourceConfigByName(subnetName, r, defaultContainter, 1, KubernetesVersion), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "data.nutanix_karbon_cluster_kubeconfig.config", "id"), resource.TestCheckResourceAttr( - "data.nutanix_karbon_cluster_kubeconfig.config", "karbon_cluster_name", fmt.Sprintf("test-karbon-%d", r)), + "data.nutanix_karbon_cluster_kubeconfig.config", "karbon_cluster_name", "test-karbon"), ), }, }, }) } -func testAccKarbonClusterKubeConfigDataSourceConfig(subnetName string, r int, containter string, workers int) string { - return testAccNutanixKarbonClusterConfig(subnetName, r, containter, workers, "flannel") + ` - data "nutanix_karbon_cluster_kubeconfig" "config" { - karbon_cluster_id = nutanix_karbon_cluster.cluster.id - } +func testAccKarbonClusterKubeConfigDataSourceConfig(subnetName string, r int, containter string, workers int, k8s string) string { + return ` + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_karbon_cluster_kubeconfig" "config" { + karbon_cluster_id = data.nutanix_karbon_clusters.kclusters.clusters.0.uuid + } ` } -func testAccKarbonClusterKubeConfigDataSourceConfigByName(subnetName string, r int, containter string, workers int) string { - return testAccNutanixKarbonClusterConfig(subnetName, r, containter, workers, "flannel") + ` - data "nutanix_karbon_cluster_kubeconfig" "config" { - karbon_cluster_name = nutanix_karbon_cluster.cluster.name - } +func testAccKarbonClusterKubeConfigDataSourceConfigByName(subnetName string, r int, containter string, workers int, k8s string) string { + return ` + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_karbon_cluster_kubeconfig" "config" { + karbon_cluster_name = data.nutanix_karbon_clusters.kclusters.clusters.0.name + } ` } diff --git 
a/nutanix/data_source_nutanix_karbon_cluster_ssh_test.go b/nutanix/data_source_nutanix_karbon_cluster_ssh_test.go index 2c144a120..488088f22 100644 --- a/nutanix/data_source_nutanix_karbon_cluster_ssh_test.go +++ b/nutanix/data_source_nutanix_karbon_cluster_ssh_test.go @@ -8,17 +8,17 @@ import ( ) func TestAccKarbonClusterSSHDataSource_basicx(t *testing.T) { - t.Skip() r := acctest.RandInt() //resourceName := "nutanix_karbon_cluster.cluster" subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + KubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccKarbonClusterSSHDataSourceConfig(subnetName, r, defaultContainter, 1), + Config: testAccKarbonClusterSSHDataSourceConfig(subnetName, r, defaultContainter, 1, KubernetesVersion), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "data.nutanix_karbon_cluster_ssh.ssh", "id"), @@ -35,12 +35,13 @@ func TestAccKarbonClusterSSHDataSource_basicByName(t *testing.T) { //resourceName := "nutanix_karbon_cluster.cluster" subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + KubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccKarbonClusterSSHDataSourceConfigByName(subnetName, r, defaultContainter, 1), + Config: testAccKarbonClusterSSHDataSourceConfigByName(subnetName, r, defaultContainter, 1, KubernetesVersion), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "data.nutanix_karbon_cluster_ssh.ssh", "id"), @@ -52,18 +53,22 @@ func TestAccKarbonClusterSSHDataSource_basicByName(t *testing.T) { }) } -func testAccKarbonClusterSSHDataSourceConfig(subnetName string, r int, containter string, workers int) string { - return testAccNutanixKarbonClusterConfig(subnetName, r, containter, workers, "flannel") + ` - data "nutanix_karbon_cluster_ssh" "ssh" { - karbon_cluster_id = nutanix_karbon_cluster.cluster.id - } +func testAccKarbonClusterSSHDataSourceConfig(subnetName string, r int, containter string, workers int, k8s string) string { + return ` + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_karbon_cluster_ssh" "ssh" { + karbon_cluster_id = data.nutanix_karbon_clusters.kclusters.clusters.0.uuid + } ` } -func testAccKarbonClusterSSHDataSourceConfigByName(subnetName string, r int, containter string, workers int) string { - return testAccNutanixKarbonClusterConfig(subnetName, r, containter, workers, "flannel") + ` - data "nutanix_karbon_cluster_ssh" "ssh" { - karbon_cluster_name = nutanix_karbon_cluster.cluster.name - } +func testAccKarbonClusterSSHDataSourceConfigByName(subnetName string, r int, containter string, workers int, k8s string) string { + return ` + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_karbon_cluster_ssh" "ssh" { + karbon_cluster_name = data.nutanix_karbon_clusters.kclusters.clusters.0.name + } ` } diff --git a/nutanix/data_source_nutanix_karbon_cluster_test.go b/nutanix/data_source_nutanix_karbon_cluster_test.go index 107c47340..80953c255 100644 --- a/nutanix/data_source_nutanix_karbon_cluster_test.go +++ b/nutanix/data_source_nutanix_karbon_cluster_test.go @@ -8,17 +8,17 @@ import ( ) func TestAccKarbonClusterDataSource_basic(t *testing.T) { - t.Skip() r := acctest.RandInt() dataSourceName := 
"data.nutanix_karbon_cluster.kcluster" subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + KubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccKarbonClusterDataSourceConfig(subnetName, r, defaultContainter, 1), + Config: testAccKarbonClusterDataSourceConfig(subnetName, r, defaultContainter, 1, KubernetesVersion), Check: resource.ComposeTestCheckFunc( testAccCheckNutanixKarbonClusterExists(dataSourceName), resource.TestCheckResourceAttrSet( @@ -34,12 +34,13 @@ func TestAccKarbonClusterDataSource_basicByName(t *testing.T) { //resourceName := "nutanix_karbon_cluster.cluster" subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + KubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccKarbonClusterDataSourceConfigByName(subnetName, r, defaultContainter, 1), + Config: testAccKarbonClusterDataSourceConfigByName(subnetName, r, defaultContainter, 1, KubernetesVersion), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "data.nutanix_karbon_cluster.kcluster", "id"), @@ -49,18 +50,23 @@ func TestAccKarbonClusterDataSource_basicByName(t *testing.T) { }) } -func testAccKarbonClusterDataSourceConfig(subnetName string, r int, containter string, workers int) string { - return testAccNutanixKarbonClusterConfig(subnetName, r, containter, workers, "flannel") + ` - data "nutanix_karbon_cluster" "kcluster" { - karbon_cluster_id = nutanix_karbon_cluster.cluster.id - } +func testAccKarbonClusterDataSourceConfig(subnetName string, r int, containter string, workers int, k8s string) string { + return ` + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_karbon_cluster" "kcluster" { + karbon_cluster_id = data.nutanix_karbon_clusters.kclusters.clusters.0.uuid + } ` } -func testAccKarbonClusterDataSourceConfigByName(subnetName string, r int, containter string, workers int) string { - return testAccNutanixKarbonClusterConfig(subnetName, r, containter, workers, "flannel") + ` - data "nutanix_karbon_cluster" "kcluster" { - karbon_cluster_name = nutanix_karbon_cluster.cluster.name - } +func testAccKarbonClusterDataSourceConfigByName(subnetName string, r int, containter string, workers int, k8s string) string { + return ` + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_karbon_cluster" "kcluster" { + karbon_cluster_name = data.nutanix_karbon_clusters.kclusters.clusters.0.name + } + ` } diff --git a/nutanix/data_source_nutanix_karbon_clusters_test.go b/nutanix/data_source_nutanix_karbon_clusters_test.go index 54c529a42..d3ed8b3a4 100644 --- a/nutanix/data_source_nutanix_karbon_clusters_test.go +++ b/nutanix/data_source_nutanix_karbon_clusters_test.go @@ -12,12 +12,13 @@ func TestAccKarbonClustersDataSource_basic(t *testing.T) { //resourceName := "nutanix_karbon_cluster.cluster" subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + KubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccKarbonClustersDataSourceConfig(subnetName, r, defaultContainter, 1), + Config: testAccKarbonClustersDataSourceConfig(subnetName, r, defaultContainter, 1, 
KubernetesVersion), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "data.nutanix_karbon_clusters.kclusters", "clusters.#"), @@ -27,9 +28,9 @@ func TestAccKarbonClustersDataSource_basic(t *testing.T) { }) } -func testAccKarbonClustersDataSourceConfig(subnetName string, r int, containter string, workers int) string { - return testAccNutanixKarbonClusterConfig(subnetName, r, containter, workers, "flannel") + ` - data "nutanix_karbon_clusters" "kclusters" {} +func testAccKarbonClustersDataSourceConfig(subnetName string, r int, containter string, workers int, k8s string) string { + return ` + data "nutanix_karbon_clusters" "kclusters" {} ` } diff --git a/nutanix/main_test.go b/nutanix/main_test.go index 87954c738..028482806 100644 --- a/nutanix/main_test.go +++ b/nutanix/main_test.go @@ -24,8 +24,9 @@ type TestConfig struct { ExpectedDisplayName string `json:"expected_display_name"` DirectoryServiceUUID string `json:"directory_service_uuid"` } `json:"users"` - NodeOsVersion string `json:"node_os_version"` - AdRuleTarget struct { + KubernetesVersion string `json:"kubernetes_version"` + NodeOsVersion string `json:"node_os_version"` + AdRuleTarget struct { Name string `json:"name"` Values string `json:"values"` } `json:"ad_rule_target"` diff --git a/nutanix/provider.go b/nutanix/provider.go index 694e79644..33ab866a7 100644 --- a/nutanix/provider.go +++ b/nutanix/provider.go @@ -263,6 +263,7 @@ func Provider() *schema.Provider { "nutanix_ndb_stretched_vlan": resourceNutanixNDBStretchedVlan(), "nutanix_ndb_clone_refresh": resourceNutanixNDBCloneRefresh(), "nutanix_ndb_cluster": resourceNutanixNDBCluster(), + "nutanix_karbon_worker_nodepool": resourceNutanixKarbonWorkerNodePool(), }, ConfigureContextFunc: providerConfigure, } diff --git a/nutanix/resource_nutanix_karbon_cluster.go b/nutanix/resource_nutanix_karbon_cluster.go index 2de9dafe7..ab92fb8b4 100644 --- a/nutanix/resource_nutanix_karbon_cluster.go +++ b/nutanix/resource_nutanix_karbon_cluster.go @@ -63,6 +63,11 @@ func resourceNutanixKarbonCluster() *schema.Resource { Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(DEFAULTWAITTIMEOUT * time.Minute), + Update: schema.DefaultTimeout(DEFAULTWAITTIMEOUT * time.Minute), + Delete: schema.DefaultTimeout(DEFAULTWAITTIMEOUT * time.Minute), + }, SchemaVersion: 1, Schema: KarbonClusterResourceMap(), } @@ -75,6 +80,7 @@ func KarbonClusterResourceMap() map[string]*schema.Schema { Optional: true, Default: DEFAULTWAITTIMEOUT, ValidateFunc: validation.IntAtLeast(MINIMUMWAITTIMEOUT), + Deprecated: "use timeouts instead", }, "name": { Type: schema.TypeString, @@ -513,7 +519,7 @@ func resourceNutanixKarbonClusterCreate(ctx context.Context, d *schema.ResourceD } // Set terraform state id d.SetId(createClusterResponse.ClusterUUID) - err = WaitForKarbonCluster(ctx, client, timeout, createClusterResponse.TaskUUID) + err = WaitForKarbonCluster(ctx, client, timeout, createClusterResponse.TaskUUID, d.Timeout(schema.TimeoutCreate)) if err != nil { return diag.FromErr(err) } @@ -638,9 +644,11 @@ func resourceNutanixKarbonClusterUpdate(ctx context.Context, d *schema.ResourceD if err != nil { return diag.FromErr(err) } - err = WaitForKarbonCluster(ctx, client, timeout, taskUUID) - if err != nil { - return diag.FromErr(err) + if taskUUID != "" { + err = WaitForKarbonCluster(ctx, client, timeout, taskUUID, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return diag.FromErr(err) + } 
} } if d.HasChange("private_registry") { @@ -685,7 +693,7 @@ func resourceNutanixKarbonClusterDelete(ctx context.Context, d *schema.ResourceD if err != nil { return diag.Errorf("error while deleting Karbon Cluster UUID(%s): %s", d.Id(), err) } - err = WaitForKarbonCluster(ctx, client, timeout, clusterDeleteResponse.TaskUUID) + err = WaitForKarbonCluster(ctx, client, timeout, clusterDeleteResponse.TaskUUID, d.Timeout(schema.TimeoutDelete)) if err != nil { return diag.Errorf("error while waiting for Karbon Cluster deletion with UUID(%s): %s", d.Id(), err) } @@ -984,12 +992,12 @@ func GetNodePoolsForCluster(conn *karbon.Client, karbonClusterName string, nodep return nodepoolStructs, nil } -func WaitForKarbonCluster(ctx context.Context, client *Client, waitTimeoutMinutes int64, taskUUID string) error { +func WaitForKarbonCluster(ctx context.Context, client *Client, waitTimeoutMinutes int64, taskUUID string, timeout time.Duration) error { stateConf := &resource.StateChangeConf{ Pending: []string{"QUEUED", "RUNNING"}, Target: []string{"SUCCEEDED"}, Refresh: taskStateRefreshFunc(client.API, taskUUID), - Timeout: time.Duration(waitTimeoutMinutes) * time.Minute, + Timeout: timeout, Delay: WAITDELAY, MinTimeout: WAITMINTIMEOUT, } diff --git a/nutanix/resource_nutanix_karbon_cluster_test.go b/nutanix/resource_nutanix_karbon_cluster_test.go index aeea48b1e..fd6367db8 100644 --- a/nutanix/resource_nutanix_karbon_cluster_test.go +++ b/nutanix/resource_nutanix_karbon_cluster_test.go @@ -12,18 +12,18 @@ import ( ) func TestAccKarbonCluster_basic(t *testing.T) { - t.Skip() r := acctest.RandInt() resourceName := "nutanix_karbon_cluster.cluster" subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + kubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckNutanixKarbonClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 1, "flannel"), + Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 1, "flannel", kubernetesVersion), Check: resource.ComposeTestCheckFunc( testAccCheckNutanixKarbonClusterExists(resourceName), resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("test-karbon-%d", r)), @@ -35,7 +35,7 @@ func TestAccKarbonCluster_basic(t *testing.T) { ), }, { - Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 2, "flannel"), + Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 2, "flannel", kubernetesVersion), Check: resource.ComposeTestCheckFunc( testAccCheckNutanixKarbonClusterExists(resourceName), resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("test-karbon-%d", r)), @@ -50,7 +50,7 @@ func TestAccKarbonCluster_basic(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"version", "master_node_pool", "worker_node_pool", "storage_class_config"}, //Wil be fixed on future API versions + ImportStateVerifyIgnore: []string{"version", "master_node_pool", "worker_node_pool", "storage_class_config", "wait_timeout_minutes"}, //Wil be fixed on future API versions }, }, }) @@ -58,17 +58,17 @@ func TestAccKarbonCluster_basic(t *testing.T) { func TestAccKarbonCluster_scaleDown(t *testing.T) { r := acctest.RandInt() - t.Skip() resourceName := "nutanix_karbon_cluster.cluster" subnetName := testVars.SubnetName defaultContainter := 
testVars.DefaultContainerName + kubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckNutanixKarbonClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 3, "flannel"), + Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 3, "flannel", kubernetesVersion), Check: resource.ComposeTestCheckFunc( testAccCheckNutanixKarbonClusterExists(resourceName), resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("test-karbon-%d", r)), @@ -80,7 +80,7 @@ func TestAccKarbonCluster_scaleDown(t *testing.T) { ), }, { - Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 1, "flannel"), + Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 1, "flannel", kubernetesVersion), Check: resource.ComposeTestCheckFunc( testAccCheckNutanixKarbonClusterExists(resourceName), resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("test-karbon-%d", r)), @@ -95,7 +95,7 @@ func TestAccKarbonCluster_scaleDown(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"version", "master_node_pool", "worker_node_pool", "storage_class_config"}, //Wil be fixed on future API versions + ImportStateVerifyIgnore: []string{"version", "master_node_pool", "worker_node_pool", "storage_class_config", "wait_timeout_minutes"}, //Wil be fixed on future API versions }, }, }) @@ -103,17 +103,17 @@ func TestAccKarbonCluster_scaleDown(t *testing.T) { func TestAccKarbonCluster_updateCNI(t *testing.T) { r := acctest.RandInt() - t.Skip() resourceName := "nutanix_karbon_cluster.cluster" subnetName := testVars.SubnetName defaultContainter := testVars.DefaultContainerName + kubernetesVersion := testVars.KubernetesVersion resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckNutanixKarbonClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 1, "flannel"), + Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 1, "flannel", kubernetesVersion), Check: resource.ComposeTestCheckFunc( testAccCheckNutanixKarbonClusterExists(resourceName), resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("test-karbon-%d", r)), @@ -125,7 +125,7 @@ func TestAccKarbonCluster_updateCNI(t *testing.T) { ), }, { - Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 2, "calico"), + Config: testAccNutanixKarbonClusterConfig(subnetName, r, defaultContainter, 2, "calico", kubernetesVersion), Check: resource.ComposeTestCheckFunc( testAccCheckNutanixKarbonClusterExists(resourceName), resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("test-karbon-%d", r)), @@ -140,7 +140,7 @@ func TestAccKarbonCluster_updateCNI(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"version", "master_node_pool", "worker_node_pool", "storage_class_config"}, //Wil be fixed on future API versions + ImportStateVerifyIgnore: []string{"version", "master_node_pool", "worker_node_pool", "storage_class_config", "wait_timeout_minutes"}, //Wil be fixed on future API versions }, }, }) @@ -182,30 +182,30 @@ func testAccCheckNutanixKarbonClusterExists(n string) resource.TestCheckFunc { 
} } -func testAccNutanixKarbonClusterConfig(subnetName string, r int, containter string, workers int, cni string) string { +func testAccNutanixKarbonClusterConfig(subnetName string, r int, containter string, workers int, cni, k8sVersion string) string { return fmt.Sprintf(` locals { cluster_id = [ for cluster in data.nutanix_clusters.clusters.entities : cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" ][0] - node_os_version = "%s" + node_os_version = "%[1]s" deployment_type = "" - amount_of_workers = %d + amount_of_workers = %[2]d amount_of_masters = 1 - cni = "%s" + cni = "%[3]s" master_vip = "" } data "nutanix_clusters" "clusters" {} data "nutanix_subnet" "karbon_subnet" { - subnet_name = "%s" + subnet_name = "%[4]s" } resource "nutanix_karbon_cluster" "cluster" { - name = "test-karbon-%d" - version = "1.19.8-0" + name = "test-karbon-%[5]d" + version = "%[7]s" dynamic "active_passive_config" { for_each = local.deployment_type == "active-passive" ? [1] : [] @@ -233,7 +233,7 @@ func testAccNutanixKarbonClusterConfig(subnetName string, r int, containter stri volumes_config { flash_mode = false prism_element_cluster_uuid = local.cluster_id - storage_container = "%s" + storage_container = "%[6]s" } } cni_config { @@ -281,5 +281,5 @@ func testAccNutanixKarbonClusterConfig(subnetName string, r int, containter stri } } - `, testVars.NodeOsVersion, workers, cni, subnetName, r, containter) + `, testVars.NodeOsVersion, workers, cni, subnetName, r, containter, k8sVersion) } diff --git a/nutanix/resource_nutanix_karbon_cluster_worker_pool.go b/nutanix/resource_nutanix_karbon_cluster_worker_pool.go new file mode 100644 index 000000000..0d2674171 --- /dev/null +++ b/nutanix/resource_nutanix_karbon_cluster_worker_pool.go @@ -0,0 +1,403 @@ +package nutanix + +import ( + "context" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/terraform-providers/terraform-provider-nutanix/client/karbon" + "github.com/terraform-providers/terraform-provider-nutanix/utils" +) + +func resourceNutanixKarbonWorkerNodePool() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceNutanixKarbonWorkerNodePoolCreate, + ReadContext: resourceNutanixKarbonWorkerNodePoolRead, + UpdateContext: resourceNutanixKarbonWorkerNodePoolUpdate, + DeleteContext: resourceNutanixKarbonWorkerNodePoolDelete, + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(DEFAULTWAITTIMEOUT * time.Minute), + Update: schema.DefaultTimeout(DEFAULTWAITTIMEOUT * time.Minute), + Delete: schema.DefaultTimeout(DEFAULTWAITTIMEOUT * time.Minute), + }, + Schema: map[string]*schema.Schema{ + "cluster_name": { + Type: schema.TypeString, + Required: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "node_os_version": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "num_instances": { + Type: schema.TypeInt, + Required: true, + ForceNew: false, + ValidateFunc: validation.IntAtLeast(MINNUMINSTANCES), + }, + "ahv_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cpu": { + Type: schema.TypeInt, + Optional: true, + Default: "8", + ValidateFunc: validation.IntAtLeast(MINCPU), + }, + "disk_mib": { + Type: schema.TypeInt, + 
Optional: true, + Default: "122880", + ValidateFunc: validation.IntAtLeast(DEFAULTWORKERNODEDISKMIB), + }, + "memory_mib": { + Type: schema.TypeInt, + Optional: true, + Default: "8192", + ValidateFunc: validation.IntAtLeast(DEFAULTWORKERNODEEMORYMIB), + }, + "network_uuid": { + Type: schema.TypeString, + Required: true, + }, + "prism_element_cluster_uuid": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "iscsi_network_uuid": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Computed: true, + }, + "nodes": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "hostname": { + Type: schema.TypeString, + Computed: true, + }, + "ipv4_address": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + } +} + +func resourceNutanixKarbonWorkerNodePoolCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + client := meta.(*Client) + conn := client.KarbonAPI + + addworkerRequest := &karbon.ClusterNodePool{} + nkeName := "" + if karbonNodeName, ok := d.GetOk("cluster_name"); ok && len(karbonNodeName.(string)) > 0 { + nkeName = karbonNodeName.(string) + } else { + return diag.Errorf("cluster_name is a required field") + } + + if workerName, ok := d.GetOk("name"); ok { + addworkerRequest.Name = utils.StringPtr(workerName.(string)) + } + + if numOfInst, ok := d.GetOk("num_instances"); ok { + numInstances := int64(numOfInst.(int)) + addworkerRequest.NumInstances = &numInstances + } + + if ahvConfig, ok := d.GetOk("ahv_config"); ok { + ahvConfigList := ahvConfig.([]interface{}) + nodepool := &karbon.ClusterNodePool{ + AHVConfig: &karbon.ClusterNodePoolAHVConfig{}, + } + if len(ahvConfigList) != 1 { + return diag.Errorf("ahv_config must have 1 element") + } + ahvConfig := ahvConfigList[0].(map[string]interface{}) + if valCPU, ok := ahvConfig["cpu"]; ok { + i := int64(valCPU.(int)) + nodepool.AHVConfig.CPU = i + } + if valDiskMib, ok := ahvConfig["disk_mib"]; ok { + i := int64(valDiskMib.(int)) + nodepool.AHVConfig.DiskMib = i + } + if valMemoryMib, ok := ahvConfig["memory_mib"]; ok { + i := int64(valMemoryMib.(int)) + nodepool.AHVConfig.MemoryMib = i + } + if valNetworkUUID, ok := ahvConfig["network_uuid"]; ok && len(valNetworkUUID.(string)) > 0 { + nodepool.AHVConfig.NetworkUUID = valNetworkUUID.(string) + } + if valPrismElementClusterUUID, ok := ahvConfig["prism_element_cluster_uuid"]; ok && len(valPrismElementClusterUUID.(string)) > 0 { + nodepool.AHVConfig.PrismElementClusterUUID = valPrismElementClusterUUID.(string) + } + if valICSUUUID, ok := ahvConfig["iscsi_network_uuid"]; ok && len(valICSUUUID.(string)) > 0 { + nodepool.AHVConfig.IscsiNetworkUUID = valICSUUUID.(string) + } + addworkerRequest.AHVConfig = nodepool.AHVConfig + } + if label, ok := d.GetOk("labels"); ok && label.(map[string]interface{}) != nil { + addworkerRequest.Labels = utils.ConvertMapString(label.(map[string]interface{})) + } + karbonClusterActionResponse, err := conn.Cluster.AddWorkerNodePool( + nkeName, + addworkerRequest, + ) + if err != nil { + return diag.FromErr(err) + } + err = WaitForKarbonCluster(ctx, client, 0, karbonClusterActionResponse.TaskUUID, d.Timeout(schema.TimeoutCreate)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(karbonClusterActionResponse.TaskUUID) + return resourceNutanixKarbonWorkerNodePoolRead(ctx, d, meta) +} + +func resourceNutanixKarbonWorkerNodePoolRead(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*Client).KarbonAPI + setTimeout(meta) + // Make request to the API + var err error + karbonClsName := d.Get("cluster_name") + resp, err := conn.Cluster.GetKarbonCluster(karbonClsName.(string)) + if err != nil { + d.SetId("") + return nil + } + karbonClusterName := *resp.Name + workerName := d.Get("name") + nodepool, err := conn.Cluster.GetKarbonClusterNodePool(karbonClusterName, workerName.(string)) + if err != nil { + return diag.FromErr(err) + } + + nodes := make([]map[string]interface{}, 0) + for _, npn := range *nodepool.Nodes { + nodes = append(nodes, map[string]interface{}{ + "hostname": npn.Hostname, + "ipv4_address": npn.IPv4Address, + }) + } + if err = d.Set("name", nodepool.Name); err != nil { + return diag.Errorf("error setting name for nke Worker Node Pool %s: %s", d.Id(), err) + } + if err = d.Set("node_os_version", nodepool.NodeOSVersion); err != nil { + return diag.Errorf("error setting node_os_version for nke Worker Node Pool %s: %s", d.Id(), err) + } + if err = d.Set("num_instances", nodepool.NumInstances); err != nil { + return diag.Errorf("error setting num_instances for nke Worker Node Pool %s: %s", d.Id(), err) + } + if err = d.Set("nodes", nodes); err != nil { + return diag.Errorf("error setting nodes for nke Worker Node Pool %s: %s", d.Id(), err) + } + if err = d.Set("labels", nodepool.Labels); err != nil { + return diag.Errorf("error setting labels for nke Worker Node Pool %s: %s", d.Id(), err) + } + if err = d.Set("ahv_config", flattenAHVNodePoolConfig(nodepool.AHVConfig)); err != nil { + return diag.Errorf("error setting ahv_config for nke Worker Node Pool %s: %s", d.Id(), err) + } + return nil +} + +func resourceNutanixKarbonWorkerNodePoolUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + client := meta.(*Client) + conn := client.KarbonAPI + + karbonClsName := d.Get("cluster_name") + workerName := d.Get("name") + resp, err := conn.Cluster.GetKarbonCluster(karbonClsName.(string)) + if err != nil { + d.SetId("") + return nil + } + nodepool, err := conn.Cluster.GetKarbonClusterNodePool(*resp.Name, workerName.(string)) + if err != nil { + return diag.FromErr(err) + } + + if d.HasChange("num_instances") { + old, new := d.GetChange("num_instances") + + if old.(int) > new.(int) { + amountOfNodes := old.(int) - new.(int) + scaleDownRequest := &karbon.ClusterScaleDownIntentInput{ + Count: int64(amountOfNodes), + } + karbonClusterActionResponse, err := client.KarbonAPI.Cluster.ScaleDownKarbonCluster( + *resp.Name, + *nodepool.Name, + scaleDownRequest, + ) + if err != nil { + return diag.FromErr(err) + } + err = WaitForKarbonCluster(ctx, client, 0, karbonClusterActionResponse.TaskUUID, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return diag.FromErr(err) + } + } else { + amountOfNodes := new.(int) - old.(int) + scaleUpRequest := &karbon.ClusterScaleUpIntentInput{ + Count: int64(amountOfNodes), + } + karbonClusterActionResponse, err := client.KarbonAPI.Cluster.ScaleUpKarbonCluster( + *resp.Name, + *nodepool.Name, + scaleUpRequest, + ) + if err != nil { + return diag.FromErr(err) + } + err = WaitForKarbonCluster(ctx, client, 0, karbonClusterActionResponse.TaskUUID, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return diag.FromErr(err) + } + } + } + if d.HasChange("labels") { + old, new := d.GetChange("labels") + updateLabelRequest := &karbon.UpdateWorkerNodeLabels{} + + newMap := new.(map[string]interface{}) + oldMap := 
old.(map[string]interface{}) + addLabelMap := map[string]string{} + removeLabel := []string{} + + // check any new is label is added. + for key := range newMap { + if _, ok := oldMap[key]; ok { + continue + } else { + addLabelMap[key] = (newMap[key]).(string) + } + } + // check any label is removed + for key := range oldMap { + if _, ok := newMap[key]; ok { + continue + } else { + removeLabel = append(removeLabel, key) + } + } + + updateLabelRequest.AddLabel = addLabelMap + updateLabelRequest.RemoveLabel = removeLabel + + nodeLabelActionResponse, err := client.KarbonAPI.Cluster.UpdateWorkerNodeLables( + *resp.Name, + *nodepool.Name, + updateLabelRequest, + ) + if err != nil { + return diag.FromErr(err) + } + err = WaitForKarbonCluster(ctx, client, 0, nodeLabelActionResponse.TaskUUID, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return diag.FromErr(err) + } + } + return resourceNutanixKarbonWorkerNodePoolRead(ctx, d, meta) +} + +func resourceNutanixKarbonWorkerNodePoolDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + client := meta.(*Client) + conn := client.KarbonAPI + + var err error + karbonClsName := d.Get("cluster_name") + resp, err := conn.Cluster.GetKarbonCluster(karbonClsName.(string)) + if err != nil { + d.SetId("") + return nil + } + karbonClusterName := *resp.Name + workerName := d.Get("name") + nodepool, err := conn.Cluster.GetKarbonClusterNodePool(karbonClusterName, workerName.(string)) + if err != nil { + return diag.FromErr(err) + } + nodes := []*string{} + for _, v := range *nodepool.Nodes { + nodes = append(nodes, v.Hostname) + } + removeWorkerRequest := &karbon.RemoveWorkerNodeRequest{ + NodeList: nodes, + } + workerPoolActionResponse, err := conn.Cluster.RemoveWorkerNodePool( + karbonClusterName, + workerName.(string), + removeWorkerRequest, + ) + if err != nil { + return diag.FromErr(err) + } + err = WaitForKarbonCluster(ctx, client, 0, workerPoolActionResponse.TaskUUID, d.Timeout(schema.TimeoutDelete)) + if err != nil { + return diag.FromErr(err) + } + + workerNodeDeleteResponse, er := client.KarbonAPI.Cluster.DeleteWorkerNodePool( + karbonClusterName, + workerName.(string), + ) + if er != nil { + return diag.FromErr(er) + } + err = WaitForKarbonCluster(ctx, client, 0, workerNodeDeleteResponse.TaskUUID, d.Timeout(schema.TimeoutDelete)) + if err != nil { + return diag.FromErr(err) + } + return nil +} + +func flattenAHVNodePoolConfig(ahv *karbon.ClusterNodePoolAHVConfig) []map[string]interface{} { + if ahv != nil { + ahvConfig := make([]map[string]interface{}, 0) + + config := map[string]interface{}{} + + config["cpu"] = ahv.CPU + config["disk_mib"] = ahv.DiskMib + config["memory_mib"] = ahv.MemoryMib + config["network_uuid"] = ahv.NetworkUUID + config["prism_element_cluster_uuid"] = ahv.PrismElementClusterUUID + if ahv.IscsiNetworkUUID != "" { + config["iscsi_network_uuid"] = ahv.IscsiNetworkUUID + } + + ahvConfig = append(ahvConfig, config) + return ahvConfig + } + return nil +} diff --git a/nutanix/resource_nutanix_karbon_cluster_worker_pool_test.go b/nutanix/resource_nutanix_karbon_cluster_worker_pool_test.go new file mode 100644 index 000000000..800145c07 --- /dev/null +++ b/nutanix/resource_nutanix_karbon_cluster_worker_pool_test.go @@ -0,0 +1,164 @@ +package nutanix + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccKarbonClusterWorkerPool_basic(t *testing.T) { + resourceName := "nutanix_karbon_worker_nodepool.nodepool" + subnetName := 
testVars.SubnetName + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNutanixKarbonClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccNutanixKarbonClusterWorkerNodePoolConfig(subnetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckNutanixKarbonClusterExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "name", "workerpool1"), + resource.TestCheckResourceAttr(resourceName, "num_instances", "1"), + resource.TestCheckResourceAttrSet(resourceName, "nodes.#"), + resource.TestCheckResourceAttrSet(resourceName, "ahv_config.#"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.cpu", "4"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.disk_mib", "122880"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.memory_mib", "8192"), + resource.TestCheckResourceAttrSet(resourceName, "node_os_version"), + resource.TestCheckResourceAttr(resourceName, "labels.k1", "v1"), + resource.TestCheckResourceAttr(resourceName, "labels.k2", "v2"), + ), + }, + { // Test for non-empty plans. No modification. + Config: testAccNutanixKarbonClusterWorkerNodePoolConfig(subnetName), + PlanOnly: true, + }, + }, + }) +} + +func TestAccKarbonClusterWorkerPool_Update(t *testing.T) { + resourceName := "nutanix_karbon_worker_nodepool.nodepool" + subnetName := testVars.SubnetName + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNutanixKarbonClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccNutanixKarbonClusterWorkerNodePoolConfig(subnetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckNutanixKarbonClusterExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "name", "workerpool1"), + resource.TestCheckResourceAttr(resourceName, "num_instances", "1"), + resource.TestCheckResourceAttrSet(resourceName, "nodes.#"), + resource.TestCheckResourceAttrSet(resourceName, "ahv_config.#"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.cpu", "4"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.disk_mib", "122880"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.memory_mib", "8192"), + resource.TestCheckResourceAttrSet(resourceName, "node_os_version"), + resource.TestCheckResourceAttr(resourceName, "labels.k1", "v1"), + resource.TestCheckResourceAttr(resourceName, "labels.k2", "v2"), + ), + }, + { // Test for non-empty plans. No modification. 
+ Config: testAccNutanixKarbonClusterWorkerNodePoolConfig(subnetName), + PlanOnly: true, + }, + { // Test to update labels and increase nodes + Config: testAccNutanixKarbonClusterWorkerNodePoolConfigUpdate(subnetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckNutanixKarbonClusterExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "name", "workerpool1"), + resource.TestCheckResourceAttr(resourceName, "num_instances", "2"), + resource.TestCheckResourceAttrSet(resourceName, "nodes.#"), + resource.TestCheckResourceAttrSet(resourceName, "ahv_config.#"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.cpu", "4"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.disk_mib", "122880"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.memory_mib", "8192"), + resource.TestCheckResourceAttrSet(resourceName, "node_os_version"), + resource.TestCheckResourceAttr(resourceName, "labels.k1", "v1"), + resource.TestCheckResourceAttr(resourceName, "labels.k2", "v2"), + resource.TestCheckResourceAttr(resourceName, "labels.k3", "v3"), + ), + }, + { // Test to decrease the number of nodes + Config: testAccNutanixKarbonClusterWorkerNodePoolConfig(subnetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckNutanixKarbonClusterExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "name", "workerpool1"), + resource.TestCheckResourceAttr(resourceName, "num_instances", "1"), + resource.TestCheckResourceAttrSet(resourceName, "nodes.#"), + resource.TestCheckResourceAttrSet(resourceName, "ahv_config.#"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.cpu", "4"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.disk_mib", "122880"), + resource.TestCheckResourceAttr(resourceName, "ahv_config.0.memory_mib", "8192"), + resource.TestCheckResourceAttrSet(resourceName, "node_os_version"), + resource.TestCheckResourceAttr(resourceName, "labels.k1", "v1"), + resource.TestCheckResourceAttr(resourceName, "labels.k2", "v2"), + ), + }, + }, + }) +} + +func testAccNutanixKarbonClusterWorkerNodePoolConfig(subnetName string) string { + return fmt.Sprintf(` + + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_subnet" "karbon_subnet" { + subnet_name = "%s" + } + + resource "nutanix_karbon_worker_nodepool" "nodepool" { + cluster_name = data.nutanix_karbon_clusters.kclusters.clusters.0.name + name = "workerpool1" + num_instances = 1 + ahv_config { + cpu= 4 + disk_mib= 122880 + memory_mib=8192 + network_uuid= data.nutanix_subnet.karbon_subnet.id + } + labels={ + k1="v1" + k2="v2" + } + depends_on = [ data.nutanix_karbon_clusters.kclusters ] + } + + `, subnetName) +} + +func testAccNutanixKarbonClusterWorkerNodePoolConfigUpdate(subnetName string) string { + return fmt.Sprintf(` + + data "nutanix_karbon_clusters" "kclusters" {} + + data "nutanix_subnet" "karbon_subnet" { + subnet_name = "%s" + } + + resource "nutanix_karbon_worker_nodepool" "nodepool" { + cluster_name = data.nutanix_karbon_clusters.kclusters.clusters.0.name + name = "workerpool1" + num_instances = 2 + ahv_config { + cpu= 4 + disk_mib= 122880 + memory_mib=8192 + network_uuid= data.nutanix_subnet.karbon_subnet.id + } + labels={ + k1="v1" + k2="v2" + k3="v3" + } + depends_on = [ data.nutanix_karbon_clusters.kclusters ] + } + + `, subnetName) +} diff --git a/nutanix/resource_nutanix_ndb_database.go b/nutanix/resource_nutanix_ndb_database.go index 8fcc5e6c1..ec1c9b862 100644 --- a/nutanix/resource_nutanix_ndb_database.go +++ 
b/nutanix/resource_nutanix_ndb_database.go @@ -467,7 +467,7 @@ func createDatabaseInstance(ctx context.Context, d *schema.ResourceData, meta in conn := meta.(*Client).Era // check for resource schema validation - er := schemaValidation("era_provision_database", d) + er := schemaValidation("ndb_provision_database", d) if er != nil { return diag.FromErr(er) } diff --git a/nutanix/resource_nutanix_ndb_database_test.go b/nutanix/resource_nutanix_ndb_database_test.go index d8fb584b2..a35979183 100644 --- a/nutanix/resource_nutanix_ndb_database_test.go +++ b/nutanix/resource_nutanix_ndb_database_test.go @@ -2,6 +2,7 @@ package nutanix import ( "fmt" + "regexp" "testing" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" @@ -15,6 +16,8 @@ func TestAccEra_basic(t *testing.T) { desc := "this is desc" vmName := fmt.Sprintf("testvm-%d", r) sshKey := testVars.SSHKey + updatedName := fmt.Sprintf("test-pg-inst-tf-updated-%d", r) + updatedesc := "this is updated desc" resource.Test(t, resource.TestCase{ PreCheck: func() { testAccEraPreCheck(t) }, Providers: testAccProviders, @@ -30,6 +33,17 @@ func TestAccEra_basic(t *testing.T) { resource.TestCheckResourceAttrSet(resourceNameDB, "time_machine.#"), ), }, + { + Config: testAccEraDatabaseConfig(updatedName, updatedesc, vmName, sshKey, r), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resourceNameDB, "name", updatedName), + resource.TestCheckResourceAttr(resourceNameDB, "description", updatedesc), + resource.TestCheckResourceAttr(resourceNameDB, "databasetype", "postgres_database"), + resource.TestCheckResourceAttr(resourceNameDB, "database_nodes.#", "1"), + resource.TestCheckResourceAttrSet(resourceNameDB, "time_machine_id"), + resource.TestCheckResourceAttrSet(resourceNameDB, "time_machine.#"), + ), + }, }, }) } @@ -58,6 +72,42 @@ func TestAccEraDatabaseProvisionHA(t *testing.T) { }) } +func TestAccEra_SchemaValidationwithCreateDBserver(t *testing.T) { + r := randIntBetween(1, 10) + name := fmt.Sprintf("test-pg-inst-tf-%d", r) + desc := "this is desc" + vmName := fmt.Sprintf("testvm-%d", r) + sshKey := testVars.SSHKey + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccEraPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccEraDatabaseSchemaValidationConfig(name, desc, vmName, sshKey, r), + ExpectError: regexp.MustCompile(`missing required fields are \[softwareprofileid softwareprofileversionid computeprofileid networkprofileid dbparameterprofileid\] for ndb_provision_database`), + }, + }, + }) +} + +func TestAccEra_SchemaValidationwithCreateDBserverFalse(t *testing.T) { + r := randIntBetween(1, 10) + name := fmt.Sprintf("test-pg-inst-tf-%d", r) + desc := "this is desc" + vmName := fmt.Sprintf("testvm-%d", r) + sshKey := testVars.SSHKey + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccEraPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccEraDatabaseSchemaValidationConfigWithoutCreateDBserver(name, desc, vmName, sshKey, r), + ExpectError: regexp.MustCompile(`missing required fields are \[dbparameterprofileid timemachineinfo\] for ndb_provision_database`), + }, + }, + }) +} + func testAccEraDatabaseConfig(name, desc, vmName, sshKey string, r int) string { return fmt.Sprintf(` data "nutanix_ndb_profiles" "p"{ @@ -327,3 +377,149 @@ func testAccEraDatabaseHAConfig(name, desc, sshKey string, r int) string { } `, name, desc, sshKey, r) } + +func testAccEraDatabaseSchemaValidationConfig(name, desc, vmName, 
sshKey string, r int) string { + return fmt.Sprintf(` + data "nutanix_ndb_profiles" "p"{ + } + data "nutanix_ndb_slas" "slas"{} + data "nutanix_ndb_clusters" "clusters"{} + + locals { + profiles_by_type = { + for p in data.nutanix_ndb_profiles.p.profiles : p.type => p... + } + storage_profiles = { + for p in local.profiles_by_type.Storage: p.name => p + } + compute_profiles = { + for p in local.profiles_by_type.Compute: p.name => p + } + network_profiles = { + for p in local.profiles_by_type.Network: p.name => p + } + database_parameter_profiles = { + for p in local.profiles_by_type.Database_Parameter: p.name => p + } + software_profiles = { + for p in local.profiles_by_type.Software: p.name => p + } + slas = { + for p in data.nutanix_ndb_slas.slas.slas: p.name => p + } + clusters = { + for p in data.nutanix_ndb_clusters.clusters.clusters: p.name => p + } + } + + resource "nutanix_ndb_database" "acctest-managed" { + databasetype = "postgres_database" + name = "%[1]s" + description = "%[2]s" + + postgresql_info{ + listener_port = "5432" + database_size= "200" + db_password = "password" + database_names= "testdb1" + } + nxclusterid= local.clusters.EraCluster.id + sshpublickey= "%[4]s" + nodes{ + vmname= "%[3]s" + networkprofileid= local.network_profiles.DEFAULT_OOB_POSTGRESQL_NETWORK.id + } + timemachineinfo { + name= "test-pg-inst-%[5]d" + description="" + slaid=local.slas["DEFAULT_OOB_BRONZE_SLA"].id + schedule { + snapshottimeofday{ + hours= 16 + minutes= 0 + seconds= 0 + } + continuousschedule{ + enabled=true + logbackupinterval= 30 + snapshotsperday=1 + } + weeklyschedule{ + enabled=true + dayofweek= "WEDNESDAY" + } + monthlyschedule{ + enabled = true + dayofmonth= "27" + } + quartelyschedule{ + enabled=true + startmonth="JANUARY" + dayofmonth= 27 + } + yearlyschedule{ + enabled= false + dayofmonth= 31 + month="DECEMBER" + } + } + } + } + `, name, desc, vmName, sshKey, r) +} + +func testAccEraDatabaseSchemaValidationConfigWithoutCreateDBserver(name, desc, vmName, sshKey string, r int) string { + return fmt.Sprintf(` + data "nutanix_ndb_profiles" "p"{ + } + data "nutanix_ndb_slas" "slas"{} + data "nutanix_ndb_clusters" "clusters"{} + + locals { + profiles_by_type = { + for p in data.nutanix_ndb_profiles.p.profiles : p.type => p... 
+ } + storage_profiles = { + for p in local.profiles_by_type.Storage: p.name => p + } + compute_profiles = { + for p in local.profiles_by_type.Compute: p.name => p + } + network_profiles = { + for p in local.profiles_by_type.Network: p.name => p + } + database_parameter_profiles = { + for p in local.profiles_by_type.Database_Parameter: p.name => p + } + software_profiles = { + for p in local.profiles_by_type.Software: p.name => p + } + slas = { + for p in data.nutanix_ndb_slas.slas.slas: p.name => p + } + clusters = { + for p in data.nutanix_ndb_clusters.clusters.clusters: p.name => p + } + } + + resource "nutanix_ndb_database" "acctest-managed" { + databasetype = "postgres_database" + name = "%[1]s" + description = "%[2]s" + + postgresql_info{ + listener_port = "5432" + database_size= "200" + db_password = "password" + database_names= "testdb1" + } + nxclusterid= local.clusters.EraCluster.id + sshpublickey= "%[4]s" + nodes{ + vmname= "%[3]s" + networkprofileid= local.network_profiles.DEFAULT_OOB_POSTGRESQL_NETWORK.id + } + createdbserver=false + } + `, name, desc, vmName, sshKey, r) +} diff --git a/nutanix/resource_nutanix_subnet.go b/nutanix/resource_nutanix_subnet.go index 4a0e78f69..4b97e91e7 100644 --- a/nutanix/resource_nutanix_subnet.go +++ b/nutanix/resource_nutanix_subnet.go @@ -717,10 +717,8 @@ func getSubnetResources(d *schema.ResourceData, subnet *v3.SubnetResources) { dhcpo.DomainSearchList = expandStringList(v.([]interface{})) } - if v, ok := d.GetOk("vlan_id"); ok { - if v.(int) == 0 || ok { - subnet.VlanID = utils.Int64Ptr(int64(v.(int))) - } + if v, ok := d.GetOkExists("vlan_id"); ok { + subnet.VlanID = utils.Int64Ptr(int64(v.(int))) } if v, ok := d.GetOk("network_function_chain_reference"); ok { diff --git a/nutanix/resource_nutanix_subnet_test.go b/nutanix/resource_nutanix_subnet_test.go index 2b1b6afd7..665585816 100644 --- a/nutanix/resource_nutanix_subnet_test.go +++ b/nutanix/resource_nutanix_subnet_test.go @@ -255,6 +255,24 @@ func TestAccNutanixSubnet_nameDuplicated(t *testing.T) { }) } +func TestAccNutanixSubnet_WithVlan0(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNutanixSubnetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccNutanixSubnetConfig(0), + Check: resource.ComposeTestCheckFunc( + testAccCheckNutanixSubnetExists(resourceNameSubnet), + resource.TestCheckResourceAttr(resourceNameSubnet, "name", "acctest-managed-0"), + resource.TestCheckResourceAttr(resourceNameSubnet, "description", "Description of my unit test VLAN"), + ), + }, + }, + }) +} + func testAccCheckNutanixSubnetExists(n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] diff --git a/nutanix/resource_nutanix_user_groups_test.go b/nutanix/resource_nutanix_user_groups_test.go index 5b414d2ff..2ec9b29a9 100644 --- a/nutanix/resource_nutanix_user_groups_test.go +++ b/nutanix/resource_nutanix_user_groups_test.go @@ -59,7 +59,7 @@ func TestAccNutanixUserGroups_DuplicateEntity(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccNutanixUserGroupsConfig(directoryServiceDistName), - ExpectError: regexp.MustCompile("DUPLICATE_ENTITY"), + ExpectError: regexp.MustCompile("bad Request"), }, }, }) diff --git a/test_config.json b/test_config.json index 0ad04e01f..c3bc9faf3 100644 --- a/test_config.json +++ b/test_config.json @@ -45,6 +45,7 @@ } ], "node_os_version": "", + "kubernetes_version": "", "ad_rule_target": { 
"name": "", "values": "" diff --git a/website/docs/d/address_groups.html.markdown b/website/docs/d/address_groups.html.markdown index f46874392..b7724afd6 100644 --- a/website/docs/d/address_groups.html.markdown +++ b/website/docs/d/address_groups.html.markdown @@ -44,7 +44,6 @@ The following attributes are exported as list: The following attributes are exported: -* `uuid`:- (ReadOnly) UUID of the address group * `name`:- (ReadOnly) Name of the address group * `description`:- (ReadOnly) Description of the address group * `ip_address_block_list`: - (ReadOnly) list of IP address blocks with their prefix length diff --git a/website/docs/r/karbon_cluster.html.markdown b/website/docs/r/karbon_cluster.html.markdown index 2e274c1fb..c2b3fdd0c 100644 --- a/website/docs/r/karbon_cluster.html.markdown +++ b/website/docs/r/karbon_cluster.html.markdown @@ -65,6 +65,61 @@ resource "nutanix_karbon_cluster" "example_cluster" { ``` + +### resource to create karbon cluster with timeouts +```hcl +resource "nutanix_karbon_cluster" "example_cluster" { + name = "example_cluster" + version = "1.18.15-1" + storage_class_config { + reclaim_policy = "Delete" + volumes_config { + file_system = "ext4" + flash_mode = false + password = "my_pe_pw" + prism_element_cluster_uuid = "my_pe_cluster_uuid" + storage_container = "my_storage_container_name" + username = "my_pe_username" + } + } + cni_config { + node_cidr_mask_size = 24 + pod_ipv4_cidr = "172.20.0.0/16" + service_ipv4_cidr = "172.19.0.0/16" + } + worker_node_pool { + node_os_version = "ntnx-1.0" + num_instances = 1 + ahv_config { + network_uuid = "my_subnet_id" + prism_element_cluster_uuid = "my_pe_cluster_uuid" + } + } + etcd_node_pool { + node_os_version = "ntnx-1.0" + num_instances = 1 + ahv_config { + network_uuid = "my_subnet_id" + prism_element_cluster_uuid = "my_pe_cluster_uuid" + } + } + master_node_pool { + node_os_version = "ntnx-1.0" + num_instances = 1 + ahv_config { + network_uuid = "my_subnet_id" + prism_element_cluster_uuid = "my_pe_cluster_uuid" + } + } + timeouts { + create = "1h" + update = "30m" + delete = "10m" + } +} +``` + + ## Argument Reference The following arguments are supported: @@ -153,6 +208,8 @@ The `etcd_node_pool`, `master_node_pool`, `worker_node_pool` attribute supports * `calico_config.ip_pool_config`: - (Optional) List of IP pools to be configured/managed by calico. * `calico_config.ip_pool_config.cidr`: - (Optional) IP range to use for this pool, it should fall within pod cidr. +* `timeouts`: timeouts can customize the default timeout on CRUD functions with default timeouts. Supports "h", "m" or "s" . + **Note:** Updates to this attribute forces new resource creation. See detailed information in [Nutanix Karbon Cluster](https://www.nutanix.dev/reference/karbon/api-reference/cluster/). diff --git a/website/docs/r/karbon_cluster_worker_nodepool.html.markdown b/website/docs/r/karbon_cluster_worker_nodepool.html.markdown new file mode 100644 index 000000000..6566934f1 --- /dev/null +++ b/website/docs/r/karbon_cluster_worker_nodepool.html.markdown @@ -0,0 +1,85 @@ +--- +layout: "nutanix" +page_title: "NUTANIX: nutanix_karbon_worker_nodepool" +sidebar_current: "docs-nutanix-resource-karbon-worker-nodepool" +description: |- + Provides a resource to add/remove worker nodepool in an existing Nutanix Kubernetes Engine (NKE). +--- + +# nutanix_karbon_worker_nodepool + +Provides a resource to add/remove worker nodepool in an existing Nutanix Kubernetes Engine (NKE). 
+ +## Example Usage + +```hcl + resource "nutanix_karbon_worker_nodepool" "kworkerNp" { + cluster_name = "karbon" + name = "workerpool1" + num_instances = 1 + ahv_config { + cpu= 4 + disk_mib= 122880 + memory_mib=8192 + network_uuid= "61213511-6383-4a38-9ac8-4a552c0e5865" + } + } +``` + +```hcl + resource "nutanix_karbon_worker_nodepool" "kworkerNp" { + cluster_name = "karbon" + name = "workerpool1" + num_instances = 1 + ahv_config { + cpu= 4 + disk_mib= 122880 + memory_mib=8192 + network_uuid= "61213511-6383-4a38-9ac8-4a552c0e5865" + } + labels={ + k1="v1" + k2="v2" + } + } +``` + + +## Argument Reference + +The following arguments are supported: + +* `cluster_name`: (Required) Name of the existing Kubernetes cluster. +* `name`: (Required) Unique worker node pool name. +* `node_os_version`: (Optional) The version of the node OS image. +* `num_instances`: (Required) Number of node instances. +* `ahv_config`: (Optional) VM configuration in AHV. +* `labels`: (Optional) Labels applied to the nodes. + +### ahv_config +The following arguments are supported for ahv_config: + +* `cpu`: - (Required) The number of vCPUs allocated for each VM on the PE cluster. +* `disk_mib`: - (Optional) Size of local storage for each VM on the PE cluster in MiB. +* `memory_mib`: - (Optional) Memory allocated for each VM on the PE cluster in MiB. +* `network_uuid`: - (Required) The UUID of the network for the VMs deployed with this resource configuration. +* `prism_element_cluster_uuid`: - (Optional) The universally unique identifier (UUID) of the Prism Element cluster. +* `iscsi_network_uuid`: (Optional) VM network UUID for isolating iSCSI data traffic. + + +## Attributes Reference + +The following attributes are exported: + +* `nodes`: List of node details for the pool. +* `nodes.hostname`: Hostname of the node. +* `nodes.ipv4_address`: IPv4 address of the node. + +## Timeouts + +* create +* update +* delete + + +See detailed information in [Add Node Pool in NKE](https://www.nutanix.dev/api_references/nke/#/5e68a51e9d3fa-add-a-node-pool-to-a-k8s-cluster) diff --git a/website/docs/r/subnet.html.markdown b/website/docs/r/subnet.html.markdown index 0b1c425c2..03004c5e9 100644 --- a/website/docs/r/subnet.html.markdown +++ b/website/docs/r/subnet.html.markdown @@ -55,7 +55,7 @@ resource "nutanix_subnet" "next-iac-managed" { * `owner_reference`: - (Optional) The reference to a user. * `project_reference`: - (Optional) The reference to a project. * `vswitch_name`: - (Optional). -* `subnet_type`: - (Optional). +* `subnet_type`: - (Optional). Valid types are ["VLAN", "OVERLAY"]. * `default_gateway_ip`: - (Optional) Default gateway IP address. * `prefix_length`: - (Optional). * `subnet_ip`: - (Optional) Subnet IP address. diff --git a/website/docs/r/virtual_machine.html.markdown b/website/docs/r/virtual_machine.html.markdown index 7ee01f8ed..353580b5c 100644 --- a/website/docs/r/virtual_machine.html.markdown +++ b/website/docs/r/virtual_machine.html.markdown @@ -78,7 +78,7 @@ The following arguments are supported: * `num_vcpus_per_socket`: - (Optional) Number of vCPUs per socket. * `num_sockets`: - (Optional) Number of vCPU sockets. * `gpu_list`: - (Optional) GPUs attached to the VM. -* `parent_referece`: - (Optional) Reference to an entity that the VM cloned from. +* `parent_reference`: - (Optional) Reference to an entity that the VM was cloned from. * `memory_size_mib`: - (Optional) Memory size in MiB. * `boot_device_order_list`: - (Optional) Indicates the order of device types in which VM should try to boot from.
If boot device order is not provided the system will decide appropriate boot device order. * `boot_device_disk_address`: - (Optional) Address of disk to boot from. @@ -91,7 +91,7 @@ The following arguments are supported: * `guest_customization_cloud_init_custom_key_values`: - (Optional) Generic key value pair used for custom attributes in cloud init. * `guest_customization_is_overridable`: - (Optional) Flag to allow override of customization by deployer. * `guest_customization_sysprep`: - (Optional) VM guests may be customized at boot time using one of several different methods. Currently, cloud-init w/ ConfigDriveV2 (for Linux VMs) and Sysprep (for Windows VMs) are supported. Only ONE OF sysprep or cloud_init should be provided. Note that guest customization can currently only be set during VM creation. Attempting to change it after creation will result in an error. Additional properties can be specified. For example - in the context of VM template creation if \"override_script\" is set to \"True\" then the deployer can upload their own custom script. -* `guest_customization_sysrep_custom_key_values`: - (Optional) Generic key value pair used for custom attributes in sysrep. +* `guest_customization_sysprep_custom_key_values`: - (Optional) Generic key value pair used for custom attributes in sysprep. * `should_fail_on_script_failure`: - (Optional) Extra configs related to power state transition. Indicates whether to abort ngt shutdown/reboot if script fails. * `enable_script_exec`: - (Optional) Extra configs related to power state transition. Indicates whether to execute set script before ngt shutdown/reboot. * `power_state_mechanism`: - (Optional) Indicates the mechanism guiding the VM power state transition. Currently used for the transition to \"OFF\" state. Power state mechanism (ACPI/GUEST/HARD). @@ -249,3 +249,10 @@ The `project_reference`, `owner_reference`, `availability_zone_reference`, `netw * `uuid`: - the UUID(Required). See detailed information in [Nutanix Virtual Machine](http://developer.nutanix.com/reference/prism_central/v3/#vms). + +## Import +Nutanix virtual machines can be imported using the `UUID`, e.g., + +``` +terraform import nutanix_virtual_machine.vm01 0F75E6A7-55FB-44D9-A50D-14AD72E2CF7C +``` diff --git a/website/nutanix.erb b/website/nutanix.erb index 00f6ebd47..ddda8d97c 100644 --- a/website/nutanix.erb +++ b/website/nutanix.erb @@ -382,6 +382,9 @@ > nutanix_ndb_maintenance_task + > + nutanix_karbon_worker_nodepool +
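For reference, the `Timeouts` section of the new `nutanix_karbon_worker_nodepool` documentation above lists the create, update, and delete operations but does not show them in a configuration. The sketch below is a hypothetical example, not part of this changeset: the cluster name, subnet UUID, and durations are placeholders, and it assumes the resource wires the standard Terraform `timeouts` block to the `d.Timeout(...)` calls visible in its create, update, and delete functions.

```hcl
# Hypothetical sketch (not from this changeset): a worker node pool with
# explicit operation timeouts. All identifiers below are placeholders.
resource "nutanix_karbon_worker_nodepool" "example" {
  cluster_name  = "my-nke-cluster" # assumed name of an existing NKE cluster
  name          = "workerpool-example"
  num_instances = 2

  ahv_config {
    cpu          = 4
    disk_mib     = 122880
    memory_mib   = 8192
    network_uuid = "00000000-0000-0000-0000-000000000000" # placeholder subnet UUID
  }

  labels = {
    env = "dev"
  }

  # Assumes the standard timeouts block is supported, matching the
  # Timeouts section (create/update/delete) documented above.
  timeouts {
    create = "1h"
    update = "30m"
    delete = "10m"
  }
}
```

Because a change to `num_instances` is applied through the scale-up/scale-down calls in the update function, a longer `update` timeout may be appropriate for larger pools.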