# terraform-openstack-rke


Terraform module to deploy Kubernetes with RKE on OpenStack.

Inspired by Marco Capuccini's work, rewritten from scratch for Terraform 0.12+ and the new terraform-rke-provider.

💥 You can now use the next-generation rke/openstack module: see terraform-openstack-rke2 💥


## Prerequisites

- Terraform 0.13+. For Terraform 0.12.x, use the `terraform/v0.12` branch.
- An OpenStack environment properly sourced.
- An OpenStack image fulfilling the RKE requirements.
- At least one OpenStack floating IP.

## Terraform 0.13 upgrade

terraform-openstack-rke >= 0.5 supports Terraform >= 0.13. Some changes in the way Terraform manages providers require manual operations:

```bash
terraform 0.13upgrade
terraform state replace-provider 'registry.terraform.io/-/rke' 'registry.terraform.io/rancher/rke'
terraform init
```

For more information, see Upgrading to Terraform v0.13.

⚠️ There are some deep changes between the 0.4 and 0.5 branches. They may lead to a replacement of the nodes and the RKE cluster resources. ⚠️

## Examples

Minimal example with the master node as edge node and two worker nodes:

```hcl
# Consider using 'export TF_VAR_os_auth_url=$OS_AUTH_URL'
variable "os_auth_url" {}
# Consider using 'export TF_VAR_os_password=$OS_PASSWORD'
variable "os_password" {}

module "rke" {
  source              = "remche/rke/openstack"
  image_name          = "ubuntu-18.04-docker-x86_64"
  public_net_name     = "public"
  master_flavor_name  = "m1.small"
  worker_flavor_name  = "m1.small"
  os_auth_url         = var.os_auth_url
  os_password         = var.os_password
}
```

Minimal example with two edge nodes and one worker node:

```hcl
# Consider using 'export TF_VAR_os_auth_url=$OS_AUTH_URL'
variable "os_auth_url" {}
# Consider using 'export TF_VAR_os_password=$OS_PASSWORD'
variable "os_password" {}

module "rke" {
  source              = "remche/rke/openstack"
  image_name          = "ubuntu-18.04-docker-x86_64"
  public_net_name     = "public"
  master_flavor_name  = "m1.small"
  worker_flavor_name  = "m1.small"
  edge_count          = 2
  worker_count        = 1
  master_labels       = { "node-role.kubernetes.io/master" = "true" }
  edge_labels         = { "node-role.kubernetes.io/edge" = "true" }
  os_auth_url         = var.os_auth_url
  os_password         = var.os_password
}
```

## Documentation

See USAGE.md for all available options.

## Keypair

You can either provide an SSH key file to generate a new keypair via `ssh_key_file` (default) or specify an existing keypair via `ssh_keypair_name`.

⚠️ The default config will try to use the SSH agent for SSH connections to the nodes. Add `use_ssh_agent = false` if you don't use one.
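
For instance, a minimal sketch using a pre-existing keypair (the keypair name below is hypothetical):

```hcl
module "rke" {
  source           = "remche/rke/openstack"
  # ... other required options (image, flavors, credentials) ...
  ssh_keypair_name = "my-keypair" # hypothetical: an existing OpenStack keypair
  use_ssh_agent    = false        # don't rely on a running ssh-agent for node connections
}
```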

## Secgroup

You can define your own rules (e.g. limiting ports 22 and 6443 to an admin box):

```hcl
secgroup_rules = [
  { "source" = "x.x.x.x", "protocol" = "tcp", "port" = 22 },
  { "source" = "x.x.x.x", "protocol" = "tcp", "port" = 6443 },
  { "source" = "0.0.0.0/0", "protocol" = "tcp", "port" = 80 },
  { "source" = "0.0.0.0/0", "protocol" = "tcp", "port" = 443 },
]
```

## Nodes

The default config will deploy one master and two worker nodes. It will use Traefik (nginx is not supported in this case). You can define edge nodes (see above).

You can set the affinity policy for each node group (master, worker, edge) via `{master,worker,edge}_server_affinity`. The default is `soft-anti-affinity`; see the sketch after the warning below.

⚠️ `soft-anti-affinity` and `soft-affinity` need Compute service API 2.15 or above.
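
For example (a sketch; the values follow the OpenStack server-group policies):

```hcl
master_server_affinity = "anti-affinity"      # hard guarantee: masters on distinct hosts
worker_server_affinity = "soft-anti-affinity" # best effort, the default
edge_server_affinity   = "soft-anti-affinity"
```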

You can use `wait_for_commands` to specify a list of commands to run before invoking RKE. This can be useful when installing Docker at provision time (note that baking Docker into your image with Packer is better practice):

```hcl
wait_for_commands = ["while docker info ; [ $? -ne 0 ]; do echo wait for docker; sleep 30 ; done"]
```

## Boot from volume

Some providers require booting the instances from an attached boot volume instead of the Nova ephemeral disk. To enable this feature, add these variables to the config:

```hcl
boot_from_volume = true
boot_volume_size = 20
```

## Loadbalancer

If `enable_loadbalancer = true`, this module will create a layer-4 load balancer using LBaaS or LBaaSv2 in front of the master nodes, or the edge nodes if there are any. It creates the appropriate TCP listeners and monitors for HTTP (:80), HTTPS (:443) and the Kubernetes API (:6443).

To use Octavia instead of Neutron Networking as LBaaS, set:

```hcl
use_octavia = true
```

## Kubernetes version

You can specify the Kubernetes version with the `kubernetes_version` variable. Refer to the RKE supported versions.
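
A sketch (the version string below is hypothetical; pick one that your RKE release actually supports):

```hcl
kubernetes_version = "v1.18.3-rancher2-2" # hypothetical; must match an RKE-supported version
```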

## Cloud provider

The module will deploy the OpenStack cloud provider and create the Kubernetes StorageClasses for Cinder. If you have several Cinder storage types, you can list them in the `storage_types` variable.

You can disable the cloud provider via the `cloud_provider` variable.
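
For example (a sketch; the storage type names are hypothetical and must match your Cinder volume types):

```hcl
cloud_provider = true
storage_types  = ["classic", "high-speed"] # hypothetical Cinder volume type names
```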

## Reverse Proxy

The module will deploy Traefik by default, but you can use nginx-ingress instead. Note that nginx is not supported when the master node is the edge node.

## User Add-Ons

You can specify your own user add-ons with the `addons_include` variable.
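
A sketch (the manifest paths and URLs are hypothetical; `addons_include` takes a list of YAML manifests, local paths or URLs, for RKE to apply):

```hcl
addons_include = [
  "https://example.com/manifests/my-addon.yml", # hypothetical URL to a manifest
  "${path.module}/manifests/monitoring.yml",    # hypothetical local manifest file
]
```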

## Usage with RancherOS

RancherOS needs the node config drive to be enabled. You can also provide a cloud-config file:

```hcl
image_name          = "rancheros-1.5.5-x86_64"
system_user         = "rancher"
nodes_config_drive  = "true"
user_data_file      = "rancher.yml"
```

## Kubernetes provider

⚠️ Interpolating provider variables from module outputs is not the recommended way to achieve integration. See here and here.

Using a data source is recommended.

(Not recommended) You can use this module to populate the Terraform Kubernetes provider:

provider "kubernetes" {
  host     = module.rke.rke_cluster.api_server_url
  username = module.rke.rke_cluster.kube_admin_user

  client_certificate     = module.rke.rke_cluster.client_cert
  client_key             = module.rke.rke_cluster.client_key
  cluster_ca_certificate = module.rke.rke_cluster.ca_crt
}

The recommended way needs two apply operations and a properly set `terraform_remote_state` data source:

provider "kubernetes" {
  host     = data.terraform_remote_state.rke.outputs.cluster.api_server_url
  username = data.terraform_remote_state.rke.outputs.cluster.kube_admin_user
  client_certificate     = data.terraform_remote_state.rke.outputs.cluster.client_cert
  client_key             = data.terraform_remote_state.rke.outputs.cluster.client_key
  cluster_ca_certificate = data.terraform_remote_state.rke.outputs.cluster.ca_crt
  load_config_file = "false"
}
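
The matching data source might look like this (a sketch assuming the RKE state lives in a local backend; point the config at wherever your state is actually stored):

```hcl
data "terraform_remote_state" "rke" {
  backend = "local" # assumption: adjust to your real backend (s3, http, ...)
  config = {
    path = "../rke/terraform.tfstate" # hypothetical path to the RKE state file
  }
}
```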