- if Scaleway is slow, Weave may not bootstrap correctly. If your slave nodes stay `NotReady` and the weave pods are bellyaching, try again later, when Scaleway is less busy
- terraform <= 0.8.6, for some reason
- kubernetes 1.6.x at most; kubeadm 1.7.x breaks on kubernetes/kubeadm#345 (see the version check after this list)
- removed DO (DigitalOcean) support; the ufw setup now has a lot of SCW-specific hacks
- we're now running on C2S instances, not VC1S, so we can feed extra volumes to gluster in the future
- default cluster is brought up with 50G root volume and 50G gluster volume. This comes out to about EUR 30,- a month, not 10!
- some small tweaks to make our cluster ready for gluster
- you can use gluster-kubernetes to set up your gluster env
- fill in your topology.json: `manage` hostnames as reported by kubectl, `storage` addresses as the WireGuard wg0 IP addresses, and devices e.g. /dev/nbd1 (see the topology.json sketch after this list)
- you will need to change the glusterfs-daemonset to tolerate running on the kube-master; in my opinion this is not a huge deal, since we've got 4 cores and 8GB on this master, not the 2 cores and 2GB of a VC1S:
  ```yaml
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Equal
    effect: NoSchedule
  ```
- it may not be immediately apparent that the gluster-kubernetes `./gk-deploy` script must be executed on one of the kubernetes nodes themselves; this is because heketi-cli connects to internal ClusterIPs to deploy (see the gk-deploy example after this list)
- PR 155 is not yet merged; it uses ansible to deploy on our existing SCW cluster
- PR 168 was closed, but is still valid unless you need to run the glusterfs-daemonset on your master as well, as above. If you have a non-storage master, it should suit you, though.
- you'll need heketi-cli and a working kubectl on the node where you execute gk-deploy
- if the heketi pod is acting up with a read-only DB, note that the old pod may not have released its mount on the specific node it was running on. Make sure to schedule the new pod on that node, or give up and always schedule the heketi pod on the master node, just like you're doing for your ingress controller (see the pinning sketch after this list)
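
A quick sanity check of the version constraints above, before touching the cluster; it only reads versions and assumes nothing beyond the tools already being installed:

```sh
# this setup is known to work with terraform <= 0.8.6 and kubernetes/kubeadm 1.6.x only
terraform version        # should report v0.8.6 or older
kubectl version --short  # client and server should both be on 1.6.x
kubeadm version          # run on a node; stick to a 1.6.x kubeadm as well
```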
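
A sketch of what topology.json can look like for this three-node setup. The kube1..kube3 hostnames, the 10.0.1.x wg0 addresses and the device name are assumptions: use the node names from `kubectl get nodes`, the wg0 addresses from `ip addr show wg0` on each node, and the actual device of the extra 50G volume (usually /dev/nbd1 on Scaleway):

```sh
# write an example topology.json for gluster-kubernetes; all values are placeholders
cat > topology.json <<'EOF'
{
  "clusters": [
    {
      "nodes": [
        {
          "node": { "hostnames": { "manage": ["kube1"], "storage": ["10.0.1.1"] }, "zone": 1 },
          "devices": ["/dev/nbd1"]
        },
        {
          "node": { "hostnames": { "manage": ["kube2"], "storage": ["10.0.1.2"] }, "zone": 1 },
          "devices": ["/dev/nbd1"]
        },
        {
          "node": { "hostnames": { "manage": ["kube3"], "storage": ["10.0.1.3"] }, "zone": 1 },
          "devices": ["/dev/nbd1"]
        }
      ]
    }
  ]
}
EOF
```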
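
Running the deploy script, roughly; the clone location is arbitrary, the important part is that this happens on one of the kubernetes nodes with kubectl and heketi-cli on its PATH:

```sh
# run on one of the kubernetes nodes, since heketi-cli talks to internal ClusterIPs
git clone https://github.com/gluster/gluster-kubernetes.git
cd gluster-kubernetes/deploy
# put the topology.json prepared above next to gk-deploy, then:
./gk-deploy -g topology.json   # -g also deploys the glusterfs daemonset
```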
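
One way to always schedule heketi on the master, as suggested above; the `heketi` deployment name and the `kube1` master hostname are assumptions, so check `kubectl get deployments` and `kubectl get nodes` first:

```sh
# pin the heketi pod to the master node so its DB volume mount never has to move
kubectl patch deployment heketi -p '{
  "spec": {
    "template": {
      "spec": {
        "nodeSelector": { "kubernetes.io/hostname": "kube1" },
        "tolerations": [
          { "key": "node-role.kubernetes.io/master", "operator": "Equal", "effect": "NoSchedule" }
        ]
      }
    }
  }
}'
```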
This is part of the Hobby Kube project. Functionality of the modules is described in the guide.
Deploy a secure Kubernetes cluster on Scaleway using Terraform.
The following packages are required to be installed locally:
```sh
brew install terraform kubectl jq wireguard-tools
```
The modules use ssh-agent for remote operations. Add your SSH key with `ssh-add -K` if Terraform repeatedly fails to connect to remote hosts.
Export the following environment variables depending on the modules you're using.
```sh
export TF_VAR_scaleway_organization=<ACCESS_KEY>
export TF_VAR_scaleway_token=<TOKEN>
export TF_VAR_domain=<domain> # e.g. example.org
export TF_VAR_cloudflare_email=<email>
export TF_VAR_cloudflare_token=<token>
```
```sh
# fetch the required modules
$ terraform get

# see what `terraform apply` will do
$ terraform plan

# execute it
$ terraform apply
```
Modules in this repository can be used independently:
module "kubernetes" {
source = "github.com/hobby-kube/provisioning/service/kubernetes"
}
After adding this to your plan, run `terraform get` to fetch the module.