andrescosta/jobico-cloud

A collection of bash scripts that automates the creation and deployment of Kubernetes clusters with flexible topologies, from single-node to high-availability setups.


Introduction

This educational project started as a set of scripts to automate the process described in Kubernetes the Hard Way. Over time, it has evolved into a broader initiative that supports various cluster topologies and integrates several Kubernetes extensions as addons. Despite these enhancements, the project remains an educational tool to deepen my understanding of technologies such as Kubernetes, its high availability configurations, Bash scripting, and Linux.

For details on the project's implementation, check IMPLEMENTATION.md.

Overview

The following sections outline the various topologies that the tool can create, based on parameters specified on the command line or in a YAML file.

Single Control Plane Configuration

[Diagram: single control plane topology]

In this setup, there is one dedicated control plane server managing one or more worker nodes, offering a straightforward configuration often used in small-scale deployments.

High Availability Configuration

[Diagram: high availability topology]

This topology features multiple control plane servers and one or more worker nodes, designed to ensure redundancy and fault tolerance in production environments. It leverages HAProxy and Keepalived to implement a highly available load balancer setup.

Single Node Configuration

[Diagram: single-node topology]

This topology involves a single server that fulfills both the control plane and worker node roles, providing a basic setup typically used for learning and testing purposes.

Kubernetes Cluster

Architecture

[Diagram: cluster architecture]

Extensions

Extensions are additional components installed in a cluster to enhance its capabilities. These enhancements include tools and features such as observability solutions for monitoring and logging, database management systems, metrics collection and analysis, container registries, and more.

Add-Ons & Services

Add-ons and Services are part of a rudimentary mechanism for managing dependencies among extensions within a cluster. Core add-ons have no dependencies other than Kubernetes itself, while Extra add-ons depend on the core add-ons. Service components are installed after the cluster is created and all Pods are in the "Ready" state. Core services have no dependencies, whereas Extra services depend on the Core services.
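As an illustration, the add-on and service names used in the YAML examples later in this document suggest a layout along these lines (a sketch only; the actual repository contents may differ):

addons/
  core/     # Core add-ons: no dependencies beyond Kubernetes itself (e.g. lb, dns, storage)
  extras/   # Extra add-ons: depend on the core add-ons (e.g. registry, database)
services/
  core/     # Core services: installed once all Pods are Ready (e.g. database)
  extras/   # Extra services: depend on the core services (e.g. identity)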

Enhancements list

  • CoreDNS: It provides cluster-wide DNS services.

  • k8s_gateway: This component acts as a single external DNS interface into the cluster. It supports Ingress, Service of type LoadBalancer, and resources from the Gateway API project.

  • MetalLB: A network load balancer implementation. The pool of IP addresses can be configured here: /addons/core/metallb

  • NFS: This driver allows Kubernetes to access an NFS server on Linux nodes.

  • Traefik: The Traefik Kubernetes Ingress provider is a Kubernetes Ingress controller. It manages access to cluster services by supporting the Ingress specification. The dashboard is accessible here: https://ingress.jobico.local/dashboard

  • Metrics: It collects resource metrics from kubelets and exposes them in the Kubernetes API server through the Metrics API, for use by the Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics can also be accessed via kubectl top.

  • Distribution Registry: A server-side application that stores and lets you distribute container images and other content.

  • Observability: It installs and integrates the following services:

    • The Kube-Prometheus stack
    • Grafana Loki
    • Grafana Tempo
    • Grafana Pyroscope
    • Prometheus Postgres Exporter

    In addition, it provisions the following dashboards:

    • JVM-micrometer: A dashboard that presents a collection of metrics collected from services running on a Java Virtual Machine.
    • Trace: This dashboard displays information from Loki (logs), Tempo (traces), and Prometheus (metrics), correlated by a Trace ID and associated with a service call.
    • Pg: It displays the information collected by the Prometheus Postgres Exporter.

    Grafana can be accessed from here: https://grafana.jobico.local/

  • Dashboard: A general-purpose, web-based UI for Kubernetes clusters. It allows you to manage applications running in the cluster. It can be accessed from here: https://dashboard.jobico.local

  • CloudNativePG: CloudNativePG is an operator that covers the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture, using native streaming replication.

  • pgAdmin: A feature-rich administration and development platform for PostgreSQL. pgAdmin is accessible here: https://pg.jobico.local/

  • ZITADEL: Identity management service that implements several standards like OpenID Connect and SAML. The dashboard is accessible here: https://id.jobico.local/

  • Tekton: It is an open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems. The dashboard is accessible here: https://cd.jobico.local/

  • ArgoCD: Declarative continuous delivery with a fully-loaded UI. The dashboard is accessible here: https://argocd.jobico.local/

Disabling Enhancements

To skip the deployment of an extension when a cluster is built from the command line, create a file named disabled in the extension's directory. The extension (add-on or service) will then not be deployed, allowing you to customize the cluster setup to your needs, as shown below.
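For example, to keep the registry add-on from being deployed (assuming it lives under addons/extras/registry, in line with the layout sketched earlier):

# Hypothetical path; adjust it to the add-on's actual directory.
touch addons/extras/registry/disabled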

jobico.local

DNS

The script deploys CoreDNS for internal DNS resolution and k8s_gateway for external DNS resolution. It designates jobico.local as the primary domain for all subdomains corresponding to services that expose functionality externally. You can specify a different domain with the --domain option of the new command. On local machines, you can configure the DNS server at 192.168.122.23 to handle all domains within jobico.local; one method to achieve this is Split DNS.
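As a minimal sketch, on a host running systemd-resolved, a drop-in file can route all jobico.local queries to the cluster's DNS server (the file name is arbitrary):

# /etc/systemd/resolved.conf.d/jobico.conf
[Resolve]
DNS=192.168.122.23
Domains=~jobico.local

Restart the service afterwards (systemctl restart systemd-resolved) for the change to take effect. Numerous tutorials on Split DNS are available online for other resolvers.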

Certificate

A common certificate for all services exposed under jobico.local is generated and signed by the cluster's CA. After creating the cluster, you can install the cluster's CA on Linux using the CLI (see Installing the CA below).
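To inspect the certificate a service presents (a sketch; it assumes the cluster is up and grafana.jobico.local resolves via the DNS setup above):

openssl s_client -connect grafana.jobico.local:443 -servername grafana.jobico.local </dev/null | openssl x509 -noout -issuer -subject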

Management

Creation

Before proceeding with the cluster creation, install the dependencies described in the section Prerequisites and generate the cloud-init cfg files by running:

$ ./cluster.sh cfg

After a cluster is created, you can configure Split DNS to access services using the jobico.local domain; see the DNS section above for more information.

From the command line

# Cluster without worker nodes and one control plane server.
./cluster.sh new --nodes 0

# Cluster with two worker nodes and one control plane server.
./cluster.sh new

# Cluster with two worker nodes and one control plane server that is also schedulable.
./cluster.sh new --schedulable-server

# HA Cluster with three worker nodes, two control plane servers and two load balancers.
./cluster.sh new --nodes 3 --cpl 2

# HA Cluster with ten worker nodes, five control plane servers and three load balancers.
./cluster.sh new --nodes 10 --cpl 5 --lb 3

# HA Cluster with three worker nodes, two control plane servers and two load balancers. 
# After the construction is completed (all pods Ready), it installs the scripts in the 
# /services directory.
./cluster.sh new --nodes 3 --cpl 2 --services

For more information, check the cluster.sh command reference below.

Using a YAML file

The cluster description can also be specified in a YAML file to facilitate automation scripts:

./cluster.sh yaml [FILE NAME]
Schema
cluster:
  node:
    size: [NUMBER OF WORKER NODES]
  cpl:
    schedulable: [true if the control plane servers can also schedule pods]
    size: [NUMBER OF CONTROL PLANE SERVERS]
    lb:
      size: [NUMBER OF LOAD BALANCERS]
  addons:
    - dir: [Subdirectory of addons]
      list:
      [List of addons]
  services:
    - dir: [Subdirectory of services]
      list:
      [List of services]

Examples

  • One schedulable control plane server and three worker nodes with several addons and services.
cluster:
  node:
    size: 3
  cpl:
    schedulable: true
  addons:
    - dir: core
      list:
        - lb
        - dns
        - storage
    - dir: extras
      list:
        - registry
        - database
  services:
    - dir: core
      list:
        - database
    - dir: extras
      list:
        - identity
  • High availability configuration with five worker nodes, three control plane servers, and two load balancers.
cluster:
  node:
    size: 5
  cpl:
    schedulable: false
    size: 3
    lb:
      size: 2
  addons:
    - dir: core
      list:
        - lb
        - dns
        - storage
    - dir: extras
      list:
        - registry
        - database
  services:
    - dir: core
      list:
        - database
    - dir: extras
      list:
        - identity
Running examples
  • One node
./cluster.sh yaml examples/basic/cluster.yaml 
  • High availability
./cluster.sh yaml examples/ha/cluster.yaml

Kubernetes versions

The flag --vers [file_name] allows you to specify the file containing the versions for each Kubernetes cluster component to be installed. If not provided, it defaults to extras/downloads_db/vers.txt.

# The file vers131.txt installs K8s version v1.31.0
./cluster.sh new --vers extras/downloads_db/vers131.txt

Work Directories

The tool creates several directories to store the files generated during cluster creation, including those for Kubernetes components. These files are crucial for both normal cluster operation and adding new nodes. By default, the directories are located in $HOME/.jobico, but you can specify a different location using the --dir flag when running the new command.
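For example, to keep the generated files on a dedicated volume instead of $HOME/.jobico (the path below is purely illustrative):

./cluster.sh new --nodes 3 --dir /mnt/data/jobico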

Installing the CA

Once the cluster is created, you can add the generated CA to local certificate databases by running the following command:

./cluster.sh ca add

Add nodes

After creating a cluster, you can add extra nodes by running the following command:

./cluster.sh add [NUMBER OF WORKER NODES]

For example:

# Adds 1 worker node
./cluster.sh add

# Adds 5 worker nodes
./cluster.sh add 5

Managing VMs

# Starts the cluster's VMs.
./cluster.sh start
# Shuts down the cluster's VMs.
./cluster.sh shutdown
# Suspends the cluster's VMs.
./cluster.sh suspend
# Resumes the cluster's VMs from suspension.
./cluster.sh resume
# Lists the cluster's VMs.
./cluster.sh list

Destroying the cluster

The following command stops and deletes the VMs that form a cluster:

./cluster.sh destroy

Kubectl plugin

All these commands can also be executed through the kubectl command line tool. Just add the installation directory to your PATH, and they become accessible via the jobico plugin:

kubectl jobico <command>

# Examples:
## Creating a cluster with 3 worker nodes
kubectl jobico new --nodes 3
## Destroying a cluster
kubectl jobico destroy
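For instance, assuming the repository was cloned to ~/jobico-cloud (a hypothetical location):

export PATH="$PATH:$HOME/jobico-cloud"
kubectl jobico list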

Cluster.sh command reference

Usage: 
       ./cluster.sh <command> [arguments]
Commands:
          help
          new
          add
          destroy
          addons
          start
          shutdown
          suspend
          resume
          info
          state
          list
          local
          kvm
          cfg
          debug

Additional help: ./cluster.sh help <command>
Usage: ./cluster.sh new [arguments]
Creates the VMs and deploys a Kubernetes cluster onto them.
The arguments that define how the cluster will be created:
     --nodes n
            Specify the number of worker nodes to be created. The default value is 2. 
     --cpl n
            Specify the number of control plane nodes to be created. The default value is 1. 
     --lb n
            Specify the number of load balancers to be created in case --cpl is greater than 1. The default value is 2. 
     --addons dir_name
            Specify a different directory name for the addons. Default: /home/andres/projs/jobico-cloud/addons
     --no-addons
            Skip the installation of addons.
     --services
            Waits for the cluster to be created and then runs the scripts in the 'services' directory.
     --vers
            File name with version numbers.
     --schedulable-server
            The control plane nodes will be available to schedule pods. The default is false (tainted).
     --dry-run
            Creates the databases, kubeconfigs, and certificates but does not create the actual cluster. This option is useful for debugging.
     --dir
            Directory where the support files are stored.
     --domain
            Cluster's domain. Default: jobico.local
     --debug [ s | d ]
            Enable the debug mode.
       s: displays basic information.
       d: display advanced information.
Usage: ./cluster.sh add [arguments]
Add new nodes to the current Kubernetes cluster.
The arguments that define how the cluster will be updated:
     --nodes n
            Specify the number of worker nodes to be added. The default value is 1. 
     --dry-run
            Updates the databases, kubeconfigs, and certificates but does not add the actual nodes. This option is useful for debugging.
     --debug [ s | d ]
            Enable the debug mode.
       s: displays basic information.
       d: display advanced information.
Usage: ./cluster.sh destroy
Destroys the Kubernetes cluster and the VMs.
Usage: ./cluster.sh addons [dir]
Install the addons from the folder ./addons or the one passed as argument.
Usage: ./cluster.sh cfg
Create the cloud init cfg files.
Usage: ./cluster.sh debug
Prints the content of the internal databases using the dao scripts.
Usage: ./cluster.sh <info|state|list>
Display information about the cluster's VM(s).
Usage: ./cluster.sh kvm
Install kvm and its dependencies locally.
Usage: ./cluster.sh local
Prepares the local environment. It creates the kubeconfig and installs kubectl.
Usage: ./cluster.sh start
Starts the cluster's VMs
Usage: ./cluster.sh ca <command>
Cluster CA management.
Commands:
          add - Add the CA to the local certificate repositories.

Prerequisites

The required packages must be installed locally before creating a cluster.

The deps.sh script can facilitate the installation of these dependencies (except Helm), as sketched below.
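For example, assuming deps.sh sits at the repository root (its actual location may differ):

./deps.sh
# Helm is not covered by deps.sh and must be installed separately.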

Possible future areas of work

  • Add more capabilities configured by YAML (CIDRs, main domain, etc.)
  • Improvements to the observability stack
  • Improvements to the plugins mechanism
  • Improvements to the templating system
  • Performance
  • Cloud-Init
  • TLS updates using Let's Encrypt
  • Control Plane Kubelet
  • Kubeadm
  • External Etcd
