Terraforming AWS: deploy AVE, DDVE and more from AWS Marketplace

These modules can deploy Dell PowerProtect DataDomain Virtual Edition, PowerProtect DataManager, NetWorker Virtual Edition and Avamar Virtual Edition to AWS using Terraform. Instance sizes and disk count/size are evaluated automatically by specifying a ddve_type and ave_type.

Individual modules are called from main by evaluating variables.

Requirements

| Name | Version |
|------|---------|
| terraform | >= 0.14.9 |
| aws | ~> 4.34.0 |
| random | ~> 3.1 |
| tls | ~> 3.1 |

Modules

| Name | Source | Version |
|------|--------|---------|
| ave | ./modules/ave | n/a |
| bastion | ./modules/bastion | n/a |
| client_vpn | ./modules/client_vpn | n/a |
| cr | ./modules/cr | n/a |
| crs_client_vpn | ./modules/client_vpn | n/a |
| crs_networks | ./modules/networks | n/a |
| crs_s2s_vpn | ./modules/s2s_vpn | n/a |
| ddmc | ./modules/ddmc | n/a |
| ddve | ./modules/ddve | n/a |
| eks | ./modules/eks | n/a |
| networks | ./modules/networks | n/a |
| nve | ./modules/nve | n/a |
| ppdm | ./modules/ppdm | n/a |
| s2s_vpn | ./modules/s2s_vpn | n/a |
| vault_nve | ./modules/nve | n/a |
| vault_ppdm | ./modules/ppdm | n/a |

Resources

No resources.

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| AVE_HOSTNAME | Hostname of the AVE machine | string | "ave_terraform" | no |
| BASTION_HOSTNAME | Hostname of the bastion machine | string | "bastion_terraform" | no |
| DDMC_HOSTNAME | Hostname of the DDMC machine | string | "ddmc_terraform" | no |
| DDVE_HOSTNAME | Hostname of the DDVE machine | string | "ddve_terraform" | no |
| NVE_HOSTNAME | Hostname of the NVE machine | string | "nve_terraform" | no |
| PPDM_HOSTNAME | Hostname of the PPDM machine | string | "ppdm_terraform" | no |
| availability_zone | Availability zone to use | string | "eu-central-1a" | no |
| ave_count | How many AVE(s) you want to create | number | 0 | no |
| ave_type | AVE type, can be '0.5 TB AVE', '1 TB AVE', '2 TB AVE', '4 TB AVE', '8 TB AVE', '16 TB AVE' | string | "0.5 TB AVE" | no |
| aws_profile | n/a | any | n/a | yes |
| cr_sg_id | ID of the default security group when using existing networks | any | null | no |
| create_bastion | Do you want to create a bastion | bool | false | no |
| create_client_vpn | Create a pre-configured site-to-client VPN | bool | false | no |
| create_crs_client_vpn | Do you want to create a client VPN for the Cyber Vault | bool | false | no |
| create_crs_networks | Do you want to create a VPC for the Cyber Vault | bool | false | no |
| create_crs_s2s_vpn | Do you want to create a Site-to-Site VPN for the Cyber Vault | bool | false | no |
| create_networks | Do you want to create a VPC | bool | false | no |
| create_s2s_vpn | Do you want to create a Site-to-Site VPN for a default VPN device (e.g. UBNT UDM Pro) | bool | false | no |
| create_vault | Do you want to create a Cyber Vault | bool | false | no |
| crs_environment | Will be added to many resource names/tags; should be lower case (a-z, 0-9 and -) | string | "crs" | no |
| crs_open_sesame | Open port 2051 to the vault for creating the replication context | bool | false | no |
| crs_private_route_table | Private routing table for S2S VPN | string | "" | no |
| crs_private_subnets_cidr | CIDRs of the private subnets when creating the VPC | list(any) | n/a | yes |
| crs_public_subnets_cidr | CIDRs of the public subnets when creating the VPC | list(any) | n/a | yes |
| crs_subnet_id | n/a | any | n/a | yes |
| crs_tunnel1_preshared_key | The preshared key for the VPN tunnel when deploying S2S VPN | string | "" | no |
| crs_vault_subnet_id | n/a | any | n/a | yes |
| crs_vpc_cidr | n/a | any | n/a | yes |
| crs_vpc_id | ID of the VPC when using existing networks/VPC | string | "" | no |
| crs_vpn_destination_cidr_blocks | The CIDR blocks, as a string, for the destination route in your local network when s2s_vpn is deployed | string | "[]" | no |
| crs_wan_ip | The IP of your VPN device when using S2S VPN | any | n/a | yes |
| ddmc_count | How many DDMC(s) you want to create | number | 0 | no |
| ddmc_type | DDMC type, can be: '12.5 Gigabit Ethernet DDMC', '10 Gigabit Ethernet DDMC' | string | "12.5 Gigabit Ethernet DDMC" | no |
| ddmc_version | DDMC version, can be: '7.13.0.10', '7.12.0.0', '7.10.1.20', '7.7.5.30', '7.7.5.25' | string | "7.13.0.10" | no |
| ddve_count | How many DDVE(s) you want to create | number | 0 | no |
| ddve_type | DDVE type, can be: '16 TB DDVE', '32 TB DDVE', '96 TB DDVE', '256 TB DDVE' | string | "16 TB DDVE" | no |
| ddve_version | DDVE version, can be: '7.13.0.20', '7.10.1.20', '7.7.5.30' | string | "7.13.0.20" | no |
| default_sg_id | ID of the default security group when using existing networks | any | null | no |
| eks_cluster_name | The name (prefix) of the EKS cluster | string | "tfeks" | no |
| eks_count | The count of EKS clusters | number | 0 | no |
| environment | Will be added to many resource names/tags; should be lower case (a-z, 0-9 and -) | any | n/a | yes |
| ingress_cidr_blocks | Machines to allow ingress, other than default SG ingress | list(any) | ["0.0.0.0/0"] | no |
| nve_count | How many NVE(s) you want to create | number | 0 | no |
| nve_type | NVE type, can be 'small', 'medium', 'large' | string | "small" | no |
| nve_version | NVE version, can be '19.10.0.1', '19.9.0.0' | string | "19.10.0.1" | no |
| ppdm_count | How many PPDM(s) you want to create | number | 0 | no |
| ppdm_version | PPDM version, can be: '19.14.0', '19.15.0', '19.16.0' | string | "19.16.0" | no |
| private_route_table | Private routing table for S2S VPN | string | "" | no |
| private_subnets_cidr | CIDRs of the private subnets when creating the VPC | list(any) | n/a | yes |
| public_subnets_cidr | CIDRs of the public subnets when creating the VPC; public CIDR(s) are typically used for bastions | list(any) | n/a | yes |
| region | The region for deployment | string | n/a | yes |
| subnet_id | The subnet to deploy the machines in if the VPC is not deployed automatically | list(any) | [] | no |
| tags | Key/value tags to assign to resources. | map(string) | {} | no |
| tags_all | Key/value top-level tags to assign to all resources. | map(string) | {} | no |
| tunnel1_preshared_key | The preshared key for the VPN tunnel when deploying S2S VPN | string | "" | no |
| vault_ingress_cidr_blocks | n/a | any | n/a | yes |
| vault_nve_count | n/a | number | 0 | no |
| vault_ppdm_count | n/a | number | 0 | no |
| vault_sg_id | ID of the vault security group when using existing networks | any | null | no |
| vpc_cidr | CIDR of the VPC when creating the VPC | any | null | no |
| vpc_id | ID of the VPC when using existing networks/VPC | string | "" | no |
| vpn_destination_cidr_blocks | The CIDR blocks, as a string, for the destination route in your local network when s2s_vpn is deployed | string | "[]" | no |
| wan_ip | The IP of your VPN device when using S2S VPN | any | n/a | yes |

Outputs

| Name | Description |
|------|-------------|
| PPDM_FQDN | The FQDN of the PPDM instance |
| VAULT_PPDM_FQDN | The FQDN of the vault PPDM instance |
| atos_bucket | The S3 bucket name created for ATOS configuration |
| ave_instance_id | The instance id (initial password) for the AVE instance |
| ave_private_ip | The private ip address for the AVE instance |
| ave_ssh_private_key | The ssh private key for the AVE instance |
| ave_ssh_public_key | The ssh public key for the AVE instance |
| ave_ssh_public_key_name | The ssh public key name for the AVE instance |
| bastion_instance_id | The instance id (initial password) for the bastion instance |
| bastion_public_ip | The public ip address for the bastion instance |
| bastion_ssh_private_key | The ssh private key for the bastion instance |
| bastion_ssh_public_key | The ssh public key for the bastion instance |
| bastion_ssh_public_key_name | The ssh public key name for the bastion instance |
| crjump_ssh_private_key | The ssh private key for the CR jump host |
| crs_tunnel1_address | The address for the VPN tunnel to configure your local device |
| ddcr_ssh_private_key | The ssh private key for the DD CR instance |
| ddmc_instance_id | The instance id (initial password) for the DDMC instance |
| ddmc_private_ip | The private ip address for the DDMC instance |
| ddmc_ssh_private_key | The ssh private key for the DDMC instance |
| ddmc_ssh_public_key | The ssh public key for the DDMC instance |
| ddmc_ssh_public_key_name | The ssh public key name for the DDMC instance |
| ddve_instance_id | The instance id (initial password) for the DDVE instance |
| ddve_private_ip | The private ip address for the DDVE instance |
| ddve_ssh_private_key | The ssh private key for the DDVE instance |
| ddve_ssh_public_key | The ssh public key for the DDVE instance |
| ddve_ssh_public_key_name | The ssh public key name for the DDVE instance |
| kubernetes_cluster_host | EKS cluster host |
| kubernetes_cluster_name | EKS cluster name |
| nve_instance_id | The instance id (initial password) for the NVE instance |
| nve_instance_ids | The instance ids (initial passwords) for the NVE instances |
| nve_private_ip | The private ip address for the NVE instance |
| nve_private_ips | The private ip addresses for the NVE instances |
| nve_ssh_private_key | The ssh private key for the NVE instance |
| nve_ssh_private_keys | The ssh private keys for the NVE instances |
| nve_ssh_public_key | The ssh public key for the NVE instance |
| nve_ssh_public_key_name | The ssh public key name for the NVE instance |
| ppcr_ssh_private_key | The ssh private key for the PP CR instance |
| ppdm_instance_id | The instance id (initial password) for the PPDM instance |
| ppdm_ssh_private_key | The ssh private key for the PPDM instance |
| ppdm_ssh_public_key | The ssh public key for the PPDM instance |
| ppdm_ssh_public_key_name | The ssh public key name for the PPDM instance |
| private_route_table | The VPC private route table |
| subnet_ids | The VPC subnet ids |
| tunnel1_address | The address for the VPN tunnel to configure your local device |
| vault_nve_instance_id | The instance id (initial password) for the vault NVE instance |
| vault_nve_instance_ids | The instance ids (initial passwords) for the vault NVE instances |
| vault_nve_private_ip | The private ip address for the vault NVE instance |
| vault_nve_private_ips | The private ip addresses for the vault NVE instances |
| vault_nve_ssh_private_key | The ssh private key for the vault NVE instance |
| vault_nve_ssh_private_keys | The ssh private keys for the vault NVE instances |
| vault_nve_ssh_public_key | The ssh public key for the vault NVE instance |
| vault_nve_ssh_public_key_name | The ssh public key name for the vault NVE instance |
| vault_ppdm_instance_id | The instance id (initial password) for the vault PPDM instance |
| vault_ppdm_ssh_private_key | The ssh private key for the vault PPDM instance |
| vault_ppdm_ssh_public_key | The ssh public key for the vault PPDM instance |
| vault_ppdm_ssh_public_key_name | The ssh public key name for the vault PPDM instance |
| vpc_id | The VPC id |

default Variables

I tried to keep the structure modular, given the many variations in which VPCs may be designed. You can always include or exclude a module by setting its count variable to >= 0 or its create variable to true/false. When a create variable is set to false, required IDs such as the VPC, default security groups or subnets must be provided via variables.

AVE_HOSTNAME                    = "ave_terraform"
BASTION_HOSTNAME                = "bastion_terraform"
DDMC_HOSTNAME                   = "ddmc_terraform"
DDVE_HOSTNAME                   = "ddve_terraform"
NVE_HOSTNAME                    = "nve_terraform"
PPDM_HOSTNAME                   = "ppdm_terraform"
availability_zone               = "eu-central-1a"
ave_count                       = 0
ave_type                        = "0.5 TB AVE"
aws_profile                     = ""
cr_sg_id                        = ""
create_bastion                  = false
create_client_vpn               = false
create_crs_client_vpn           = false
create_crs_networks             = false
create_crs_s2s_vpn              = false
create_networks                 = false
create_s2s_vpn                  = false
create_vault                    = false
crs_environment                 = "crs"
crs_open_sesame                 = false
crs_private_route_table         = ""
crs_private_subnets_cidr        = ""
crs_public_subnets_cidr         = ""
crs_subnet_id                   = ""
crs_tunnel1_preshared_key       = ""
crs_vault_subnet_id             = ""
crs_vpc_cidr                    = ""
crs_vpc_id                      = ""
crs_vpn_destination_cidr_blocks = "[]"
crs_wan_ip                      = ""
ddmc_count                      = 0
ddmc_type                       = "12.5 Gigabit Ethernet DDMC"
ddmc_version                    = "7.13.0.10"
ddve_count                      = 0
ddve_type                       = "16 TB DDVE"
ddve_version                    = "7.13.0.20"
default_sg_id                   = ""
eks_cluster_name                = "tfeks"
eks_count                       = 0
environment                     = ""
ingress_cidr_blocks = [
  "0.0.0.0/0"
]
nve_count                   = 0
nve_type                    = "small"
nve_version                 = "19.10.0.1"
ppdm_count                  = 0
ppdm_version                = "19.16.0"
private_route_table         = ""
private_subnets_cidr        = ""
public_subnets_cidr         = ""
region                      = ""
subnet_id                   = []
tags                        = {}
tags_all                    = {}
tunnel1_preshared_key       = ""
vault_ingress_cidr_blocks   = ""
vault_nve_count             = 0
vault_ppdm_count            = 0
vault_sg_id                 = ""
vpc_cidr                    = ""
vpc_id                      = ""
vpn_destination_cidr_blocks = "[]"
wan_ip                      = ""
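For example, a minimal sketch of a terraform.tfvars that creates a new VPC with a bastion and a single DDVE might look like the following (all CIDRs, the profile and the environment name are hypothetical placeholders to substitute with your own values):

```hcl
aws_profile          = "default"        # placeholder AWS profile
region               = "eu-central-1"
environment          = "demo"           # placeholder environment name
create_networks      = true
create_bastion       = true
vpc_cidr             = "10.0.0.0/16"    # placeholder CIDRs
private_subnets_cidr = ["10.0.1.0/24"]
public_subnets_cidr  = ["10.0.2.0/24"]
ddve_count           = 1
```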

usage

initialize Terraform Providers and Modules

terraform init

do a dry run with

terraform plan

If everything looks good, run:

terraform apply --auto-approve

Enabling Internet Access for Networks

By default, machines do not have internet access / are deployed into a private VPC.
I leave this disabled by default, as I do not want to deploy anything to the default network config automatically.

Configuration ....

This assumes that you use my Ansible playbooks for AVE, PowerProtect DataManager and PowerProtect DataDomain. Set the required variables (don't worry about the "Public" notations/names).

module_ddve

When the deployment is finished, you can connect to and configure DDVE in multiple ways. My preferred way is Ansible, but depending on your needs you may want to get into DDVE via SSH.

Configure using CLI via SSH:

For an SSH connection, use:

export DDVE_PRIVATE_FQDN=$(terraform output -raw ddve_private_ip)
terraform output -raw ddve_ssh_private_key > ~/.ssh/ddve_key
chmod 0600 ~/.ssh/ddve_key
ssh -i ~/.ssh/ddve_key sysadmin@${DDVE_PRIVATE_FQDN}

Proceed with the CLI configuration.

configure using ansible

export outputs from terraform into environment variables:

export DDVE_PUBLIC_FQDN=$(terraform output -raw ddve_private_ip)
export DDVE_USERNAME=sysadmin
export DDVE_INITIAL_PASSWORD=$(terraform output -raw ddve_instance_id)
export DDVE_PASSWORD=Change_Me12345_
export PPDD_PASSPHRASE=Change_Me12345_!
export DDVE_PRIVATE_FQDN=$(terraform output -raw ddve_private_ip)
export ATOS_BUCKET=$(terraform output -raw atos_bucket)
export PPDD_LICENSE=$(cat ~/workspace/internal.lic)
export PPDD_TIMEZONE="Europe/Berlin"

Configure DataDomain

set the Initial DataDomain Password

ansible-playbook ~/workspace/ansible_ppdd/1.0-Playbook-configure-initial-password.yml

If you have a valid dd license, set the variable PPDD_LICENSE, example:

ansible-playbook ~/workspace/ansible_ppdd/3.0-Playbook-set-dd-license.yml

Next, we set the passphrase, as it is required for ATOS. Then we set the timezone and NTP to the AWS link-local NTP server:

ansible-playbook ~/workspace/ansible_ppdd/2.1-Playbook-configure-ddpassphrase.yml
ansible-playbook ~/workspace/ansible_ppdd/2.1.1-Playbook-set-dd-timezone-and-ntp-aws.yml
ansible-playbook ~/workspace/ansible_ppdd/2.2-Playbook-configure-dd-atos-aws.yml

This concludes the basic DDVE configuration.

Optional task(s)

Optionally, create a ddboost user for Avamar:

export AVAMAR_DDBOOST_USER=ddboostave
ansible-playbook ../../ansible_ppdd/3.2-Playbook-set-boost_avamar.yml \
--extra-vars "ppdd_password=${DDVE_PASSWORD}" \
--extra-vars "ava_dd_boost_user=${AVAMAR_DDBOOST_USER}"

module_ddmc

When the deployment is finished, you can connect to and configure DDMC in multiple ways. DDMC shares the same set of APIs used to manage a DataDomain, so we reuse the DDVE methods to configure DDMC. My preferred way is Ansible, but depending on your needs you may want to get into DDMC via SSH.

Configure using CLI via SSH:

For an SSH connection, use:

export DDVE_PRIVATE_FQDN=$(terraform output -raw ddmc_private_ip)
terraform output -raw ddmc_ssh_private_key > ~/.ssh/ddmc_key
chmod 0600 ~/.ssh/ddmc_key
ssh -i ~/.ssh/ddmc_key sysadmin@${DDVE_PRIVATE_FQDN}

Proceed with the CLI configuration.

configure using ansible

export outputs from terraform into environment variables:

export DDVE_PUBLIC_FQDN=$(terraform output -raw ddmc_private_ip)
export DDVE_USERNAME=sysadmin
export DDVE_INITIAL_PASSWORD=$(terraform output -raw ddmc_instance_id)
export DDVE_PASSWORD=Change_Me12345_
export PPDD_PASSPHRASE=Change_Me12345_!
export DDVE_PRIVATE_FQDN=$(terraform output -raw ddmc_private_ip)
export PPDD_TIMEZONE="Europe/Berlin"

Configure DataDomain

set the Initial DataDomain Management Center Password

ansible-playbook ~/workspace/ansible_ppdd/1.0-Playbook-configure-initial-password.yml

If you have a valid dd license, set the variable PPDD_LICENSE, example:

ansible-playbook ~/workspace/ansible_ppdd/3.0-Playbook-set-dd-license.yml

module_ave

Configuring Avamar Virtual Edition Software using AVI API

The initial configuration can be done via the AVI installer UI or via the AVI REST API. To configure Avamar using the AVI API, you can use my AVI Ansible playbook(s).

Export Mandatory Variables:

export AVA_COMMON_PASSWORD=Change_Me12345_
export AVE_PUBLIC_IP=$(terraform output -raw ave_private_ip)
export AVE_PRIVATE_IP=$(terraform output -raw ave_private_ip)
export AVE_TIMEZONE="Europe/Berlin" # same as PPDD Timezone

Run the AVI Configuration Playbook

ansible-playbook ~/workspace/ansible_dps/avi/playbook-postdeploy_AVE.yml \
--extra-vars "ave_password=${AVA_COMMON_PASSWORD}"

Configure DataDomain for Avamar using the Avamar API via Ansible:

export AVA_FQDN=$(terraform output -raw ave_private_ip)
export AVA_HFS_ADDR=$(terraform output -raw ave_private_ip)
export AVA_DD_HOST=$(terraform output -raw ddve_private_ip)
ansible-playbook ~/workspace/ansible_dps/ava/playbook_add_datadomain.yml \
--extra-vars "ava_password=${AVA_COMMON_PASSWORD}" \
--extra-vars "ava_username=root" \
--extra-vars "ava_dd_host=${DDVE_PUBLIC_FQDN}" \
--extra-vars "ava_dd_boost_user_pwd=${DDVE_PASSWORD}" \
--extra-vars "ava_dd_boost_user=${AVAMAR_DDBOOST_USER}"

check deployment:

ansible-playbook ~/workspace/ansible_dps/ava/playbook_get_datadomain.yml \
--extra-vars "ava_username=root" \
--extra-vars "ava_password=${AVA_COMMON_PASSWORD}"

connect to AVE using ssh

retrieve the ave ssh key

terraform output -raw ave_ssh_private_key > ~/.ssh/ave_key_aws
chmod 0600 ~/.ssh/ave_key_aws
ssh -i ~/.ssh/ave_key_aws admin@${AVE_PRIVATE_IP}

module_nve

Configuring Networker Virtual Edition Software using AVI API

Let's export all upper-case keys:

eval "$(terraform output --json | jq -r 'with_entries(select(.key|test("^[A-Z]+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
export NVE_TIMEZONE="Europe/Berlin"
export NVE_FQDN=$(terraform output -raw nve_private_ip)
export NVE_PRIVATE_IP=$(terraform output -raw nve_private_ip)
export NVE_PASSWORD=Change_Me12345_
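The eval line above works because `terraform output --json` emits a map of `{ key: { value: ... } }`; the jq filter keeps only keys starting with an upper-case letter and prints export statements. A standalone sketch of the same filter on made-up sample JSON (no Terraform needed; the key names and values here are hypothetical):

```shell
# Sample JSON standing in for `terraform output --json` (values are made up)
sample='{"NVE_FQDN":{"value":"nve.example.internal"},"lower_key":{"value":"skipped"}}'
# Same filter as above: keep keys that start with an upper-case letter,
# then emit export statements and eval them
eval "$(printf '%s' "$sample" | jq -r 'with_entries(select(.key|test("^[A-Z]+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
echo "$NVE_FQDN"    # only the upper-case key was exported
```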

Run the AVI Configuration Playbook

ansible-playbook ~/workspace/ansible_avi/01-playbook-configure-nve.yml

Configure [n] nve

This example configures the 2nd NVE ([1]) as a storage node:

export NVE_TIMEZONE="Europe/Berlin"
export NVE_FQDN=$(terraform output -json nve_private_ips | jq -r '.[1]')
export NVE_PRIVATE_IP=$(terraform output -json nve_private_ips | jq -r '.[1]')
export NVE_PASSWORD=Change_Me12345_
ansible-playbook ~/workspace/ansible_avi/01-playbook-configure-nve.yml --extra-vars="nve_as_storage_node=true"
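The index in `jq -r '.[1]'` selects the 2nd NVE because `terraform output -json nve_private_ips` returns a JSON array and jq array indices are zero-based. A sketch with made-up IPs in place of the real Terraform output:

```shell
# Made-up stand-in for `terraform output -json nve_private_ips`
ips='["10.0.1.10","10.0.1.11","10.0.1.12"]'
# .[1] is the 2nd element, since jq array indices start at 0
second=$(printf '%s' "$ips" | jq -r '.[1]')
echo "$second"
```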

getting ssh keys

This example gets the SSH key of the 2nd NVE ([1]):

terraform output -json nve_ssh_private_keys | jq -r '.[1]' > ~/.ssh/nve1
chmod 0600 ~/.ssh/nve1
ssh -i ~/.ssh/nve1 admin@${NVE_PRIVATE_IP}

module_ppdm

Configure PowerProtect DataManager

Similar to the DDVE configuration, we will set environment variables for Ansible to automatically configure PPDM.

# Refresh your environment variables if running multiple steps!
eval "$(terraform output --json | jq -r 'with_entries(select(.key|test("^PP+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
export PPDM_INITIAL_PASSWORD=Change_Me12345_
export PPDM_NTP_SERVERS='["169.254.169.123"]'
export PPDM_SETUP_PASSWORD=admin          # default password on the EC2 PPDM
export PPDM_TIMEZONE="Europe/Berlin"
export PPDM_POLICY=PPDM_GOLD
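Note that PPDM_NTP_SERVERS is a JSON array encoded as a shell string. A quick sanity check that the value parses as JSON (jq is assumed to be installed, as in the other examples):

```shell
export PPDM_NTP_SERVERS='["169.254.169.123"]'
# jq parses the string as JSON and extracts the first NTP server
first_ntp=$(printf '%s' "$PPDM_NTP_SERVERS" | jq -r '.[0]')
echo "$first_ntp"
```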

Set the initial Configuration:

ansible-playbook ~/workspace/ansible_ppdm/1.0-playbook_configure_ppdm.yml

Next, we add the DataDomain:

ansible-playbook ~/workspace/ansible_ppdm/2.0-playbook_set_ddve.yml 

We can get the SDR config from PPDM after the Data Domain Boost auto-configuration for the primary source:

ansible-playbook ~/workspace/ansible_ppdm/3.0-playbook_get_sdr.yml


module_eks

set eks_count to >= 1

terraform plan

when everything meets your requirements, run the deployment with

terraform apply --auto-approve

EKS configuration

get the context / login

aws eks update-kubeconfig --name $(terraform output --raw kubernetes_cluster_name)

add the cluster to powerprotect

ansible-playbook ~/workspace/ansible_ppdm/playbook_set_k8s_root_cert.yml --extra-vars "certificateChain=$(eksctl get cluster tfeks1 -o yaml | awk '/Cert/{getline; print $2}')"
ansible-playbook ~/workspace/ansible_ppdm/playbook_rbac_add_k8s_to_ppdm.yml

and we add a PPDM Policy / Rule

ansible-playbook ~/workspace/ansible_ppdm/playbook_add_k8s_policy_and_rule.yml

We need to create the snapshot CRDs and the snapshot controller:

kubectl apply -k "github.com/kubernetes-csi/external-snapshotter/client/config/crd/?ref=release-6.1"
kubectl apply -k "github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller/?ref=release-6.1"

and then add the CSI Driver:

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.18"

Let's create and view the StorageClasses:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/dynamic-provisioning/manifests/storageclass.yaml
kubectl get sc

We need to create a new default class

kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass ebs-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get sc

We need to create a VolumeSnapshotClass:

kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-snapclass
driver: ebs.csi.aws.com
deletionPolicy: Delete
EOF

run ppdm demo

PPDM_K8S_Demo

getting started with EKS

Note: EKS changed to version >= 1.24 and with that changed the API version for client authentication to client.authentication.k8s.io/v1beta1.

This requires AWS CLI >= 2.10; otherwise you might see a failure like:

kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
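To check whether your installed CLI is new enough, you could compare its version string against the 2.10 minimum. A hedged sketch; the `ver=` value below is a hypothetical example, in practice you would parse it out of `aws --version`:

```shell
ver="2.9.19"   # hypothetical example; in practice parse `aws --version`
min="2.10.0"
# sort -V orders version strings numerically; the older version sorts first
oldest=$(printf '%s\n%s\n' "$ver" "$min" | sort -V | head -n1)
if [ "$oldest" = "$ver" ] && [ "$ver" != "$min" ]; then
  echo "aws cli $ver is too old for client.authentication.k8s.io/v1beta1"
fi
```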