# Percona Cluster
- Introduction
- Installation
- Configuration
- Bootstrap
- Configure Second Node
- Configure Third Node
- Restart Bootstrap Host
- Testing
- Upgrading 5.7 to 8.0
## Introduction

This document describes how to set up a replicating Percona MySQL cluster across 3 hosts.

Note that there are some differences between versions 5.7 and 8.0; where appropriate, follow the instructions for your particular version.
## Installation

For this example we will create 3 hosts, all running the minimal-64-lts 19.4.0 image (`5417ab20-3156-11ea-8b19-2b66f5e7a439`).
| Hostname | IP Address   |
|----------|--------------|
| pxc0     | 192.168.1.10 |
| pxc1     | 192.168.1.11 |
| pxc2     | 192.168.1.12 |
After provisioning, install the percona-cluster package on each of them:
$ pkgin -y install percona-cluster
## Configuration

Before starting the service we need to configure `/opt/local/etc/my.cnf`.
For version 5.7 you'll need the following:
# Cluster settings
wsrep_provider=/opt/local/lib/mysql/plugin/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.10,192.168.1.11,192.168.1.12
innodb_autoinc_lock_mode=2
wsrep_cluster_name=test-cluster
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:passw0rd"
pxc_strict_mode=ENFORCING
# These two need to match the host they are running on.
wsrep_node_name=pxc0
wsrep_node_address=192.168.1.10
For version 8.0 the configuration file has been trimmed to only include relevant settings, and will need to look like this:
[mysqld]
innodb_autoinc_lock_mode=2
pxc_strict_mode=ENFORCING
# older releases are at /opt/local/lib/mysql/plugin/libgalera_smm.so
wsrep_provider=/opt/local/lib/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
#
# Modify these to match your particular cluster settings.
#
wsrep_cluster_address=gcomm://192.168.1.10,192.168.1.11,192.168.1.12
wsrep_cluster_name=test-cluster
#
# Modify these to match the host this node is running on.
#
wsrep_node_name=pxc0
wsrep_node_address=192.168.1.10
Note the final two variables are specific to the host they are running on, so ensure pxc1 uses `pxc1` and `192.168.1.11`, and so on. Change the `wsrep_cluster_name` to something suitable for your deployment.
For version 5.7 you may also want to pick a more secure `wsrep_sst_auth` password. Version 8.0 uses SSL certificate authentication, and no longer requires a hardcoded password.
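Since only the node name and address differ between hosts, those two lines can be stamped from a shared template rather than edited by hand on each node. A minimal sketch; the template file name and `@...@` placeholder style are illustrative, not part of the package:

```shell
# Hypothetical template holding the per-node placeholders.
cat > my.cnf.template <<'EOF'
wsrep_node_name=@NODE_NAME@
wsrep_node_address=@NODE_ADDR@
EOF

# Stamp in the values for pxc1; repeat with each host's name and IP.
sed -e 's/@NODE_NAME@/pxc1/' -e 's/@NODE_ADDR@/192.168.1.11/' \
    my.cnf.template > my.cnf.pxc1
cat my.cnf.pxc1
```

On a real host the stamped lines would be merged into `/opt/local/etc/my.cnf` alongside the shared cluster settings.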
## Bootstrap

One of the hosts needs to be configured for bootstrap initially. This sets the cluster address to `gcomm://` and allows other hosts to be initialized. We will choose pxc0 for this task.
$ /opt/local/lib/svc/method/mysqld bootstrap
This will run in the foreground and send some information to stderr. We can check that it is initially up and running in another login session:
$ mysql -uroot
mysql> show status like 'wsrep_local_state%';
+---------------------------+--------------------------------------+
| Variable_name | Value |
+---------------------------+--------------------------------------+
| wsrep_local_state_uuid | 743e3795-990d-11ea-ae43-fa5fff7c17b0 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
+---------------------------+--------------------------------------+
mysql> show status like 'wsrep_cluster%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_cluster_weight | 1 |
| wsrep_cluster_capabilities | |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 743e3795-990d-11ea-ae43-fa5fff7c17b0 |
| wsrep_cluster_status | Primary |
+----------------------------+--------------------------------------+
For version 5.7 only, prior to starting the service on the other hosts we need to configure the sstuser on the primary. Do not run this for version 8.0.
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT
ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;
## Configure Second Node

If using version 8.0, then prior to starting any subsequent nodes you will need to copy over the SSL certificates from the bootstrap host. For example, if SSH authentication is configured, you can run on pxc1:
$ rsync -aq pxc0:/var/mysql/\*.pem /var/mysql/
Ensure that everything under `/var/mysql` is owned by `mysql:mysql`:
$ chown -R mysql:mysql /var/mysql
For either version, we can now start the Percona service as normal on pxc1:
$ svcadm enable percona
Tailing the error log should show things happening, eventually indicating that the cluster is ready.
$ tail -f /var/log/mysql/error.log
...
2020-05-18T13:47:04.380745Z 1 [Note] [MY-000000] [Galera] State transfer required:
Group state: 743e3795-990d-11ea-ae43-fa5fff7c17b0:2
Local state: 00000000-0000-0000-0000-000000000000:-1
2020-05-18T13:47:04.381360Z 1 [Note] [MY-000000] [WSREP] Server status change connected -> joiner
...
2020-05-18T13:47:07.933716Z 0 [Note] [MY-000000] [WSREP-SST] Proceeding with SST.........
2020-05-18T13:47:07.966716Z 0 [Note] [MY-000000] [WSREP-SST] ............Waiting for SST streaming to complete!
2020-05-18T13:47:21.693863Z 0 [Note] [MY-000000] [WSREP-SST] Preparing the backup at /var/mysql//sst-xb-tmpdir
2020-05-18T13:47:21.753832Z 0 [Note] [MY-000000] [Galera] 1.0 (pxc0): State transfer to 0.0 (pxc1) complete.
...
2020-05-18T13:47:29.186679Z 0 [Note] [MY-000000] [WSREP] Server status change initializing -> initialized
2020-05-18T13:47:29.186836Z 3 [Note] [MY-000000] [WSREP] Server status change initialized -> joined
...
2020-05-18T13:47:29.198518Z 0 [Note] [MY-000000] [Galera] Shifting JOINER -> JOINED (TO: 2)
2020-05-18T13:47:29.198935Z 0 [Note] [MY-000000] [Galera] Member 0.0 (pxc1) synced with group.
2020-05-18T13:47:29.199003Z 0 [Note] [MY-000000] [Galera] Shifting JOINED -> SYNCED (TO: 2)
2020-05-18T13:47:29.199077Z 1 [Note] [MY-000000] [Galera] Server pxc1 synced with group
2020-05-18T13:47:29.199149Z 1 [Note] [MY-000000] [WSREP] Server status change joined -> synced
2020-05-18T13:47:29.199196Z 1 [Note] [MY-000000] [WSREP] Synchronized with group, ready for connections
We can check this matches on the bootstrap host:
mysql> show status like 'wsrep_cluster%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_cluster_weight | 2 |
| wsrep_cluster_capabilities | |
| wsrep_cluster_conf_id | 2 |
| wsrep_cluster_size | 2 |
| wsrep_cluster_state_uuid | 743e3795-990d-11ea-ae43-fa5fff7c17b0 |
| wsrep_cluster_status | Primary |
+----------------------------+--------------------------------------+
## Configure Third Node

Finally we can start up the third node on pxc2:
$ svcadm enable percona
and eventually this should show up as synced.
mysql> show status like 'wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name | Value |
+--------------------------+--------------------------------------+
| wsrep_cluster_weight | 3 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 743e3795-990d-11ea-ae43-fa5fff7c17b0 |
| wsrep_cluster_status | Primary |
+--------------------------+--------------------------------------+
## Restart Bootstrap Host

Now that the cluster is bootstrapped we can restart the mysqld on pxc0 into normal operation.
mysql> shutdown;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
Wait for mysqld to stop (`pgrep -fl mysqld`), and then enable the main service.
You may wish to again look at `show status` and the error logs on the other hosts to verify that the host is taken out of the cluster and then re-inserted.
$ svcadm enable percona
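The wait-then-enable step can be scripted rather than watched by hand. A small sketch; the 60-second cap is an arbitrary safety limit (not something the service defines), and `pgrep -x` matches the process name exactly:

```shell
# Poll until no mysqld process remains, giving up after 60 seconds.
# Once the loop exits it is safe to run: svcadm enable percona
i=0
while pgrep -x mysqld >/dev/null && [ "$i" -lt 60 ]; do
    sleep 1
    i=$((i + 1))
done
echo "mysqld stopped"
```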
## Testing

We can now test replication by performing database modifications across all three nodes and ensuring that the changes show up across them all.
Create a database on pxc0:
mysql@pxc0> CREATE DATABASE test;
Create a table on pxc1:
mysql@pxc1> USE test;
mysql@pxc1> CREATE TABLE example (node_id INTEGER PRIMARY KEY,
node_name VARCHAR(30));
Insert data on pxc2:
mysql@pxc2> INSERT INTO test.example VALUES (2, 'pxc2');
Verify it shows up on pxc0:
mysql@pxc0> SELECT * FROM test.example;
+---------+------------+
| node_id | node_name |
+---------+------------+
| 2 | pxc2 |
+---------+------------+
At this point we're happy that replication works OK when everything is fine, but it's also worth testing what happens when things go wrong. For this we will simulate a network failure on a host, for example on pxc2, prior to writing some data to the cluster on pxc1:
# On pxc2. Obviously only do this if you are connected on the console!
$ ifconfig net0 down
mysql@pxc1> INSERT INTO test.example VALUES (1, 'pxc1');
The view from pxc0 should be fine, but trying to access the cluster from pxc2 should result in failure:
mysql@pxc0> SELECT * FROM test.example;
+---------+------------+
| node_id | node_name |
+---------+------------+
| 1 | pxc1 |
| 2 | pxc2 |
+---------+------------+
mysql@pxc2> SELECT * FROM test.example;
ERROR 1047 (08S01): WSREP has not yet prepared node for application use
Bringing the interface back up should initiate a replication event, and after a little while pxc2 should be back up and in sync:
$ ifconfig net0 up
mysql@pxc2> SELECT * FROM test.example;
+---------+------------+
| node_id | node_name |
+---------+------------+
| 1 | pxc1 |
| 2 | pxc2 |
+---------+------------+
At this point we should be reasonably confident in the health of our cluster and can begin to write data into it.
## Upgrading 5.7 to 8.0

It is possible to upgrade a 5.7 cluster to version 8.0, but it needs to be done carefully; make yourself fully aware of the procedure and experiment with a test cluster before making changes to a production environment.
The following Percona resources will be helpful:
The method below worked for me (jperkin), but again I must stress that this should be undertaken carefully and with consideration to any local settings.
First, the cluster should be using SSL encryption. This is required with 8.0 and upgraded nodes will not be able to join the cluster without it.
Either:
- Create your own certificates using the "Encrypting PXC Traffic" guide above.
- Use the certificates automatically created on one of the nodes.
Ensure that all nodes are using the same certificates, then enable the `pxc-encrypt-cluster-traffic=ON` option and restart the cluster. Make sure the cluster is functioning correctly before proceeding.
The basic overview is that each node is upgraded individually in a rolling fashion: each upgrades its own copy of the database while still being able to communicate with the cluster, and once the final node is upgraded the replication protocol version is automatically bumped.
First, shut down the database:
$ svcadm disable percona
Wait for it to stop (`pgrep -fl mysql`), then remove the installed packages (but not the data directory!) and move aside any configuration.
$ pkgin remove percona-cluster percona-xtrabackup
$ /usr/sbin/svccfg delete svc:/pkgsrc/percona
$ mv /opt/local/etc/my.cnf /opt/local/etc/my.cnf.57
Install the new version.
$ pkgin install percona-cluster
This will install a new default configuration file. Edit it, carrying over only the required settings from your saved 5.7 configuration:
wsrep_cluster_address=...
wsrep_cluster_name=...
wsrep_node_name=...
wsrep_node_address=...
Don't copy anything else unless absolutely necessary. The default values should be used wherever possible, and some older configuration values no longer apply.
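One way to pull exactly those settings out of the saved file is a `grep` over the setting names. A sketch, using an inline sample to stand in for the real `/opt/local/etc/my.cnf.57`:

```shell
# Sample standing in for the saved /opt/local/etc/my.cnf.57.
cat > my.cnf.57 <<'EOF'
wsrep_sst_auth="sstuser:passw0rd"
wsrep_cluster_address=gcomm://192.168.1.10,192.168.1.11,192.168.1.12
wsrep_cluster_name=test-cluster
wsrep_node_name=pxc0
wsrep_node_address=192.168.1.10
EOF

# Extract only the four settings worth carrying over to the 8.0 file;
# note that wsrep_sst_auth is deliberately left behind.
grep -E '^wsrep_(cluster_address|cluster_name|node_name|node_address)=' \
    my.cnf.57 > carried-over.cnf
cat carried-over.cnf
```

Review the output before pasting it into the new configuration rather than appending it blindly.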
Start up the server:
$ svcadm enable percona
Viewing the error log should show that the upgrade completed successfully:
2020-05-19T09:42:50.475319Z 8 [System] [MY-013381] [Server] Server upgrade from '50700' to '80018' started.
2020-05-19T09:42:55.058665Z 8 [System] [MY-013381] [Server] Server upgrade from '50700' to '80018' completed.
and the node should be connected again to the cluster.
Verify that everything is working as expected, then repeat the procedure across the rest of the nodes.
When upgrading the final node, you should notice that the protocol version will automatically update.
Before final upgrade:
mysql> show status like 'wsrep_protocol_version';
+------------------------+-------+
| Variable_name | Value |
+------------------------+-------+
| wsrep_protocol_version | 9 |
+------------------------+-------+
1 row in set (0.00 sec)
After final upgrade:
mysql> show status like 'wsrep_protocol_version';
+------------------------+-------+
| Variable_name | Value |
+------------------------+-------+
| wsrep_protocol_version | 10 |
+------------------------+-------+
1 row in set (0.00 sec)
At this point you should be done. If at any point you end up with a cluster that fails to achieve quorum and will no longer sync, you should be able to recover by starting a known up-to-date node in bootstrap mode and recovering from there. Again, read the documentation carefully to ensure no data is lost.
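To identify the known up-to-date node, compare the `seqno` recorded in each node's `grastate.dat` (kept in the data directory, `/var/mysql` on these images). A sketch using sample file contents in place of the real file:

```shell
# Sample grastate.dat; on a real node read /var/mysql/grastate.dat.
cat > grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    743e3795-990d-11ea-ae43-fa5fff7c17b0
seqno:   42
safe_to_bootstrap: 0
EOF

# The node with the highest seqno holds the most advanced state and is
# the one to start in bootstrap mode:
#   /opt/local/lib/svc/method/mysqld bootstrap
awk '/^seqno:/ {print $2}' grastate.dat
```

A `seqno` of `-1` means the node shut down uncleanly and its position is unknown, so prefer a node with a real (highest) sequence number.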
pkgsrc is a registered trademark of The NetBSD Foundation, Inc.