Installation and configuration of Percona XtraDB Cluster on CentOS 7.3

This blog will show how to install Percona XtraDB Cluster on three CentOS 7.3 servers, using packages from the Percona repositories. This is a step-by-step installation and configuration guide. We recommend Percona XtraDB Cluster for maximum availability and reliability, and for optimal read/write scale-out. We are a private-label, independent and vendor-neutral provider of consulting, support, managed services and education solutions for MySQL, MariaDB, Percona Server and ClickHouse, with core expertise in performance, scalability, high availability and database reliability engineering. All our blog posts are purely focused on education and research across open source database infrastructure operations. To engage us for building and managing web-scale database infrastructure operations, please contact us at contact@minervadb.com

This cluster will be assembled from three servers/nodes:

node #1

hostname: PXC1

IP: 138.197.70.35

node #2

hostname: PXC2

IP: 159.203.118.230

node #3

hostname: PXC3

IP: 138.197.8.226

Prerequisites

  • All three nodes have a CentOS 7.3 installation.
  • Firewall has been set up to allow connecting to ports 3306, 4444, 4567 and 4568
  • SELinux is disabled
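The firewall and SELinux prerequisites can be applied with a short script. This is a minimal sketch assuming firewalld is the active firewall and you have sudo rights; adjust zones and interfaces for your environment.

```shell
# Open the ports Percona XtraDB Cluster needs:
# 3306 (MySQL), 4444 (SST), 4567 (group communication), 4568 (IST)
PXC_PORTS="3306 4444 4567 4568"

for port in $PXC_PORTS; do
    sudo firewall-cmd --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload

# Put SELinux into permissive mode now, and persist the change across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```

Run this on all three nodes before installing the cluster packages.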

Installing from Percona Repository on 138.197.70.35

  • Install the Percona repository package:
$ sudo yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
  • You should see the following if successful:
Installed:

 percona-release.noarch 0:0.1-4

Complete!
  • Check that the packages are available:
$ sudo yum list | grep Percona-XtraDB-Cluster-57

Percona-XtraDB-Cluster-57.x86_64          5.7.14-26.17.1.el7         percona-release-x86_64

Percona-XtraDB-Cluster-57-debuginfo.x86_64 5.7.14-26.17.1.el7         percona-release-x86_64
  • Install the Percona XtraDB Cluster packages:
$ sudo yum install Percona-XtraDB-Cluster-57
  • Start the Percona XtraDB Cluster server:
$ sudo service mysql start
  • Copy the automatically generated temporary password for the superuser account:
$ sudo grep 'temporary password' /var/log/mysqld.log
  • Use this password to login as root:
$ mysql -u root -p
  • Change the password for the superuser account and log out. For example:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';

Query OK, 0 rows affected (0.00 sec)

mysql> exit

Bye
  • Stop the mysql service:
$ sudo service mysql stop

Repeat the same Percona XtraDB Cluster installation process on 159.203.118.230 and 138.197.8.226.
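If you prefer not to repeat the steps by hand, the repository and package installation can be driven over ssh. This is a hypothetical sketch assuming passwordless ssh as a sudo-capable user on each remaining host.

```shell
# Repeat the repository setup and package install on the remaining nodes.
# Assumes key-based ssh access as a user with sudo rights.
for node in 159.203.118.230 138.197.8.226; do
    ssh "$node" 'sudo yum install -y \
        http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm \
        && sudo yum install -y Percona-XtraDB-Cluster-57'
done
```

The per-node password steps (grabbing the temporary password, ALTER USER, stopping mysql) still need to be done interactively on each host.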

Configuring nodes

We now configure each of the nodes 138.197.70.35, 159.203.118.230 and 138.197.8.226 individually to bring up a fully operational Percona XtraDB Cluster.

Configuring the node 138.197.70.35

The configuration file /etc/my.cnf on the first node (PXC1) should look like this:

[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node #1 address
wsrep_node_address=138.197.70.35

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=pxc_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:sstuser"

On SysV init systems, the first node could be bootstrapped with the following command:

# /etc/init.d/mysql bootstrap-pxc

Since CentOS 7.3 uses systemd, the systemd bootstrap service should be used instead:

# systemctl start mysql@bootstrap.service

This command starts the cluster with the initial wsrep_cluster_address set to gcomm://. This way the cluster is bootstrapped, and if the node or MySQL has to be restarted later, there is no need to change the configuration file.

After the first node has been started, cluster status can be checked with:

mysql> show status like 'wsrep%';

+------------------------------+------------------------------------------------------------+

| Variable_name               | Value                                                     |

+------------------------------+------------------------------------------------------------+

| wsrep_local_state_uuid      | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_protocol_version      | 7                                                         |

| wsrep_last_committed        | 8                                                         |

| wsrep_replicated            | 4                                                         |

| wsrep_replicated_bytes      | 906                                                       |

| wsrep_repl_keys             | 4                                                         |

| wsrep_repl_keys_bytes       | 124                                                       |

| wsrep_repl_data_bytes       | 526                                                       |

| wsrep_repl_other_bytes      | 0                                                         |

| wsrep_received              | 9                                                         |

| wsrep_received_bytes        | 1181                                                      |

| wsrep_local_commits         | 0                                                         |

| wsrep_local_cert_failures   | 0                                                         |

| wsrep_local_replays         | 0                                                         |

| wsrep_local_send_queue      | 0                                                         |

| wsrep_local_send_queue_max  | 1                                                         |

| wsrep_local_send_queue_min  | 0                                                         |

| wsrep_local_send_queue_avg  | 0.000000                                                  |

| wsrep_local_recv_queue      | 0                                                         |

| wsrep_local_recv_queue_max  | 2                                                         |

| wsrep_local_recv_queue_min  | 0                                                         |

| wsrep_local_recv_queue_avg  | 0.111111                                                  |

| wsrep_local_cached_downto   | 3                                                         |

| wsrep_flow_control_paused_ns | 0                                                         |

| wsrep_flow_control_paused   | 0.000000                                                  |

| wsrep_flow_control_sent     | 0                                                         |

| wsrep_flow_control_recv     | 0                                                         |

| wsrep_flow_control_interval | [ 28, 28 ]                                                |

| wsrep_cert_deps_distance    | 1.000000                                                  |

| wsrep_apply_oooe            | 0.000000                                                  |

| wsrep_apply_oool            | 0.000000                                                  |

| wsrep_apply_window          | 1.000000                                                  |

| wsrep_commit_oooe           | 0.000000                                                  |

| wsrep_commit_oool           | 0.000000                                                  |

| wsrep_commit_window         | 1.000000                                                  |

| wsrep_local_state           | 4                                                         |

| wsrep_local_state_comment   | Synced                                                    |

| wsrep_cert_index_size       | 2                                                         |

| wsrep_cert_bucket_count     | 22                                                        |

| wsrep_gcache_pool_size      | 3128                                                      |

| wsrep_causal_reads          | 0                                                         |

| wsrep_cert_interval         | 0.000000                                                  |

| wsrep_incoming_addresses    | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |

| wsrep_desync_count          | 0                                                         |

| wsrep_evs_delayed           |                                                           |

| wsrep_evs_evict_list        |                                                           |

| wsrep_evs_repl_latency      | 0/0/0/0/0                                                 |

| wsrep_evs_state             | OPERATIONAL                                               |

| wsrep_gcomm_uuid            | b79d90df-1077-11e7-9922-3a1b217f7371                      |

| wsrep_cluster_conf_id       | 3                                                         |

| wsrep_cluster_size          | 3                                                         |

| wsrep_cluster_state_uuid    | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_cluster_status        | Primary                                                   |

| wsrep_connected             | ON                                                        |

| wsrep_local_bf_aborts       | 0                                                         |

| wsrep_local_index           | 2                                                         |

| wsrep_provider_name         | Galera                                                    |

| wsrep_provider_vendor       | Codership Oy <info@codership.com>                         |

| wsrep_provider_version      | 3.20(r7e383f7)                                            |

| wsrep_ready                 | ON                                                        |

+------------------------------+------------------------------------------------------------+

60 rows in set (0.01 sec)

The output above shows that the cluster has been successfully bootstrapped.
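Rather than scanning all 60 rows each time, the key health indicators can be filtered from the shell. A minimal sketch, assuming the root password set earlier in this guide:

```shell
# Quick health check: once all nodes have joined, this should report
# cluster size 3, local state Synced, and cluster status Primary.
QUERY="SHOW STATUS WHERE Variable_name IN
  ('wsrep_cluster_size', 'wsrep_local_state_comment', 'wsrep_cluster_status')"
mysql -u root -proot -e "$QUERY"
```

The same one-liner can be run on any node to confirm it sees the full cluster.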

In order to perform a successful State Snapshot Transfer (SST) using XtraBackup, a new user with the proper privileges needs to be set up:

mysql@PXC1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'sstuser';

mysql@PXC1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';

mysql@PXC1> FLUSH PRIVILEGES;
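It is worth confirming that these credentials match what wsrep_sst_auth in /etc/my.cnf actually contains, since a mismatch makes SSTs to joining nodes fail with authentication errors. A sketch that parses the credentials straight out of the config file:

```shell
# Extract the user:password pair from the quoted wsrep_sst_auth value.
AUTH=$(awk -F'"' '/^wsrep_sst_auth/ {print $2}' /etc/my.cnf)
SST_USER=${AUTH%%:*}
SST_PASS=${AUTH#*:}

# A successful login confirms the SST user and password match the config.
mysql -u "$SST_USER" -p"$SST_PASS" -e "SELECT CURRENT_USER()"
```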

The configuration file /etc/my.cnf on the second node (PXC2) should look like this:

[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node #2 address
wsrep_node_address=159.203.118.230

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=pxc_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:sstuser"

The second node can be started with the following command:

# systemctl start mysql

Cluster status can now be checked on both nodes. This is the example from the second node (PXC2):

mysql> show status like 'wsrep%';

+------------------------------+------------------------------------------------------------+

| Variable_name               | Value                                                     |

+------------------------------+------------------------------------------------------------+

| wsrep_local_state_uuid      | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_protocol_version      | 7                                                         |

| wsrep_last_committed        | 8                                                         |

| wsrep_replicated            | 0                                                         |

| wsrep_replicated_bytes      | 0                                                         |

| wsrep_repl_keys             | 0                                                         |

| wsrep_repl_keys_bytes       | 0                                                         |

| wsrep_repl_data_bytes       | 0                                                         |

| wsrep_repl_other_bytes      | 0                                                         |

| wsrep_received              | 10                                                        |

| wsrep_received_bytes        | 1238                                                      |

| wsrep_local_commits         | 0                                                         |

| wsrep_local_cert_failures   | 0                                                         |

| wsrep_local_replays         | 0                                                         |

| wsrep_local_send_queue      | 0                                                         |

| wsrep_local_send_queue_max  | 1                                                         |

| wsrep_local_send_queue_min  | 0                                                         |

| wsrep_local_send_queue_avg  | 0.000000                                                  |

| wsrep_local_recv_queue      | 0                                                         |

| wsrep_local_recv_queue_max  | 1                                                         |

| wsrep_local_recv_queue_min  | 0                                                         |

| wsrep_local_recv_queue_avg  | 0.000000                                                  |

| wsrep_local_cached_downto   | 6                                                         |

| wsrep_flow_control_paused_ns | 0                                                         |

| wsrep_flow_control_paused   | 0.000000                                                  |

| wsrep_flow_control_sent     | 0                                                         |

| wsrep_flow_control_recv     | 0                                                         |

| wsrep_flow_control_interval | [ 28, 28 ]                                                |

| wsrep_cert_deps_distance    | 1.000000                                                  |

| wsrep_apply_oooe            | 0.000000                                                  |

| wsrep_apply_oool            | 0.000000                                                  |

| wsrep_apply_window          | 1.000000                                                  |

| wsrep_commit_oooe           | 0.000000                                                  |

| wsrep_commit_oool           | 0.000000                                                  |

| wsrep_commit_window         | 1.000000                                                  |

| wsrep_local_state           | 4                                                         |

| wsrep_local_state_comment   | Synced                                                    |

| wsrep_cert_index_size       | 2                                                         |

| wsrep_cert_bucket_count     | 22                                                        |

| wsrep_gcache_pool_size      | 2300                                                      |

| wsrep_causal_reads          | 0                                                         |

| wsrep_cert_interval         | 0.000000                                                  |

| wsrep_incoming_addresses    | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |

| wsrep_desync_count          | 0                                                         |

| wsrep_evs_delayed           |                                                           |

| wsrep_evs_evict_list        |                                                           |

| wsrep_evs_repl_latency      | 0/0/0/0/0                                                 |

| wsrep_evs_state             | OPERATIONAL                                               |

| wsrep_gcomm_uuid            | 248e2782-1078-11e7-a269-4a3ec033a606                      |

| wsrep_cluster_conf_id       | 3                                                         |

| wsrep_cluster_size          | 3                                                         |

| wsrep_cluster_state_uuid    | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_cluster_status        | Primary                                                   |

| wsrep_connected             | ON                                                        |

| wsrep_local_bf_aborts       | 0                                                         |

| wsrep_local_index           | 0                                                         |

| wsrep_provider_name         | Galera                                                    |

| wsrep_provider_vendor       | Codership Oy <info@codership.com>                         |

| wsrep_provider_version      | 3.20(r7e383f7)                                            |

| wsrep_ready                 | ON                                                        |

+------------------------------+------------------------------------------------------------+

60 rows in set (0.00 sec)

This output shows that the new node has been successfully added to the cluster.

The MySQL configuration file /etc/my.cnf on the third node (PXC3) should look like this:

[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node #3 address
wsrep_node_address=138.197.8.226

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=pxc_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:sstuser"

The third node can now be started with the following command:

# systemctl start mysql

Percona XtraDB Cluster status can now be checked from the third node (PXC3):

mysql> show status like 'wsrep%';

+------------------------------+------------------------------------------------------------+

| Variable_name               | Value                                                     |

+------------------------------+------------------------------------------------------------+

| wsrep_local_state_uuid      | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_protocol_version      | 7                                                         |

| wsrep_last_committed        | 8                                                         |

| wsrep_replicated            | 2                                                         |

| wsrep_replicated_bytes      | 396                                                       |

| wsrep_repl_keys             | 2                                                         |

| wsrep_repl_keys_bytes       | 62                                                        |

| wsrep_repl_data_bytes       | 206                                                       |

| wsrep_repl_other_bytes      | 0                                                         |

| wsrep_received              | 4                                                         |

| wsrep_received_bytes        | 529                                                       |

| wsrep_local_commits         | 0                                                         |

| wsrep_local_cert_failures   | 0                                                         |

| wsrep_local_replays         | 0                                                         |

| wsrep_local_send_queue      | 0                                                         |

| wsrep_local_send_queue_max  | 1                                                         |

| wsrep_local_send_queue_min  | 0                                                         |

| wsrep_local_send_queue_avg  | 0.000000                                                  |

| wsrep_local_recv_queue      | 0                                                         |

| wsrep_local_recv_queue_max  | 1                                                         |

| wsrep_local_recv_queue_min  | 0                                                         |

| wsrep_local_recv_queue_avg  | 0.000000                                                  |

| wsrep_local_cached_downto   | 6                                                         |

| wsrep_flow_control_paused_ns | 0                                                         |

| wsrep_flow_control_paused   | 0.000000                                                  |

| wsrep_flow_control_sent     | 0                                                         |

| wsrep_flow_control_recv     | 0                                                         |

| wsrep_flow_control_interval | [ 28, 28 ]                                                |

| wsrep_cert_deps_distance    | 1.000000                                                  |

| wsrep_apply_oooe            | 0.000000                                                  |

| wsrep_apply_oool            | 0.000000                                                  |

| wsrep_apply_window          | 1.000000                                                  |

| wsrep_commit_oooe           | 0.000000                                                  |

| wsrep_commit_oool           | 0.000000                                                  |

| wsrep_commit_window         | 1.000000                                                  |

| wsrep_local_state           | 4                                                         |

| wsrep_local_state_comment   | Synced                                                    |

| wsrep_cert_index_size       | 2                                                         |

| wsrep_cert_bucket_count     | 22                                                        |

| wsrep_gcache_pool_size      | 2166                                                      |

| wsrep_causal_reads          | 0                                                         |

| wsrep_cert_interval         | 0.000000                                                  |

| wsrep_incoming_addresses    | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |

| wsrep_desync_count          | 0                                                         |

| wsrep_evs_delayed           |                                                           |

| wsrep_evs_evict_list        |                                                           |

| wsrep_evs_repl_latency      | 0/0/0/0/0                                                 |

| wsrep_evs_state             | OPERATIONAL                                               |

| wsrep_gcomm_uuid            | 3f51b20e-1078-11e7-8405-8e9b37a37cb1                      |

| wsrep_cluster_conf_id       | 3                                                         |

| wsrep_cluster_size          | 3                                                         |

| wsrep_cluster_state_uuid    | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_cluster_status        | Primary                                                   |

| wsrep_connected             | ON                                                        |

| wsrep_local_bf_aborts       | 0                                                         |

| wsrep_local_index           | 1                                                         |

| wsrep_provider_name         | Galera                                                    |

| wsrep_provider_vendor       | Codership Oy <info@codership.com>                         |

| wsrep_provider_version      | 3.20(r7e383f7)                                            |

| wsrep_ready                 | ON                                                        |

+------------------------------+------------------------------------------------------------+

60 rows in set (0.03 sec)

This output confirms that the third node has joined the cluster.

Testing Replication

Creating the new database on the PXC1 node:

mysql> create database minervadb;

Query OK, 1 row affected (0.01 sec)

Creating the example table on the PXC2 node:

mysql> use minervadb;

Database changed

mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));

Query OK, 0 rows affected (0.01 sec)

Inserting records on the PXC3 node:

mysql> INSERT INTO minervadb.example VALUES (1, 'MinervaDB');

Query OK, 1 row affected (0.07 sec)

Retrieving all the rows from that table on the PXC1 node:

mysql> select * from minervadb.example;

+---------+-----------+

| node_id | node_name |

+---------+-----------+

|      1 | MinervaDB |

+---------+-----------+

1 row in set (0.00 sec)
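The manual round-trip above can also be scripted so it is easy to re-run after maintenance. This is a hypothetical sketch: it assumes the root password 'root' works on every node and that remote root login is permitted from the host running the script.

```shell
# Write on one node, then read the same row back from every node.
NODES="138.197.70.35 159.203.118.230 138.197.8.226"

mysql -h 138.197.70.35 -u root -proot \
  -e "INSERT INTO minervadb.example VALUES (2, 'replication-check')"

for node in $NODES; do
  echo "--- $node ---"
  mysql -h "$node" -u root -proot \
    -e "SELECT node_id, node_name FROM minervadb.example WHERE node_id = 2"
done
```

Each node should print the freshly inserted row, confirming synchronous replication across the cluster.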


About Shiv Iyer
WebScale Database Infrastructure Operations Expert in MySQL, MariaDB, PostgreSQL and ClickHouse, with core interests in performance, scalability, high availability and database reliability engineering. Shiv is currently the Founder and Principal of MinervaDB, an independent and vendor-neutral consulting, 24*7 support and remote DBA services provider for MySQL, MariaDB, PostgreSQL and ClickHouse, serving approximately 300 customers globally.