Troubleshooting Galera Cluster – Quick Guide
https://minervadb.com/index.php/2020/01/15/troubleshooting-galera-cluster-quick-guide/
Wed, 15 Jan 2020

How to troubleshoot Galera Cluster efficiently?

Basics – To troubleshoot Galera Cluster successfully, you should first know how it works.

Galera Cluster is built on top of a proprietary group communication system layer, which implements a virtual synchrony QoS (Quality of Service). Virtual synchrony unifies the data delivery and cluster membership services, providing a clear formalism for message delivery semantics. While virtual synchrony guarantees consistency, it does not guarantee temporal synchrony, which is necessary for smooth multi-master operations. To address this, Galera Cluster implements its own runtime-configurable temporal flow control. Flow control keeps nodes synchronized to a fraction of a second. A minimal Galera Cluster consists of three nodes, and it is recommended to run with an odd number of nodes. The reason is that, should there be a problem applying a transaction on one node (e.g., a network problem or the machine becoming unresponsive), the two other nodes still have a quorum (i.e., a majority) and can proceed with the transaction commit.
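As a quick sanity check of the quorum behaviour described above, you can inspect the relevant status variables from any SQL client on a node (a minimal sketch using standard Galera status variables):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';    -- a healthy 3-node cluster reports 3
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';  -- should report 'Primary' on every healthy node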

Galera Cluster and Certification-Based Replication

When a transaction issues COMMIT, but before the actual commit happens, all the changes made on the node (INSERTs / UPDATEs / DELETEs), along with the primary keys of the modified rows, are collected into a write-set. The originating node then broadcasts this write-set to all nodes in the cluster. Each node, including the originating one, performs a deterministic certification test on the write-set, using the primary keys in the write-set and the actual primary key values on the node; this test determines the key-constraint integrity of the write-set. If the test fails, the originating node drops the write-set and the cluster rolls back the original transaction. If the certification test succeeds, the transaction commits and the write-set is applied on all nodes in the cluster, completing the replication.

In Galera Cluster, all nodes must reach a consensus about the outcome of the transaction before replication happens; only then does the originating node notify the client of a successful commit. Each transaction is assigned a global ordinal sequence number. During the deterministic certification test on the write-set, the cluster checks the current transaction against the last successful transaction. If any conflicting transaction has occurred between these two globally ordered transactions, primary key conflicts arise and the test fails.
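To get a feel for how often certification fails on a given node, you can watch the built-in conflict counters (a minimal sketch using standard Galera status variables):

SHOW GLOBAL STATUS LIKE 'wsrep_local_cert_failures';  -- write-sets that failed the certification test on this node
SHOW GLOBAL STATUS LIKE 'wsrep_local_bf_aborts';      -- local transactions aborted by replicated (brute-force) write-sets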

Three layers of Galera Cluster Replication:

1. Certification layer, which prepares the write-sets and performs the certification tests.
2. Replication layer, which manages the replication process and global ordering.
3. Group communication framework, which provides a plugin architecture for the group communication systems used by the cluster.

Collecting Galera Cluster Operations Forensics Data to troubleshoot efficiently

Every great technology solution needs forensics / diagnostics tools for confident troubleshooting. It is technically impossible for an operations team to recommend or implement a fix without sufficient evidence / facts. Galera Cluster errors are also logged in the MySQL, MariaDB or Percona XtraDB Cluster error log (by default the hostname.err file in the data directory), depending on which you use. The following Galera Cluster system variables can be configured to enable error logging specific to the replication process (a minimal example of enabling them follows the list):

  • wsrep_log_conflicts: Enables conflict logging in the error log, for example transaction conflicts between two nodes on the same row of the same table at the same time
  • cert.log_conflicts: Logs certification failures during replication
  • wsrep_debug: Enables debugging information in the database server logs (this parameter also logs authentication details / passwords, so we do not recommend enabling it in production)
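A minimal sketch of enabling the first two settings at runtime is shown below; on some server versions wsrep_log_conflicts is not dynamic and must instead go into my.cnf, so treat the SET GLOBAL form as an assumption to verify on your build:

SET GLOBAL wsrep_log_conflicts = ON;                           -- log row-level conflicts to the error log
SET GLOBAL wsrep_provider_options = 'cert.log_conflicts=YES';  -- cert.log_conflicts is a provider option, passed via wsrep_provider_options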

Galera Cluster status variables and their interpretation – this is often our first step when auditing your Galera Cluster infrastructure

  1. wsrep_ready – Node status: is the node ready to accept queries? The expected (ideal) value is ON, which means the node can accept write-sets from the cluster; if it is OFF, queries fail with ERROR 1047 (08S01) Unknown command
  2. wsrep_connected – Confirms the node's network connectivity with other nodes in the cluster. If the output is ON, the node has a network connection to one or more other nodes forming a cluster component. If OFF, the node does not have a connection to any cluster component (see the quick check after this list).
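The corresponding checks are one-liners (a minimal sketch):

SHOW GLOBAL STATUS LIKE 'wsrep_ready';      -- expected: ON
SHOW GLOBAL STATUS LIKE 'wsrep_connected';  -- expected: ON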

Using status variables for Galera Cluster Replication Health-Check

wsrep_local_recv_queue_avg: The average size of the local receive queue since the last status query. If the value is higher than 0.0, that node cannot apply write-sets as fast as it receives them (indicating replication throttling or network throughput issues), which can lead to replication latencies. To annotate the range, you can also check the status variables wsrep_local_recv_queue_min and wsrep_local_recv_queue_max.

wsrep_flow_control_paused: When you are processing a very large volume of data, you may see a node with unusually high values of wsrep_flow_control_paused. For example, a value of 0.2 indicates that flow control was in effect for 20% of the time since the last FLUSH STATUS. On its own this is not very useful, because you may not know when that was, so we recommend issuing FLUSH STATUS occasionally and closely monitoring that node until you narrow down the epicenter. If it does not settle down, consider increasing the number of applier threads (wsrep_slave_threads).
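A hedged example of that workflow: reset the counters, let the workload run, re-check the flow-control and queue statistics, and only then raise the applier thread count (the value 8 below is purely illustrative):

FLUSH STATUS;                                          -- reset the 'since last status' counters
-- ... let the workload run for a while ...
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';   -- fraction of time flow control was active
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue_avg';  -- average receive queue depth
SET GLOBAL wsrep_slave_threads = 8;                    -- dynamic; increase applier threads if the node keeps lagging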

Restarting Galera Cluster Safely

When restarting a Galera Cluster, always identify the most advanced node first. Restarting a cluster with the nodes in the wrong order, or starting the wrong node first, can be devastating and lead to loss of data. Attempting to bootstrap using any other node will cause the following error message:

#source - Galera Cluster
2019-12-11 03:27:31 5572 [ERROR] WSREP: It may not be safe to bootstrap the cluster from this node.
It was not the last one to leave the cluster and may not contain all the updates.
To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1 .

Identifying the Most Advanced Node

Identifying the most advanced node's state ID is done by comparing the Global Transaction ID values on each node in your cluster. You can find this in the grastate.dat file, located in the data directory of your database.

A grastate.dat file looks like this:

#source - Galera Cluster
# GALERA saved state
version: 2.1
uuid:    6rc24719-ct3q-11e2-f5w1-61sg152r2h81
seqno:   8157271294261
cert_index:

To find the sequence number of the last committed transaction, run mysqld with the --wsrep-recover option. This recovers the InnoDB tablespace to a consistent state, prints the corresponding Global Transaction ID value into the error log, and then exits. Here is an example:

#source - Galera Cluster
130514 18:39:13 [Note] WSREP: Recovered position: 6rc24719-ct3q-11e2-f5w1-61sg152r2h81:8157271294261
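If the nodes are still running (for example during a planned rolling restart rather than after a crash), you can compare the last committed sequence number over SQL instead of reading grastate.dat (a minimal sketch):

SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';  -- run on every node; the highest value marks the most advanced node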

How to identify crashed Galera nodes?

You can identify a crashed node by reading the contents of its grastate.dat file. If it looks like the example below, the node either crashed during the execution of a non-transactional operation (e.g., ALTER TABLE), or aborted due to a database inconsistency.

# GALERA saved state
version: 2.1
uuid:   6rc24719-ct3q-11e2-f5w1-61sg152r2h81
seqno:   -1
cert_index:

'Safe to Bootstrap' Protection (Galera Cluster 3.19 onwards)

Starting with provider version 3.19, Galera has additional protection against attempting to bootstrap the cluster using a node that may not have been the last node remaining in the cluster prior to cluster shutdown. If Galera can conclusively determine which node was the last node standing, it will be marked as 'safe to bootstrap', as seen in this example grastate.dat:

#source - Galera Cluster 
# GALERA saved state
version: 2.1
uuid:    5981f182-a4cc-11e6-98cc-77fabedd360d
seqno:   1234
safe_to_bootstrap: 1

In the case when all nodes crashed simultaneously, no node will be considered safe to bootstrap until the grastate.dat file is edited manually. To override this protection, edit the safe_to_bootstrap line in the grastate.dat file of the node you intend to use as the first node.

Galera Cluster Recovery with Gcache

Starting with provider version 3.19, Galera provides the gcache.recover parameter. If it is set to yes, Galera will attempt to recover the gcache on node startup, so that the node is in a position to provide IST to other joining nodes, which can speed up the overall restart time for the entire cluster. Gcache recovery requires that the entire gcache file be read twice, so expect added latency on larger databases with slower disks. It is a "best effort" operation: if the recovery is not successful, the node will continue to operate normally, but other nodes will fall back to SST when attempting to join.
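You can verify from SQL which provider options (including gcache.recover and gcache.size) are in effect. The parameters themselves are set in my.cnf through wsrep_provider_options, e.g. wsrep_provider_options="gcache.recover=yes;gcache.size=2G" (the 2G size is only an illustrative assumption):

SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';  -- shows the effective provider options, including gcache.*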

Conclusion

If you do not carefully restart Galera Cluster nodes after a crash, you may end up with a cluster of corrupted data that is no longer functional. If there are no other nodes in the cluster with a well-defined state, there is no need to preserve the node state ID. In that case you must perform a thorough database recovery procedure, similar to that used on stand-alone database servers, and once you have recovered one node, use it as the first node in a new cluster.

Resetting the Galera Cluster Quorum
https://minervadb.com/index.php/2019/09/03/resetting-the-galera-cluster-quorum/
Tue, 03 Sep 2019

Checklist for Galera Cluster Quorum Resetting

Introduction – What you should know about Galera Cluster to troubleshoot it efficiently

We have several customers on Galera Cluster, and it works great if you are building a synchronous MySQL / MariaDB replication solution for high availability and scale-out. Unlike a MySQL / MariaDB master-slave replication topology, the Galera nodes (slaves) are master-ready at all times; Galera replication can guarantee zero slave lag for such installations and, due to parallel slave applying, much better throughput for the cluster. Galera Cluster is a write-set replication service provider in the form of a dlopen-able library. The heart of Galera Cluster replication is the wsrep API, which consists of two elements:

  • wsrep hooks: Integrate the database system with write-set replication
  • dlopen(): The function that makes the wsrep provider available to the wsrep hooks

The primary focus of Galera Cluster is data consistency. Transactions are either applied to every node or not at all (an all-or-nothing transaction model). In Galera Cluster, the transaction commits (row-based replication events) are applied on all servers via certification-based replication. Certification-based replication is an alternative approach to synchronous database replication that uses group communication and transaction ordering techniques.

How is Galera Cluster Replication different from MySQL Replication?

Replication mechanism:

  • MySQL Replication (asynchronous master-slave): Changes on the master are written to the binary log. Replication happens in three threads: the dump thread on the master continuously reads the binary log and sends it to the slave; the IO thread on the slave receives the binlog sent by the master's dump thread and writes it to a file called the relay log; the SQL thread on the slave continuously reads the relay log and applies the changes to the slave server.
  • Galera Cluster Replication (synchronous): Certification-based replication – transactions are ordered using group communication and executed on a single node (we at MinervaDB do not recommend writing to multiple Galera Cluster nodes in parallel), and at COMMIT a coordinated certification-based process ensures transaction consistency across the cluster.

Write handling:

  • MySQL Replication: UPDATEs should always be done on one master and are then propagated to the slaves. Although it is possible to create a ring topology with multiple masters, we at MinervaDB do not recommend it, as it is very easy for the servers to get out of sync if a master fails.
  • Galera Cluster Replication: Every node in a Galera Cluster is always WRITE ready. Whenever a transaction commits, the row-based replication events are applied on all servers via certification-based replication. A transaction executes until it reaches the commit point, assuming there is no conflict. When a client issues COMMIT, before the actual commit happens, the primary keys of the changed rows are collected into a write-set and sent to all other nodes. A deterministic certification test based on those primary keys runs on every node in the cluster, including the node where the write-set originated, and determines whether the node can apply the write-set. If the test succeeds, the write-set is applied on the rest of the cluster and the transaction is committed. If the certification test fails, the node drops the write-set and the cluster rolls back the original transaction.

Failover:

  • MySQL Replication: Failover is a manual process in the ideal / normal scenario. If the promotion of a replica to master is not carefully planned, there is a high chance of corrupting the entire MySQL replication infrastructure.
  • Galera Cluster Replication: You can build a self-healing MySQL / MariaDB replication solution directly with Galera Cluster. If a node fails, the other nodes continue to operate without any impact on database reliability. When a failed node comes back, it automatically synchronizes with the other nodes through State Snapshot Transfer (SST) or Incremental State Transfer (IST), depending on its last known state, before it is allowed back into the cluster. No data is lost when a node fails.

When do you reset the Galera Cluster quorum?

During a network outage, failure, or split-brain situation, nodes may come to suspect that there is another Primary Component to which they are no longer connected. When this happens, all nodes will return an Unknown command error to all queries. To confirm this, query the status variable wsrep_cluster_status on each node:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';

If the query returns Primary, the node is part of the Primary Component. Any other value indicates that the node is part of a non-operational component. If none of the nodes return the value Primary, you have to reset the quorum.

Finding the Most Advanced Node

Identify the most advanced node in the cluster before resetting the quorum. Technically, the most advanced node is the one that committed the last transaction; this node will serve as the starting point for the new Primary Component. You can identify the most advanced node by its sequence number, or seqno, which you can determine using the wsrep_last_committed status variable.

From the database client on each node, run the following query:

SHOW STATUS LIKE 'wsrep_last_committed';

Resetting the Quorum

By resetting the quorum you are actually bootstrapping the Primary Component on the most advanced node available. This node will then function as the new Primary Component, bringing the rest of the cluster into line with its state.

You can do this either automatically or manually.

Automatic Bootstrap

Once you have identified the most advanced node, you can dynamically enable pc.bootstrap under wsrep_provider_options, which makes that node the new Primary Component. Run the following command:

SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';

Manual Bootstrap

To manually bootstrap your cluster, complete the following steps:

1. Shut down all cluster nodes. For servers that use init, run the following command from the console:

# service mysql stop

For servers that use systemd, instead run this command:

# systemctl stop mysql

2. Start the most advanced node with the --wsrep-new-cluster option. For servers that use init, run the following command:

# service mysql start --wsrep-new-cluster

For servers that use systemd and Galera Cluster 5.5 or 5.6, instead run this command:

# systemctl start mysql --wsrep-new-cluster

For servers that use systemd and Galera Cluster 5.7, use the following command:

# /usr/bin/mysqld_bootstrap

3. Start every other node in the cluster. For servers that use init, run the following command:

# service mysql start

For servers that use systemd, instead run this command:

# systemctl start mysql

Why do we recommend Automatic Bootstrap?

When you follow the automatic bootstrap process, the write-set cache, or GCache, is preserved on each node; i.e., some or all of the joining nodes can provision themselves using the Incremental State Transfer (IST) method, rather than the much slower State Snapshot Transfer (SST) method.

Things to remember while using Galera Cluster Streaming Replication
https://minervadb.com/index.php/2019/06/29/things-to-remember-while-using-galera-cluster-streaming-replication/
Sat, 29 Jun 2019

What are the things you must consider before using Galera Cluster Streaming Replication?

In Streaming Replication, the node breaks a transaction into fragments, then certifies and replicates them to the slaves while the transaction is still in progress. Once certified, a fragment can no longer be aborted by conflicting transactions. Additionally, Streaming Replication allows the node to process transaction write-sets greater than 2 GB. So how do you decide whether to go with Galera Cluster Streaming Replication or not? We caution our customers before choosing Galera Cluster Streaming Replication, considering the several limitations and costs attached to it (the limitations are explained later in this post). Even if you have decided to proceed with Galera Cluster Streaming Replication, we recommend enabling it only at the session level, and then only for specific transactions.

What is so compelling about Galera Cluster Streaming Replication?

  • Troubleshooting long-running transaction performance in Galera Cluster? It is worth trying Galera Cluster Streaming Replication

Many times, long-running write transactions on Galera Cluster get aborted: the longer it takes a node to commit a transaction, the greater the likelihood that the cluster will apply a smaller, conflicting transaction before the longer one can replicate to the cluster. When this happens, the cluster aborts the long-running transaction. With Streaming Replication, once the node replicates and certifies a fragment, it is no longer possible for other transactions to abort it. Note that the certification keys are generated from record locks, so they do not cover gap locks or next-key locks; if the transaction takes a gap lock, it is possible that a transaction executed on another node will apply a write-set that encounters the gap lock and aborts the streaming transaction. A minimal session-level sketch of fragmenting a large transaction follows.
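The sketch below assumes Galera Cluster 4 (wsrep_trx_fragment_unit / wsrep_trx_fragment_size); the 1 MB fragment size is purely illustrative and should be tuned to your transaction sizes:

SET SESSION wsrep_trx_fragment_unit = 'bytes';  -- fragment by write-set size rather than statements or rows
SET SESSION wsrep_trx_fragment_size = 1048576;  -- replicate a fragment roughly every 1 MB
-- ... run the long write transaction and COMMIT ...
SET SESSION wsrep_trx_fragment_size = 0;        -- switch streaming off again for this session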

  • Replicating Large Data WRITE transactions on Galera Cluster

Galera Cluster performance depends hugely on the quality of the network infrastructure. The node processes a transaction locally and does not replicate the data until you commit; and while slave nodes apply a large transaction, they cannot commit other transactions they receive, which may result in Flow Control throttling the entire cluster. In Streaming Replication, the node begins to replicate the data with each transaction fragment, rather than waiting for the commit. This allows you to spread the replication over the lifetime of the transaction, and it allows the slave nodes to process the large transaction incrementally, with minimal impact on the cluster.

  • Hot Records – Troubleshooting “UPDATEs” Performance in Galera Cluster 

When your application needs to frequently update the same records in the same table (e.g., implementing a locking scheme, a counter, or a job queue), Streaming Replication allows you to force critical changes to replicate to the entire cluster. For instance, consider the use case of a mobile ad network that records clicks for a campaign. When the transaction starts, it updates the table ad_clicks, setting the queue position for the clicks. Under normal replication, two transactions can come into conflict if they attempt to update the queue position at the same time.

You can avoid this with Streaming Replication. As an example of how to do this, you would first execute the following SQL statement to begin the transaction:

START TRANSACTION;

After reading the data that you need for the application, you would enable Streaming Replication by executing the following two SET statements:

SET SESSION wsrep_trx_fragment_unit='statements';
SET SESSION wsrep_trx_fragment_size=1;

Next, set the user’s position in the queue like so:

UPDATE ad_clicks
SET queue_position = queue_position + 1;

With that done, you can disable Streaming Replication by executing one of the previous SET statements, but with a different value like so:

SET SESSION wsrep_trx_fragment_size=0;

You can now perform whatever additional tasks you need to prepare the Ad. Clicks, and then commit the transaction:

COMMIT;

During the ad-clicks transaction, the client initiates Streaming Replication for a single statement, which it uses to set the queue position. The queue position update then replicates throughout the cluster, preventing other nodes from coming into conflict with the new clicks.

Let’s talk about Galera Cluster Streaming Replication limitations – not everything is always awesome with Galera Cluster Streaming Replication 😉

There are limitations to Galera Cluster Streaming Replication; it is definitely not the solution for all your Galera replication challenges. The two major challenges with Galera Cluster Streaming Replication are performance bottlenecks during a transaction and during rollback. We have explained these below in detail:

Performance bottleneck during a transaction 

When Streaming Replication is enabled on Galera Cluster version 4, each node in the cluster begins recording its write-sets to the wsrep_streaming_log table in the mysql database. This is done to guarantee the persistence of Streaming Replication updates in the event of a crash. However, this operation increases the load on the nodes, which can become a performance bottleneck during peak load hours.
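If you want a rough idea of how much streaming state a node is currently persisting, you can look at the streaming log table directly; the exact column layout varies by version, so this count-only query is a conservative sketch:

SELECT COUNT(*) AS streaming_fragments FROM mysql.wsrep_streaming_log;  -- fragments currently persisted on this node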

Performance bottleneck during rollbacks 

If you ever roll back a transaction while Streaming Replication is in use, the rollback operation consumes a large share of the available system resources. So if your application frequently needs to roll back transactions, this can become a major performance bottleneck for Galera Cluster Streaming Replication operations. We therefore strongly recommend using shorter transactions whenever possible. If your application performs batch processing or scheduled housekeeping tasks, consider splitting these into smaller transactions in addition to using Streaming Replication.

Installation and configuration of 3 node MariaDB Galera Cluster and MariaDB MaxScale on CentOS
https://minervadb.com/index.php/2019/06/06/installation-and-configuration-of-mariadb-galera-cluster-and-mariadb-maxscale-on-centos/
Thu, 06 Jun 2019

Step-by-step installation and configuration of 3 node MariaDB Galera Cluster and MariaDB MaxScale on CentOS

This blog is aimed at anyone who is interested in setting up a 3 node MariaDB Galera Cluster with MariaDB MaxScale for building highly available, reliable, fault-tolerant and self-healing MariaDB infrastructure operations. We have not tried to explain MariaDB Galera Cluster or MariaDB MaxScale architecture or internals in this post. The MariaDB documentation on both MariaDB Galera Cluster and MariaDB MaxScale is neat and direct; you can read it here – https://mariadb.com/kb/en/library/galera-cluster/ (MariaDB Galera Cluster) and https://mariadb.com/kb/en/mariadb-maxscale-20-setting-up-mariadb-maxscale/ (MariaDB MaxScale). If you are building maximum-availability database infrastructure operations on the MariaDB stack, it is worth investing in and exploring MariaDB Galera Cluster and MariaDB MaxScale.

Our lab for building highly available and fault-tolerant MariaDB operations using a 3 node MariaDB Galera Cluster and MariaDB MaxScale

We have 3 nodes for the MariaDB Galera Cluster and 1 node for MariaDB MaxScale:

  • MariaDB Galera Cluster Node 1 – 192.168.56.101
  • MariaDB Galera Cluster Node 2 – 192.168.56.102
  • MariaDB Galera Cluster Node 3 – 192.168.56.103
  • MariaDB MaxScale – 192.168.56.104

LINUX Configuration

Disable SELinux and the Linux firewall (which is firewalld in CentOS and RedHat 7.0 and up, and not iptables) and also set the hostname.

Disable SELinux

We recommend disabling SELinux unless your IT security policy demands it. You disable / enable SELinux through the file /etc/selinux/config, which looks something like this:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Disable firewalld

Firewalld is a standard service that is disabled using the systemctl command:

$ sudo systemctl disable firewalld

Configuring hostname

We recommend configuring the hostname so you can tell which server you are connected to when using MariaDB MaxScale. Please adjust the steps appropriately for your infrastructure / IPs:

$ sudo hostname node101

MariaDB Repository Installation

Before we install the software we need to set up the MariaDB repository on all 4 servers:

$ curl -sS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash

Having run this on all four servers, let us now go on with installing the MariaDB Galera server on the three nodes where this is appropriate; in our case these are the nodes 192.168.56.101, 192.168.56.102 and 192.168.56.103. On these three nodes run this:

$ sudo yum -y install MariaDB-server

When this is completed, we should have MariaDB Server installed. The next thing to do then is to install MariaDB MaxScale on the instance 192.168.56.104 :

$ sudo yum -y install maxscale

We recommend that our customers also install the MariaDB client programs on the MariaDB MaxScale instance (192.168.56.104), for good reasons, even though some blogs say this is optional:

$ sudo yum -y install MariaDB-client

Configuring MariaDB Galera Cluster  

In this blog we only mention the minimal settings needed to make MariaDB Galera Cluster work with MariaDB MaxScale; we are not covering how to make MariaDB Galera Cluster optimal and scalable. The settings below make MariaDB Galera Cluster fully operational. We have to edit the file /etc/my.cnf.d/server.cnf and adjust the Galera-specific settings on the nodes 192.168.56.101, 192.168.56.102 and 192.168.56.103. Edit the [galera] section to look like this on all three nodes:

[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.56.101,192.168.56.102,192.168.56.103
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

Starting MariaDB Galera Cluster

To start a Galera Cluster from scratch we run a process called a bootstrap. The reason this is a bit different from the usual MariaDB startup is that, for HA reasons, a node in a cluster attaches to one or more other nodes in the cluster, but for the first node this is not possible. This is not complicated though: a script included with MariaDB Server manages this, but remember that this script is only to be used when the first node in a cluster is started and there are no existing nodes in it. In this case, on 192.168.56.101 run:

$ sudo galera_new_cluster

Confirm MariaDB is running successfully:

$ ps -f -u mysql | more
UID        PID  PPID  C STIME TTY          TIME CMD
mysql     1411     1  0 18:33 ?        00:00:00 /usr/sbin/mysqld --wsrep-new-cluster --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1

Confirm the status of Galera Cluster:

$ mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or g.
Your MariaDB connection id is 10
Server version: 10.3.15-MariaDB MariaDB Server
 
Copyright (c) 2000, 2019, Oracle, MariaDB Corporation Ab and others.
 
Type 'help;' or 'h' for help. Type 'c' to clear the current input statement.
 
MariaDB [(none)]> show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.00 sec)

Start the MariaDB instance on 192.168.56.102:

$ sudo systemctl start mariadb.service

We should now have 2 nodes running in the cluster; let's check it from the MariaDB command line on 192.168.56.101:

MariaDB [(none)]> show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+
1 row in set (0.00 sec)

Start the MariaDB instance on 192.168.56.103:

$ sudo systemctl start mariadb.service

Check the cluster size on 192.168.56.101 again:

MariaDB [(none)]> show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)

The wsrep_cluster_size is 3, so we have successfully added all three nodes to the Galera Cluster.
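Optionally, a per-node sanity check (a minimal sketch using standard Galera status variables) confirms that each node is fully synced and part of the Primary Component before moving on to MaxScale:

show global status like 'wsrep_local_state_comment';  -- expected: Synced
show global status like 'wsrep_cluster_status';       -- expected: Primary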

Configuring MariaDB for MariaDB MaxScale

First we need to set up a user that MariaDB MaxScale uses to attach to the cluster and fetch authentication data. On 192.168.56.101, using the MariaDB command line as the database root user:

$ mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or g.
Your MariaDB connection id is 11
Server version: 10.3.15-MariaDB MariaDB Server
 
Copyright (c) 2000, 2019, Oracle, MariaDB Corporation Ab and others.
 
Type 'help;' or 'h' for help. Type 'c' to clear the current input statement.
 
MariaDB [(none)]> create user 'dbuser1'@'192.168.56.104' identified by 'My@PAssword';
Query OK, 0 rows affected (0.01 sec)
 
MariaDB [(none)]> grant select on mysql.user to 'dbuser1'@'192.168.56.104';
Query OK, 0 rows affected (0.01 sec)

We also need some extra privileges for table- and database-level grants:

MariaDB [(none)]> grant select on mysql.db to 'dbuser1'@'192.168.56.104';
Query OK, 0 rows affected (0.01 sec)
 
MariaDB [(none)]> grant select on mysql.tables_priv to 'dbuser1'@'192.168.56.104';
Query OK, 0 rows affected (0.00 sec)
 
MariaDB [(none)]> grant show databases on *.* to 'dbuser1'@'192.168.56.104';
Query OK, 0 rows affected (0.00 sec)
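As an optional verification step (a quick sketch), you can confirm the grants from any cluster node before configuring MaxScale:

SHOW GRANTS FOR 'dbuser1'@'192.168.56.104';  -- should list the SELECT and SHOW DATABASES grants created above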

MariaDB MaxScale Configuration

The MariaDB MaxScale configuration file is located at /etc/maxscale.cnf. We have copied below the maxscale.cnf used in our lab:

# Globals
[maxscale]
threads=1
 
# Servers
[server1]
type=server
address=192.168.56.101
port=3306
protocol=MySQLBackend
 
[server2]
type=server
address=192.168.56.102
port=3306
protocol=MySQLBackend
 
[server3]
type=server
address=192.168.56.103
port=3306
protocol=MySQLBackend
 
# Monitoring for the servers
[Galera Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=dbuser1
passwd=My@PAssword
monitor_interval=1000
 
# Galera router service
[Galera Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=dbuser1
passwd=My@PAssword
 
# MaxAdmin Service
[MaxAdmin Service]
type=service
router=cli
 
# Galera cluster listener
[Galera Listener]
type=listener
service=Galera Service
protocol=MySQLClient
port=3306
 
# MaxAdmin listener
[MaxAdmin Listener]
type=listener
service=MaxAdmin Service
protocol=maxscaled
socket=default

Starting MariaDB MaxScale

$ sudo systemctl start maxscale.service

Connecting to MariaDB Galera Cluster from MariaDB MaxScale:

$ mysql -h 192.168.56.104 -u dbuser1 -pMy@PAssword
Welcome to the MariaDB monitor.  Commands end with ; or g.
Your MySQL connection id is 4668
Server version: 10.0.0 3.1.5-maxscale MariaDB Server
 
Copyright (c) 2000, 2019, Oracle, MariaDB Corporation Ab and others.
 
Type 'help;' or 'h' for help. Type 'c' to clear the current input statement.
 
MySQL [(none)]>

You can see that we are connected through MariaDB MaxScale now, but which server in the MariaDB Galera Cluster are we connected to? Let's confirm that now:

MySQL [(none)]> show variables like 'hostname';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| hostname      | node101 |
+---------------+---------+
1 row in set (0.00 sec)

Now let's stop the MariaDB server on 192.168.56.101 and see what happens. On 192.168.56.101 run the following command:

$ sudo systemctl stop mariadb.service

Now log in through MariaDB MaxScale again and check which MariaDB instance we are connected to:

$ mysql -h 192.168.56.104 -u dbuser1 -pMy@PAssword
Welcome to the MariaDB monitor.  Commands end with ; or g.
Your MySQL connection id is 4668
Server version: 10.0.0 2.1.5-maxscale MariaDB Server
 
Copyright (c) 2000, 2019, Oracle, MariaDB Corporation Ab and others.
 
Type 'help;' or 'h' for help. Type 'c' to clear the current input statement.
 
MySQL [(none)]> show variables like 'hostname';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| hostname      | node102 |
+---------------+---------+
1 row in set (0.00 sec)

We are connected to “node102” (192.168.56.102) because “node101” (192.168.56.101) is not available.

Conclusion

To conclude this post: we have successfully installed and configured a 3 node MariaDB Galera Cluster with a single node MariaDB MaxScale.
