Installation and configuration of Percona XtraDB Cluster on CentOS 7.3
http://minervadb.com/index.php/2018/08/30/installation-and-configuration-of-percona-xtradb-cluster-on-centos-7-3/
Thu, 30 Aug 2018 06:11:05 +0000

The post Installation and configuration of Percona XtraDB Cluster on CentOS 7.3 appeared first on The WebScale Database Infrastructure Operations Experts.

This blog shows how to install Percona XtraDB Cluster on three CentOS 7.3 servers, using packages from the Percona repositories. It is a step-by-step installation and configuration guide. We recommend Percona XtraDB Cluster for maximum availability and reliability, and for scaling out reads and writes optimally. We are a private-label, independent and vendor-neutral provider of consulting, support, managed services and education for MySQL, MariaDB, Percona Server and ClickHouse, with core expertise in performance, scalability, high availability and database reliability engineering. All our blog posts are focused purely on education and research across open source database infrastructure operations. To engage us for building and managing web-scale database infrastructure operations, please contact us at contact@minervadb.com

The cluster will consist of three servers/nodes:

  • node #1: hostname PXC1, IP 138.197.70.35
  • node #2: hostname PXC2, IP 159.203.118.230
  • node #3: hostname PXC3, IP 138.197.8.226

Prerequisites

  • All three nodes have a CentOS 7.3 installation.
  • The firewall has been set up to allow connections on ports 3306, 4444, 4567 and 4568.
  • SELinux is disabled.
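The firewall and SELinux prerequisites can be applied with a short script. A minimal sketch, assuming firewalld is in use and that disabling SELinux is acceptable in your environment; the `run` helper and `DRY_RUN` flag are our own illustrative additions, not part of the original procedure:

```shell
# Open the cluster ports and relax SELinux; run as root on every node.
# With DRY_RUN=1 the commands are only printed, not executed.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

open_pxc_ports() {
    # 3306: MySQL client traffic, 4444: SST, 4567: group communication, 4568: IST
    for port in 3306 4444 4567 4568; do
        run firewall-cmd --permanent --add-port="${port}/tcp"
    done
    run firewall-cmd --reload
    run setenforce 0  # runtime only; set SELINUX=disabled in /etc/selinux/config to persist
}
```

Call `open_pxc_ports` as root on each node, or set `DRY_RUN=1` first to review the commands before running them.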

Installing from Percona Repository on 138.197.70.35

  • Install the Percona repository package:
$ sudo yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
  • You should see the following if successful:
Installed:

 percona-release.noarch 0:0.1-4

Complete!
  • Check that the packages are available:
$ sudo yum list | grep Percona-XtraDB-Cluster-57

Percona-XtraDB-Cluster-57.x86_64          5.7.14-26.17.1.el7         percona-release-x86_64

Percona-XtraDB-Cluster-57-debuginfo.x86_64 5.7.14-26.17.1.el7         percona-release-x86_64
  • Install the Percona XtraDB Cluster packages:
$ sudo yum install Percona-XtraDB-Cluster-57
  • Start the Percona XtraDB Cluster server:
$ sudo service mysql start
  • Copy the automatically generated temporary password for the superuser account:
$ sudo grep 'temporary password' /var/log/mysqld.log
  • Use this password to login as root:
$ mysql -u root -p
  • Change the password for the superuser account and log out. For example:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';

Query OK, 0 rows affected (0.00 sec)

mysql> exit

Bye
  • Stop the mysql service:
$ sudo service mysql stop

Repeat the same Percona XtraDB Cluster installation process for 159.203.118.230 and 138.197.8.226
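Rather than repeating the steps by hand, the installation can be scripted. A sketch, assuming passwordless root SSH access to the remaining nodes; the `NODES` variable and `install_pxc_node` helper are hypothetical conveniences, not part of the original post:

```shell
# The two remaining cluster nodes.
NODES="159.203.118.230 138.197.8.226"

# Install the Percona repository package and the PXC packages on one remote node.
install_pxc_node() {
    ssh "root@$1" 'yum -y install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm &&
                   yum -y install Percona-XtraDB-Cluster-57'
}
```

Then loop over the nodes: `for node in $NODES; do install_pxc_node "$node"; done`. The password and SST-user steps still have to be done per node as shown above.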

Configuring nodes

We now configure the nodes 138.197.70.35, 159.203.118.230 and 138.197.8.226 individually to bring up a fully operational Percona XtraDB Cluster.

Configuring the node 138.197.70.35

Configuration file /etc/my.cnf for the first node should look like:

[mysqld]

datadir=/var/lib/mysql

user=mysql

# Path to Galera library

wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3

wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226

# In order for Galera to work correctly binlog format should be ROW

binlog_format=ROW

# MyISAM storage engine has only experimental support

default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera

innodb_autoinc_lock_mode=2

# Node #1 address

wsrep_node_address=138.197.70.35

# SST method

wsrep_sst_method=xtrabackup-v2

# Cluster name

wsrep_cluster_name=pxc_cluster

# Authentication for SST method

wsrep_sst_auth="sstuser:sstuser"

On SysV-init systems the first node would be started with:

# /etc/init.d/mysql bootstrap-pxc

Since we are using CentOS 7.3, the systemd bootstrap service should be used instead:

# systemctl start mysql@bootstrap.service

This command starts the node with wsrep_cluster_address overridden to an empty gcomm://, which bootstraps the cluster. Because the override applies only to the bootstrap run, the configuration file does not need to be changed if the node or MySQL is restarted later.

After the first node has been started, cluster status can be checked by:

mysql> show status like 'wsrep%';

+------------------------------+------------------------------------------------------------+

| Variable_name               | Value                                                     |

+------------------------------+------------------------------------------------------------+

| wsrep_local_state_uuid      | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_protocol_version      | 7                                                         |

| wsrep_last_committed        | 8                                                         |

| wsrep_replicated            | 4                                                         |

| wsrep_replicated_bytes      | 906                                                       |

| wsrep_repl_keys             | 4                                                         |

| wsrep_repl_keys_bytes       | 124                                                       |

| wsrep_repl_data_bytes       | 526                                                       |

| wsrep_repl_other_bytes      | 0                                                         |

| wsrep_received              | 9                                                         |

| wsrep_received_bytes        | 1181                                                      |

| wsrep_local_commits         | 0                                                         |

| wsrep_local_cert_failures   | 0                                                         |

| wsrep_local_replays         | 0                                                         |

| wsrep_local_send_queue      | 0                                                         |

| wsrep_local_send_queue_max  | 1                                                         |

| wsrep_local_send_queue_min  | 0                                                         |

| wsrep_local_send_queue_avg  | 0.000000                                                  |

| wsrep_local_recv_queue      | 0                                                         |

| wsrep_local_recv_queue_max  | 2                                                         |

| wsrep_local_recv_queue_min  | 0                                                         |

| wsrep_local_recv_queue_avg  | 0.111111                                                  |

| wsrep_local_cached_downto   | 3                                                         |

| wsrep_flow_control_paused_ns | 0                                                         |

| wsrep_flow_control_paused   | 0.000000                                                  |

| wsrep_flow_control_sent     | 0                                                         |

| wsrep_flow_control_recv     | 0                                                         |

| wsrep_flow_control_interval | [ 28, 28 ]                                                |

| wsrep_cert_deps_distance    | 1.000000                                                  |

| wsrep_apply_oooe            | 0.000000                                                  |

| wsrep_apply_oool            | 0.000000                                                  |

| wsrep_apply_window          | 1.000000                                                  |

| wsrep_commit_oooe           | 0.000000                                                  |

| wsrep_commit_oool           | 0.000000                                                  |

| wsrep_commit_window         | 1.000000                                                  |

| wsrep_local_state           | 4                                                         |

| wsrep_local_state_comment   | Synced                                                    |

| wsrep_cert_index_size       | 2                                                         |

| wsrep_cert_bucket_count     | 22                                                        |

| wsrep_gcache_pool_size      | 3128                                                      |

| wsrep_causal_reads          | 0                                                         |

| wsrep_cert_interval         | 0.000000                                                  |

| wsrep_incoming_addresses    | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |

| wsrep_desync_count          | 0                                                         |

| wsrep_evs_delayed           |                                                           |

| wsrep_evs_evict_list        |                                                           |

| wsrep_evs_repl_latency      | 0/0/0/0/0                                                 |

| wsrep_evs_state             | OPERATIONAL                                               |

| wsrep_gcomm_uuid            | b79d90df-1077-11e7-9922-3a1b217f7371                      |

| wsrep_cluster_conf_id       | 3                                                         |

| wsrep_cluster_size          | 3                                                         |

| wsrep_cluster_state_uuid    | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_cluster_status        | Primary                                                   |

| wsrep_connected             | ON                                                        |

| wsrep_local_bf_aborts       | 0                                                         |

| wsrep_local_index           | 2                                                         |

| wsrep_provider_name         | Galera                                                    |

| wsrep_provider_vendor       | Codership Oy <info@codership.com>                         |

| wsrep_provider_version      | 3.20(r7e383f7)                                            |

| wsrep_ready                 | ON                                                        |

+------------------------------+------------------------------------------------------------+

60 rows in set (0.01 sec)

The output above shows that the cluster has been successfully bootstrapped.
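Instead of scanning all 60 status rows, a focused query of the key wsrep variables is enough to confirm health. A small sketch; the `pxc_health` helper name is our own:

```shell
# Print only the wsrep variables that indicate overall cluster health.
# A healthy node reports: wsrep_cluster_size 3, wsrep_cluster_status Primary,
# wsrep_local_state_comment Synced, wsrep_ready ON (matching the output above).
pxc_health() {
    mysql -u root -p -e "SHOW STATUS WHERE Variable_name IN
        ('wsrep_cluster_size', 'wsrep_cluster_status',
         'wsrep_local_state_comment', 'wsrep_ready');"
}
```

Running `pxc_health` on any node after each join is a quick way to confirm the cluster size grows as expected.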

To perform a successful State Snapshot Transfer (SST) using XtraBackup, a new user with the proper privileges needs to be set up:

mysql@PXC1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'sstuser';

mysql@PXC1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';

mysql@PXC1> FLUSH PRIVILEGES;

Configuration file /etc/my.cnf on the second node (PXC2) should look like this:

[mysqld]

datadir=/var/lib/mysql

user=mysql

# Path to Galera library

wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3

wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226

# In order for Galera to work correctly binlog format should be ROW

binlog_format=ROW

# MyISAM storage engine has only experimental support

default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera

innodb_autoinc_lock_mode=2

# Node #2 address

wsrep_node_address=159.203.118.230

# SST method

wsrep_sst_method=xtrabackup-v2

# Cluster name

wsrep_cluster_name=pxc_cluster

# Authentication for SST method

wsrep_sst_auth="sstuser:sstuser"

The second node can be started with the following command:

# systemctl start mysql

Cluster status can now be checked on both nodes. Here is the output from the second node (PXC2):

mysql> show status like 'wsrep%';

+------------------------------+------------------------------------------------------------+

| Variable_name               | Value                                                     |

+------------------------------+------------------------------------------------------------+

| wsrep_local_state_uuid      | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_protocol_version      | 7                                                         |

| wsrep_last_committed        | 8                                                         |

| wsrep_replicated            | 0                                                         |

| wsrep_replicated_bytes      | 0                                                         |

| wsrep_repl_keys             | 0                                                         |

| wsrep_repl_keys_bytes       | 0                                                         |

| wsrep_repl_data_bytes       | 0                                                         |

| wsrep_repl_other_bytes      | 0                                                         |

| wsrep_received              | 10                                                        |

| wsrep_received_bytes        | 1238                                                      |

| wsrep_local_commits         | 0                                                         |

| wsrep_local_cert_failures   | 0                                                         |

| wsrep_local_replays         | 0                                                         |

| wsrep_local_send_queue      | 0                                                         |

| wsrep_local_send_queue_max  | 1                                                         |

| wsrep_local_send_queue_min  | 0                                                         |

| wsrep_local_send_queue_avg  | 0.000000                                                  |

| wsrep_local_recv_queue      | 0                                                         |

| wsrep_local_recv_queue_max  | 1                                                         |

| wsrep_local_recv_queue_min  | 0                                                         |

| wsrep_local_recv_queue_avg  | 0.000000                                                  |

| wsrep_local_cached_downto   | 6                                                         |

| wsrep_flow_control_paused_ns | 0                                                         |

| wsrep_flow_control_paused   | 0.000000                                                  |

| wsrep_flow_control_sent     | 0                                                         |

| wsrep_flow_control_recv     | 0                                                         |

| wsrep_flow_control_interval | [ 28, 28 ]                                                |

| wsrep_cert_deps_distance    | 1.000000                                                  |

| wsrep_apply_oooe            | 0.000000                                                  |

| wsrep_apply_oool            | 0.000000                                                  |

| wsrep_apply_window          | 1.000000                                                  |

| wsrep_commit_oooe           | 0.000000                                                  |

| wsrep_commit_oool           | 0.000000                                                  |

| wsrep_commit_window         | 1.000000                                                  |

| wsrep_local_state           | 4                                                         |

| wsrep_local_state_comment   | Synced                                                    |

| wsrep_cert_index_size       | 2                                                         |

| wsrep_cert_bucket_count     | 22                                                        |

| wsrep_gcache_pool_size      | 2300                                                      |

| wsrep_causal_reads          | 0                                                         |

| wsrep_cert_interval         | 0.000000                                                  |

| wsrep_incoming_addresses    | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |

| wsrep_desync_count          | 0                                                         |

| wsrep_evs_delayed           |                                                           |

| wsrep_evs_evict_list        |                                                           |

| wsrep_evs_repl_latency      | 0/0/0/0/0                                                 |

| wsrep_evs_state             | OPERATIONAL                                               |

| wsrep_gcomm_uuid            | 248e2782-1078-11e7-a269-4a3ec033a606                      |

| wsrep_cluster_conf_id       | 3                                                         |

| wsrep_cluster_size          | 3                                                         |

| wsrep_cluster_state_uuid    | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_cluster_status        | Primary                                                   |

| wsrep_connected             | ON                                                        |

| wsrep_local_bf_aborts       | 0                                                         |

| wsrep_local_index           | 0                                                         |

| wsrep_provider_name         | Galera                                                    |

| wsrep_provider_vendor       | Codership Oy <info@codership.com>                         |

| wsrep_provider_version      | 3.20(r7e383f7)                                            |

| wsrep_ready                 | ON                                                        |

+------------------------------+------------------------------------------------------------+

60 rows in set (0.00 sec)

This output shows that the new node has been successfully added to the cluster.

MySQL configuration file /etc/my.cnf on the third node (PXC3) should look like this:

[mysqld]

datadir=/var/lib/mysql

user=mysql

# Path to Galera library

wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3

wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226

# In order for Galera to work correctly binlog format should be ROW

binlog_format=ROW

# MyISAM storage engine has only experimental support

default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera

innodb_autoinc_lock_mode=2

# Node #3 address

wsrep_node_address=138.197.8.226

# SST method

wsrep_sst_method=xtrabackup-v2

# Cluster name

wsrep_cluster_name=pxc_cluster

# Authentication for SST method

wsrep_sst_auth="sstuser:sstuser"

The third node can now be started with the following command:

# systemctl start mysql

Percona XtraDB Cluster status can now be checked from the third node (PXC3):

mysql> show status like 'wsrep%';

+------------------------------+------------------------------------------------------------+

| Variable_name               | Value                                                     |

+------------------------------+------------------------------------------------------------+

| wsrep_local_state_uuid      | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_protocol_version      | 7                                                         |

| wsrep_last_committed        | 8                                                         |

| wsrep_replicated            | 2                                                         |

| wsrep_replicated_bytes      | 396                                                       |

| wsrep_repl_keys             | 2                                                         |

| wsrep_repl_keys_bytes       | 62                                                        |

| wsrep_repl_data_bytes       | 206                                                       |

| wsrep_repl_other_bytes      | 0                                                         |

| wsrep_received              | 4                                                         |

| wsrep_received_bytes        | 529                                                       |

| wsrep_local_commits         | 0                                                         |

| wsrep_local_cert_failures   | 0                                                         |

| wsrep_local_replays         | 0                                                         |

| wsrep_local_send_queue      | 0                                                         |

| wsrep_local_send_queue_max  | 1                                                         |

| wsrep_local_send_queue_min  | 0                                                         |

| wsrep_local_send_queue_avg  | 0.000000                                                  |

| wsrep_local_recv_queue      | 0                                                         |

| wsrep_local_recv_queue_max  | 1                                                         |

| wsrep_local_recv_queue_min  | 0                                                         |

| wsrep_local_recv_queue_avg  | 0.000000                                                  |

| wsrep_local_cached_downto   | 6                                                         |

| wsrep_flow_control_paused_ns | 0                                                         |

| wsrep_flow_control_paused   | 0.000000                                                  |

| wsrep_flow_control_sent     | 0                                                         |

| wsrep_flow_control_recv     | 0                                                         |

| wsrep_flow_control_interval | [ 28, 28 ]                                                |

| wsrep_cert_deps_distance    | 1.000000                                                  |

| wsrep_apply_oooe            | 0.000000                                                  |

| wsrep_apply_oool            | 0.000000                                                  |

| wsrep_apply_window          | 1.000000                                                  |

| wsrep_commit_oooe           | 0.000000                                                  |

| wsrep_commit_oool           | 0.000000                                                  |

| wsrep_commit_window         | 1.000000                                                  |

| wsrep_local_state           | 4                                                         |

| wsrep_local_state_comment   | Synced                                                    |

| wsrep_cert_index_size       | 2                                                         |

| wsrep_cert_bucket_count     | 22                                                        |

| wsrep_gcache_pool_size      | 2166                                                      |

| wsrep_causal_reads          | 0                                                         |

| wsrep_cert_interval         | 0.000000                                                  |

| wsrep_incoming_addresses    | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |

| wsrep_desync_count          | 0                                                         |

| wsrep_evs_delayed           |                                                           |

| wsrep_evs_evict_list        |                                                           |

| wsrep_evs_repl_latency      | 0/0/0/0/0                                                 |

| wsrep_evs_state             | OPERATIONAL                                               |

| wsrep_gcomm_uuid            | 3f51b20e-1078-11e7-8405-8e9b37a37cb1                      |

| wsrep_cluster_conf_id       | 3                                                         |

| wsrep_cluster_size          | 3                                                         |

| wsrep_cluster_state_uuid    | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5                      |

| wsrep_cluster_status        | Primary                                                   |

| wsrep_connected             | ON                                                        |

| wsrep_local_bf_aborts       | 0                                                         |

| wsrep_local_index           | 1                                                         |

| wsrep_provider_name         | Galera                                                    |

| wsrep_provider_vendor       | Codership Oy <info@codership.com>                         |

| wsrep_provider_version      | 3.20(r7e383f7)                                            |

| wsrep_ready                 | ON                                                        |

+------------------------------+------------------------------------------------------------+

60 rows in set (0.03 sec)

This output confirms that the third node has joined the cluster.

Testing Replication

Creating a new database on the PXC1 node:

mysql> create database minervadb;

Query OK, 1 row affected (0.01 sec)

Creating the example table on the PXC2 node:

mysql> use minervadb;

Database changed

mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));

Query OK, 0 rows affected (0.01 sec)

Inserting records on the PXC3 node:

mysql> INSERT INTO minervadb.example VALUES (1, 'MinervaDB');

Query OK, 1 row affected (0.07 sec)

Retrieving all the rows from that table on the PXC1 node:

mysql> select * from minervadb.example;

+---------+-----------+

| node_id | node_name |

+---------+-----------+

|      1 | MinervaDB |

+---------+-----------+

1 row in set (0.00 sec)
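The same round trip can be verified from a single host. A sketch, assuming each node accepts remote root connections with the password set earlier; remote root access is not configured anywhere in this post, so treat that as an assumption, and the `check_replication` helper name is our own:

```shell
# Query every node for the contents of the table inserted on PXC3.
check_replication() {
    for host in 138.197.70.35 159.203.118.230 138.197.8.226; do
        echo "== $host =="
        mysql -h "$host" -u root -p \
              -e "SELECT node_id, node_name FROM minervadb.example;"
    done
}
```

All three nodes should return the identical row (1, 'MinervaDB'), confirming synchronous replication across the cluster.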


MinervaDB Technology Partnership Program
http://minervadb.com/index.php/2018/02/26/minervadb-technology-partnership-program/
Mon, 26 Feb 2018 16:47:04 +0000

The post MinervaDB Technology Partnership Program appeared first on The WebScale Database Infrastructure Operations Experts.

Are you a systems integrator, ISV, or value-added reseller / distributor powering database infrastructure operations for internet/mobility, IoT and technology companies? Then you will be interested in talking to us. We have built planet-scale internet properties for several Alexa top-5000 websites across diversified industries such as CDN, mobile advertisement networks, e-commerce, online payment solutions, social media applications / gaming and SaaS. We are a boutique, private-label, independent and vendor-neutral MySQL, MariaDB, Percona Server and ClickHouse consulting, support and training services provider with core expertise in performance, scalability and high availability. We are a virtual corporation with no offices anywhere in the world; all of us work from home across multiple time zones (from different locations worldwide) and stay connected via email, Google Hangouts, Skype, Slack and phone. This makes us a highly attractive technology partner for building web-scale database infrastructure operations for customers worldwide.

☛ How we work with partners

  • Actively participate in our partners' go-to-market strategies and fill the gaps in addressing the challenges of building web-scale database infrastructure operations using MySQL, MariaDB, Percona Server and ClickHouse.
  • Partners can contact our consulting, support and remote DBA services teams for scoping database infrastructure operations services, pre-sales, strategic advisory services and technology account management.
  • Unbiased technology consulting, support and remote DBA services – we are a vendor-neutral and independent MySQL, MariaDB, Percona Server and ClickHouse consulting, support and training company with core expertise in performance, scalability and high availability.
  • We work as an extended team of our partners so they can benefit both technically and strategically from our technology partnership program.
  • Global presence to support partners technically and strategically worldwide.

☛ MinervaDB Technical Expertise

  • MySQL – MySQL (GA and Enterprise); InnoDB; MySQL NDB Cluster; InnoDB Cluster; MySQL performance monitoring and trending.
  • MariaDB – MariaDB Server; MariaDB Backup; MariaDB MaxScale; MariaDB ColumnStore; MariaDB Spider.
  • Percona Server – Percona Toolkit; XtraBackup; Percona XtraDB Cluster; TokuDB.
  • Building and troubleshooting web-scale database infrastructure operations using MyRocks / RocksDB.
  • InnoDB / XtraDB – Performance optimization ; troubleshooting and data recovery services.
  • Performance benchmarking – Measure response time, throughput and recommendations.
  • Capacity planning and sizing – Building infrastructure for performance, scalability and reliability for database operations proactively.
  • Performance health check, diagnostics and forensics – Monitor your database performance 24*7 and proactively troubleshoot.
  • Performance optimization and tuning – High performance MySQL, Maximum reliability of your database infrastructure operations.
  • Designing optimal schema (logical and physical) – Size it right first time and build an optimal data lifecycle management system.
  • SQL performance tuning – Measure SQL performance by response time and conservative SQL engineering practices.
  • Disk I/O tuning – Distribute disk operations optimally across the storage layer and build archiving plan early / proactively.
  • High availability and site reliability engineering – Self healing database infrastructure operations, auto-failover and cross-DC availability services.
  • Galera Cluster operations and troubleshooting.
  • Continuent Tungsten operations and troubleshooting.
  • MySQL load balancing solutions using HAProxy, ProxySQL, MySQL Router.
  • Data recovery services – Building robust disaster recovery solutions, zero data loss systems and multi-location backup retention.
  • Sharding and horizontal partitioning solutions – Building scalable database infrastructure for planet-scale internet properties.
  • Database clustering solutions – Grow your MySQL infrastructure horizontally for performance, scalability and availability.
  • Scale-out and replication – Building maximum availability database infrastructure operations.
  • Database security – Database firewall, transaction audit and secured data operations.
  • Database upgrades and migration – Seamless upgrades and migration of database infrastructure with zero downtime.
  • ClickHouse – ClickHouse consulting, support and training.

☛ Technology focus – Vendor neutral and independent

  • Linux – Ubuntu, Debian, CentOS, Red Hat Linux, Oracle Linux and SUSE Linux.
  • MySQL – MySQL GA, MySQL Enterprise, InnoDB, MySQL Enterprise Backup, MySQL Cluster CGE, MySQL Enterprise Monitor, MySQL Utilities, MySQL Enterprise Audit, MySQL Enterprise Firewall and MySQL Router.
  • Percona – Percona Server for MySQL, XtraDB, TokuDB, RocksDB, Percona Toolkit, Percona XtraBackup and PMM (Percona Monitoring & Management).
  • MariaDB – MariaDB Server, RocksDB, MariaDB Galera Cluster, MariaDB Backup, MariaDB MaxScale and MariaDB ColumnStore.
  • PostgreSQL – PostgreSQL performance benchmarking, capacity planning / sizing, performance optimization, high availability / database reliability engineering, upgrades / migration and security.
  • Cloud DBA services – IaaS and DBaaS, including Oracle Cloud, Google Cloud SQL, Amazon Aurora, AWS RDS®, EC2®, Microsoft Azure® and Rackspace® Cloud.
  • Performance monitoring and trending platforms – MySQL Enterprise Monitor, Icinga, Zabbix, Prometheus and Grafana.
  • High availability, scale-out, replication and load balancing – MySQL Group Replication, MySQL Cluster CGE, InnoDB Cluster, Galera Cluster, Percona XtraDB Cluster, MariaDB MaxScale, Continuent Tungsten Replicator, MHA (Master High Availability Manager and tools for MySQL), HAProxy, ProxySQL, MySQL Router and Vitess.
  • Columnar database systems – ClickHouse and MariaDB ColumnStore.
  • DevOps and automation – Vagrant, Docker, Kubernetes, Jenkins, Ganglia, Chef, Puppet, Ansible, Consul, JIRA, Graylog and Grafana.

☛ Partial list of customers – What we did for them?

  • BankBazaar – MariaDB consultative support and remote DBA services.
  • MIX – MySQL consulting and professional services.
  • AOL – Performance benchmarking, capacity planning & sizing, remote DBA services and site reliability engineering services.
  • eBay – Remote DBA services, MySQL DevOps & automation, on-call DBA support (24*7) and MySQL upgrades / migration.
  • Forbes.com – Remote DBA services, architecting & building highly available MySQL infrastructure operations and consulting & professional services for MySQL.
  • National Geographic – MySQL consulting & professional services, performance audit & optimization and MySQL upgrades.
  • Apigee – MySQL support and remote DBA services.
  • PayPal – MySQL consulting & professional services, MySQL performance audit & recommendations and MySQL SRE.
  • Yahoo – MySQL consulting & professional services, on-call DBA services and MySQL emergency support.
  • Priceline.com – Consulting & professional services for MySQL & Percona Server for MySQL, performance optimization & tuning and MySQL SRE.
  • Freshdesk – MySQL performance audit & recommendations, MySQL scale-out & replication solutions and MySQL consultancy.
  • OLA – MySQL consultative support.
  • Flipkart – MySQL consulting & professional services and MySQL consultative support.
  • Paytm – MySQL consultative support.
  • PetSmart – MySQL consulting & professional services and MySQL consultative support.
  • ESPN – Architected & deployed MySQL high availability solution and MySQL consulting & professional services.
  • Mashable – MySQL consulting and professional services.
  • Proteans – MySQL performance audit & optimization and MySQL consulting & professional services.
  • Symphony Software – MySQL performance audit & recommendations, custom MySQL sharding solution and MySQL upgrades / migration.
  • Yatra – MySQL consultative support.
  • Justdial – MySQL consultative support.
  • Victoria’s Secret – MySQL performance audit & recommendations and MySQL replication solutions.
  • Airpush – MySQL remote DBA services and MySQL consultancy.
  • Virsec – Remote DBA services and consulting & professional services.
  • Go-Jek – Deployed MySQL high availability solution using Percona XtraDB Cluster, MySQL emergency support, performance optimization and consultative support.
  • Midtrans – 24*7 MySQL remote DBA services, MySQL high availability & fault tolerance solution using Percona XtraDB Cluster & ProxySQL and MySQL consultative support.
  • Bukalapak – MySQL consultative support.
  • Sequoia Capital – MySQL professional services, remote DBA services & consultative support for Sequoia APAC and Southeast Asia portfolio companies.
  • Housing.com – Consultative support and professional services.
  • Electronic Arts – MySQL consulting & professional services addressing performance and scalability.
  • Adyen – MySQL consulting and professional services.
  • Pinterest – MySQL consulting & professional services addressing performance and scalability.

☛ Customer testimonials

“Shiv is an expert in MySQL performance. He can fine-tune MySQL performance at the instance, application and infrastructure level in the shortest duration. I would love to hire him again.”

David Hutton
Head of IT operations
National Geographic

“If it’s about MySQL performance and scalability, I will call Shiv first. He has helped us several times in building optimal, scalable and highly available MySQL infrastructure operations.”

Mark Gray
IT Manager
Nike Technologies

“If you are building a highly reliable MySQL ecosystem, hiring Shiv and his team will simplify your project. He is a guru at building highly available and fault-tolerant web properties.”

Kevin Thomson
Business Head – Media properties
AOL

“Shiv and his team built a custom MySQL high availability solution for us across data centers, enabling 24*7*365 availability of our business services. His solutions are vendor-neutral and cost-efficient.”

Keshav Patel
Group Manager – IT
Lastminute.com

“Thinking about outsourcing your DBA function? Shiv is the guy you should be talking to. He can build a highly reliable 24*7 remote DBA team for a fraction of the cost of hiring a senior-level resident DBA.”

Brian Lewis
Lead Systems Engineer
Priceline.com  

“The shortest-notice emergency DBA support provider I have ever worked with; they are highly responsive and professional. If we suspect something is going wrong with our database systems, the immediate action item is to contact Shiv and his team.”

Sherly Williams
Manager -Business Continuity
GAP Inc. 

“We have contracted MinervaDB for 24*7 remote DBA services. They delivered MySQL maximum-reliability and availability solutions ahead of the project schedule using 100% open-source and free tools, generating considerable savings in our IT budget.”

Simon Matthew
IT Manager – Media & Entertainment
Vodafone PLC 

“Our database infrastructure operations were highly unreliable until we engaged Shiv Iyer and his globally distributed team of consultants. Now we have 24*7 access to expert DBAs for a fraction of the cost of hiring a resident full-time DBA.”

Kerry Jones
Head – IT Operations
The BestBuy Inc

The post MinervaDB Technology Partnership Program appeared first on The WebScale Database Infrastructure Operations Experts.
