The Full Monty - Part 2

August 15, 2011

Installing DRBD on CentOS 5.6.

In Part 1 I went through the process of preparing a number of CentOS 5.6 servers. Now it’s time to make the services they’ll perform more stable.

High Availability (HA)

I’ll be presenting two ways to provide redundant data and highly available services. First, Pacemaker with DRBD: DRBD duplicates your data at the disk partition level while Pacemaker watches for failures. Should the hardware fail, Pacemaker takes all the steps needed to start MySQL on the Hot Stand By (HSB). This is not perfect. Should someone run ‘rm *’ or drop a database, DRBD will duplicate the loss on the HSB.

In another part, I’ll use Tungsten Replicator. It offers a set of features that surpass the built-in MySQL replication. The community version of Tungsten has global transaction IDs, which make turning a slave into the master easy even when there are many slaves. Tungsten Replicator is not an HA service on its own; you have to fail over to a new master manually. Tungsten Enterprise (if you have the money) solves these issues: with the tools the enterprise version supplies you can easily migrate the master role to any slave.

HA with Pacemaker:

Neither Red Hat nor CentOS supplies Pacemaker packages. Red Hat supports its own proprietary clustering suite, and CentOS is stuck trying to maintain compatibility with Red Hat while still giving you a high-availability system. CentOS does supply heartbeat and openais, but not Pacemaker. Thankfully, Red Hat helps out by supporting the Fedora project, and in turn Fedora provides an EPEL repository for Red Hat 5.

The Pacemaker packages in Fedora’s EPEL directories are built against some additional packages that don’t exist on vanilla RHEL/CentOS installs. For more information on EPEL, see http://fedoraproject.org/wiki/EPEL/FAQ. So before installing Pacemaker, you first need to tell the machine how to find the EPEL packages Pacemaker depends on. To do this, download and install the EPEL release package that matches your RHEL/CentOS version.

LINBIT is the primary maintainer of DRBD and offers a product and service portfolio for exactly what we are building here. They have produced a video that takes you through this same process using their DRBD management console. I’m going to take you through the same process by hand. I hope this way you will better understand the management touch points.

DRBD Installation:

I built two computers, alike in every way (clones), and will use DRBD to sync the data partitions on each. The commands below add the repositories that provide the packages needed.

 # wget http://www.clusterlabs.org/rpm/epel-5/clusterlabs.repo  
 # mv clusterlabs.repo /etc/yum.repos.d  
 # rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm  
 # yum -y install pacemaker.x86_64 heartbeat.x86_64 drbd83.x86_64 kmod-drbd83.x86_64
 # /sbin/chkconfig --add heartbeat  
 # depmod  
 # modprobe drbd
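
If you want to confirm the DRBD kernel module actually loaded before going further, a quick optional check on either node:

 # lsmod | grep drbd
 # cat /proc/drbd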

DRBD Configuration:

On both machines (db1 and db2), edit the DRBD configuration files. The host names must be the same as those returned by the ‘hostname’ command. You may also need to edit the host name in /etc/sysconfig/network. The host names, IP addresses and shared secret are the parts you will need to change for your own environment.
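
For example, the relevant line in /etc/sysconfig/network on db1 would look something like this (assuming the db1.grennan.com name used in the configuration below):

 # grep HOSTNAME /etc/sysconfig/network
 HOSTNAME=db1.grennan.com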

To replace the shared-secret, select at least half of the characters from one of the “63 random alpha-numeric characters” lines on Gibson Research’s Ultra High Security Passwords page.
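
If you’d rather generate a secret locally, something like this will also do (assuming openssl is installed, which it is on a default CentOS install); it prints 40 random hexadecimal characters to paste into shared-secret below:

 # openssl rand -hex 20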

 # vi /etc/drbd.conf
 include "drbd.d/global.conf";
 include "drbd.d/*.res";

 # vi /etc/drbd.d/global.conf
 global { usage-count yes; }
 common {
     startup { degr-wfc-timeout 0; }
     net { cram-hmac-alg sha1; shared-secret R4x2alEkxtIg2kzbXqUL6l4uoTI7Ab7Qt; }
     disk { on-io-error detach; }
 }

 # hostname
 db1.grennan.com

 # vi /etc/drbd.d/db.res
 resource db {
     protocol C;
     syncer { rate 10M; }
     on db1.grennan.com { device /dev/drbd0; disk /dev/md3; address 192.168.4.1:7788; flexible-meta-disk internal; }
     on db2.grennan.com { device /dev/drbd0; disk /dev/md3; address 192.168.4.2:7788; flexible-meta-disk internal; }
 }

 # scp -r /etc/drbd.d db2:/etc
 # scp -r /etc/drbd.conf db2:/etc
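
Before moving on, it’s worth letting drbdadm parse the configuration back to you on both nodes; any typo will show up here instead of at start-up:

 # drbdadm dump db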

Manage DRBD processes:

If the disk was formatted during the OS install, you may need to erase the ext3 file system information on both DB1 and DB2.

 # umount /data
 # dd if=/dev/zero of=/dev/md3 count=2048

Write the DRBD meta data on both DB1 and DB2.

# drbdadm create-md db

On >>> DB1 only

  # drbdadm adjust db 
  # drbdsetup /dev/drbd0 primary -o
  # service drbd start

On DB2

  # service drbd start

Did you miss the “>>> DB1 only” note above?

WAIT until the sync process completes.
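
You can watch the sync from either node; one simple way is to poll /proc/drbd:

 # watch -n2 cat /proc/drbd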

Back on DB1, create the file system and mount it.

  # mkdir /data
  # mkfs -j /dev/drbd0
  # tune2fs -c -1 -i 0 /dev/drbd0
  # mount -o rw /dev/drbd0 /data

Don’t forget to make the /data directory on DB2 as well. Then, back on DB1, create the MySQL data directory and check the sync status:

  # mkdir /data/mysql
  # cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
 GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:09
1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
 ns:2433920 nr:0 dw:0 dr:2433920 al:0 bm:148 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:28530088
 [==>.................] sync'ed: 21.9% (27860/30236)M delay_probe: 757
 finish: 4:57:11 speed: 1,248 (6,404) K/sec
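
Once both disks report UpToDate/UpToDate you can, if you like, rehearse a failover by hand before Pacemaker automates it. A rough sketch, assuming nothing is using /data on DB1:

On DB1

  # umount /data
  # drbdadm secondary db

On DB2

  # drbdadm primary db
  # mount -o rw /dev/drbd0 /data

Reverse the steps to move the primary role back to DB1.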

In the next post I’ll go through installing MySQL in preparation for configuring Pacemaker, and then show you how to test failover.
