Part 6:
Database Server Configuration with DRBD & Heartbeat.
6.1 : Set up LVM partitions on both servers.
Note : we have an extra 8 GB disk (/dev/sdb) on dbase1 and the same on dbase2.
LVM setup needs three layers:
1: Creation of Physical Volume.
2: Creation of Volume Group.
3: Creation of Logical Volume.
Have a look at the LVM diagram for a better understanding.

figure 1.1
First, make two partitions on both servers using fdisk; the partition type should be Linux LVM. ( /dev/sdb1, /dev/sdb2 )
[root@dbase1 ~]# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1044, default 1044): +5G
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9888e551

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         654     5253223+  8e  Linux LVM

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (655-1044, default 655):
Using default value 655
Last cylinder, +cylinders or +size{K,M,G} (655-1044, default 1044):
Using default value 1044
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8b12277b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         654     5253223+  8e  Linux LVM
/dev/sdb2             655        1044     3132675   8e  Linux LVM

Finally, write the partition table and exit with the 'w' command.
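The interactive session above can also be scripted. The function below is a sketch (not from the original article) that only prints the answers fdisk expects: two primary partitions, 5 GB plus the remainder, both type 8e. It assumes an empty /dev/sdb, so review the output before piping it into fdisk for real.

```shell
# Sketch only: emits the keystrokes the interactive fdisk session above uses.
# Assumes an empty /dev/sdb with the same layout; review before running.
partition_script() {
  printf '%s\n' n p 1 '' +5G t 8e n p 2 '' '' t 2 8e w
}
partition_script                      # inspect the answers first
# partition_script | fdisk /dev/sdb   # then feed them to fdisk for real
```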
6.2 : Tell the kernel that we updated the partition table, using the ‘partprobe’ command.
[root@dbase1 ~]# partprobe /dev/sdb1
[root@dbase1 ~]# partprobe /dev/sdb2
6.3 : Let's move on to setting up LVM. We will create the first layer of LVM (physical volumes).
[root@dbase1 ~]# pvcreate /dev/sdb1 /dev/sdb2
  Writing physical volume data to disk "/dev/sdb1"
  Physical volume "/dev/sdb1" successfully created
  Writing physical volume data to disk "/dev/sdb2"
  Physical volume "/dev/sdb2" successfully created
Note : using ‘pvdisplay’ you can view the physical volumes you just created.
6.4 : Create a volume group named ‘vg_db’ (the second layer of LVM).
[root@dbase1 ~]# vgcreate vg_db /dev/sdb1 /dev/sdb2
  Volume group "vg_db" successfully created
Now we have an approximately 8 GB volume group named ‘vg_db’.
Note : verify your volume group using ‘vgdisplay’ command.
6.5 : Create two logical volumes:
1. lv_db (5 GB)
2. lv_meta (256 MB)
Create the ‘lv_db’ logical volume (5 GB).
[root@dbase1 ~]# lvcreate -n lv_db -L +5G vg_db
  Logical volume "lv_db" created
[root@dbase1 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg_db/lv_db
  VG Name                vg_db
  LV UUID                xC49OF-ZcVd-M3Ra-DPjT-utwz-isRI-ytCQ2g
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
Create the ‘lv_meta’ logical volume (256 MB).
[root@dbase1 ~]# lvcreate -n lv_meta -L 256M vg_db
  Logical volume "lv_meta" created
[root@dbase1 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg_db/lv_meta
  VG Name                vg_db
  LV UUID                vBLjXr-KbzT-KsHc-ZAKZ-VFDM-4bVk-BADLxt
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                256.00 MiB
  Current LE             64
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
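For reference, the whole LVM build from sections 6.3 through 6.5 can be collected into one small function. This is a sketch, not part of the original article: by default it only echoes the commands so you can review them; pass an empty first argument to execute them for real (as root, on each server).

```shell
# Sketch: the pvcreate/vgcreate/lvcreate steps above in one place.
# By default the commands are echoed, not executed.
lvm_setup() {
  run="${1-echo}"       # call as: lvm_setup ""   to really run the commands
  $run pvcreate /dev/sdb1 /dev/sdb2
  $run vgcreate vg_db /dev/sdb1 /dev/sdb2
  $run lvcreate -n lv_db -L 5G vg_db
  $run lvcreate -n lv_meta -L 256M vg_db
}
lvm_setup echo          # prints the four commands for review
```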
Note : We have now completed the LVM setup on ‘dbase1’. You must perform the same steps on ‘dbase2’.
6.6 : Install DRBD on both servers with yum.
See this article: ‘How to install DRBD on CentOS 6.2’.
6.7 : Configure DRBD on the database servers (dbase1, dbase2). Load the drbd module with ‘modprobe drbd’.
[root@dbase1 ~]# modprobe drbd
Load on dbase2.
[root@dbase2 ~]# modprobe drbd
6.8 : Add this module to the startup script.
[root@dbase1 ~]# echo "modprobe drbd" >> /etc/rc.local
dbase2
[root@dbase2 ~]# echo "modprobe drbd" >> /etc/rc.local
6.9 : Edit ‘/etc/drbd.conf’, delete all existing lines, and paste the contents below.
[root@dbase1 ~]# vi /etc/drbd.conf
global { usage-count yes; }
common { syncer { rate 10M; } }
resource r0 {
        protocol C;
        handlers {
                pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
                pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
                local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
                outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
        }
        startup {
        }
        disk {
                on-io-error detach;
        }
        net {
                after-sb-0pri disconnect;
                after-sb-1pri disconnect;
                after-sb-2pri disconnect;
                rr-conflict disconnect;
        }
        syncer {
                rate 10M;
                al-extents 257;
        }
        on dbase1.broexperts.com {
                device /dev/drbd0;
                disk /dev/vg_db/lv_db;
                address 192.168.2.20:7788;
                meta-disk /dev/vg_db/lv_meta[1];
        }
        on dbase2.broexperts.com {
                device /dev/drbd0;
                disk /dev/vg_db/lv_db;
                address 192.168.2.21:7788;
                meta-disk /dev/vg_db/lv_meta[1];
        }
}
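A note on why lv_meta is 256 MB: with an indexed external meta-disk (the "[1]" suffix in the config above), DRBD 8.x reserves a fixed 128 MB per index slot (per the DRBD 8.x documentation; treat the slot size as an assumption for other versions). The arithmetic below shows that our 256 MB LV holds two slots, so index 1 is valid.

```shell
# Assumption from the DRBD 8.x docs: each indexed meta-disk slot is 128 MB.
lv_meta_mb=256
slot_mb=128
echo "$(( lv_meta_mb / slot_mb )) meta-data slots (indices 0 and 1) fit in lv_meta"
```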
6.10 : Copy this file to dbase2 using the ‘scp’ command.
[root@dbase1 ~]# scp /etc/drbd.conf dbase2:/etc/
drbd.conf                                    100%  929     0.9KB/s   00:00
[root@dbase1 ~]#
6.11 : Add these lines to ‘/etc/sysctl.conf’.
[root@dbase1 ~]# vi /etc/sysctl.conf
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
6.12 : Apply the settings with ‘sysctl -p’ on dbase1.
[root@dbase1 ~]# sysctl -p
6.13 : Copy this file to dbase2 using scp.
[root@dbase1 ~]# scp /etc/sysctl.conf dbase2:/etc/
sysctl.conf                                  100% 1260     1.2KB/s   00:00
6.14 : Run ‘sysctl -p’ on dbase2.
[root@dbase2 /]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 268435456
6.15 : Commands to create, attach, and connect resource r0 on both database servers.
on dbase1 & dbase2.
[root@dbase1 ~]# drbdadm create-md r0
  --==  Thank you for participating in the global usage survey  ==--
The server's response is:
you are the 1866th user to install this version
WARN:
  You are using the 'drbd-peer-outdater' as fence-peer program.
  If you use that mechanism the dopd heartbeat plugin program needs
  to be able to call drbdsetup and drbdmeta with root privileges.

  You need to fix this with these commands:
  chgrp haclient /sbin/drbdsetup
  chmod o-x /sbin/drbdsetup
  chmod u+s /sbin/drbdsetup

  chgrp haclient /sbin/drbdmeta
  chmod o-x /sbin/drbdmeta
  chmod u+s /sbin/drbdmeta

Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
Notice that the last lines of the output print a warning, together with the commands that fix it. The next step applies that fix.
6.16 : Fix the drbdsetup and drbdmeta permissions on both servers.
[root@dbase1 ~]# groupadd haclient
[root@dbase1 ~]# chgrp haclient /sbin/drbdsetup
[root@dbase1 ~]# chmod o-x /sbin/drbdsetup
[root@dbase1 ~]# chmod u+s /sbin/drbdsetup
[root@dbase1 ~]# chgrp haclient /sbin/drbdmeta
[root@dbase1 ~]# chmod o-x /sbin/drbdmeta
[root@dbase1 ~]# chmod u+s /sbin/drbdmeta
Note : Run the same commands on ‘dbase2’.
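The seven commands above can also be wrapped in a small function, which makes repeating them on dbase2 less error-prone. A sketch, not from the original article: by default it echoes the commands for review; pass an empty first argument to apply them for real as root.

```shell
# Sketch of the permission fix above; echoes the commands by default.
fix_perms() {
  run="${1-echo}"       # call as: fix_perms ""   to really apply the changes
  getent group haclient >/dev/null || $run groupadd haclient
  for bin in /sbin/drbdsetup /sbin/drbdmeta; do
    $run chgrp haclient "$bin"
    $run chmod o-x "$bin"
    $run chmod u+s "$bin"
  done
}
fix_perms echo
```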
Run this command on both servers
drbdadm attach r0
Set the synchronization parameters for r0 on both servers using this command.
drbdadm syncer r0
Now Connect r0 on both database servers.
drbdadm connect r0
6.17 : Now it is time to decide which node will be the primary. In my case dbase1 is the primary node, so run this command on the primary node only.
drbdadm -- --overwrite-data-of-peer primary r0
6.18 : You can watch the synchronization progress by issuing this command.
watch cat /proc/drbd
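If you prefer a single number over the full watch output, the percentage can be pulled out of /proc/drbd with sed. The helper below is a sketch: the here-document mimics what a DRBD 8.3 node prints mid-sync, and the exact format may differ slightly between DRBD versions.

```shell
# Extract the "sync'ed: NN.N%" figure from /proc/drbd-style output (sketch;
# written against the DRBD 8.3 format, which may vary between versions).
sync_percent() {
  sed -n "s/.*sync.ed:[ ]*\([0-9.]*\)%.*/\1/p" "$1"
}
# Sample mid-sync output to demonstrate the parser:
cat > /tmp/drbd.sample <<'EOF'
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
    ns:102400 nr:0 dw:0 dr:102400 al:0 bm:6 lo:0 pe:0 ua:0 ap:0 oos:5140000
        [=>..................] sync'ed:  2.0% (5019/5120)M
EOF
sync_percent /tmp/drbd.sample     # prints 2.0
# sync_percent /proc/drbd         # on a live node
```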
6.19 : Again, this command is for the primary node only.
[root@dbase1 ~]# drbdadm -- primary all
6.20 : Make an ext4 file system on the DRBD device, on dbase1 only.
[root@dbase1 ~]# mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
6.21 : Make a ‘db_data’ directory and mount /dev/drbd0 on it.
[root@dbase1 ~]# mkdir /db_data
[root@dbase1 ~]# mount /dev/drbd0 /db_data/
[root@dbase1 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_dbase1-lv_root  5.5G  792M  4.5G  15% /
tmpfs                           58M     0   58M   0% /dev/shm
/dev/sda1                      485M   64M  396M  14% /boot
/dev/drbd0                     5.0G  138M  4.6G   3% /db_data
6.22 : Create the same directory on dbase2.
[root@dbase2 /]# mkdir /db_data
6.23 : Install MySQL on both servers.
See this article: ‘How to install MySQL server’.
6.24 : MySQL Configuration
Change MySQL's data directory location by editing ‘/etc/my.cnf’.
[root@dbase1 ~]# vi /etc/my.cnf
[mysqld]
# Set Data Directory on new location
datadir=/db_data/mysql
socket=/db_data/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
6.25 : Copy my.cnf to dbase2 using scp.
[root@dbase1 ~]# scp /etc/my.cnf dbase2:/etc/
my.cnf                                       100%  251     0.3KB/s   00:00
6.25 : Create a ‘mysql’ directory in ‘/db_data’.
[root@dbase1 ~]# mkdir /db_data/mysql
6.26 : Change the ownership from ‘root’ to ‘mysql’.
[root@dbase1 ~]# chown mysql:mysql /db_data/mysql/
6.27 : Allow the web servers to use MySQL.
Now it is time to add the web-server hosts to the MySQL server:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.2.10' IDENTIFIED BY 'redhat';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.2.11' IDENTIFIED BY 'redhat';
mysql> FLUSH PRIVILEGES;
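Since the two GRANT statements differ only in the host, a small helper can generate one per web server. This is a sketch using the same ‘root’/‘redhat’ credentials as above; pipe its output into the mysql client on the active node.

```shell
# Sketch: emit one GRANT per web-server IP, plus the final FLUSH, using the
# same credentials as in the statements above ('root' / 'redhat').
grants() {
  for ip in "$@"; do
    printf "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%s' IDENTIFIED BY 'redhat';\n" "$ip"
  done
  echo "FLUSH PRIVILEGES;"
}
grants 192.168.2.10 192.168.2.11
# grants 192.168.2.10 192.168.2.11 | mysql -u root -p   # apply on the active node
```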
6.28 : Install Heartbeat on both servers.
See this article: ‘How to install Heartbeat on CentOS 6.2’.
6.29 : HeartBeat Configurations.
Create the ‘ha.cf’ file and paste the contents below.
[root@dbase1 ~]# vi /etc/ha.d/ha.cf
logfacility local0
keepalive 2
deadtime 10
# we use two heartbeat links, eth2 and serial 0
bcast eth0
#serial /dev/ttyS0
#baud 19200
auto_failback off
node dbase1.broexperts.com
node dbase2.broexperts.com
Copy this file on dbase2.
[root@dbase1 ~]# scp /etc/ha.d/ha.cf dbase2:/etc/ha.d/
ha.cf                                        100%  207     0.2KB/s   00:00
Create ‘haresources’ file and paste below contents.
[root@dbase1 ~]# vi /etc/ha.d/haresources
dbase1.broexperts.com IPaddr::192.168.2.200/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/db_data::ext4 mysqld
Create the ‘haresources’ file on dbase2 with the same contents. (Heartbeat expects haresources to be identical on both nodes; the hostname in it names the preferred resource owner, not the local machine.)
[root@dbase2 ~]# vi /etc/ha.d/haresources
dbase1.broexperts.com IPaddr::192.168.2.200/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/db_data::ext4 mysqld
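To read the haresources line: the first field is the preferred owner node, and the remaining resources are started left-to-right on takeover and stopped right-to-left on release. Splitting the line makes the chain easier to see (a sketch using the line from above):

```shell
# Split the haresources line into its fields: owner node, VIP, DRBD disk,
# filesystem mount, and finally the mysqld service.
line='dbase1.broexperts.com IPaddr::192.168.2.200/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/db_data::ext4 mysqld'
for tok in $line; do
  echo "$tok"
done
```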
Create the ‘/etc/ha.d/authkeys’ file on both servers and paste the contents below.
[root@dbase1 ~]# vi /etc/ha.d/authkeys
auth 3
3 md5 redhat
Change file’s permissions.
[root@dbase1 ]# chmod 600 /etc/ha.d/authkeys
Copy this file on dbase2.
[root@dbase1 ~]# scp /etc/ha.d/authkeys dbase2:/etc/ha.d/
authkeys                                     100%   20     0.0KB/s   00:00
6.30 : Time to start the heartbeat service on both servers.
[root@dbase1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: IPaddr[3132]: INFO:  Resource is stopped
Done.
[root@dbase2 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: IPaddr[3132]: INFO:  Resource is stopped
Done.
6.31 : Enable the heartbeat service at boot on both servers with chkconfig.
chkconfig heartbeat on
Note : Our databases are available at db.broexperts.com. If the primary server fails, the secondary server, dbase2, takes over and keeps the service alive.
If you run into any problems, you can comment here.