
Notes on Installing and Configuring DRBD on CentOS 6.3


I've been meaning to update some documentation on load balancing and high availability, so today I finally found the time to tackle DRBD, something I had wanted to work through for a while.

DRBD (Distributed Replicated Block Device) is essentially network RAID-1: with two servers mirroring each other, even if one loses power or crashes, the data is unaffected. Genuine hot failover can then be handled by a Heartbeat setup, with no manual intervention required.

For example: DRBD + Heartbeat + MySQL for primary/standby separation, or DRBD + Heartbeat + NFS as a backup storage solution.

-------------------- Enough talk, let's get started ---------------------------

OS: CentOS 6.3 x64 (kernel 2.6.32)

DRBD: drbd-8.4.3

node1: 192.168.7.88 (drbd1.example.com)

node2: 192.168.7.89 (drbd2.example.com)

(node1) marks steps for the primary node only

(node2) marks steps for the secondary node only

(node1,node2) marks steps for both nodes

I. Preparing the environment: (node1,node2)

1. Stop iptables and disable SELinux to avoid errors during installation.

# service iptables stop

# setenforce 0

# vi /etc/sysconfig/selinux

---------------

SELINUX=disabled

---------------
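
To confirm that SELinux is no longer enforcing, check:

# getenforce

It should report Permissive (or Disabled after a reboot with the config change above).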

2. Set up the hosts file:

# vi /etc/hosts

-----------------

192.168.7.88 drbd1.example.com drbd1

192.168.7.89 drbd2.example.com drbd2

-----------------
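
A quick sanity check that name resolution works (run from node1; use drbd1.example.com when checking from node2):

# ping -c 2 drbd2.example.com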

3. On each of the two VMs, add a 2 GB disk (sdb) for DRBD, create a single 1 GB partition sdb1 on it, and create a /data directory on the local filesystem. Do not mount anything yet.

# fdisk /dev/sdb

----------------

n → p → 1 → 1 → +1G → w   (new, primary, partition number 1, first cylinder, size +1G, write)

----------------

# mkdir /data
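
Verify that the new partition is visible before continuing:

# fdisk -l /dev/sdb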

4. Synchronize the clocks (important):

# ntpdate -u asia.pool.ntp.org
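
ntpdate only performs a one-off adjustment; to keep the clocks in sync, a cron entry such as the following (assuming ntpdate lives at /usr/sbin/ntpdate, as on a stock CentOS install) is a common approach:

---------------

*/30 * * * * /usr/sbin/ntpdate -u asia.pool.ntp.org > /dev/null 2>&1

---------------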

5. Change the hostnames:

(node1)

# vi /etc/sysconfig/network

----------------

HOSTNAME=drbd1.example.com

----------------

(node2)

# vi /etc/sysconfig/network

----------------

HOSTNAME=drbd2.example.com

----------------
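
The network file only takes effect at the next boot; to apply the new names immediately, also run:

(node1)

# hostname drbd1.example.com

(node2)

# hostname drbd2.example.com

DRBD matches each 'on <hostname>' section in drbd.conf against the output of 'uname -n', so these names must agree with the configuration below.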

II. Installing and configuring DRBD:

1. Install build dependencies: (node1,node2)

# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers

2. Build and install DRBD: (node1,node2)

# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz

# tar zxvf drbd-8.4.3.tar.gz

# cd drbd-8.4.3

# ./configure --prefix=/usr/local/drbd --with-km

# make KDIR=/usr/src/kernels/2.6.32-279.el6.x86_64/

# make install

# mkdir -p /usr/local/drbd/var/run/drbd

# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d

# chkconfig --add drbd

# chkconfig drbd on
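
Note: the KDIR passed to make must point at the running kernel's source tree. If your kernel is not exactly 2.6.32-279.el6.x86_64, a generic form (assuming the matching kernel-devel package is installed) is:

# make KDIR=/usr/src/kernels/$(uname -r)/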

Load the DRBD kernel module:

# modprobe drbd

Check that the module is loaded into the kernel:

# lsmod |grep drbd

3. Configure the resource: (node1,node2)

# vi /usr/local/drbd/etc/drbd.conf

Clear the file's existing contents and add the following configuration:

---------------

resource r0 {
    protocol C;
    startup { wfc-timeout 0; degr-wfc-timeout 120; }
    disk { on-io-error detach; }
    net {
        timeout 60;
        connect-int 10;
        ping-int 10;
        max-buffers 2048;
        max-epoch-size 2048;
    }
    syncer { rate 30M; }
    on drbd1.example.com {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.7.88:7788;
        meta-disk internal;
    }
    on drbd2.example.com {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.7.89:7788;
        meta-disk internal;
    }
}

---------------
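
Before going further, you can have drbdadm parse the configuration and print the resource back, which catches syntax errors early:

# drbdadm dump r0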

4. Create the DRBD device and activate the r0 resource: (node1,node2)

# mknod /dev/drbd0 b 147 0

# drbdadm create-md r0

After a moment, 'success' indicates that the DRBD metadata block was created:

----------------

--== Creating metadata ==--

As with nodes, we count the total number of devices mirrored by DRBD
at http://usage.drbd.org.

The counter works anonymously. It creates a random number to identify
the device and sends that random number, along with the kernel and
DRBD version, to usage.drbd.org.

http://usage.drbd.org/cgi-bin/insert_usage.pl?nu=716310175600466686&ru=15741444353112217792&rs=1085704704

* If you wish to opt out entirely, simply enter 'no'.
* To continue, just press [RETURN]

Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.

success

----------------

Enter the same command again:

# drbdadm create-md r0

This successfully activates r0:

----------------

[need to type 'yes' to confirm] yes

Writing meta data...

initializing activity log

NOT initializing bitmap

New drbd meta data block successfully created.

----------------

5. Start the DRBD service: (node1,node2)

# service drbd start

Note: the service must be started on both nodes before this step takes effect.

6. Check the status: (node1,node2)

# cat /proc/drbd

----------------


version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1060184

----------------

# service drbd status

----------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

m:res cs ro ds p mounted fstype

0:r0 Connected Secondary/Secondary Inconsistent/Inconsistent C

----------------

Here ro:Secondary/Secondary means both hosts are in the secondary role; ds is the disk state, and it reads 'Inconsistent' because DRBD cannot yet tell which node is the primary, i.e. which node's disk data should be treated as authoritative.

7. Make drbd1.example.com the primary node: (node1)

# drbdsetup /dev/drbd0 primary --force
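
The same promotion can also be done through the higher-level drbdadm wrapper, which takes the resource name instead of the device node:

# drbdadm primary --force r0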

Check the DRBD status on the primary and the secondary:

(node1)

# service drbd status

--------------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

m:res cs ro ds p mounted fstype

0:r0 Connected Primary/Secondary UpToDate/UpToDate C

---------------------

(node2)

# service drbd status

---------------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:49:06

m:res cs ro ds p mounted fstype

0:r0 Connected Secondary/Primary UpToDate/UpToDate C

---------------------

ro shows Primary/Secondary on the primary and Secondary/Primary on the secondary, and ds shows UpToDate/UpToDate on both, which means the primary/secondary setup succeeded.
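
Between the promotion and the UpToDate state shown above, DRBD performs the initial full sync of /dev/sdb1 (throttled by the syncer rate of 30M). You can watch its progress with:

# watch -n1 cat /proc/drbd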

8. Mount the DRBD device: (node1)

In the status above, the mounted and fstype fields are empty, so in this step we create a filesystem on the DRBD device and mount it:

# mkfs.ext4 /dev/drbd0

# mount /dev/drbd0 /data
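
Confirm the mount before writing to it:

# df -h /data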

Note: a Secondary node does not allow any access to the DRBD device, not even read-only; all reads and writes must go through the Primary node. Only when the Primary fails can the Secondary be promoted to Primary and take over.

9. Simulate a failure of drbd1, with drbd2 taking over and being promoted to Primary

(node1)

# cd /data

# touch 1 2 3 4 5

# cd ..

# umount /data

# drbdsetup /dev/drbd0 secondary

Note: in a real production environment, if drbd1 actually went down, the ro field in drbd2's status would show Secondary/Unknown, and you would only need to perform the promotion step.

(node2)

# drbdsetup /dev/drbd0 primary

# mount /dev/drbd0 /data

# cd /data

# touch 6 7 8 9 10

# ls

--------------

1 10 2 3 4 5 6 7 8 9 lost+found

--------------

# service drbd status

--------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:49:06

m:res cs ro ds p mounted fstype

0:r0 Connected Primary/Secondary UpToDate/UpToDate C /data ext4

--------------

(node1)

# service drbd status

---------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

m:res cs ro ds p mounted fstype

0:r0 Connected Secondary/Primary UpToDate/UpToDate C

---------------

And with that, DRBD is up and running.

 

To make the DRBD primary/secondary pair fail over automatically and achieve real high availability, however, you need Heartbeat.

When the DRBD primary goes down, Heartbeat automatically promotes the secondary to primary and mounts the /data partition.
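
As a rough sketch (not a tested configuration): with Heartbeat v1-style resource management this is typically a single line in /etc/ha.d/haresources, using the drbddisk script shipped with DRBD and Heartbeat's Filesystem resource script:

---------------

drbd1.example.com drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4

---------------

Heartbeat then promotes r0 and mounts /data on whichever node currently holds the cluster resources; a full DRBD+Heartbeat+NFS walkthrough is linked at the end of this post.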

Note (excerpted from the 2nd edition of 酒哥's "Building High Availability Linux Servers"):

Suppose you take down the Primary's eth0 and then promote the Secondary to Primary and mount the device: you will find that the files written on the old Primary during testing really were synchronized over. But if you then bring the old Primary's eth0 back up and check whether the primary/secondary relationship recovers on its own, you will find that DRBD has detected a Split-Brain condition: both nodes end up in StandAlone state, and the failure is reported as "Split-Brain detected, dropping connection!" This is the notorious "split brain".

Here is the manual recovery procedure recommended by the DRBD project:

(node2)

# drbdadm secondary r0

# drbdadm disconnect all

# drbdadm --discard-my-data connect r0

(node1)

# drbdadm disconnect all

# drbdadm connect r0

# drbdsetup /dev/drbd0 primary

What I actually simulated was rebooting the Primary and immediately promoting the Secondary; once the Primary had come back up, I demoted it and checked the status on both sides. Both showed StandAlone, so strangely enough this also produced a "split brain". I am not sure why; if anyone with experience can shed light on this, please do.

DRBD+Heartbeat+NFS follow-up: http://showerlee.blog.51cto.com/2047005/1212185

This article comes from the "一路向北" blog; please retain this attribution: http://showerlee.blog.51cto.com/2047005/1211963
