
Redhat 5.8 x64 RHCS Oracle 10gR2 HA Configuration in Practice

Date: 2017/3/3 16:10:01

Configuration overview:

1. Shared iSCSI storage is provided by Openfiler.

2. Fencing is implemented with the VMware ESXi 5 virtual fence.

3. The RHCS fence device functionality is provided by fence_vmware_soap on Redhat 5.8.

4. This is an original write-up of building an RHCS lab environment to test RHCS Oracle HA functionality.

Original link: http://koumm.blog.51cto.com/703525/1161791

I. Preparing the Base Environment

1. Network environment

On node1 and node2:

# cat /etc/hosts

192.168.14.100 node1

192.168.14.110 node2

2. Configure the YUM repository

(1) Mount the installation ISO

# mount /dev/cdrom /mnt

(2) Configure the YUM client

Note: the local DVD is used as the yum repository.

# vi /etc/yum.repos.d/rhel-debuginfo-bak.repo

[rhel-debuginfo]

name=Red Hat Enterprise Linux $releasever - $basearch - Debug

baseurl=ftp://ftp.redhat.com/pub/redhat/linux/enterprise/$releasever/en/os/$basearch/Debuginfo/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[Server]

name=Server

baseurl=file:///mnt/Server

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[VT]

name=VT

baseurl=file:///mnt/VT

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[Cluster]

name=Cluster

baseurl=file:///mnt/Cluster

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ClusterStorage]

name=ClusterStorage

baseurl=file:///mnt/ClusterStorage

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

(3) Openfiler iSCSI storage configuration

The detailed configuration is omitted. Two LUNs are carved out: a 10 GB LUN for GFS and a 1 GB LUN for the quorum disk.

(4) Attach the storage

rpm -ivh iscsi-initiator-utils-6.2.0.872-13.el5.x86_64.rpm

chkconfig iscsi --level 35 on

chkconfig iscsid --level 35 on

service iscsi start

# iscsiadm -m discovery -t st -p 192.168.14.162

# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.b2bd5bb312a7 -p 192.168.14.162 -l
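After logging in, the two Openfiler LUNs appear as new local block devices; in this environment they show up as /dev/sdb (10 GB, for GFS) and /dev/sdc (1 GB, for the quorum disk), though the exact device names may differ. A quick check:

# fdisk -l

# cat /proc/partitions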

II. Installing the RHCS Packages

192.168.14.100 node1 (management node)

192.168.14.110 node2

1. Install luci and the RHCS packages on node1

Install ricci, rgmanager, gfs, and cman.

(1) Install the RHCS packages on node1 (the management node). luci is the management-side package and is installed only on the management node.

yum install luci ricci cman cman-devel gfs2-utils rgmanager system-config-cluster -y

(2) Configure the RHCS services to start at boot

chkconfig luci on

chkconfig ricci on

chkconfig rgmanager on

chkconfig cman on

service ricci start

service cman start

2. Install the RHCS packages on node2

(1) Install the RHCS packages on node2

yum install ricci cman cman-devel gfs2-utils rgmanager system-config-cluster -y

(2) Configure the RHCS services to start at boot

chkconfig ricci on

chkconfig rgmanager on

chkconfig cman on

service ricci start

service cman start

Note: starting cman fails at this point because the node has not yet joined a cluster, so the configuration file /etc/cluster/cluster.conf does not exist yet.

3. Alternatively, install the clustering components as a group (or select Clustering during OS installation):

yum groupinstall Clustering

III. RHCS Management-Side Configuration

1. Install and start the luci service on node1 (the management node)

Note: perform these steps on the management node.

(1) Initialize luci

# luci_admin init

Initializing the luci server

Creating the 'admin' user

Enter password:

Confirm password:

Please wait...

The admin password has been successfully set.

Generating SSL certificates...

The luci server has been successfully initialized

(2) Start the luci service

# service luci start

(3) Management URL

https://192.168.14.100:8084

admin/111111

IV. RHCS Cluster Configuration

1. Create the cluster

Log in to the management interface, click cluster -> Create a New Cluster, and fill in the following:

Cluster Name: rhcs

node1 192.168.14.100

node2 192.168.14.110

Select the following options, then submit. The cluster goes through the install, reboot, config, and join phases before it is created successfully.

Use locally installed packages.

Enable shared storage support

check if node passwords are identical

Notes:

(1) This step generates the cluster configuration file /etc/cluster/cluster.conf.

(2) The configuration file can also be created by hand, as sketched below.
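For reference, a minimal hand-written /etc/cluster/cluster.conf for this two-node cluster might look roughly like the sketch below (illustrative only; the fence devices and quorum disk added in later steps are omitted, config_version must be incremented on every change, and two_node="1" is normally dropped once a quorum disk is configured):

<?xml version="1.0"?>
<cluster name="rhcs" config_version="1">
    <clusternodes>
        <clusternode name="node1" nodeid="1" votes="1">
            <fence/>
        </clusternode>
        <clusternode name="node2" nodeid="2" votes="1">
            <fence/>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm/>
</cluster>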

2. Start the cluster service on both nodes

SSH to node1 and node2 and start the cman service on each.

# service cman start

Starting cluster:

Loading modules... done

Mounting configfs... done

Starting ccsd... done

Starting cman... done

Starting daemons... done

Starting fencing... done

3. Add the fence devices

Notes:

For RHCS to deliver complete cluster functionality, fencing must work. Since physical servers (and their hardware management interfaces) were not available, the virtual fence provided by VMware ESXi 5.x is used to implement the fence device.

It is only because a usable fence device is available that the RHCS HA functionality can be fully tested.

(1) Log in to the management interface and click cluster -> Cluster List.

(2) Select node1 and node2 in turn and perform the following:

(3) Choose "Add a fence device to this level" and select vmware fence soap.

(4) Fill in the fence device parameters:

name : the fence device name; the virtual machine's hostname can be used, or any other name

hostname : address of the vCenter or ESXi host

Login : username for the vCenter or ESXi host

Password : password for the vCenter or ESXi host

Use SSL : checked

Power Wait : optional

Virtual machine name: RHCS_node1

Virtual machine UUID: /vmfs/volumes/datastore3/RHCS_node1/RHCS_node1.vmx

# The virtual machine name and UUID must match the actual values; enable SSH on the ESXi host and log in to look them up.

# Example of manually testing the fence functionality:

# /sbin/fence_vmware_soap -a 192.168.14.70 -z -l root -p 1111111 -n RHCS_node1 -o status

Status: ON

# /sbin/fence_vmware_soap -a 192.168.14.70 -z -l root -p 1111111 -n RHCS_node1 -o list

RHCS_node2,564d5908-e8f6-99f6-18a8-a523c04111b2

RHCS_node1,564d3c96-690c-1f4b-cfbb-a880ca4bca6a

Options:

-o : the action to perform, e.g. list, status, or reboot (see the example below)
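For example, a fenced reboot can be triggered by hand to confirm the agent really powers a node off and on (use with care; RHCS_node2 is the VM name taken from the list output above):

# /sbin/fence_vmware_soap -a 192.168.14.70 -z -l root -p 1111111 -n RHCS_node2 -o reboot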

4. Add a Failover Domain

Failover Domains -> Add

Name: rhcs_failover

Check Prioritized.

Set No Failback according to your own requirements.

Check both nodes and set their priorities.

Click Submit.

5. Configure the GFS service

(1) GFS service configuration

Enable the CLVM integrated cluster locking service on both node1 and node2:

# lvmconf --enable-cluster

# chkconfig clvmd on

# service clvmd start

Activating VG(s): No volume groups found [ OK ]

(2) On either node, partition the disk to create sdb1, then format it as GFS2.

On node1:

# pvcreate /dev/sdb1

# vgcreate vg /dev/sdb1

# lvcreate -l +100%FREE -n var01 vg

Error locking on node node2: Volume group for uuid not found: QkM2JYKg5EfFuFL6LzJsg7oAfK4zVrkytMVzdziWDmVhBGggTsbr47W1HDEu8FdB

Failed to activate new LV.

When the message above appears, the physical volume must also be created on node2.

On node2:

# pvcreate /dev/sdb1

Can't initialize physical volume "/dev/sdb1" of volume group "vg1" without -ff

Ignore the warning message.

Back on node1:

# lvcreate -l +100%FREE -n var01 vg

Logical volume "var01" created    (the LV can now be created.)

# /etc/init.d/clvmd start

Activating VG(s): 1 logical volume(s) in volume group "vg1" now active

[ OK ]

(3) Format the GFS file system

On node1:

# mkfs.gfs2 -p lock_dlm -t rhcs:gfs2 -j 3 /dev/vg/var01

This will destroy any data on /dev/vg/var01.

Are you sure you want to proceed? [y/n] y

Device: /dev/vg/var01

Blocksize: 4096

Device Size 10.00 GB (2620416 blocks)

Filesystem Size: 10.00 GB (2620416 blocks)

Journals: 3

Resource Groups: 40

Locking Protocol: "lock_dlm"

Lock Table: "rhcs:gfs2"

UUID: A692D99D-22C4-10E9-3C0C-006CBF7574CD

Notes:

In rhcs:gfs2, rhcs is the cluster name and gfs2 is a user-defined name, effectively a label.

-j specifies the number of journals, i.e. the number of hosts that will mount this file system; if not specified it defaults to 1, i.e. only the management node.

This lab has two nodes; adding one for the management host gives 3.
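If an additional node needs to mount the file system later and no spare journal is left, journals can be added afterwards with gfs2_jadd while the file system is mounted (a sketch, assuming the /oradata mount point used in the next step):

# gfs2_jadd -j 1 /oradata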

6. Mount the GFS file system

Create the GFS mount point on node1 and node2:

# mkdir /oradata

(1) Manually mount the file system on node1 and node2 for testing; once mounted, create a file to verify that the cluster file system behaves correctly (see the check below).

# mount.gfs2 /dev/vg/var01 /oradata
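A simple cross-node check (the file name is arbitrary): create a file on one node and confirm it is visible on the other.

[root@node1 ~]# touch /oradata/gfs_test.txt

[root@node2 ~]# ls -l /oradata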

(2) Configure automatic mounting at boot

# vi /etc/fstab

/dev/vg/var01 /oradata gfs2 defaults 0 0

(3) The mount can also be configured through the management interface (omitted).

Check the mount:

[root@node1 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

14G 2.8G 10G 22% /

/dev/sda1 99M 27M 68M 29% /boot

tmpfs 506M 0 506M 0% /dev/shm

/dev/hdc 3.1G 3.1G 0 100% /mnt

/dev/mapper/vg-var01 10G 388M 9.7G 4% /oradata

7. Configure the quorum disk

Notes:

# The quorum disk is a shared disk; about 10 MB is enough, it does not need to be large. In this example /dev/sdc1 is used.

# A two-node cluster does not appear to strictly require a quorum disk, but clusters with more than two nodes must configure one.

[root@node1 ~]# fdisk -l

Disk /dev/sdc: 1073 MB, 1073741824 bytes

34 heads, 61 sectors/track, 1011 cylinders

Units = cylinders of 2074 * 512 = 1061888 bytes

Device Boot Start End Blocks Id System

/dev/sdc1 1 1011 1048376+ 83 Linux

(1) Create the quorum disk

[root@node1 ~]# mkqdisk -c /dev/sdc1 -l myqdisk

mkqdisk v0.6.0

Writing new quorum disk label 'myqdisk' to /dev/sdc1.

WARNING: About to destroy all data on /dev/sdc1; proceed [N/y] ? y

Initializing status block for node 1...

Initializing status block for node 2...

Initializing status block for node 3...

Initializing status block for node 4...

Initializing status block for node 5...

Initializing status block for node 6...

Initializing status block for node 7...

Initializing status block for node 8...

Initializing status block for node 9...

Initializing status block for node 10...

Initializing status block for node 11...

Initializing status block for node 12...

Initializing status block for node 13...

Initializing status block for node 14...

Initializing status block for node 15...

Initializing status block for node 16...

(2) View the quorum disk information

[root@node1 ~]# mkqdisk -L

mkqdisk v0.6.0

/dev/disk/by-id/scsi-14f504e46494c450068656b3274732d664452562d48534f63-part1:

/dev/disk/by-path/ip-192.168.14.162:3260-iscsi-iqn.2006-01.com.openfiler:tsn.b2bd5bb312a7-lun-1-part1:

/dev/sdc1:

Magic: eb7a62c2

Label: myqdisk

Created: Sun Mar 24 00:18:12 2013

Host: node1

Kernel Sector Size: 512

Recorded Sector Size: 512

(3) Configure the quorum disk (qdisk)

# In the management interface, go to cluster -> Cluster List and click Cluster Name: rhcs.

# Choose "Quorum Partition", select "Use a Quorum Partition", and fill in the following (a cluster.conf sketch follows this list):

interval : 2

votes : 2

TKO : 10

Minimum Score: 1

Device : /dev/sdc1

Path to program : ping -c3 -t2 192.168.14.2

Interval : 3

Score : 2

# Click Apply.
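These settings correspond roughly to the following <quorumd> stanza in /etc/cluster/cluster.conf (a sketch of what the GUI generates; exact attribute names can vary slightly between releases):

<quorumd interval="2" tko="10" votes="2" min_score="1" device="/dev/sdc1">
    <heuristic program="ping -c3 -t2 192.168.14.2" interval="3" score="2"/>
</quorumd>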

(4) Start the qdisk service

chkconfig qdiskd on

service qdiskd start

clustat -l

# clustat -l

Cluster Status for rhcs @ Sun Mar 24 00:26:26 2013

Member Status: Quorate

Member Name ID Status

------ ---- ---- ------

node1 1 Online, Local

node2 2 Online

/dev/sdc1 0 Offline, Quorum Disk

8. Add Resources

(1) Add the cluster IP resource

Click cluster -> rhcs -> Resources -> Add a Resource.

Choose IP and enter: 192.168.14.130

Check Monitor Link.

Click Submit.

(2) Add the Oracle start/stop script resource

# The script that starts and stops the Oracle database is placed at /etc/init.d/oracle. It does not need to be registered as a system service; it will be managed by the RHCS service.

 

# Create the following script on both node1 and node2.

# vi /etc/init.d/oracle

#!/bin/bash

export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1

export ORACLE_SID=orcl

start() {

su - oracle<<EOF

echo "Starting Listener ..."

$ORACLE_HOME/bin/lsnrctl start

echo "Starting Oracle10g Server.. "

sqlplus / as sysdba

startup

exit;

EOF

}

stop() {

su - oracle<<EOF

echo "Shutting down Listener..."

$ORACLE_HOME/bin/lsnrctl stop

echo "Shutting down Oracle10g Server..."

sqlplus / as sysdba

shutdown immediate;

exit

EOF

}

case "$1" in

start)

start

;;

stop)

stop

;;

*)

echo "Usage: $0 {start|stop}"

;;

esac

chmod +x /etc/init.d/oracle
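Note: rgmanager's script resource agent also calls this script periodically with a status argument; as written, the *) branch exits 0, so a failed database would still be reported as healthy. A minimal status branch could look like the sketch below (an illustrative addition, checking for the pmon background process of the orcl SID):

status() {
    # healthy only if the Oracle pmon background process is running
    if pgrep -f "ora_pmon_${ORACLE_SID}" >/dev/null 2>&1; then
        exit 0
    else
        exit 1
    fi
}

# and in the case statement:
status)
    status
    ;;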

V. Installing and Configuring the Oracle 10g Database

Note: the detailed installation steps are omitted.

1. On node1

(1) Prepare the Oracle installation environment

(2) Install the Oracle database software and patches

(3) netca

(4) dbca: create the database. The data files, control files, redo log files, flash recovery area, archive logs, etc. are all created on the /oradata cluster file system.

2. On node2

(1) Prepare the Oracle installation environment

(2) Install the Oracle database software and patches

(3) netca

3. Copy the relevant parameter files from node1 to node2

(1) Package the parameter files on node1

$ cd /u01/app/oracle/product/10.2.0/db_1

$ tar czvf dbs.tar.gz dbs

dbs/

dbs/init.ora

dbs/lkORCL

dbs/hc_orcl.dat

dbs/initdw.ora

dbs/spfileorcl.ora

dbs/orapworcl

$ scp dbs.tar.gz node2:/u01/app/oracle/product/10.2.0/db_1/

(2) On node2

# su - oracle

$ mkdir -p /u01/app/oracle/admin/orcl/{adump,bdump,cdump,dpdump,udump}

$ cd /u01/app/oracle/product/10.2.0/db_1/

$ tar zxvf dbs.tar.gz

VI. Configuring the Oracle 10g Database Service

1. Add the database service

Click cluster -> rhcs -> Services -> Add a Service.

Service Name: oracle10g

Check "Automatically start this service".

For Failover Domain, select the rhcs_failover domain created earlier.

For Recovery policy, select restart.

Click "Add a resource to this service" and add the IP resource and the Oracle script resource created earlier.

Click "go" to create the oracle10g service. The resulting configuration looks roughly like the sketch below.
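The resource manager section of /etc/cluster/cluster.conf then looks roughly like this sketch (illustrative; exact attributes depend on the luci version):

<rm>
    <failoverdomains>
        <failoverdomain name="rhcs_failover" ordered="1" restricted="0">
            <failoverdomainnode name="node1" priority="1"/>
            <failoverdomainnode name="node2" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="192.168.14.130" monitor_link="1"/>
        <script file="/etc/init.d/oracle" name="oracle"/>
    </resources>
    <service autostart="1" domain="rhcs_failover" name="oracle10g" recovery="restart">
        <ip ref="192.168.14.130"/>
        <script ref="oracle"/>
    </service>
</rm>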

2. Check the service status

(1) Check the cluster status

[root@node1 db_1]# clustat -l

Cluster Status for rhcs @ Sun Mar 24 12:37:02 2013

Member Status: Quorate

Member Name ID Status

------ ---- ---- ------

node1 1 Online, Local, rgmanager

node2 2 Online, rgmanager

/dev/sdc1 0 Online, Quorum Disk

Service Information

------- -----------

Service Name : service:oracle10g

Current State : started (112)

Flags : none (0)

Owner : node1

Last Owner : none

Last Transition : Sun Mar 24 12:35:53 2013

(2) Check the cluster IP

[root@node1 db_1]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: sit0: <NOARP> mtu 1480 qdisc noop

link/sit 0.0.0.0 brd 0.0.0.0

3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000

link/ether 00:0c:29:25:ee:43 brd ff:ff:ff:ff:ff:ff

inet 192.168.14.100/24 brd 192.168.14.255 scope global eth0

inet 192.168.14.130/24 scope global secondary eth0

inet6 fe80::20c:29ff:fe25:ee43/64 scope link

valid_lft forever preferred_lft forever

4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000

link/ether 00:0c:29:25:ee:4d brd ff:ff:ff:ff:ff:ff

inet 10.10.10.10/24 brd 10.10.10.255 scope global eth1

inet6 fe80::20c:29ff:fe25:ee4d/64 scope link

valid_lft forever preferred_lft forever

3. Manually relocate the service to another node

# clusvcadm -r "oracle10g" -m node2

Trying to relocate service:oracle10g to node2...Success

service:oracle10g is now running on node2

# cat /var/log/messages

Mar 24 21:12:44 node2 clurgmgrd[3601]: <notice> Starting stopped service service:oracle10g

Mar 24 21:12:46 node2 avahi-daemon[3513]: Registering new address record for 192.168.14.130 on eth0.

Mar 24 21:13:43 node2 clurgmgrd[3601]: <notice> Service service:oracle10g started
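Besides relocation, clusvcadm can also stop and restart the service during testing (standard clusvcadm options; the service and node names follow this article's setup):

# clusvcadm -d oracle10g     (disable/stop the service)

# clusvcadm -e oracle10g -m node1     (enable/start the service on node1)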

Other notes: the full RHCS HA functionality still needs more detailed testing.

This article originally appeared on the "koumm Linux technology blog"; please retain this source: http://koumm.blog.51cto.com/703525/1161791
