11gR2 RAC Add/Remove Node Steps – Adding a Node
- October 10, 2019
- Notes
Today 小麥苗 shares the steps for adding and removing nodes in an 11gR2 RAC cluster.
11gR2 RAC Add/Remove Node Steps – Adding a Node
1 Configure /etc/hosts on the new node with the same entries as the existing nodes (a sketch follows the firewall commands below)
2 Disable the firewall:
service iptables stop
chkconfig iptables off
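A sketch of the /etc/hosts entries, based on the addresses that appear in the cluvfy output later; which address is a VIP or the SCAN is an assumption, so keep whatever the existing nodes already use:
# Append the cluster name resolution to /etc/hosts on the new node
cat >> /etc/hosts <<'EOF'
# Public
192.168.8.221   rac1
192.168.8.223   rac2
192.168.8.227   rac3
# Virtual (VIP)
192.168.8.222   rac1-vip
192.168.8.224   rac2-vip
192.168.8.226   rac3-vip
# SCAN (single address, no DNS in this environment)
192.168.8.225   rac-scan
# Private interconnect
172.168.1.18    rac1-priv
172.168.1.19    rac2-priv
172.168.1.20    rac3-priv
EOF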
3 Create the users and groups
–Create the groups and users:
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
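The groups referenced by the useradd commands must already exist on the new node with the same GIDs as on rac1 and rac2; a minimal groupadd sketch (the GIDs are placeholders; check /etc/group on an existing node):
# GIDs below are placeholders; they must match the existing cluster nodes
groupadd -g 1000 oinstall
groupadd -g 1300 dba
groupadd -g 1301 oper
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper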
–Configure the users' environment variables
–oracle user:
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac3
export ORACLE_SID=orcl3
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/11.2.0/db_1
export ORACLE_UNQNAME=orcl
export TNS_ADMIN=$ORACLE_HOME/network/admin
#export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export LANG=en_US
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
umask 022
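The grid user needs an equivalent profile on the new node; a minimal sketch, assuming the default Grid home /u01/app/11.2.0/grid and ASM instance +ASM3:
# grid user ~/.bash_profile on rac3 (paths and SID are assumptions based on the default 11gR2 layout)
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac3
export ORACLE_SID=+ASM3
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:/usr/sbin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
umask 022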
6 Configure limits.conf, modify the kernel parameters (shmmax and the other sysctl settings), stop NTP, and install the required dependency packages:
yum install gcc compat-libstdc++-33 elfutils-libelf-devel glibc-devel glibc-headers gcc-c++ libaio-devel libstdc++-devel pdksh compat-libcap1-*
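A sketch of the limits.conf, kernel-parameter, and NTP items, using the common 11gR2 minimum values; where possible simply copy /etc/sysctl.conf and /etc/security/limits.conf from an existing node:
# Kernel parameters (typical 11gR2 minimums; must not be lower than what rac1/rac2 use)
cat >> /etc/sysctl.conf <<'EOF'
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF
sysctl -p

# Shell limits for the grid and oracle users
cat >> /etc/security/limits.conf <<'EOF'
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

# Stop NTP so that CTSS handles time synchronization (confirmed by the cluvfy output later)
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak 2>/dev/null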
11 Copy the ASM udev rules from the existing nodes (see the sketch after the device listing below) and execute:
/sbin/start_udev
[root@rac3 ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Jun 14 05:42 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Jun 14 05:42 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Jun 14 05:42 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Jun 14 05:42 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Jun 14 05:42 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Jun 14 05:42 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Jun 14 05:42 /dev/asm-diskh
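The /dev/asm-* devices above are created by a udev rules file that has to be in place (copied from the existing nodes) before start_udev is run; a sketch of what it typically looks like on RHEL/OEL 6 (the RESULT values are the scsi_id of each shared disk and are shown as placeholders):
# /etc/udev/rules.d/99-oracle-asmdevices.rules (copied from rac1/rac2; UUIDs below are placeholders)
cat > /etc/udev/rules.d/99-oracle-asmdevices.rules <<'EOF'
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="<scsi_id-of-sdb>", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="<scsi_id-of-sdc>", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
# ...one line per shared disk (sdd through sdh)...
EOF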
All of the steps above must be configured exactly the same as on the existing two nodes.
12 Configure user equivalence for the oracle and grid users, and verify connectivity from node 1
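A minimal sketch of the manual equivalence setup for the grid user (repeat the same as the oracle user), after which the verification below can be run:
# On each of rac1, rac2, rac3, as grid (and again as oracle):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for node in rac1 rac2 rac3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub $node   # prompts for the password once per node
done
# Verify that no password or host-key prompt remains:
for node in rac1 rac2 rac3; do ssh $node date; done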
[grid@rac1 ~]$ cluvfy comp nodecon -n rac1,rac2,rac3
Verifying node connectivity
Checking node connectivity…
Checking hosts config file…
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.8.0" with node(s) rac2,rac1,rac3
TCP connectivity check passed for subnet "192.168.8.0"
Node connectivity passed for subnet "172.168.0.0" with node(s) rac2,rac1,rac3
TCP connectivity check passed for subnet "172.168.0.0"
Node connectivity passed for subnet "169.254.0.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "169.254.0.0"
Interfaces found on subnet "192.168.8.0" that are likely candidates for VIP are:
rac2 eth0:192.168.8.223 eth0:192.168.8.224
rac1 eth0:192.168.8.221 eth0:192.168.8.222 eth0:192.168.8.225
rac3 eth0:192.168.8.227
Interfaces found on subnet "172.168.0.0" that are likely candidates for VIP are:
rac2 eth1:172.168.1.19
rac1 eth1:172.168.1.18
rac3 eth1:172.168.1.20
WARNING:
Could not find a suitable set of interfaces for the private interconnect
Checking subnet mask consistency…
Subnet mask consistency check passed for subnet "192.168.8.0".
Subnet mask consistency check passed for subnet "172.168.0.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed.
Node connectivity check passed
Verification of node connectivity was successful.
14 Install Clusterware on the new node (run the cluvfy pre-checks first)
[grid@rac1 ~]$ cluvfy stage -post hwos -n rac3
Performing post-checks for hardware and operating system setup
Checking node reachability…
Node reachability check passed from node "rac1"
Checking user equivalence…
User equivalence check passed for user "grid"
Checking node connectivity…
Checking hosts config file…
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.8.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
ERROR: /* This error is due to a bug; the network and user equivalence were both checked and are fine, so it is ignored here */
PRVF-7617 : Node connectivity between "rac1 : 192.168.8.221" and "rac3 : 172.168.1.20" failed
TCP connectivity check failed for subnet "172.168.0.0"
Node connectivity check failed
Checking multicast communication…
Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"…
Check of subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0"…
Check of subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed
Checking shared storage accessibility…
Disk Sharing Nodes (1 in count)
———————————— ————————
/dev/sda rac3
Disk Sharing Nodes (1 in count)
———————————— ————————
/dev/sdb rac3
/dev/sdc rac3
/dev/sdd rac3
/dev/sde rac3
/dev/sdf rac3
/dev/sdg rac3
/dev/sdh rac3
Shared storage check was successful on nodes "rac3"
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" …
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[grid@rac1 ~]$ cluvfy stage -pre crsinst -n rac3
Performing pre-checks for cluster services setup
Checking node reachability…
Node reachability check passed from node "rac1"
Checking user equivalence…
User equivalence check passed for user "grid"
Checking node connectivity…
Checking hosts config file…
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.8.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
ERROR:
PRVF-7617 : Node connectivity between "rac1 : 192.168.8.221" and "rac3 : 172.168.1.20" failed
TCP connectivity check failed for subnet "172.168.0.0"
Node connectivity check failed
Checking multicast communication…
Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"…
...... (output omitted)
Package existence check failed for "pdksh" /* pdksh is not installed on node 3; this package is optional */
Check failed on nodes:
rac3
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)…
NTP Configuration file check started…
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Time zone consistency check passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@rac1 ~]$ cluvfy stage -pre nodeadd -n rac3 -fixup -verbose
Performing pre-checks for node addition
Checking node reachability…
Check: Node reachability from node "rac1"
Destination Node Reachable?
———————————— ————————
rac3 yes
...... (output omitted)
Result: Package existence check passed for "sysstat"
Check: Package existence for "pdksh"
Node Name Available Required Status
———— ———————— ———————— ———-
rac1 pdksh-5.2.14-30 pdksh-5.2.14 passed
rac3 missing pdksh-5.2.14 failed
Result: Package existence check failed for "pdksh"
Check: Package existence for "expat(x86_64)"
Node Name Available Required Status
———— ———————— ———————— ———-
rac1 expat(x86_64)-2.0.1-11.el6_2 expat(x86_64)-1.95.7 passed
rac3 expat(x86_64)-2.0.1-11.el6_2 expat(x86_64)-1.95.7 passed
Result: Package existence check passed for "expat(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
———————————— ————————
rac1 passed
rac3 passed
Check for consistency of root user's primary group passed
Checking OCR integrity…
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration…
Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)…
NTP Configuration file check started…
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
———— ———————— ————————
rac1 passed does not exist
rac3 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes…
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes…
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
Node Name Status
———————————— ————————
rac1 passed
rac3 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" …
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes…
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Pre-check for node addition was unsuccessful on all the nodes.
2.3 Add the GI and database homes with addNode.sh
Before the node is actually added, addNode.sh itself calls cluvfy to verify that the new node meets the prerequisites. Because this environment does not resolve the SCAN through DNS, that internal check fails, so addNode.sh has to be run with the parameter that skips it (the usual workaround is to export IGNORE_PREADDNODE_CHECKS=Y). Run addNode.sh from node 1 as the grid user to extend the GRID home to rac3, then run it again from $ORACLE_HOME/oui/bin as the oracle user to extend the database home.
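A sketch of the two addNode.sh runs; the Grid home path /u01/app/11.2.0/grid is an assumption, while the database home matches the oracle profile shown earlier:
# As grid on rac1: extend the Grid Infrastructure home to rac3
export IGNORE_PREADDNODE_CHECKS=Y   # skip the internal cluvfy pre-checks (no DNS-resolved SCAN here)
cd /u01/app/11.2.0/grid/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
# When prompted, run the displayed orainstRoot.sh and root.sh on rac3 as root.

# As oracle on rac1: extend the database home to rac3
cd /u01/app/oracle/11.2.0/db_1/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"
# Run root.sh on rac3 as root when prompted.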
The new database instance is then created with dbca's graphical Instance Management screens, run as the oracle user from an existing node: choose Instance Management -> Add an instance, enter the sys password, select the node rac3 and the instance name orcl3, then Finish. Afterwards, note that the cluster resources show the new instance orcl3.
5. Configuration
5.1 Modify the tnsnames.ora file under the oracle user on all nodes; add the following entries:
NODE1_LOCAL=(ADDRESS = (PROTOCOL = TCP)(HOST= rac1-vip)(PORT = 1521))
NODE2_LOCAL=(ADDRESS = (PROTOCOL = TCP)(HOST =rac2-vip)(PORT = 1521))
NODE3_LOCAL=(ADDRESS = (PROTOCOL = TCP)(HOST =rac3-vip)(PORT = 1521))
ORCL_REMOTE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST=rac1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST=rac2-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST=rac3-vip)(PORT = 1521))
)
)
5.2 Set LOCAL_LISTENER and REMOTE_LISTENER by executing the following:
alter system set LOCAL_LISTENER='NODE1_LOCAL' scope=both sid='orcl1';
alter system set LOCAL_LISTENER='NODE2_LOCAL' scope=both sid='orcl2';
alter system set LOCAL_LISTENER='NODE3_LOCAL' scope=both sid='orcl3';
alter system set REMOTE_LISTENER='ORCL_REMOTE' scope=both sid='*';
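The settings can be confirmed from any node; a quick sketch:
# Run as oracle on any node; lists the listener parameters of every instance
sqlplus -S / as sysdba <<'EOF'
col name for a16
col value for a20
select inst_id, name, value
  from gv$parameter
 where name in ('local_listener','remote_listener')
 order by inst_id, name;
EOF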
Modify the Server-Side TAF service so that the existing service includes the new instance orcl3:
[oracle@rac1 admin]$ srvctl modify service -d orcl -s orcl_taf -n -i orcl1,orcl2,orcl3
[oracle@rac1 admin]$ srvctl config service -d orcl
Service name: orcl_taf
Service is enabled
Server pool: orcl_orcl_taf
Cardinality: 3
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 180
TAF failover delay: 5
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: orcl1,orcl2,orcl3
Available instances:
[oracle@rac1 admin]$ srvctl start service -d orcl -s orcl_taf -i orcl3
[oracle@rac1 admin]$ srvctl status service -d orcl
Service orcl_taf is running on instance(s) orcl1,orcl3
# The service was not started on orcl2; start it here as well
[oracle@rac1 admin]$ srvctl start service -d orcl -s orcl_taf -i orcl2
[oracle@rac1 admin]$ srvctl status service -d orcl
Service orcl_taf is running on instance(s) orcl1,orcl2,orcl3
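To confirm that a new connection through the TAF service reaches the cluster and that failover is armed, a quick sketch (the SCAN name and the system password are placeholders):
# EZConnect through the service; replace rac-scan and the password with the real values
sqlplus -S system/"<password>"@rac-scan:1521/orcl_taf <<'EOF'
select instance_name from v$instance;
select failover_type, failover_method, failed_over
  from v$session
 where sid = sys_context('userenv','sid');
EOF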
Verification
[grid@rac3 ~]$ olsnodes -s
rac1 Active
rac2 Active
rac3 Active
[grid@rac3 ~]$ olsnodes -n
rac1 1
rac2 2
rac3 3
[grid@rac1 ~]$ crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATADG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.SYSTEMDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ONLINE ONLINE rac3 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
OFFLINE OFFLINE rac3
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.cvu
1 ONLINE ONLINE rac2
ora.oc4j
1 ONLINE ONLINE rac2
ora.orcl.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
3 ONLINE ONLINE rac3 Open
ora.orcl.orcl_taf.svc
1 ONLINE ONLINE rac1
2 ONLINE ONLINE rac3
3 ONLINE ONLINE rac2
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.rac3.vip
1 ONLINE ONLINE rac3
ora.scan1.vip
1 ONLINE ONLINE rac2
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Thu Jun 9 10:09:31 2016
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> col host_name for a20
SQL> select inst_id,host_name,instance_name,status from gv$instance;
INST_ID HOST_NAME INSTANCE_NAME STATUS
———- ——————– —————- ————
1 rac1 orcl1 OPEN
3 rac3 orcl3 OPEN
2 rac2 orcl2 OPEN
[root@rac1 ~]# ./crs_stat.sh
Name Target State Host
———————— ———- ——— ——-
ora.DATADG.dg ONLINE ONLINE rac1
ora.LISTENER.lsnr ONLINE ONLINE rac1
ora.LISTENER_SCAN1.lsnr ONLINE ONLINE rac2
ora.SYSTEMDG.dg ONLINE ONLINE rac1
ora.asm ONLINE ONLINE rac1
ora.cvu ONLINE ONLINE rac2
ora.gsd OFFLINE OFFLINE
ora.net1.network ONLINE ONLINE rac1
ora.oc4j ONLINE ONLINE rac2
ora.ons ONLINE ONLINE rac1
ora.orcl.db ONLINE ONLINE rac1
ora.orcl.orcl_taf.svc ONLINE ONLINE rac1
ora.rac1.ASM1.asm ONLINE ONLINE rac1
ora.rac1.LISTENER_RAC1.lsnr ONLINE ONLINE rac1
ora.rac1.gsd OFFLINE OFFLINE
ora.rac1.ons ONLINE ONLINE rac1
ora.rac1.vip ONLINE ONLINE rac1
ora.rac2.ASM2.asm ONLINE ONLINE rac2
ora.rac2.LISTENER_RAC2.lsnr ONLINE ONLINE rac2
ora.rac2.gsd OFFLINE OFFLINE
ora.rac2.ons ONLINE ONLINE rac2
ora.rac2.vip ONLINE ONLINE rac2
ora.rac3.ASM3.asm ONLINE ONLINE rac3
ora.rac3.LISTENER_RAC3.lsnr ONLINE ONLINE rac3
ora.rac3.gsd OFFLINE OFFLINE
ora.rac3.ons ONLINE ONLINE rac3
ora.rac3.vip ONLINE ONLINE rac3
ora.scan1.vip ONLINE ONLINE rac2
Summary of Adding and Removing Nodes
Adding a node to an 11gR2 RAC cluster consists of three phases:
(1) The first phase copies the GRID HOME to the new node, starts the GRID stack on it, and updates the inventory. (2) The second phase copies the ORACLE HOME (database software) to the new node and updates the inventory. (3) The third phase creates the new database instance (including the undo tablespace, redo threads, initialization parameters, and so on) and updates the OCR. The removal steps are exactly the reverse of the steps above. During an add or remove the cluster stays online, no downtime is required, and client business is not affected. The ORACLE_BASE path on the new node is created automatically during the add; it does not need to be created manually.
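Phase 3 can also be scripted instead of clicking through the dbca GUI; a sketch (the sys password is a placeholder):
# Run as oracle on an existing node; adds instance orcl3 on rac3 without the GUI
dbca -silent -addInstance \
     -nodeList rac3 \
     -gdbName orcl \
     -instanceName orcl3 \
     -sysDBAUserName sys -sysDBAPassword "<sys password>"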
(1) Before removing a node, it is recommended to back up the OCR manually; if the removal fails, the cluster can be restored from the original OCR. (2) When adding a node, the OUI does not offer the automatic SSH-equivalence configuration that the interactive installer provides, so the oracle and grid user equivalence must already be in place before running the addNode.sh script.