11gR2 RAC Add and Remove Node Steps: Removing a Node
- October 10, 2019
- Notes
Today 小麥苗 shares the steps for adding and removing nodes in an 11gR2 RAC.
11gR2 RAC Add and Remove Node Steps: Removing a Node
1. Overview: the existing RAC is a three-node 11.2.0.4 cluster (rac1, rac2, rac3). In this document we demonstrate removing one node, rac3. All removal operations are performed while the cluster is up and running normally.
First check the current state of the RAC.
2. Back up the OCR: before removing a node it is a good idea to take a manual OCR backup (the clusterware also backs the OCR up automatically, by default every 4 hours), so that the OCR can be restored if anything goes wrong. Run the manual backup as root on node 1 and then list the backups to confirm it worked; a minimal sketch follows.
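A minimal sketch of the manual OCR backup from step 2, run as root on node 1 (the grid home path is the one that appears in the deconfig output later in this post):
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual     # confirm the new backup is listed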
3. Adjust the service: if the RAC has services configured and the node being removed is a preferred instance of a service, the connections on that node should be moved to the other nodes before the node is removed, using srvctl relocate service. When a preferred instance fails, the service fails over automatically to an available instance; the same relocation can also be done manually.
3.1 Move the service off node 3, working as the oracle user. In this environment all three instances are preferred instances of the service (relocation only works from a preferred to an available instance), so instead of relocating, stop the service on orcl3 and modify the service so that only orcl1 and orcl2 remain as preferred instances, removing node 3 from the service definition:
[oracle@rac1 ~]$ srvctl stop service -d orcl -s orcl_taf -i orcl3
[oracle@rac1 ~]$ srvctl status service -d orcl
Service orcl_taf is running on instance(s) orcl1,orcl2
[oracle@rac1 ~]$ srvctl modify service -d orcl -s orcl_taf -n -i orcl1,orcl2 -f
[oracle@rac1 ~]$ srvctl status service -d orcl
Service orcl_taf is running on instance(s) orcl1,orcl2
[oracle@rac1 ~]$ srvctl config service -d orcl
Service name: orcl_taf
Service is enabled
Server pool: orcl_orcl_taf
Cardinality: 2
Disconnect: true
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 180
TAF failover delay: 5
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: orcl1,orcl2
Available instances:
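For reference, if the service did have an available (non-preferred) instance, the connections could instead have been moved off orcl3 with srvctl relocate service. A hypothetical sketch (the target instance is assumed here):
[oracle@rac1 ~]$ srvctl relocate service -d orcl -s orcl_taf -i orcl3 -t orcl1 -f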
3.2 Delete the instance with DBCA: run dbca as the oracle user on node 1. The instance can be removed through the GUI:
dbca -> RAC database -> Instance Management -> Delete Instance -> enter a SYSDBA user and password -> select the instance to delete (orcl3).
The same thing can be done with dbca in silent mode:
dbca -silent -deleteInstance [-nodeList node_name] -gdbName gdb_name -instanceName instance_name -sysDBAUserName sysdba -sysDBAPassword password
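Filled in for this environment, the silent call would look roughly like the following (the SYS password is a placeholder):
dbca -silent -deleteInstance -nodeList rac3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword <sys_password>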
3.3 On node 1, as the oracle user, confirm that instance orcl3 has been cleaned out of the database; a quick check is sketched below.
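A minimal sketch of such a check (the undo tablespace name follows the usual UNDOTBS3 convention and is an assumption):
[oracle@rac1 ~]$ srvctl config database -d orcl          # the instance list should now show only orcl1,orcl2
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> select thread#, status, enabled from v$thread;      -- redo thread 3 should be gone
SQL> select tablespace_name from dba_tablespaces where tablespace_name like 'UNDOTBS%';   -- UNDOTBS3 should be gone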
At this point orcl3 no longer appears anywhere in the configuration.
4. Remove the node at the Oracle (RDBMS) layer. Unless noted otherwise, the operations in this section are performed as the oracle user.
4.1 Stop the listener on node 3 (per the original steps, as the grid user); a hedged sketch follows.
4.2 Then update the inventory on node 3 as the oracle user, as shown in the block after the sketch.
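A hedged sketch of step 4.1, assuming the listener on node 3 is the default one named LISTENER:
[grid@rac3 ~]$ srvctl disable listener -l LISTENER -n rac3
[grid@rac3 ~]$ srvctl stop listener -l LISTENER -n rac3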
[root@rac3 ~]# su - oracle
[oracle@rac3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac3 bin]$ ls
addLangs.sh attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh
addNode.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh
[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=rac3"
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 2047 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Note: if this step reports an error, the way to troubleshoot it is to check the installer log and then look at the inventory location that /etc/oraInst.loc points to:
[root@rac3 logs]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
[root@rac3 logs]#
– Check oraInst.loc on node 3, make it consistent with node 1, and then run the update again; this time it succeeds:
[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/11.2.0/db_1 "CLUSTER_NODES=rac3"
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 2925 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/oraInventory
'UpdateNodeList' was successful.
4.3 Remove the ORACLE_HOME on node 3 by running the deinstall tool from the database home as the oracle user (a minimal sketch follows).
4.4 On node 1, update the inventory as the oracle user so that only rac1 and rac2 remain in the node list for the database home:
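A minimal sketch of step 4.3, assuming a non-shared database home; the -local option restricts the removal to node 3 (the same tool is shown in full for the GRID home in section 5):
[oracle@rac3 ~]$ $ORACLE_HOME/deinstall/deinstall -local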
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=rac1,rac2"
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 1868 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
5. Remove the node at the GRID (Oracle Clusterware) layer.
5.1 As the grid user or root, check that the cluster nodes are unpinned; if rac3 were pinned it would have to be unpinned first. A minimal sketch follows.
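A minimal sketch of the pin-state check; -s shows the node status and -t shows whether each node is pinned:
[grid@rac1 ~]$ olsnodes -s -t            # every node should report Unpinned
[root@rac1 ~]# crsctl unpin css -n rac3  # only needed if rac3 is reported as Pinned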
5.2 On node 3, run the deconfig script as root:
[root@rac3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -deinstall -force
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/192.168.8.0/255.255.255.0/eth0, type static
VIP exists: /192.168.8.242/192.168.8.242/192.168.8.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /192.168.8.244/192.168.8.244/192.168.8.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /rac3-vip/192.168.8.247/192.168.8.0/255.255.255.0/eth0, hosting node rac3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'rac3'
CRS-2677: Stop of 'ora.DATADG.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
5.3 On node 1, as root, delete node 3 from the cluster; a minimal sketch follows.
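A minimal sketch of step 5.3, run as root from the GRID home on node 1 (home path as seen above); the command should report that node rac3 was deleted successfully:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac3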
5.4 On node 3, remove the GRID HOME by running the deinstall tool as the grid user:
[grid@rac3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping …
Please wait …
Location of logs /tmp/deinstall2016-06-13_01-38-44PM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac3
Checking for sufficient temp space availability on node(s) : 'rac3'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2016-06-13_01-38-44PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac3"[rac3-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "rac3"
Enter the IP netmask of Virtual IP "192.168.8.247" on node "rac3"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.8.247" is active
>
Enter an address or the name of the virtual IP[]
>
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/netdc_check2016-06-13_01-39-35-PM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/asmcadc_check2016-06-13_01-39-38-PM.log
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2016-06-13_01-38-44PM/logs/deinstall_deconfig2016-06-13_01-39-02-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2016-06-13_01-38-44PM/logs/deinstall_deconfig2016-06-13_01-39-02-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/asmcadc_clean2016-06-13_01-39-41-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/netdc_clean2016-06-13_01-39-41-PM.log
De-configuring RAC listener(s): LISTENER
De-configuring listener: LISTENER
Stopping listener on node "rac3": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file…
Naming Methods configuration file de-configured successfully.
De-configuring backup files…
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac3".
/tmp/deinstall2016-06-13_01-38-44PM/perl/bin/perl -I/tmp/deinstall2016-06-13_01-38-44PM/perl/lib -I/tmp/deinstall2016-06-13_01-38-44PM/crs/install /tmp/deinstall2016-06-13_01-38-44PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-06-13_01-38-44PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
---------------------------------------->
After the above command finishes on node 3, return to the deinstall window and press Enter so the tool can complete its cleanup.
5.5 On the remaining nodes, update the inventory as the grid user so that only rac1 and rac2 remain registered for the GRID home; a minimal sketch follows.
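A minimal sketch of step 5.5, run as the grid user on node 1; CRS=TRUE marks the home being updated as the clusterware home (the grid home path is the one used throughout this post):
[grid@rac1 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac1,rac2" CRS=TRUE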
6. Verification.
6.1 On node 1, as the grid user, check that the node removal succeeded:
[grid@rac1 ~]$ olsnodes -s
rac1 Active
rac2 Active
[grid@rac1 ~]$ olsnodes -n
rac1 1
rac2 2
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.SYSTEMDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.cvu
1 ONLINE ONLINE rac2
ora.oc4j
1 ONLINE ONLINE rac2
ora.orcl.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.orcl.orcl_taf.svc
1 ONLINE ONLINE rac1
3 ONLINE ONLINE rac2
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac2
[root@rac1 ~]# ./crs_stat.sh
Name Target State Host
------------------------- ---------- --------- -------
ora.DATADG.dg ONLINE ONLINE rac1
ora.LISTENER.lsnr ONLINE ONLINE rac1
ora.LISTENER_SCAN1.lsnr ONLINE ONLINE rac2
ora.SYSTEMDG.dg ONLINE ONLINE rac1
ora.asm ONLINE ONLINE rac1
ora.cvu ONLINE ONLINE rac2
ora.gsd OFFLINE OFFLINE
ora.net1.network ONLINE ONLINE rac1
ora.oc4j ONLINE ONLINE rac2
ora.ons ONLINE ONLINE rac1
ora.orcl.db ONLINE ONLINE rac1
ora.orcl.orcl_taf.svc ONLINE ONLINE rac1
ora.rac1.ASM1.asm ONLINE ONLINE rac1
ora.rac1.LISTENER_RAC1.lsnr ONLINE ONLINE rac1
ora.rac1.gsd OFFLINE OFFLINE
ora.rac1.ons ONLINE ONLINE rac1
ora.rac1.vip ONLINE ONLINE rac1
ora.rac2.ASM2.asm ONLINE ONLINE rac2
ora.rac2.LISTENER_RAC2.lsnr ONLINE ONLINE rac2
ora.rac2.gsd OFFLINE OFFLINE
ora.rac2.ons ONLINE ONLINE rac2
ora.rac2.vip ONLINE ONLINE rac2
ora.scan1.vip ONLINE ONLINE rac2
6.2 On node 3, clean up the home directories:
rm -rf /u01/app/grid_home
rm -rf /home/oracle
7. Summary: adding a node in 11gR2 is done in three phases:
(1) Phase one copies the GRID HOME to the new node, configures GRID, and updates the OCR information.
(2) Phase two copies the RDBMS HOME to the new node and updates the inventory information.
(3) Phase three uses DBCA to add the instance, creating the undo tablespace, redo logs and related objects (including registering the new database instance).
Removing a node in 11gR2 likewise takes three steps, in the reverse order: remove the instance with DBCA, remove the node at the Oracle (RDBMS) layer, then remove it at the GRID layer. Throughout the removal the remaining nodes stay online; their GRID and ORACLE_HOME stacks keep running.
Notes:
(1) When adding or removing a node, if something goes wrong, in some cases the problem can be resolved by re-running the add/remove steps.
(2) A normal 11.2 GRID installation lets the OUI GUI set up SSH user equivalence for you; the add-node procedure has no such facility, so SSH user equivalence for the oracle user has to be configured manually, for example as sketched below.
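One simple way to configure SSH user equivalence manually for the oracle user (run on an existing node; the new node name rac3 is only an example, and the same should be repeated for the grid user and in the reverse direction as needed):
[oracle@rac1 ~]$ ssh-keygen -t rsa               # accept the defaults, empty passphrase
[oracle@rac1 ~]$ ssh-copy-id oracle@rac3         # copy the public key to the new node
[oracle@rac1 ~]$ ssh rac3 date                   # verify password-less login works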