11gR2 RAC Add and Delete Node Steps – Deleting a Node

  • October 10, 2019
  • Notes

Today xiaomaimiao (小麦苗) shares the steps for adding and deleting nodes in 11gR2 RAC.


一. The current RAC runs 11.2.0.4. In this document we demonstrate deleting one node, rac3. All delete operations are performed while the rest of the environment keeps running normally.

二. Before deleting the node, it is advisable to take a manual OCR backup (Clusterware also backs up the OCR automatically, by default every 4 hours), so that the OCR can be restored if anything goes wrong. On node 1, as root, run a manual OCR backup with `ocrconfig -manualbackup` and check the result with `ocrconfig -showbackup`.

三. Adjust the services. If a database service lists the node to be deleted among its preferred instances, the connections on that node need to be moved to the other nodes before the node is deleted, using service relocation. When a preferred instance goes down, the service is automatically relocated to an available instance; the relocation can also be done manually with the commands below.
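The backup listing can also be checked mechanically. A minimal sketch, assuming `ocrconfig -showbackup` prints the newest backup first with the file path as the last field (the sample lines below are illustrative, not captured from this cluster):

```shell
#!/bin/sh
# Print the path of the newest OCR backup from `ocrconfig -showbackup`
# output fed on stdin (assumption: newest entry first, path in last field).
latest_ocr_backup() {
    awk 'NF { print $NF; exit }'
}

# Illustrative sample shaped like `ocrconfig -showbackup` output:
sample='rac1 2016/06/13 04:00:02 /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr
rac1 2016/06/13 00:00:01 /u01/app/11.2.0/grid/cdata/rac-cluster/backup01.ocr'

printf '%s\n' "$sample" | latest_ocr_backup
```

In a real session this would be `ocrconfig -showbackup | latest_ocr_backup`, run as root.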

Move the service off node 3 as the oracle user. Because in my environment all three instances are preferred instances for the service (there are no available instances), the instance can only be removed from the preferred list.

– Stop the service on instance orcl3 and remove it from the service configuration:

[oracle@rac1 ~]$ srvctl stop service -d orcl -s orcl_taf -i orcl3

[oracle@rac1 ~]$ srvctl status service -d orcl

Service orcl_taf is running on instance(s) orcl1,orcl2

[oracle@rac1 ~]$ srvctl modify service -d orcl -s orcl_taf -n -i orcl1,orcl2 -f

[oracle@rac1 ~]$ srvctl status service -d orcl

Service orcl_taf is running on instance(s) orcl1,orcl2

[oracle@rac1 ~]$ srvctl config service -d orcl

Service name: orcl_taf

Service is enabled

Server pool: orcl_orcl_taf

Cardinality: 2

Disconnect: true

Service role: PRIMARY

Management policy: AUTOMATIC

DTP transaction: false

AQ HA notifications: false

Failover type: SELECT

Failover method: BASIC

TAF failover retries: 180

TAF failover delay: 5

Connection Load Balancing Goal: LONG

Runtime Load Balancing Goal: NONE

TAF policy specification: BASIC

Edition:

Preferred instances: orcl1,orcl2

Available instances:
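Once the service has been modified, it is worth checking mechanically that the dropped instance no longer appears in the preferred list. A small sketch that parses `srvctl config service` output fed on stdin (the sample mirrors the output above):

```shell
#!/bin/sh
# Report OK when instance $1 is absent from the "Preferred instances:"
# line of `srvctl config service` output read from stdin.
not_preferred() {
    if awk -v i="$1" '
        /^Preferred instances:/ {
            sub(/^Preferred instances:[ \t]*/, "")
            n = split($0, a, ",")
            for (k = 1; k <= n; k++) if (a[k] == i) bad = 1
        }
        END { exit bad }'
    then
        echo "OK: $1 is not a preferred instance"
    else
        echo "WARNING: $1 is still a preferred instance"
    fi
}

sample='Service name: orcl_taf
Preferred instances: orcl1,orcl2
Available instances:'

printf '%s\n' "$sample" | not_preferred orcl3
```

In practice this would be `srvctl config service -d orcl -s orcl_taf | not_preferred orcl3`.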

3.2 DBCA. On node 1, as the oracle user, run dbca; the instance can be deleted from the GUI:

dbca -> RAC database -> Instance Management -> Delete Instance -> supply a SYSDBA user and password. The same operation can also be done with dbca in silent mode:

dbca -silent -deleteInstance [-nodeList node_name] -gdbName gdb_name -instanceName instance_name -sysDBAUserName sysdba -sysDBAPassword password
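Filled in with the names used in this walkthrough, the silent invocation would look like the dry-run below. The command line is only assembled and printed, not executed, and the SYSDBA password is deliberately left out of the string (dbca can prompt for it, or `-sysDBAPassword` can be appended):

```shell
#!/bin/sh
# Assemble (but do not run) the dbca silent delete-instance command line.
# $1 = global db name, $2 = instance name, $3 = node name
build_delete_instance_cmd() {
    printf 'dbca -silent -deleteInstance -nodeList %s -gdbName %s -instanceName %s -sysDBAUserName sys\n' \
        "$3" "$1" "$2"
}

build_delete_instance_cmd orcl orcl3 rac3
```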

After DBCA completes, confirm on node 1 as the oracle user that instance orcl3 has been cleared from the configuration; `srvctl config database -d orcl` should no longer show orcl3.

四. Remove the node at the Oracle (RDBMS) layer. The operations in this section are performed as the oracle user unless noted otherwise.

4.1 Stop the listener on node 3 (as the grid user, since in this setup the listener runs from the GRID home).

4.2 On node 3, as the oracle user, update the inventory so the database home only registers the local node:

[root@rac3 ~]# su - oracle

[oracle@rac3 ~]$ cd $ORACLE_HOME/oui/bin

[oracle@rac3 bin]$ ls

addLangs.sh attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh

addNode.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh

[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=rac3"

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 2047 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

Note: if this step reports an error here, the fix is as follows. Check the log it points to, then inspect the inventory pointer /etc/oraInst.loc:

[root@rac3 logs]# cat /etc/oraInst.loc

inventory_loc=/u01/app/oraInventory

inst_group=oinstall

[root@rac3 logs]#
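The root cause here was node 3's oraInst.loc pointing at a different inventory location than node 1's. The comparison can be simulated locally; the two file bodies below are illustrative copies of what each node might hold:

```shell
#!/bin/sh
# Compare two oraInst.loc files and report whether they agree.
compare_orainst() {
    if diff -q "$1" "$2" >/dev/null; then
        echo "oraInst.loc consistent"
    else
        echo "oraInst.loc differs: make node 3 match node 1"
    fi
}

# Simulate with throwaway copies (in reality, fetch each node's real file):
node1=$(mktemp) && node3=$(mktemp)
printf 'inventory_loc=/u01/app/oraInventory\ninst_group=oinstall\n' > "$node1"
printf 'inventory_loc=/u01/oraInventory\ninst_group=oinstall\n'     > "$node3"
compare_orainst "$node1" "$node3"
rm -f "$node1" "$node3"
```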

– Because the command runs on node 3, check node 3's oraInst.loc and bring it in line with node 1's. Then update the inventory again; this time it succeeds:

[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/11.2.0/db_1 "CLUSTER_NODES=rac3"

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 2925 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/oraInventory

'UpdateNodeList' was successful.

4.3 Remove node 3's database ORACLE_HOME by running deinstall as the oracle user (`$ORACLE_HOME/deinstall/deinstall -local`).

4.4 On node 1, as the oracle user, update the inventory with the list of remaining nodes:

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=rac1,rac2"

Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 1868 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

五. Remove the node at the GRID (Clusterware) layer.

5.1 As the grid user or root, verify that the nodes are unpinned (`olsnodes -s -t`).

5.2 On node 3, as root, run the deconfig script:

[root@rac3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -deinstall -force

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Network exists: 1/192.168.8.0/255.255.255.0/eth0, type static

VIP exists: /192.168.8.242/192.168.8.242/192.168.8.0/255.255.255.0/eth0, hosting node rac1

VIP exists: /192.168.8.244/192.168.8.244/192.168.8.0/255.255.255.0/eth0, hosting node rac2

VIP exists: /rac3-vip/192.168.8.247/192.168.8.0/255.255.255.0/eth0, hosting node rac3

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3'

CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac3'

CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'rac3'

CRS-2677: Stop of 'ora.DATADG.dg' on 'rac3' succeeded

CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac3'

CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'

CRS-2673: Attempting to stop 'ora.asm' on 'rac3'

CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'

CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac3'

CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'

CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'

CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Removing Trace File Analyzer

Successfully deconfigured Oracle clusterware stack on this node
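Since the deconfig output is long, a scripted wrapper can grep for the success marker instead of eyeballing it. A sketch keyed on the final message shown above:

```shell
#!/bin/sh
# Exit 0 when deconfig output (stdin) contains the success marker
# printed by rootcrs.pl at the end of a clean run.
deconfig_ok() {
    grep -q 'Successfully deconfigured Oracle clusterware stack on this node'
}

# Illustrative tail of a clean run:
sample='CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node'

if printf '%s\n' "$sample" | deconfig_ok; then
    echo "deconfig succeeded"
else
    echo "deconfig FAILED - inspect the output"
fi
```

In a real run, capture the output with `tee /tmp/deconfig.log` and feed the log to `deconfig_ok`.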

– On node 1, as root, delete node 3 from the cluster (the standard command is `crsctl delete node -n rac3`).

5.3 On node 3, remove the GRID HOME by running deinstall, as the grid user:

[grid@rac3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping …

Please wait …

Location of logs /tmp/deinstall2016-06-13_01-38-44PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: rac3

Checking for sufficient temp space availability on node(s) : 'rac3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2016-06-13_01-38-44PM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "rac3"[rac3-vip]

>

The following information can be collected by running "/sbin/ifconfig -a" on node "rac3"

Enter the IP netmask of Virtual IP "192.168.8.247" on node "rac3"[255.255.255.0]

>

Enter the network interface name on which the virtual IP address "192.168.8.247" is active

>

Enter an address or the name of the virtual IP[]

>

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/netdc_check2016-06-13_01-39-35-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/asmcadc_check2016-06-13_01-39-38-PM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac3', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/11.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER

Option -local will not modify any ASM configuration.

Do you want to continue (y – yes, n – no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2016-06-13_01-38-44PM/logs/deinstall_deconfig2016-06-13_01-39-02-PM.out'

Any error messages from this session will be written to: '/tmp/deinstall2016-06-13_01-38-44PM/logs/deinstall_deconfig2016-06-13_01-39-02-PM.err'

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/asmcadc_clean2016-06-13_01-39-41-PM.log

ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/netdc_clean2016-06-13_01-39-41-PM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER

Stopping listener on node "rac3": LISTENER

Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

De-configuring Naming Methods configuration file…

Naming Methods configuration file de-configured successfully.

De-configuring backup files…

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

—————————————->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac3".

/tmp/deinstall2016-06-13_01-38-44PM/perl/bin/perl -I/tmp/deinstall2016-06-13_01-38-44PM/perl/lib -I/tmp/deinstall2016-06-13_01-38-44PM/crs/install /tmp/deinstall2016-06-13_01-38-44PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-06-13_01-38-44PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<—————————————-

—————————————->

After the root script finishes on node 3, return to the deinstall window and press Enter to let it complete.

5.4 On the retained nodes, as the grid user, update the inventory (the standard form is `$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac1,rac2" CRS=TRUE`), then check whether the node deletion succeeded.

六. Verification.

6.1 On node 1, as the grid user, check the cluster state:

[grid@rac1 ~]$ olsnodes -s

rac1 Active

rac2 Active

[grid@rac1 ~]$ olsnodes -n

rac1 1

rac2 2
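The olsnodes check can also be scripted, which is handy when the removal is part of a larger runbook. A sketch that asserts a node no longer appears in `olsnodes -s` output (the sample mirrors the output above):

```shell
#!/bin/sh
# Exit 0 when node $1 is absent from olsnodes output on stdin
# (the node name is the first field of each line).
node_absent() {
    awk -v n="$1" '$1 == n { f = 1 } END { exit f }'
}

sample='rac1 Active
rac2 Active'

if printf '%s\n' "$sample" | node_absent rac3; then
    echo "rac3 removed from the cluster"
else
    echo "rac3 is still registered"
fi
```

In practice: `olsnodes -s | node_absent rac3`.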

[grid@rac1 ~]$ crsctl stat res -t

——————————————————————————–

NAME TARGET STATE SERVER STATE_DETAILS

——————————————————————————–

Local Resources

——————————————————————————–

ora.DATADG.dg

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.LISTENER.lsnr

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.SYSTEMDG.dg

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.asm

ONLINE ONLINE rac1 Started

ONLINE ONLINE rac2 Started

ora.gsd

OFFLINE OFFLINE rac1

OFFLINE OFFLINE rac2

ora.net1.network

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.ons

ONLINE ONLINE rac1

ONLINE ONLINE rac2

——————————————————————————–

Cluster Resources

——————————————————————————–

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE rac2

ora.cvu

1 ONLINE ONLINE rac2

ora.oc4j

1 ONLINE ONLINE rac2

ora.orcl.db

1 ONLINE ONLINE rac1 Open

2 ONLINE ONLINE rac2 Open

ora.orcl.orcl_taf.svc

1 ONLINE ONLINE rac1

3 ONLINE ONLINE rac2

ora.rac1.vip

1 ONLINE ONLINE rac1

ora.rac2.vip

1 ONLINE ONLINE rac2

ora.scan1.vip

1 ONLINE ONLINE rac2

[root@rac1 ~]# ./crs_stat.sh

Name Target State Host

———————— ———- ——— ——-

ora.DATADG.dg ONLINE ONLINE rac1

ora.LISTENER.lsnr ONLINE ONLINE rac1

ora.LISTENER_SCAN1.lsnr ONLINE ONLINE rac2

ora.SYSTEMDG.dg ONLINE ONLINE rac1

ora.asm ONLINE ONLINE rac1

ora.cvu ONLINE ONLINE rac2

ora.gsd OFFLINE OFFLINE

ora.net1.network ONLINE ONLINE rac1

ora.oc4j ONLINE ONLINE rac2

ora.ons ONLINE ONLINE rac1

ora.orcl.db ONLINE ONLINE rac1

ora.orcl.orcl_taf.svc ONLINE ONLINE rac1

ora.rac1.ASM1.asm ONLINE ONLINE rac1

ora.rac1.LISTENER_RAC1.lsnr ONLINE ONLINE rac1

ora.rac1.gsd OFFLINE OFFLINE

ora.rac1.ons ONLINE ONLINE rac1

ora.rac1.vip ONLINE ONLINE rac1

ora.rac2.ASM2.asm ONLINE ONLINE rac2

ora.rac2.LISTENER_RAC2.lsnr ONLINE ONLINE rac2

ora.rac2.gsd OFFLINE OFFLINE

ora.rac2.ons ONLINE ONLINE rac2

ora.rac2.vip ONLINE ONLINE rac2

ora.scan1.vip ONLINE ONLINE rac2

6.2 On node 3, clean up the remaining home directories:

rm -rf /u01/app/grid_home

rm -rf /home/oracle
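Because `rm -rf` as root is unforgiving, a guarded wrapper that only deletes paths under an expected root is safer. A sketch, demonstrated on a throwaway directory tree rather than the real homes (the allowed root is a parameter):

```shell
#!/bin/sh
# Remove $2 only if it lives strictly under the allowed root $1.
safe_rm() {
    case $2 in
        "$1"/*) rm -rf "$2" && echo "removed $2" ;;
        *)      echo "refused: $2"; return 1 ;;
    esac
}

# Demonstrate on a temporary tree:
root=$(mktemp -d)
mkdir -p "$root/grid_home"
safe_rm "$root" "$root/grid_home"   # under the root: deleted
safe_rm "$root" /etc/hosts          # outside the root: refused, untouched
rmdir "$root"
```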

七. Summary. Adding a node in 11gR2 is done in three phases: (1) copy the GRID HOME to the new node, configure GRID, and update the OCR; (2) copy the RDBMS HOME and update the inventory; (3) use DBCA to add the instance (undo tablespace, redo logs, registration of the new database instance, and so on). Deleting a node in 11gR2 walks back through the same three phases.

Throughout the deletion, the surviving nodes and their ORACLE_HOMEs stay online the whole time. Points to note:

1) When adding or deleting a node fails partway through, in some cases the problem can be resolved by deleting the node and adding it back again.

2) A normal 11.2 GRID installation offers SSH user-equivalence setup in the OUI GUI; the add/delete node procedure has no such feature, so SSH user equivalence for the oracle (and grid) users must be configured manually.
