Add a Node to Oracle RAC 11gR2

Posted: February 13, 2014 in Oracle RAC

This guide shows how to add a node to an existing 11gR2 (11.2.0.3) Oracle RAC cluster. It assumes the node in question is available and is not part of a GNS/Grid Plug and Play cluster; in other words, the database is "Administrator-Managed". The database software is non-shared, and role separation is in use: "grid" owns the clusterware and "oracle" owns the database software. The environment is a two-node cluster running Oracle Linux 6.3 (x64) with pre-existing nodes "node1" and "node2"; we will add "node3". This guide does not cover node preparation steps or prerequisites: since a cluster already exists, it is assumed the reader knows how to prepare a node for cluster addition.
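Although full node preparation is out of scope, one quick sanity check worth scripting is that every node can resolve the new node's public, VIP, and private names before running any cluster tooling. A minimal sketch in shell, assuming the hostnames used in this guide (adjust the list for your cluster):

```shell
#!/bin/sh
# Sketch: verify that the new node's names appear in a hosts file.
# The hostnames below are the ones used in this guide; adjust as needed.
# Returns non-zero if any name is missing.
check_hosts_entries() {
    hosts_file=$1
    missing=0
    for name in node3 node3-vip node3-priv; do
        if ! grep -qw "$name" "$hosts_file"; then
            echo "missing hosts entry: $name" >&2
            missing=1
        fi
    done
    return $missing
}

# Example (run on each existing node as well as node3):
# check_hosts_entries /etc/hosts && echo "all node3 names present"
```

This only covers name resolution from local files; DNS-resolved names and the full prerequisite matrix are what "cluvfy" is for.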

From an existing node, run "cluvfy" to ensure that "node3", the node to be added, is ready from a hardware and operating system perspective:
[grid@node1 ~]$ cluvfy stage -post hwos -n node3

The cluster verification utility, "cluvfy", checks the integrity of the cluster and whether it is ready for a new node. From an existing node, verify readiness for node addition:
[grid@node1 ~]$ cluvfy stage -pre nodeadd -n node3 -fixup -verbose
A broader pre-installation check can also be run across all three nodes:
[grid@node1 ~]$ cluvfy stage -pre crsinst -n node1,node2,node3 -fixup -verbose

Pre-check for cluster services setup was successful.
Extend Clusterware
The clusterware software will be extended to the new node.
Run "addNode.sh"
Since "cluvfy" has already validated the new node, set IGNORE_PREADDNODE_CHECKS=Y so that "addNode.sh" skips its own pre-add checks, then run it from an existing node to extend the clusterware to "node3":
[grid@node1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@node1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@node1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node3-priv}"
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 5667 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes node2,node3 are available
……………………………………………………… 100% Done.

. 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Friday, January 4, 2013 11:19:48 AM CST)
………………………………………………………………………………….. 96% Done.
Home copied to new nodes

Saving inventory on nodes (Friday, January 4, 2013 11:25:19 AM CST)
. 100% Done.
Save inventory complete
WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u000/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'node3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the “root” user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u000/app/oraInventory/orainstRoot.sh #On nodes node3
/u000/app/grid/root.sh #On nodes node3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as “root”
3. Run the scripts in each cluster node

The Cluster Node Addition of /u000/app/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
[grid@node1 bin]$

Run the "orainstRoot.sh" and "root.sh" scripts on the new node as directed:
[root@node3 ~]# /u000/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u000/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u000/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node3 ~]#
[root@node3 ~]# /u000/app/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u000/app/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u000/app/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization – successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster … succeeded

If successful, the "root.sh" script will have started the clusterware daemons, the node listener, the ASM instance, and the other local resources on the new node. Verify from the new node:
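In addition to the resource listings that follow, a quick sanity check is that the core clusterware daemons appear in the process list. A minimal sketch (the daemon names are the standard 11.2 ones; this is illustrative and not a substitute for `crsctl check crs`):

```shell
#!/bin/sh
# Sketch: given the output of `ps -eo comm`, confirm the core 11.2
# clusterware daemons are present. Illustrative only.
required_daemons="ohasd.bin ocssd.bin crsd.bin evmd.bin"

check_daemons() {
    ps_output=$1
    status=0
    for d in $required_daemons; do
        if ! printf '%s\n' "$ps_output" | grep -q "^$d$"; then
            echo "daemon not running: $d" >&2
            status=1
        fi
    done
    return $status
}

# Usage on the new node:
# check_daemons "$(ps -eo comm)" && echo "core CRS daemons up"
```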

[grid@node3 ~]$ crs_stat -t -v -c node3
Name Type R/RA F/FT Target State Host
———————————————————————-
ora….SM3.asm application 0/5 0/0 ONLINE ONLINE node3
ora….E3.lsnr application 0/5 0/0 ONLINE ONLINE node3
ora.node3.ons application 0/3 0/0 ONLINE ONLINE node3
ora.node3.vip ora….t1.type 0/0 0/0 ONLINE ONLINE node3
[grid@node3 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
———————————————————————-
ora.CRS.dg ora….up.type 0/5 0/ ONLINE ONLINE node1
ora.DATADG.dg ora….up.type 0/5 0/ ONLINE ONLINE node1
ora.FRADG.dg ora….up.type 0/5 0/ ONLINE ONLINE node1
ora….ER.lsnr ora….er.type 0/5 0/ ONLINE ONLINE node1
ora….N1.lsnr ora….er.type 0/5 0/0 ONLINE ONLINE node1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE node1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE node1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora….network ora….rk.type 0/5 0/ ONLINE ONLINE node1
ora….SM1.asm application 0/5 0/0 ONLINE ONLINE node1
ora….E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip ora….t1.type 0/0 0/0 ONLINE ONLINE node1
ora….SM2.asm application 0/5 0/0 ONLINE ONLINE node2
ora….E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip ora….t1.type 0/0 0/0 ONLINE ONLINE node2
ora….SM3.asm application 0/5 0/0 ONLINE ONLINE node3
ora….E3.lsnr application 0/5 0/0 ONLINE ONLINE node3
ora.node3.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.node3.ons application 0/3 0/0 ONLINE ONLINE node3
ora.node3.vip ora….t1.type 0/0 0/0 ONLINE ONLINE node3
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE node1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE node1
ora.scan1.vip ora….ip.type 0/0 0/0 ONLINE ONLINE node1
ora.ETL.db ora….se.type 0/2 0/1 ONLINE ONLINE node1
[grid@node3 ~]$ crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.CRS.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.DATADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.FRADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ONLINE ONLINE node3 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
OFFLINE OFFLINE node3
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node1
ora.cvu
1 ONLINE ONLINE node1
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.node3.vip
1 ONLINE ONLINE node3
ora.oc4j
1 ONLINE ONLINE node1
ora.scan1.vip
1 ONLINE ONLINE node1
ora.ETL.db
1 ONLINE ONLINE node1 Open
2 ONLINE ONLINE node2 Open

Extend Oracle Database Software


From an existing node, as the database software owner, run the following command to extend the Oracle Database software to the new node "node3":
[oracle@node1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@node1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"

Performing pre-checks for node addition

Checking node reachability…
Node reachability check passed from node “node1”

Checking user equivalence…
User equivalence check passed for user “oracle”

WARNING:
Node “node3” already appears to be part of cluster

Pre-check for node addition was successful.
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 3919 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes node2,node3 are available
……………………………………………………… 100% Done.

Instantiating scripts for add node (Saturday, January 5, 2013 10:02:31 AM CST)
. 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Saturday, January 5, 2013 10:02:41 AM CST)
………………………………………………………………………………….. 96% Done.
Home copied to new nodes

Saving inventory on nodes (Saturday, January 5, 2013 10:45:12 AM CST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the “root” user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u000/app/oracle/product/11.2.0/dbhome_1/root.sh #On nodes node3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as “root”
3. Run the scripts in each cluster node

The Cluster Node Addition of /u000/app/oracle/product/11.2.0/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.

[oracle@node1 bin]$

Run the "root.sh" script on the new node as directed:
[root@node3 ~]# /u000/app/oracle/product/11.2.0/dbhome_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u000/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of “dbhome” have not changed. No need to overwrite.
The contents of “oraenv” have not changed. No need to overwrite.
The contents of “coraenv” have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Add Instance to Clustered Database

A database instance will be established on the new node. Specifically, an instance named “ETL3” will be added to “ETL” – a pre-existing clustered database.
Satisfy Node Instance Dependencies
Satisfy all node instance dependencies, such as the password file, "init.ora" parameters, and the "oratab" entry.
From the new node "node3", run the following commands to put the password file and "init.ora" file in place for the new instance and add its "oratab" entry. The "initETL1.ora" and "orapwETL1" files came over with the Oracle home copy and only need renaming:
[oracle@node3 ~]$ cd $ORACLE_HOME/dbs
[oracle@node3 dbs]$ mv initETL1.ora initETL3.ora
[oracle@node3 dbs]$ mv orapwETL1 orapwETL3
[oracle@node3 dbs]$ echo "ETL3:$ORACLE_HOME:N" >> /etc/oratab
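Appending to /etc/oratab blindly creates duplicate entries if the step is ever re-run. A small guard helps; as a sketch (the Oracle home path is the one used in this guide):

```shell
#!/bin/sh
# Sketch: add an oratab entry only if one for the SID is not already present.
# The ":N" flag means the instance is not auto-started by dbstart.
add_oratab_entry() {
    oratab=$1; sid=$2; home=$3
    if grep -q "^$sid:" "$oratab" 2>/dev/null; then
        echo "oratab entry for $sid already present" >&2
        return 0
    fi
    echo "$sid:$home:N" >> "$oratab"
}

# Usage:
# add_oratab_entry /etc/oratab ETL3 /u000/app/oracle/product/11.2.0/dbhome_1
```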

From a node with an existing instance of "ETL", issue the following commands to create the public redo log thread, undo tablespace, and "init.ora" entries needed by the new instance:
[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Jan 5 11:06:02 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter database add logfile thread 3 group 5 ('+DATADG','+FRADG') size 50M, group 6 ('+DATADG','+FRADG') size 50M;

Database altered.

SQL> alter database enable public thread 3;

Database altered.

SQL> create undo tablespace undotbs3 datafile '+DATADG' size 200M autoextend on;

Tablespace created.

SQL> alter system set undo_tablespace='undotbs3' scope=spfile sid='ETL3';

System altered.

SQL> alter system set instance_number=3 scope=spfile sid='ETL3';

System altered.

SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';

System altered.
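These per-instance statements follow a fixed pattern keyed on the instance number, so they can be generated for any node rather than edited by hand. A sketch in shell, assuming this guide's diskgroup names and sizes; the two-redo-groups-per-thread numbering is an assumption from this cluster's layout, and the cluster-wide cluster_database_instances setting is left out since it is set once, not per instance:

```shell
#!/bin/sh
# Sketch: emit the redo-thread/undo SQL for instance number N of the "ETL"
# database. Diskgroup names, redo size, and undo size match this guide.
gen_instance_sql() {
    n=$1
    # Assumption: each thread owns two redo groups, numbered consecutively.
    g1=$((2 * n - 1)); g2=$((2 * n))
    cat <<EOF
alter database add logfile thread $n group $g1 ('+DATADG','+FRADG') size 50M, group $g2 ('+DATADG','+FRADG') size 50M;
alter database enable public thread $n;
create undo tablespace undotbs$n datafile '+DATADG' size 200M autoextend on;
alter system set undo_tablespace='undotbs$n' scope=spfile sid='ETL$n';
alter system set instance_number=$n scope=spfile sid='ETL$n';
EOF
}

# gen_instance_sql 3 matches the thread-3 statements used in this guide
# (redo groups 5 and 6, undotbs3, sid ETL3).
```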

Update Oracle Cluster Registry (OCR)
The OCR will be updated to account for the new instance, "ETL3", being added to the "ETL" cluster database.
Add the "ETL3" instance to the "ETL" database configuration and verify:
[oracle@node3 ~]$ srvctl add instance -d ETL -i ETL3 -n node3
[oracle@node3 ~]$ srvctl status database -d ETL -v
Instance ETL1 is running on node node1. Instance status: Open.
Instance ETL2 is running on node node2. Instance status: Open.
Instance ETL3 is not running on node node3
[oracle@node3 ~]$
[oracle@node3 ~]$ srvctl config database -d ETL
Database unique name: ETL
Database name: ETL
Oracle home: /u000/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATADG/ETL/spfileETL.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ETL
Database instances: ETL1,ETL2,ETL3
Disk Groups: DATADG,FRADG
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@node3 ~]$

Start the Instance
Now that all the prerequisites have been satisfied and OCR updated, the “ETL3” instance will be started.
Start the newly created instance – “ETL3” – and verify
[oracle@node3 ~]$ srvctl start instance -d ETL -i ETL3
[oracle@node3 ~]$
[oracle@node3 ~]$ srvctl status database -d ETL -v
Instance ETL1 is running on node node1. Instance status: Open.
Instance ETL2 is running on node node2. Instance status: Open.
Instance ETL3 is running on node node3. Instance status: Open.
[oracle@node3 ~]$
Copy the "tnsnames.ora" configuration from an existing node so that "node3" resolves the same service names:
[oracle@node2 admin]$ pwd
/u000/app/oracle/product/11.2.0/dbhome_1/network/admin
[oracle@node2 admin]$ scp tnsnames.ora node3:/u000/app/oracle/product/11.2.0/dbhome_1/network/admin
tnsnames.ora 100% 342 0.3KB/s 00:00
[oracle@node2 admin]$
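The contents of the copied "tnsnames.ora" are not shown above. For reference, a SCAN-based entry for the "ETL" database might look like the following; the SCAN name "cluster-scan" is hypothetical and should be replaced by your cluster's actual SCAN name:

```
ETL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ETL)
    )
  )
```

Connecting through the SCAN means the entry needs no changes as nodes are added or removed.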
[oracle@node3 ~]$ cat /etc/issue
Oracle Linux Server release 6.3
Kernel \r on an \m

[oracle@node3 ~]$ uname -p
x86_64
[oracle@node3 ~]$
[oracle@node3 ~]$
[oracle@node3 ~]$ hostname
node3
[oracle@node3 ~]$
[oracle@node3 ~]$ sqlplus system@ETL

SQL*Plus: Release 11.2.0.3.0 Production on Sat Jan 5 11:29:32 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> col host_name format a11
SQL> set line 300
SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;

INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS ACTIVE_ST INSTANCE_ROLE DATABASE_STATUS
—————- ———– —————– ——————- ———— ——— —————— —————–
ETL3 node3 11.2.0.3.0 2013-01-05 11:23:16 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
ETL1 node1 11.2.0.3.0 2013-01-05 09:53:24 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
ETL2 node2 11.2.0.3.0 2013-01-04 17:34:40 OPEN NORMAL PRIMARY_INSTANCE ACTIVE

SQL> conn hr@ETL
Enter password:
Connected.
SQL> select tname from tab;

TNAME
——————————
COUNTRIES
DEPARTMENTS
EMPLOYEES
EMP_DETAILS_VIEW
JOBS
JOB_HISTORY
LOCATIONS
REGIONS

8 rows selected.
[grid@node3 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[grid@node3 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
— —– —————– ——— ———
1. ONLINE 88a0a181a53c4fe0bfc583095b1f3c4b (/dev/asm-diskb) [CRS]
Located 1 voting disk(s).
[grid@node3 ~]$
[grid@node3 ~]$
[grid@node3 ~]$ crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.CRS.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.DATADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.FRADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ONLINE ONLINE node3 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
OFFLINE OFFLINE node3
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node2
ora.cvu
1 ONLINE ONLINE node3
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.node3.vip
1 ONLINE ONLINE node3
ora.oc4j
1 ONLINE ONLINE node3
ora.scan1.vip
1 ONLINE ONLINE node2
ora.ETL.db
1 ONLINE ONLINE node1 Open
2 ONLINE ONLINE node2 Open
3 ONLINE ONLINE node3 Open
