Add New Cluster Node on Oracle RAC 12.2

This procedure describes how to add an additional node to an Oracle Database 12c (12.1.0.1, 12.1.0.2 and 12.2.0.1) Real Application Clusters environment.
--------------------------------------------------------------------
OS: Oracle Enterprise Linux 7
Existing Nodes: racNode1, racNode2
Node to be added: racNode3
Database Name: racdb
DB Version: 12.2.0.1
1- Prerequisites
After installing the Operating System on the new node, you have to:

- Align the /etc/sysctl.conf and /etc/security/limits.conf files on racNode3 with those on racNode1 and racNode2
- Disable the firewall between the nodes
- Disable SELinux on all the nodes (a quick sketch of these steps follows this list)
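As a quick reference, here is a minimal sketch of these three steps on racNode3 (it assumes root can scp from racNode1; otherwise copy the two files manually, and always double-check the values against your existing nodes):

# Align kernel parameters and shell limits with the existing nodes
scp racNode1:/etc/sysctl.conf /etc/sysctl.conf
scp racNode1:/etc/security/limits.conf /etc/security/limits.conf
sysctl -p

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux (setenforce 0 is immediate, the config change applies after reboot)
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config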
In addition to the basic OS installation, the following packages need to be installed. This includes both the 64-bit and 32-bit versions of some packages.
# From Public Yum or ULN
yum install binutils -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y
yum install zlib-devel -y
yum install zlib-devel.i686 -y
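If you prefer, the same list can be installed with a single yum transaction; this is only a convenience and installs exactly the packages listed above:

yum install -y binutils compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ \
  glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 \
  libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
  libaio libaio.i686 libaio-devel libaio-devel.i686 \
  libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 \
  libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 \
  make sysstat unixODBC unixODBC-devel zlib-devel zlib-devel.i686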
You also need to:

- Install all the Oracle ASM libraries
- Initialize and configure OracleASM and make all the disk groups available to the new node (see the sketch after the /etc/hosts example below)
- Update the IP addresses of the new node in the /etc/hosts file. The same entries must exist on all the other nodes of the cluster.

Here is the resulting /etc/hosts file:
[root@racNode3 ~]# more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#Public IP
XXX.XXX.XXX.XXX racNode1 racNode1.mydomain
XXX.XXX.XXX.XXX racNode2 racNode2.mydomain
XXX.XXX.XXX.XXX racNode3 racNode3.mydomain
#VIP
XXX.XXX.XXX.XXX racNode1-vip racNode1-vip.mydomain
XXX.XXX.XXX.XXX racNode2-vip racNode2-vip.mydomain
XXX.XXX.XXX.XXX racNode3-vip racNode3-vip.mydomain
#Private Network
XXX.XXX.XXX.XXX racNode1-prv1
XXX.XXX.XXX.XXX racNode1-prv2
XXX.XXX.XXX.XXX racNode2-prv1
XXX.XXX.XXX.XXX racNode2-prv2
XXX.XXX.XXX.XXX racNode3-prv1
XXX.XXX.XXX.XXX racNode3-prv2
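For the OracleASM bullet above, the configuration on racNode3 typically looks like the following (a minimal sketch; it assumes the oracleasm-support and oracleasmlib RPMs are already installed and that grid is the ASM owner):

[root@racNode3 ~]# oracleasm configure -i     # answer grid / asmadmin, start and scan on boot = y
[root@racNode3 ~]# oracleasm init
[root@racNode3 ~]# oracleasm scandisks        # discover the disks already labelled from racNode1/racNode2
[root@racNode3 ~]# oracleasm listdisks        # all existing ASM disks must be listed here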
2- Configure Passwordless ssh
Create passwordless ssh connectivity between the existing nodes of the cluster and the new node for the Grid Infrastructure and Oracle Database owners (in our case, the user grid owns the Grid Infrastructure home and oracle owns the Oracle Database home). This can be done by running the sshUserSetup.sh script shipped with the Grid Infrastructure software.
[grid@racNode3 ~]$ find /u01 -name sshUserSetup.sh
/u01/12.2.0/grid/deinstall/sshUserSetup.sh
/u01/12.2.0/grid/oui/prov/resources/scripts/sshUserSetup.sh
[grid@racNode3 ~]$
[grid@racNode3 deinstall]$ ./sshUserSetup.sh -user grid -hosts "racNode1 racNode2 racNode3" -noPromptPassphrase -confirm -advanced
You will be prompted for the grid user's password on all nodes.
[grid@racNode3 deinstall]$ ./sshUserSetup.sh -user oracle -hosts "racNode1 racNode2 racNode3" -noPromptPassphrase -confirm -advanced
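Once both runs complete, passwordless ssh can be quickly verified (a simple loop; run it as grid and again as oracle):

[grid@racNode3 ~]$ for host in racNode1 racNode2 racNode3; do ssh $host hostname; done
# each hostname must be printed without any password prompt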
3- Cluster verification
Run the CLUVFY utility with the "stage -pre nodeadd" option to check the prerequisites on the new node. This must be run from a node that is already part of the cluster.

[grid@racNode1 bin]$ ./cluvfy stage -pre nodeadd -n racNode3 -verbose > /home/grid/racNode3prenodeadd.txt
You can check the log file to see which conditions are not met, e.g.:
[grid@racNode1 ~]$ grep -i "FAILED" racNode3prenodeadd.txt
Verifying User Equivalence ...FAILED (PRVG-2019, PRKN-1038)
Verifying User Equivalence ...FAILED
"racNode1"
to node "racNode3" failed
You must fix all the FAILED checks that appear in the log file and re-run the cluvfy utility until the pre-check passes.
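If you want CVU to help with the corrections, it can also generate fixup scripts for the checks it knows how to repair (a sketch; the output file name is just an example):

[grid@racNode1 bin]$ ./cluvfy stage -pre nodeadd -n racNode3 -fixup -verbose > /home/grid/racNode3prenodeadd_fixup.txt

Any generated runfixup.sh script is then executed as root on the node indicated by cluvfy.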
Once you have the message "Pre-check for node addition was successful", you can continue. In our case, we have the messages below:

Pre-check for node addition was successful.
Warnings were encountered during execution of CVU verification request "stage -pre nodeadd".
Verifying OS Kernel Version ...WARNING
PRVF-7524 : Kernel version is not consistent across all the nodes.
Kernel version = "3.8.13-118.17.5.el7uek.x86_64" found on nodes: racNode1.
Kernel version = "4.1.12-61.1.18.el7uek.x86_64" found on nodes: racNode3.

CVU operation performed:      stage -pre nodeadd
Date:                         May 28, 2018 11:30:46 AM
CVU home:                     /u01/12.2.0/grid/
User:                         grid
[grid@racNode1 ~]$
This check raises only a warning: the existing node racNode1 runs an older UEK kernel (3.8.13) than racNode3. We can safely ignore this kernel version difference and move forward.
In 12c, the location of addnode.sh is different from that in 11gR2. In 11gR2, it was located under GRID_HOME/oui/bin, but in 12c it's located under GRID_HOME/addnode.

On a node which is already part of the cluster, run the addnode.sh script. We will run it from racNode1 in GUI mode.
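For reference, the same node addition can be done in silent mode instead of the GUI (a sketch using the hostnames of this example; $GRID_HOME is assumed to point to /u01/12.2.0/grid):

[grid@racNode1 ~]$ cd $GRID_HOME/addnode
[grid@racNode1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racNode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racNode3-vip}"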
4- Grid Infrastructure Installation
As the grid user, run the addnode.sh program:

[grid@racNode1 ~]$ $GRID_HOME/addnode/addnode.sh
Click on the Add button to add the new node to the cluster. You must specify the full hostname and the virtual hostname, then click OK.

Provide the grid user's password and then verify the ssh connectivity.

Once the ssh connectivity check is done, click on the Next button. The installer will check the OS parameters and all the needed packages.

You can ignore the OS kernel version warning, as noted in point 3.

Check the "Ignore All" checkbox, click on the Next button and then on the Install button.
As the root user, run the following scripts on the racNode3 node.

Open another shell window and run the orainstRoot.sh script first, as below:
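(The inventory path below is an assumption; use the exact location displayed in the installer dialog, usually under the central oraInventory directory.)

[root@racNode3 ~]# /u01/app/oraInventory/orainstRoot.sh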
Then run the root.sh script as below:
[root@racnode3 tmp]# /u01/12.2.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/12.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/racnode3/crsconfig/rootcrs_racnode3_2018-05-29_10-14-22PM.log
2018/05/29 22:14:24 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2018/05/29 22:14:24 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/05/29 22:14:49 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/05/29 22:14:49 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2018/05/29 22:14:52 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2018/05/29 22:14:52 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2018/05/29 22:14:53 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2018/05/29 22:14:56 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2018/05/29 22:14:56 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2018/05/29 22:14:57 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2018/05/29 22:14:58 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2018/05/29 22:15:01 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2018/05/29 22:15:01 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2018/05/29 22:15:03 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2018/05/29 22:15:18 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2018/05/29 22:15:45 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2018/05/29 22:15:46 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode3'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/05/29 22:16:14 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2018/05/29 22:16:15 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode3'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/05/29 22:16:33 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'racnode3'
CRS-2672: Attempting to start 'ora.evmd' on 'racnode3'
CRS-2676: Start of 'ora.mdnsd' on 'racnode3' succeeded
CRS-2676: Start of 'ora.evmd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racnode3'
CRS-2676: Start of 'ora.gpnpd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'racnode3'
CRS-2676: Start of 'ora.gipcd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode3'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode3'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode3'
CRS-2676: Start of 'ora.diskmon' on 'racnode3' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode3'
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode3'
CRS-2676: Start of 'ora.ctssd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'racnode3'
CRS-2676: Start of 'ora.crf' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode3'
CRS-2676: Start of 'ora.crsd' on 'racnode3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode3'
CRS-2676: Start of 'ora.asm' on 'racnode3' succeeded
CRS-6017: Processing resource auto-start for servers: racnode3
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'racnode1'
CRS-2672: Attempting to start 'ora.net1.network' on 'racnode3'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'racnode3'
CRS-2672: Attempting to start 'ora.chad' on 'racnode3'
CRS-2676: Start of 'ora.net1.network' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'racnode3'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'racnode1'
CRS-2676: Start of 'ora.chad' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'racnode3'
CRS-2676: Start of 'ora.scan2.vip' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'racnode3'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode3'
CRS-2676: Start of 'ora.ons' on 'racnode3' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'racnode3' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'racnode3'
CRS-2672: Attempting to start 'ora.ISL_DG.dg' on 'racnode3'
CRS-2676: Start of 'ora.ISL_DG.dg' on 'racnode3' succeeded
CRS-2676: Start of 'ora.DATA.dg' on 'racnode3' succeeded
CRS-2664: Resource 'ora.orcldb.db' is already running on 'racnode1'
CRS-6016: Resource auto-start has completed for server racnode3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/05/29 22:18:50 CLSRSC-343: Successfully started Oracle Clusterware stack
2018/05/29 22:18:50 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/05/29 22:19:06 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2018/05/29 22:19:14 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
You have new mail in /var/spool/mail/root
[root@racnode3 tmp]#
After root.sh completes successfully, click on OK.

Let's check the CRS status and the resources on the new node:
[root@racnode3 ~]# cd /u01/12.2.0/grid/bin/
[root@racnode3 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.ISL_DG.dg
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.OCR_DG.dg
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.chad
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.net1.network
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.ons
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
               ONLINE  ONLINE       racnode3                 STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racnode1                 STABLE
               OFFLINE OFFLINE      racnode2                 STABLE
               OFFLINE OFFLINE      racnode3                 STABLE
-------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode3                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode1                 169.254.30.126 10.252.40.106 10.252.40.107,STABLE
ora.asm
      1        ONLINE  ONLINE       racnode1                 Started,STABLE
      2        ONLINE  ONLINE       racnode2                 Started,STABLE
      3        ONLINE  ONLINE       racnode3                 Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.entcdb01.db
      1        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
      2        ONLINE  ONLINE       racnode2                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
ora.entcdb02.db
      1        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
      2        ONLINE  ONLINE       racnode2                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
ora.racnode1.vip
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.racnode2.vip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.racnode3.vip
      1        ONLINE  ONLINE       racnode3                 STABLE
ora.esfprod.db
      1        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
      2        ONLINE  ONLINE       racnode2                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
ora.isldb.db
      1        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
      2        ONLINE  ONLINE       racnode2                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
ora.lcms.db
      1        ONLINE  ONLINE       racnode2                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
      2        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode1                 Open,STABLE
ora.mtnasset90.db
      1        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
ora.orcldb.db
      1        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/oracle/product/12.2.0/db_1,STABLE
      2        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode3                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode1                 STABLE
--------------------------------------------------------------------------
[root@racnode3 bin]#
Check cluster on all the nodes:
[root@racnode3 bin]# ./crsctl check cluster -all
**************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode3 bin]#
5- Oracle Database Software Installation
Now it's time to add the RDBMS home on the new node. This is done in the same way as above, by running the addnode.sh script from the RDBMS home on an existing node. The script is located in the $ORACLE_HOME/addnode directory and, here, it's being run from racNode1. We are using the GUI mode as above.
As the oracle user, run the addnode.sh script:
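(A minimal sketch of the invocation; $ORACLE_HOME is assumed to point to the existing 12.2 RDBMS home. A silent-mode alternative is shown on the second line.)

[oracle@racNode1 ~]$ $ORACLE_HOME/addnode/addnode.sh
[oracle@racNode1 ~]$ $ORACLE_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={racNode3}"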
Also check the ssh connectivity for the oracle user, then click on Next.

Ignore the OS kernel version warning, then click on the Install button.

When the installer prompts you, run the root.sh script as the root user on the nodes:
[root@racnode2 ~]# /u01/app/oracle/product/12.2.0/db_1/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to
overwrite.
The contents of "oraenv" have not changed. No need to
overwrite.
The contents of "coraenv" have not changed. No need to
overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@racnode2 ~]#
[root@racnode3 bin]# /u01/app/oracle/product/12.2.0/db_1/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to
overwrite.
The contents of "oraenv" have not changed. No need to
overwrite.
The contents of "coraenv" have not changed. No need to
overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@racnode3 bin]#
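As a final check, cluvfy can validate the node addition after the fact, and olsnodes confirms that racNode3 is now a cluster member (a short sketch; run both as the grid user from any node):

[grid@racNode1 ~]$ cluvfy stage -post nodeadd -n racNode3 -verbose
[grid@racNode1 ~]$ olsnodes -n -s -t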