Tutorial: Install and Configure a Two-Node Oracle 19c RAC on Linux Using Oracle VirtualBox
By Donatien MBADI OUM, OCP, OCE, Oracle Instructor
What is Oracle RAC?
Oracle Real Application Clusters (RAC) is a cluster architecture in which several servers serve a single database over shared storage, so the database keeps running even if one node fails. This architecture requires at least two servers.
For this Real Application Clusters installation we will use two Oracle Linux servers, each with the network cards needed for the public and private networks, and a shared disk visible to both operating systems.
In production, a two-node RAC environment looks like this:
In this article we are going to see, step by step, how to install and configure Oracle RAC on Oracle Enterprise Linux 7.3. We will use Oracle VirtualBox 6.0.12 as the virtualization environment; we assume it is already installed.
High level steps
1. Configure Oracle VirtualBox
2. Install OEL 7.3
3. DNS nameserver configuration
4. Export the virtual machine
5. Create new disks for ASM
6. Import the virtual machine
7. Configure the shared disks
8. Configure the second node
9. Pre-check for RAC setup
10. Configure Oracle ASM
11. Install and configure Oracle Grid Infrastructure for a cluster
12. Oracle 19c database software installation
13. Add an additional disk group
14. Create the RAC database
15. Post-check for RAC setup
RAC Configuration

|                      | Active Server 1                  | Active Server 2                  |
|----------------------|----------------------------------|----------------------------------|
| Hostname             | ol7-193-rac1                     | ol7-193-rac2                     |
| Public IP            | 192.168.56.10                    | 192.168.56.11                    |
| Private name and IP  | ol7-193-rac1-priv / 192.168.10.1 | ol7-193-rac2-priv / 192.168.10.2 |
| Virtual name and IP  | ol7-193-rac1-vip / 192.168.56.20 | ol7-193-rac2-vip / 192.168.56.21 |
| Gateway              | 192.168.56.1                     |                                  |
| Cluster name         | ol7-193-rac                      |                                  |
| SCAN name and IPs    | ol7-193-rac-scan / 192.168.56.30, 192.168.56.31, 192.168.56.32 |    |
Download software
Before starting, download the Oracle Linux 7.3 installation ISO and the Oracle 19c Grid Infrastructure and Database software zip files (for example LINUX.X64_193000_db_home.zip and the matching grid home zip) from the Oracle website.
1. Oracle VirtualBox Configuration
From VirtualBox, create a new virtual machine for Active Server 1, called ol7-193-rac1.
Click on Machine/New.
Provide the name ol7-193-rac1, Linux as the type and Oracle (64-bit) as the version. Choose a folder that has enough free space on your local disk.
Set the memory to 16GB (Oracle recommends at least 8GB of RAM for Oracle Grid Infrastructure; in production you should have enough RAM to avoid node eviction) and click Next.
Accept the default option to create a new virtual hard disk by choosing the Create a virtual hard disk now option and clicking the Create button.
Select the default VDI type and click the Next button.
Select the Dynamically allocated option and click Next.
Provide the location where you want to save your virtual machine disk files and enter the size of the disk. We are choosing 50GB, but you can choose more in production.
Click the Create button to create your virtual machine as below:
Click on Settings and select Storage.
Click on the optical drive and choose Virtual Optical Disk File.
Choose the ISO image and click the Open button.
Click on Advanced and change Shared Clipboard and Drag'n'Drop to Bidirectional. Also choose the Snapshot folder.
Add three adapters:
- Public
- Private
- Internet (for installing RPMs)
Click on the Adapter 1 tab.
Make sure Adapter 1 is enabled by checking Enable Network Adapter, and attach it to a Host-Only Adapter. This interface will be used as the public network, for example for connections to the RAC database from applications.
Click on the Adapter 2 tab.
Make sure Adapter 2 is enabled and attach it to an Internal Network. You can name this network Private Network.
Click on the Adapter 3 tab.
Enable it and attach it to NAT. This adapter will be used for Internet access. Click the OK button.
Now you can start the Linux installation by clicking the Start button.
After starting you get the screen below; select Install Oracle Linux 7.3.
2. Install Oracle Linux 7.3
Choose your language and click on Continue.
Click on Software Selection.
Select the following packages and click the Done button:
Click on Installation Destination.
Select the 50GB disk and choose I will configure partitioning. Click the Done button.
Enter the mount point /boot with a desired capacity of 2GB.
Press the + button and enter the mount point swap with a desired capacity of 16GB.
Press the + button and enter the mount point / with all the remaining available space as the desired capacity. In production, you may create /tmp, /var, /home and other specific file systems.
Click the Done button.
Select Accept Changes.
Click on Date & Time.
Select the Region and City and press the Done button.
Click on Network & Hostname.
Select the first network card (enp0s3) to configure the public IP.
Select the IPv4 Settings tab and change Method to Manual. Press the Add button and fill in the address as follows: 192.168.56.10; Netmask: 255.255.255.0; Gateway: 0.0.0.0. Then press Save and the Done button.
Select the second network card (enp0s8) and configure the private network in the same way.
Select the third network card (enp0s9), used for Internet access, and choose DHCP by checking the Connect automatically box.
Click on ROOT PASSWORD to set the root password.
Provide your root password and confirm it.
Now, click on LICENSE INFORMATION.
Accept the license agreement.
Click on FINISH CONFIGURATION.
After the installation has completed, you can create your own account as follows:
Click on Start Using Linux Server.
First, check that the Internet connection is working fine.
The package oracle-database-preinstall-19c contains all the prerequisites for Oracle Linux using the Oracle Unbreakable Enterprise Kernel.
[root@localhost ~]# yum update -y
.....
[root@localhost ~]# yum install -y oracle-database-preinstall-19c.x86_64
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-19c.x86_64 0:1.0-3.el7 will be installed
--> Processing Dependency: libaio-devel for package: oracle-database-preinstall-19c-1.0-3.el7.x86_64
--> Processing Dependency: ksh for package: oracle-database-preinstall-19c-1.0-3.el7.x86_64
--> Running transaction check
---> Package ksh.x86_64 0:20120801-142.0.1.el7 will be installed
---> Package libaio-devel.x86_64 0:0.3.109-13.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
==============================================================================================================
 Package                          Arch     Version                Repository   Size
==============================================================================================================
Installing:
 oracle-database-preinstall-19c   x86_64   1.0-3.el7              ol7_latest   27 k
Installing for dependencies:
 ksh                              x86_64   20120801-142.0.1.el7   ol7_latest  882 k
 libaio-devel                     x86_64   0.3.109-13.el7         ol7_latest   12 k

Transaction Summary
==============================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 921 k
Installed size: 3.2 M
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/libaio-devel-0.3.109-13.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for libaio-devel-0.3.109-13.el7.x86_64.rpm is not installed
(1/3): libaio-devel-0.3.109-13.el7.x86_64.rpm                      |  12 kB  00:00:00
(2/3): oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm         |  27 kB  00:00:00
(3/3): ksh-20120801-142.0.1.el7.x86_64.rpm                         | 882 kB  00:00:01
--------------------------------------------------------------------------------------------------------------
Total                                                     465 kB/s | 921 kB  00:00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
 Userid     : "Oracle OSS group (Open Source Software group) <build@oss.oracle.com>"
 Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
 Package    : 7:oraclelinux-release-7.3-1.0.4.el7.x86_64 (@anaconda/7.3)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ksh-20120801-142.0.1.el7.x86_64                               1/3
  Installing : libaio-devel-0.3.109-13.el7.x86_64                            2/3
  Installing : oracle-database-preinstall-19c-1.0-3.el7.x86_64               3/3
  Verifying  : libaio-devel-0.3.109-13.el7.x86_64                            1/3
  Verifying  : oracle-database-preinstall-19c-1.0-3.el7.x86_64               2/3
  Verifying  : ksh-20120801-142.0.1.el7.x86_64                               3/3

Installed:
  oracle-database-preinstall-19c.x86_64 0:1.0-3.el7

Dependency Installed:
  ksh.x86_64 0:20120801-142.0.1.el7        libaio-devel.x86_64 0:0.3.109-13.el7

Complete!
[root@localhost ~]#
[root@localhost ~]# yum install oracleasm-support
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracleasm-support.x86_64 0:2.1.11-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
==============================================================================================================
 Package             Arch     Version        Repository   Size
==============================================================================================================
Installing:
 oracleasm-support   x86_64   2.1.11-2.el7   ol7_latest   85 k

Transaction Summary
==============================================================================================================
Install  1 Package

Total download size: 85 k
Installed size: 266 k
Is this ok [y/d/N]: y
Downloading packages:
oracleasm-support-2.1.11-2.el7.x86_64.rpm                          |  85 kB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracleasm-support-2.1.11-2.el7.x86_64                         1/1
Note: Forwarding request to 'systemctl enable oracleasm.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/oracleasm.service to /usr/lib/systemd/system/oracleasm.service.
  Verifying  : oracleasm-support-2.1.11-2.el7.x86_64                         1/1

Installed:
  oracleasm-support.x86_64 0:2.1.11-2.el7

Complete!
[root@localhost ~]# yum install bind* -y
.....
Once yum is updated, configure the VirtualBox Guest Additions.
Select Devices and Insert Guest Additions CD image.
Click the Run button.
The VirtualBox tools are configured successfully, without errors.
Configure the hosts file
As the root user, run vi /etc/hosts and configure the hosts file as below:
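For reference, a hosts file matching the addresses in the RAC configuration table above would look like this (a sketch, assuming the localdomain.com domain used in the DNS configuration below; the SCAN name is resolved through DNS only, not /etc/hosts):

127.0.0.1        localhost localhost.localdomain
# Public
192.168.56.10    ol7-193-rac1.localdomain.com        ol7-193-rac1
192.168.56.11    ol7-193-rac2.localdomain.com        ol7-193-rac2
# Private
192.168.10.1     ol7-193-rac1-priv.localdomain.com   ol7-193-rac1-priv
192.168.10.2     ol7-193-rac2-priv.localdomain.com   ol7-193-rac2-priv
# Virtual
192.168.56.20    ol7-193-rac1-vip.localdomain.com    ol7-193-rac1-vip
192.168.56.21    ol7-193-rac2-vip.localdomain.com    ol7-193-rac2-vip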
Create the ASM groups:
[root@localhost ~]# groupadd -g 54327 asmdba
[root@localhost ~]# groupadd -g 54328 asmoper
[root@localhost ~]# groupadd -g 54329 asmadmin

The oracle user was already created by the preinstall package; assign it to the new groups and set its password:
[root@localhost ~]# usermod -G asmdba,asmoper,asmadmin oracle
[root@localhost ~]# passwd oracle
Changing password for user oracle.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@localhost ~]#

Create the directories in which the Oracle software will be installed:
[root@localhost ~]# mkdir -p /u01/app/19c/grid
[root@localhost ~]# mkdir -p /u01/app/oracle/product/19c/db_1
[root@localhost ~]# chown -R oracle:oinstall /u01
[root@localhost ~]# chmod -R 775 /u01/
Configure the bash profile of the oracle user
From /home/oracle, as the oracle user, open the .bash_profile file and add the following:
[oracle@ol7-193-rac1 ~]$ vi /home/oracle/.bash_profile

# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/19c/grid
export DB_HOME=$ORACLE_BASE/product/19c/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=racdb19
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias grid='. /home/oracle/grid.env'
alias db='. /home/oracle/db.env'

Create the /home/oracle/grid.env file referenced by the grid alias:

export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Create the /home/oracle/db.env file referenced by the db alias:

export ORACLE_SID=racdb19
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
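With these files in place, a new oracle session can switch between the Grid and database environments through the aliases; a quick usage example:

[oracle@ol7-193-rac1 ~]$ grid    # ORACLE_HOME now points to the Grid home, ORACLE_SID=+ASM1
[oracle@ol7-193-rac1 ~]$ db      # ORACLE_HOME now points to the database home, ORACLE_SID=racdb19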
Firewall Stop and Disable
Stop and disable the firewall for the installation; it can be re-enabled, with the required ports opened, after the installation.
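A minimal sketch of that step, assuming the default firewalld service on Oracle Linux 7:

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld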
Chrony NTP Configuration
[root@localhost ~]# systemctl enable chronyd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
[root@localhost ~]# systemctl restart chronyd.service
[root@localhost ~]# chronyc -a 'burst 4/4'
200 OK
200 OK
[root@localhost ~]# chronyc -a makestep
200 OK
200 OK
[root@localhost ~]#
3. DNS Nameserver Configuration
a- Check the bind packages
[root@localhost ~]# rpm -qa | grep bind
keybinder3-0.3.0-1.el7.x86_64
bind-9.9.4-37.el7.x86_64
bind-libs-lite-9.9.4-37.el7.i686
bind-chroot-9.9.4-37.el7.x86_64
bind-pkcs11-9.9.4-37.el7.x86_64
bind-libs-9.9.4-37.el7.x86_64
bind-pkcs11-utils-9.9.4-37.el7.x86_64
bind-utils-9.9.4-37.el7.x86_64
bind-license-9.9.4-37.el7.noarch
bind-libs-lite-9.9.4-37.el7.x86_64
rpcbind-0.2.0-38.el7.x86_64
bind-libs-9.9.4-37.el7.i686
bind-pkcs11-libs-9.9.4-37.el7.i686
bind-pkcs11-libs-9.9.4-37.el7.x86_64
bind-dyndb-ldap-10.0-5.el7.x86_64
[root@localhost ~]#
b- Configure the name server
Back up the named.conf file, then edit it and define the zones as below:
[root@localhost ~]# cp /etc/named.conf /etc/named.conf_bkp
[root@ol7-193-rac1 ~]# vim /etc/named.conf

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

zone "localdomain.com" IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};

zone "56.168.192.in-addr.arpa." IN {
        type master;
        file "56.168.192.in-addr.arpa";
        allow-update { none; };
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
c- Create the forward and reverse zone files referenced in named.conf
- Forward zone
[root@localhost ~]# cp /var/named/named.localhost /var/named/localdomain.zone
[root@localhost ~]# vim /var/named/localdomain.zone

$TTL 86400
@   IN  SOA  ol7-193-rac1.localdomain.com. root.localdomain.com. (
        2014051001  ; serial
        3600        ; refresh
        1800        ; retry
        604800      ; expire
        86400       ; minimum
)
@                  IN  NS  ol7-193-rac1.localdomain.com.
localhost          IN  A   127.0.0.1
ol7-193-rac1       IN  A   192.168.56.10
ol7-193-rac2       IN  A   192.168.56.11
ol7-193-rac1-priv  IN  A   192.168.10.1
ol7-193-rac2-priv  IN  A   192.168.10.2
ol7-193-rac1-vip   IN  A   192.168.56.20
ol7-193-rac2-vip   IN  A   192.168.56.21
ol7-193-rac-scan   IN  A   192.168.56.30
ol7-193-rac-scan   IN  A   192.168.56.31
ol7-193-rac-scan   IN  A   192.168.56.32
- Reverse zone
[root@localhost ~]# vim /var/named/56.168.192.in-addr.arpa

$TTL 86400
@   IN  SOA  ol7-193-rac1.localdomain.com. root.localdomain.com. (
        2   ; serial
        3H  ; refresh
        1H  ; retry
        1W  ; expire
        1H ); minimum
@    IN  NS   ol7-193-rac1.localdomain.com.
@    IN  PTR  localdomain.com.
10   IN  PTR  ol7-193-rac1.localdomain.com.
30   IN  PTR  ol7-193-rac-scan.localdomain.com.
31   IN  PTR  ol7-193-rac-scan.localdomain.com.
32   IN  PTR  ol7-193-rac-scan.localdomain.com.
Change the group of the zone files to named as below:
[root@localhost ~]# chgrp named /var/named/localdomain.zone
[root@localhost ~]# chgrp named /var/named/56.168.192.in-addr.arpa

d- Start the DNS service
[root@localhost ~]# systemctl start named.service

e- Deploy iptables rules to allow the DNS service
[root@ol7-193-rac1 named]# iptables -A INPUT -i lo -j ACCEPT
[root@ol7-193-rac1 named]# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
[root@ol7-193-rac1 named]# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
[root@ol7-193-rac1 named]# iptables -A INPUT -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT
[root@ol7-193-rac1 named]# iptables -A INPUT -j DROP
[root@ol7-193-rac1 named]#
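Each node must also use this name server for lookups. A minimal /etc/resolv.conf for that, assuming node 1 runs named (the nslookup output below queries it at 127.0.0.1):

search localdomain.com
nameserver 127.0.0.1

On the second node, point nameserver at 192.168.56.10 instead. If the second node is to query this server, the options section of /etc/named.conf must also listen on 192.168.56.10 and allow queries from the 192.168.56.0/24 network.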
f- Check the available hosts in DNS
[root@ol7-193-rac1 ~]# nslookup ol7-193-rac-scan
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   ol7-193-rac-scan.localdomain.com
Address: 192.168.56.32
Name:   ol7-193-rac-scan.localdomain.com
Address: 192.168.56.31
Name:   ol7-193-rac-scan.localdomain.com
Address: 192.168.56.30
[root@ol7-193-rac1 ~]#
4. Export the Virtual Machine
Click on File/Export Appliance.
Select the virtual machine and press the Next button.
Select the location of the backup and click on the Next button.
Click the Export button.

5. Create the Shared Disks
After the backup has completed, we need to add disks for ASM storage.
Click on Settings/Storage/Controller: SATA.
Click Add Hard Disk.
Click on the Create new disk button.
Select the VDI (VirtualBox Disk Image) option and click the Next button.
Choose the Fixed size option and click Next.
Choose and select your shared location and specify the size of the disk.
Click on the Create button.
As with the first shared disk, you also need to create the disk for OCR. In production you can add other shared disks if needed.

6. Import the Virtual Machine
Restore the backup that was taken earlier and rename it ol7-193-rac2.
Select File/Import Appliance.
Select the appliance to restore.
Change the name to ol7-193-rac2, change the location and click on Import.
The import completes successfully.
7. Configure the ASM Disks as Shareable
We need to mark the disks we created earlier as shareable.
Click on File/Virtual Media Manager…
Select the first ASM disk and choose Shareable as the type.
Click on the Apply button.
Select the second disk and choose Shareable as the type.
Click on the Apply button.

8. Configure the Second Node
Both disks are now shareable. We are now going to attach these disks to the second node.
On the second node, go to Settings/Storage.
Click on Controller: SATA and Add Hard Disk.
Click on Choose existing disk.
Select the first ASM disk and click on Choose. Do the same for the second ASM disk.
Now start the second node.
Change the hostname.
Change the network settings.
On the virtual machine GUI, click on Applications/System Tools/Settings.
Click on Network.
Configure the IPs.
Check the DNS.
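The same changes can also be made from a terminal. A minimal sketch, assuming the addresses from the configuration table and that the NetworkManager connection names match the interface names used on node 1:

[root@ol7-193-rac2 ~]# hostnamectl set-hostname ol7-193-rac2.localdomain.com
[root@ol7-193-rac2 ~]# nmcli connection modify enp0s3 ipv4.addresses 192.168.56.11/24
[root@ol7-193-rac2 ~]# nmcli connection modify enp0s8 ipv4.addresses 192.168.10.2/24
[root@ol7-193-rac2 ~]# nmcli connection up enp0s3
[root@ol7-193-rac2 ~]# nmcli connection up enp0s8
[root@ol7-193-rac2 ~]# nslookup ol7-193-rac-scan    # confirm DNS resolution from node 2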
9. Pre-check for RAC Setup
Copy and unzip the Oracle 19c Grid Infrastructure software into the GRID_HOME directory.
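A sketch of that step, assuming the grid software zip is named LINUX.X64_193000_grid_home.zip and sits in the oracle user's home directory:

[oracle@ol7-193-rac1 ~]$ cd /u01/app/19c/grid
[oracle@ol7-193-rac1 grid]$ unzip -q /home/oracle/LINUX.X64_193000_grid_home.zip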
Install the cvuqdisk package, as the root user, on all nodes. It ships with the grid software under the cv/rpm directory.
1st Node:
2nd Node:
[root@ol7-193-rac1 rpm]# scp cvuqdisk* root@ol7-193-rac2:/tmp
The authenticity of host 'ol7-193-rac2 (192.168.56.11)' can't be established.
ECDSA key fingerprint is 31:29:5e:94:0f:fe:09:73:2c:a9:83:9a:0d:77:a8:56.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ol7-193-rac2,192.168.56.11' (ECDSA) to the list of known hosts.
root@ol7-193-rac2's password:
cvuqdisk-1.0.10-1.rpm                                   100%   11KB  11.1KB/s   00:00
[root@ol7-193-rac1 rpm]#
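If you have not already installed the rpm through the screens above, a sketch of the manual install on each node, using the file shown in the scp output:

[root@ol7-193-rac1 rpm]# CVUQDISK_GRP=oinstall rpm -ivh cvuqdisk-1.0.10-1.rpm
[root@ol7-193-rac2 ~]# CVUQDISK_GRP=oinstall rpm -ivh /tmp/cvuqdisk-1.0.10-1.rpm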
10. Configure ASM Disks
Now you have to create a partition on each of the two disks created earlier.
Run fdisk -l to see all the disks; sdb and sdc are the new, unpartitioned disks.
[root@ol7-193-rac1 ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x7869de11.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-8388607, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-8388607, default 8388607):
Using default value 8388607
Partition 1 of type Linux and of size 4 GiB is set

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Repeat the same steps for /dev/sdb.
Now that the partitions are created, you need to configure the oracleasm utility.
[root@ol7-193-rac1 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@ol7-193-rac1 ~]#

Initialize asmlib with the oracleasm init command to load the oracleasm module and mount the oracleasm filesystem:
[root@ol7-193-rac1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@ol7-193-rac1 ~]#
Create the ASM disks:
[root@ol7-193-rac1 ~]# oracleasm createdisk ASMD_DATA /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@ol7-193-rac1 ~]# oracleasm createdisk ASMD_OCR /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@ol7-193-rac1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@ol7-193-rac1 ~]# oracleasm listdisks
ASMD_DATA
ASMD_OCR
[root@ol7-193-rac1 ~]#
Configure the disks on node 2:
[root@ol7-193-rac2 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@ol7-193-rac2 ~]#
[root@ol7-193-rac2 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@ol7-193-rac2 ~]#
[root@ol7-193-rac2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMD_DATA"
Instantiating disk "ASMD_OCR"
[root@ol7-193-rac2 ~]#
[root@ol7-193-rac2 ~]# oracleasm listdisks
ASMD_DATA
ASMD_OCR
[root@ol7-193-rac2 ~]#
Configure passwordless SSH between the nodes:
[oracle@ol7-193-rac1 ~]$ cd /u01/app/19c/grid/deinstall/
[oracle@ol7-193-rac1 deinstall]$ pwd
/u01/app/19c/grid/deinstall
[oracle@ol7-193-rac1 deinstall]$ ./sshUserSetup.sh -user oracle -hosts "ol7-193-rac1 ol7-193-rac2" -noPromptPassphrase -confirm -advanced
The output of this script is also logged into /tmp/sshUserSetup_2021-05-26-07-39-55.log
Hosts are ol7-193-rac1 ol7-193-rac2
user is oracle
Platform:- Linux
……
[oracle@ol7-193-rac1 deinstall]$
Use the cluvfy command to check that the cluster is ready for the Grid installation:
[oracle@ol7-193-rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n ol7-193-rac1,ol7-193-rac2 -verbose

ERROR:
PRVG-10467 : The default Oracle Inventory group could not be determined.

Verifying Physical Memory ...
  Node Name     Available                  Required             Status
  ------------  -------------------------  -------------------  ----------
  ol7-193-rac2  15.4193GB (1.6168284E7KB)  8GB (8388608.0KB)    passed
  ol7-193-rac1  15.4193GB (1.6168284E7KB)  8GB (8388608.0KB)    passed
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...
  Node Name     Available                  Required             Status
  ------------  -------------------------  -------------------  ----------
  ol7-193-rac2  14.643GB (1.535432E7KB)    50MB (51200.0KB)     passed
  ol7-193-rac1  14.4774GB (1.5180632E7KB)  50MB (51200.0KB)     passed
Verifying Available Physical Memory ...PASSED
….
[oracle@ol7-193-rac1 grid]$
11. Install and Configure Oracle Grid Infrastructure for a Cluster
Set the grid environment and run gridSetup.sh from the GRID_HOME directory:
[oracle@ol7-193-rac1 ~]$ . /home/oracle/grid.env
[oracle@ol7-193-rac1 ~]$ cd /u01/app/19c/grid/
[oracle@ol7-193-rac1 grid]$ ./gridSetup.sh
Choose the Configure Oracle Grid Infrastructure for a New Cluster option, then click the Next button.
Choose the Configure an Oracle Standalone Cluster option and click the Next button.
Enter the name of your cluster and your SCAN name. In our case the cluster name is ol7-193-rac and the SCAN name is ol7-193-rac-scan. The SCAN port is 1521. Click the Next button.
On the Cluster Node Information screen, click the Add button, enter the details of the second node in the cluster, then click the OK button.
Click the SSH connectivity button and enter the password for the oracle user. Click the Setup button to configure SSH connectivity, and the Test button to test it once it is complete. Once the test is complete, click the Next button.
Click the OK button.
Check that the public and private networks are specified correctly (the private network as ASM & Private). Click the Next button.
Accept the Use Oracle Flex ASM for Storage option by clicking the Next button.
Choose the No option, as we don't want to create a separate disk group for the Grid Infrastructure Management Repository (GIMR) in this case. Click the Next button.
If no candidate disks are listed, don't worry; just click on Change Discovery Path.
Enter the discovery path /dev/oracleasm/disks*.
Enter your OCR disk group name and select the corresponding disk path.
Click on the Next button.
Enter the credentials and click the Next button.
Click on the Yes button.
Accept the default IPMI option by clicking the Next button.
Uncheck the EM checkbox. You can register the configuration later if you have an Enterprise Manager Cloud Control server. Click the Next button.
Set the OSASM group to oinstall and click the Next button. Accept the warnings on the subsequent dialog by clicking the Yes button.
Specify your ORACLE_BASE location and click on the Next button.
Accept the default inventory directory by clicking the Next button.
Deselect Automatically run configuration scripts and click on the Next button.
Check the Ignore All checkbox and click the Next button.
Accept the warning by clicking on the Yes button.
Click on the Install button.
When prompted, run the configuration scripts as the root user on each node, one at a time.
On node 1:
[root@ol7-193-rac1 ~]# /u01/app/19c/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19c/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/ol7-193-rac1/crsconfig/rootcrs_ol7-193-rac1_2021-05-26_09-00-35AM.log
2021/05/26 09:00:59 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2021/05/26 09:00:59 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2021/05/26 09:00:59 CLSRSC-363: User ignored prerequisites during installation
2021/05/26 09:00:59 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2021/05/26 09:01:05 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2021/05/26 09:01:08 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2021/05/26 09:01:09 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2021/05/26 09:01:09 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2021/05/26 09:02:40 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2021/05/26 09:02:43 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2021/05/26 09:02:52 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2021/05/26 09:03:14 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2021/05/26 09:03:14 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2021/05/26 09:03:28 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2021/05/26 09:03:29 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2021/05/26 09:04:16 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2021/05/26 09:04:28 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2021/05/26 09:05:09 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2021/05/26 09:05:20 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
ASM has been created and started successfully.
[DBT-30001] Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-210526AM090610.log for details.
2021/05/26 09:08:01 CLSRSC-482: Running command: '/u01/app/19c/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk c0c7d7cae8374f06bf5f879b2a53f308.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                  File Name                        Disk group
--  -----    -----------------                  ---------                        ---------
 1. ONLINE   c0c7d7cae8374f06bf5f879b2a53f308   (/dev/oracleasm/disks/ASMD_OCR)  [OCR]
Located 1 voting disk(s).
2021/05/26 09:10:41 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2021/05/26 09:12:32 CLSRSC-343: Successfully started Oracle Clusterware stack
2021/05/26 09:12:32 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2021/05/26 09:16:21 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2021/05/26 09:17:35 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ol7-193-rac1 ~]#
On node 2:
[root@ol7-193-rac2 ~]# /u01/app/19c/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19c/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/ol7-193-rac2/crsconfig/rootcrs_ol7-193-rac2_2021-05-26_09-19-56AM.log
2021/05/26 09:20:25 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2021/05/26 09:20:25 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2021/05/26 09:20:25 CLSRSC-363: User ignored prerequisites during installation
2021/05/26 09:20:25 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2021/05/26 09:20:28 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2021/05/26 09:20:28 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2021/05/26 09:20:28 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2021/05/26 09:20:31 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2021/05/26 09:20:36 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2021/05/26 09:20:36 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2021/05/26 09:20:53 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2021/05/26 09:20:53 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2021/05/26 09:20:55 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2021/05/26 09:20:56 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2021/05/26 09:21:25 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2021/05/26 09:21:43 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2021/05/26 09:21:46 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2021/05/26 09:22:11 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2021/05/26 09:22:13 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2021/05/26 09:22:28 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2021/05/26 09:23:45 CLSRSC-343: Successfully started Oracle Clusterware stack
2021/05/26 09:23:45 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2021/05/26 09:24:32 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2021/05/26 09:24:45 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ol7-193-rac2 ~]#
The Grid installation is complete. Ignore the NTP error.
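To confirm the clusterware stack is healthy on both nodes before moving on, a quick check from the grid environment (a sketch):

[oracle@ol7-193-rac1 ~]$ . /home/oracle/grid.env
[oracle@ol7-193-rac1 ~]$ crsctl check cluster -all
[oracle@ol7-193-rac1 ~]$ crsctl stat res -t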
12. Oracle 19c Database Software Installation
Copy and unzip the Oracle 19c database software into the ORACLE_HOME directory, then run the installer from there:
[oracle@ol7-193-rac1 ~]$ cp LINUX.X64_193000_db_home.zip /u01/app/oracle/product/19c/db_1
[oracle@ol7-193-rac1 ~]$ . /home/oracle/db.env
[oracle@ol7-193-rac1 ~]$ cd /u01/app/oracle/product/19c/db_1
[oracle@ol7-193-rac1 db_1]$ unzip LINUX.X64_193000_db_home.zip
[oracle@ol7-193-rac1 db_1]$ ./runInstaller
Select the Set Up Software Only option, then click the Next button.
Choose the Oracle Real Application Clusters database installation option and click the Next button.
Make sure both nodes are selected, then click the Next button.
Choose the Enterprise Edition option, then click the Next button.
Enter the ORACLE_BASE location and click the Next button.
Select oinstall as the OSDBA and OSOPER group and click on the Next button.
Accept the warnings on the subsequent dialog by clicking the Yes button.
Click the Next button.
Check the Ignore All checkbox and click the Next button.
Accept the warning by clicking on the Yes button.
Check the summary and then click on the Install button.
When prompted, run the configuration script on each node. When the scripts have been run on each node, click the OK button.
[root@ol7-193-rac1 ~]# /u01/app/oracle/product/19c/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/19c/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@ol7-193-rac1 ~]#

[root@ol7-193-rac2 ~]# /u01/app/oracle/product/19c/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/19c/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@ol7-193-rac2 ~]#
Click on the Close button. The installation is complete.

13. Add an Additional ASM Disk Group
Before creating a RAC database we need to add the disk group that will hold the future datafiles. For example, we will create the DATA disk group on the ASM disk created earlier, using the ASM Configuration Assistant (ASMCA).
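ASMCA is run from the Grid home; a minimal sketch of launching it, assuming the grid.env file created earlier:

[oracle@ol7-193-rac1 ~]$ . /home/oracle/grid.env
[oracle@ol7-193-rac1 ~]$ asmca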
Right-click on Disk Groups and select Create…
Set the redundancy to External, choose the corresponding disk path, then click the OK button.
The disk group is now ready for creating the database.
Click on the Exit button.
14. Create the RAC Database
Set the database environment variables and then run the Database Configuration Assistant (DBCA) to create the database.
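A minimal sketch of launching it, assuming the db.env file created earlier:

[oracle@ol7-193-rac1 ~]$ . /home/oracle/db.env
[oracle@ol7-193-rac1 ~]$ dbca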
Choose the Create a database option and then click on the Next button.
Choose Advanced configuration and click Next.
Select Oracle Real Application Clusters (RAC) database as the database type and Admin Managed as the configuration type. Choose General Purpose or Transaction Processing as the template. Click on the Next button.
Make sure both nodes are selected, then click the Next button.
Enter the global database name and the SID prefix. In our case we will create a container database.
Check Create as Container database, and choose the number of pluggable databases and the PDB name prefix.
Click on the Next button.
Click on the Next button.
You can choose whether or not to specify the Fast Recovery Area. If you do, specify the corresponding disk group and its size; best practice is to create a separate disk group for the FRA. You can also choose to enable archiving.
Note that both of these configurations can also be done after creating the database.
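For reference, a rough sketch of doing both later, assuming an FRA disk group named +FRA and the database name orcl used later in this article:

SQL> ALTER SYSTEM SET db_recovery_file_dest_size=20G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET db_recovery_file_dest='+FRA' SCOPE=BOTH SID='*';
-- Enabling archivelog mode on a RAC database requires a restart:
$ srvctl stop database -d orcl
$ srvctl start instance -d orcl -i orcl1 -o mount
SQL> ALTER DATABASE ARCHIVELOG;
$ srvctl stop database -d orcl
$ srvctl start database -d orcl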
Click on the Next button.
Click the Next button.
Choose Use Automatic Shared Memory Management (ASMM) and set the SGA and PGA sizes. You can also modify the sizing, character sets or connection mode by selecting the corresponding tab.
Click on the Next button.
Enter the Oracle users' passwords and click on the Next button.
Click on the Next button.
Check Ignore All and click the Next button.
Check the summary and click Finish to create the database.
The Oracle RAC database creation is complete.
Click on the Close button.
15. Post-Check for RAC Setup
Check the status of the RAC database.
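The status check itself is not shown above; a minimal sketch of the usual commands, assuming the database name orcl from the configuration output below:

[oracle@ol7-193-rac1 ~]$ srvctl status database -d orcl
[oracle@ol7-193-rac1 ~]$ crsctl stat res -t    # run from the grid environment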
Check the database configuration:
[oracle@ol7-193-rac1 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/19c/db_1
Oracle user: oracle
Spfile: +DATA/ORCL/PARAMETERFILE/spfile.272.1073597247
Password file: +DATA/ORCL/PASSWORD/pwdorcl.256.1073595457
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: oinstall
OSOPER group: oinstall
Database instances: orcl1,orcl2
Configured nodes: ol7-193-rac1,ol7-193-rac2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
[oracle@ol7-193-rac1 ~]$