For detailed steps on configuring VirtualBox and setting up two servers in your own home environment for a 2-node Oracle RAC setup, please click here

Please click on the INDEX and browse for more interesting posts.

 

This post concentrates on the following:

  • ASM Configuration
  • 19c Grid Infrastructure Installation and setup
  • 19c Oracle Database binaries installation
  • Oracle 19c Database creation

 

Software Required:

  • VirtualBox 6.1 for Windows
  • Oracle Enterprise Linux 7.3
  • Oracle Database 19c Release 3 Linux x86-64
  • kmod-20-21.el7.x86_64.rpm
  • kmod-libs-20-21.el7.x86_64.rpm
  • oracleasmlib-2.0.12-1.el7.x86_64.rpm
  • kmod-oracleasm-2.0.8-17.el7.x86_64  
  • oracleasm-support-2.1.8-3.el7.x86_64.rpm 

Steps :

1. Install the following RPM packages on both nodes as the root user (an example command is shown after the list):

  • oracleasmlib-2.0.12-1.el7.x86_64.rpm 
  • oracleasm-support-2.1.8-3.el7.x86_64.rpm 
  • kmod-oracleasm-2.0.8-17.el7.x86_64  
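For example, as the root user on each node, from the directory where the packages were downloaded (a sketch; adjust the file names to match the exact versions you downloaded):

rpm -Uvh kmod-oracleasm-2.0.8-17.el7.x86_64.rpm \
         oracleasmlib-2.0.12-1.el7.x86_64.rpm \
         oracleasm-support-2.1.8-3.el7.x86_64.rpm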

2. As the root user, check the shared raw disks from Node 1:

fdisk -l
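On OEL 7, lsblk also gives a quick overview of all attached disks and any partitions already on them:

lsblk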

3. From Node 1, partition the 6 shared raw disks created in our previous lab.

An example for /dev/sde is shown below; a sketch for scripting the remaining disks follows the transcript:

[root@jack ~]# fdisk /dev/sde
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 50 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
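The remaining shared disks need the same single primary partition. Below is a minimal sketch for scripting this on the other five disks, assuming they are /dev/sdb, /dev/sdc, /dev/sdd, /dev/sdf and /dev/sdg (verify the device names with fdisk -l first, since this writes the partition tables):

for d in /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sdg
do
  echo -e "n\np\n1\n\n\nw" | fdisk $d
done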

4. Configure ASM on both nodes as the root user:

[root@jack ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

5. Check the configuration on both nodes as the root user:

# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

6. Start Oracle ASM on both nodes as root user:

oracleasm init
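The oracleasm init command loads the ASM library kernel driver and mounts the /dev/oracleasm filesystem. Equivalently, you can use the init script shipped with oracleasm-support (the same script used for the status check in the next step):

/etc/init.d/oracleasm start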

7. Check ASM Status on both nodes as root user:

[root@jack ~]# /etc/init.d/oracleasm status 

Checking if ASM is loaded: yes 

Checking if /dev/oracleasm is mounted: yes 

8. Create the ASM disks from Node 1 as root user :

oracleasm createdisk DATA1 /dev/sdb1 

oracleasm createdisk DATA2 /dev/sdc1 

oracleasm createdisk DATA3 /dev/sdd1 

oracleasm createdisk DATA4 /dev/sde1

oracleasm createdisk FRA1 /dev/sdf1 

oracleasm createdisk FRA2 /dev/sdg1 

9. Check the disks from Node 1  as root user :

oracleasm listdisks

10. Check the disks from Node 2 as root user :

oracleasm scandisks

oracleasm listdisks
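If the scan worked, listdisks on Node 2 should report the same six labels created from Node 1 in step 8:

DATA1
DATA2
DATA3
DATA4
FRA1
FRA2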

11. Create the required directories and set their ownership and permissions on both nodes as the root user:

mkdir -p /u01/app/19c/grid
mkdir -p /u01/app/oracle/product/19c/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/

12. Unzip the 19c Grid Infrastructure software into /u01/app/19c/grid on Node 1 as the oracle user (since our DB and Grid owner is the same user: oracle):

cd /u01/app/19c/grid
unzip LINUX.X64_193000_grid_home.zip

13. Install the cvuqdisk package from the grid home as the root user on all nodes (see the note after the commands).

[root@jack grid]# cd $GRID_HOME/cv/rpm
[root@jack rpm]# rpm -Uvh cvuqdisk*
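Note: the cvuqdisk package uses the CVUQDISK_GRP environment variable to decide which OS group owns its binary. Since this setup uses oinstall for everything, it does no harm to export it explicitly before running the rpm command above:

export CVUQDISK_GRP=oinstall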

14. Configure SSH user equivalence between the nodes as the oracle user:

cd $GRID_HOME/deinstall
./sshUserSetup.sh -user oracle -hosts "jack jill" -noPromptPassphrase 
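To confirm that passwordless SSH now works in both directions, run a quick date check as the oracle user from each node:

ssh jack date
ssh jill date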

15. Pre-check for CRS installation using Cluvfy 

We use this Cluvfy command to check that our cluster is ready for the Grid install.

cd $GRID_HOME

./runcluvfy.sh stage -pre crsinst -n jack,jill
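If any checks fail, the same command can optionally be run with extra options to show more detail and generate fixup scripts:

./runcluvfy.sh stage -pre crsinst -n jack,jill -fixup -verbose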

16. Install and Configure Oracle 19c Grid Infrastructure for a Cluster

cd /u01/app/19c/grid/
./gridSetup.sh

Launching Oracle Grid Infrastructure Setup Wizard…

Follow the steps as described in the screenshots

Here we are taking crsprod as the cluster name and crsprod-scan as the scan name.

NOTE: Make sure your cluster name is no more than 15 characters long, or Grid Infrastructure setup will fail with the error "CLSRSC-119: Start of the exclusive mode cluster failed".

Add Node 2

Click the SSH connectivity button and enter the password for the oracle user. Click the Setup button to configure SSH connectivity, and the Test button to test it once it is complete. Once the test is complete, click the Next button.

Check that the public and private networks are assigned to the correct interfaces (the private interface is marked as "ASM & Private"). Click the Next button.

Select the No option, as we don’t want to create a separate disk group for the GIMR in this case. Click the Next button.

First, set the disk discovery path to /dev/oracleasm/disks* using the Change Discovery Path button.

Set the redundancy to Normal, select the 50 GB disks for the DATA disk group, then click the Next button.

Provide the same password for all the accounts and click the Next button.

Do not use IPMI and hit Next.

I am not registering the Grid Infrastructure with OEM for now.

Select oinstall for all the OS groups and accept the warnings.

 

Select the Oracle base and hit Next.

Keep the default path for the Oracle inventory and hit Next.

The root scripts can either be executed manually, or, if you have the root password or the oracle user has sudo-to-root privilege, the installer can run them automatically.

Here we are selecting automatic execution of the root scripts.

The prerequisite checks will now run.

For a home setup, some of the prerequisite checks below may fail. If you are implementing this at work, make sure no errors are reported before proceeding.

Click Ignore and proceed.

Double check and click the Install button.

If you have opted to run the root scripts manually, log in as the root user and execute the scripts in order on each node before clicking OK on the dialog box.

Node 1

[root@jack run]# /u01/app/oraInventory/orainstRoot.sh

Node 2

[root@jill grid]# /u01/app/oraInventory/orainstRoot.sh

Node 1

[root@jack run]# /u01/app/19c/grid/root.sh

Node 2

[root@jill grid]# /u01/app/19c/grid/root.sh

 

Grid installation is complete. For a home setup you can ignore the NTP error.

17. Check the running processes through the below commands :

ps -ef|grep d.bin

$GRID_HOME/bin/crsctl stat res -t

This command shows the cluster resources running on both nodes. A couple of additional optional checks are sketched below.
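olsnodes lists the cluster nodes with their node number and status, and crsctl check cluster verifies the clusterware daemons on all nodes:

$GRID_HOME/bin/olsnodes -n -s
$GRID_HOME/bin/crsctl check cluster -all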

18. Install the Oracle 19c database software binaries

Unzip the database software into the database home directory as the oracle user:

cd /u01/app/oracle/product/19c/db_1
unzip LINUX.X64_193000_db_home.zip

Run the command below as the oracle user after setting up your DISPLAY (an example follows the command):

./runInstaller
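For example, when running from the VM console or tunnelling X back to your desktop (substitute your own display address; this is only an illustration):

export DISPLAY=:0.0
./runInstaller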

Select the setup software only option, then click the Next button.

Accept the Oracle Real Application Clusters database installation option by clicking the Next button.

Make sure both nodes are selected, then click the Next button.

Select the Enterprise Edition option, then click the Next button.

Enter /u01/app/orabase as the Oracle base and /u01/app/oracle/product/19c/db_1 as the software location, then click the Next button.

Click the Next button. Accept the warnings on the subsequent dialog by clicking the Yes button

Click Automatically run configuration scripts and select either the root user or the oracle user with sudo-to-root privilege to run the root.sh script. If you want to run it manually, just press Next.

Check the “Ignore All” checkbox and click the “Next” button.

Click the Install button.

If you have chosen to run configuration scripts manually, when prompted, run the configuration script on each node. When the scripts have been run on each node, click the OK button.

Node 1

[root@jack dbhome_1]# /u01/app/oracle/product/19c/db_1/root.sh

Node 2

[root@jill dbhome_1]# /u01/app/oracle/product/19c/db_1/root.sh

 

The Oracle 19c software installation is complete.

19. Create the required disk groups

$GRID_HOME/bin/asmca

Click on Disk Groups

Provide the disk group name: FRA

Select Redundancy as Normal

Select the 20 GB disks whose status is shown as provisioned and click Create.

The disk group is now ready for database creation.
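If you prefer the command line to asmca, the same disk group can also be created from SQL*Plus on the ASM instance, using the FRA1 and FRA2 ASMLib disks labelled in step 8 (a sketch; run as the oracle user on Node 1 with the grid environment and ORACLE_SID=+ASM1 set):

sqlplus / as sysasm
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:FRA1', 'ORCL:FRA2';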

20. Database Creation 

cd $ORACLE_HOME/bin

./dbca

Select the Advanced configuration option and click the Next button.

Select the General Purpose template and Admin Managed as the configuration type, then click the Next button.

Make sure both nodes are selected, then click the Next button.

Enter the database name and, for a container database, the PDB name, then click Next.

Leave the defaults as they are and click Next.

Deselect the Fast Recovery Area (FRA) and archivelog mode options.

Leave the defaults and click Next.

Select Automatic Shared Memory Management (ASMM) for memory.

Select the option to run CVU checks periodically and deselect the other options.

Enter the same credentials for all users

Select Create Database and click Finish.

Oracle 19c RAC database creation is complete.

21. Post-checks for the RAC setup
Check the status of the RAC by running the commands below (the database name here is infra; a few more optional checks are sketched at the end):

$GRID_HOME/bin/crsctl stat res -t

srvctl config database -d infra

srvctl status database -d infra

srvctl config scan
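A few more optional status checks (standard srvctl commands, run with the grid or database environment set):

srvctl status scan_listener
srvctl status asm
srvctl status nodeapps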