In this blog post we will see how to upgrade Oracle Grid Infrastructure, along with a RAC database, from 12c to 19c.
Environment:
Source Version : 12.2.0.1
Target Version : 19.3.0.0
12c GRID HOME : /u01/app/12c/grid
19c GRID HOME : /u01/app/19c/grid/
12c Database Home : /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1
19c Database Home : /u01/app/oracle/product/19c/db_1
Node 1 : JACK
Node 2 : JILL
Assumptions made :
1. Patch set 28553832 has already been applied to the 12c version
2. All necessary kernel parameters have been set and the required RPMs have been installed or updated
3. All backups have been taken beforehand
For kernel parameters and RPMs, please refer to this LINK
For applying Patch 28553832, please refer to this LINK
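Assumption 2 can be spot-checked on each node before starting. A minimal sketch (Linux-only; the helper name and the threshold in the example are illustrative, not the official requirements from the installation guide):

```shell
# Sketch: compare a live kernel parameter against a required minimum.
# Works for single-valued parameters only; take the actual thresholds
# from the Oracle installation guide.
kparam_ok() {
  key=$1 min=$2
  path=/proc/sys/$(printf '%s' "$key" | tr . /)
  [ -r "$path" ] || { echo "cannot read $key"; return 2; }
  cur=$(cat "$path")
  [ "$cur" -ge "$min" ]
}
# Example usage (illustrative threshold):
#   kparam_ok fs.file-max 6815744 && echo ok || echo "increase fs.file-max"
```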
GRID Infrastructure upgrade steps:
Download and unzip the Grid software on the 1st node only:
[oracle@jack software]$ mv LINUX.X64_193000_grid_home.zip /u01/app/19c/grid/
[oracle@jack software]$ cd /u01/app/19c/grid/
[oracle@jack grid]$ unzip LINUX.X64_193000_grid_home.zip
Prepare the response file on the 1st node only. Getting the response file parameters right is crucial for a successful upgrade. A template response file is available under $GRID_HOME/install/response.
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/grid/19c/gridbase
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=crsprod
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=jack:,jill:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
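Before the silent run, the response file can be sanity-checked for crucial parameters that are missing or left empty. A minimal sketch (the helper name and the key list are illustrative):

```shell
# Sketch: report crucial response-file keys that are missing or empty
# before a silent run. Extend the key list as needed.
check_rsp() {
  rsp=$1; shift
  for key in "$@"; do
    val=$(grep "^${key}=" "$rsp" 2>/dev/null | cut -d= -f2-)
    if [ -n "$val" ]; then echo "OK: $key"; else echo "MISSING: $key"; fi
  done
}
# Example against the real file:
#   check_rsp /u01/app/19c/grid/install/response/gridsetup.rsp \
#     oracle.install.option oracle.install.crs.config.clusterNodes INVENTORY_LOCATION
```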
On the 1st node only, as the grid user, run the pre-upgrade verification script from the new home, and run the fixup script if one is generated:
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12c/grid -dest_crshome /u01/app/19c/grid -dest_version 19.3.0.0.0 -fixup -verbose
On the 1st node, run the Grid setup in dry-run mode from the new home. Starting with 19c, we have the option to perform a dry run before the actual upgrade. Note that this step also copies the software to the 2nd node.
[oracle@jack 19c]$ ./gridSetup.sh -silent -ignorePrereqFailure -dryRunForUpgrade -responseFile /u01/app/19c/grid/install/response/gridsetup.rsp
Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/12c/oraInventory/logs/GridSetupActions2021-06-25_06-46-21PM/gridSetupActions2021-06-25_06-46-21PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/12c/oraInventory/logs/GridSetupActions2021-06-25_06-46-21PM/gridSetupActions2021-06-25_06-46-21PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
/u01/app/grid/19c/install/response/grid_2021-06-25_06-46-21PM.rsp
As a root user, execute the following script(s):
Run the script on the local node.
Successfully Setup Software with warning(s).
Run the rootupgrade.sh script, which is present in the Grid home, on the local node (as root):
/u01/app/grid/19c/rootupgrade.sh
Check the owner of the file $GRID_HOME/crs/config/rootconfig.sh after the dry-run upgrade. If it is owned by root, change it to oracle:oinstall before running the actual upgrade.
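This ownership check can be scripted. A minimal sketch (the helper name is hypothetical; run the chown command it prints as root):

```shell
# Sketch: print the chown needed when a file is not owned by the expected
# user:group. Run the printed command as root; prints nothing if ownership
# is already correct.
needs_chown() {
  file=$1 want=$2
  have=$(stat -c '%U:%G' "$file") || return 1
  [ "$have" = "$want" ] || echo "chown $want $file"
}
# Example against the real file:
#   needs_chown /u01/app/19c/grid/crs/config/rootconfig.sh oracle:oinstall
```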
Run the actual upgrade from the 1st node:
[oracle@jack 19c]$ ./gridSetup.sh -silent -ignorePrereqFailure -responseFile /u01/app/19c/grid/install/response/gridsetup.rsp
Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/12c/oraInventory/logs/GridSetupActions2021-06-25_07-41-39PM/gridSetupActions2021-06-25_07-41-39PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/12c/oraInventory/logs/GridSetupActions2021-06-25_07-41-39PM/gridSetupActions2021-06-25_07-41-39PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
/u01/app/grid/19c/install/response/grid_2021-06-25_07-41-39PM.rsp
As a root user, execute the following script(s):
1. /u01/app/grid/19c/rootupgrade.sh
Execute /u01/app/grid/19c/rootupgrade.sh on the following nodes:
[jack, jill]
Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.
Successfully Setup Software with warning(s).
As install user, execute the following command to complete the configuration.
/u01/app/grid/19c/gridSetup.sh -executeConfigTools -responseFile /u01/app/19c/grid/install/response/gridsetup.rsp [-silent]
As per the above output, run /u01/app/grid/19c/rootupgrade.sh on the local node first; once it has completed, run it in parallel on all other nodes except the last node, and finally on the last node.
[root@jack oracle]# /u01/app/grid/19c/rootupgrade.sh
Check /u01/app/grid/19c/install/root_jack.infraxpertzz.com_2021-06-25_19-50-22-886009304.log for the output of root script
[oracle@jack 19c]$ cat /u01/app/grid/19c/install/root_jack.infraxpertzz.com_2021-06-25_19-50-22-886009304.log
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/grid/19c
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/grid/19c/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/19cgridbase/crsdata/jack/crsconfig/rootcrs_jack_2021-06-25_07-50-23PM.log
2021/06/25 19:50:39 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2021/06/25 19:50:39 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2021/06/25 19:50:39 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2021/06/25 19:50:42 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2021/06/25 19:50:42 CLSRSC-464: Starting retrieval of the cluster configuration data
2021/06/25 19:50:53 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2021/06/25 19:53:23 CLSRSC-693: CRS entities validation completed successfully.
2021/06/25 19:53:47 CLSRSC-515: Starting OCR manual backup.
2021/06/25 19:54:04 CLSRSC-516: OCR manual backup successful.
2021/06/25 19:54:29 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2021/06/25 19:56:49 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2021/06/25 19:56:49 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2021/06/25 19:56:49 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2021/06/25 19:57:12 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2021/06/25 19:57:12 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2021/06/25 19:57:15 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2021/06/25 19:57:22 CLSRSC-363: User ignored prerequisites during installation
2021/06/25 19:57:40 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2021/06/25 19:57:41 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2021/06/25 20:05:05 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2021/06/25 20:05:05 CLSRSC-482: Running command: '/u01/app/12c/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2021/06/25 20:05:10 CLSRSC-482: Running command: '/u01/app/grid/19c/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12c/grid -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2021/06/25 20:05:32 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2021/06/25 20:05:39 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2021/06/25 20:06:13 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2021/06/25 20:06:15 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2021/06/25 20:06:16 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2021/06/25 20:06:30 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2021/06/25 20:07:02 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2021/06/25 20:07:12 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2021/06/25 20:07:21 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2021/06/25 20:07:21 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2021/06/25 20:08:18 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2021/06/25 20:09:15 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2021/06/25 20:09:24 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2021/06/25 20:11:05 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2021/06/25 20:11:48 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2021/06/25 20:11:54 CLSRSC-474: Initiating upgrade of resource types
2021/06/25 20:13:23 CLSRSC-475: Upgrade of resource types successfully initiated.
2021/06/25 20:13:43 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2021/06/25 20:14:01 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Once rootupgrade.sh has been executed on both nodes, run the below script from the local node:
/u01/app/grid/19c/gridSetup.sh -executeConfigTools -responseFile /u01/app/19c/grid/install/response/gridsetup.rsp -silent
Detach the old Grid home and verify the inventory, on both nodes:
[oracle@jack bin]$ /u01/app/12c/grid/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=/u01/app/12c/grid
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 15905 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
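To double-check the detach, you can look for the old home in the central inventory. A minimal sketch (the helper name is hypothetical; read the real inventory path from /etc/oraInst.loc):

```shell
# Sketch: check whether a home is still registered in the central
# inventory by looking for its LOC attribute in inventory.xml.
home_registered() {
  inv_xml=$1 home=$2
  grep -q "LOC=\"$home\"" "$inv_xml"
}
# Example against the real inventory:
#   home_registered /u01/app/oraInventory/ContentsXML/inventory.xml \
#     /u01/app/12c/grid && echo "still registered" || echo "detached"
```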
Reboot both nodes and check that the cluster services come up.
Check the CRS version from both nodes:
[root@jack ~]# /u01/app/19c/grid/bin/crsctl query crs softwareversion -all
Oracle Clusterware version on node [jack] is [19.0.0.0.0]
Oracle Clusterware version on node [jill] is [19.0.0.0.0]
[root@jack ~]# /u01/app/19c/grid/bin/crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [724960844].
[root@jack ~]# /u01/app/19c/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
RAC Database upgrade steps:
Unzip the 19c DB software and edit the response file for silent installation:
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v19.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/19c/db_1
ORACLE_BASE=/u01/app/oracle/product/19cbase
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=oinstall
oracle.install.db.OSOPER_GROUP=oinstall
oracle.install.db.OSBACKUPDBA_GROUP=oinstall
oracle.install.db.OSDGDBA_GROUP=oinstall
oracle.install.db.OSKMDBA_GROUP=oinstall
oracle.install.db.OSRACDBA_GROUP=oinstall
oracle.install.db.rootconfig.executeRootScript=false
oracle.install.db.CLUSTER_NODES=jack,jill
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.ConfigureAsContainerDB=false
oracle.install.db.config.starterdb.memoryOption=false
oracle.install.db.config.starterdb.installExampleSchemas=false
oracle.install.db.config.starterdb.managementOption=DEFAULT
oracle.install.db.config.starterdb.omsPort=0
oracle.install.db.config.starterdb.enableRecovery=false
Silently install the DB software
[oracle@jack db_1]$ ./runInstaller -ignorePrereq -waitforcompletion -silent -responseFile /u01/app/oracle/product/19c/db_1/install/response/db.rsp
Launching Oracle Database Setup Wizard...
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/12c/oraInventory/logs/InstallActions2021-06-26_12-00-15PM/installActions2021-06-26_12-00-15PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/12c/oraInventory/logs/InstallActions2021-06-26_12-00-15PM/installActions2021-06-26_12-00-15PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
/u01/app/oracle/product/19c/db_1/install/response/db_2021-06-26_12-00-15PM.rsp
You can find the log of this install session at:
/u01/app/12c/oraInventory/logs/InstallActions2021-06-26_12-00-15PM/installActions2021-06-26_12-00-15PM.log
As a root user, execute the following script(s):
1. /u01/app/oracle/product/19c/db_1/root.sh
Execute /u01/app/oracle/product/19c/db_1/root.sh on the following nodes:
[jack, jill]
Successfully Setup Software with warning(s).
Run the root.sh script on both nodes, as indicated in the above output.
Run the pre-upgrade utility after setting the 12c environment variables:
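Setting the 12c environment can be sketched as follows (paths are the ones used in this walkthrough; the SID infra1 is an assumption based on the spfile name, instance 1 of database INFRA):

```shell
# Sketch: point the environment at the existing 12c home before invoking
# the pre-upgrade tool. Adjust the home path and SID to your environment.
export ORACLE_HOME=/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1
export ORACLE_SID=infra1
export PATH=$ORACLE_HOME/bin:$PATH
```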
[oracle@jack ~]$ /u01/app/oracle/product/19c/db_1/jdk/bin/java -jar /u01/app/oracle/product/19c/db_1/rdbms/admin/preupgrade.jar TERMINAL TEXT
Report generated by Oracle Database Pre-Upgrade Information Tool Version
19.0.0.0.0 Build: 1 on 2021-06-26T12:54:16
Upgrade-To version: 19.0.0.0.0
=======================================
Status of the database prior to upgrade
=======================================
Database Name: INFRA
Container Name: infra
Container ID: 0
Version: 12.2.0.1.0
DB Patch Level: No Patch Bundle applied
Compatible: 12.2.0
Blocksize: 8192
Platform: Linux x86 64-bit
Timezone File: 26
Database log mode: ARCHIVELOG
Readonly: FALSE
Edition: EE
Oracle Component Upgrade Action Current Status
---------------- -------------- --------------
Oracle Server [to be upgraded] VALID
JServer JAVA Virtual Machine [to be upgraded] VALID
Oracle XDK for Java [to be upgraded] VALID
Real Application Clusters [to be upgraded] VALID
Oracle Workspace Manager [to be upgraded] VALID
OLAP Analytic Workspace [to be upgraded] VALID
Oracle Label Security [to be upgraded] VALID
Oracle Database Vault [to be upgraded] VALID
Oracle Text [to be upgraded] VALID
Oracle XML Database [to be upgraded] VALID
Oracle Java Packages [to be upgraded] VALID
Oracle Multimedia [to be upgraded] VALID
Oracle Spatial [to be upgraded] VALID
Oracle OLAP API [to be upgraded] VALID
==============
BEFORE UPGRADE
==============
REQUIRED ACTIONS
================
None
RECOMMENDED ACTIONS
===================
1. (AUTOFIXUP) Gather stale data dictionary statistics prior to database
upgrade in off-peak time using:
EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
Dictionary statistics do not exist or are stale (not up-to-date).
Dictionary statistics help the Oracle optimizer find efficient SQL
execution plans and are essential for proper upgrade timing. Oracle
recommends gathering dictionary statistics in the last 24 hours before
database upgrade.
For information on managing optimizer statistics, refer to the 12.2.0.1
Oracle Database SQL Tuning Guide.
INFORMATION ONLY
================
2. To help you keep track of your tablespace allocations, the following
AUTOEXTEND tablespaces are expected to successfully EXTEND during the
upgrade process.
Min Size
Tablespace Size For Upgrade
---------- ---------- -----------
SYSAUX 570 MB 588 MB
SYSTEM 810 MB 926 MB
TEMP 44 MB 150 MB
UNDOTBS1 70 MB 439 MB
Minimum tablespace sizes for upgrade are estimates.
3. Check the Oracle Backup and Recovery User's Guide for information on how
to manage an RMAN recovery catalog schema.
If you are using a version of the recovery catalog schema that is older
than that required by the RMAN client version, then you must upgrade the
catalog schema.
It is good practice to have the catalog schema the same or higher version
than the RMAN client version you are using.
ORACLE GENERATED FIXUP SCRIPT
=============================
All of the issues in database INFRA
which are identified above as BEFORE UPGRADE "(AUTOFIXUP)" can be resolved by
executing the following
SQL>@/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs
/infra/preupgrade/preupgrade_fixups.sql
=============
AFTER UPGRADE
=============
REQUIRED ACTIONS
================
None
RECOMMENDED ACTIONS
===================
4. Upgrade the database time zone file using the DBMS_DST package.
The database is using time zone file version 26 and the target 19 release
ships with time zone file version 32.
Oracle recommends upgrading to the desired (latest) version of the time
zone file. For more information, refer to "Upgrading the Time Zone File
and Timestamp with Time Zone Data" in the 19 Oracle Database
Globalization Support Guide.
5. To identify directory objects with symbolic links in the path name, run
$ORACLE_HOME/rdbms/admin/utldirsymlink.sql AS SYSDBA after upgrade.
Recreate any directory objects listed, using path names that contain no
symbolic links.
Some directory object path names may currently contain symbolic links.
Starting in Release 18c, symbolic links are not allowed in directory
object path names used with BFILE data types, the UTL_FILE package, or
external tables.
6. (AUTOFIXUP) Gather dictionary statistics after the upgrade using the
command:
EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
Oracle recommends gathering dictionary statistics after upgrade.
Dictionary statistics provide essential information to the Oracle
optimizer to help it find efficient SQL execution plans. After a database
upgrade, statistics need to be re-gathered as there can now be tables
that have significantly changed during the upgrade or new tables that do
not have statistics gathered yet.
7. Gather statistics on fixed objects after the upgrade and when there is a
representative workload on the system using the command:
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
This recommendation is given for all preupgrade runs.
Fixed object statistics provide essential information to the Oracle
optimizer to help it find efficient SQL execution plans. Those
statistics are specific to the Oracle Database release that generates
them, and can be stale upon database upgrade.
For information on managing optimizer statistics, refer to the 12.2.0.1
Oracle Database SQL Tuning Guide.
ORACLE GENERATED FIXUP SCRIPT
=============================
All of the issues in database INFRA
which are identified above as AFTER UPGRADE "(AUTOFIXUP)" can be resolved by
executing the following
SQL>@/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs
/infra/preupgrade/postupgrade_fixups.sql
==================
PREUPGRADE SUMMARY
==================
/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs/infra/preupgrade/preupgrade.log
/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs/infra/preupgrade/preupgrade_fixups.sql
/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs/infra/preupgrade/postupgrade_fixups.sql
Execute fixup scripts as indicated below:
Before upgrade:
Log into the database and execute the preupgrade fixups
@/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs/infra/preupgrade/preupgrade_fixups.sql
After the upgrade:
Log into the database and execute the postupgrade fixups
@/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs/infra/preupgrade/postupgrade_fixups.sql
Preupgrade complete: 2021-06-26T12:54:17
Run the before-upgrade steps, including preupgrade_fixups.sql.
Copy the dbs and network files from the old home to the new home (on both servers).
In the Node 1 pfile, change the paths to the 19c locations and set cluster_database=false.
cp /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/dbs/* /u01/app/oracle/product/19c/db_1/dbs/
cp /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/network/admin/* /u01/app/oracle/product/19c/db_1/network/admin/
ASMCMD> cp spfile.297.1075986753 /u01/app/oracle/product/19c/db_1/dbs/spfileinfra1.ora
copying +DATA/infra/PARAMETERFILE/spfile.297.1075986753 -> /u01/app/oracle/product/19c/db_1/spfileinfra1.ora
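The pfile edit mentioned above (cluster_database=false) can be scripted. A minimal sketch (the helper name and the example path are illustrative; set the parameter back to true once the upgrade completes):

```shell
# Sketch: flip cluster_database to false in the copied pfile before the
# upgrade. Assumes the standard "*.parameter=value" pfile format.
set_cluster_db_false() {
  pfile=$1
  sed -i 's/^\*\.cluster_database=.*/*.cluster_database=false/' "$pfile"
}
# Example against the real pfile:
#   set_cluster_db_false /u01/app/oracle/product/19c/db_1/dbs/initinfra1.ora
```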
Enable RAC in the new 19c home on Node 1:
$ cd $ORACLE_HOME/rdbms/lib
[oracle@jack lib]$ make -f ins_rdbms.mk rac_on
(if /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin/skgxpinfo | grep rds;\
then \
make -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/ins_rdbms.mk ipc_rds; \
else \
make -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/ins_rdbms.mk ipc_g; \
fi)
make[1]: Entering directory `/u01/app/oracle/product/19c/db_1/rdbms/lib'
rm -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/libskgxp12.so
cp /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib//libskgxpg.so /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/libskgxp12.so
make[1]: Leaving directory `/u01/app/oracle/product/19c/db_1/rdbms/lib'
- Use stub SKGXN library
cp /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/libskgxns.so /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/libskgxn2.so
make: *** No rule to make target `ka_auto', needed by `rac_on'. Stop.
[oracle@jack lib]$ make -f ins_rdbms.mk ioracle
chmod 755 /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin
- Linking Oracle
rm -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/oracle
/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin/orald -o /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/ -L/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/stubs/ -Wl,-E /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv12 -Wl,--no-whole-archive /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/nautab.o /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/naeet.o /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/naect.o /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/naedhs.o /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/config.o -ldmext -lserver12 -lodm12 -lofs -lcell12 -lnnet12 -lskgxp12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12 -lvsn12 -lcommon12 -lgeneric12 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` -lskjcx12 -lslax12 -lpls12 -lrt -lplp12 -ldmext -lserver12 -lclient12 -lvsn12 -lcommon12 -lgeneric12 `if [ -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/libavserver12.a ] ; then echo "-lavserver12" ; else echo "-lavstub12"; fi` `if [ -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/libavclient12.a ] ; then echo "-lavclient12" ; fi` -lknlopt -lslax12 -lpls12 -lrt -lplp12 -ljavavm12 -lserver12 -lwwg `cat 
/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnro12 `cat /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnnzst12 -lzt12 -lztkg12 -lmm -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lztkg12 `cat /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnro12 `cat /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnnzst12 -lzt12 -lztkg12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `if /usr/bin/ar tv /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo12 -lserver12"; fi` -L/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/ctx/lib/ -lctxc12 -lctx12 -lzx12 -lgx12 -lctx12 -lzx12 -lgx12 -lordimt12 -lclsra12 -ldbcfg12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -lgeneric12 -locr12 -locrb12 -locrutl12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -lgeneric12 -lgeneric12 -lorazip -loraz -llzopro5 -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged -lippsmerged -lippcore -lippcpemerged -lippcpmerged -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lsnls12 -lunls12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lasmclnt12 -lcommon12 -lcore12 -laio -lons -lfthread12 `cat /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/sysliblist` 
-Wl,-rpath,/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib -lm `cat /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib/sysliblist` -ldl -lm -L/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/lib `test -x /usr/bin/hugeedit -a -r /usr/lib64/libhugetlbfs.so && test -r /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 -lhugetlbfs`
rm -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin/oracle
mv /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin/oracle
(if [ ! -f /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin/crsd.bin ]; then \
getcrshome="/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/srvm/admin/getcrshome" ; \
if [ -f "$getcrshome" ]; then \
crshome="`$getcrshome`"; \
if [ -n "$crshome" ]; then \
if [ $crshome != /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1 ]; then \
oracle="/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/bin/oracle"; \
$crshome/bin/setasmgidwrap oracle_binary_path=$oracle; \
fi \
fi \
fi \
fi\
);
Shut down the database and remove it from the OCR, using the old home:
[oracle@jack lib]$ srvctl stop database -d infra
[oracle@jack lib]$ srvctl remove database -d infra
Remove the database infra? (y/[n]) y
Upgrade the database from the new home, on the 1st node only:
sqlplus / as sysdba
create spfile from pfile;
startup upgrade;
exit;
You can run the upgrade using either of the following commands; the second is simply a shorthand for the first.
# Regular upgrade command.
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl $ORACLE_HOME/rdbms/admin/catupgrd.sql
# Shorthand command.
$ORACLE_HOME/bin/dbupgrade
[oracle@jack bin]$ ./dbupgrade
Argument list for [/u01/app/oracle/product/19c/db_1/rdbms/admin/catctl.pl]
For Oracle internal use only A = 0
Run in c = 0
Do not run in C = 0
Input Directory d = 0
Echo OFF e = 1
Simulate E = 0
Forced cleanup F = 0
Log Id i = 0
Child Process I = 0
Log Dir l = 0
Priority List Name L = 0
Upgrade Mode active M = 0
SQL Process Count n = 0
SQL PDB Process Count N = 0
Open Mode Normal o = 0
Start Phase p = 0
End Phase P = 0
Reverse Order r = 0
AutoUpgrade Resume R = 0
Script s = 0
Serial Run S = 0
RO User Tablespaces T = 0
Display Phases y = 0
Debug catcon.pm z = 0
Debug catctl.pl Z = 0
catctl.pl VERSION: [19.0.0.0.0]
STATUS: [Production]
BUILD: [RDBMS_19.3.0.0.0DBRU_LINUX.X64_190417]
/u01/app/oracle/product/19c/db_1/rdbms/admin/orahome = [/u01/app/oracle/product/19c/db_1]
/u01/app/oracle/product/19c/db_1/bin/orabasehome = [/u01/app/oracle/product/19c/db_1]
catctlGetOraBaseLogDir = [/u01/app/oracle/product/19c/db_1]
Analyzing file /u01/app/oracle/product/19c/db_1/rdbms/admin/catupgrd.sql
Log file directory = [/tmp/cfgtoollogs/upgrade20210626160845]
catcon::set_log_file_base_path: ALL catcon-related output will be written to [/tmp/cfgtoollogs/upgrade20210626160845/catupgrd_catcon_2816.lst]
catcon::set_log_file_base_path: catcon: See [/tmp/cfgtoollogs/upgrade20210626160845/catupgrd*.log] files for output generated by scripts
catcon::set_log_file_base_path: catcon: See [/tmp/cfgtoollogs/upgrade20210626160845/catupgrd_*.lst] files for spool files, if any
Number of Cpus = 2
Database Name = infra
DataBase Version = 12.2.0.1.0
catcon::set_log_file_base_path: ALL catcon-related output will be written to [/u01/app/oracle/product/19c/db_1/cfgtoollogs/infra/upgrade20210626160856/catupgrd_catcon_2816.lst]
catcon::set_log_file_base_path: catcon: See [/u01/app/oracle/product/19c/db_1/cfgtoollogs/infra/upgrade20210626160856/catupgrd*.log] files for output generated by scripts
catcon::set_log_file_base_path: catcon: See [/u01/app/oracle/product/19c/db_1/cfgtoollogs/infra/upgrade20210626160856/catupgrd_*.lst] files for spool files, if any
Log file directory = [/u01/app/oracle/product/19c/db_1/cfgtoollogs/infra/upgrade20210626160856]
Parallel SQL Process Count = 4
Components in [infra]
Installed [APS CATALOG CATJAVA CATPROC CONTEXT DV JAVAVM OLS ORDIM OWM RAC SDO XDB XML XOQ]
Not Installed [APEX EM MGW ODM WK]
------------------------------------------------------
Phases [0-107] Start Time:[2021_06_26 16:09:08]
------------------------------------------------------
*********** Executing Change Scripts ***********
Serial Phase #:0 [infra] Files:1 Time: 80s
*************** Catalog Core SQL ***************
Serial Phase #:1 [infra] Files:5 Time: 61s
Restart Phase #:2 [infra] Files:1 Time: 2s
*********** Catalog Tables and Views ***********
Parallel Phase #:3 [infra] Files:19 Time: 33s
Restart Phase #:4 [infra] Files:1 Time: 2s
************* Catalog Final Scripts ************
Serial Phase #:5 [infra] Files:7 Time: 58s
***************** Catproc Start ****************
Serial Phase #:6 [infra] Files:1 Time: 26s
***************** Catproc Types ****************
Serial Phase #:7 [infra] Files:2 Time: 27s
Restart Phase #:8 [infra] Files:1 Time: 1s
**************** Catproc Tables ****************
Parallel Phase #:9 [infra] Files:67 Time: 64s
Restart Phase #:10 [infra] Files:1 Time: 3s
************* Catproc Package Specs ************
Serial Phase #:11 [infra] Files:1 Time: 177s
Restart Phase #:12 [infra] Files:1 Time: 2s
************** Catproc Procedures **************
Parallel Phase #:13 [infra] Files:94 Time: 28s
Restart Phase #:14 [infra] Files:1 Time: 3s
Parallel Phase #:15 [infra] Files:120 Time: 72s
Restart Phase #:16 [infra] Files:1 Time: 0s
Serial Phase #:17 [infra] Files:22 Time: 8s
Restart Phase #:18 [infra] Files:1 Time: 1s
***************** Catproc Views ****************
Parallel Phase #:19 [infra] Files:32 Time: 47s
Restart Phase #:20 [infra] Files:1 Time: 2s
Serial Phase #:21 [infra] Files:3 Time: 36s
Restart Phase #:22 [infra] Files:1 Time: 3s
Parallel Phase #:23 [infra] Files:25 Time: 300s
Restart Phase #:24 [infra] Files:1 Time: 2s
Parallel Phase #:25 [infra] Files:12 Time: 201s
Restart Phase #:26 [infra] Files:1 Time: 2s
Serial Phase #:27 [infra] Files:1 Time: 0s
Serial Phase #:28 [infra] Files:3 Time: 7s
Serial Phase #:29 [infra] Files:1 Time: 0s
Restart Phase #:30 [infra] Files:1 Time: 2s
*************** Catproc CDB Views **************
Serial Phase #:31 [infra] Files:1 Time: 2s
Restart Phase #:32 [infra] Files:1 Time: 2s
Serial Phase #:34 [infra] Files:1 Time: 0s
***************** Catproc PLBs *****************
Serial Phase #:35 [infra] Files:293 Time: 61s
Serial Phase #:36 [infra] Files:1 Time: 0s
Restart Phase #:37 [infra] Files:1 Time: 3s
Serial Phase #:38 [infra] Files:6 Time: 12s
Restart Phase #:39 [infra] Files:1 Time: 2s
*************** Catproc DataPump ***************
Serial Phase #:40 [infra] Files:3 Time: 107s
Restart Phase #:41 [infra] Files:1 Time: 2s
****************** Catproc SQL *****************
Parallel Phase #:42 [infra] Files:13 Time: 195s
Restart Phase #:43 [infra] Files:1 Time: 2s
Parallel Phase #:44 [infra] Files:11 Time: 19s
Restart Phase #:45 [infra] Files:1 Time: 1s
Parallel Phase #:46 [infra] Files:3 Time: 4s
Restart Phase #:47 [infra] Files:1 Time: 0s
************* Final Catproc scripts ************
Serial Phase #:48 [infra] Files:1 Time: 13s
Restart Phase #:49 [infra] Files:1 Time: 2s
************** Final RDBMS scripts *************
Serial Phase #:50 [infra] Files:1 Time: 7s
************ Upgrade Component Start ***********
Serial Phase #:51 [infra] Files:1 Time: 2s
Restart Phase #:52 [infra] Files:1 Time: 1s
********** Upgrading Java and non-Java *********
Serial Phase #:53 [infra] Files:2 Time: 750s
***************** Upgrading XDB ****************
Restart Phase #:54 [infra] Files:1 Time: 3s
Serial Phase #:56 [infra] Files:3 Time: 16s
Serial Phase #:57 [infra] Files:3 Time: 8s
Parallel Phase #:58 [infra] Files:10 Time: 9s
Parallel Phase #:59 [infra] Files:25 Time: 17s
Serial Phase #:60 [infra] Files:4 Time: 16s
Serial Phase #:61 [infra] Files:1 Time: 0s
Serial Phase #:62 [infra] Files:32 Time: 8s
Serial Phase #:63 [infra] Files:1 Time: 0s
Parallel Phase #:64 [infra] Files:6 Time: 10s
Serial Phase #:65 [infra] Files:2 Time: 47s
Serial Phase #:66 [infra] Files:3 Time: 83s
**************** Upgrading ORDIM ***************
Restart Phase #:67 [infra] Files:1 Time: 2s
Serial Phase #:69 [infra] Files:1 Time: 6s
Parallel Phase #:70 [infra] Files:2 Time: 105s
Restart Phase #:71 [infra] Files:1 Time: 2s
Parallel Phase #:72 [infra] Files:2 Time: 4s
Serial Phase #:73 [infra] Files:2 Time: 5s
***************** Upgrading SDO ****************
Restart Phase #:74 [infra] Files:1 Time: 2s
Serial Phase #:76 [infra] Files:1 Time: 147s
Serial Phase #:77 [infra] Files:2 Time: 8s
Restart Phase #:78 [infra] Files:1 Time: 3s
Serial Phase #:79 [infra] Files:1 Time: 106s
Restart Phase #:80 [infra] Files:1 Time: 2s
Parallel Phase #:81 [infra] Files:3 Time: 171s
Restart Phase #:82 [infra] Files:1 Time: 2s
Serial Phase #:83 [infra] Files:1 Time: 16s
Restart Phase #:84 [infra] Files:1 Time: 3s
Serial Phase #:85 [infra] Files:1 Time: 31s
Restart Phase #:86 [infra] Files:1 Time: 2s
Parallel Phase #:87 [infra] Files:4 Time: 207s
Restart Phase #:88 [infra] Files:1 Time: 2s
Serial Phase #:89 [infra] Files:1 Time: 5s
Restart Phase #:90 [infra] Files:1 Time: 1s
Serial Phase #:91 [infra] Files:2 Time: 23s
Restart Phase #:92 [infra] Files:1 Time: 2s
Serial Phase #:93 [infra] Files:1 Time: 2s
Restart Phase #:94 [infra] Files:1 Time: 3s
******* Upgrading ODM, WK, EXF, RUL, XOQ *******
Serial Phase #:95 [infra] Files:1 Time: 33s
Restart Phase #:96 [infra] Files:1 Time: 2s
*********** Final Component scripts ***********
Serial Phase #:97 [infra] Files:1 Time: 5s
************* Final Upgrade scripts ************
Serial Phase #:98 [infra] Files:1 Time: 538s
******************* Migration ******************
Serial Phase #:99 [infra] Files:1 Time: 3s
*** End PDB Application Upgrade Pre-Shutdown ***
Serial Phase #:100 [infra] Files:1 Time: 3s
Serial Phase #:101 [infra] Files:1 Time: 0s
Serial Phase #:102 [infra] Files:1 Time: 82s
***************** Post Upgrade *****************
Serial Phase #:103 [infra] Files:1 Time: 54s
**************** Summary report ****************
Serial Phase #:104 [infra] Files:1 Time: 5s
*** End PDB Application Upgrade Post-Shutdown **
Serial Phase #:105 [infra] Files:1 Time: 3s
Serial Phase #:106 [infra] Files:1 Time: 0s
Serial Phase #:107 [infra] Files:1 Time: 43s
------------------------------------------------------
Phases [0-107] End Time:[2021_06_26 17:20:35]
------------------------------------------------------
Grand Total Time: 4292s
LOG FILES: (/u01/app/oracle/product/19c/db_1/cfgtoollogs/infra/upgrade20210626160856/catupgrd*.log)
Upgrade Summary Report Located in:
/u01/app/oracle/product/19c/db_1/cfgtoollogs/infra/upgrade20210626160856/upg_summary.log
Grand Total Upgrade Time: [0d:1h:11m:32s]
Update the timezone data, gather statistics, and run the post-upgrade fixup script generated in step 5 by the preupgrade utility.
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
SET SERVEROUTPUT ON
DECLARE
l_tz_version PLS_INTEGER;
BEGIN
SELECT DBMS_DST.get_latest_timezone_version
INTO l_tz_version
FROM dual;
DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
DBMS_DST.begin_upgrade(l_tz_version);
END;
/
SHUTDOWN IMMEDIATE;
STARTUP;
--> Do the Upgrade
SET SERVEROUTPUT ON
DECLARE
l_failures PLS_INTEGER;
BEGIN
DBMS_DST.upgrade_database(l_failures);
DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
DBMS_DST.end_upgrade(l_failures);
DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/
-- Check new settings.
SELECT * FROM v$timezone_file;
FILENAME VERSION CON_ID
-------------------- ---------- ----------
timezlrg_32.dat 32 0
COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20
SELECT property_name, property_value
FROM database_properties
WHERE property_name LIKE 'DST_%'
ORDER BY property_name;
PROPERTY_NAME PROPERTY_VALUE
------------------------------ --------------------
DST_PRIMARY_TT_VERSION 32
DST_SECONDARY_TT_VERSION 0
DST_UPGRADE_STATE NONE
sqlplus / as sysdba
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
exit;
# AUTOFIXUP
sqlplus / as sysdba
@/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/cfgtoollogs/infra/preupgrade/postupgrade_fixups.sql
exit;
Run the utlrp.sql script to recompile invalid objects and validate the DBA registry components and database objects.
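A minimal sketch of this step, run as SYSDBA from the new 19c home (`?` is the SQL*Plus shorthand for ORACLE_HOME; the installed component list will vary by database):

```sql
-- Recompile invalid objects after the upgrade
@?/rdbms/admin/utlrp.sql

-- Confirm all installed components are VALID in the registry
COLUMN comp_name FORMAT A40
SELECT comp_name, version, status
FROM   dba_registry
ORDER  BY comp_name;

-- Any remaining invalid objects
SELECT owner, object_type, COUNT(*)
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type;
```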
Set cluster_database back to true
alter system set cluster_database=true scope=spfile;
Shutdown the database
Copy spfile to ASM
ASMCMD> cp /u01/app/oracle/product/19c/db_1/dbs/spfileinfra1.ora +DATA/infra/PARAMETERFILE/spfileinfra1.ora
copying /u01/app/oracle/product/19c/db_1/dbs/spfileinfra1.ora -> +DATA/infra/PARAMETERFILE/spfileinfra1.ora
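With the spfile in ASM, the local pfile `$ORACLE_HOME/dbs/initinfra1.ora` on each node typically holds only a pointer to it (a sketch; the file name follows this setup's instance names):

```
SPFILE='+DATA/infra/PARAMETERFILE/spfileinfra1.ora'
```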
Add the database to CRS from the new 19c home
srvctl add database -d infra -o /u01/app/oracle/product/19c/db_1 -p '+DATA/infra/PARAMETERFILE/spfileinfra1.ora' -role PRIMARY
srvctl add instance -d infra -i infra1 -n JACK
srvctl add instance -d infra -i infra2 -n JILL
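After registering the database and instances, start the database through srvctl (`srvctl start database -d infra`) and verify both RAC instances are open and report the 19c version; a quick check from SQL*Plus (a sketch):

```sql
-- Both instances should be OPEN and report version 19.0.0.0.0
SELECT inst_id, instance_name, host_name, version, status
FROM   gv$instance
ORDER  BY inst_id;
```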
Detach 12c home from both nodes and verify the oraInventory
[oracle@jack dbs]$ /u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=/u01/app/oracle/product/12c/orabase/product/12.2.0/dbhome_1
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 13562 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
Bring MGMTDB online
If the pre-upgrade patch was not applied before starting the upgrade procedure, the management database (MGMTDB) will be left in a disabled state after the upgrade.
In 19c the management database is optional, but if you still want to fix it, the procedure below brings it online.
The first step is to bring up the MGMTDB from the 12.2 GI home.
srvctl enable mgmtdb
srvctl start mgmtdb
srvctl status mgmtdb
Once the MGMTDB is up and running, drop the RHP service that was created during the rootupgrade process. This must be done from the 19c GI home.
[oracle@jack dbs]$ env | grep ORA
ORACLE_SID=infra1
ORACLE_HOME=/u01/app/oracle/product/19c/db_1
[root@jack dbs]# /u01/app/grid/19c/bin/srvctl remove rhpserver -f
Now that the RHP service has been removed, remove the 12.2 MGMTDB.
########################################
# As root user in BOTH nodes
########################################
#Node 1
[root@jack dbs]# export ORACLE_HOME=/u01/app/12c/grid
[root@jack dbs]# export PATH=$PATH:$ORACLE_HOME/bin
[root@jack dbs]# crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'jack'
CRS-2677: Stop of 'ora.crf' on 'jack' succeeded
[root@jack dbs]# crsctl modify res ora.crf -attr ENABLED=0 -init
#Node 2
[root@jill lib]# export ORACLE_HOME=/u01/app/12c/grid
[root@jill lib]# export PATH=$PATH:$ORACLE_HOME/bin
[root@jill lib]# crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'jill'
CRS-2677: Stop of 'ora.crf' on 'jill' succeeded
[root@jill lib]# crsctl modify res ora.crf -attr ENABLED=0 -init
########################################
# As oracle User on Node 1
########################################
[oracle@jack ~]$ export ORACLE_HOME=/u01/app/12c/grid
[oracle@jack ~]$ export PATH=$PATH:$ORACLE_HOME/bin
[oracle@jack ~]$ srvctl stop mgmtdb
[oracle@jack ~]$ srvctl stop mgmtlsnr
[oracle@jack ~]$ /u01/app/12c/grid/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
[WARNING] [DBT-11503] The instance (-MGMTDB) is not running on the local node. This may result in partial delete of Oracle database.
CAUSE: A locally running instance is required for complete deletion of Oracle database instance and database files.
ACTION: Specify a locally running database, or execute DBCA on a node where the database instance is running.
Connecting to database
4% complete
9% complete
14% complete
19% complete
23% complete
28% complete
47% complete
Updating network configuration files
52% complete
Deleting instance and datafiles
76% complete
100% complete
Look at the log file "/u01/app/12c/gridbase/cfgtoollogs/dbca/_mgmtdb.log" for further details.
Check whether any files are present for mgmtdb
[oracle@jack ~]$ asmcmd
ASMCMD> cd DATA
ASMCMD> ls
ASM/
_mgmtdb/
crsprod/
infra/
orapwasm
orapwasm_backup
ASMCMD> cd _mgmtdb
ASMCMD> ls
ASMCMD>
Once the MGMTDB is deleted, run mdbutil.pl (available from MOS Doc 2065175.1) to recreate the MGMTDB in the 19.3 GI home.
########################################
# As oracle User on Node 1
########################################
[oracle@jack dbs]$ env|grep ORA
ORACLE_SID=infra1
ORACLE_HOME=/u01/app/oracle/product/19c/db_1
[oracle@jack software]$ ./mdbutil.pl --addmdb --target=+DATA -debug
mdbutil.pl version : 1.100
2021-06-26 19:03:17: D Executing: /u01/app/grid/19c/bin/srvctl status diskgroup -g DATA
2021-06-26 19:03:19: D Exit code: 0
2021-06-26 19:03:19: D Output of last command execution:
Disk Group DATA is running on jack,jill
2021-06-26 19:03:19: I Starting To Configure MGMTDB at +DATA...
2021-06-26 19:03:19: D Executing: /u01/app/grid/19c/bin/srvctl status mgmtlsnr
2021-06-26 19:03:20: D Exit code: 0
2021-06-26 19:03:20: D Output of last command execution:
Listener MGMTLSNR is enabled
2021-06-26 19:03:20: D Executing: /u01/app/grid/19c/bin/srvctl status mgmtdb
2021-06-26 19:03:21: D Exit code: 1
2021-06-26 19:03:21: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2021-06-26 19:03:21: D Executing: /u01/app/grid/19c/bin/srvctl status mgmtdb
2021-06-26 19:03:23: D Exit code: 1
2021-06-26 19:03:23: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2021-06-26 19:03:23: D Executing: /u01/app/grid/19c/bin/srvctl stop mgmtlsnr
2021-06-26 19:03:29: D Exit code: 0
2021-06-26 19:03:29: D Output of last command execution:
2021-06-26 19:03:29: D Executing: /u01/app/grid/19c/bin/crsctl query crs activeversion
2021-06-26 19:03:30: D Exit code: 0
2021-06-26 19:03:30: D Output of last command execution:
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
2021-06-26 19:03:30: D Executing: /u01/app/grid/19c/bin/srvctl enable qosmserver
2021-06-26 19:03:31: D Exit code: 0
2021-06-26 19:03:31: D Output of last command execution:
2021-06-26 19:03:31: D Executing: /u01/app/grid/19c/bin/srvctl start qosmserver
2021-06-26 19:03:43: D Exit code: 0
2021-06-26 19:03:43: D Output of last command execution:
2021-06-26 19:03:43: I Container database creation in progress... for GI 19.0.0.0.0
2021-06-26 19:03:43: D Executing: /u01/app/grid/19c/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATA -datafileJarLocation /u01/app/grid/19c/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
2021-06-26 19:30:40: D Exit code: 0
2021-06-26 19:30:40: D Output of last command execution:
Prepare for db operation
2021-06-26 19:30:40: I Plugable database creation in progress...
2021-06-26 19:30:40: D Executing: /u01/app/grid/19c/bin/mgmtca -local
2021-06-26 19:53:01: D Exit code: 0
2021-06-26 19:53:01: D Output of last command execution:
2021-06-26 19:53:01: D Executing: scp ./mdbutil.pl jack:/tmp/
2021-06-26 19:53:03: D Exit code: 0
2021-06-26 19:53:03: D Output of last command execution:
2021-06-26 19:53:03: I Executing "/tmp/mdbutil.pl --addchm" on jack as root to configure CHM.
2021-06-26 19:53:03: D Executing: ssh root@jack "/tmp/mdbutil.pl --addchm"
root@jack's password:
2021-06-26 19:54:05: D Exit code: 1
2021-06-26 19:54:05: D Output of last command execution:
mdbutil.pl version : 1.100
2021-06-26 19:54:05: W Not able to execute "/tmp/mdbutil.pl --addchm" on jack as root to configure CHM.
2021-06-26 19:54:05: D Executing: scp ./mdbutil.pl jill:/tmp/
2021-06-26 19:54:05: D Exit code: 0
2021-06-26 19:54:05: D Output of last command execution:
2021-06-26 19:54:05: I Executing "/tmp/mdbutil.pl --addchm" on jill as root to configure CHM.
2021-06-26 19:54:05: D Executing: ssh root@jill "/tmp/mdbutil.pl --addchm"
root@jill's password:
2021-06-26 19:54:23: D Exit code: 1
2021-06-26 19:54:23: D Output of last command execution:
mdbutil.pl version : 1.100
2021-06-26 19:54:23: W Not able to execute "/tmp/mdbutil.pl --addchm" on jill as root to configure CHM.
2021-06-26 19:54:23: I MGMTDB & CHM configuration done!
########################################
# As root user in BOTH nodes
########################################
[root@jack ~]# export ORACLE_HOME=/u01/app/grid/19c
[root@jack ~]# export PATH=$PATH:$ORACLE_HOME/bin
[root@jack ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@jack ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'jack'
CRS-2676: Start of 'ora.crf' on 'jack' succeeded
[root@jill lib]# export ORACLE_HOME=/u01/app/grid/19c
[root@jill lib]# export PATH=$PATH:$ORACLE_HOME/bin
[root@jill lib]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@jill lib]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'jill'
CRS-2676: Start of 'ora.crf' on 'jill' succeeded