Finally, I have reached a point that I can live with for the Grid 18c upgrade, because the process runs to completion without any errors or intervention.
Note that the ACFS volume is created in the CRS disk group, which may not be ideal for production.
The Rapid Home Provisioning Server is configured but not running.
The outcome differs depending on whether the upgrade is performed via the GUI or in silent mode, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL].
Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL.
While we both encountered the same error, “Upgrading RHP Repository failed”, we reached the same result via different courses of action.
The unexplained and unanswered question is, “Why is the RHP Repository being upgraded at all?”
Ultimately, it is cluvfy that changes the cluster upgrade state, as shown in gridSetupActions2019-04-21_02-10-47AM.log:
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer
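If the run never reaches the cluvfy step (for example, because the session is lost mid-upgrade), the cluster upgrade state can be left at [UPGRADE FINAL]. Based on the log above, a minimal sketch of re-running the same collection manually (grid home path is from this environment; adjust for yours):

/u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

Afterwards, crsctl query crs activeversion -f should report the cluster upgrade state as [NORMAL], per the observation above that cluvfy is what changes the state.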
I would suggest running the last step via the GUI, if feasible, rather than in silent mode, to see what is happening:
/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp
So how did I get myself into this predicament? I followed blindly. I trusted but did not verify.
18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)
Step 2.1 - Understand how MGMTDB is handled during upgrade
****************************************************************************************************
Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist.
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.
****************************************************************************************************
Note: MGMTDB is required when using Rapid Host Provisioning.
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you consider to install a MGMTDB later, it is configured to use 1G of SGA and 500 MB of PGA.
MGMTDB SGA will not be allocated in hugepages (this is because its init.ora setting 'use_large_pages' is set to false).
The following parameters from (Doc ID 2369422.1) were the root cause of all the issues in my test cases.
Because MGMTDB is not required, it seemed to make sense to set the following, but doing so resulted in chaos.
-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false
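For context, this is roughly how those flags would have been passed on the gridSetup.sh command line, combining them with the invocation shown later in this post (a sketch of what not to do, not a recommendation):

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
 -applyRU /media/patch/Jan2019/28828717 \
 -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false \
 -J-Doracle.install.crs.enableRemoteGIMR=false \
 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp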
How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)
Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability.
The Rapid Home Provisioning Server is configured by default, and there does not appear to be a documented or easily found option to skip the installation or bypass the default.
Note that RHPS is used interchangeably here for Rapid Home Provisioning Server and Service.
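Since the RHP Server resource ends up configured whether you want it or not, one option after the upgrade is to leave it stopped and disabled if you have no plans to use RHP. A minimal sketch using srvctl (I believe these verbs are supported for the rhpserver object in 18c; verify with srvctl -h in your environment):

srvctl status rhpserver
srvctl stop rhpserver        # only if it is running
srvctl disable rhpserver     # prevent it from being started automatically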
gridsetup_upgrade.rsp is used for the upgrade; the pertinent entries are shown below.
## To upgrade clusterware and/or Automatic storage management of earlier ##
## releases ##
## - Fill out sections A,B,C,D and H ##
#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false
oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION= | INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option= | oracle.install.option=UPGRADE
ORACLE_BASE= | ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType= | oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration= | oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster= | oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS= | oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP= | oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG= | oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD= | oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS= | oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes= | oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption= | oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript= | oracle.install.crs.rootconfig.executeRootScript=false
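If you would rather see what the installer would flag before committing to -skipPrereqs, the same response file can be used for a prerequisite-only pass first. A sketch, assuming -executePrereqs behaves as documented for gridSetup.sh (confirm with gridSetup.sh -help):

/u01/18.3.0.0/grid/gridSetup.sh -silent -executePrereqs \
 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp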
Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified for any of the test cases.
/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp
Here is what the environment looks like after the 18c GI upgrade.
Notice ACFS is configured for RHP.
[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit
[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)
OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.CRS.GHCHKPT.advm
OFFLINE OFFLINE racnode-dc1-1 STABLE
OFFLINE OFFLINE racnode-dc1-2 STABLE
ora.CRS.dg
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.DATA.dg
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.FRA.dg
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.chad
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.crs.ghchkpt.acfs
OFFLINE OFFLINE racnode-dc1-1 volume /opt/oracle/r
hp_images/chkbase is
unmounted,STABLE
OFFLINE OFFLINE racnode-dc1-2 volume /opt/oracle/r
hp_images/chkbase is
unmounted,STABLE
ora.helper
OFFLINE OFFLINE racnode-dc1-1 STABLE
OFFLINE OFFLINE racnode-dc1-2 STABLE
ora.net1.network
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.ons
ONLINE ONLINE racnode-dc1-1 STABLE
ONLINE ONLINE racnode-dc1-2 STABLE
ora.proxy_advm
OFFLINE OFFLINE racnode-dc1-1 STABLE
OFFLINE OFFLINE racnode-dc1-2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racnode-dc1-2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE racnode-dc1-1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE racnode-dc1-1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE racnode-dc1-1 169.254.7.214 172.16
.9.10,STABLE
ora.asm
1 ONLINE ONLINE racnode-dc1-1 STABLE
2 ONLINE ONLINE racnode-dc1-2 STABLE
ora.cvu
1 ONLINE ONLINE racnode-dc1-2 STABLE
ora.hawk.db
1 ONLINE ONLINE racnode-dc1-1 Open,HOME=/u01/app/o
racle/12.1.0.1/db1,S
TABLE
2 ONLINE ONLINE racnode-dc1-2 Open,HOME=/u01/app/o
racle/12.1.0.1/db1,S
TABLE
ora.mgmtdb
1 ONLINE ONLINE racnode-dc1-1 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE racnode-dc1-2 STABLE
ora.racnode-dc1-1.vip
1 ONLINE ONLINE racnode-dc1-1 STABLE
ora.racnode-dc1-2.vip
1 ONLINE ONLINE racnode-dc1-2 STABLE
ora.rhpserver
1 OFFLINE OFFLINE STABLE
ora.scan1.vip
1 ONLINE ONLINE racnode-dc1-2 STABLE
ora.scan2.vip
1 ONLINE ONLINE racnode-dc1-1 STABLE
ora.scan3.vip
1 ONLINE ONLINE racnode-dc1-1 STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
/u01/18.3.0.0/grid on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB
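To check the MGMTDB memory settings called out in Doc ID 2369422.1 (1G SGA, 500 MB PGA, use_large_pages=false), a quick sketch against the -MGMTDB instance on this node (uses the same gi.env script shown above; note the leading dash in the SID):

. /media/patch/gi.env
export ORACLE_SID=-MGMTDB
sqlplus / as sysdba
SQL> show parameter sga_target
SQL> show parameter pga_aggregate_target
SQL> show parameter use_large_pages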
[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
OFFLINE OFFLINE racnode-dc1-1 volume /opt/oracle/r
hp_images/chkbase is
unmounted,STABLE
OFFLINE OFFLINE racnode-dc1-2 volume /opt/oracle/r
hp_images/chkbase is
unmounted,STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
OFFLINE OFFLINE racnode-dc1-1 STABLE
OFFLINE OFFLINE racnode-dc1-2 STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1
[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)
[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version: 18.0.0.0.0
[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
Device: /dev/asm/ghchkpt-61
Mount Point: /opt/oracle/rhp_images/chkbase
Disk Group: CRS
Volume: GHCHKPT
Options: none
Nodes: all
Accelerator Volumes:
[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems
[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup Consumer Space Size With Mirroring Usable Free %Free Path
CRS 59.99 59.99 34.95 58%
DATA 99.99 99.99 94.76 94%
FRA 59.99 59.99 59.43 99%
----
unit of measurement: GB
[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#
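The start fails because the resource was registered but left disabled. If you actually want the RHP checkpoint file system mounted, a sketch as root using the same device (I have not verified this is sufficient in every configuration; the underlying ADVM volume may also need to be online):

srvctl enable filesystem -device /dev/asm/ghchkpt-61
srvctl start filesystem -device /dev/asm/ghchkpt-61
srvctl status filesystem -device /dev/asm/ghchkpt-61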
[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version 18.0.0.0.0
[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 512 4096 4194304 61436 35784 0 35784 Y CRS/
MOUNTED EXTERN N 512 512 4096 4194304 102396 97036 0 97036 N DATA/
MOUNTED EXTERN N 512 512 4096 4194304 61436 60856 0 60856 N FRA/
[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running
[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle 3571 1 0 02:40 ? 00:00:03 mdb_pmon_-MGMTDB
oracle 17109 1 0 Apr20 ? 00:00:04 asm_pmon_+ASM1
oracle 17531 1 0 Apr20 ? 00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$
Let me show you how convoluted this is.
In my case, it’s easy because only 2 actions were performed.
Can you tell which GridSetupAction was performed based on the directory name alone?
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM
This is how you can find out.
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM
================================================================================
### gridSetup.sh -silent -skipPrereqs -applyRU
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ ll
total 13012
-rw-r----- 1 oracle oinstall 20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall 157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall 4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall 12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/rootupgrade.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
# Execute Root Scripts successful. # 3228 # 1555780156914
# Execute Root Scripts successful. # 3228 # 1555780156914
# Execute Root Scripts successful. # 3228 # 1555780156914
================================================================================
### gridSetup.sh -executeConfigTools -silent
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ ll
total 1116
-rw-r----- 1 oracle oinstall 0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall 122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall 129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log
oracle@racnode-dc1-1:hawk1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO: [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO: [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO: [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:18.0.0.0.0:common
INFO: [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO: [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method
It might be better to use the GUI if available, but be careful.
For OUI installations or execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection with the server is lost.
I was using X, and the connection was lost during the upgrade. It was the kiss of death, with this being the last screen capture.
[last screen capture before the session was lost]
Rene’s quote:
After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.
I was lucky enough to get on a call with a good friend (@_rickgonzalez ) who is the PM of the RHP and we were able to work through it. So below is what I was able to do to solve the issue.
Also it was confirmed by them, that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.
I concur and conclude that the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, with most of the trouble stemming from MGMTDB and the Rapid Home Provisioning (RHP) repository.
Unless you’re lucky or know someone, good luck with your upgrade.
Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.
Did you use the GUI or silent mode?