Channel: Thinking Out Loud

Playing with oracleasm and ASMLib

I had forgotten about a script I wrote some time ago: Be Friend With awk/sed | ASM Mapping

[root@racnode-dc1-1 ~]# cat /sf_working/scripts/asm_mapping.sh
#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done
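
To make the pipeline easier to follow, here is a standalone walk-through of what each stage produces for one disk, using the querydisk output shown below:

# Input line (from oracleasm querydisk -d CRS01):
#   Disk "CRS01" is a valid ASM disk on device [8,33]
echo 'Disk "CRS01" is a valid ASM disk on device [8,33]' | awk '{print $NF}'
#   -> [8,33]
echo '[8,33]' | sed 's/[][]//g'
#   -> 8,33
echo '8,33' | awk -F, '{print $1 ",.*" $2}'
#   -> 8,.*33  (a regex that matches the "8,  33" major/minor column in ls -l /dev/*)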

[root@racnode-dc1-1 ~]# /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Mar 16 10:25 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Mar 16 10:25 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Mar 16 10:25 /dev/sde1

[root@racnode-dc1-1 ~]#

HOWTO: Which Disks Are Handled by ASMLib Kernel Driver? (Doc ID 313387.1)

[root@racnode-dc1-1 ~]# oracleasm listdisks
CRS01
DATA01
FRA01

[root@racnode-dc1-1 dev]# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8, 33 Mar 15 10:46 CRS01
brw-rw---- 1 oracle dba 8, 49 Mar 15 10:46 DATA01
brw-rw---- 1 oracle dba 8, 65 Mar 15 10:46 FRA01

[root@racnode-dc1-1 dev]# ls -l /dev | grep -E '33|49|65'|grep -E '8'
brw-rw---- 1 root    disk      8,  33 Mar 15 23:47 sdc1
brw-rw---- 1 root    disk      8,  49 Mar 15 23:47 sdd1
brw-rw---- 1 root    disk      8,  65 Mar 15 23:47 sde1

[root@racnode-dc1-1 dev]# /sbin/blkid | grep oracleasm
/dev/sde1: LABEL="FRA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="205115d9-730d-4f64-aedd-d3886e73d123"
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"
/dev/sdc1: LABEL="CRS01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="232e214d-07bb-4f36-aba8-fb215437fb7e"
[root@racnode-dc1-1 dev]#

Various commands to retrieve oracleasm info and more.

[root@racnode-dc1-1 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# uname -r
4.1.12-61.1.18.el7uek.x86_64

[root@racnode-dc1-1 ~]# rpm -q oracleasm-`uname -r`
package oracleasm-4.1.12-61.1.18.el7uek.x86_64 is not installed

[root@racnode-dc1-1 ~]# rpm -qa |grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# oracleasm -V
oracleasm version 2.1.9

[root@racnode-dc1-1 ~]# oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    querydisk        Determine if a disk belongs to Oracle ASMlib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMlib disk
    update-driver    Download the latest ASMLib driver

[root@racnode-dc1-1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@racnode-dc1-1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

[root@racnode-dc1-1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@racnode-dc1-1 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@racnode-dc1-1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@racnode-dc1-1 ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

[root@racnode-dc1-1 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"

[root@racnode-dc1-1 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRS01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:DATA01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:FRA01 [104853504 blocks (53684994048 bytes), maxio 1024]

[root@racnode-dc1-1 ~]# lsmod | grep oracleasm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# modinfo oracleasm
filename:       /lib/modules/4.1.12-61.1.18.el7uek.x86_64/kernel/drivers/block/oracleasm/oracleasm.ko
description:    Kernel driver backing the Generic Linux ASM Library.
author:         Joel Becker, Martin K. Petersen <martin.petersen@oracle.com>
version:        2.0.8
license:        GPL
srcversion:     4B3524FDA590726E8D378CB
depends:
intree:         Y
vermagic:       4.1.12-61.1.18.el7uek.x86_64 SMP mod_unload modversions
signer:         Oracle CA Server
sig_key:        AC:74:F5:41:96:B5:9D:EB:61:BA:02:F9:C2:02:8C:9C:E5:94:53:06
sig_hashalgo:   sha512
parm:           use_logical_block_size:Prefer logical block size over physical (Y=logical, N=physical [default]) (bool)

[root@racnode-dc1-1 ~]# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Mar  5 20:21 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

[root@racnode-dc1-1 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# rpm -qi oracleasmlib-2.0.4-1.el6.x86_64
Name        : oracleasmlib
Version     : 2.0.4
Release     : 1.el6
Architecture: x86_64
Install Date: Tue 18 Apr 2017 10:56:40 AM CEST
Group       : System Environment/Kernel
Size        : 27192
License     : Oracle Corporation
Signature   : RSA/SHA256, Mon 26 Mar 2012 10:22:51 PM CEST, Key ID 72f97b74ec551f03
Source RPM  : oracleasmlib-2.0.4-1.el6.src.rpm
Build Date  : Mon 26 Mar 2012 10:22:44 PM CEST
Build Host  : ca-build44.us.oracle.com
Relocations : (not relocatable)
Packager    : Joel Becker <joel.becker@oracle.com>
Vendor      : Oracle Corporation
URL         : http://oss.oracle.com/
Summary     : The Oracle Automatic Storage Management library userspace code.
Description :
The Oracle userspace library for Oracle Automatic Storage Management
[root@racnode-dc1-1 ~]#

References for ASMLib

Do you need asmlib?

Oracleasmlib Not Necessary


Playing with ACFS

The kernel version is 4.1.12-94.3.9.el7uek.x86_64 while ACFS-9325 reports Driver OS kernel version = 4.1.12-32.el7uek.x86_64, because the kernel was upgraded and ACFS has not been reconfigured after the kernel upgrade.
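
For reference, a minimal sketch of how the ADVM/ACFS drivers could be rebuilt for the new kernel, assuming the standard acfsroot workflow (run as root; expect a clusterware outage on the node):

# Stop clusterware on the node before touching the drivers.
/u01/app/12.1.0.1/grid/bin/crsctl stop crs
# Reinstall the ADVM/ACFS drivers for the running kernel.
/u01/app/12.1.0.1/grid/bin/acfsroot install
# Restart clusterware; the drivers are loaded on startup.
/u01/app/12.1.0.1/grid/bin/crsctl start crs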

[root@racnode-dc1-1 ~]# uname -r
4.1.12-94.3.9.el7uek.x86_64

[root@racnode-dc1-1 ~]# lsmod | grep oracle
oracleacfs           3719168  2
oracleadvm            606208  7
oracleoks             516096  2 oracleacfs,oracleadvm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# modinfo oracleoks
filename:       /lib/modules/4.1.12-94.3.9.el7uek.x86_64/weak-updates/usm/oracleoks.ko
author:         Oracle Corporation
license:        Proprietary
srcversion:     3B8116031A3907D0FFFC8E1
depends:
vermagic:       4.1.12-32.el7uek.x86_64 SMP mod_unload modversions
signer:         Oracle Linux Kernel Module Signing Key
sig_key:        2B:B3:52:41:29:69:A3:65:3F:0E:B6:02:17:63:40:8E:BB:9B:B5:AB
sig_hashalgo:   sha512

[root@racnode-dc1-1 ~]# acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 4.1.12-32.el7uek.x86_64(x86_64).
ACFS-9326:     Driver Oracle version = 181010.

[root@racnode-dc1-1 ~]# acfsdriverstate installed
ACFS-9203: true

[root@racnode-dc1-1 ~]# acfsdriverstate supported
ACFS-9200: Supported

[root@racnode-dc1-1 ~]# acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at: '/u01/app/12.1.0.1/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12/4.1.12-x86_64/bin'

[root@racnode-dc1-1 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# acfsutil registry
Mount Object:
  Device: /dev/asm/acfs_vol-256
  Mount Point: /ggdata02
  Disk Group: DATA
  Volume: ACFS_VOL
  Options: none
  Nodes: all
[root@racnode-dc1-1 ~]# acfsutil info fs
/ggdata02
    ACFS Version: 12.1.0.2.0
    on-disk version:       39.0
    flags:        MountPoint,Available
    mount time:   Mon Mar 25 16:24:58 2019
    allocation unit:       4096
    volumes:      1
    total size:   10737418240  (  10.00 GB )
    total free:   10569035776  (   9.84 GB )
    file entry table allocation: 49152
    primary volume: /dev/asm/acfs_vol-256
        label:
        state:                 Available
        major, minor:          248, 131073
        size:                  10737418240  (  10.00 GB )
        free:                  10569035776  (   9.84 GB )
        metadata read I/O count:         1087
        metadata write I/O count:        11
        total metadata bytes read:       556544  ( 543.50 KB )
        total metadata bytes written:    12800  (  12.50 KB )
        ADVM diskgroup         DATA
        ADVM resize increment: 536870912
        ADVM redundancy:       unprotected
        ADVM stripe columns:   8
        ADVM stripe width:     1048576
    number of snapshots:  0
    snapshot space usage: 0  ( 0.00 )
    replication status: DISABLED
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ cluvfy comp acfs -n all -f /ggdata02 -verbose

Verifying ACFS Integrity
Task ASM Integrity check started...


Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes

Confirming that at least one ASM disk group is configured...
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Task ACFS Integrity check started...

Checking shared storage accessibility...

"/ggdata02" is shared


Shared storage check was successful on nodes "racnode-dc1-1,racnode-dc1-2"

Task ACFS Integrity check passed

UDev attributes check for ACFS started...
Result: UDev attributes check passed for ACFS


Verification of ACFS Integrity was successful.
[oracle@racnode-dc1-1 ~]$

Gather ACFS Volume Info:

[oracle@racnode-dc1-1 ~]$ asmcmd volinfo --all

Diskgroup Name: DATA

         Volume Name: ACFS_VOL
         Volume Device: /dev/asm/acfs_vol-256
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /ggdata02

Gather ACFS info using resource name:

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init

NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

From (asmcmd volinfo --all): Diskgroup Name: DATA

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.dg -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

From (asmcmd volinfo --all): Diskgroup Name: DATA and Volume Name: ACFS_VOL

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.advm -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.acfs -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------

Gather ACFS info using resource type:

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.volume.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
			                                                 is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
			                                                 is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.acfs.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.diskgroup.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

After upgrading Grid to 18c and applying the RU, the cluster upgrade state was not NORMAL.

The cluster upgrade state was [UPGRADE FINAL], which I have never seen before.

Searching Oracle Support was useless as I was only able to find the following states:

The cluster upgrade state is [NORMAL]
The cluster upgrade state is [FORCED]
The cluster upgrade state is [ROLLING PATCH]

The following checks were performed after upgrade:

[oracle@racnode-dc1-1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[oracle@racnode-dc1-1 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#


[oracle@racnode-dc1-2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2532936542].

[oracle@racnode-dc1-2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

Check OCR: Grid Infrastructure Upgrade : The cluster upgrade state is [FORCED] (Doc ID 2482606.1)
I checked out of desperation, and the OCR was fine.

[root@racnode-dc1-1 ~]# olsnodes -c
vbox-rac-dc1

[root@racnode-dc1-1 ~]# olsnodes -t -a -s -n
racnode-dc1-1   1       Active  Hub     Unpinned
racnode-dc1-2   2       Active  Hub     Unpinned

[root@racnode-dc1-1 ~]# $GRID_HOME/bin/ocrdump /tmp/ocrdump.txt

[root@racnode-dc1-1 ~]# grep SYSTEM.version.hostnames /tmp/ocrdump.txt
[SYSTEM.version.hostnames]
[SYSTEM.version.hostnames.racnode-dc1-1]
[SYSTEM.version.hostnames.racnode-dc1-1.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-1.site]
[SYSTEM.version.hostnames.racnode-dc1-2]
[SYSTEM.version.hostnames.racnode-dc1-2.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-2.site]
[root@racnode-dc1-1 ~]#
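
For completeness, OCR integrity can also be checked directly; a minimal example, assuming the Grid environment is sourced (output will vary):

# Verify OCR integrity and show the OCR location (run as root).
$GRID_HOME/bin/ocrcheck
# Show only the OCR location configuration.
$GRID_HOME/bin/ocrcheck -config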

Thanks to my friend Vlatko J. https://twitter.com/jvlatko

Run cluvfy:

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ which cluvfy
/u01/18.3.0.0/grid/bin/cluvfy

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade

Baseline collected.
Collection report for this execution is saved in file "/u01/app/oracle/crsdata/@global/cvu/baseline/install/grid_install_18.0.0.0.0.zip".

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 13, 2019 11:05:58 PM
CVU home:                     /u01/18.3.0.0/grid/
User:                         oracle
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

After running cluvfy, the cluster upgrade state is [NORMAL].

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

Create Linux Swap File

Currently, I am using Vagrant boxes from oravirt (Mikael Sandström) on GitHub (https://github.com/oravirt).

The swap is too small, and I wanted to increase it for an 18c upgrade test. Tired of doing this manually, I wrote a script for it.

#!/bin/sh -x
# Show current swap and memory usage before the change.
swapon --show
free -h
# Remove any previous swap file and create a new 16 GB file.
rm -fv /swapfile1
dd if=/dev/zero of=/swapfile1 bs=1G count=16
ls -lh /swapfile?
# Swap files must not be world-readable.
chmod 0600 /swapfile1
mkswap /swapfile1
swapon /swapfile1
# Verify the new swap is active.
swapon --show
free -h
# Persist the swap file across reboots (path must match the file created above).
echo "/swapfile1              swap                    swap    defaults        0 0" >> /etc/fstab
cat /etc/fstab
exit

Script in action:

[root@racnode-dc1-2 patch]# ./mkswap.sh
+ swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   2G  33M   -1
+ free -h
              total        used        free      shared  buff/cache   available
Mem:           5.6G        4.0G        114M        654M        1.4G        779M
Swap:          2.0G         33M        2.0G
+ rm -fv /swapfile1
+ dd if=/dev/zero of=/swapfile1 bs=1G count=16
16+0 records in
16+0 records out
17179869184 bytes (17 GB) copied, 42.7352 s, 402 MB/s
+ ls -lh /swapfile1
-rw-r--r-- 1 root root 16G Apr 14 15:18 /swapfile1
+ chmod 0600 /swapfile1
+ mkswap /swapfile1
Setting up swapspace version 1, size = 16777212 KiB
no label, UUID=b084bd5d-e32e-4c15-974f-09f505a0cedc
+ swapon /swapfile1
+ swapon --show
NAME       TYPE      SIZE   USED PRIO
/dev/dm-1  partition   2G 173.8M   -1
/swapfile1 file       16G     0B   -2
+ free -h
              total        used        free      shared  buff/cache   available
Mem:           5.6G        3.9G        1.0G        189M        657M        1.3G
Swap:           17G        173M         17G
+ echo '/root/swapfile1         swap                    swap    defaults        0 0'
+ cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Apr 18 08:50:14 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root     /                       xfs     defaults        0 0
UUID=ed2996e5-e077-4e23-83a5-10418226a725 /boot                   xfs     defaults        0 0
/dev/mapper/ol-swap     swap                    swap    defaults        0 0
/dev/vgora/lvora /u01 ext4 defaults 1 2
/root/swapfile1         swap                    swap    defaults        0 0
+ exit
[root@racnode-dc1-2 patch]#
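
If the temporary swap file needs to be removed later, a minimal cleanup sketch, assuming the swap file and fstab entry created by the script above:

#!/bin/sh -x
# Deactivate the swap file, drop its fstab entry, and delete the file.
swapoff /swapfile1
sed -i '\|/swapfile1|d' /etc/fstab
rm -fv /swapfile1
swapon --show
free -h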

Update Override OPatch

A framework to source the GI/DB RAC environment, stored on a shared volume.

[oracle@racnode-dc1-2 patch]$ df -h |grep patch
media_patch              3.7T  442G  3.3T  12% /media/patch

[oracle@racnode-dc1-2 patch]$ ps -ef|grep pmon
oracle    3268  2216  0 15:37 pts/0    00:00:00 grep --color=auto pmon
oracle   11254     1  0 06:33 ?        00:00:02 ora_pmon_hawk2
oracle   19995     1  0 05:52 ?        00:00:02 asm_pmon_+ASM2

[oracle@racnode-dc1-2 patch]$ cat /etc/oratab
+ASM2:/u01/app/12.1.0.1/grid:N
hawk2:/u01/app/oracle/12.1.0.1/db1:N

[oracle@racnode-dc1-2 patch]$ cat gi.env
### Michael Dinh : Mar 26, 2019
### Source RAC GI environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset ORACLE_UNQNAME
ORAENV_ASK=NO
h=$(hostname -s)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=+ASM${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
export GRID_HOME=$ORACLE_HOME
env|egrep 'ORA|GRID'
sysresv|tail -1

[oracle@racnode-dc1-2 patch]$ cat hawk.env
### Michael Dinh : Mar 26, 2019
### Source RAC DB environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset GRID_HOME
h=$(hostname -s)
### Extract filename without extension (.env)
ORAENV_ASK=NO
export ORACLE_UNQNAME=$(basename $BASH_SOURCE .env)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=$ORACLE_UNQNAME${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
env|egrep 'ORA|GRID'
sysresv|tail -1
[oracle@racnode-dc1-2 patch]$
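
Both scripts derive ORACLE_SID from the last character of the hostname using bash substring expansion; a standalone illustration with a hostname from this cluster:

h=racnode-dc1-2
echo ${#h}                  # length of the hostname string -> 13
echo ${h:${#h} - 1}         # substring starting at the last character -> 2
echo +ASM${h:${#h} - 1}     # -> +ASM2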

update_opatch.sh

#!/bin/sh -x
# Overwrite OPatch in the current ORACLE_HOME with the latest p6880880
# distribution and show the OPatch version before and after.
update_opatch()
{
set -ex
cd $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version
unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip ; echo $?
$ORACLE_HOME/OPatch/opatch version
}
ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
# Update OPatch in the Grid home, then in the database home.
. /media/patch/gi.env
update_opatch
. /media/patch/hawk.env
update_opatch
exit

Run update_opatch.sh

[oracle@racnode-dc1-1 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been changed from hawk1 to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 patch]$


[oracle@racnode-dc1-2 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-2 patch]$

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

There have been a lot of discussions about Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL] and how cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade could have changed the cluster upgrade state to [NORMAL].

When gridSetup.sh -executeConfigTools is run in silent mode, the next step, cluvfy, is not run.

[oracle@racnode-dc1-1 ~]$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
[oracle@racnode-dc1-1 ~]$

When running gridSetup.sh -executeConfigTools in the GUI, there is an option to ignore the failed 'Upgrading RHP Repository' step and continue to the next step, which runs cluvfy.

I don't think cluvfy modified the state of the cluster; rather, ora.cvu did, due to the existence of the following files.

[root@racnode-dc1-1 install]# pwd
/u01/app/oracle/crsdata/@global/cvu/baseline/install
[root@racnode-dc1-1 install]# ll
total 36000
-rw-r--r-- 1 oracle oinstall 35958465 Apr 14 06:05 grid_install_12.1.0.2.0.xml
-rw-r--r-- 1 oracle oinstall   901803 Apr 15 01:42 grid_install_18.0.0.0.0.zip
[root@racnode-dc1-1 install]# 

When checking RESULTS from ora.cvu, there are no errors.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
[oracle@racnode-dc1-1 ~]$ 

Hell! What do I know? I am just a RAC novice, and I am happy the cluster state is what it should be.

18c Upgrade: Failed gridSetup.sh -executeConfigTools: Cluster upgrade state is [UPGRADE FINAL]

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

This is a multi-part series on the 18c upgrade; I suggest reading the above two posts first.

Commands for gridSetup.sh

+ /u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs -applyRU /media/patch/Jan2019/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/installerPatchActions_2019-04-16_06-19-12AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/18.3.0.0/grid/install/response/grid_2019-04-16_06-19-12AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/gridSetupActions2019-04-16_06-19-12AM.log

As a root user, execute the following script(s):
        1. /u01/18.3.0.0/grid/rootupgrade.sh

Execute /u01/18.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc1-1, racnode-dc1-2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp [-silent]


+ exit
oracle@racnode-dc1-1::/home/oracle
$

Basically, the error provided is utterly useless.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
oracle@racnode-dc1-1::/home/oracle

Check logs from directory /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ ls -alrt
total 1072
-rw-r----- 1 oracle oinstall     130 Apr 16 12:59 installerPatchActions_2019-04-16_12-59-56PM.log
-rw-r----- 1 oracle oinstall       0 Apr 16 12:59 gridSetupActions2019-04-16_12-59-56PM.err
drwxrwx--- 8 oracle oinstall    4096 Apr 16 13:01 ..
-rw-r----- 1 oracle oinstall 1004378 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.out
-rw-r----- 1 oracle oinstall    2172 Apr 16 13:01 time2019-04-16_12-59-56PM.log ***
-rw-r----- 1 oracle oinstall   73047 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.log ***
drwxrwx--- 2 oracle oinstall    4096 Apr 16 13:01 .

Check time2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ cat time2019-04-16_12-59-56PM.log
 # Message # ElapsedTime # Current Time ( ms )
 # Starting step:INITIALIZE_ACTION of state:init #  0  # 1555412405106
 # Finished step:INITIALIZE_ACTION of state:init # 1 # 1555412405106
 # Starting step:EXECUTE of state:init #  0  # 1555412405108
 # Finished step:EXECUTE of state:init # 3 # 1555412405111
 # Starting step:VALIDATE of state:init #  0  # 1555412405113
 # Finished step:VALIDATE of state:init # 2 # 1555412405115
 # Starting step:TRANSITION of state:init #  0  # 1555412405115
 # Finished step:TRANSITION of state:init # 2 # 1555412405117
 # Starting step:EXECUTE of state:CRSConfigTools #  0  # 1555412405117
 # Finished step:EXECUTE of state:CRSConfigTools # 813 # 1555412405930
 # Starting step:VALIDATE of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:VALIDATE of state:CRSConfigTools # 0 # 1555412405930
 # Starting step:TRANSITION of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:TRANSITION of state:CRSConfigTools # 26591 # 1555412432521
 # Starting step:INITIALIZE_ACTION of state:setup #  0  # 1555412432521
 # Finished step:INITIALIZE_ACTION of state:setup # 0 # 1555412432521
 # Starting step:EXECUTE of state:setup #  0  # 1555412432522
 # Finished step:EXECUTE of state:setup # 6 # 1555412432528
 # Configuration in progress. #  0  # 1555412436788
 # Update Inventory in progress. #  0  # 1555412437768
 # Update Inventory successful. # 52612 # 1555412490380
 # Upgrading RHP Repository in progress. #  0  # 1555412490445

================================================================================
 # Upgrading RHP Repository failed. # 12668 # 1555412503112
================================================================================

 # Starting step:VALIDATE of state:setup #  0  # 1555412503215
 # Finished step:VALIDATE of state:setup # 15 # 1555412503230
 # Starting step:TRANSITION of state:setup #  0  # 1555412503230
 # Finished step:TRANSITION of state:setup # 0 # 1555412503230
 # Starting step:EXECUTE of state:finish #  0  # 1555412503230
 # Finished step:EXECUTE of state:finish # 6 # 1555412503236
 # Starting step:VALIDATE of state:finish #  0  # 1555412503237
 # Finished step:VALIDATE of state:finish # 1 # 1555412503238
 # Starting step:TRANSITION of state:finish #  0  # 1555412503238
 # Finished step:TRANSITION of state:finish # 0 # 1555412503238

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

Check gridSetupActions2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -B2 -A100 'Executing RHPUPGRADE' gridSetupActions2019-04-16_12-59-56PM.log
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 16, 2019 1:01:30 PM] No arguments to pass to stdin
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 16, 2019 1:01:43 PM] Exiting ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus SUCCESS_MINUS_RECTOOL to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Calling event ConfigSessionEnding
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.endSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Configuration
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus FAILURE to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] All forked task are completed at state setup
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <setup>

================================================================================
WARNING:  [Apr 16, 2019 1:01:43 PM] [WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
================================================================================

INFO:  [Apr 16, 2019 1:01:43 PM] Advice is CONTINUE
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <setup>
INFO:  [Apr 16, 2019 1:01:43 PM] Verifying route success
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Executing action at state finish
INFO:  [Apr 16, 2019 1:01:43 PM] FinishAction Actions.execute called
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] Completed executing action at state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Moved to state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <finish>
WARNING:  [Apr 16, 2019 1:01:43 PM] Validation disabled for the state finish
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Terminating all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Terminated all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Successfully executed the flow in SILENT mode
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] inventory location is/u01/app/oraInventory
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application

================================================================================
INFO:  [Apr 16, 2019 1:01:43 PM] Exit Status is -1
INFO:  [Apr 16, 2019 1:01:43 PM] Shutdown Oracle Grid Infrastructure 18c Installer
INFO:  [Apr 16, 2019 1:01:43 PM] Unloading Setup Driver
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

The Exit Status of -1 is probably why the cluster upgrade state is [UPGRADE FINAL].

Why is the RHP Repository being upgraded when oracle_install_crs_ConfigureRHPS=false?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -i rhp *
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:04 PM] Setting value for the property:oracle_install_crs_ConfigureRHPS in the bean:CRSInstallSettings
gridSetupActions2019-04-16_12-59-56PM.log: oracle_install_crs_ConfigureRHPS                       false
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Created config job for rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Selecting job named 'Upgrading RHP Repository' for retry
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Started Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository in progress. #  0  # 1555412490445
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository failed. # 12668 # 1555412503112
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

ora.cvu does not report any errors.

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

Running rhprepos upgradeSchema -fromversion 12.1.0.2.0 manually FAILED:

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
PRCT-1474 : failed to run 'mgmtca' on node racnode-dc1-2.

oracle@racnode-dc1-1::/home/oracle
$ ps -ef|grep pmon
oracle    9722  4804  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle   10380     1  0 13:46 ?        00:00:01 asm_pmon_+ASM1
oracle   10974     1  0 13:46 ?        00:00:01 apx_pmon_+APX1
oracle   11218     1  0 13:47 ?        00:00:02 ora_pmon_hawk1
oracle@racnode-dc1-1::/home/oracle
$ ssh racnode-dc1-2
Last login: Tue Apr 16 18:44:30 2019

----------------------------------------
Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

oracle@racnode-dc1-2::/home/oracle
$ ps -ef|grep pmon
oracle    9219     1  0 13:44 ?        00:00:01 asm_pmon_+ASM2
oracle   10113     1  0 13:45 ?        00:00:01 apx_pmon_+APX2
oracle   10619     1  0 13:45 ?        00:00:01 ora_pmon_hawk2
oracle   13200 13178  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle@racnode-dc1-2::/home/oracle
$

In conclusion, the silent upgrade process is poorly documented at best.

I am starting to wonder if the following parameters contributed to the issue:

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Finally, I have reached a point that I can live with for the Grid 18c upgrade, because the process runs to completion without any errors or intervention.

Note that the ACFS volume is created in the CRS disk group, which may not be ideal for production.

The Rapid Home Provisioning Server is configured but is not running.

The outcome is different depending on whether the upgrade is performed via the GUI or silently, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL].

Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL.

While we both encountered the same error, "Upgrading RHP Repository failed", we accomplished the same results via different courses of action.

The unexplained and unanswered question is, "Why is the RHP Repository being upgraded?"

Ultimately, it is cluvfy that changes the cluster upgrade state, as shown in gridSetupActions2019-04-21_02-10-47AM.log:

INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE

INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'

INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'

INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer

I would suggest running the last step via the GUI, if feasible, rather than in silent mode, to see what is happening:

/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

So how did I get myself into this predicament? I followed blindly. I trusted but did not verify.

18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)

Step 2.1 - Understand how MGMTDB is handled during upgrade

****************************************************************************************************
Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist. 
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.
****************************************************************************************************

Note: MGMTDB is required when using Rapid Host Provisioning. 
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you consider installing a MGMTDB later, it is configured to use 1G of SGA and 500 MB of PGA.
MGMTDB SGA will not be allocated in hugepages (this is because its init.ora setting 'use_large_pages' is set to false).

The following parameters from (Doc ID 2369422.1) were the root cause for all the issues in my test cases.

Because MGMTDB is not required, it made sense to set the following, but doing so resulted in chaos.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)

Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. 

The Rapid Home Provisioning Server is configured by default, and there does not appear to be a documented or easily found option to skip or bypass this default.

RHPS is used interchangeably here for the Rapid Home Provisioning Server and Service.
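
One way to confirm the state of the default RHP Server after the upgrade (assuming the 18c Grid environment is sourced; output will vary):

# Show the RHP Server configuration and whether it is running.
srvctl config rhpserver
srvctl status rhpserver
# The underlying clusterware resource can also be checked directly.
crsctl stat res ora.rhpserver -t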

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified for each of the test cases.

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

Here is what the environment looks like after the 18c GI upgrade.

Notice ACFS is configured for RHP.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.chad
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.7.214 172.16
                                                             .9.10,STABLE
ora.asm
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
  /u01/18.3.0.0/grid on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)

[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version: 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
  Device: /dev/asm/ghchkpt-61
  Mount Point: /opt/oracle/rhp_images/chkbase
  Disk Group: CRS
  Volume: GHCHKPT
  Options: none
  Nodes: all
  Accelerator Volumes:

[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup      Consumer      Space     Size With Mirroring  Usable Free  %Free   Path
CRS                          59.99              59.99          34.95       58%
DATA                         99.99              99.99          94.76       94%
FRA                          59.99              59.99          59.43       99%
----
unit of measurement: GB

[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    35784                0           35784                        Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304    102396    97036                0           97036                        N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    60856                0           60856                        N  FRA/

[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle    3571     1  0 02:40 ?        00:00:03 mdb_pmon_-MGMTDB
oracle   17109     1  0 Apr20 ?        00:00:04 asm_pmon_+ASM1
oracle   17531     1  0 Apr20 ?        00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$

Let me show you how convoluted this is.
In my case, it is easy because only two actions were performed.
Can you tell which GridSetupAction was performed based on the directory name alone?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM

This is how you can find out (a small helper sketch follows after the two walkthroughs).

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM

================================================================================
### gridSetup.sh -silent -skipPrereqs -applyRU
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ ll
total 13012
-rw-r----- 1 oracle oinstall   20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall       0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall       0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall       0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall     157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall      29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall    4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall   12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/rootupgrade.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914

================================================================================
### gridSetup.sh -executeConfigTools -silent
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ ll
total 1116
-rw-r----- 1 oracle oinstall       0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall  122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall     129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall    3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log

oracle@racnode-dc1-1:hawk1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:18.0.0.0.0:common
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO:  [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method
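
For reference, a minimal sketch that automates the check above, assuming the log naming conventions shown and that only these two kinds of runs exist:

#!/bin/bash
### Sketch: classify each GridSetupActions directory by what it ran.
cd /u01/app/oraInventory/logs
for d in GridSetupActions*; do
  log=$(ls "$d"/gridSetupActions*.log 2>/dev/null | head -1)
  [ -z "$log" ] && continue
  if grep -q executeSelectedToolsInAggregate "$log"; then
    echo "$d : gridSetup.sh -executeConfigTools"
  elif grep -q "Execute Root Scripts successful" "$d"/time*.log 2>/dev/null; then
    echo "$d : gridSetup.sh -silent (install/upgrade run)"
  else
    echo "$d : unknown"
  fi
done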

It might be better to use the GUI if available, but be careful.

For OUI installations or execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection with the server is lost.

I was using X forwarding and the connection was lost during the upgrade. It was the kiss of death, with this being the last screen capture.

Rene’s quote:

After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.

I was lucky enough to get on a call with a good friend (@_rickgonzalez ) who is the PM of the RHP and we were able to work through it. So below is what I was able to do to solve the issue.

Also it was confirmed by them , that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.

I concur and conclude that the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, all having to do with MGMTDB and the Rapid Home Provisioning (RHP) repository.

Unless you’re lucky or know someone, good luck with your upgrade.

Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.

Did you use GUI or silent?


Solving DBFS UnMounting Issue


Often, I am quite baffled by Oracle's implementations and documentation.

RAC GoldenGate DBFS implementation has been a nightmare, and here is one example: DBFS Nightmare

I am about to show you another.

In general, I find any implementation using ACTION_SCRIPT is good in theory, bad in practice, but I digress.

While getting ready to shut down CRS for system patching, I found that CRS failed to shut down.

# crsctl stop crs
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2673: Attempting to stop 'dbfs_mount' on 'host02'
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2673: Attempting to stop 'dbfs_mount' on 'host02'
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2799: Failed to shut down resource 'dbfs_mount' on 'host02'
CRS-2799: Failed to shut down resource 'ora.GG_PROD.dg' on 'host02'
CRS-2799: Failed to shut down resource 'ora.asm' on 'host02'
CRS-2799: Failed to shut down resource 'ora.dbfs.db' on 'host02'
CRS-2799: Failed to shut down resource 'ora.host02.ASM2.asm' on 'host02'
CRS-2794: Shutdown of Cluster Ready Services-managed resources on 'host02' has failed
CRS-2675: Stop of 'ora.crsd' on 'host02' failed
CRS-2799: Failed to shut down resource 'ora.crsd' on 'host02'
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'host02' has failed
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.

Checking /var/log/messages finds the errors, but there is no clue for a resolution.

# grep -i dbfs /var/log/messages
Apr 17 19:42:26 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 19:42:26 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 19:42:26 host02 DBFS_/ggdata: Stop - stopped, but still mounted, error

Apr 17 20:45:59 host02 DBFS_/ggdata: mount-dbfs.sh mounting DBFS at /ggdata from database DBFS
Apr 17 20:45:59 host02 DBFS_/ggdata: /ggdata already mounted, use mount-dbfs.sh stop before attempting to start

Apr 17 21:01:29 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 21:01:29 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 21:01:29 host02 DBFS_/ggdata: Stop - stopped, but still mounted, error
Apr 17 21:01:36 host02 dbfs_client[71957]: OCI_ERROR 3114 - ORA-03114: not connected to ORACLE
Apr 17 21:01:41 host02 dbfs_client[71957]: /FS1/dirdat/ih000247982 Block error RC:-5

Apr 17 21:03:06 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 21:03:06 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 21:03:06 host02 DBFS_/ggdata: Stop - stopped, now not mounted
Apr 17 21:09:19 host02 DBFS_/ggdata: filesystem /ggdata not currently mounted, no need to stop

Apr 17 22:06:16 host02 DBFS_/ggdata: mount-dbfs.sh mounting DBFS at /ggdata from database DBFS
Apr 17 22:06:17 host02 DBFS_/ggdata: ORACLE_SID is DBFS2
Apr 17 22:06:17 host02 DBFS_/ggdata: doing mount /ggdata using SID DBFS2 with wallet now
Apr 17 22:06:18 host02 DBFS_/ggdata: Start -- ONLINE

The script agent log shows the error below (referenced in MOS documentation).
Do you know where the script agent log is located?

2019-04-17 20:56:02.793903 :    AGFW:3274315520: {1:53477:37077} Agent received the message: AGENT_HB[Engine] ID 12293:16017523
2019-04-17 20:56:19.124667 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[check]
2019-04-17 20:56:19.176927 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Checking status now
2019-04-17 20:56:19.176973 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Check -- ONLINE
2019-04-17 20:56:32.794287 :    AGFW:3274315520: {1:53477:37077} Agent received the message: AGENT_HB[Engine] ID 12293:16017529
2019-04-17 20:56:43.312534 :    AGFW:3274315520: {2:37893:29307} Agent received the message: RESOURCE_STOP[dbfs_mount host02 1] ID 4099:16017535
2019-04-17 20:56:43.312574 :    AGFW:3274315520: {2:37893:29307} Preparing STOP command for: dbfs_mount host02 1
2019-04-17 20:56:43.312584 :    AGFW:3274315520: {2:37893:29307} dbfs_mount host02 1 state changed from: ONLINE to: STOPPING
2019-04-17 20:56:43.313088 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[stop]
2019-04-17 20:56:43.365201 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] unmounting DBFS from /ggdata
2019-04-17 20:56:43.415516 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] umounting the filesystem using '/bin/fusermount -u /ggdata'
2019-04-17 20:56:43.415541 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] /bin/fusermount: failed to unmount /ggdata: Device or resource busy
2019-04-17 20:56:43.415552 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] Stop - stopped, but still mounted, error
2019-04-17 20:56:43.415611 :    AGFW:3276416768: {2:37893:29307} Command: stop for resource: dbfs_mount host02 1 completed with status: FAIL
2019-04-17 20:56:43.415929 :CLSFRAME:3449863744:  TM [MultiThread] is changing desired thread # to 3. Current # is 2
2019-04-17 20:56:43.415970 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[check]
2019-04-17 20:56:43.416033 :    AGFW:3274315520: {2:37893:29307} Agent sending reply for: RESOURCE_STOP[dbfs_mount host02 1] ID 4099:16017535
2019-04-17 20:56:43.467939 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Checking status now
2019-04-17 20:56:43.467964 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Check -- ONLINE

ACTION_SCRIPT can be found using crsctl as shown below, if you have not yet located the script agent log.

oracle@host02 ~ $ $GRID_HOME/bin/crsctl stat res -w "TYPE = local_resource" -p | grep mount-dbfs.sh
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh

Here’s a test case to resolve “failed to unmount /ggdata: Device or resource busy”.

My first thought was to use fuser and kill the process.

# fuser -vmM /ggdata/
                     USER        PID ACCESS COMMAND
/ggdata:             root     kernel mount /ggdata
                     mdinh     85368 ..c.. bash
                     mdinh     86702 ..c.. vim
# 

On second thought, that might not be a good idea; a better idea is to let the script handle this if it can.
Let's see what options are available for mount-dbfs.sh:

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh -h
Usage: mount-dbfs.sh { start | stop | check | status | restart | clean | abort | version }

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh version
20160215
oracle@host02 ~ 

Stopping DBFS failed as expected.

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- ONLINE

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh stop
unmounting DBFS from /ggdata
umounting the filesystem using '/bin/fusermount -u /ggdata'
/bin/fusermount: failed to unmount /ggdata: Device or resource busy
Stop - stopped, but still mounted, error
oracle@host02 ~ $

Stop DBFS using the clean option. Notice the PID killed is 40047, which is not the same as the process reported by fuser (mdinh 86702 ..c.. vim).
Note: not all output is displayed, for brevity.

oracle@host02 ~ $ /bin/bash -x /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh clean
+ msg='cleaning up DBFS nicely using (fusermount -u|umount)'
+ '[' info = info ']'
+ /bin/echo cleaning up DBFS nicely using '(fusermount' '-u|umount)'
cleaning up DBFS nicely using (fusermount -u|umount)
+ /bin/logger -t DBFS_/ggdata -p user.info 'cleaning up DBFS nicely using (fusermount -u|umount)'
+ '[' 1 -eq 1 ']'
+ /bin/fusermount -u /ggdata
/bin/fusermount: failed to unmount /ggdata: Device or resource busy
+ /bin/sleep 1
+ FORCE_CLEANUP=0
+ '[' 0 -gt 1 ']'
+ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
+ '[' 0 -eq 0 ']'
+ FORCE_CLEANUP=1
+ '[' 1 -eq 1 ']'
+ logit error 'tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ type=error
+ msg='tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ '[' error = info ']'
+ '[' error = error ']'
+ /bin/echo tried '(fusermount' '-u|umount),' still mounted, now cleaning with '(fusermount' -u '-z|umount' '-f)' and kill
tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill
+ /bin/logger -t DBFS_/ggdata -p user.error 'tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ '[' 1 -eq 1 ']'
================================================================================
+ /bin/fusermount -u -z /ggdata
================================================================================
+ '[' 1 -eq 1 ']'
++ /bin/ps -ef
++ /bin/grep -w /ggdata
++ /bin/grep dbfs_client
++ /bin/grep -v grep
++ /bin/awk '{print $2}'
+ PIDS=40047
+ '[' -n 40047 ']'
================================================================================
+ /bin/kill -9 40047
================================================================================
++ /bin/ps -ef
++ /bin/grep -w /ggdata
++ /bin/grep mount.dbfs
++ /bin/grep -v grep
++ /bin/awk '{print $2}'
+ PIDS=
+ '[' -n '' ']'
+ exit 1

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- OFFLINE

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- OFFLINE

oracle@host02 ~ $ df -h|grep /ggdata
/dev/asm/acfs_vol-177                                  299G  2.8G  297G   1% /ggdata1
dbfs-@DBFS:/                                            60G  1.4G   59G   3% /ggdata

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- ONLINE
oracle@host02 ~ $ 

What PID was killed when running mount-dbfs.sh clean?
It was the dbfs_client process.

mdinh@host02 ~ $ ps -ef|grep dbfs
oracle   34865     1  0 Mar29 ?        00:02:43 oracle+ASM1_asmb_dbfs1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   40047     1  0 Apr22 ?        00:00:10 /u01/app/oracle/product/12.1.0/db_1/bin/dbfs_client /@DBFS -o allow_other,direct_io,wallet /ggdata
oracle   40081     1  0 Apr22 ?        00:00:27 oracle+ASM1_user40069_dbfs1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
mdinh    88748 87565  0 13:30 pts/1    00:00:00 grep --color=auto dbfs
mdinh@host02 ~ $ 

It would have been so much better for mount-dbfs.sh to report this information as part of the kill, versus having the user debug the script and trace the process.
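
A minimal sketch of what that could look like inside the clean logic, assuming the script's MOUNT_POINT variable and logger tag; a hypothetical change, not the shipped script:

### Hypothetical tweak to the clean section: report each dbfs_client process before killing it.
PIDS=$(/bin/ps -ef | /bin/grep -w "$MOUNT_POINT" | /bin/grep dbfs_client | /bin/grep -v grep | /bin/awk '{print $2}')
for pid in $PIDS; do
  ### Log the full command line so the kill explains itself in /var/log/messages.
  /bin/logger -t DBFS_${MOUNT_POINT} -p user.error "killing dbfs_client pid ${pid}: $(ps -o args= -p ${pid})"
  /bin/kill -9 ${pid}
done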

If you have read this far, then it's only fair to provide the location of the script agent log.

$ grep mount-dbfs.sh $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/crsd_scriptagent_oracle.trc | grep "2019-04-17 20:5"

Simplest Automation: Source RAC GI/DB Environment


The conundrum I am facing is whether to prepend or append the environment name to db_name.

For example, if db_name is hawk, then hawkprod, hawkqa, hawkdev, etc…

If the hosts already identify the environment, e.g. prodhost, qahost, devhost, then isn't the hawk database on qahost a QA database?

Understandably, every organization has different requirements and conventions, as there is no one size fits all.

One thing to consider is consistency for easy automation.

So there I was, preparing to patch an 8-node RAC cluster and thinking about how I could make this easier and automated.

The first thought was to have a consistent method to source the RAC GI/DB environment.

Once that is done, create a shell script to apply the patch.

Patching a VM can take hours, and I am too lazy to copy and paste all the commands.

Just run the script and be done with it. I could have scripted the patching to loop through all nodes versus running it individually per host (see the sketch after the commands below).

[root@racnode-dc1-1 ~]# /media/patch/jan2019_patch_28833531.sh
[root@racnode-dc1-2 ~]# /media/patch/jan2019_patch_28833531.sh
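
A minimal sketch of what that loop could look like, assuming passwordless ssh as root and that the same /media/patch mount and patch script exist on every node (node names from this lab):

#!/bin/bash
### Sketch: run the patch script on each node in turn, stopping at the first failure.
for node in racnode-dc1-1 racnode-dc1-2; do
  echo "### Patching ${node}"
  ssh root@${node} /media/patch/jan2019_patch_28833531.sh || { echo "Patch failed on ${node}"; exit 1; }
done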

Have I deployed this in real life? Not to the extreme where everything is scripted, because there are too many inconsistencies across environments.

Framework and demo.

[oracle@racnode-dc2-1 ~]$ df -h /media/patch/
Filesystem      Size  Used Avail Use% Mounted on
media_patch     3.7T  463G  3.2T  13% /media/patch

[oracle@racnode-dc2-1 ~]$ ll /media/patch/*.env
-rwxrwxrwx 1 vagrant vagrant  180 Mar 26 04:49 /media/patch/asm.env
-rwxrwxrwx 1 vagrant vagrant 1295 Sep 28  2018 /media/patch/bp.env
-rwxrwxrwx 1 vagrant vagrant 1113 Feb  2 16:00 /media/patch/ch_gi_prereq.env
-rwxrwxrwx 1 vagrant vagrant  511 Mar 27 15:22 /media/patch/crap.env
-rwxrwxrwx 1 vagrant vagrant  130 Jan 31 23:16 /media/patch/db.env
-rwxrwxrwx 1 vagrant vagrant  105 Feb 20 19:04 /media/patch/ggs.env
-rwxrwxrwx 1 vagrant vagrant  444 Apr 14 15:45 /media/patch/gi.env
-rwxrwxrwx 1 vagrant vagrant  518 Apr 14 00:49 /media/patch/hawk.env
-rwxrwxrwx 1 vagrant vagrant  944 Apr 22  2018 /media/patch/psu.env

### 12.2 GI

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

### 12.2 DB

[oracle@racnode-dc2-1 ~]$ . /media/patch/hawk.env
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.2.0.1/db1
Oracle Instance alive for sid "hawk1"

[oracle@racnode-dc2-1 ~]$ ps -ef|grep pmon
oracle    8756     1  0 14:27 ?        00:00:00 asm_pmon_+ASM1
oracle    9663     1  0 14:28 ?        00:00:00 ora_pmon_hawk1
oracle   14319 12020  0 14:30 pts/0    00:00:00 grep --color=auto pmon

[oracle@racnode-dc2-1 ~]$ ssh racnode-dc2-1
Last login: Sun Apr 28 14:28:59 2019

----------------------------------------
Welcome to racnode-dc2-1
OracleLinux 7.5 x86_64

FQDN: racnode-dc2-1.internal.lab
IP's:
enp0s3: 10.0.2.15
enp0s8: 192.168.7.100
enp0s9: 172.16.7.10

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5708 MB
Kernel:    4.1.12-112.16.4.el7uek.x86_64

----------------------------------------

[oracle@racnode-dc2-1 ~]$ df -h /media/patch/
Filesystem      Size  Used Avail Use% Mounted on
media_patch     3.7T  463G  3.2T  13% /media/patch

[oracle@racnode-dc2-1 ~]$ ll /media/patch/*.env
-rwxrwxrwx 1 vagrant vagrant  180 Mar 26 04:49 /media/patch/asm.env
-rwxrwxrwx 1 vagrant vagrant 1295 Sep 28  2018 /media/patch/bp.env
-rwxrwxrwx 1 vagrant vagrant 1113 Feb  2 16:00 /media/patch/ch_gi_prereq.env
-rwxrwxrwx 1 vagrant vagrant  511 Mar 27 15:22 /media/patch/crap.env
-rwxrwxrwx 1 vagrant vagrant  130 Jan 31 23:16 /media/patch/db.env
-rwxrwxrwx 1 vagrant vagrant  105 Feb 20 19:04 /media/patch/ggs.env
-rwxrwxrwx 1 vagrant vagrant  444 Apr 14 15:45 /media/patch/gi.env
-rwxrwxrwx 1 vagrant vagrant  518 Apr 14 00:49 /media/patch/hawk.env
-rwxrwxrwx 1 vagrant vagrant  944 Apr 22  2018 /media/patch/psu.env

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc2-1 ~]$ . /media/patch/hawk.env
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.2.0.1/db1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc2-1 ~]$

====================================================================================================

[oracle@racnode-dc1-1 ~]$ df -h /media/patch/
Filesystem      Size  Used Avail Use% Mounted on
media_patch     3.7T  463G  3.2T  13% /media/patch

[oracle@racnode-dc1-1 ~]$ ll /media/patch/*.env
-rwxrwxrwx 1 vagrant vagrant  180 Mar 26 04:49 /media/patch/asm.env
-rwxrwxrwx 1 vagrant vagrant 1295 Sep 28  2018 /media/patch/bp.env
-rwxrwxrwx 1 vagrant vagrant 1113 Feb  2 16:00 /media/patch/ch_gi_prereq.env
-rwxrwxrwx 1 vagrant vagrant  511 Mar 27 15:22 /media/patch/crap.env
-rwxrwxrwx 1 vagrant vagrant  130 Jan 31 23:16 /media/patch/db.env
-rwxrwxrwx 1 vagrant vagrant  105 Feb 20 19:04 /media/patch/ggs.env
-rwxrwxrwx 1 vagrant vagrant  444 Apr 14 15:45 /media/patch/gi.env
-rwxrwxrwx 1 vagrant vagrant  518 Apr 14 00:49 /media/patch/hawk.env
-rwxrwxrwx 1 vagrant vagrant  944 Apr 22  2018 /media/patch/psu.env

### 18.3 GI

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"

### 12.1 DB

[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2
Last login: Wed Apr 24 04:23:30 2019

----------------------------------------
Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

[oracle@racnode-dc1-2 ~]$ df -h /media/patch/
Filesystem      Size  Used Avail Use% Mounted on
media_patch     3.7T  463G  3.2T  13% /media/patch

[oracle@racnode-dc1-2 ~]$ ll /media/patch/*.env
-rwxrwxrwx 1 vagrant vagrant  180 Mar 26 04:49 /media/patch/asm.env
-rwxrwxrwx 1 vagrant vagrant 1295 Sep 28  2018 /media/patch/bp.env
-rwxrwxrwx 1 vagrant vagrant 1113 Feb  2 16:00 /media/patch/ch_gi_prereq.env
-rwxrwxrwx 1 vagrant vagrant  511 Mar 27 15:22 /media/patch/crap.env
-rwxrwxrwx 1 vagrant vagrant  130 Jan 31 23:16 /media/patch/db.env
-rwxrwxrwx 1 vagrant vagrant  105 Feb 20 19:04 /media/patch/ggs.env
-rwxrwxrwx 1 vagrant vagrant  444 Apr 14 15:45 /media/patch/gi.env
-rwxrwxrwx 1 vagrant vagrant  518 Apr 14 00:49 /media/patch/hawk.env
-rwxrwxrwx 1 vagrant vagrant  944 Apr 22  2018 /media/patch/psu.env

[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM2"

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
[oracle@racnode-dc1-2 ~]$ 

====================================================================================================

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit
[oracle@racnode-dc1-1 ~]$

====================================================================================================

[oracle@racnode-dc1-2 ~]$ cat /media/patch/gi.env
### Michael Dinh : Mar 26, 2019
### Source RAC GI environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset ORACLE_UNQNAME
ORAENV_ASK=NO
h=$(hostname -s)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=+ASM${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
export GRID_HOME=$ORACLE_HOME
env|egrep 'ORA|GRID'
sysresv|tail -1
[oracle@racnode-dc1-2 ~]$

====================================================================================================

[oracle@racnode-dc1-2 ~]$ cat /media/patch/hawk.env
### Michael Dinh : Mar 26, 2019
### Source RAC DB environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset GRID_HOME
h=$(hostname -s)
### Extract filename without extension (.env)
ORAENV_ASK=NO
export ORACLE_UNQNAME=$(basename $BASH_SOURCE .env)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=$ORACLE_UNQNAME${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
env|egrep 'ORA|GRID'
sysresv|tail -1
[oracle@racnode-dc1-2 ~]$
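
The only moving part in both env files is deriving ORACLE_SID from the last character of the short hostname. A quick illustration of that parameter expansion (example values):

h=racnode-dc1-2                ### example short hostname
echo ${#h}                     ### 13 -> length of the string
echo "${h:${#h} - 1}"          ### 2  -> last character
echo "+ASM${h:${#h} - 1}"      ### +ASM2
echo "hawk${h:${#h} - 1}"      ### hawk2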

First Time Learning Screen


So there I was, learning screen for the first time and not liking the inability to scroll within a screen session.

After a lot of googling, I believe I have found the solution, as shown in .screenrc below.

DEMO:

Screen customization:

[oracle@racnode-dc1-1 ~]$ cat .screenrc
# Set scrollback buffer to 100000
defscrollback 100000

# Customize the status line
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %m-%d %{W}%c %{g}]'

# Enable mouse scrolling and scroll bar history scrolling
termcapinfo xterm* ti@:te@
[oracle@racnode-dc1-1 ~]$

Start a screen session named testing and log to screenlog.0.
I was informed screenlog.0 would be overwritten, but in my test case it was not.

[oracle@racnode-dc1-1 ~]$ ls -l
total 0
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ screen -SL testing

In the screen session:


### Line below is from customization and blank lines above removed
[ racnode-dc1-1 ][                                                                                               (0*$(L)bash)                                                                                                ][ 04-30 21:32 ]

[oracle@racnode-dc1-1 ~]$ echo $TERM
screen
[oracle@racnode-dc1-1 ~]$ screen -ls
There is a screen on:
        17983.testing   (Attached)
1 Socket in /var/run/screen/S-oracle.

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$ exit
exit

[screen is terminating]

Review screenlog.0

[oracle@racnode-dc1-1 ~]$ ls -l
total 4
-rw-r--r-- 1 oracle oinstall 1774 Apr 30 21:33 screenlog.0

[oracle@racnode-dc1-1 ~]$ cat screenlog.0
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ echo $TERM
screen
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ screen -ls
There is a screen on:
        17983.testing   (Attached)
1 Socket in /var/run/screen/S-oracle.

oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ exit
exit
[oracle@racnode-dc1-1 ~]$

Start a screen session named testing2 and log to the same screenlog.0.

[oracle@racnode-dc1-1 ~]$ screen -SL testing2

In the screen session:

### Line below is from customization and blank lines above removed
[ racnode-dc1-1 ][                                                                                               (0*$(L)bash)                                                                                                ][ 04-30 21:35 ]

[oracle@racnode-dc1-1 ~]$ echo $TERM
screen
[oracle@racnode-dc1-1 ~]$ screen -ls
There is a screen on:
        19256.testing2  (Attached)
1 Socket in /var/run/screen/S-oracle.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$ exit
exit

[screen is terminating]
[oracle@racnode-dc1-1 ~]$ screen -ls
No Sockets found in /var/run/screen/S-oracle.

Review screenlog.0.
Notice it contains the contents from both testing and testing2 (see the note at the end of this post on keeping a separate log file per session).

[oracle@racnode-dc1-1 ~]$ ll screenlog.0
-rw-r--r-- 1 oracle oinstall 3451 Apr 30 21:35 screenlog.0

[oracle@racnode-dc1-1 ~]$ cat screenlog.0
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ echo $TERM
screen
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ screen -ls
There is a screen on:
        17983.testing   (Attached)
1 Socket in /var/run/screen/S-oracle.

oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ exit
exit
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ echo $TERM
screen
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ screen -ls
There is a screen on:
        19256.testing2  (Attached)
1 Socket in /var/run/screen/S-oracle.

oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
oracle@racnode-dc1-1:~[oracle@racnode-dc1-1 ~]$ exit
exit
[oracle@racnode-dc1-1 ~]$
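
Since both sessions ended up in the same screenlog.0, one option is to give each session its own log file. A sketch, assuming a screen release new enough to support -Logfile (4.06.00 or later):

screen -L -Logfile testing_$(date +%Y%m%d_%H%M%S).log  -S testing
screen -L -Logfile testing2_$(date +%Y%m%d_%H%M%S).log -S testing2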

VirtualBox Cannot register the hard disk because a hard disk with UUID already exists


Here's an issue I have faced multiple times and have finally been able to resolve in a reasonable way.

For the moment, it seems to be working, and only time will tell, as the other prevalent solutions on the web did not work for me.

VBox version:

D:\VirtualBox>VBoxManage -version
6.0.4r128413

D:\VirtualBox>

Starting the VM failed after shutdown:

[oracle@racnode-dc2-1 dbca]$ logout
[vagrant@racnode-dc2-1 ~]$ logout
Connection to 127.0.0.1 closed.

dinh@CMWPHV1 MINGW64 /d/Vagrant2/vagrant-vbox-rac (master)
$ vagrant halt

==> racnode-dc2-1: Attempting graceful shutdown of VM...
==> racnode-dc2-1: Forcing shutdown of VM...
==> racnode-dc2-2: Unpausing the VM...
==> racnode-dc2-2: Attempting graceful shutdown of VM...
==> racnode-dc2-2: Forcing shutdown of VM...

dinh@CMWPHV1 MINGW64 /d/Vagrant2/vagrant-vbox-rac (master)
$ vagrant up
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["showvminfo", "1d72cea4-f728-44d4-b69e-b2dd45064969"]

Stderr: VBoxManage.exe: error: Failed to create the VirtualBox object!
VBoxManage.exe: error: Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists
VBoxManage.exe: error: Details: code E_INVALIDARG (0x80070057), component VirtualBoxWrap, interface IVirtualBox

dinh@CMWPHV1 MINGW64 /d/Vagrant2/vagrant-vbox-rac (master)
$ vagrant status
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["showvminfo", "1d72cea4-f728-44d4-b69e-b2dd45064969"]

Stderr: VBoxManage.exe: error: Failed to create the VirtualBox object!
VBoxManage.exe: error: Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists
VBoxManage.exe: error: Details: code E_INVALIDARG (0x80070057), component VirtualBoxWrap, interface IVirtualBox

dinh@CMWPHV1 MINGW64 /d/Vagrant2/vagrant-vbox-rac (master)
$ 

VBoxManage internalcommands sethduuid / clonevdi failed:

D:\VirtualBox>VBoxManage.exe internalcommands sethduuid "D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk"
VBoxManage.exe: error: Failed to create the VirtualBox object!
VBoxManage.exe: error: Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35
-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists
VBoxManage.exe: error: Details: code E_INVALIDARG (0x80070057), component VirtualBoxWrap, interface IVirtualBox

D:\VirtualBox>VBoxManage clonevdi "D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk" "D:\VirtualBoxVM\vbox-rac-dc
2\racnode-dc2-2\packer-ol75-disk002.vmdk"
VBoxManage.exe: error: Failed to create the VirtualBox object!
VBoxManage.exe: error: Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35
-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c
748d54e-3cd2-4087-82dd-65324f4365f7} already exists
VBoxManage.exe: error: Details: code E_INVALIDARG (0x80070057), component VirtualBoxWrap, interface IVirtualBox

D:\VirtualBox>

Check that UUID c748d54e-3cd2-4087-82dd-65324f4365f7 exists in VirtualBox.xml and VirtualBox.xml-prev:

dinh@CMWPHV1 MINGW64 ~/.VirtualBox
$ grep c748d54e-3cd2-4087-82dd-65324f4365f7 *
VBoxSVC.log:00:00:00.390000          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.1:00:00:00.376400          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.2:00:00:00.391000          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.3:00:00:00.390000          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.4:00:00:00.374400          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.5:00:00:00.374400          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.6:00:00:00.436800          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VirtualBox.xml:        <HardDisk uuid="{c748d54e-3cd2-4087-82dd-65324f4365f7}" location="D:/VirtualBoxVM/vbox-rac-dc2/racnode-dc2-2/packer-ol75-disk001.vmdk" format="VMDK" type="Normal">
VirtualBox.xml-prev:        <HardDisk uuid="{c748d54e-3cd2-4087-82dd-65324f4365f7}" location="D:/VirtualBoxVM/vbox-rac-dc2/racnode-dc2-2/packer-ol75-disk001.vmdk" format="VMDK" type="Normal">

dinh@CMWPHV1 MINGW64 ~/.VirtualBox
$

Remove the stale entry from VirtualBox.xml (a sketch of the edit follows), then verify the UUID is gone from the XML files:
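
The entry can be deleted with a text editor or scripted; a minimal sketch, assuming VirtualBox (and its VBoxSVC process) is shut down so the file is not rewritten, and using the UUID from the error above:

# back up the config first
cp ~/.VirtualBox/VirtualBox.xml ~/.VirtualBox/VirtualBox.xml.bak
# drop the line carrying the stale UUID; if the <HardDisk> element has child entries,
# remove the whole <HardDisk>...</HardDisk> block instead of a single line
sed -i '/c748d54e-3cd2-4087-82dd-65324f4365f7/d' ~/.VirtualBox/VirtualBox.xml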

dinh@CMWPHV1 MINGW64 ~/.VirtualBox
$ grep c748d54e-3cd2-4087-82dd-65324f4365f7 *
VBoxSVC.log.2:00:00:00.391000          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.3:00:00:00.390000          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.4:00:00:00.376400          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.5:00:00:00.391000          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.6:00:00:00.390000          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.7:00:00:00.374400          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.8:00:00:00.374400          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0
VBoxSVC.log.9:00:00:00.436800          ERROR [COM]: aRC=E_INVALIDARG (0x80070057) aIID={d0a0163f-e254-4e5b-a1f2-011cf991c38d} aComponent={VirtualBoxWrap} aText={Cannot register the hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' {b507fc35-1c3c-46ab-9e0e-91f192c5b935} because a hard disk 'D:\VirtualBoxVM\vbox-rac-dc2\racnode-dc2-2\packer-ol75-disk001.vmdk' with UUID {c748d54e-3cd2-4087-82dd-65324f4365f7} already exists}, preserve=false aResultDetail=0

dinh@CMWPHV1 MINGW64 ~/.VirtualBox
$

Start VM:

dinh@CMWPHV1 MINGW64 /d/Vagrant2/vagrant-vbox-rac (master)
$ vagrant up
Bringing machine 'racnode-dc2-2' up with 'virtualbox' provider...
Bringing machine 'racnode-dc2-1' up with 'virtualbox' provider...

dinh@CMWPHV1 MINGW64 /d/Vagrant2/vagrant-vbox-rac (master)
$ vagrant status
Current machine states:

racnode-dc2-2             running (virtualbox)
racnode-dc2-1             running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

dinh@CMWPHV1 MINGW64 /d/Vagrant2/vagrant-vbox-rac (master)
$ vagrant ssh racnode-dc2-1
Last login: Wed May  1 14:40:57 2019 from 192.168.7.100

----------------------------------------
Welcome to racnode-dc2-1
OracleLinux 7.5 x86_64

FQDN: racnode-dc2-1.internal.lab
IP's:
enp0s3: 10.0.2.15
enp0s8: 192.168.7.100
enp0s9: 172.16.7.10

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5708 MB
Kernel:    4.1.12-112.16.4.el7uek.x86_64

----------------------------------------
[vagrant@racnode-dc2-1 ~]$ sudo su - oracle
Last login: Wed May  1 14:42:47 CEST 2019
[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc2-1 ~]$
[oracle@racnode-dc2-1 ~]$ ps -ef|grep [p]mon
oracle   17637     1  0 14:42 ?        00:00:00 asm_pmon_+ASM1
oracle   17987     1  0 14:43 ?        00:00:00 ora_pmon_hawk1
[oracle@racnode-dc2-1 ~]$ ssh racnode-dc2-2
Last login: Wed May  1 14:42:24 2019

----------------------------------------
Welcome to racnode-dc2-2
OracleLinux 7.5 x86_64

FQDN: racnode-dc2-2.internal.lab
IP's:
enp0s3: 10.0.2.15
enp0s8: 192.168.7.101
enp0s9: 172.16.7.11

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5708 MB
Kernel:    4.1.12-112.16.4.el7uek.x86_64

----------------------------------------
[oracle@racnode-dc2-2 ~]$ ps -ef|grep pmon
oracle   16172     1  0 14:41 ?        00:00:00 asm_pmon_+ASM2
oracle   17229     1  0 14:41 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   17368     1  0 14:41 ?        00:00:00 ora_pmon_hawk2
oracle   25901 25533  0 14:46 pts/0    00:00:00 grep --color=auto pmon
[oracle@racnode-dc2-2 ~]$

Check the packer-ol75-disk001.vmdk entries in VirtualBox.xml:

dinh@CMWPHV1 MINGW64 ~/.VirtualBox
$ grep packer-ol75-disk001.vmdk VirtualBox.xml
        <HardDisk uuid="{b507fc35-1c3c-46ab-9e0e-91f192c5b935}" location="D:/VirtualBoxVM/vbox-rac-dc2/racnode-dc2-2/packer-ol75-disk001.vmdk" format="VMDK" type="Normal"/>
        <HardDisk uuid="{1e5667ab-69a7-49fc-aa76-7b6249aba862}" location="D:/VirtualBoxVM/vbox-rac-dc2/racnode-dc2-1/packer-ol75-disk001.vmdk" format="VMDK" type="Normal"/>

dinh@CMWPHV1 MINGW64 ~/.VirtualBox
$

Before and after:

dinh@CMWPHV1 MINGW64 /d/VirtualBoxVM/vbox-rac-dc2/racnode-dc2-2
$ ll
total 4566352
drwxr-xr-x 1 dinh 197121          0 Apr 30 20:04 Logs/
-rw-r--r-- 1 dinh 197121 4675928064 May  1 07:54 opacker-ol75-disk001.vmdk
-rw-r--r-- 1 dinh 197121       6433 May  1 07:54 racnode-dc2-2.vbox
-rw-r--r-- 1 dinh 197121       6433 Apr 30 20:04 racnode-dc2-2.vbox-prev

dinh@CMWPHV1 MINGW64 /d/VirtualBoxVM/vbox-rac-dc2/racnode-dc2-2
$ ll
total 4768592
drwxr-xr-x 1 dinh 197121          0 May  1 08:38 Logs/
-rw-r--r-- 1 dinh 197121 4883021824 May  1 08:43 packer-ol75-disk001.vmdk
-rw-r--r-- 1 dinh 197121       6433 May  1 08:38 racnode-dc2-2.vbox
-rw-r--r-- 1 dinh 197121       6433 May  1 08:38 racnode-dc2-2.vbox-prev

dinh@CMWPHV1 MINGW64 /d/VirtualBoxVM/vbox-rac-dc2/racnode-dc2-2
$

GRID Out Of Place (OOP) Rollback Disaster

Now I understand the hesitation to use Oracle's new features, especially anything "auto".

It may just be simpler and less stressful to perform the task manually, keeping control and knowing what is being executed and validated.

GRID Out Of Place (OOP) patching completed successfully for 18.6.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of inventory after patching.

+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Running cluvfy was successful too.

[oracle@racnode-dc1-1 ~]$ cluvfy stage -post crsinst -n racnode-dc1-1,racnode-dc1-2 -verbose

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 30, 2019 8:17:49 PM
CVU home:                     /u01/18.3.0.0/grid_2/
User:                         oracle
[oracle@racnode-dc1-1 ~]$

GRID OOP Rollback Patching completed successfully for node1.

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-1 ~]#
[root@racnode-dc1-1 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:06:47 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-06-50AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-08-00AM.log
The id for this session is R47N

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Fri May  3 01:14:25 2019
Time taken to complete the session 7 minutes, 38 seconds

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@racnode-dc1-1 ~]# /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
 10486 asm_pmon_+asm1       /u01/18.3.0.0/grid/
 10833 apx_pmon_+apx1       /u01/18.3.0.0/grid/

[root@racnode-dc1-1 ~]# cat /etc/oratab
#Backup file is  /u01/app/oracle/12.1.0.1/db1/srvm/admin/oratab.bak.racnode-dc1-1 line added by Agent
#+ASM1:/u01/18.3.0.0/grid:N
hawk1:/u01/app/oracle/12.1.0.1/db1:N
hawk:/u01/app/oracle/12.1.0.1/db1:N             # line added by Agent
[root@racnode-dc1-1 ~]#

GRID OOP Rollback Patching completed successfully for node2.

[root@racnode-dc1-2 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-2 ~]#
[root@racnode-dc1-2 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-2 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:21:39 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-21-41AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-22-46AM.log
The id for this session is 9RAT

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-2
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1


OPatchauto session completed at Fri May  3 01:40:51 2019
Time taken to complete the session 19 minutes, 12 seconds
[root@racnode-dc1-2 ~]#

GRID OOP Rollback completed successfully for 18.5.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of inventory after rollback.

+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Validation shows the database is OFFLINE:

+ crsctl stat res -w '((TARGET != ONLINE) or (STATE != ONLINE)' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            IDLE,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STABLE
      2        ONLINE  OFFLINE                               Instance Shutdown,STABLE

Starting the database FAILED.

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk2"

[oracle@racnode-dc1-2 ~]$ srvctl status database -d $ORACLE_UNQNAME -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is not running on node racnode-dc1-2

[oracle@racnode-dc1-2 ~]$ srvctl start database -d $ORACLE_UNQNAME
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
[oracle@racnode-dc1-2 ~]$


[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk1"

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
[oracle@racnode-dc1-1 ~]$

Incorrect permissions on the oracle binary in the Grid home were the cause.
Changing permissions on $GRID_HOME/bin/oracle (chmod 6751 $GRID_HOME/bin/oracle), then stopping and starting CRS, resolved the failure.
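
Condensed, the fix on each node looks like this (a sketch of the transcripts that follow; gi.env sets GRID_HOME as shown below):

# as root on each node
. /media/patch/gi.env
chmod 6751 $GRID_HOME/bin/oracle
ls -lhrt $GRID_HOME/bin/oracle    # expect -rwsr-s--x oracle oinstall
crsctl stop crs
crsctl start crs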

[oracle@racnode-dc1-1 dbs]$ ls -lhrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle dba 314M Apr 20 16:06 /u01/app/oracle/12.1.0.1/db1/bin/oracle

[oracle@racnode-dc1-1 dbs]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ cd /u01/18.3.0.0/grid/bin/
[oracle@racnode-dc1-1 bin]$ chmod 6751 oracle
[oracle@racnode-dc1-1 bin]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
[root@racnode-dc1-1 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# chmod 6751 $GRID_HOME/bin/oracle
[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# crsctl start crs
[root@racnode-dc1-1 ~]# crsctl start crs

Reference: RAC Database Can’t Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)

Updating vagrant-boxes/OracleRAC

I have been playing with oracle/vagrant-boxes/OracleRAC and was finally able to complete an 18c RAC installation with it.

Honestly, I am still fond of Mikael Sandström's oravirt vagrant-boxes, but I was having some trouble with installations and thought I would try something new.

Here are the updates performed for oracle/vagrant-boxes/OracleRAC on all nodes, with only one node shown as an example.

/etc/oratab is not updated:

[oracle@ol7-183-node2 ~]$ ps -ef|grep pmon
grid      1155     1  0 14:00 ?        00:00:00 asm_pmon_+ASM2
oracle   18223 18079  0 14:43 pts/0    00:00:00 grep --color=auto pmon
oracle   31653     1  0 14:29 ?        00:00:00 ora_pmon_hawk2

[oracle@ol7-183-node2 ~]$ tail /etc/oratab
#   $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#

Update /etc/oratab [my framework works :=)]
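
If you are not using a framework, a minimal manual sketch that produces the entries shown below (run as root on node 2; the home paths are the ones from this install):

echo "+ASM2:/u01/app/18.0.0.0/grid:N" >> /etc/oratab
echo "hawk2:/u01/app/oracle/product/18.0.0.0/dbhome_1:N" >> /etc/oratab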

[oracle@ol7-183-node2 ~]$ cat /etc/oratab
+ASM2:/u01/app/18.0.0.0/grid:N
hawk2:/u01/app/oracle/product/18.0.0.0/dbhome_1:N

[oracle@ol7-183-node2 ~]$ /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance not alive for sid "+ASM2"

[oracle@ol7-183-node2 ~]$ /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
Oracle Instance alive for sid "hawk2"
[oracle@ol7-183-node2 ~]$

sudo for grid/oracle is not enabled:

[oracle@ol7-183-node2 ~]$ sudo /media/patch/findhomes.sh
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for oracle:
oracle is not in the sudoers file.  This incident will be reported.
[oracle@ol7-183-node2 ~]$ exit

Enable sudo for grid/oracle (shown for oracle as an example; do the same for grid):

[vagrant@ol7-183-node2 ~]$ sudo su -
[root@ol7-183-node2 ~]# visudo
[root@ol7-183-node2 ~]# grep oracle /etc/sudoers
oracle  ALL=(ALL)       ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
[root@ol7-183-node2 ~]# logout

[vagrant@ol7-183-node2 ~]$ sudo su - oracle
Last login: Sat May  4 14:43:46 -04 2019 on pts/0

[oracle@ol7-183-node2 ~]$ sudo /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
  1155 asm_pmon_+asm2       /u01/app/18.0.0.0/grid/
 31653 ora_pmon_hawk2       /u01/app/oracle/product/18.0.0.0/dbhome_1/
[oracle@ol7-183-node2 ~]$

Login banner:

dinh@CMWPHV1 MINGW64 /c/vagrant-boxes/OracleRAC (master)
$ vagrant ssh node2
Last login: Sat May  4 14:43:40 2019 from 10.0.2.2

Welcome to Oracle Linux Server release 7.6 (GNU/Linux 4.14.35-1844.1.3.el7uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

    * /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

    * http://yum.oracle.com/

[vagrant@ol7-183-node2 ~]$

Remove login banner:

[root@ol7-183-node2 ~]# cp -v /etc/motd /etc/motd.bak
‘/etc/motd’ -> ‘/etc/motd.bak’
[root@ol7-183-node2 ~]# cat /dev/null > /etc/motd
[root@ol7-183-node2 ~]# logout
[vagrant@ol7-183-node2 ~]$ logout
Connection to 127.0.0.1 closed.

dinh@CMWPHV1 MINGW64 /c/vagrant-boxes/OracleRAC (master)
$ vagrant ssh node2
Last login: Sat May  4 15:00:06 2019 from 10.0.2.2
[vagrant@ol7-183-node2 ~]$

Mandatory GIMR is not installed:

    node1: -----------------------------------------------------------------
    node1: INFO: 2019-05-04 14:01:02: Make GI config command
    node1: -----------------------------------------------------------------
    node1: -----------------------------------------------------------------
    node1: INFO: 2019-05-04 14:01:02: Grid Infrastructure configuration as 'RAC'
    node1: INFO: 2019-05-04 14:01:02: - ASM library   : ASMLIB
    node1: INFO: 2019-05-04 14:01:02: - without MGMTDB: true
    node1: -----------------------------------------------------------------
    node1: Launching Oracle Grid Infrastructure Setup Wizard...

[oracle@ol7-183-node1 ~]$ ps -ef|grep pmon
grid      7294     1  0 13:53 ?        00:00:00 asm_pmon_+ASM1
oracle   10986     1  0 14:29 ?        00:00:00 ora_pmon_hawk1
oracle   28642 28586  0 15:12 pts/0    00:00:00 grep --color=auto pmon

[oracle@ol7-183-node1 ~]$ ssh ol7-183-node2
Last login: Sat May  4 14:48:20 2019
[oracle@ol7-183-node2 ~]$ ps -ef|grep pmon
grid      1155     1  0 14:00 ?        00:00:00 asm_pmon_+ASM2
oracle   29820 29711  0 15:12 pts/0    00:00:00 grep --color=auto pmon
oracle   31653     1  0 14:29 ?        00:00:00 ora_pmon_hawk2
[oracle@ol7-183-node2 ~]$

Create GIMR:
How to Move/Recreate GI Management Repository (GIMR / MGMTDB) to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)
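
Before recreating it, the current state can be confirmed with srvctl (a quick sketch; the same checks appear in the mdbutil debug output further below):

srvctl status mgmtlsnr
srvctl status mgmtdb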

[grid@ol7-183-node1 ~]$ ps -ef|grep pmon
grid      2286 27832  0 16:35 pts/0    00:00:00 grep --color=auto pmon
grid      7294     1  0 13:53 ?        00:00:00 asm_pmon_+ASM1
oracle   10986     1  0 14:29 ?        00:00:00 ora_pmon_hawk1

[grid@ol7-183-node1 ~]$ ll /tmp/mdbutil.*
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:02 /tmp/mdbutil.pl

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --status
mdbutil.pl version : 1.95
2019-05-04 16:35:44: I Checking CHM status...
2019-05-04 16:35:46: I Listener MGMTLSNR is configured and running on ol7-183-node1
2019-05-04 16:35:49: W MGMTDB is not configured on ol7-183-node1!
2019-05-04 16:35:49: W Cluster Health Monitor (CHM) is configured and not running on ol7-183-node1!

[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base remains unchanged with value /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     65520    63108                0           63108              0             Y  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304     16368    15260             4092            5584              0             N  RECO/

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --addmdb --target=+DATA -debug
mdbutil.pl version : 1.95
2019-05-04 16:36:57: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status diskgroup -g DATA
2019-05-04 16:36:58: D Exit code: 0
2019-05-04 16:36:58: D Output of last command execution:
Disk Group DATA is running on ol7-183-node1,ol7-183-node2
2019-05-04 16:36:58: I Starting To Configure MGMTDB at +DATA...
2019-05-04 16:36:58: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtlsnr
2019-05-04 16:36:59: D Exit code: 0
2019-05-04 16:36:59: D Output of last command execution:
Listener MGMTLSNR is enabled
2019-05-04 16:36:59: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtdb
2019-05-04 16:37:00: D Exit code: 1
2019-05-04 16:37:00: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2019-05-04 16:37:00: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtdb
2019-05-04 16:37:01: D Exit code: 1
2019-05-04 16:37:01: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2019-05-04 16:37:01: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl stop mgmtlsnr
2019-05-04 16:37:05: D Exit code: 0
2019-05-04 16:37:05: D Output of last command execution:
2019-05-04 16:37:05: D Executing: /u01/app/18.0.0.0/grid/bin/crsctl query crs activeversion
2019-05-04 16:37:05: D Exit code: 0
2019-05-04 16:37:05: D Output of last command execution:
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
2019-05-04 16:37:05: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl enable qosmserver
2019-05-04 16:37:06: D Exit code: 2
2019-05-04 16:37:06: D Output of last command execution:
PRKF-1321 : QoS Management Server is already enabled.
2019-05-04 16:37:06: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl start qosmserver
2019-05-04 16:37:07: D Exit code: 2
2019-05-04 16:37:07: D Output of last command execution:
PRCC-1014 : qosmserver was already running
2019-05-04 16:37:07: I Container database creation in progress... for GI 18.0.0.0.0
2019-05-04 16:37:07: D Executing: /u01/app/18.0.0.0/grid/bin/dbca  -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATA -datafileJarLocation /u01/app/18.0.0.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
2019-05-04 16:55:03: D Exit code: 0
2019-05-04 16:55:03: D Output of last command execution:
Prepare for db operation
2019-05-04 16:55:03: I Plugable database creation in progress...
2019-05-04 16:55:03: D Executing: /u01/app/18.0.0.0/grid/bin/mgmtca -local
2019-05-04 16:59:32: D Exit code: 0
2019-05-04 16:59:32: D Output of last command execution:
2019-05-04 16:59:32: D Executing: scp /tmp/mdbutil.pl ol7-183-node1:/tmp/
2019-05-04 16:59:33: D Exit code: 0
2019-05-04 16:59:33: D Output of last command execution:
2019-05-04 16:59:33: I Executing "/tmp/mdbutil.pl --addchm" on ol7-183-node1 as root to configure CHM.
2019-05-04 16:59:33: D Executing: ssh root@ol7-183-node1 "/tmp/mdbutil.pl --addchm"
root@ol7-183-node1's password:
2019-05-04 16:59:42: D Exit code: 1
2019-05-04 16:59:42: D Output of last command execution:
mdbutil.pl version : 1.95
2019-05-04 16:59:42: W Not able to execute "/tmp/mdbutil.pl --addchm" on ol7-183-node1 as root to configure CHM.
2019-05-04 16:59:42: D Executing: scp /tmp/mdbutil.pl ol7-183-node2:/tmp/
2019-05-04 16:59:43: D Exit code: 0
2019-05-04 16:59:43: D Output of last command execution:
2019-05-04 16:59:43: I Executing "/tmp/mdbutil.pl --addchm" on ol7-183-node2 as root to configure CHM.
2019-05-04 16:59:43: D Executing: ssh root@ol7-183-node2 "/tmp/mdbutil.pl --addchm"
root@ol7-183-node2's password:
2019-05-04 16:59:51: D Exit code: 1
2019-05-04 16:59:51: D Output of last command execution:
mdbutil.pl version : 1.95
2019-05-04 16:59:51: W Not able to execute "/tmp/mdbutil.pl --addchm" on ol7-183-node2 as root to configure CHM.
2019-05-04 16:59:51: I MGMTDB & CHM configuration done!

[root@ol7-183-node1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[root@ol7-183-node1 ~]# crsctl start res ora.crf -init
CRS-2501: Resource 'ora.crf' is disabled
CRS-4000: Command Start failed, or completed with errors.

[root@ol7-183-node1 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init

[root@ol7-183-node1 ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'ol7-183-node1'
CRS-2676: Start of 'ora.crf' on 'ol7-183-node1' succeeded

[root@ol7-183-node1 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on ol7-183-node1

[root@ol7-183-node1 ~]# ll /tmp/mdbutil.pl
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:59 /tmp/mdbutil.pl
[root@ol7-183-node1 ~]# /tmp/mdbutil.pl --addchm
mdbutil.pl version : 1.95
2019-05-04 17:02:54: I Starting To Configure CHM...
2019-05-04 17:02:55: I CHM has already been configured!
2019-05-04 17:02:57: I CHM Configure Successfully Completed!
[root@ol7-183-node1 ~]#

[root@ol7-183-node1 ~]# ssh ol7-183-node2
Last login: Sat May  4 16:28:28 2019
[root@ol7-183-node2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM2"
[root@ol7-183-node2 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=OFFLINE
STATE=OFFLINE

[root@ol7-183-node2 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@ol7-183-node2 ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'ol7-183-node2'
CRS-2676: Start of 'ora.crf' on 'ol7-183-node2' succeeded
[root@ol7-183-node2 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on ol7-183-node2

[root@ol7-183-node2 ~]# ll /tmp/mdbutil.pl
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:59 /tmp/mdbutil.pl
[root@ol7-183-node2 ~]# /tmp/mdbutil.pl --addchm
mdbutil.pl version : 1.95
2019-05-04 17:04:41: I Starting To Configure CHM...
2019-05-04 17:04:41: I CHM has already been configured!
2019-05-04 17:04:44: I CHM Configure Successfully Completed!

[root@ol7-183-node2 ~]# logout
Connection to ol7-183-node2 closed.
[root@ol7-183-node1 ~]# logout

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --status
mdbutil.pl version : 1.95
2019-05-04 17:04:54: I Checking CHM status...
2019-05-04 17:04:56: I Listener MGMTLSNR is configured and running on ol7-183-node1
2019-05-04 17:04:59: I Database MGMTDB is configured and running on ol7-183-node1
2019-05-04 17:05:00: I Cluster Health Monitor (CHM) is configured and running
--------------------------------------------------------------------------------
CHM Repository Path = +DATA/_MGMTDB/881717C3357B4146E0536538A8C05D2C/DATAFILE/sysmgmtdata.291.1007398657
MGMTDB space used on DG +DATA = 23628 Mb
--------------------------------------------------------------------------------
[grid@ol7-183-node1 ~]$

Due to role separation, the lspatches script is broken (OPatch cannot write its history file in the database home); the failing run and the fix follow.

[grid@ol7-183-node2 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches

====================================================================================================
OPatch could not create/open history file for writing.
====================================================================================================

27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[grid@ol7-183-node2 ~]$

====================================================================================================

[root@ol7-183-node2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM2"
[root@ol7-183-node2 ~]# chmod 775 -R $ORACLE_HOME/cfgtoollogs

[root@ol7-183-node2 ~]# . /media/patch/hawk.env
The Oracle base has been changed from /u01/app/grid to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
Oracle Instance alive for sid "hawk2"
[root@ol7-183-node2 ~]# chmod 775 -R $ORACLE_HOME/cfgtoollogs

====================================================================================================

[vagrant@ol7-183-node2 ~]$ sudo su - grid /media/patch/lspatches.sh
Last login: Sat May  4 18:16:38 -04 2019
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[vagrant@ol7-183-node2 ~]$ sudo su - oracle /media/patch/lspatches.sh
Last login: Sat May  4 18:15:18 -04 2019 on pts/0
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[vagrant@ol7-183-node2 ~]$

I will update this post as I progress.

What’s My Cluster Configuration

[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ crsctl get cluster configuration
Name                : ol7-183-cluster
Configuration       : Cluster
Class               : Standalone Cluster
Type                : flex
The cluster is not extended.
--------------------------------------------------------------------------------
        MEMBER CLUSTER INFORMATION

      Name       Version        GUID                       Deployed Deconfigured
================================================================================
================================================================================

[grid@ol7-183-node1 ~]$ olsnodes -s -a -t
ol7-183-node1   Active  Hub     Unpinned
ol7-183-node2   Active  Hub     Unpinned

[grid@ol7-183-node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[grid@ol7-183-node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[grid@ol7-183-node1 ~]$

Notes: Troubleshooting GRID opatchauto and Sample GRID OOP Log

Troubleshooting opatchauto Issues in Grid Infrastructure Environment (Doc ID 2467266.1)

Failure during pre-patch execution
If there is any failure in the pre-patch step, verify the logs under <oracle base>/crsdata/<hostname>/crsconfig/crspatch_<hostname>_<timestamp>.log.

Failure during patching execution
If there is any failure during patching, review the opatch execution logs under the corresponding <oracle home>/cfgtoollogs/opatchauto location.

Failure during post-patch execution
If there is any failure in the post-patch step, review the logs under <oracle base>/crsdata/<hostname>/crsconfig/crspatch_<hostname>_<timestamp>.log.

Generally, the error is seen while starting the Clusterware. In that situation, troubleshoot Grid Infrastructure issues by referring to Doc ID 1050908.1.
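
For quick reference, on the environments used in this post those locations map to paths like the following (a sketch; hostnames and homes will differ):

# pre/post-patch (crspatch) logs
ls -ltr /u01/app/oracle/crsdata/$(hostname -s)/crsconfig/crspatch_*.log
# opatchauto execution logs under the Grid home
ls -ltr $GRID_HOME/cfgtoollogs/opatchauto/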

How to debug opatchauto failures?
# export OPATCH_DEBUG=true
# opatchauto apply <patch location> -loglevel FINEST
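
Using the variables from the 12.2 run below, a debug invocation might look like this (a sketch, not taken from an actual session):

export OPATCH_DEBUG=true
export PATCH_HOME=/u01/stage/patch/Apr2019/29301687
$GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -logLevel FINEST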

ACTUAL: Grid_Infrastructure_Out_of_Place_12.2 (GI/DB SAME version)

[root@racnode-dc2-1 ~]# export PATCH_HOME=/u01/stage/patch/Apr2019/29301687
[root@racnode-dc2-1 ~]# $GRID_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.17

[root@racnode-dc2-1 ~]# $GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -prepare-clone -logLevel FINEST

System initialization log file is /u01/app/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-04-29_05-24-36PM.log.
Session log file is /u01/app/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-04-29_05-26-26PM.log
Prepatch operation log file location: /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/crspatch_racnode-dc2-1_2019-04-29_05-33-30PM.log

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc2-1
RAC Home:/u01/app/oracle/12.2.0.1/db1
Version:12.2.0.1.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/stage/patch/Apr2019/29301687/29301676
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/stage/patch/Apr2019/29301687/26839277
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/stage/patch/Apr2019/29301687/28566910
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/stage/patch/Apr2019/29301687/29314339
Log: /u01/app/oracle/12.2.0.1/db1_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-54PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314424
Log: /u01/app/oracle/12.2.0.1/db1_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-54PM_1.log


Host:racnode-dc2-1
CRS Home:/u01/app/12.2.0.1/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/stage/patch/Apr2019/29301687/26839277
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/28566910
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29301676
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314339
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314424
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log


Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc2-1
Actual Home : /u01/app/oracle/12.2.0.1/db1
Version:12.2.0.1.0
Clone Home Path : /u01/app/oracle/12.2.0.1/db1_2

Host : racnode-dc2-1
Actual Home : /u01/app/12.2.0.1/grid
Version:12.2.0.1.0
Clone Home Path : /u01/app/12.2.0.1/grid_2


OPatchauto session completed at Mon Apr 29 18:03:48 2019
Time taken to complete the session 39 minutes, 16 seconds
[root@racnode-dc2-1 ~]#

ACTUAL: Grid_Infrastructure_Out_of_Place_18.6 (GI/DB DIFFERENT version)

+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

[root@racnode-dc1-1 ~]# export PATCH_HOME=/u01/patch/Apr2019/29301682
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -prepare-clone -logLevel FINEST

System initialization log file is /u01/18.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-04-30_05-06-34PM.log.
Session log file is /u01/18.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2019-04-30_05-08-04PM.log

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
CRS Home:/u01/18.3.0.0/grid
Version:18.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/patch/Apr2019/29301682/28435192
Reason: This patch is already been applied, so not going to apply again.

Patch: /u01/patch/Apr2019/29301682/28547619
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/patch/Apr2019/29301682/29301631
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log

Patch: /u01/patch/Apr2019/29301682/29301643
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log

Patch: /u01/patch/Apr2019/29301682/29302264
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log


Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid_2


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Tue Apr 30 17:27:21 2019
Time taken to complete the session 20 minutes, 52 seconds
[root@racnode-dc1-1 ~]#

Remove GRID Home After Upgrade

The environment started with a GRID 12.1.0.1 installation, upgraded to 18.3.0.0, and performed out-of-place patching (OOP) to 18.6.0.0.

As a result, there are three GRID homes, and the 12.1.0.1 home will be removed.

This demonstration is for the last node of the cluster; however, the actions performed are the same for all nodes.

Review existing patches for the Grid and Database homes:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Notice that the GRID home is /u01/18.3.0.0/grid_2 because this was the path suggested by the OOP process.
Based on experience, it might be better to name the GRID home with the correct version, i.e. /u01/18.6.0.0/grid.

Verify cluster state is [NORMAL]:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$

Check Oracle Inventory:

[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>

### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================

<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove the GRID home (/u01/app/12.1.0.1/grid). Use the -local flag to avoid any bug issues.

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$

Verify GRID home was removed:

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>

### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================

</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove 12.1.0.1 directory:

[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#

Check the cluster:

[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$
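
A follow-up sanity check is to list any resource that is not ONLINE (a sketch using crsctl's resource filter syntax):

crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'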

Later, /u01/18.3.0.0/grid will be removed, too, if there are no issues with the most recent patch.

Create Mount Filesystem for Vagrant VirtualBox

Once again, I am using oravirt boxes.

If you just want to create the machine and not run the provisioning step, run this:

vagrant up
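
For a generic Vagrantfile, provisioning can also be controlled explicitly (a sketch; these are standard Vagrant flags, not specific to the oravirt boxes):

# create the VM without running any provisioners
vagrant up --no-provision

# run (or re-run) provisioning later if desired
vagrant provision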

Since I don’t know Ansible, it was much simpler to do the work manually.

Oracle Linux Server release 7.3

Review disks:

[root@MGOEM ~]# fdisk -l /dev/sd*

Disk /dev/sda: 52.4 GB, 52428800000 bytes, 102400000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000979b6

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   102399999    50150400   8e  Linux LVM

Disk /dev/sda1: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda2: 51.4 GB, 51354009600 bytes, 100300800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

### Disk /dev/sdb is not partitioned: no partition entries are listed for it and no filesystem exists
Disk /dev/sdb: 187.9 GB, 187904819200 bytes, 367001600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@MGOEM ~]#
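
lsblk gives a quicker view of the same layout (a sketch; the column list is my own choice):

# show device, size, type, filesystem, and mount point for both disks
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sda /dev/sdb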

Create partition:

[root@MGOEM ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x37a8a8de.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-367001599, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-367001599, default 367001599):
Using default value 367001599
Partition 1 of type Linux and of size 175 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@MGOEM ~]#
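
The same partition could be created non-interactively; a sketch using parted (destructive, and assumes the disk is blank as above):

# label the disk and create a single primary partition spanning it
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 1MiB 100%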

Review the disk; the new partition now shows with System type Linux:

[root@MGOEM ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 187.9 GB, 187904819200 bytes, 367001600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x37a8a8de

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   367001599   183499776   83  Linux
[root@MGOEM ~]#

Create Filesystem:

[root@MGOEM ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
11468800 inodes, 45874944 blocks
2293747 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2193620992
1400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@MGOEM ~]#

Create and mount /u01:

[root@MGOEM ~]# mkdir -p /u01
[root@MGOEM ~]# mount /dev/sdb1 /u01
[root@MGOEM ~]#
[root@MGOEM ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs              2.8G     0  2.8G   0% /dev
tmpfs                 2.8G     0  2.8G   0% /dev/shm
tmpfs                 2.8G  8.4M  2.8G   1% /run
tmpfs                 2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root    46G  2.1G   44G   5% /
/dev/sda1            1014M  167M  848M  17% /boot
vagrant               932G  283G  650G  31% /vagrant
sf_working            420G  139G  281G  33% /sf_working
media_patch           3.7T  513G  3.2T  14% /media/patch
media_swrepo          3.7T  513G  3.2T  14% /media/swrepo
sf_OracleSoftware     3.7T  513G  3.2T  14% /sf_OracleSoftware
media_shared_storage  932G  283G  650G  31% /media/shared_storage
tmpfs                 571M     0  571M   0% /run/user/1000
/dev/sdb1             173G   61M  164G   1% /u01
[root@MGOEM ~]#

Update /etc/fstab:

[root@MGOEM ~]# tail /etc/fstab
# Created by anaconda on Tue Apr 18 08:50:14 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root     /                       xfs     defaults        0 0
UUID=ed2996e5-e077-4e23-83a5-10418226a725 /boot                   xfs     defaults        0 0
/dev/mapper/ol-swap     swap                    swap    defaults        0 0
/swapfile1              swap                    swap    defaults        0 0
/dev/sdb1               /u01                    ext4    defaults        1 1
[root@MGOEM ~]#
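
Referencing the partition by UUID in /etc/fstab is more robust than /dev/sdb1, since device names can change across reboots. A sketch (the UUID is a placeholder to be taken from blkid output):

# look up the UUID of the new filesystem
blkid /dev/sdb1

# /etc/fstab entry using the UUID instead of the device name
# UUID=<uuid-from-blkid>  /u01  ext4  defaults  1 1

# verify the entry without rebooting
umount /u01
mount -a
df -h /u01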

EM13.3 Directory Structures

Currently, I am preparing a POC to migrate OMS 13.3 from OEL6 to OEL7 and wanted a high-level overview of the installation.

[oracle@MGOEM ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

export PATH=$PATH:$HOME/bin
export DISPLAY=127.0.0.1:10.0

export ORACLE_BASE=/u01/app/oracle
export AGENT_BASE=$ORACLE_BASE/agent

export AGENT_HOME=$AGENT_BASE/agent_13.3.0.0.0
export EM_INSTANCE_BASE=$ORACLE_BASE/gc_inst
export OMS_INSTANCE_BASE=$EM_INSTANCE_BASE
export OHS=$EM_INSTANCE_BASE/user_projects/domains/GCDomain/servers/ohs1

### Starting from 13cR1, Oracle home (or OMS home) refers to the Middleware home.
export ORACLE_HOME=$ORACLE_BASE/middleware
export MW_HOME=$ORACLE_HOME
export OMS_HOME=$ORACLE_HOME
[oracle@MGOEM ~]$

Overview of the Directories Created for OMS Installation.
The OMS instance base directory (typically, gc_inst) is maintained outside the middleware home.

[oracle@MGOEM ~]$ cd $MW_HOME; pwd; ls
/u01/app/oracle/middleware
allroot.sh   common               embip          ldap           OMSPatcher     plsql                root.sh     ucp
asr          create_header.log    gccompliance   lib            OPatch         plugins              slax        user_projects
bi           crs                  has            logs           oracle_common  plugins_common       soa         webgate
bin          css                  install        network        oracore        postjava_header.log  sqlplus     wlserver
bmp          disc                 instantclient  nls            oraInst.loc    precomp              srvm        xdk
cfgtoollogs  doc                  inventory      ocm            ord            rdbms                stage
clone        domain-registry.xml  jdbc           ohs            oui            relnotes             sysman
coherence    em                   jlib           omscarespfile  perl           response             thirdparty
[oracle@MGOEM middleware]$

Overview of the Directories Created for Management Agent Installation (Central Agent).
Agent base directory for the central agent (Management Agent installed with the OMS).

[oracle@MGOEM middleware]$ cd $AGENT_BASE; pwd; ls
/u01/app/oracle/agent
agent_13.3.0.0.0  agent_inst  agentInstall.rsp
[oracle@MGOEM agent]$

Agent home that is within the agent base directory.

[oracle@MGOEM agent]$ cd $AGENT_HOME; pwd; ls
/u01/app/oracle/agent/agent_13.3.0.0.0
agent.rsp    EMStage        jdbc  jythonLib  OPatch         perl     replacebins.sh           sbin    xsds
bin          install        jdk   ldap       oracle_common  plugins  replacebins.sh.template  stage
cfgtoollogs  instantclient  jlib  lib        oraInst.loc    prereqs  root.sh                  sysman
config       inventory      js    ocm        oui            rda      root.sh.template         ucp
[oracle@MGOEM agent_13.3.0.0.0]$

The OMS instance base directory (typically, gc_inst) is maintained outside the middleware home.

[oracle@MGOEM agent_13.3.0.0.0]$ cd $OMS_INSTANCE_BASE; pwd; ls
/u01/app/oracle/gc_inst
em  user_projects
[oracle@MGOEM gc_inst]$

ORACLE_BASE

[oracle@MGOEM gc_inst]$  cd $ORACLE_BASE; pwd; ls
/u01/app/oracle
agent  bip  gc_inst  middleware  swlib
[oracle@MGOEM oracle]$

Inventory and Patches:

[oracle@MGOEM ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2015, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>13.8.0.0.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="oms13c1" LOC="/u01/app/oracle/middleware" TYPE="O" IDX="1"/>
<HOME NAME="agent13c1" LOC="/u01/app/oracle/agent/agent_13.3.0.0.0" TYPE="O" IDX="2"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@MGOEM ~]$
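
The registered home locations can also be pulled straight out of the inventory XML; a sketch, assuming xmllint (libxml2) is installed:

# print the LOC attribute of every registered home
xmllint --xpath '//HOME/@LOC' /u01/app/oraInventory/ContentsXML/inventory.xml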

[oracle@MGOEM ~]$ $AGENT_HOME/OPatch/opatch lspatches
27839641;One-off
27369653;One-off
27244723;One-off
27074880;OPSS Bundle Patch 12.1.3.0.171124
26933408;One-off
25832897;One-off
25412962;
23519804;One-off
20882747;One-off
20442348;One-off
19982906;One-off
19345252;One-off
18814458;One-off
28042003;One-off
27419391;WLS PATCH SET UPDATE 12.1.3.0.180417
23527146;One-off
20741228;JDBC 12.1.3.1 BP1

OPatch succeeded.
[oracle@MGOEM ~]$

[oracle@MGOEM ~]$ $ORACLE_HOME/OPatch/opatch lspatches
27839641;One-off
27369653;One-off
27244723;One-off
27074880;OPSS Bundle Patch 12.1.3.0.171124
26933408;One-off
25832897;One-off
25412962;
23519804;One-off
20882747;One-off
20442348;One-off
19982906;One-off
19345252;One-off
18814458;One-off
28042003;One-off
27419391;WLS PATCH SET UPDATE 12.1.3.0.180417
23527146;One-off
20741228;JDBC 12.1.3.1 BP1

OPatch succeeded.
[oracle@MGOEM ~]$
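
Since both homes are expected to carry the same patch list, a quick comparison confirms it (a bash sketch using process substitution):

diff <($AGENT_HOME/OPatch/opatch lspatches) <($ORACLE_HOME/OPatch/opatch lspatches) \
  && echo "patch lists match"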

[oracle@MGOEM ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 13.8.0.0.0
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/middleware
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/middleware/oraInst.loc
OPatch version    : 13.8.0.0.0
OUI version       : 13.8.0.0.0
Log file location : /u01/app/oracle/middleware/cfgtoollogs/opatch/opatch2019-05-12_16-34-38PM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/middleware"

Lsinventory Output file location : /u01/app/oracle/middleware/cfgtoollogs/opatch/lsinv/lsinventory2019-05-12_16-34-38PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: MGOEM
ARU platform id: 226
ARU platform description:: Linux_AMD64

[oracle@MGOEM ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
[oracle@MGOEM ~]$

[oracle@MGOEM ~]$ cat /u01/app/oracle/middleware/oraInst.loc
#Oracle Installer Location File Location
#Fri May 10 16:53:18 CEST 2019
inst_group=oinstall
inventory_loc=/u01/app/oraInventory
[oracle@MGOEM ~]$

Reference:
DIRECTORY STRUCTURE AND LOCATIONS OF IMPORTANT TRACE AND LOG FILES OF ENTERPRISE MANAGER CLOUD CONTROL 13C

Overview of the Directories Created for an Enterprise Manager System

Shocking opatchauto resume works after auto-logout

WARNING: Please don’t try this at home or in a production environment.

With that being said, this patching was for a DR production environment.

Oracle Interim Patch Installer version 12.2.0.1.16

Patching a 2-node RAC cluster; node1 completed successfully.

The rationale for using -norestart: there was an issue at one time where datapatch was applied on node1.

Active Data Guard is not implemented, and the database Start options are set to mount:

# crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      2        ONLINE  INTERMEDIATE node2              Mounted (Closed),STABLE
ora.dbproddr.dbdr.svc
      2        ONLINE  OFFLINE                                          STABLE
--------------------------------------------------------------------------------

$ srvctl status database -d dbproddr -v
Instance dbproddr1 is running on node node1 with online services dbdr. Instance status: Open,Readonly.
Instance dbproddr2 is running on node node2. Instance status: Mounted (Closed).

Run opatchauto, then Ctrl-C because the session is stuck:

node2 ~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019

node2 ~ # $GRID_HOME/OPatch/opatchauto apply $PATCH_TOP_DIR/28833531 -norestart

OPatchauto session is initiated at Thu May 16 20:20:24 2019

System initialization log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-16_08-20-26PM.log.

Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_08-20-47PM.log
The id for this session is K43Y

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.1.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0/db
Patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db

Patch applicability verified successfully on home /u02/app/12.1.0/grid


Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db
"/bin/sh -c 'cd /u01/app/oracle/product/12.1.0/db; ORACLE_HOME=/u01/app/oracle/product/12.1.0/db ORACLE_SID=dbproddr2 /u01/app/oracle/product/12.1.0/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db


Preparing to bring down database service on home /u01/app/oracle/product/12.1.0/db
Successfully prepared home /u01/app/oracle/product/12.1.0/db to bring down database service


Bringing down CRS service on home /u02/app/12.1.0/grid
Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
CRS service brought down successfully on home /u02/app/12.1.0/grid


Performing prepatch operation on home /u01/app/oracle/product/12.1.0/db
Perpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u01/app/oracle/product/12.1.0/db
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db


Performing postpatch operation on home /u01/app/oracle/product/12.1.0/db
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u02/app/12.1.0/grid

Binary patch applied successfully on home /u02/app/12.1.0/grid


Starting CRS service on home /u02/app/12.1.0/grid





*** Ctrl-C as shown below ***
^C
OPatchauto session completed at Thu May 16 21:41:58 2019
*** Time taken to complete the session 81 minutes, 34 seconds ***

opatchauto failed with error code 130

This is not good, as the session disconnected while I was troubleshooting in another session.

node2 ~ # timed out waiting for input: auto-logout

Even though the opatchauto session was terminated, the cluster upgrade state is [NORMAL] rather than [ROLLING PATCH].

node2 ~ # crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [323461694].

node2 ~ # crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
node2 ~ # crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      1        ONLINE  ONLINE       node1              Open,Readonly,STABLE
      2        ONLINE  ONLINE       node2              Open,Readonly,STABLE
--------------------------------------------------------------------------------
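
In hindsight, a couple of additional checks could have confirmed the binary patch level on the node (a sketch; both commands are available in 12.1.0.2):

# software patch level registered for this node
$GRID_HOME/bin/crsctl query crs softwarepatch

# patch IDs known to the GI home
$GRID_HOME/bin/kfod op=patches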

At this point, I was not sure what to do since everything looked good and online.

The colleague helping me with troubleshooting stated the patch completed successfully, and the main question was whether we needed to try “opatchauto resume”.

However, I was not comfortable with the outcome, so I tried opatchauto resume, and it worked like magic.

Reconnect and run opatchauto resume:

mdinh@node2 ~ $ sudo su - 
~ # . /home/oracle/working/dinh/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM4
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u02/app/12.1.0/grid
ORACLE_HOME=/u02/app/12.1.0/grid
Oracle Instance alive for sid "+ASM4"
~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019/
~ # $GRID_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Thu May 16 22:03:09 2019
Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_10-03-10PM.log
Resuming existing session with id K43Y

Starting CRS service on home /u02/app/12.1.0/grid
Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
CRS service started successfully on home /u02/app/12.1.0/grid


Preparing home /u01/app/oracle/product/12.1.0/db after database service restarted

OPatchauto is running in norestart mode. PDB instances will not be checked for database on the current node.
No step execution required.........
 

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node2
RAC Home:/u01/app/oracle/product/12.1.0/db
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/patches/Jan2019/28833531/28729220
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log


Host:node2
CRS Home:/u02/app/12.1.0/grid
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28729220
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log


Patching session reported following warning(s): 
_________________________________________________

[WARNING] The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

[WARNING] The database instances will not be brought up under the 'norestart' option. The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.


OPatchauto session completed at Thu May 16 22:10:01 2019
Time taken to complete the session 6 minutes, 52 seconds
~ # 

Logs:

oracle@node2:/u02/app/12.1.0/grid/cfgtoollogs/crsconfig
> ls -alrt
total 508
drwxr-x--- 2 oracle oinstall   4096 Nov 23 02:15 oracle
-rwxrwxr-x 1 oracle oinstall 167579 Nov 23 02:15 rootcrs_node2_2018-11-23_02-07-58AM.log
drwxrwxr-x 9 oracle oinstall   4096 Apr 10 12:05 ..

opatchauto apply - Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  33020 May 16 20:22 crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================

Mysterious log file - unknown where this log came from, because it did not appear in my terminal output.
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  86983 May 16 21:42 crspatch_node2_2019-05-16_08-27-35PM.log
====================================================================================================

-rwxrwxr-x 1 oracle oinstall  56540 May 16 22:06 srvmcfg1.log
-rwxrwxr-x 1 oracle oinstall  26836 May 16 22:06 srvmcfg2.log
-rwxrwxr-x 1 oracle oinstall  21059 May 16 22:06 srvmcfg3.log
-rwxrwxr-x 1 oracle oinstall  23032 May 16 22:08 srvmcfg4.log

opatchauto resume - Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  64381 May 16 22:09 crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================

Prepatch operation log file.

> tail -20 crspatch_node2_2019-05-16_08-21-16PM.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS '
2019-05-16 20:22:04: Removing file /tmp/fileTChFoS
2019-05-16 20:22:04: Successfully removed file: /tmp/fileTChFoS
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

2019-05-16 20:22:04: checkpoint ROOTCRS_POSTPATCH_OOP_REQSTEPS does not exist
2019-05-16 20:22:04: Done - Performing pre-pathching steps required for GI stack
2019-05-16 20:22:04: Resetting cluutil_trc_suff_pp to 0
2019-05-16 20:22:04: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"
2019-05-16 20:22:04: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil0.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS '
2019-05-16 20:22:04: Removing file /tmp/fileDoYyQA
2019-05-16 20:22:04: Successfully removed file: /tmp/fileDoYyQA
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

*** 2019-05-16 20:22:04: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:SUCCESS ***

Mysterious log file – crspatch_node2_2019-05-16_08-27-35PM.log

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL
2019-05-16 21:42:00: ###### Begin DIE Stack Trace ######
2019-05-16 21:42:00:     Package         File                 Line Calling   
2019-05-16 21:42:00:     --------------- -------------------- ---- ----------
2019-05-16 21:42:00:  1: main            rootcrs.pl            267 crsutils::dietrap
2019-05-16 21:42:00:  2: crsutils        crsutils.pm          1631 main::__ANON__
2019-05-16 21:42:00:  3: crsutils        crsutils.pm          1586 crsutils::system_cmd_capture_noprint
2019-05-16 21:42:00:  4: crsutils        crsutils.pm          9098 crsutils::system_cmd_capture
2019-05-16 21:42:00:  5: crspatch        crspatch.pm           988 crsutils::startFullStack
2019-05-16 21:42:00:  6: crspatch        crspatch.pm          1121 crspatch::performPostPatch
2019-05-16 21:42:00:  7: crspatch        crspatch.pm           212 crspatch::crsPostPatch
2019-05-16 21:42:00:  8: main            rootcrs.pl            276 crspatch::new
2019-05-16 21:42:00: ####### End DIE Stack Trace #######

2019-05-16 21:42:00: ROOTCRS_POSTPATCH checkpoint has failed
2019-05-16 21:42:00:      ckpt: -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil4.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH '
2019-05-16 21:42:00: Removing file /tmp/filewniUim
2019-05-16 21:42:00: Successfully removed file: /tmp/filewniUim
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil5.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status '
2019-05-16 21:42:00: Removing file /tmp/fileK1Tyw6
2019-05-16 21:42:00: Successfully removed file: /tmp/fileK1Tyw6
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: The 'ROOTCRS_POSTPATCH' status is FAILED
2019-05-16 21:42:00: ROOTCRS_POSTPATCH state is FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil6.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL '
2019-05-16 21:42:00: Removing file /tmp/filej20epR
2019-05-16 21:42:00: Successfully removed file: /tmp/filej20epR
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2019-05-16 21:42:01: Removing file /tmp/filely834C
2019-05-16 21:42:01: Successfully removed file: /tmp/filely834C
2019-05-16 21:42:01: pipe exit code: 0
2019-05-16 21:42:01: /bin/su successfully executed

*** 2019-05-16 21:42:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL ***

Postpatch operation log file.

> tail -20 crspatch_node2_2019-05-16_10-03-17PM.log
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START '
2019-05-16 22:09:59: Removing file /tmp/file0IogVl
2019-05-16 22:09:59: Successfully removed file: /tmp/file0IogVl
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:START
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil8.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2019-05-16 22:09:59: Removing file /tmp/fileXDCkuM
2019-05-16 22:09:59: Successfully removed file: /tmp/fileXDCkuM
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

*** 2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS ***
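
For reference, the checkpoint state that opatchauto resume relies on can be queried directly with cluutil, reusing the invocation that appears in the logs above (run from the GI home):

/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle \
  -chkckpt -name ROOTCRS_POSTPATCH -status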

Happy patching, and hopefully the upcoming patching of primary will be seamlessly successful.
