Channel: Thinking Out Loud



Local Install rlwrap for OEL 7.6

Installing rlwrap on OEL 7.6 requires a local install of python34.

yum install rlwrap

[root@SLC02PNY ~]# yum install rlwrap
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: perl(Data::Dumper) for package: rlwrap-0.43-2.el7.x86_64
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64

****************************************************************************************************
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
****************************************************************************************************

--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
--> Finished Dependency Resolution

yum install python34

root@SLC02PNY ~]# yum install python34
Loaded plugins: ulninfo

****************************************************************************************************
Package python34 is obsoleted by python36, trying to install python36-3.6.8-1.el7.x86_64 instead
****************************************************************************************************

Resolving Dependencies
--> Running transaction check
---> Package python36.x86_64 0:3.6.8-1.el7 will be installed
--> Processing Dependency: python36-libs(x86-64) = 3.6.8-1.el7 for package: python36-3.6.8-1.el7.x86_64
--> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: python36-3.6.8-1.el7.x86_64
--> Running transaction check
---> Package python36-libs.x86_64 0:3.6.8-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                         Arch                     Version                        Repository                            Size
====================================================================================================================================
Installing:
 python36                        x86_64                   3.6.8-1.el7                    ol7_developer_EPEL                    66 k
Installing for dependencies:
 python36-libs                   x86_64                   3.6.8-1.el7                    ol7_developer_EPEL                   8.6 M

Transaction Summary
====================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 8.6 M
Installed size: 36 M
Is this ok [y/d/N]: n

cat /etc/system-release

[root@ADC6160274 ~]# cat /etc/system-release
Oracle Linux Server release 7.6
[root@ADC6160274 ~]#

yumdownloader python34-3.4.5-4.el7.x86_64

[root@ADC6160274 ~]# yumdownloader python34-3.4.5-4.el7.x86_64
python34-3.4.5-4.el7.x86_64.rpm                                                                              |  50 kB  00:00:00

yumdownloader python34-libs-3.4.5-4.el7.x86_64
[root@ADC6160274 ~]# yumdownloader python34-libs-3.4.5-4.el7.x86_64
python34-libs-3.4.5-4.el7.x86_64.rpm                                                                         | 8.2 MB  00:00:01

yum localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm

[root@ADC6160274 ~]# yum localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm
Loaded plugins: ulninfo
Examining python34-libs-3.4.5-4.el7.x86_64.rpm: python34-libs-3.4.5-4.el7.x86_64
Marking python34-libs-3.4.5-4.el7.x86_64.rpm to be installed
Examining python34-3.4.5-4.el7.x86_64.rpm: python34-3.4.5-4.el7.x86_64
Marking python34-3.4.5-4.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package python34.x86_64 0:3.4.5-4.el7 will be installed
---> Package python34-libs.x86_64 0:3.4.5-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                     Arch                 Version                     Repository                                       Size
====================================================================================================================================
Installing:
 python34                    x86_64               3.4.5-4.el7                 /python34-3.4.5-4.el7.x86_64                     36 k
 python34-libs               x86_64               3.4.5-4.el7                 /python34-libs-3.4.5-4.el7.x86_64                29 M

Transaction Summary
====================================================================================================================================
Install  2 Packages

Total size: 29 M
Installed size: 29 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python34-libs-3.4.5-4.el7.x86_64                                                                                 1/2
  Installing : python34-3.4.5-4.el7.x86_64                                                                                      2/2
  Verifying  : python34-3.4.5-4.el7.x86_64                                                                                      1/2
  Verifying  : python34-libs-3.4.5-4.el7.x86_64                                                                                 2/2

Installed:
  python34.x86_64 0:3.4.5-4.el7                                  python34-libs.x86_64 0:3.4.5-4.el7

Complete!

yum install rlwrap

[root@ADC6160274 ~]# yum install rlwrap
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: perl(Data::Dumper) for package: rlwrap-0.43-2.el7.x86_64
--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                           Arch                    Version                        Repository                           Size
====================================================================================================================================
Installing:
 rlwrap                            x86_64                  0.43-2.el7                     ol7_developer_EPEL                  118 k
Installing for dependencies:
 perl-Data-Dumper                  x86_64                  2.145-3.el7                    ol7_latest                           47 k

Transaction Summary
====================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 165 k
Installed size: 378 k
Is this ok [y/d/N]: y
Downloading packages:
(1/2): perl-Data-Dumper-2.145-3.el7.x86_64.rpm                                                               |  47 kB  00:00:00
(2/2): rlwrap-0.43-2.el7.x86_64.rpm                                                                          | 118 kB  00:00:00
------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                               311 kB/s | 165 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : perl-Data-Dumper-2.145-3.el7.x86_64                                                                              1/2
  Installing : rlwrap-0.43-2.el7.x86_64                                                                                         2/2
  Verifying  : perl-Data-Dumper-2.145-3.el7.x86_64                                                                              1/2
  Verifying  : rlwrap-0.43-2.el7.x86_64                                                                                         2/2

Installed:
  rlwrap.x86_64 0:0.43-2.el7

Dependency Installed:
  perl-Data-Dumper.x86_64 0:2.145-3.el7

Complete!
[root@ADC6160274 ~]#
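
The whole workaround condensed into a hedged sketch, assuming yum-utils (for yumdownloader) is available and the repositories still carry python34-3.4.5-4.el7:

#!/bin/sh
# rlwrap needs /usr/bin/python3.4, but python34 is obsoleted by python36 on OEL 7.6.
# Pull the python34 rpms down and install them locally, then rlwrap installs cleanly.
yumdownloader python34-3.4.5-4.el7.x86_64 python34-libs-3.4.5-4.el7.x86_64
yum -y localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm
yum -y install rlwrap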

RAC Installation Logs

Note to self on log locations for a 2-node RAC installation and database creation.

Oracle Universal Installer logs for GI/DB:

[oracle@racnode-dc1-1 logs]$ pwd; ls -lhrt
/u01/app/oraInventory/logs
total 2.3M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-13-PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-35-PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.8K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-13-PM.log
-rw-r----- 1 oracle oinstall  153 Jun  7 17:06 oraInstall2019-06-07_04-57-13-PM.out
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
-rw-r----- 1 oracle oinstall   47 Jun  7 17:09 time2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:09 oraInstall2019-06-07_05-09-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:13 oraInstall2019-06-07_05-09-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall   29 Jun  7 17:14 oraInstall2019-06-07_05-09-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:14 AttachHome2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  507 Jun  7 17:14 silentInstall2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  14K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 9.5K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  496 Jun  7 17:15 oraInstall2019-06-07_05-09-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 17:15 installActions2019-06-07_05-09-01PM.log
[oracle@racnode-dc1-1 logs]$

silentInstall*.log

[oracle@racnode-dc1-1 logs]$ grep successful silent*.log

silentInstall2019-06-07_04-39-01PM.log:The installation of Oracle Grid Infrastructure 12c was successful.

silentInstall2019-06-07_05-09-01PM.log:The installation of Oracle Database 12c was successful.

[oracle@racnode-dc1-1 logs]$

installActions*.log

[oracle@racnode-dc1-1 logs]$ grep "Using paramFile" install*.log

installActions2019-06-07_04-39-01PM.log:INFO: Using paramFile: /u01/stage/12.1.0.2/grid/install/oraparam.ini

installActions2019-06-07_05-09-01PM.log:Using paramFile: /u01/stage/12.1.0.2/database/install/oraparam.ini

[oracle@racnode-dc1-1 logs]$
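
A couple of hedged checks against the same directory, assuming the standard OUI log naming shown above, to spot anything that did not go cleanly:

cd /u01/app/oraInventory/logs
# The oraInstall*.err files should be zero bytes; list any that are not.
find . -name "oraInstall*.err*" -size +0c
# Scan the detailed action logs for messages flagged as SEVERE.
grep -l SEVERE installActions*.log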

Run root script after installation:
$GRID_HOME/root.sh

[oracle@racnode-dc1-1 install]$ pwd; ls -lhrt root*.log
/u01/app/12.1.0.2/grid/install
-rw------- 1 oracle oinstall 7.4K Jun  7 16:51 root_racnode-dc1-1_2019-06-07_16-44-37.log
[oracle@racnode-dc1-1 install]$

Run configToolAllCommands:
$GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u01/stage/rsp/configtoolallcommands.rsp

[oracle@racnode-dc1-1 oui]$ pwd; ls -lhrt
/u01/app/12.1.0.2/grid/cfgtoollogs/oui
total 1.2M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r--r-- 1 oracle oinstall    0 Jun  7 16:57 configActions2019-06-07_04-57-10-PM.err
-rw-r--r-- 1 oracle oinstall  13K Jun  7 17:06 configActions2019-06-07_04-57-10-PM.log
-rw------- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw------- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
[oracle@racnode-dc1-1 oui]$

dbca

[oracle@racnode-dc1-1 dbca]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca
total 116K
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:02 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM.lck
drwxrwxr-x 3 oracle oinstall 4.0K Jun  7 17:02 _mgmtdb
-rwxrwxr-x 1 oracle oinstall 105K Jun  7 17:03 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM
drwxr-x--- 2 oracle oinstall 4.0K Jun  7 17:23 hawk
[oracle@racnode-dc1-1 dbca]$

dbca _mgmtdb

[oracle@racnode-dc1-1 _mgmtdb]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb
total 19M
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 16:58 trace.log.lck
-rwxrwxr-x 1 oracle oinstall  18M Jun  7 16:59 tempControl.ctl
-rwxrwxr-x 1 oracle oinstall  349 Jun  7 16:59 CloneRmanRestore.log
-rwxrwxr-x 1 oracle oinstall  596 Jun  7 16:59 cloneDBCreation.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:00 rmanUtil
-rwxrwxr-x 1 oracle oinstall 2.1K Jun  7 17:00 plugDatabase.log
-rwxrwxr-x 1 oracle oinstall  428 Jun  7 17:01 dbmssml_catcon_12271.lst
-rwxrwxr-x 1 oracle oinstall 3.5K Jun  7 17:01 dbmssml0.log
-rwxrwxr-x 1 oracle oinstall  396 Jun  7 17:01 postScripts.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:01 lockAccount.log
-rwxrwxr-x 1 oracle oinstall  442 Jun  7 17:01 catbundleapply_catcon_12348.lst
-rwxrwxr-x 1 oracle oinstall 3.9K Jun  7 17:01 catbundleapply0.log
-rwxrwxr-x 1 oracle oinstall  424 Jun  7 17:01 utlrp_catcon_12416.lst
-rwxrwxr-x 1 oracle oinstall 9.2K Jun  7 17:02 utlrp0.log
-rwxrwxr-x 1 oracle oinstall  964 Jun  7 17:02 postDBCreation.log
-rwxrwxr-x 1 oracle oinstall  737 Jun  7 17:02 OraGI12Home1__mgmtdb_creation_checkpoint.xml
-rwxrwxr-x 1 oracle oinstall  877 Jun  7 17:02 _mgmtdb.log
-rwxrwxr-x 1 oracle oinstall 1.1M Jun  7 17:02 trace.log
-rwxrwxr-x 1 oracle oinstall 1.3K Jun  7 17:02 DetectOption.log
drwxrwxr-x 2 oracle oinstall 4.0K Jun  7 17:03 vbox_rac_dc1

[oracle@racnode-dc1-1 _mgmtdb]$ tail _mgmtdb.log
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@racnode-dc1-1 _mgmtdb]$

dbca hawk

[oracle@racnode-dc1-1 hawk]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/hawk
total 34M
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 trace.log.lck
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 rmanUtil
-rw-r----- 1 oracle oinstall  18M Jun  7 17:17 tempControl.ctl
-rw-r----- 1 oracle oinstall  384 Jun  7 17:17 CloneRmanRestore.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 17:20 cloneDBCreation.log
-rw-r----- 1 oracle oinstall    8 Jun  7 17:20 postScripts.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:21 CreateClustDBViews.log
-rw-r----- 1 oracle oinstall    6 Jun  7 17:21 lockAccount.log
-rw-r----- 1 oracle oinstall 1.3K Jun  7 17:22 postDBCreation.log
-rw-r----- 1 oracle oinstall  511 Jun  7 17:23 OraDB12Home1_hawk_creation_checkpoint.xml
-rw-r----- 1 oracle oinstall  24K Jun  7 17:23 hawk.log
-rw-r----- 1 oracle oinstall  16M Jun  7 17:23 trace.log

[oracle@racnode-dc1-1 hawk]$ tail hawk.log
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 76%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/hawk.
Database Information:
Global Database Name:hawk
System Identifier(SID) Prefix:hawk
[oracle@racnode-dc1-1 hawk]$

Shell Scripting Using set -v

set -v : Print shell input lines as they are read.

show_gds_status.sh

#!/bin/sh
##############################
# GDSCTL> configure -width 132
# GDSCTL> configure -save_config
##############################
. ~/gsm1.sh
set -evx
gdsctl -show << END
status
databases
services
exit
END
exit

Execute show_gds_status.sh

[oracle@SLC02PNY GDS]$ ./show_gds_status.sh
gdsctl -show << END
status
databases
services
exit
END
+ gdsctl -show
gsm       : GSM1
TNS_ADMIN : /u01/app/oracle/product/18.0.0/gsmhome_1/network/admin
driver    : jdbc:oracle:thin:
resolve   : QUAL_HOSTNAME
timeout   : 150
log_level : OFF
version   : 18.0.0.0.0
width     : 132
verbose   : ON
spool     : OFF
showtime  : OFF
GDSCTL: Version 18.0.0.0.0 - Production on Sat Jun 15 13:01:21 UTC 2019

Copyright (c) 2011, 2018, Oracle.  All rights reserved.

Welcome to GDSCTL, type "help" for information.

Current GSM is set to GSM1
GDSCTL>
Alias                     GSM1
Version                   18.0.0.0.0
Start Date                15-JUN-2019 12:22:28
Trace Level               off
Listener Log File         /u01/app/oracle/diag/gsm/SLC02PNY/gsm1/alert/log.xml
Listener Trace File       /u01/app/oracle/diag/gsm/SLC02PNY/gsm1/trace/ora_9504_140547635764096.trc
Endpoint summary          (ADDRESS=(HOST=SLC02PNY.localdomain)(PORT=1571)(PROTOCOL=tcp))
GSMOCI Version            2.2.1
Mastership                Y
Connected to GDS catalog  Y
Process Id                9507
Number of reconnections   0
Pending tasks.     Total  0
Tasks in  process. Total  0
Regional Mastership       TRUE
Total messages published  152
Time Zone                 +00:00
Orphaned Buddy Regions:
     None
GDS region                region1
Network metrics:
   Region: region2 Network factor:0

GDSCTL>
Database: "chi" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: region2
   Service: "prim" Globally started: Y Started: N
            Scan: N Enabled: Y Preferred: Y
   Service: "stby" Globally started: Y Started: Y
            Scan: N Enabled: Y Preferred: Y
   Registered instances:
     sales%11
Database: "sfo" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: region1
   Service: "prim" Globally started: Y Started: Y
            Scan: N Enabled: Y Preferred: Y
   Service: "stby" Globally started: Y Started: N
            Scan: N Enabled: Y Preferred: Y
   Registered instances:
     sales%1

GDSCTL>
Service "prim.sales.oradbcloud" has 1 instance(s). Affinity: ANYWHERE
   Instance "sales%1", name: "sales", db: "sfo", region: "region1", status: ready.
Service "stby.sales.oradbcloud" has 1 instance(s). Affinity: ANYWHERE
   Instance "sales%11", name: "sales", db: "chi", region: "region2", status: ready.

GDSCTL>
exit
+ exit
[oracle@SLC02PNY GDS]$

help set

[oracle@SLC02PNY GDS]$ help set
set: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
    Set or unset values of shell options and positional parameters.

    Change the value of shell attributes and positional parameters, or
    display the names and values of shell variables.

    Options:
      -a  Mark variables which are modified or created for export.
      -b  Notify of job termination immediately.
      -e  Exit immediately if a command exits with a non-zero status.
      -f  Disable file name generation (globbing).
      -h  Remember the location of commands as they are looked up.
      -k  All assignment arguments are placed in the environment for a
          command, not just those that precede the command name.
      -m  Job control is enabled.
      -n  Read commands but do not execute them.
      -o option-name
          Set the variable corresponding to option-name:
              allexport    same as -a
              braceexpand  same as -B
              emacs        use an emacs-style line editing interface
              errexit      same as -e
              errtrace     same as -E
              functrace    same as -T
              hashall      same as -h
              histexpand   same as -H
              history      enable command history
              ignoreeof    the shell will not exit upon reading EOF
              interactive-comments
                           allow comments to appear in interactive commands
              keyword      same as -k
              monitor      same as -m
              noclobber    same as -C
              noexec       same as -n
              noglob       same as -f
              nolog        currently accepted but ignored
              notify       same as -b
              nounset      same as -u
              onecmd       same as -t
              physical     same as -P
              pipefail     the return value of a pipeline is the status of
                           the last command to exit with a non-zero status,
                           or zero if no command exited with a non-zero status
              posix        change the behavior of bash where the default
                           operation differs from the Posix standard to
                           match the standard
              privileged   same as -p
              verbose      same as -v
              vi           use a vi-style line editing interface
              xtrace       same as -x
      -p  Turned on whenever the real and effective user ids do not match.
          Disables processing of the $ENV file and importing of shell
          functions.  Turning this option off causes the effective uid and
          gid to be set to the real uid and gid.
      -t  Exit after reading and executing one command.
      -u  Treat unset variables as an error when substituting.
================================================================================
      -v  Print shell input lines as they are read.
================================================================================
      -x  Print commands and their arguments as they are executed.
      -B  the shell will perform brace expansion
      -C  If set, disallow existing regular files to be overwritten
          by redirection of output.
      -E  If set, the ERR trap is inherited by shell functions.
      -H  Enable ! style history substitution.  This flag is on
          by default when the shell is interactive.
      -P  If set, do not follow symbolic links when executing commands
          such as cd which change the current directory.
      -T  If set, the DEBUG trap is inherited by shell functions.
      --  Assign any remaining arguments to the positional parameters.
          If there are no remaining arguments, the positional parameters
          are unset.
      -   Assign any remaining arguments to the positional parameters.
          The -x and -v options are turned off.

    Using + rather than - causes these flags to be turned off.  The
    flags can also be used upon invocation of the shell.  The current
    set of flags may be found in $-.  The remaining n ARGs are positional
    parameters and are assigned, in order, to $1, $2, .. $n.  If no
    ARGs are given, all shell variables are printed.

    Exit Status:
    Returns success unless an invalid option is given.
[oracle@SLC02PNY GDS]$
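
A tiny bash sketch showing why -v and -x complement each other here: -v echoes lines (including a heredoc) exactly as written, while -x prints each command after expansion:

#!/bin/bash
set -v
# set -v prints this comment and the lines below as they are read.
NAME=world
set -x
# set -x prints the expanded command, e.g. + echo 'hello world'
echo "hello $NAME"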

DGMGRL Using Help To Learn About New Validate Features

Wouldn’t it be nicer if Oracle added an (NF) marker for new features to the help syntax?

DGMGRL for Linux: Release 12.2.0.1.0

[oracle@db-fs-1 bin]$ ./dgmgrl /
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Jun 28 17:49:16 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "orclcdb"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

DGMGRL>

DGMGRL for Linux: Release 18.0.0.0.0

[oracle@ADC6160274 GDS]$ dgmgrl /
DGMGRL for Linux: Release 18.0.0.0.0 - Production on Fri Jun 28 15:54:36 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "chi"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

  VALIDATE NETWORK CONFIGURATION FOR { ALL | <member name> }; [*** NF ***]

  VALIDATE STATIC CONNECT IDENTIFIER FOR { ALL | <database name> }; [*** NF ***]

DGMGRL>

validate network configuration

DGMGRL> validate network configuration for all;
Connecting to instance "sales" on database "sfo" ...
Connected to "sfo"
Checking connectivity from instance "sales" on database "sfo to instance "sales" on database "chi"...
Succeeded.
Connecting to instance "sales" on database "chi" ...
Connected to "chi"
Checking connectivity from instance "sales" on database "chi to instance "sales" on database "sfo"...
Succeeded.

Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

validate static connect identifier

DGMGRL> validate static connect identifier for all;
Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

DGMGRL>
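
Following the same heredoc pattern as show_gds_status.sh, a minimal sketch (assuming OS authentication as SYSDG works as shown above) to run both new validate commands non-interactively:

#!/bin/sh
set -evx
dgmgrl / << END
validate network configuration for all;
validate static connect identifier for all;
exit
END
exit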

Delete MGMTDB and MGMTLSNR from OEM using emcli

According to Doc ID 1933649.1, MGMTDB and MGMTLSNR should not be monitored.

$ grep oms /etc/oratab 
oms:/u01/middleware/13.2.0:N

$ . oraenv <<< oms

$ emcli login -username=SYSMAN
Enter password : 
Login successful

$ emcli sync
Synchronized successfully

$ emcli get_targets -targets=oracle_listener -format=name:csv|grep -i MGMT
1,Up,oracle_listener,MGMTLSNR_host01

$ emcli delete_target -name="MGMTLSNR_host01" -type="oracle_listener" 
Target "MGMTLSNR_host01:oracle_listener" deleted successfully

$ emcli sync
$ emcli get_targets|grep -i MGMT

Note: MGMTDB was not monitored and can be deleted as follows:

$ emcli get_targets -targets=oracle_database -format=name:csv|grep -i MGMT
$ emcli delete_target -name="MGMTDB_host01" -type="oracle_database" 

The problem with monitoring MGMTDB and MGMTLSNR is getting a silly page when they are relocated to a new host.

Host=host01
Target type=Listener 
Target name=MGMTLSNR_host01
Categories=Availability 
Message=The listener is down:

I am dealing with the same issue for SCAN listeners and have not reached an agreement to have them deleted, even though I and a few others think they should not be monitored.
Unfortunately, there is no official Oracle documentation for this.

Here’s a typical page when all SCAN listeners are running on only one node.

Host=host01
Target type=Listener
Target name=LISTENER_SCAN2_cluster
Categories=Availability
Message=The listener is down: 

$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node02
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node02
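
Putting the emcli steps together, a hedged sketch of the cleanup, assuming the MGMT target names follow the pattern shown above (emcli login will prompt for the SYSMAN password):

#!/bin/bash
. oraenv <<< oms
emcli login -username=SYSMAN
emcli sync
# Delete any monitored MGMT listener and database targets (name is the 4th csv field).
for t in $(emcli get_targets -targets=oracle_listener -format=name:csv | grep -i MGMT | awk -F, '{print $4}'); do
  emcli delete_target -name="$t" -type="oracle_listener"
done
for t in $(emcli get_targets -targets=oracle_database -format=name:csv | grep -i MGMT | awk -F, '{print $4}'); do
  emcli delete_target -name="$t" -type="oracle_database"
done
emcli sync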

Resize ACFS Volume

The current ACFS file system is 299G.

Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  299G  2.6G  248G   2% /ggdata02

Free_MB is 872, which causes paging due to insufficient free space in ASM disk group ACFS_DATA.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184      872                0             872              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184      874                0             872              0             N  ACFS_DATA/

Review attributes for ASM Disk Group ACFS_DATA.

$ asmcmd lsattr -l -G ACFS_DATA
Name                     Value       
access_control.enabled   FALSE       
access_control.umask     066         
au_size                  4194304     
cell.smart_scan_capable  FALSE       
compatible.advm          12.1.0.0.0  
compatible.asm           12.1.0.0.0  
compatible.rdbms         12.1.0.0.0  
content.check            FALSE       
content.type             data        
disk_repair_time         3.6h        
failgroup_repair_time    24.0h       
idp.boundary             auto        
idp.type                 dynamic     
phys_meta_replicated     true        
sector_size              512         
thin_provisioned         FALSE       

Resize /ggdata02 to 250G.

$ acfsutil size 250G /ggdata02
acfsutil size: new file system size: 268435456000 (256000MB)

Review results.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/


$ df -h /ggdata02
Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  250G  2.6G  248G   2% /ggdata02

$ asmcmd volinfo --all
Diskgroup Name: ACFS_DATA

	 Volume Name: ACFS_VOL
	 Volume Device: /dev/asm/acfs_vol-177
	 State: ENABLED
	 Size (MB): 256000
	 Resize Unit (MB): 512
	 Redundancy: UNPROT
	 Stripe Columns: 8
	 Stripe Width (K): 1024
	 Usage: ACFS
	 Mountpath: /ggdata02
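
A hedged pre-check before shrinking, using only the commands already shown plus acfsutil info fs, to confirm the new size still leaves headroom over what is in use:

df -h /ggdata02
asmcmd lsdg -g ACFS_DATA
acfsutil info fs /ggdata02
# Shrink (or grow) in one step; acfsutil size accepts K/M/G/T suffixes.
acfsutil size 250G /ggdata02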

Check Cluster Resources Where Target != State

Current version.

[oracle@racnode-dc2-1 patch]$ cat /etc/oratab
#Backup file is  /u01/app/12.2.0.1/grid/srvm/admin/oratab.bak.racnode-dc2-1 line added by Agent
-MGMTDB:/u01/app/12.2.0.1/grid:N
hawk1:/u01/app/oracle/12.2.0.1/db1:N
+ASM1:/u01/app/12.2.0.1/grid:N          # line added by Agent
[oracle@racnode-dc2-1 patch]$

Kill database instance process.

[oracle@racnode-dc2-1 patch]$ ps -ef|grep pmon
oracle   13542     1  0 16:09 ?        00:00:00 asm_pmon_+ASM1
oracle   27663     1  0 16:39 ?        00:00:00 ora_pmon_hawk1
oracle   29401 18930  0 16:40 pts/0    00:00:00 grep --color=auto pmon
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ kill -9 27663
[oracle@racnode-dc2-1 patch]$

Check cluster resource – close but no cigar (false positive)

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc2-1            STABLE
               OFFLINE OFFLINE      racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      3        OFFLINE OFFLINE                               STABLE
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Check cluster resource – BINGO!

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '((TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Check cluster resource – sanity check.

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '((TARGET = ONLINE) and (STATE != ONLINE)'
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc2-1            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc2-2            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$
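
The filter is easy to wrap into a cron-able check; a minimal sketch, assuming gi.env sets the environment as shown earlier and mail works on the host (the recipient address is a placeholder):

#!/bin/bash
. /media/patch/gi.env
# Report any resource whose TARGET is ONLINE but whose STATE is not.
OUT=$(crsctl stat res -t -w '((TARGET = ONLINE) and (STATE != ONLINE)')
if [ -n "$OUT" ]; then
  echo "$OUT" | mail -s "TARGET != STATE on $(hostname)" dba@example.com
fi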

Rsync DBFS To ACFS For GoldenGate Trail Migration

Planning to move GoldenGate trail files from DBFS to ACFS.

This is pre-work before the actual migration, done to stress IO on ACFS.

Learned some cron along the way.

# Run every 2 hours at even hours
0 */2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1

# Run every 2 hours at odd hours
0 1-23/2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1

Syntax and output.

+ /bin/rsync -vrpogt --delete-after /DBFS/ggdata/ /ACFS/ggdata
building file list ... done

dirchk/E_SOURCE.cpe
dirchk/P_TARGET.cpe

dirdat/
dirdat/aa000307647
dirdat/aa000307648
.....
dirdat/aa000307726
dirdat/aa000307727

deleting dirdat/aa000306741
deleting dirdat/aa000306740
.....
deleting dirdat/aa000306662
deleting dirdat/aa000306661

sent 16,205,328,959 bytes  received 1,743 bytes  140,305,893.52 bytes/sec
total size is 203,021,110,174  speedup is 12.53

real	1m56.671s
user	1m24.643s
sys	0m45.875s

+ '[' 0 '!=' 0 ']'

+ /bin/diff -rq /DBFS/ggdata /ACFS/ggdata

Files /DBFS/ggdata/dirchk/E_SOURCE.cpe and /ACFS/ggdata/dirchk/E_SOURCE.cpe differ
Files /DBFS/ggdata/dirchk/P_TARGET.cpe and /ACFS/ggdata/dirchk/P_TARGET.cpe differ

Only in /ACFS/ggdata/dirdat: aa000306742
Only in /ACFS/ggdata/dirdat: aa000306743
Only in /ACFS/ggdata/dirdat: aa000306744
Only in /ACFS/ggdata/dirdat: aa000306745

Only in /DBFS/ggdata/dirdat: aa000307728
Only in /DBFS/ggdata/dirdat: aa000307729

real	69m15.207s
user	2m9.242s
sys	17m3.846s

+ ls /DBFS/ggdata/dirdat/
+ wc -l
975

+ ls -alrt /DBFS/ggdata/dirdat/
+ head
total 190631492
drwxrwxrwx 24 root    root             0 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199999934 Jul 23 06:11 aa000306798
-rw-r-----  1 ggsuser oinstall 199999194 Jul 23 06:13 aa000306799
-rw-r-----  1 ggsuser oinstall 199999387 Jul 23 06:14 aa000306800
-rw-r-----  1 ggsuser oinstall 199999122 Jul 23 06:16 aa000306801
-rw-r-----  1 ggsuser oinstall 199999172 Jul 23 06:19 aa000306802
-rw-r-----  1 ggsuser oinstall 199999288 Jul 23 06:19 aa000306803

+ ls -alrt /DBFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999671 Jul 24 07:59 aa000307764
-rw-r-----  1 ggsuser oinstall 199999645 Jul 24 08:01 aa000307765
-rw-r-----  1 ggsuser oinstall 199998829 Jul 24 08:02 aa000307766
-rw-r-----  1 ggsuser oinstall 199998895 Jul 24 08:04 aa000307767
-rw-r-----  1 ggsuser oinstall 199999655 Jul 24 08:05 aa000307768
-rw-r-----  1 ggsuser oinstall 199999930 Jul 24 08:07 aa000307769
-rw-r-----  1 ggsuser oinstall 199999761 Jul 24 08:09 aa000307770
-rw-r-----  1 ggsuser oinstall 199999421 Jul 24 08:11 aa000307771
-rw-r-----  1 ggsuser oinstall   7109055 Jul 24 08:11 aa000307772

+ ls /ACFS/ggdata/dirdat/
+ wc -l
986

+ ls -alrt /ACFS/ggdata/dirdat/
+ head
total 194779104
drwxrwxrwx 24 root    root          8192 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199998453 Jul 23 04:55 aa000306742
-rw-r-----  1 ggsuser oinstall 199999657 Jul 23 04:56 aa000306743
-rw-r-----  1 ggsuser oinstall 199999227 Jul 23 04:57 aa000306744
-rw-r-----  1 ggsuser oinstall 199999389 Jul 23 04:59 aa000306745
-rw-r-----  1 ggsuser oinstall 199999392 Jul 23 05:00 aa000306746
-rw-r-----  1 ggsuser oinstall 199999116 Jul 23 05:01 aa000306747

+ ls -alrt /ACFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999876 Jul 24 06:48 aa000307719
-rw-r-----  1 ggsuser oinstall 199999751 Jul 24 06:50 aa000307720
-rw-r-----  1 ggsuser oinstall 199999918 Jul 24 06:51 aa000307721
-rw-r-----  1 ggsuser oinstall 199999404 Jul 24 06:52 aa000307722
-rw-r-----  1 ggsuser oinstall 199999964 Jul 24 06:54 aa000307723
-rw-r-----  1 ggsuser oinstall 199999384 Jul 24 06:56 aa000307724
-rw-r-----  1 ggsuser oinstall 199999283 Jul 24 06:57 aa000307725
-rw-r-----  1 ggsuser oinstall 199998033 Jul 24 06:59 aa000307726
-rw-r-----  1 ggsuser oinstall 199999199 Jul 24 07:00 aa000307727
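
The wrapper itself is not shown; a minimal sketch of what acfs_ggdata02_rsync.sh might look like, reconstructed from the traced output above (paths as shown, everything else an assumption):

#!/bin/bash
set -x
# Mirror DBFS trail files to ACFS; remove extraneous files on the target only after the transfer.
/bin/rsync -vrpogt --delete-after /DBFS/ggdata/ /ACFS/ggdata
[ $? != 0 ] && exit 1
# Report remaining differences (trails written during the rsync are expected to differ).
/bin/diff -rq /DBFS/ggdata /ACFS/ggdata
# Quick trail counts and oldest/newest trails on both sides.
ls /DBFS/ggdata/dirdat/ | wc -l
ls -alrt /DBFS/ggdata/dirdat/ | head
ls -alrt /DBFS/ggdata/dirdat/ | tail
ls /ACFS/ggdata/dirdat/ | wc -l
ls -alrt /ACFS/ggdata/dirdat/ | head
ls -alrt /ACFS/ggdata/dirdat/ | tail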

Too Old To Remember

Is it required to run datapatch after creating a database?

Why bother trying to remember when running datapatch -prereq will tell you?

Test case for 12.2.

Database July 2019 Release Update 12.2 applied:

[oracle@racnode-dc2-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.2.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
29770090;ACFS JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770090)
29770040;OCW JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770040)
29757449;Database Jul 2019 Release Update : 12.2.0.1.190716 (29757449)
28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

OPatch succeeded.
+ exit
[oracle@racnode-dc2-1 ~]$

Create 12.2 RAC database:

[oracle@racnode-dc2-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname hawkcdb -sid hawkcdb -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc2-1,racnode-dc2-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
21% complete
Creating and starting Oracle instance
35% complete
Creating cluster database views
50% complete
Completing Database Creation
57% complete
Creating Pluggable Databases
78% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/hawkcdb/hawkcdb.log" for further details.
[oracle@racnode-dc2-1 ~]$

Run datapatch -prereq for 12.2

[oracle@racnode-dc2-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.2.0.1.0 Production on Thu Aug  1 17:45:13 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    Nothing to apply
**********************************************************************

SQL Patching tool complete on Thu Aug  1 17:46:39 2019
[oracle@racnode-dc2-1 ~]$

Test case for 12.1.

Database July 2019 Bundle Patch 12.1 applied:

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.2/grid
ORACLE_HOME=/u01/app/12.1.0.2/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.1.0.2/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
29509318;OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)
29496791;Database Bundle Patch : 12.1.0.2.190716 (29496791)
29423125;ACFS PATCH SET UPDATE 12.1.0.2.190716 (29423125)
26983807;WLM Patch Set Update: 12.1.0.2.180116 (26983807)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Create 12.1 RAC database:

[oracle@racnode-dc1-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname cdbhawk -sid cdbhawk -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc1-1,racnode-dc1-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
23% complete
Creating and starting Oracle instance
38% complete
Creating cluster database views
54% complete
Completing Database Creation
77% complete
Creating Pluggable Databases
81% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdbhawk/cdbhawk.log" for further details.
[oracle@racnode-dc1-1 ~]$

Run datapatch -prereq for 12.1

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.1.0.2.0 Production on Thu Aug  1 18:24:53 2019
Copyright (c) 2012, 2017, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    The following patches will be applied:
      29496791 (DATABASE BUNDLE PATCH 12.1.0.2.190716)
**********************************************************************

SQL Patching tool complete on Thu Aug  1 18:26:26 2019
[oracle@racnode-dc1-1 ~]$

For 12.1, datapatch is required; for 12.2, it is not.
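
So the habit worth keeping, rather than a per-version rule, is a hedged post-creation check (assumes the database environment is already set):

# Let datapatch tell you whether SQL patching is needed after dbca finishes.
$ORACLE_HOME/OPatch/datapatch -prereq
# If the installation queue lists patches to apply, run it for real:
$ORACLE_HOME/OPatch/datapatch -verbose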

19c Grid Dry-Run Upgrade

First test using GUI.

[oracle@racnode-dc2-1 grid]$ /u01/app/19.3.0.0/grid/gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_00-20-31AM/gridSetupActions2019-08-06_00-20-31AM.log
[oracle@racnode-dc2-1 grid]$

Create dryRunForUpgradegrid.rsp from grid_2019-08-06_00-20-31AM.rsp (generated by the GUI test above):

[oracle@racnode-dc2-1 grid]$ grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | grep -v "=$" | awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp

[oracle@racnode-dc2-1 ~]$ cat /home/oracle/dryRunForUpgradegrid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=vbox-rac-dc2
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=racnode-dc2-1:,racnode-dc2-2:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
[oracle@racnode-dc2-1 ~]$

Create the grid home directory on all nodes:

[root@racnode-dc2-1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[root@racnode-dc2-1 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chmod 775 /u01/app/19.3.0.0/grid

[root@racnode-dc2-1 ~]# ll /u01/app/19.3.0.0/
total 4
drwxrwxr-x 2 oracle oinstall 4096 Aug  6 02:07 grid
[root@racnode-dc2-1 ~]#

Extract grid software for node1 ONLY:

[oracle@racnode-dc2-1 ~]$ unzip -qo /media/swrepo/LINUX.X64_193000_grid_home.zip -d /u01/app/19.3.0.0/grid/

[oracle@racnode-dc2-1 ~]$ ls /u01/app/19.3.0.0/grid/
addnode     clone  dbjava     diagnostics  gpnp          install        jdbc  lib      OPatch   ords  perl     qos       rhp            rootupgrade.sh  sqlpatch  tomcat  welcome.html  xdk
assistants  crs    dbs        dmu          gridSetup.sh  instantclient  jdk   md       opmn     oss   plsql    racg      root.sh        runcluvfy.sh    sqlplus   ucp     wlm
bin         css    deinstall  env.ora      has           inventory      jlib  network  oracore  oui   precomp  rdbms     root.sh.old    sdk             srvm      usm     wwg
cha         cv     demo       evm          hs            javavm         ldap  nls      ord      owm   QOpatch  relnotes  root.sh.old.1  slax            suptools  utl     xag

[oracle@racnode-dc2-1 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.0G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-1 ~]$

Run gridSetup.sh -silent -dryRunForUpgrade:

[oracle@racnode-dc2-1 ~]$ env|grep -i ora
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle

[oracle@racnode-dc2-1 ~]$ date
Tue Aug  6 02:35:47 CEST 2019

[oracle@racnode-dc2-1 ~]$ /u01/app/19.3.0.0/grid/gridSetup.sh -silent -dryRunForUpgrade -responseFile /home/oracle/dryRunForUpgradegrid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_02-35-52AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log


As a root user, execute the following script(s):
        1. /u01/app/19.3.0.0/grid/rootupgrade.sh

Execute /u01/app/19.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc2-1]

Run the script on the local node.

Successfully Setup Software with warning(s).
[oracle@racnode-dc2-1 ~]$

Run rootupgrade.sh for node1 ONLY and review log:

[root@racnode-dc2-1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Check /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log for the output of root script

[root@racnode-dc2-1 ~]# cat /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/rootcrs_racnode-dc2-1_2019-08-06_02-45-31AM.log
2019/08/06 02:45:44 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/08/06 02:45:52 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2019/08/06 02:47:56 CLSRSC-693: CRS entities validation completed successfully.
[root@racnode-dc2-1 ~]#

Check grid home for node2:

[oracle@racnode-dc2-2 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.6G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-2 ~]$

Check oraInventory for ALL nodes:

[oracle@racnode-dc2-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.2.0.1/db1" TYPE="O" IDX="2"/>
==========================================================================================
<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0.0/grid" TYPE="O" IDX="3"/>
==========================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc2-2 ~]$

Check crs activeversion: 12.2.0.1.0

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc2-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [927320293].
[oracle@racnode-dc2-1 ~]$

Check log location:

[oracle@racnode-dc2-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/

[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$ ls -alrt
total 17420
-rw-r-----  1 oracle oinstall     129 Aug  6 02:35 installerPatchActions_2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall       0 Aug  6 02:35 gridSetupActions2019-08-06_02-35-52AM.err
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:35 temp_ob
-rw-r-----  1 oracle oinstall       0 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.err
drwxrwx--- 17 oracle oinstall    4096 Aug  6 02:39 ..
-rw-r-----  1 oracle oinstall     157 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall       0 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.err.racnode-dc2-2
-rw-r-----  1 oracle oinstall     142 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.out.racnode-dc2-2
-rw-r-----  1 oracle oinstall 9341920 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall   13419 Aug  6 02:43 time2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall 8443087 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.log
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:56 .
[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$

Since I have not performed the actual upgrade, I don’t know if the 19.3.0.0 grid home in oraInventory will be problematic.

It was problematic when I performed the test in silent mode after the initial test with the GUI.

To resolve the issue, detach the 19.3.0.0 grid home:

export ORACLE_HOME=/u01/app/19.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
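
To confirm the home is gone from the central inventory, a quick check (a sketch only, assuming the inventory location shown earlier; the grep should return nothing after the detach):

grep "19.3.0.0" /u01/app/oraInventory/ContentsXML/inventory.xml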

Obvious But Not For Oracle Obviously

While dropping a RAC database, I found the error ORA-01081: cannot start already-running ORACLE - shut it down first in the dbca log.

Looking up the error, the cause is obvious:

$ oerr ora 01081
01081, 00000, "cannot start already-running ORACLE - shut it down first"
// *Cause:  Obvious
// *Action:

Here is the process for 12.1.0:

$ ps -ef|grep pmon
oracle   41777     1  0 Aug09 ?        00:00:30 asm_pmon_+ASM2

$ srvctl config database
DBFS

$ srvctl status database -d DBFS -v
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Instance DBFS1 is not running on node node1
Instance DBFS2 is not running on node node2
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

$ dbca -silent -deleteDatabase -sourceDB DBFS
Connecting to database
9% complete
14% complete
19% complete
23% complete
28% complete
33% complete
38% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instances and datafiles
66% complete
80% complete
95% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/DBFS.log" for further details.

$ cat /u01/app/oracle/cfgtoollogs/dbca/DBFS.log
The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. 
All information in the database will be deleted. Do you want to proceed?
Connecting to database
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 14%
DBCA_PROGRESS : 19%
DBCA_PROGRESS : 23%
DBCA_PROGRESS : 28%
DBCA_PROGRESS : 33%
DBCA_PROGRESS : 38%
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ORA-01081: cannot start already-running ORACLE - shut it down first
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DBCA_PROGRESS : 47%
Updating network configuration files
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 52%
Deleting instances and datafiles
DBCA_PROGRESS : 66%
DBCA_PROGRESS : 80%
DBCA_PROGRESS : 95%
DBCA_PROGRESS : 100%
Database deletion completed.

$ srvctl config database
$
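
In this case the deletion completed and the srvctl configuration is empty. Had the database entry been left behind, a minimal cleanup sketch (assuming the same database name) would be:

$ srvctl remove database -d DBFS -noprompt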

Duplicate OMF DB Using Backupset To New Host And Directories

Database 12.1.0.2.0 is created using OMF.

At SOURCE:
db_create_file_dest=/u01/app/oracle/oradata
db_recovery_file_dest=/u01/app/oracle/fast_recovery_area

At DESTINATION:
db_create_file_dest=/u01/oradata
db_recovery_file_dest=/u01/fast_recovery_area

BACKUP LOCATION on shared storage:
/media/shared_storage/rman_backup/EMU

I had to explicitly set control_files since I was not able to determine how it could be done automatically.

set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'

If control_files is not set, then the duplicate will restore the controlfiles to their original locations.

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
output file name=/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl
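
One way to avoid typing the converted names by hand is to generate the clause from the source database. This is only a sketch, assuming the same top-level directory substitutions used in this post:

select 'set control_files=''' ||
       replace(replace(
         listagg(name, ''',''') within group (order by name),
         '/u01/app/oracle/oradata', '/u01/oradata'),
         '/u01/app/oracle/fast_recovery_area', '/u01/fast_recovery_area')
       || '''' as set_clause
from v$controlfile;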

STEPS:

================================================================================
### SOURCE: Backup Database
================================================================================

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo logs locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_1_gqsq2mvy_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsq2myy_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_2_gqsq2n1o_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsq2n3g_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_3_gqsq2n50_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsq2nql_.log

6 rows selected.

SQL>

--------------------------------------------------
### Backup database.
--------------------------------------------------

[oracle@ol741 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol741 EMU]$ rman @ backup.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:06 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_EMU_level0.log
2> set echo on
3> connect target;
4> show all;
5> set command id to "BACKUP_EMU";
6> run {
7> allocate channel d1 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
8> allocate channel d2 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
9> allocate channel d3 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
10> allocate channel d4 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
11> allocate channel d5 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
12> backup as compressed backupset incremental level 0 check logical database filesperset 1 tag="EMU"
13> plus archivelog filesperset 8 tag="EMU"
14> ;
15> }
16> alter database backup controlfile to trace as '/media/shared_storage/rman_backup/EMU/cf_@.sql' REUSE RESETLOGS;
17> create pfile='/media/shared_storage/rman_backup/EMU/init@.ora' from spfile;
18> list backup summary tag="EMU";
19> list backup of spfile tag="EMU";
20> list backup of controlfile tag="EMU";
21> report schema;
22> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol741 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:44 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838062773)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_system_gqsq2qw4_.dbf
2    550      SYSAUX               NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_sysaux_gqsq30xo_.dbf
3    265      UNDOTBS1             YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_undotbs1_gqsq3875_.dbf
4    5        USERS                NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_users_gqsq405f_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/app/oracle/oradata/EMU/datafile/o1_mf_temp_gqsq3c3d_.tmp

RMAN> exit

Recovery Manager complete.
[oracle@ol741 EMU]$


================================================================================
### TARGET: Restore Database
================================================================================

--------------------------------------------------
### Create pfile.
--------------------------------------------------

[oracle@ol742 dbs]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/dbs

[oracle@ol742 dbs]$ cat initemu.ora
*.db_name='emu'
[oracle@ol742 dbs]$

--------------------------------------------------
### Startup nomount.
--------------------------------------------------

[oracle@ol742 EMU]$ . oraenv << startup nomount;
ORACLE instance started.

Total System Global Area  234881024 bytes
Fixed Size                  2922904 bytes
Variable Size             176162408 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
SQL>
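
For reference, a minimal sketch of the equivalent individual steps (the collapsed one-liner above hides them; the emu answer to oraenv and the "/ as sysdba" login are assumptions):

[oracle@ol742 EMU]$ . oraenv <<< emu
[oracle@ol742 EMU]$ sqlplus / as sysdba
SQL> startup nomount;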

--------------------------------------------------
### Create new directories.
--------------------------------------------------

[oracle@ol742 EMU]$ mkdir -p /u01/oradata/EMU/controlfile/
[oracle@ol742 EMU]$ mkdir -p /u01/fast_recovery_area/EMU/controlfile

--------------------------------------------------
### Duplicate database.
--------------------------------------------------

[oracle@ol742 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol742 EMU]$ rman @ dup_omf_bks.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:37:51 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_duplicate_database.log
2> set echo on
3> connect auxiliary *
4> show all;
5> set command id to "DUPLICATE_EMU";
6> DUPLICATE DATABASE TO emu
7>   SPFILE
8>   set db_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
9>   set log_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
10>   set db_create_file_dest='/u01/oradata'
11>   set db_recovery_file_dest='/u01/fast_recovery_area'
12>   set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'
13>   BACKUP LOCATION '/media/shared_storage/rman_backup/EMU'
14>   NOFILENAMECHECK
15> ;
16> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol742 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:40:33 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838070815)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/oradata/EMU/datafile/o1_mf_system_gqsyvo1y_.dbf
2    550      SYSAUX               NO      /u01/oradata/EMU/datafile/o1_mf_sysaux_gqsywg40_.dbf
3    265      UNDOTBS1             YES     /u01/oradata/EMU/datafile/o1_mf_undotbs1_gqsywx7n_.dbf
4    5        USERS                NO      /u01/oradata/EMU/datafile/o1_mf_users_gqsyxd90_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/oradata/EMU/datafile/o1_mf_temp_gqsyy469_.tmp

RMAN> exit


Recovery Manager complete.
[oracle@ol742 EMU]$

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo logs locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/oradata/EMU/onlinelog/o1_mf_3_gqsyy2kh_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsyy2lx_.log
/u01/oradata/EMU/onlinelog/o1_mf_2_gqsyy165_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsyy17p_.log
/u01/oradata/EMU/onlinelog/o1_mf_1_gqsyxztc_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsyxzw3_.log

6 rows selected.

SQL>

Logs:

rman_EMU_level0.log

rman_duplicate_database.log

 

Be Careful When Subscribing To Oracle Learning Subscription

Subscribing to Oracle Learning Subscription seems good in theory but bad in reality.

Oracle support informed me: “Oracle University’s policy regarding Learning Subscription courseware materials is that they cannot be downloaded by customers.”

How convenient of Oracle, as this info should have been stated at https://education.oracle.com/oracle-learning-subscriptions.

I took for granted that materials could be downloaded, since they are available for download with all other training formats.

This seems to be a deceptive process, as the information is not disclosed up front; by the time one has subscribed and discovered the lack of full disclosure, it may be too late.

Hopefully, this will help others avoid the same mistake.

 

srvctl config all

I learned something new today and am not sure if it’s a new feature.

It seems a lot easier to gather the clusterware configuration using one command.

It works with srvctl version 18.0.0.0.0 or higher.
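
If you manage mixed versions, a quick guard is possible. This is only a sketch; the awk field position assumes the "srvctl version: x.y" output format shown in the demos below:

v=$(srvctl -version | awk '{print $3}')
if [ "${v%%.*}" -ge 18 ]; then
    srvctl config all
else
    echo "srvctl $v does not support 'config all'"
fi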

19c

[oracle@ol7-19-rac2 ~]$ echo $ORACLE_HOME
/u01/app/19.0.0/grid

[oracle@ol7-19-rac2 ~]$ srvctl -version
srvctl version: 19.0.0.0.0

[oracle@ol7-19-rac2 ~]$ srvctl config all

Oracle Clusterware configuration details
========================================

Oracle Clusterware basic information
------------------------------------
  Operating system          Linux
  Name                      ol7-19-cluster
  Class                     STANDALONE
  Cluster nodes             ol7-19-rac1, ol7-19-rac2
  Version                   19.0.0.0.0
  Groups                    SYSOPER: SYSASM:dba SYSRAC:dba SYSDBA:dba
  OCR locations             +DATA
  Voting disk locations     DATA
  Voting disk file paths    /dev/oracleasm/asm-disk3

Cluster network configuration details
-------------------------------------
  Interface name  Type  Subnet           Classification
  eth1            IPV4  192.168.56.0/24  PUBLIC
  eth2            IPV4  192.168.1.0/24   PRIVATE, ASM

SCAN configuration details
--------------------------

SCAN "ol7-19-scan" details
++++++++++++++++++++++++++
  Name                ol7-19-scan
  IPv4 subnet         192.168.56.0/24
  DHCP server type    static
  End points          TCP:1521

  SCAN listeners
  --------------
  Name              VIP address
  LISTENER_SCAN1    192.168.56.105
  LISTENER_SCAN2    192.168.56.106
  LISTENER_SCAN3    192.168.56.107


ASM configuration details
-------------------------
  Mode             remote
  Password file    +DATA
  SPFILE           +DATA

  ASM disk group details
  ++++++++++++++++++++++
  Name  Redundancy
  DATA  EXTERN

Database configuration details
==============================

Database "ora.cdbrac.db" details
--------------------------------
  Name                ora.cdbrac.db
  Type                RAC
  Version             19.0.0.0.0
  Role                PRIMARY
  Management          AUTOMATIC
  policy
  SPFILE              +DATA
  Password file       +DATA
  Groups              OSDBA:dba OSOPER:oper OSBACKUP:dba OSDG:dba OSKM:dba OSRAC:dba
  Oracle home         /u01/app/oracle/product/19.0.0/dbhome_1
[oracle@ol7-19-rac2 ~]$

18c

[oracle@rac1 Desktop]$ srvctl -version
srvctl version: 18.0.0.0.0

[oracle@rac1 Desktop]$ srvctl config all

Oracle Clusterware configuration details                                        
========================================                                        

Oracle Clusterware basic information                                            
------------------------------------                                            
  Operating system         Linux                                           
  Name                     scan                                            
  Class                    STANDALONE                                      
  Cluster nodes            rac1, rac2                                      
  Version                  18.0.0.0.0                                      
  Groups                   SYSOPER:dba SYSASM:dba SYSRAC:dba SYSDBA:dba    
  Cluster home             /u01/app/18.0.0/grid                            
  OCR locations            +CRS                                            
  Voting disk locations    /dev/asm-disk8, /dev/asm-disk9, /dev/asm-disk7  

Cluster network configuration details                                           
-------------------------------------                                           
  Interface name  Type  Subnet           Classification  
  eth1            IPV4  10.1.1.0/24      PRIVATE, ASM    
  eth0            IPV4  192.168.11.0/24  PUBLIC          

SCAN configuration details                                                      
--------------------------                                                      

SCAN "scan.localdomain" details                                                 
+++++++++++++++++++++++++++++++                                                 
  Name                scan.localdomain  
  IPv4 subnet         192.168.11.0/24   
  DHCP server type    static            
  End points          TCP:1521          

  SCAN listeners                                                                
  --------------                                                                
  Name        VIP address    
  LISTENER    192.168.11.60  


ASM configuration details                                                       
-------------------------                                                       
  Mode             remote  
  Password file    +RAC    
  SPFILE           +RAC    

  ASM disk group details                                                        
  ++++++++++++++++++++++                                                        
  Name  Redundancy  
  CRS   NORMAL      
  DATA  EXTERN      
  FRA   EXTERN      
  RAC   EXTERN      

Database configuration details                                                  
==============================                                                  

Database "ora.uptst.db" details                                                 
-------------------------------                                                 
  Name                ora.uptst.db                                                   
  Type                RAC                                                            
  Version             18.0.0.0.0                                                     
  Role                PRIMARY                                                        
  Management          AUTOMATIC                                                      
  policy                                                                             
  SPFILE              +DATA                                                          
  Password file       +DATA                                                          
  Groups              OSDBA:dba OSOPER:dba OSBACKUP:dba OSDG:dba OSKM:dba OSRAC:dba  
  Oracle home         /u01/app/oracle/product/18.0.0/db_home1                        

Database "ora.uptst2.db" details                                                
--------------------------------                                                
  Name                 ora.uptst2.db                                        
  Type                 RAC                                                  
  Version              12.1.0.2.0                                           
  Role                 PRIMARY                                              
  Management policy    AUTOMATIC                                            
  SPFILE               +DATA                                                
  Password file        +DATA                                                
  Groups               OSDBA:dba OSOPER:dba OSBACKUP:dba OSDG:dba OSKM:dba  
  Oracle home          /u01/app/oracle/product/12.1.0.2_1                   
[oracle@rac1 Desktop]$ 

Cloning Oracle Homes in 19c

You may find conflicting information in Oracle’s documentation: Cloning an Oracle Database Home shows how to use clone.pl, while the Database Upgrade Guide 19c shows Deprecation of the clone.pl Script.

To clone Oracle software, use -createGoldImage and then install the software as usual.

DEMO for DB:

Source: /u01/app/oracle/product/19.0.0/dbhome_1
Target: /u01/app/oracle/product/19.0.0/dbhome_2

[oracle@ol7-19-rac1 ~]$ ls -l /u01/app/oracle/product/19.0.0/dbhome_2/
total 0

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/runInstaller -createGoldImage -destinationLocation /u01/app/oracle/product/19.0.0/dbhome_2 -silent
Launching Oracle Database Setup Wizard...

[oracle@ol7-19-rac1 ~]$ ls -l /u01/app/oracle/product/19.0.0/dbhome_2/
total 3069584
-rw-r--r--. 1 oracle oinstall 3143250100 Oct 29 13:09 db_home_2019-10-29_12-59-52PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/oracle/product/19.0.0/dbhome_2/
[oracle@ol7-19-rac1 dbhome_2]$ unzip -qo db_home_2019-10-29_12-59-52PM.zip

[oracle@ol7-19-rac1 dbhome_2]$ ls -ld *
drwxr-xr-x. 2 oracle oinstall 102 Oct 2 00:06 addnode
drwxr-xr-x. 3 oracle oinstall 20 Oct 2 00:35 admin
drwxr-xr-x. 6 oracle oinstall 4096 Apr 17 2019 apex
drwxr-xr-x. 9 oracle oinstall 93 Apr 17 2019 assistants
drwxr-xr-x. 2 oracle oinstall 8192 Oct 29 13:00 bin
drwxr-xr-x. 4 oracle oinstall 87 Oct 2 00:06 clone
drwxr-xr-x. 6 oracle oinstall 55 Apr 17 2019 crs
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 css
drwxr-xr-x. 11 oracle oinstall 4096 Apr 17 2019 ctx
drwxr-xr-x. 7 oracle oinstall 71 Apr 17 2019 cv
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 data
-rw-r--r--. 1 oracle oinstall 3143250100 Oct 29 13:09 db_home_2019-10-29_12-59-52PM.zip
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 dbjava
drwxr-xr-x. 2 oracle oinstall 66 Oct 29 12:35 dbs
drwxr-xr-x. 5 oracle oinstall 4096 Oct 2 00:06 deinstall
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 demo
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 diagnostics
drwxr-xr-x. 13 oracle oinstall 4096 Apr 17 2019 dmu
drwxr-xr-x. 4 oracle oinstall 30 Apr 17 2019 drdaas
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 dv
-rw-r--r--. 1 oracle oinstall 852 Aug 18 2015 env.ora
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 has
drwxr-xr-x. 5 oracle oinstall 41 Apr 17 2019 hs
drwxr-xr-x. 10 oracle oinstall 4096 Oct 29 13:08 install
drwxr-xr-x. 2 oracle oinstall 29 Apr 17 2019 instantclient
drwxr-x---. 13 oracle oinstall 4096 Oct 29 13:00 inventory
drwxr-xr-x. 8 oracle oinstall 82 Oct 29 13:00 javavm
drwxr-xr-x. 3 oracle oinstall 35 Apr 17 2019 jdbc
drwxr-xr-x. 6 oracle oinstall 4096 Oct 29 13:00 jdk
drwxr-xr-x. 2 oracle oinstall 4096 Oct 8 20:23 jlib
drwxr-xr-x. 10 oracle oinstall 4096 Apr 17 2019 ldap
drwxr-xr-x. 4 oracle oinstall 12288 Oct 29 13:00 lib
drwxr-x---. 2 oracle oinstall 6 Oct 2 00:10 log
drwxr-xr-x. 9 oracle oinstall 98 Apr 17 2019 md
drwxr-xr-x. 4 oracle oinstall 31 Apr 17 2019 mgw
drwxr-xr-x. 10 oracle oinstall 4096 Oct 29 13:00 network
drwxr-xr-x. 5 oracle oinstall 46 Apr 17 2019 nls
drwxr-xr-x. 8 oracle oinstall 101 Apr 17 2019 odbc
drwxr-xr-x. 5 oracle oinstall 42 Apr 17 2019 olap
drwxr-x---. 14 oracle oinstall 4096 Oct 2 00:06 OPatch
drwxr-xr-x. 7 oracle oinstall 65 Apr 17 2019 opmn
drwxr-xr-x. 4 oracle oinstall 34 Apr 17 2019 oracore
drwxr-xr-x. 6 oracle oinstall 52 Apr 17 2019 ord
drwxr-xr-x. 4 oracle oinstall 66 Apr 17 2019 ords
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 oss
drwxr-xr-x. 8 oracle oinstall 4096 Oct 2 00:06 oui
drwxr-xr-x. 4 oracle oinstall 33 Apr 17 2019 owm
drwxr-xr-x. 5 oracle oinstall 39 Apr 17 2019 perl
drwxr-xr-x. 6 oracle oinstall 78 Apr 17 2019 plsql
drwxr-xr-x. 6 oracle oinstall 56 Oct 29 13:00 precomp
drwxr-xr-x. 2 oracle oinstall 26 Apr 17 2019 QOpatch
drwxr-xr-x. 5 oracle oinstall 52 Apr 17 2019 R
drwxr-xr-x. 4 oracle oinstall 29 Apr 17 2019 racg
drwxr-xr-x. 15 oracle oinstall 4096 Oct 29 13:00 rdbms
drwxr-xr-x. 3 oracle oinstall 21 Apr 17 2019 relnotes
-rwx------. 1 oracle oinstall 549 Oct 2 00:06 root.sh
-rwx------. 1 oracle oinstall 786 Apr 17 2019 root.sh.old
-rw-r-----. 1 oracle oinstall 10 Apr 17 2019 root.sh.old.1
-rwx------. 1 oracle oinstall 638 Apr 18 2019 root.sh.old.2
-rw-r-----. 1 oracle oinstall 10 Apr 17 2019 root.sh.old.3
-rwxr-x---. 1 oracle oinstall 1783 Mar 8 2017 runInstaller
-rw-r--r--. 1 oracle oinstall 2927 Oct 14 2016 schagent.conf
drwxr-xr-x. 5 oracle oinstall 4096 Apr 17 2019 sdk
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 slax
drwxr-xr-x. 4 oracle oinstall 41 Apr 17 2019 sqldeveloper
drwxr-xr-x. 3 oracle oinstall 17 Apr 17 2019 sqlj
drwxr-xr-x. 5 oracle oinstall 4096 Oct 8 20:22 sqlpatch
drwxr-xr-x. 6 oracle oinstall 53 Oct 2 00:05 sqlplus
drwxr-xr-x. 6 oracle oinstall 54 Apr 17 2019 srvm
drwxr-xr-x. 5 oracle oinstall 45 Oct 29 13:00 suptools
drwxr-xr-x. 3 oracle oinstall 35 Apr 17 2019 ucp
drwxr-xr-x. 4 oracle oinstall 31 Apr 17 2019 usm
drwxr-xr-x. 2 oracle oinstall 33 Apr 17 2019 utl
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 wwg
drwxr-x---. 7 oracle oinstall 69 Apr 17 2019 xdk
[oracle@ol7-19-rac1 dbhome_2]$
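
For completeness, a sketch of using the zip on a target server; the hostname and the /path/to staging location are placeholders, and the db_install.rsp location and its values should be verified for your environment:

[oracle@target ~]$ mkdir -p /u01/app/oracle/product/19.0.0/dbhome_2
[oracle@target ~]$ cd /u01/app/oracle/product/19.0.0/dbhome_2
[oracle@target dbhome_2]$ unzip -qo /path/to/db_home_2019-10-29_12-59-52PM.zip
[oracle@target dbhome_2]$ ./runInstaller -silent -responseFile $PWD/install/response/db_install.rsp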

DEMO for GI:

Source: /u01/app/19.0.0/grid
Target: /u01/app/19.0.0/grid5

[root@ol7-19-rac1 ~]# mkdir -p /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# chmod 775 /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# chown oracle:oinstall /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# ls -ld /u01/app/19.0.0/grid5/
drwxrwxr-x. 2 oracle oinstall 6 Oct 29 13:15 /u01/app/19.0.0/grid5/

[oracle@ol7-19-rac1 ~]$ echo $ORACLE_HOME
/u01/app/19.0.0/grid
[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...
[oracle@ol7-19-rac1 ~]$

FAILED:

[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ grep -A1 "^WARNING" gridSetupActions2019-10-29_01-20-38PM.log
WARNING:  [Oct 29, 2019 1:20:54 PM] Validation disabled for the state init
INFO:  [Oct 29, 2019 1:20:54 PM] Completed validating state <init>
--
WARNING:  [Oct 29, 2019 1:20:55 PM] Command to get the files from '/u01/app/19.0.0/grid' not owned by 'oracle' failed.
WARNING:  [Oct 29, 2019 1:20:55 PM] Following files from the source home are not owned by the current user: [/u01/app/19.0.0/grid/acfs, /u01/app/19.0.0/grid/acfs/tunables, /u01/app/19.0.0/grid/acfs/tunables/acfstunables]
INFO:  [Oct 29, 2019 1:20:55 PM] Getting the last existing parent of: /u01/app/19.0.0/grid5
--
WARNING:  [Oct 29, 2019 1:20:57 PM] Files list is null or empty.
INFO:  [Oct 29, 2019 1:20:57 PM] Completed validating state <createGoldImage>
--
WARNING:  [Oct 29, 2019 1:20:58 PM] Following files are not readable: [/u01/app/19.0.0/grid/suptools/orachk/orachk, /u01/app/19.0.0/grid/log/procwatcher/prw.sh, /u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac1, /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora, /u01/app/19.0.0/grid/crf/admin/run/crfmond, /u01/app/19.0.0/grid/crf/admin/run/crflogd]
INFO:  [Oct 29, 2019 1:21:00 PM] Verifying whether Central Inventory is locked by any other OUI session...
--
WARNING:  [Oct 29, 2019 1:21:05 PM] Could not create symlink: /tmp/GridSetupActions2019-10-29_01-20-38PM/tempHome_1572355263979/log/procwatcher/prw.sh.
Refer associated stacktrace #oracle.install.ivw.common.driver.job.CreateGoldImageJob:7059
--
WARNING:  [Oct 29, 2019 1:21:34 PM] Could not create symlink: /tmp/GridSetupActions2019-10-29_01-20-38PM/tempHome_1572355294593/log/procwatcher/prw.sh.
Refer associated stacktrace #oracle.install.ivw.common.driver.job.CreateGoldImageJob:7118


[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ ll /u01/app/19.0.0/grid/acfs
total 0
drwxr-xr-x. 2 root root 26 Oct  8 20:33 tunables


[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ grep -i severe gridSetupActions2019-10-29_01-20-38PM.log
SEVERE: [Oct 29, 2019 1:21:11 PM] [FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-29_01-20-38PM for more details.
SEVERE: [Oct 29, 2019 1:21:40 PM] [FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-29_01-20-38PM for more details.
[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$

[oracle@ol7-19-rac1 ~]$

RESEARCH:

Bug 29220079 - Error INS-32700 Creating a GI Gold Image (Doc ID 29220079.8)	
Versions confirmed as being affected: 19.3.0	
The fix for 29220079 is first included in: 
19.3.0.0.190416 (Apr 2019) Database Release Update (DB RU) and 
20.1.0

It should have been fixed, but it does not seem like it.

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29851014;ACFS RELEASE UPDATE 19.4.0.0.0 (29851014)
29850993;OCW RELEASE UPDATE 19.4.0.0.0 (29850993)
29834717;Database Release Update : 19.4.0.0.190716 (29834717)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[oracle@ol7-19-rac1 ~]$

You might have to create an SR :=(

UPDATE: Thanks to https://lonedba.wordpress.com/

[oracle@ol7-19-rac1 GridSetupActions2019-10-29_03-06-03PM]$ grep "Permission denied" gridSetupActions2019-10-29_03-06-03PM.log
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/prw.sh’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac1’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/prwinit.ora’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/crf/admin/run/crfmond’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/crf/admin/run/crflogd’: Permission denied
[oracle@ol7-19-rac1 GridSetupActions2019-10-29_03-06-03PM]$

[oracle@ol7-19-rac1 ~]$ echo $ORACLE_HOME; cd $ORACLE_HOME/log
/u01/app/19.0.0/grid

[oracle@ol7-19-rac1 log]$ ls -l
total 4
drwxr-x---.  4 oracle oinstall   57 Oct  1 23:57 diag
drwxr-xr-t. 20 root   oinstall 4096 Oct  1 23:55 ol7-19-rac1
drwxr--r--.  3 root   root       66 Oct 25 15:10 procwatcher

[root@ol7-19-rac1 log]# chmod 775 -R ol7-19-rac1/ procwatcher/
[root@ol7-19-rac1 log]# ls -l
total 4
drwxr-xr-x.  2 oracle oinstall    6 Oct  1 23:44 crs
drwxr-x---.  4 oracle oinstall   57 Oct  1 23:50 diag
drwxrwxr-x. 20 root   oinstall 4096 Oct  1 23:47 ol7-19-rac1
drwxrwxr-x.  3 root   root       66 Oct 25 15:08 procwatcher
[root@ol7-19-rac1 log]#

[oracle@ol7-19-rac1 ~]$ . oraenv <<< +ASM1
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-29_04-36-47PM.zip

[oracle@ol7-19-rac1 ~]$ ll /u01/app/19.0.0/grid5/*
-rw-r--r--. 1 oracle oinstall 4426495995 Oct 29 16:46 /u01/app/19.0.0/grid5/grid_home_2019-10-29_04-36-47PM.zip
[oracle@ol7-19-rac1 ~]$
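
The grid image would be used the same way on a target server: unzip it into the new grid home and run gridSetup.sh from there. A sketch only, with a placeholder host and staging path; configuration options and the root scripts still follow as usual:

[oracle@target ~]$ unzip -qo /path/to/grid_home_2019-10-29_04-36-47PM.zip -d /u01/app/19.0.0/grid5
[oracle@target ~]$ /u01/app/19.0.0/grid5/gridSetup.sh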

Cloning Oracle Homes in 19c Part 2

You didn’t think there was going to be a part 2, did you?

$ORACLE_HOME/log is not included by createGoldImage, which makes perfect sense (as I was discussing on Twitter, there is no point in having garbage in a gold image), but why is it being read at all?

For ol7-19-rac2, the permissions for $GRID_HOME/log have not been changed; hence, createGoldImage failed.

-exclFiles $ORACLE_HOME/.patch_storage (succeeded)
-exclFiles $ORACLE_HOME/log (failed)

[oracle@ol7-19-rac1 ~]$ ls /u01/app/19.0.0/grid/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties
[oracle@ol7-19-rac1 ~]$
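
One reason to consider excluding .patch_storage is its size; a quick check before deciding:

[oracle@ol7-19-rac1 ~]$ du -sh $ORACLE_HOME/.patch_storage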

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -exclFiles $ORACLE_HOME/.patch_storage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-30_12-05-43PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/19.0.0/grid5/
[oracle@ol7-19-rac1 grid5]$ unzip -qo grid_home_2019-10-30_12-05-43PM.zip

[oracle@ol7-19-rac1 grid5]$ ls .patch_storage
ls: cannot access .patch_storage: No such file or directory

[oracle@ol7-19-rac1 grid5]$ ls -l
total 3203396
drwxr-xr-x.  3 oracle oinstall         22 Oct  8 20:33 acfs
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsccm
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsccreg
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfscm
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsiob
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsrd
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 acfsrm
drwxr-xr-x.  2 oracle oinstall        102 Oct  1 23:45 addnode
drwxr-xr-x.  3 oracle oinstall         18 Oct  1 23:47 advmccb
drwxr-xr-x. 10 oracle oinstall       4096 Apr 17  2019 assistants
drwxr-xr-x.  2 oracle oinstall      12288 Oct 30 12:05 bin
drwxr-x---.  3 oracle oinstall         18 Oct  1 23:47 cdp
drwxr-x---.  4 oracle oinstall         31 Oct  1 23:47 cha
drwxr-xr-x.  4 oracle oinstall         87 Oct  1 23:45 clone
drwxr-xr-x. 12 oracle oinstall       4096 Oct 30 12:05 crs
drwx--x--x.  5 oracle oinstall         41 Oct  1 23:47 css
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 ctss
drwxrwxr-x.  7 oracle oinstall         71 Apr 17  2019 cv
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 dbjava
drwxr-xr-x.  2 oracle oinstall         39 Oct 30 12:05 dbs
drwxr-xr-x.  5 oracle oinstall       4096 Oct  1 23:45 deinstall
drwxr-xr-x.  3 oracle oinstall         20 Apr 17  2019 demo
drwxr-xr-x.  3 oracle oinstall         20 Apr 17  2019 diagnostics
drwxr-xr-x. 13 oracle oinstall       4096 Apr 17  2019 dmu
-rw-r--r--.  1 oracle oinstall        852 Aug 18  2015 env.ora
drwxr-x---.  6 oracle oinstall         53 Oct 30 12:05 evm
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 gipc
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 gnsd
drwxr-x---.  5 oracle oinstall         49 Oct 30 12:05 gpnp
-rw-r--r--.  1 oracle oinstall 3280127704 Oct 30 12:12 grid_home_2019-10-30_12-05-43PM.zip
-rwxr-x---.  1 oracle oinstall       3294 Mar  8  2017 gridSetup.sh
drwxr-xr-x.  4 oracle oinstall         32 Apr 17  2019 has
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 hs
drwxr-xr-x. 11 oracle oinstall       4096 Oct 30 12:12 install
drwxr-xr-x.  2 oracle oinstall         29 Apr 17  2019 instantclient
drwxr-x---. 13 oracle oinstall       4096 Oct 30 12:05 inventory
drwxr-xr-x.  8 oracle oinstall         82 Oct 30 12:05 javavm
drwxr-xr-x.  3 oracle oinstall         35 Apr 17  2019 jdbc
drwxr-xr-x.  6 oracle oinstall       4096 Oct 30 12:05 jdk
drwxr-xr-x.  2 oracle oinstall       8192 Oct  8 20:28 jlib
drwxr-xr-x. 10 oracle oinstall       4096 Apr 17  2019 ldap
drwxr-xr-x.  4 oracle oinstall      12288 Oct 30 12:05 lib
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 md
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 mdns
drwxr-xr-x. 10 oracle oinstall       4096 Oct 30 12:05 network
drwxr-xr-x.  5 oracle oinstall         46 Apr 17  2019 nls
drwxr-x---.  2 oracle oinstall          6 Oct  1 23:47 ohasd
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 ologgerd
drwxr-x---. 14 oracle oinstall       4096 Oct  1 23:45 OPatch
drwxr-xr-x.  8 oracle oinstall         77 Apr 17  2019 opmn
drwxr-xr-x.  4 oracle oinstall         34 Apr 17  2019 oracore
drwxr-xr-x.  6 oracle oinstall         52 Apr 17  2019 ord
drwxr-xr-x.  4 oracle oinstall         66 Apr 17  2019 ords
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 oss
drwxr-xr-x.  2 oracle oinstall          6 Oct  1 23:47 osysmond
drwxr-xr-x.  8 oracle oinstall       4096 Oct  1 23:45 oui
drwxr-xr-x.  4 oracle oinstall         33 Apr 17  2019 owm
drwxr-xr-x.  5 oracle oinstall         39 Apr 17  2019 perl
drwxr-xr-x.  6 oracle oinstall         78 Apr 17  2019 plsql
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 precomp
drwxr-xr-x.  2 oracle oinstall         26 Apr 17  2019 QOpatch
drwxr-xr-x.  5 oracle oinstall         42 Apr 17  2019 qos
drwxr-xr-x.  5 oracle oinstall         56 Oct 30 12:05 racg
drwxr-xr-x. 15 oracle oinstall       4096 Oct 30 12:05 rdbms
drwxr-xr-x.  3 oracle oinstall         21 Apr 17  2019 relnotes
drwxr-xr-x.  7 oracle oinstall        102 Apr 17  2019 rhp
-rwxr-xr-x.  1 oracle oinstall        405 Oct  1 23:45 root.sh
-rwx------.  1 oracle oinstall        490 Apr 17  2019 root.sh.old
-rw-r-----.  1 oracle oinstall         10 Apr 17  2019 root.sh.old.1
-rwx------.  1 oracle oinstall        405 Apr 18  2019 root.sh.old.2
-rw-r-----.  1 oracle oinstall         10 Apr 17  2019 root.sh.old.3
-rwxr-xr-x.  1 oracle oinstall        414 Oct  1 23:45 rootupgrade.sh
-rwxr-x---.  1 oracle oinstall        628 Sep  3  2015 runcluvfy.sh
drwxr-xr-x.  5 oracle oinstall       4096 Apr 17  2019 sdk
drwxr-xr-x.  3 oracle oinstall         18 Apr 17  2019 slax
drwxr-xr-x.  5 oracle oinstall       4096 Oct  8 20:26 sqlpatch
drwxr-xr-x.  6 oracle oinstall         53 Oct  1 23:44 sqlplus
drwxr-xr-x.  7 oracle oinstall         66 Oct 30 12:05 srvm
drwxr-x---.  5 oracle oinstall         63 Oct 30 12:05 suptools
drwxr-xr-x.  4 oracle oinstall         29 Apr 17  2019 tomcat
drwxr-xr-x.  3 oracle oinstall         35 Apr 17  2019 ucp
drwxr-xr-x.  7 oracle oinstall         71 Apr 17  2019 usm
drwxr-xr-x.  2 oracle oinstall         33 Apr 17  2019 utl
-rw-r-----.  1 oracle oinstall        500 Feb  6  2013 welcome.html
drwxr-xr-x.  3 oracle oinstall         18 Apr 17  2019 wlm
drwxr-xr-x.  3 oracle oinstall         19 Apr 17  2019 wwg
drwxr-xr-x.  5 oracle oinstall       4096 Oct  8 20:35 xag
drwxr-x---.  6 oracle oinstall         58 Apr 17  2019 xdk

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/log/
crs  diag  ol7-19-rac1  procwatcher

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/log/*
ls: cannot access /u01/app/19.0.0/grid5/log/*: No such file or directory
[oracle@ol7-19-rac1 grid5]$

================================================================================

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-30_12-23-31PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/19.0.0/grid5/
[oracle@ol7-19-rac1 grid5]$ unzip -qo grid_home_2019-10-30_12-23-31PM.zip

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/.patch_storage/
29401763_Apr_11_2019_22_26_25  29585399_Apr_9_2019_19_12_47   29851014_Jul_5_2019_01_15_35    NApply
29517242_Apr_17_2019_23_27_10  29834717_Jul_10_2019_02_09_26  interim_inventory.txt           record_inventory.txt
29517247_Apr_1_2019_15_08_20   29850993_Jul_5_2019_05_08_35   LatestOPatchSession.properties

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid/log/
crs  diag  ol7-19-rac1  procwatcher

[oracle@ol7-19-rac1 grid5]$ ls /u01/app/19.0.0/grid5/log/*
ls: cannot access /u01/app/19.0.0/grid5/log/*: No such file or directory
[oracle@ol7-19-rac1 grid5]$

================================================================================

[oracle@ol7-19-rac2 ~]$ . oraenv <<< +ASM2
ORACLE_SID = [cdbrac2] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-19-rac2 ~]$ ls $ORACLE_HOME/log/*
/u01/app/19.0.0/grid/log/diag:
adrci_dir.mif  asmtool  clients

/u01/app/19.0.0/grid/log/ol7-19-rac2:
acfs  admin  afd  chad  client  crsd  cssd  ctssd  diskmon  evmd  gipcd  gnsd  gpnpd  mdnsd  ohasd  racg  srvm  xag

/u01/app/19.0.0/grid/log/procwatcher:
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prw.sh: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac2: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora.org: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora: Permission denied
ls: cannot access /u01/app/19.0.0/grid/log/procwatcher/prw_ol7-19-rac2.log: Permission denied
prwinit.ora  prwinit.ora.org  prw_ol7-19-rac2.log  prw.sh  PRW_SYS_ol7-19-rac2

[oracle@ol7-19-rac2 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -exclFiles $ORACLE_HOME/log -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-30_12-40-37PM for more details.
Setup failed.
[oracle@ol7-19-rac2 ~]$

What’s Taking So Long For Autoupgrade

Directory for autoupgrade/log: $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log, where $ORACLE_UNQNAME=db_unique_name

Create upgrade.config as shown: $ORACLE_BASE/admin/$ORACLE_UNQNAME/${ORACLE_UNQNAME}_upgrade.config

global.autoupg_log_dir=$ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log
upg1.log_dir=$ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log
upg1.dbname=$ORACLE_UNQNAME
upg1.sid=$ORACLE_SID
upg1.start_time=NOW
upg1.source_home=/oracle/app/product/11.2/dbhome_1
upg1.target_home=/oracle/app/product/12.2/dbhome_1
upg1.upgrade_node=localhost
upg1.target_version=12.2
upg1.timezone_upg=no
upg1.restoration=yes
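
With the config file in place, autoupgrade itself is driven from the jar. This is only a sketch: it assumes the autoupgrade.jar you downloaded (or the one under the target home’s rdbms/admin), a suitable Java on the path, and that the $VARIABLES above are expanded to real paths in the actual config file.

$ java -jar autoupgrade.jar -config ${ORACLE_UNQNAME}_upgrade.config -mode analyze
$ java -jar autoupgrade.jar -config ${ORACLE_UNQNAME}_upgrade.config -mode deploy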

Let’s take a look at the summary for autoupgrade job 102.
Find the autoupgrade directories.

$ export JOBNO=102
$ ls -l $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/*/*/*
-rwx------    1 oracle   dba           73349 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/autoupgrade_20191104.log
-rwx------    1 oracle   dba             233 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/autoupgrade_20191104_user.log
-rwx------    1 oracle   dba               0 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/autoupgrade_err.log
-rwx------    1 oracle   dba           71390 Nov 04 13:07 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/autoupgrade_20191104.log
-rwx------    1 oracle   dba             233 Nov 04 13:06 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/autoupgrade_20191104_user.log
-rwx------    1 oracle   dba               0 Nov 04 13:06 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/autoupgrade_err.log
-rwx------    1 oracle   dba          891207 Nov 04 16:01 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/autoupgrade_20191104.log
-rwx------    1 oracle   dba           12371 Nov 04 16:01 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/autoupgrade_20191104_user.log
-rwx------    1 oracle   dba             245 Nov 04 15:50 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/autoupgrade_err.log
-rwx------    1 oracle   dba            1118 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/after_upgrade_pfile_ORACLE_SID.ora
-rwx------    1 oracle   dba               0 Nov 04 14:05 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID.restart
-rwx------    1 oracle   dba            2236 Nov 04 15:48 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_autocompile.sql
-rwx------    1 oracle   dba           16805 Nov 04 13:52 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_catnoamd.sql
-rwx------    1 oracle   dba            3685 Nov 04 13:56 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_catnoexf.sql
-rwx------    1 oracle   dba           19753 Nov 04 13:56 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_catnorul.sql
-rwx------    1 oracle   dba           20740 Nov 04 13:54 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_emremove.sql
-rwx------    1 oracle   dba             883 Nov 04 15:48 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_objcompare.sql
-rwx------    1 oracle   dba            1118 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/before_upgrade_pfile_ORACLE_SID.ora
-rwx------    1 oracle   dba            1168 Nov 04 14:00 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/during_upgrade_pfile_ORACLE_SID.ora

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/prechecks:
total 376
-rwx------    1 oracle   dba            2781 Nov 04 12:46 ORACLE_SID_checklist.cfg
-rwx------    1 oracle   dba            9328 Nov 04 12:46 ORACLE_SID_checklist.json
-rwx------    1 oracle   dba            9962 Nov 04 12:46 ORACLE_SID_checklist.xml
-rwx------    1 oracle   dba           25980 Nov 04 12:46 ORACLE_SID_preupgrade.html
-rwx------    1 oracle   dba           10018 Nov 04 12:46 ORACLE_SID_preupgrade.log
-rwx------    1 oracle   dba          121687 Nov 04 12:46 prechecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/prechecks:
total 376
-rwx------    1 oracle   dba            2781 Nov 04 13:07 ORACLE_SID_checklist.cfg
-rwx------    1 oracle   dba            9328 Nov 04 13:07 ORACLE_SID_checklist.json
-rwx------    1 oracle   dba            9962 Nov 04 13:07 ORACLE_SID_checklist.xml
-rwx------    1 oracle   dba           25980 Nov 04 13:07 ORACLE_SID_preupgrade.html
-rwx------    1 oracle   dba           10018 Nov 04 13:07 ORACLE_SID_preupgrade.log
-rwx------    1 oracle   dba          121687 Nov 04 13:07 prechecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/dbupgrade:
total 446064
-rwx------    1 oracle   dba           13059 Nov 04 15:48 autoupgrade20191104135121ORACLE_SID.log
-rwx------    1 oracle   dba           12042 Nov 04 15:48 ORACLE_SID_autocompile20191104135121ORACLE_SID0.log
-rwx------    1 oracle   dba             551 Nov 04 15:48 ORACLE_SID_autocompile20191104135121ORACLE_SID_catcon_10617264.lst
-rwx------    1 oracle   dba       208215073 Nov 04 15:48 catupgrd20191104135121ORACLE_SID0.log
-rwx------    1 oracle   dba         7481470 Nov 04 15:46 catupgrd20191104135121ORACLE_SID1.log
-rwx------    1 oracle   dba         5527017 Nov 04 15:46 catupgrd20191104135121ORACLE_SID2.log
-rwx------    1 oracle   dba         7040784 Nov 04 15:46 catupgrd20191104135121ORACLE_SID3.log
-rwx------    1 oracle   dba             527 Nov 04 14:01 catupgrd20191104135121ORACLE_SID_catcon_17039806.lst
-rwx------    1 oracle   dba               0 Nov 04 15:23 catupgrd20191104135121ORACLE_SID_datapatch_normal.err
-rwx------    1 oracle   dba            1050 Nov 04 15:46 catupgrd20191104135121ORACLE_SID_datapatch_normal.log
-rwx------    1 oracle   dba               0 Nov 04 15:17 catupgrd20191104135121ORACLE_SID_datapatch_upgrade.err
-rwx------    1 oracle   dba             702 Nov 04 15:18 catupgrd20191104135121ORACLE_SID_datapatch_upgrade.log
-rwx------    1 oracle   dba            9877 Nov 04 15:18 during_upgrade_pfile_catctl.ora
-rwx------    1 oracle   dba           32649 Nov 04 14:01 phase.log
-rwx------    1 oracle   dba            1489 Nov 04 15:48 upg_summary.log
-rwx------    1 oracle   dba              46 Nov 04 15:48 upg_summary_report.log
-rwx------    1 oracle   dba             408 Nov 04 15:48 upg_summary_report.pl

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/drain:
total 16
-rwx------    1 oracle   dba            4952 Nov 04 14:01 drain_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/postchecks:
total 104
-rwx------    1 oracle   dba             969 Nov 04 15:50 ORACLE_SID_checklist.cfg
-rwx------    1 oracle   dba            3202 Nov 04 15:50 ORACLE_SID_checklist.json
-rwx------    1 oracle   dba            3395 Nov 04 15:50 ORACLE_SID_checklist.xml
-rwx------    1 oracle   dba           16916 Nov 04 15:50 ORACLE_SID_postupgrade.html
-rwx------    1 oracle   dba            3383 Nov 04 15:50 ORACLE_SID_postupgrade.log
-rwx------    1 oracle   dba           14861 Nov 04 15:50 postchecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/postfixups:
total 40
-rwx------    1 oracle   dba           14864 Nov 04 16:00 postchecks_ORACLE_SID.log
-rwx------    1 oracle   dba            3262 Nov 04 15:59 postfixups_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/postupgrade:
total 24
-rwx------    1 oracle   dba           10177 Nov 04 16:01 postupgrade.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/prechecks:
total 376
-rwx------    1 oracle   dba            2781 Nov 04 13:52 ORACLE_SID_checklist.cfg
-rwx------    1 oracle   dba            9328 Nov 04 13:52 ORACLE_SID_checklist.json
-rwx------    1 oracle   dba            9962 Nov 04 13:52 ORACLE_SID_checklist.xml
-rwx------    1 oracle   dba           25980 Nov 04 13:52 ORACLE_SID_preupgrade.html
-rwx------    1 oracle   dba           10018 Nov 04 13:52 ORACLE_SID_preupgrade.log
-rwx------    1 oracle   dba          121687 Nov 04 13:51 prechecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/prefixups:
total 304
-rwx------    1 oracle   dba          121708 Nov 04 14:00 prechecks_ORACLE_SID.log
-rwx------    1 oracle   dba           32727 Nov 04 14:00 prefixups_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/preupgrade:
total 8
-rwx------    1 oracle   dba              98 Nov 04 13:51 preupgrade.log

/orahome/oracle/app/admin/ORACLE_SID/autoupgrade/log/cfgtoollogs/upgrade/auto:
total 880
-rwx------    1 oracle   dba          414589 Nov 04 16:01 autoupgrade.log
-rwx------    1 oracle   dba             780 Nov 04 13:00 autoupgrade_err.log
-rwx------    1 oracle   dba            3113 Nov 04 16:01 autoupgrade_user.log
drwx------    2 oracle   dba            4096 Nov 04 14:01 config_files
drwx------    2 oracle   dba             256 Nov 04 16:01 lock
-rwx------    1 oracle   dba           12381 Nov 04 16:01 state.html
drwx------    2 oracle   dba            4096 Nov 04 16:01 status
$

Find timing for autoupgrade process.

$ tail -12 `ls $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/$JOBNO/autoupgrade*.log|egrep -v 'user|err'`
2019-11-04 16:01:21.082 INFO ----------------------Stages  Summary------------------------ - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.082 INFO     SETUP             1 min                                  - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.083 INFO     PREUPGRADE        1 min                                  - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.083 INFO     PRECHECKS         1 min                                  - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.083 INFO     GRP               1 min                                  - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.084 INFO     PREFIXUPS         8 min                                   - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.084 INFO     DRAIN             1 min                                  - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.084 INFO     DBUPGRADE         108 min                                 - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.085 INFO     POSTCHECKS        1 min                                  - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.085 INFO     POSTFIXUPS        9 min                                   - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.086 INFO     POSTUPGRADE       1 min                                  - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.086 INFO End of dispatcher instance for ORACLE_SID - AutoUpgDispatcher.run

Find timing for database upgrade.

$ tail -35 $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/102/dbupgrade/catupgrd*${ORACLE_UNQNAME}0.log
========== PROCESS ENDED ==========
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Start of Summary Report
------------------------------------------------------

Oracle Database 12.2 Post-Upgrade Status Tool           11-04-2019 15:46:14

Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS

Oracle Server                          UPGRADED      12.2.0.1.0  00:19:53
JServer JAVA Virtual Machine           UPGRADED      12.2.0.1.0  00:07:34
Oracle Workspace Manager               UPGRADED      12.2.0.1.0  00:02:16
OLAP Analytic Workspace                UPGRADED      12.2.0.1.0  00:00:23
Oracle OLAP API                        UPGRADED      12.2.0.1.0  00:00:23
Oracle XDK                             UPGRADED      12.2.0.1.0  00:01:15
Oracle Text                            UPGRADED      12.2.0.1.0  00:01:24
Oracle XML Database                    UPGRADED      12.2.0.1.0  00:05:28
Oracle Database Java Packages          UPGRADED      12.2.0.1.0  00:00:25
Oracle Multimedia                      UPGRADED      12.2.0.1.0  00:03:25
Spatial                                UPGRADED      12.2.0.1.0  00:09:03
Oracle Application Express             UPGRADED     5.0.4.00.12  00:23:17
Final Actions                                                    00:04:44
Post Upgrade                                                     00:00:09

Total Upgrade Time: 01:20:32

Database time zone version is 14. It is older than current release time
zone version 26. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time:    [0d:1h:47m:34s]

End of Summary Report
------------------------------------------------------
$

Find timing for datapatch.

$ cat $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/$JOBNO/dbupgrade/catupgrd*datapatch_normal.log
SQL Patching tool version 12.2.0.1.0 Production on Mon Nov 4 15:23:22 2019
Copyright (c) 2012, 2019, Oracle. All rights reserved.

Log file for this invocation: /orahome/oracle/app/cfgtoollogs/sqlpatch/sqlpatch_14221746_2019_11_04_15_23_22/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
ID 190716 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
Nothing to roll back
The following patches will be applied:
29757449 (DATABASE JUL 2019 RELEASE UPDATE 12.2.0.1.190716)

Installing patches...
Patch installation complete. Total patches installed: 1

Validating logfiles...
Patch 29757449 apply: SUCCESS
logfile: /orahome/oracle/app/cfgtoollogs/sqlpatch/29757449/23013437/29757449_apply_ORACLE_SID_2019Nov04_15_24_11.log (no errors)
SQL Patching tool complete on Mon Nov 4 15:46:00 2019
$

It might be better to use grep -A instead of tail; however, I was on AIX and was not able to find that option.
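
For the record, an awk equivalent that does not need GNU grep; the marker string and the number of lines to print are assumptions based on the summary shown above:

$ awk '/Stages  Summary/{n=12} n && n--' `ls $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/$JOBNO/autoupgrade*.log|egrep -v 'user|err'`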

Oracle Database 18c: New Features asmcmd

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion
ASM version         : 19.4.0.0.0
[oracle@ol7-19-rac1 ~]$ 

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd -V
asmcmd version 19.4.0.0.0
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showpatches
---------------
List of Patches
===============
29401763
29517242
29517247
29585399
29834717
29850993
29851014

[oracle@ol7-19-rac1 ~]$ asmcmd showpatches -l
Oracle ASM release patch level is [2037353368] and 
the complete list of patches [29401763 29517242 29517247 29585399 29834717 29850993 29851014 ] have been applied on the local node. 
The release patch string is [19.4.0.0.0].
[oracle@ol7-19-rac1 ~]$

============================================================
OLD:
============================================================

### MISSING from the OLD output are the previous 19.3 versions of these patches:
29517247; ACFS RELEASE UPDATE 19.3.0.0.0	
29585399; OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242; Database Release Update : 19.3.0.0.190416 (29517242)

[oracle@ol7-19-rac1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2037353368] and 
the complete list of patches [29401763 29517242 29517247 29585399 29834717 29850993 29851014 ] have been applied on the local node. 
The release patch string is [19.4.0.0.0].
[oracle@ol7-19-rac1 ~]$

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29851014;ACFS RELEASE UPDATE 19.4.0.0.0 (29851014)
29850993;OCW RELEASE UPDATE 19.4.0.0.0 (29850993)
29834717;Database Release Update : 19.4.0.0.190716 (29834717)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion --softwarepatch
ASM version         : 19.4.0.0.0
Software patchlevel : 2037353368
[oracle@ol7-19-rac1 ~]$

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node ol7-19-rac1 is [2037353368].
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion --active
Oracle ASM active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. 
The cluster active patch level is [2037353368].
[oracle@ol7-19-rac1 ~]$

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. 
The cluster active patch level is [2037353368].
[oracle@ol7-19-rac1 ~]$

============================================================
NEW:
============================================================

[oracle@ol7-19-rac1 ~]$ asmcmd showversion --releasepatch
ASM version         : 19.4.0.0.0
Information about release patchlevel is unavailable since no ASM instance connected

[oracle@ol7-19-rac1 ~]$ asmcmd
ASMCMD> showversion --releasepatch
ASM version         : 19.4.0.0.0
Release patchlevel  : 2037353368
ASMCMD>

============================================================
OLD:
============================================================

[oracle@ol7-19-rac1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2037353368] and 
the complete list of patches [29401763 29517242 29517247 29585399 29834717 29850993 29851014 ] have been applied on the local node. 
The release patch string is [19.4.0.0.0].
[oracle@ol7-19-rac1 ~]$

Basically, the new asmcmd features already exist as crsctl query commands; use whichever is best suited to your needs.
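
For a quick side-by-side comparison, the pairs shown above can also be run together with a small script (a sketch only; assumes the grid infrastructure environment is set and it is run as the GI owner):

#!/bin/sh
# Compare the new asmcmd subcommands against their crsctl counterparts

echo "== patch list =="
asmcmd showpatches -l
crsctl query crs releasepatch

echo "== software patch level =="
asmcmd showversion --softwarepatch
crsctl query crs softwarepatch

echo "== active version =="
asmcmd showversion --active
crsctl query crs activeversion -f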

The Many Ways To Sign-In To Oracle Cloud


When signing up for Oracle Cloud, a Cloud Account Name must be provided.

Log in to Oracle Cloud Infrastructure Classic using the Cloud Account Name:
https://myservices-CloudAccountName.console.oraclecloud.com

Log in to Oracle Cloud Infrastructure (simplest option; you need to enter the Cloud Account Name):
https://www.oracle.com/cloud/sign-in.html

Log in to an Oracle Cloud Infrastructure region (you need to enter the Cloud Account Name/Cloud Tenant):
https://console.us-phoenix-1.oraclecloud.com

Log in to an Oracle Cloud Infrastructure region using the Cloud Account Name:
https://console.us-phoenix-1.oraclecloud.com/?tenant=CloudAccountName
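
A small sketch to build these URLs for a given account; the CLOUD_ACCOUNT_NAME and REGION values below are placeholders:

#!/bin/sh
# Generate the Oracle Cloud sign-in URLs listed above from placeholder values
CLOUD_ACCOUNT_NAME=CloudAccountName   # replace with your Cloud Account Name
REGION=us-phoenix-1                   # replace with your region

echo "OCI Classic        : https://myservices-${CLOUD_ACCOUNT_NAME}.console.oraclecloud.com"
echo "OCI sign-in page   : https://www.oracle.com/cloud/sign-in.html"
echo "OCI region console : https://console.${REGION}.oraclecloud.com"
echo "OCI region + tenant: https://console.${REGION}.oraclecloud.com/?tenant=${CLOUD_ACCOUNT_NAME}"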

If you find more, then please let me know.
