This is still all a work in progress, and if you stay with me, I will provide all the sequences for export and import. There is too much info to put in one post.
EXP-10 Usernames Which Cannot Be Exported (Doc ID 217135.1)
Did you know that $ORACLE_HOME/rdbms/admin/catexp.sql defines a view, EXU8USR, showing which schemas will be included in a full export (and, by omission, which will not)?
This looks rather similar to the ORACLE_MAINTAINED column in DBA_USERS in a 12c database.
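For comparison, on a 12c or later database something along these lines should list the non-Oracle-maintained schemas (not run here, since the 11g database in this post has no ORACLE_MAINTAINED column):

-- 12c and later only
select username
from   dba_users
where  oracle_maintained = 'N'
order  by 1;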
Schemas that appear in DBA_USERS but not in EXU8USR will not be exported. Trust but verify.
oracle@arrow1:HAWKA:/media/sf_working/datapump
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Feb 13 18:28:51 2017

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning and Real Application Testing options

ARROW1:(SYS@HAWKA):PRIMARY> set lines 120 tab off trimsp on pages 1000
ARROW1:(SYS@HAWKA):PRIMARY> col name for a55
ARROW1:(SYS@HAWKA):PRIMARY> select username from dba_users order by 1;

USERNAME
------------------------------
APPQOSSYS
DBSNMP
DEMO
DIP
GGS_ADMIN
ORACLE_OCM
OUTLN
SYS
SYSTEM

9 rows selected.

ARROW1:(SYS@HAWKA):PRIMARY> select name from exu8usr order by 1;

NAME
-------------------------------------------------------
DEMO
GGS_ADMIN
OUTLN
SYS
SYSTEM

ARROW1:(SYS@HAWKA):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning and Real Application Testing options
oracle@arrow1:HAWKA:/media/sf_working/datapump
$
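For context, the full.dmp used below is assumed to have come from a full export taken earlier. A minimal export parfile would look something like this (only the dump file name matches what is used below; the other values are my assumptions):

$ cat expdp_full.par
directory=DATA_PUMP_DIR
userid="/ as sysdba"
metrics=Y
full=Y
dumpfile=full.dmp
logfile=expdp_full.log

$ expdp parfile=expdp_full.par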
The first step is to pre-create the tablespaces.
$ cat impdp_full01_tbs.par
directory=DATA_PUMP_DIR
userid="/ as sysdba"
metrics=Y
dumpfile=full.dmp
logfile=impdp_full_tbs.log
include=TABLESPACE
sqlfile=tbs.sql
$ impdp parfile=impdp_full01_tbs.par
Import: Release 11.2.0.4.0 - Production on Mon Feb 13 18:46:56 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning and Real Application Testing options
Startup took 0 seconds
Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01": /******** AS SYSDBA parfile=impdp_full01_tbs.par
Processing object type DATABASE_EXPORT/TABLESPACE
Completed 4 TABLESPACE objects in 0 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Mon Feb 13 18:46:57 2017 elapsed 0 00:00:01
$ cat /u01/app/oracle/product/11.2.0.4/db_1/rdbms/log/tbs.sql
See how the datafile name is hard-coded even though OMF is being used.
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS" DATAFILE
  SIZE 268435456
  AUTOEXTEND ON NEXT 268435456 MAXSIZE 8192M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 1610612736
  AUTOEXTEND ON NEXT 268435456 MAXSIZE 8192M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 135266304
  AUTOEXTEND ON NEXT 134217728 MAXSIZE 8193M,
  SIZE 16777216,
  SIZE 10485760
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
  NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO;
ALTER DATABASE DATAFILE '/oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf' RESIZE 16785408;
CREATE TABLESPACE "GGS_DATA" DATAFILE
  SIZE 269484032
  AUTOEXTEND ON NEXT 268435456 MAXSIZE 16385M
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576 DEFAULT
  NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO;
Basically, datafile 5 was created with a size of 16777216 and later resized to 16785408. Why does Data Pump use the original size rather than the current size?
I know what you are probably thinking: why not just write a SQL script to do the work? True, but that's like buying a Mercedes-Benz and having to roll down the windows by hand. (Dating myself.)
ARROW1:(SYS@HAWKA):PRIMARY> select file#,name,bytes from v$datafile where name like '%user%';

     FILE# NAME                                                         BYTES
---------- ------------------------------------------------------- ----------
         4 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf        135266304
         5 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf         16785408
         7 /oradata/HAWKA/datafile/o1_mf_users_db4vw289_.dbf         10485760

ARROW1:(SYS@HAWKA):PRIMARY>
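If you do end up hand-editing, one approach, shown here only as a sketch and assuming db_create_file_dest is set on the target so OMF names the files, is to size the USERS datafiles at their current v$datafile sizes and drop the RESIZE statement with the hard-coded OMF file name:

-- Hand-edited sketch: datafile 5 sized at its current 16785408 bytes up front,
-- so the ALTER DATABASE DATAFILE ... RESIZE line is no longer needed.
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 135266304 AUTOEXTEND ON NEXT 134217728 MAXSIZE 8193M,
  SIZE 16785408,
  SIZE 10485760
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
  NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO;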
SYSTEM and SYSAUX tablespaces are not exported while UNDO and TEMP are.
Hopefully, the following was performed to capture all the details from database creation.
alter database backup controlfile to trace as '/tmp/cf_@.sql' reuse resetlogs;
select property_name,property_value from DATABASE_PROPERTIES;
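If you only need the properties that feed back into CREATE DATABASE, narrowing the query keeps the output manageable (the property names below are the standard ones, nothing specific to this database):

select property_name, property_value
from   database_properties
where  property_name in ('NLS_CHARACTERSET',
                         'NLS_NCHAR_CHARACTERSET',
                         'DBTIMEZONE',
                         'DEFAULT_PERMANENT_TABLESPACE',
                         'DEFAULT_TEMP_TABLESPACE')
order  by 1;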
Processing object type DATABASE_EXPORT/TABLESPACE
Completed 4 TABLESPACE objects in 1 seconds
+++++++++
ARROW1:(SYS@HAWKA):PRIMARY> select name from v$tablespace order by 1;

NAME
-------------------------------------------------------
GGS_DATA
SYSAUX
SYSTEM
TEMP
UNDOTBS
USERS

6 rows selected.

ARROW1:(SYS@HAWKA):PRIMARY>
Last but not least, did you know you can create a database in ARCHIVELOG mode to begin with, versus having to enable ARCHIVELOG mode after the fact?
Take a look at my post below.
OTN Appreciation Day : Create Database Using SQL | Thinking Out Loud Blog
ARCHIVELOG
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
SET TIME_ZONE='US/Mountain'
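As a rough sketch of where those clauses sit in the statement (the database name, passwords, and tablespace names are placeholders, not taken from the post, and OMF via db_create_file_dest is assumed so no DATAFILE or LOGFILE clauses are needed):

CREATE DATABASE demo
  USER SYS IDENTIFIED BY "change_me"
  USER SYSTEM IDENTIFIED BY "change_me"
  ARCHIVELOG
  CHARACTER SET AL32UTF8
  NATIONAL CHARACTER SET AL16UTF16
  SET TIME_ZONE='US/Mountain'
  EXTENT MANAGEMENT LOCAL
  UNDO TABLESPACE undotbs
  DEFAULT TABLESPACE users
  DEFAULT TEMPORARY TABLESPACE temp;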
