Database 12c: Common and local users and roles


Since 12c, “nothing” is like before. Today we are talking about user creation. User behavior in a Non-CDB is the same as in 11g and before. In CDBs and PDBs the concept changes slightly: in a multitenant database, users are divided into two different types, local and common users:


Common users

A common user is a user which is present in the CDB root and in all PDBs. Here is the definition from the documentation:

“A common user is a database user that has the same identity in the root and in every existing and future PDB. Every common user can connect to and perform operations within the root, and within any PDB in which it has privileges.”

SYS and SYSTEM are special Oracle-supplied common users. Common users have the same characteristics (e.g. password, default tablespace and so on) in all containers.
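Whether a user is common or local can be checked with the COMMON column of DBA_USERS (or CDB_USERS across containers). A small sketch, run from the root, using SYSTEM as an example of a common user:

SQL> select con_id, username, common from cdb_users where username = 'SYSTEM' order by con_id;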

Create a new common user:

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> create user C##TEST1 identified by test1 container = all;

User created.

Common user names must start with C## or c##:

SQL> create user TEST1 identified by bubu1 container=all;
create user TEST1 identified by bubu1 container=all
            *
ERROR at line 1:
ORA-65096: invalid common user or role name

Changes to common users apply to all containers; for example, a default tablespace specified for a common user must exist in every container:

SQL> create user C##TEST1 identified by test1 container = all default tablespace users;
create user C##TEST1 identified by test1 container = all default tablespace users
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

The password is the same in the CDB and all PDBs; when trying to change it without CONTAINER=ALL, you get an error:

SQL> alter user C##TEST1 identified by bubu1;
alter user C##TEST1 identified by bubu1
*
ERROR at line 1:
ORA-65066: The specified changes must apply to all containers

Changes to common users are only allowed in the root container (CDB$ROOT):

SQL> alter user C##TEST1 identified by bubu1 container=all;
alter user C##TEST1 identified by bubu1 container=all
*
ERROR at line 1:
ORA-65050: Common DDLs only allowed in CDB$ROOT

Local users

Local users are users which exist only in a single PDB, not in all containers.

Create local users:

SQL> create user TEST1 identified by bubu1;

User created.

SQL> create user TEST2 identified by bubu1 container=current;

User created.

SQL> 

In the root (CDB$ROOT) no local users can be created:

SQL> create user TEST1 identified by bubu1 container=current;
create user TEST1 identified by bubu1 container=current
                                *
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT

The name of a local user must not start with C## or c##:

SQL> create user C##TEST1 identified by bubu1;
create user C##TEST1 identified by bubu1
            *
ERROR at line 1:
ORA-65094: invalid local user or role name

SQL> create user C##TEST1 identified by bubu1 container=current;
create user C##TEST1 identified by bubu1 container=current
            *
ERROR at line 1:
ORA-65094: invalid local user or role name

Common roles

Common roles are, like common users, roles which are present in the CDB root and in all PDBs. Important: by default, privileges are granted only in the container where the GRANT is executed (CDB root or PDB), not across all containers.

SQL> create role C##ROLE1 container=all;

Role created.

SQL> grant create session to C##ROLE1;

Grant succeeded.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';
GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE SESSION                 NO

SQL> alter session set container=PDB1;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

no rows selected

SQL> 

To grant privileges over all containers:

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> grant create table to C##ROLE1 container=all;

Grant succeeded.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE SESSION                 NO
C##ROLE1                       CREATE TABLE                   YES

SQL> alter session set container=PDB1;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE TABLE                   YES

SQL> 

Privileges granted in a PDB are only available in that PDB:

SQL> alter session set container=PDB1;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE TABLE                   YES

SQL> grant create view to C##ROLE1;

Grant succeeded.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE VIEW                    NO
C##ROLE1                       CREATE TABLE                   YES

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE SESSION                 NO
C##ROLE1                       CREATE TABLE                   YES

SQL> 
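To make use of the common role, it can be granted to a common user; with CONTAINER=ALL the grant itself becomes common. A minimal sketch, run in CDB$ROOT and assuming the user and role created above:

SQL> alter session set container=CDB$ROOT;
SQL> grant C##ROLE1 to C##TEST1 container=all;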

Database 12c: Extended Datatypes


Starting with 12c, Extended Datatypes have been introduced. I’m going to show how to use the new feature. If you have installed a default 12.1.0.1 database, you will get the following error when trying to create a table with a VARCHAR2 column larger than 4000 bytes:

SQL> create table tab1 (id number,text varchar2(32767 byte));
create table tab1 (id number,text varchar2(32767 byte))
                                           *
ERROR at line 1:
ORA-00910: specified length too long for its datatype

There is a new initialization parameter called MAX_STRING_SIZE. Here are its values and the resulting maximum column sizes:

  • 32767 bytes or characters if MAX_STRING_SIZE = EXTENDED
  • 4000 bytes or characters if MAX_STRING_SIZE = STANDARD
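Before starting the migration you can check the current setting; on a default installation it should still be STANDARD:

SQL> show parameter max_string_size
SQL> select value from v$parameter where name = 'max_string_size';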

The parameter cannot be changed at session level…

SQL> alter session set max_string_size=EXTENDED;
alter session set max_string_size=EXTENDED
*
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option

If you try to change it at system level:

SQL> alter system set max_string_size=EXTENDED scope=both;
alter system set max_string_size=EXTENDED scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-14694: database must in UPGRADE mode to begin MAX_STRING_SIZE migration

OK, enough testing. Here are the steps to change this parameter:

0. Requirements

To activate the feature, the parameter COMPATIBLE must be at least 12.0.0.0.
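A quick way to verify this requirement before proceeding:

SQL> select name, value from v$parameter where name = 'compatible';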

1. change parameter

SQL>  alter system set max_string_size=EXTENDED scope=spfile;

System altered.

SQL>

2. shutdown immediate

[bash]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 21:53:59 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

3. startup upgrade

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 2488635392 bytes
Fixed Size                  2291424 bytes
Variable Size             671090976 bytes
Database Buffers         1795162112 bytes
Redo Buffers               20090880 bytes
Database mounted.
Database opened.

4. run script utl32k.sql

SQL> @?/rdbms/admin/utl32k.sql

Session altered.

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT"  and
DOC>   restart using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database does not have compatible >= 12.0.0
DOC>
DOC>   Set compatible >= 12.0.0 and retry.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

PL/SQL procedure successfully completed.

Session altered.

0 rows updated.

Commit complete.

System altered.

PL/SQL procedure successfully completed.

Commit complete.

System altered.

Session altered.

PL/SQL procedure successfully completed.

No errors.

Session altered.

PL/SQL procedure successfully completed.

Commit complete.

Package altered.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2013-07-22 21:43:09

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2013-07-22 21:43:11

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0

Function created.

PL/SQL procedure successfully completed.

Function dropped.

...Database user "SYS", database schema "APEX_040200", user# "98" 21:43:19
...Compiled 0 out of 2998 objects considered, 0 failed compilation 21:43:20
...263 packages
...255 package bodies
...453 tables
...11 functions
...16 procedures
...3 sequences
...458 triggers
...1322 indexes
...207 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 21:43:20
...Completed key object existence check 21:43:20
...Setting DBMS Registry 21:43:20
...Setting DBMS Registry Complete 21:43:20
...Exiting validate 21:43:20

PL/SQL procedure successfully completed.

SQL>

Running this script is mandatory; otherwise the next startup fails with the following error:

SQL> startup
ORACLE instance started.

Total System Global Area 2488635392 bytes
Fixed Size                  2291424 bytes
Variable Size             671090976 bytes
Database Buffers         1795162112 bytes
Redo Buffers               20090880 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-14695: MAX_STRING_SIZE migration is incomplete
Process ID: 6324
Session ID: 355 Serial number: 5

NOTE:
The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns
for the views where this is required. The script does not increase the maximum size of the
VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.

5. startup normal

SQL> startup
ORACLE instance started.

Total System Global Area 2488635392 bytes
Fixed Size                  2291424 bytes
Variable Size             671090976 bytes
Database Buffers         1795162112 bytes
Redo Buffers               20090880 bytes
Database mounted.
Database opened.
SQL> show parameter max_string_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED
SQL>

Now test it:

SQL> conn hr/hr
Connected.
SQL> create table tab1 (id number,text varchar2(32767));

Table created.

SQL> desc tab1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 TEXT                                               VARCHAR2(32767)

SQL>
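A quick sanity check (a sketch): with MAX_STRING_SIZE=EXTENDED, SQL functions such as RPAD can also build values up to 32767 bytes, and values longer than 4000 bytes are internally stored out of line, similar to LOBs.

SQL> insert into tab1 values (1, rpad('x', 32767, 'x'));
SQL> commit;
SQL> select id, length(text) from tab1;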

Changing MAX_STRING_SIZE in PDBs

  1. Shut down the PDB.
  2. Restart the PDB in UPGRADE mode.
  3. Change the setting of MAX_STRING_SIZE in the PDB to EXTENDED.
  4. Run the @?/rdbms/admin/utl32k.sql script in the PDB. You must be connected AS SYSDBA to run the utl32k.sql script.
  5. Restart the PDB in NORMAL mode.
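At the command level this could look roughly like the following (a sketch, assuming a PDB named PDB1; run from CDB$ROOT as SYSDBA, switching into the PDB for the parameter change and the script):

SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 open upgrade;
SQL> alter session set container=PDB1;
SQL> alter system set max_string_size=EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> alter session set container=CDB$ROOT;
SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 open;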

Changing MAX_STRING_SIZE in RAC

  1. Shut down all of the Oracle RAC database instances, except one.
  2. Restart the Oracle RAC database instance in UPGRADE mode.
  3. Change the setting of MAX_STRING_SIZE to EXTENDED.
  4. Run the @?/rdbms/admin/utl32k.sql script in the Oracle RAC database instance. You must be connected AS SYSDBA to run the script.
  5. Restart the Oracle RAC database instance in NORMAL mode.
  6. Restart the other Oracle RAC database instances in NORMAL mode.

Changing MAX_STRING_SIZE in DG Environment

  1. Shut down the Oracle Data Guard primary database and logical standby database.
  2. Restart the primary database and logical standby database in UPGRADE mode.
  3. Change the setting of MAX_STRING_SIZE to EXTENDED on the primary database and logical standby database.
  4. Run the @?/rdbms/admin/utl32k.sql script on both the primary database and the logical standby database. You must be connected AS SYSDBA to run the script.
  5. Restart the primary database and logical standby database in NORMAL mode and start SQL Apply.

Bug 15845247

If you change the MAX_STRING_SIZE initialization parameter while creating a database using DBCA with a template that includes data files, the database creation fails with the following error:
ORA-14694: database must in UPGRADE mode to begin MAX_STRING_SIZE migration

Workaround: Leave the MAX_STRING_SIZE initialization parameter set to the default (STANDARD) or whatever it was set to in the template during DBCA. If you need to create a database with MAX_STRING_SIZE=EXTENDED, use the custom database template in DBCA.

In a multitenant container database (CDB), the MAX_STRING_SIZE initialization parameter is a per-PDB parameter. The root CDB always uses STANDARD semantics, regardless of the parameter. You can change the MAX_STRING_SIZE initialization parameter for PDBs as needed after the CDB is created.

Database 12c: Get back from PDB to Non-CDB


I’ve written many articles about converting a Non-CDB to a PDB. Now I’m going to look at the way back to a Non-CDB.

First, let’s recap what converting to a PDB involves. The plug-in process of a Non-CDB consists of two parts:

  1. (PHYSICAL) Migration of the necessary datafiles
  2. (LOGICAL) Conversion of the data dictionary with the noncdb_to_pdb.sql script

How does it work?

“The non-CDB’s datafiles, together with the manually created manifest, can now be treated as
if they were an ordinarily unplugged PDB and simply plugged in to the target CDB and
opened as a PDB. However, as will be explained immediately, it must be opened with the
Restricted status set to YES. The tablespaces holding quota-consuming data are immediately
viable, just as if this had been a Data Pump import using transportable tablespaces. However,
the former non-CDB’s data dictionary so far has a full description of the Oracle system. This
is now superfluous, and so it must simply be removed. While still with the Restricted status set
to YES, the noncdb_to_pdb.sql script (found on the admin_directory under Oracle Home) must be
run. Now the PDB may be closed and then opened ordinarily.” –> Found here
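For reference, a rough sketch of this plug-in procedure (names and the manifest path are just examples). In the Non-CDB, opened read only:

SQL> shutdown immediate
SQL> startup open read only
SQL> exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');

Then in the target CDB:

SQL> create pluggable database PDB_NEW using '/tmp/noncdb.xml' copy;
SQL> alter session set container=PDB_NEW;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> alter pluggable database PDB_NEW open;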

The second step makes any physical way back impossible, because there is no datafile 1 or real SYSTEM tablespace any more. The physical options that are ruled out are:

  1. An RMAN full or point-in-time recovery will restore the whole CDB and not only the PDB; besides, that is not a PDB to Non-CDB conversion
  2. Transportable Database is not usable because datafile 1 is no longer the real SYSTEM tablespace

All in all, the root problem is that no datafile 1 or independent SYSTEM tablespace exists any more.

And now the ways to get back:

  1. Transportable Tablespaces (this is the smoothest way)
  2. Data Pump full export/import (see the sketch after this list)
  3. Other logical migration methods like Oracle GoldenGate
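For option 2, a minimal Data Pump sketch (directory object, service names and file names are just examples; the dump file must be readable by the target Non-CDB):

[bash]$ expdp system@pdb1 full=y directory=DATA_PUMP_DIR dumpfile=pdb1_full.dmp logfile=pdb1_exp.log

[bash]$ impdp system@noncdb full=y directory=DATA_PUMP_DIR dumpfile=pdb1_full.dmp logfile=pdb1_imp.log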

I’ve also tested the old export tool (exp), but exporting a PDB ends with the error “EXP-00062: invalid source statements for an object type”.

If there are more ways, I would appreciate a comment 🙂

Database 12c: Duplicate PDB


This morning I wanted to check how to duplicate a PDB. The first questions that came to mind were:

  • How do you duplicate a PDB: can you create an empty PDB and bring it to NOMOUNT like a Non-CDB?
  • Is duplication within the same CDB possible?

Here are the answers:

The duplication process of a PDB works at CDB level. So if you want to duplicate a PDB, you have to create an empty CDB instance; the needed parts of the CDB are then cloned and all others are dropped. This answers the second question: duplication within a CDB is not possible out of the box. But you can clone the PDB or use a shadow CDB.

Here is the duplication example:

Setup:

  • Target CDB = CDB12C1
  • Auxiliary CDB = CDB12C3
  • PDB to duplicate = Only PDB1
[bash]$ rman target sys/<password>@cdb12c1 auxiliary sys/<password>@cdb12c3

Recovery Manager: Release 12.1.0.1.0 - Production on Sat Jul 20 13:15:57 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB12C1 (DBID=2076797181)
connected to auxiliary database: CDB12C3 (not mounted)

RMAN> duplicate target database to CDB12C3 pluggable database pdb1 from active database;

Starting Duplicate Db at 20-JUL-13
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=5 device type=DISK
current log archived

contents of Memory Script:
{
   sql clone "alter system set  db_name = 
 ''CDB12C1'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name = 
 ''CDB12C3'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone from service  'cdb12c1' primary controlfile;
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''CDB12C1'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''CDB12C3'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     417546240 bytes

Fixed Size                     2289064 bytes
Variable Size                293601880 bytes
Database Buffers             113246208 bytes
Redo Buffers                   8409088 bytes

Starting restore at 20-JUL-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=5 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/opt/oracle/app/oradata/CDB12C3/control01.ctl
Finished restore at 20-JUL-13

database mounted
Skipping pluggable database PDB2
Automatically adding tablespace SYSTEM
Automatically adding tablespace SYSAUX
Automatically adding tablespace PDB$SEED:SYSTEM
Automatically adding tablespace PDB$SEED:SYSAUX
Automatically adding tablespace UNDOTBS1
Skipping tablespace USERS

contents of Memory Script:
{
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  21 to new;
   set newname for clone datafile  22 to new;
   restore
   from service  'cdb12c1'   clone database
   skip forever tablespace  "USERS",
 "PDB2":"SYSTEM",
 "PDB2":"SYSAUX"   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 20-JUL-13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00003 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00007 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00021 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00022 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 20-JUL-13

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
   restore clone force from service  'cdb12c1' 
           archivelog from scn  2534976;
   switch clone datafile all;
}
executing Memory Script

Starting restore at 20-JUL-13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=97
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: using network backup set from service cdb12c1
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=98
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 20-JUL-13

datafile 1 switched to datafile copy
input datafile copy RECID=10 STAMP=821279836 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_8ynwdjk0_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=11 STAMP=821279836 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwdqkh_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=12 STAMP=821279836 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_undotbs1_8ynwdynw_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=13 STAMP=821279836 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_8ynwf5kv_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=14 STAMP=821279836 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwf8lz_.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=15 STAMP=821279836 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_8ynwfhny_.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=16 STAMP=821279836 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwflo0_.dbf

contents of Memory Script:
{
   set until scn  2535061;
   recover
   clone database
   skip forever tablespace  "USERS",
 "PDB2":"SYSTEM",
 "PDB2":"SYSAUX"    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 20-JUL-13
using channel ORA_AUX_DISK_1

Executing: alter database datafile 6 offline drop
Executing: alter database datafile 23 offline drop
Executing: alter database datafile 24 offline drop
starting media recovery

archived log for thread 1 with sequence 97 is already on disk as file /opt/oracle/app/fast_recovery_area/CDB12C3/archivelog/2013_07_20/o1_mf_1_97_8ynwft7h_.arc
archived log for thread 1 with sequence 98 is already on disk as file /opt/oracle/app/fast_recovery_area/CDB12C3/archivelog/2013_07_20/o1_mf_1_98_8ynwfv8y_.arc
archived log file name=/opt/oracle/app/fast_recovery_area/CDB12C3/archivelog/2013_07_20/o1_mf_1_97_8ynwft7h_.arc thread=1 sequence=97
archived log file name=/opt/oracle/app/fast_recovery_area/CDB12C3/archivelog/2013_07_20/o1_mf_1_98_8ynwfv8y_.arc thread=1 sequence=98
media recovery complete, elapsed time: 00:00:00
Finished recover at 20-JUL-13
Oracle instance started

Total System Global Area     417546240 bytes

Fixed Size                     2289064 bytes
Variable Size                293601880 bytes
Database Buffers             113246208 bytes
Redo Buffers                   8409088 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name = 
 ''CDB12C3'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''CDB12C3'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile
Oracle instance started

Total System Global Area     417546240 bytes

Fixed Size                     2289064 bytes
Variable Size                293601880 bytes
Database Buffers             113246208 bytes
Redo Buffers                   8409088 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CDB12C3" RESETLOGS ARCHIVELOG 
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP   1  SIZE 50 M ,
  GROUP   2  SIZE 50 M ,
  GROUP   3  SIZE 50 M 
 DATAFILE
  '/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_8ynwdjk0_.dbf',
  '/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_8ynwf5kv_.dbf',
  '/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_system_8ynwfhny_.dbf'
 CHARACTER SET AL32UTF8

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwdqkh_.dbf", 
 "/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_undotbs1_8ynwdynw_.dbf", 
 "/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwf8lz_.dbf", 
 "/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwflo0_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwdqkh_.dbf RECID=1 STAMP=821279854
cataloged datafile copy
datafile copy file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_undotbs1_8ynwdynw_.dbf RECID=2 STAMP=821279854
cataloged datafile copy
datafile copy file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwf8lz_.dbf RECID=3 STAMP=821279854
cataloged datafile copy
datafile copy file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwflo0_.dbf RECID=4 STAMP=821279854

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=821279854 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwdqkh_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=821279854 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_undotbs1_8ynwdynw_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=3 STAMP=821279854 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwf8lz_.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=4 STAMP=821279854 file name=/opt/oracle/app/oradata/CDB12C3/datafile/o1_mf_sysaux_8ynwflo0_.dbf

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Executing: drop pluggable database "PDB2"

contents of Memory Script:
{
   sql clone "alter pluggable database all open";
}
executing Memory Script

sql statement: alter pluggable database all open
Dropping offline and skipped tablespaces
Executing: alter database default tablespace system
Executing: drop tablespace "USERS" including contents cascade constraints
Finished Duplicate Db at 20-JUL-13

RMAN>

Now check the new PDB

SQL> select * from v$pdbs

    CON_ID       DBID    CON_UID GUID                             NAME                           OPEN_MODE  RES OPEN_TIME                                                                   CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ------------------------------ ---------- --- --------------------------------------------------------------------------- ---------- ----------
         2 4063775335 4063775335 E1E189D14D2C2049E0430100007FAB5B PDB$SEED                       READ ONLY  NO  20-JUL-13 01.17.36.174 PM                                                      1720752  283115520
         3 3328955419 3328955419 E1EE338E12A16C3EE0430100007F1A4F PDB1                           READ WRITE NO  20-JUL-13 01.17.39.820 PM                                                      2502055  283115520

SQL> 

It is important to know that the GUID of the PDB does not change in the new CDB.
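This can be verified by comparing the GUID in both CDBs:

SQL> select name, guid from v$pdbs where name = 'PDB1';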

Database 12c: Clone PDBs (offline and online)


In 12c it is possible to clone PDBs. Here is an example:

1. Bring the source PDB into the correct state:

SQL> select con_id,name,open_mode,restricted from v$pdbs;

    CON_ID NAME                           OPEN_MODE  RES GUID
---------- ------------------------------ ---------- --- --------------------------------
         2 PDB$SEED                       READ ONLY  NO  E1E189D14D2C2049E0430100007FAB5B
         3 PDB1                           READ WRITE NO  E1F01A2ED12F718AE0430100007F25BA

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL>

2. Clone PDB

SQL> create pluggable database pdb2 from pdb1;

Pluggable database created.

    CON_ID NAME                 OPEN_MODE  RES GUID
---------- -------------------- ---------- --- --------------------------------
         2 PDB$SEED             READ ONLY  NO  E1E189D14D2C2049E0430100007FAB5B
         3 PDB1                 READ ONLY  NO  E1EE338E12A16C3EE0430100007F1A4F
         4 PDB2                 MOUNTED        E1F01A2ED12F718AE0430100007F25BA

SQL> select con_id,name from v$datafile order by 1;
    CON_ID NAME
---------- ----------------------------------------------------------------------------------------------------
         1 /opt/oracle/app/oradata/CDB12C1/system01.dbf
         1 /opt/oracle/app/oradata/CDB12C1/sysaux01.dbf
         1 /opt/oracle/app/oradata/CDB12C1/undotbs01.dbf
         1 /opt/oracle/app/oradata/CDB12C1/users01.dbf
         2 /opt/oracle/app/oradata/CDB12C1/pdbseed/sysaux01.dbf
         2 /opt/oracle/app/oradata/CDB12C1/pdbseed/system01.dbf
         3 /opt/oracle/app/oradata/CDB12C1/E1EE338E12A16C3EE0430100007F1A4F/datafile/o1_mf_system_8ynlfvd5_.dbf
         3 /opt/oracle/app/oradata/CDB12C1/E1EE338E12A16C3EE0430100007F1A4F/datafile/o1_mf_sysaux_8ynlfx2p_.dbf
         4 /opt/oracle/app/oradata/CDB12C1/E1F01A2ED12F718AE0430100007F25BA/datafile/o1_mf_sysaux_8yntf0xb_.dbf
         4 /opt/oracle/app/oradata/CDB12C1/E1F01A2ED12F718AE0430100007F25BA/datafile/o1_mf_system_8yntdzmf_.dbf

3. Bring both PDBs into the correct state

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> alter pluggable database pdb2 open;

Pluggable database altered.

SQL>

As you can see, you have to bring a PDB into READ ONLY mode to clone it. I think this is not really suitable for production environments.

After some brainstorming I found a way to clone PDBs online. All you need is some additional storage. Here is the plan to clone PDBs online (a command-level sketch follows the list):

  1. Duplicate the target PDB into an auxiliary CDB
    –> brings the PDB to a non-production environment
  2. Create a clone PDB from the auxiliary PDB
    –> the GUID of the PDB must change to re-plug it into the old CDB; cloning is the only way I have found to change the GUID so far
  3. Unplug the clone PDB and plug it into the target CDB
    –> transports the metadata back to the target
  4. Clean up the shadow CDB
    –> clean up everything
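A command-level sketch of steps 2 and 3 (assuming the auxiliary CDB already contains the duplicated PDB1, and the XML path is just an example). In the auxiliary (shadow) CDB:

SQL> alter pluggable database PDB1 open read only;
SQL> create pluggable database PDB1_CLONE from PDB1;
SQL> alter pluggable database PDB1_CLONE open;
SQL> alter pluggable database PDB1_CLONE close immediate;
SQL> alter pluggable database PDB1_CLONE unplug into '/tmp/pdb1_clone.xml';
SQL> drop pluggable database PDB1_CLONE keep datafiles;

Then in the target CDB:

SQL> create pluggable database PDB1_CLONE using '/tmp/pdb1_clone.xml' copy;
SQL> alter pluggable database PDB1_CLONE open;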

I’ve read that real online cloning functionality will come in the next release.

Database 12c: Convert Non-CDB with different character set to PDB


Today I want to discuss converting a Non-CDB with a different character set to a PDB. I think this is a very interesting topic, because it is possible, so you may be able to provide multiple character sets in one database (maybe). As with every character set migration, it depends on the character set you want to migrate to.

Here are the limits for PDB migration:

  • The character set is the same as the national character set of the CDB. In this case, the plugging operation succeeds (as far as the national character set is concerned).
  • The character set is not the same as the national character set of the CDB. In this case, the new PDB can only be opened in restricted mode for administrative tasks and cannot be used for production. Unless you have a tool that can migrate the national character set of the new PDB to the character set of the CDB, the new PDB is unusable.

“If you cannot migrate your existing databases prior to consolidation, then you have to partition them into sets with plug-in compatible database character sets and plug each set into a separate CDB with the appropriate superset character set”:

  • US7ASCII, WE8ISO8859P1, and WE8MSWIN1252 into a WE8MSWIN1252 CDB
  • WE8ISO8859P15 into a WE8ISO8859P15 CDB
  • JA16SJISTILDE into a JA16SJISTILDE CDB
  • JA16EUC into a JA16EUC CDB
  • KO16KSC5601, KO16MSWIN949 into a KO16MSWIN949 CDB
  • UTF8 and AL32UTF8 into an AL32UTF8 CDB
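The warnings and errors raised during plug-in can be checked in the PDB_PLUG_IN_VIOLATIONS view; the listings below were produced with a query along these lines (a sketch):

SQL> select name, cause, type, message, status from pdb_plug_in_violations order by name, type;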

Here is an example of PDB_PLUG_IN_VIOLATIONS for a compatible PDB:

NAME                           CAUSE                  TYPE      MESSAGE                                                                                              STATUS
------------------------------ ---------------------- --------- ---------------------------------------------------------------------------------------------------- ---------
DB12CEE4                       Database CHARACTER SET WARNING   Character set mismatch: PDB character set UTF8 CDB character set AL32UTF8.                           PENDING
DB12CEE4                       Non-CDB to PDB         WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.                                      PENDING
DB12CEE4                       Parameter              WARNING   CDB parameter sga_target mismatch: Previous 788529152 Current 2768240640                             PENDING
DB12CEE4                       Parameter              WARNING   CDB parameter pga_aggregate_target mismatch: Previous 262144000 Current 917504000                    PENDING

Migrating from UTF8 to AL32UTF8: after the plug-in, the database character set has been changed automatically (it seems so, maybe during the dictionary check):

SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';

PARAMETER                      VALUE
------------------------------ ------------------------------
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_CHARACTERSET               AL32UTF8

Here is an example of PDB_PLUG_IN_VIOLATIONS for a non-compatible PDB:

NAME                           CAUSE                  TYPE      MESSAGE                                                                                              STATUS
------------------------------ ---------------------- --------- ---------------------------------------------------------------------------------------------------- ---------
DB12CEE1                       Database CHARACTER SET ERROR     Character set mismatch: PDB character set WE8MSWIN1252 CDB character set AL32UTF8.                   PENDING
DB12CEE1                       Non-CDB to PDB         WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.                                      PENDING
DB12CEE1                       Parameter              WARNING   CDB parameter sga_target mismatch: Previous 788529152 Current 2768240640                             PENDING
DB12CEE1                       Parameter              WARNING   CDB parameter pga_aggregate_target mismatch: Previous 262144000 Current 917504000                    PENDING

Migrating from WE8MSWIN1252 to AL32UTF8: after the plug-in, the database character set has not changed:

SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';

PARAMETER                      VALUE
------------------------------ ------------------------------
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_CHARACTERSET               WE8MSWIN1252

As documented, plug-in and conversion were possible in both cases: example 1 successfully, example 2 with errors because of the character set. Notice: productive use of the second example is not supported, but it is possible. Why?

  • R/W open is only possible in restricted session mode –> maybe a user could connect as SYSDBA?
  • R/O open is possible –> maybe you can use it for reporting without changing the character set?
  • RMAN backup is possible

Why do I mention this? If you don’t take care of the character set during the migration to a PDB, the way back is not as easy as it should be. Of course, in this case I can export and import into another PDB, but I can’t physically migrate back. I have two options to get to a usable result (WE8MSWIN1252 –> AL32UTF8):

  1. AL32UTF8 is a logical superset, so migration is possible with an export/import procedure
  2. Migration with replication

Directly converting via plug-in is possible, but in this case it is only an intermediate step, because I have to do the migration again in a logical form.

Conclusion: Take care of the character set before migrating to a PDB. Non-convertible character sets need a conversion beforehand and a logical, not a physical or partly physical, migration path.