Database 12c: Common and local users and roles


Since 12c, “nothing” is like it was before. Today we are talking about user creation. User behavior in a Non-CDB is the same as in 11g and before; in CDBs and PDBs the concept changes slightly. In a multitenant database, users are divided into two different types: local and common users.


Common users

A common user is a user which is present in the CDB and all PDBs. Here from the documentation:

“A common user is a database user that has the same identity in the root and in every existing and future PDB. Every common user can connect to and perform operations within the root, and within any PDB in which it has privileges.”

Special common users are SYS and SYSTEM. Common users have the same characteristics (e.g. password, default tablespace and so on) in all containers.

Create a new common user:

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> create user C##TEST1 identified by test1 container = all;

User created.
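To verify that the new user is really present in all containers, a quick check against CDB_USERS from the root can look like this (the CON_ID values in the output depend on your PDBs):

SQL> select con_id, username, common from cdb_users where username = 'C##TEST1';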

Common user names must start with c## or C##:

SQL> create user TEST1 identified by bubu1 container=all;
create user TEST1 identified by bubu1 container=all
            *
ERROR at line 1:
ORA-65096: invalid common user or role name

Changes to common users apply to all containers:

SQL> create user C##TEST1 identified by test1 container = all default tablespace users;
create user C##TEST1 identified by test1 container = all default tablespace users
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

The password is the same in the CDB and all PDBs; when trying to change it for the current container only, you get an error:

SQL> alter user C##TEST1 identified by bubu1;
alter user C##TEST1 identified by bubu1
*
ERROR at line 1:
ORA-65066: The specified changes must apply to all containers

Changes to common users are only allowed in the root (CDB$ROOT); running the same statement inside a PDB fails:

SQL> alter user C##TEST1 identified by bubu1 container=all;
alter user C##TEST1 identified by bubu1 container=all
*
ERROR at line 1:
ORA-65050: Common DDLs only allowed in CDB$ROOT

Local users

Local users are users which exist only in a single PDB, not in all containers.

Create local users:

SQL> create user TEST1 identified by bubu1;

User created.

SQL> create user TEST2 identified by bubu1 container=current;

User created.

SQL> 

No local users can be created in the root (CDB$ROOT):

SQL> create user TEST1 identified by bubu1 container=current;
create user TEST1 identified by bubu1 container=current
                                *
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT

The name of a local user must not start with c## or C##:

SQL> create user C##TEST1 identified by bubu1;
create user C##TEST1 identified by bubu1
            *
ERROR at line 1:
ORA-65094: invalid local user or role name

SQL> create user C##TEST1 identified by bubu1 container=current;
create user C##TEST1 identified by bubu1 container=current
            *
ERROR at line 1:
ORA-65094: invalid local user or role name

Common roles

Common roles are, like common users, present in the CDB root and all PDBs. Important: by default, privileges are granted at the CDB or PDB level only, not across all containers.

SQL> create role C##ROLE1 container=all;

Role created.

SQL> grant create session to C##ROLE1;

Grant succeeded.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';
GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE SESSION                 NO

SQL> alter session set container=PDB1;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

no rows selected

SQL> 

To grant privileges across all containers:

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> grant create table to C##ROLE1 container=all;

Grant succeeded.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE SESSION                 NO
C##ROLE1                       CREATE TABLE                   YES

SQL> alter session set container=PDB1;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE TABLE                   YES

SQL> 

Privileges granted in a PDB are only available in that PDB:

SQL> alter session set container=PDB1;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE TABLE                   YES

SQL> grant create view to C##ROLE1;

Grant succeeded.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE VIEW                    NO
C##ROLE1                       CREATE TABLE                   YES

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> select grantee,privilege,common from dba_sys_privs where grantee='C##ROLE1';

GRANTEE                        PRIVILEGE                      COMMON
------------------------------ ------------------------------ ------------------------------
C##ROLE1                       CREATE SESSION                 NO
C##ROLE1                       CREATE TABLE                   YES

SQL> 

Database 12c: Extended Datatypes


Starting with 12c, extended data types have been introduced. I’m going to show how to use the new feature. If you have installed a default 12.1.0.1 database, you will get the following error when trying to create a table with a varchar2 larger than 4000 bytes:

SQL> create table tab1 (id number,text varchar2(32767 byte));
create table tab1 (id number,text varchar2(32767 byte))
                                           *
ERROR at line 1:
ORA-00910: specified length too long for its datatype

There is a new parameter called “MAX_STRING_SIZE”. Here are the possible values and resulting limits:

  • 32767 bytes or characters if MAX_STRING_SIZE = EXTENDED
  • 4000 bytes or characters if MAX_STRING_SIZE = STANDARD

The parameter cannot be changed at session level…

SQL> alter session set max_string_size=EXTENDED;
alter session set max_string_size=EXTENDED
*
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option

If you try to change it at system level:

SQL> alter system set max_string_size=EXTENDED scope=both;
alter system set max_string_size=EXTENDED scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-14694: database must in UPGRADE mode to begin MAX_STRING_SIZE migration

OK, enough testing. Here are the steps to change this parameter:

0. Requirements

To activate the feature, the parameter COMPATIBLE must be at least 12.0.0.0.
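You can verify this up front with a simple query:

SQL> select value from v$parameter where name = 'compatible';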

1. change parameter

SQL>  alter system set max_string_size=EXTENDED scope=spfile;

System altered.

SQL>

2. shutdown immediate

[bash]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 22 21:53:59 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

3. startup upgrade

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 2488635392 bytes
Fixed Size                  2291424 bytes
Variable Size             671090976 bytes
Database Buffers         1795162112 bytes
Redo Buffers               20090880 bytes
Database mounted.
Database opened.

4. run script utl32k.sql

SQL> @?/rdbms/admin/utl32k.sql

Session altered.

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT"  and
DOC>   restart using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database does not have compatible >= 12.0.0
DOC>
DOC>   Set compatible >= 12.0.0 and retry.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

PL/SQL procedure successfully completed.

Session altered.

0 rows updated.

Commit complete.

System altered.

PL/SQL procedure successfully completed.

Commit complete.

System altered.

Session altered.

PL/SQL procedure successfully completed.

No errors.

Session altered.

PL/SQL procedure successfully completed.

Commit complete.

Package altered.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2013-07-22 21:43:09

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2013-07-22 21:43:11

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0

Function created.

PL/SQL procedure successfully completed.

Function dropped.

...Database user "SYS", database schema "APEX_040200", user# "98" 21:43:19
...Compiled 0 out of 2998 objects considered, 0 failed compilation 21:43:20
...263 packages
...255 package bodies
...453 tables
...11 functions
...16 procedures
...3 sequences
...458 triggers
...1322 indexes
...207 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 21:43:20
...Completed key object existence check 21:43:20
...Setting DBMS Registry 21:43:20
...Setting DBMS Registry Complete 21:43:20
...Exiting validate 21:43:20

PL/SQL procedure successfully completed.

SQL>

Running this script is mandatory; otherwise you will get the following error on startup:

SQL> startup
ORACLE instance started.

Total System Global Area 2488635392 bytes
Fixed Size                  2291424 bytes
Variable Size             671090976 bytes
Database Buffers         1795162112 bytes
Redo Buffers               20090880 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-14695: MAX_STRING_SIZE migration is incomplete
Process ID: 6324
Session ID: 355 Serial number: 5

NOTE:
The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns
for the views where this is required. The script does not increase the maximum size of the
VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.

5. startup normal

SQL> startup
ORACLE instance started.

Total System Global Area 2488635392 bytes
Fixed Size                  2291424 bytes
Variable Size             671090976 bytes
Database Buffers         1795162112 bytes
Redo Buffers               20090880 bytes
Database mounted.
Database opened.
SQL> show parameter max_string_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED
SQL>

NOW: test it

SQL> conn hr/hr
Connected.
SQL> create table tab1 (id number,text varchar2(32767));

Table created.

SQL> desc tab1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 TEXT                                               VARCHAR2(32767)

SQL>

Changing MAX_STRING_SIZE in PDBs

  1. Shut down the PDB.
  2. Restart the PDB in UPGRADE mode.
  3. Change the setting of MAX_STRING_SIZE in the PDB to EXTENDED.
  4. Run the @?/rdbms/admin/utl32k.sql script in the PDB. You must be connected AS SYSDBA to run the utl32k.sql script.
  5. Restart the PDB in NORMAL mode.
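A sketch of these steps in SQL (untested here; PDB1 is a placeholder name, run as SYSDBA from the root):

SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 open upgrade;
SQL> alter session set container = PDB1;
SQL> alter system set max_string_size = EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> alter session set container = CDB$ROOT;
SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 open;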

Changing MAX_STRING_SIZE in RAC

  1. Shut down all of the Oracle RAC database instances, except one.
  2. Restart the Oracle RAC database instance in UPGRADE mode.
  3. Change the setting of MAX_STRING_SIZE to EXTENDED.
  4. Run the @?/rdbms/admin/utl32k.sql script in the Oracle RAC database instance. You must be connected AS SYSDBA to run the script.
  5. Restart the Oracle RAC database instance in NORMAL mode.
  6. Restart the other Oracle RAC database instances in NORMAL mode.

Changing MAX_STRING_SIZE in DG Environment

  1. Shut down the Oracle Data Guard primary database and logical standby database.
  2. Restart the primary database and logical standby database in UPGRADE mode.
  3. Change the setting of MAX_STRING_SIZE to EXTENDED on the primary database and logical standby database.
  4. Run the @?/rdbms/admin/utl32k.sql script on both the primary database and the logical standby database. You must be connected AS SYSDBA to run the script.
  5. Restart the primary database and logical standby database in NORMAL mode and start SQL Apply.

Bug 15845247

If you change the MAX_STRING_SIZE initialization parameter while creating a database using DBCA with a template that includes data files, the database creation fails with the following error:
ORA-14694: database must in UPGRADE mode to begin MAX_STRING_SIZE migration

Workaround: Leave the MAX_STRING_SIZE initialization parameter set to the default (STANDARD), or to whatever it was set to in the template, during DBCA. If you need to create a database with MAX_STRING_SIZE=EXTENDED, use the custom database template in DBCA.

In a multitenant container database (CDB), the MAX_STRING_SIZE initialization parameter is a per-PDB parameter. The root CDB always uses STANDARD semantics, regardless of the parameter. You can change the MAX_STRING_SIZE initialization parameter for PDBs as needed after the CDB is created.

Database 12c: Get back from PDB to Non-CDB


I’ve written many articles about converting a Non-CDB to a PDB. Now I’m going to look at the way back to a Non-CDB.

First, recall what a conversion to PDB involves. The plug-in process of a Non-CDB consists of two parts:

  1. (PHYSICAL) Migration of the necessary datafiles
  2. (LOGICAL) Conversion of the data dictionary with the noncdb_to_pdb.sql script

How does it work?

“The non-CDB’s datafiles, together with the manually created manifest, can now be treated as
if they were an ordinarily unplugged PDB and simply plugged in to the target CDB and
opened as a PDB. However, as will be explained immediately, it must be opened with the
Restricted status set to YES. The tablespaces holding quota-consuming data are immediately
viable, just as if this had been a Data Pump import using transportable tablespaces. However,
the former non-CDB’s data dictionary so far has a full description of the Oracle system. This
is now superfluous, and so it must simply be removed. While still with the Restricted status set
to YES, the noncdb_to_pdb.sql script (found on the admin_directory under Oracle Home) must be
run. Now the PDB may be closed and then opened ordinarily.” –> Found here

The second step makes any physical way back impossible, because there is no datafile 1 or real SYSTEM tablespace any more. The following approaches therefore fail:

  1. An RMAN full or point-in-time recovery restores the whole CDB, not only the PDB; besides, this is no PDB to Non-CDB conversion
  2. Transportable Database is not usable, because datafile 1 is not the real SYSTEM tablespace

All in all, the root problem is that no datafile 1 or independent SYSTEM tablespace exists any more.

And now the ways to get back:

  1. Transportable Tablespace (this is the smoothest way)
  2. DataPump full export/import
  3. Other logical migration ways like Oracle GoldenGate
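The transportable tablespace way, roughly sketched (untested; USERS, the PDB name, and the file path are placeholders):

SQL> alter session set container = PDB1;
SQL> alter tablespace users read only;

$ expdp system directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_tablespaces=USERS

Then copy the datafiles and the dump file to the Non-CDB host and plug the tablespace in:

$ impdp system directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_datafiles='/path/to/users01.dbf'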

I’ve tested the old export tool (exp) too, but exporting a PDB ends with the error “EXP-00062: invalid source statements for an object type”.

If there are more ways, I would appreciate a comment 🙂

Database 12c: Convert Non-CDB with different character set to PDB


Today I want to discuss converting a Non-CDB with a different character set to a PDB. I think this is a very interesting topic, because it is possible, so you may be able to provide multiple character sets in one database (maybe). As in every character set migration, it depends on the character set you want to migrate to.

Here are the limits for PDB migration:

  • The character set is the same as the national character set of the CDB. In this case, the plugging operation succeeds (as far as the national character set is concerned).
  • The character set is not the same as the national character set of the CDB. In this case, the new PDB can only be opened in restricted mode for administrative tasks and cannot be used for production. Unless you have a tool that can migrate the national character set of the new PDB to the character set of the CDB, the new PDB is unusable.

“If you cannot migrate your existing databases prior to consolidation, then you have to partition them into sets with plug-in compatible database character sets and plug each set into a separate CDB with the appropriate superset character set”:

  • US7ASCII, WE8ISO8859P1, and WE8MSWIN1252 into a WE8MSWIN1252 CDB
  • WE8ISO8859P15 into a WE8ISO8859P15 CDB
  • JA16SJISTILDE into a JA16SJISTILDE CDB
  • JA16EUC into a JA16EUC CDB
  • KO16KSC5601, KO16MSWIN949 into a KO16MSWIN949 CDB
  • UTF8 and AL32UTF8 into an AL32UTF8 CDB

Here is an example of PDB_PLUG_IN_VIOLATIONS for a compatible PDB:

NAME                           CAUSE                  TYPE      MESSAGE                                                                                              STATUS
------------------------------ ---------------------- --------- ---------------------------------------------------------------------------------------------------- ---------
DB12CEE4                       Database CHARACTER SET WARNING   Character set mismatch: PDB character set UTF8 CDB character set AL32UTF8.                           PENDING
DB12CEE4                       Non-CDB to PDB         WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.                                      PENDING
DB12CEE4                       Parameter              WARNING   CDB parameter sga_target mismatch: Previous 788529152 Current 2768240640                             PENDING
DB12CEE4                       Parameter              WARNING   CDB parameter pga_aggregate_target mismatch: Previous 262144000 Current 917504000                    PENDING

Migrating from UTF8 to AL32UTF8: after plug-in, the database character set has been changed automatically (it seems to happen during the dictionary check):

SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';

PARAMETER                      VALUE
------------------------------ ------------------------------
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_CHARACTERSET               AL32UTF8

Here is an example of PDB_PLUG_IN_VIOLATIONS for a non-compatible PDB:

NAME                           CAUSE                  TYPE      MESSAGE                                                                                              STATUS
------------------------------ ---------------------- --------- ---------------------------------------------------------------------------------------------------- ---------
DB12CEE1                       Database CHARACTER SET ERROR     Character set mismatch: PDB character set WE8MSWIN1252 CDB character set AL32UTF8.                   PENDING
DB12CEE1                       Non-CDB to PDB         WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.                                      PENDING
DB12CEE1                       Parameter              WARNING   CDB parameter sga_target mismatch: Previous 788529152 Current 2768240640                             PENDING
DB12CEE1                       Parameter              WARNING   CDB parameter pga_aggregate_target mismatch: Previous 262144000 Current 917504000                    PENDING

Migrating from WE8MSWIN1252 to AL32UTF8: after plug-in, the database character set has not changed:

SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';

PARAMETER                      VALUE
------------------------------ ------------------------------
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_CHARACTERSET               WE8MSWIN1252

As documented, plug-in and conversion were possible for both: example 1 succeeded, example 2 finished with errors because of the character set. Notice: productive use of the second example is not supported, but it is possible. Why?

  • R/W open only in restricted session mode –> maybe users connect as SYSDBA?
  • R/O open possible –> maybe you can use it for reporting, without changing the character set?
  • RMAN backup is possible

Why do I mention this? If you don’t take care of the character set during migration to PDB, the way back is not as easy as it should be. Of course, in this case I have to export and import into another PDB, but I can’t physically migrate back. I have two options to get to a usable result (WE8MSWIN1252 –> AL32UTF8):

  1. AL32UTF8 is a logical superset, so the data can be migrated with an export/import procedure
  2. Migration with replication

A direct conversion via plug-in is possible, but in this case it is only an intermediate step, because I have to do the migration again in a logical form.

Conclusion: Take care of the character set before migrating to a PDB. Non-convertible character sets need a conversion beforehand, using a logical (not a physical or partly physical) migration path.

Database Migration Assistant for Unicode (DMU)


Since 12c, csscan and csalter are not supported anymore (see here). Therefore the Database Migration Assistant for Unicode (DMU) has to be used now. DMU is an external tool which is not shipped with the database software. This tool is suitable for Unicode migrations (UTF8 or AL32UTF8) only.

So our first step is to download the software:

  • Link to download Database Migration Assistant for Unicode (DMU) here
  • This is a Java tool like SQL Developer and needs a JDK here.

After the first start there may be a warning due to the Java 7 version; just ignore it.

dmu2

Notice: Only the conversion of a Non-CDB is supported by this tool; otherwise you get an error like this:

dmu3

1. Preparing the database for conversion

1.1 Create repository

dmu4

1.2 Choose character set you want to migrate to

dmu5

1.3 Choose the tablespace where you want to store the migration repository. This doesn’t need much space, so SYSAUX is possible.

dmu6

1.4 Finish

dmu7

2. Scan database data

Like with csscan, you have to fill your migration repository with data to determine what should be converted:

dmu8

2.1 Choose parameters to start scan

dmu9

2.2 Choose data you want to scan

dmu10

2.3 Choose action for scan and start it

dmu11

2.4 Here is an example output of the scanning process

dmu12

2.5 Finish

dmu13

3. Start the conversion of the database

3.1 Make sure that you have a database backup!

3.2 Start convert wizard

dmu14

3.3 Check that the scan result timestamp is OK

dmu16

3.4 If you haven’t done so yet, take a backup before starting

dmu17

3.5 The conversion wizard will start now

dmu18

3.6 Choose conversion parameters

dmu19

3.7 Check details and start the process

dmu20

3.8 Here is an example output of the conversion process

dmu21

3.9 Finish

dmu22

4. Uninstall migration repository

If the migration was successful, you can uninstall the migration repository:

dmu23

dmu24

dmu25

 

Overall finished. Now your database has been migrated to Unicode.

Database 12c: Convert Non-CDB to PDB


To get data from a Non-CDB into a PDB, you have to convert the Non-CDB to a PDB, or use Data Pump or another tool for logical migration. The conversion process is similar to the unplug and plug of a PDB.

I’ve tried the following ways, which are all possible:

  • 11.2.0.3 –> 12.1.0.1 (First upgrade to a Non-CDB and then convert)
  • 12.1.0.1 EE Non-CDB –> 12.1.0.1 EE
  • 12.1.0.1 SE Non-CDB –> 12.1.0.1 EE
  • 12.1.0.1 EE Non-CDB –> 12.1.0.1 SE
  • 12.1.0.1 SE Non-CDB –> 12.1.0.1 SE

Ok, let’s start.

1. Bring the Non-CDB into a consistent state:

[bash]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Jul 18 23:30:06 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate

Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1043886080 bytes
Fixed Size 2296280 bytes
Variable Size 692061736 bytes
Database Buffers 343932928 bytes
Redo Buffers 5595136 bytes
Database mounted.
SQL> alter database open read only;

Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY

2. Create the export XML

SQL>
BEGIN
 DBMS_PDB.DESCRIBE(
 pdb_descr_file => '/opt/oracle/unplug/db12c1_convert.xml');
END;
/
PL/SQL procedure successfully completed.

Take a look into the XML file and check the last important part, the <ncdb2pdb>1</ncdb2pdb> entry, which marks this as a former Non-CDB:

<?xml version="1.0" encoding="UTF-8"?>
<PDB>
 <pdbname>DB12C1</pdbname>
 <cid>0</cid>
 <byteorder>1</byteorder>
 <vsn>202375168</vsn>
 <dbid>4126847348</dbid>
 <cdbid>4126847348</cdbid>
 <guid>E1CE01B6E8AB0B36E0430100007F25C7</guid>
 <uscnbas>1759919</uscnbas>
 <uscnwrp>0</uscnwrp>
 <rdba>4194824</rdba>
 <tablespace>
 <name>SYSTEM</name>
 <type>0</type>
 <tsn>0</tsn>
 <status>1</status>
 <issft>0</issft>
 <file>
 <path>/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_system_8yjcbjpj_.dbf</path>
 <afn>1</afn>
 <rfn>1</rfn>
 <createscnbas>7</createscnbas>
 <createscnwrp>0</createscnwrp>
 <status>1</status>
 <fileblocks>99840</fileblocks>
 <blocksize>8192</blocksize>
 <vsn>202375168</vsn>
 <fdbid>4126847348</fdbid>
 <fcpsw>0</fcpsw>
 <fcpsb>1759918</fcpsb>
 <frlsw>0</frlsw>
 <frlsb>1720082</frlsb>
 <frlt>821131319</frlt>
 </file>
 </tablespace>
 <tablespace>
 <name>SYSAUX</name>
 <type>0</type>
 <tsn>1</tsn>
 <status>1</status>
 <issft>0</issft>
 <file>
 <path>/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_sysaux_8yjc93j2_.dbf</path>
 <afn>3</afn>
 <rfn>3</rfn>
 <createscnbas>3922</createscnbas>
 <createscnwrp>0</createscnwrp>
 <status>1</status>
 <fileblocks>89600</fileblocks>
 <blocksize>8192</blocksize>
 <vsn>202375168</vsn>
 <fdbid>4126847348</fdbid>
 <fcpsw>0</fcpsw>
 <fcpsb>1759918</fcpsb>
 <frlsw>0</frlsw>
 <frlsb>1720082</frlsb>
 <frlt>821131319</frlt>
 </file>
 </tablespace>
 <tablespace>
 <name>TEMP</name>
 <type>1</type>
 <tsn>3</tsn>
 <status>1</status>
 <issft>0</issft>
 <bmunitsize>128</bmunitsize>
 <file>
 <path>/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_temp_8yjcdrw0_.tmp</path>
 <afn>1</afn>
 <rfn>1</rfn>
 <createscnbas>1720182</createscnbas>
 <createscnwrp>0</createscnwrp>
 <status>1</status>
 <fileblocks>7680</fileblocks>
 <blocksize>8192</blocksize>
 <vsn>202375168</vsn>
 <autoext>1</autoext>
 <maxsize>4194302</maxsize>
 <incsize>80</incsize>
 </file>
 </tablespace>
 <tablespace>
 <name>USERS</name>
 <type>0</type>
 <tsn>4</tsn>
 <status>1</status>
 <issft>0</issft>
 <file>
 <path>/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_users_8yjccxy4_.dbf</path>
 <afn>6</afn>
 <rfn>6</rfn>
 <createscnbas>26029</createscnbas>
 <createscnwrp>0</createscnwrp>
 <status>1</status>
 <fileblocks>640</fileblocks>
 <blocksize>8192</blocksize>
 <vsn>202375168</vsn>
 <fdbid>4126847348</fdbid>
 <fcpsw>0</fcpsw>
 <fcpsb>1759918</fcpsb>
 <frlsw>0</frlsw>
 <frlsb>1720082</frlsb>
 <frlt>821131319</frlt>
 </file>
 </tablespace>
 <optional>
 <csid>178</csid>
 <ncsid>2000</ncsid>
 <options>
 <option>APS=12.1.0.1.0</option>
 <option>CATALOG=12.1.0.1.0</option>
 <option>CATJAVA=12.1.0.1.0</option>
 <option>CATPROC=12.1.0.1.0</option>
 <option>CONTEXT=12.1.0.1.0</option>
 <option>DV=12.1.0.1.0</option>
 <option>JAVAVM=12.1.0.1.0</option>
 <option>OLS=12.1.0.1.0</option>
 <option>ORDIM=12.1.0.1.0</option>
 <option>OWM=12.1.0.1.0</option>
 <option>SDO=12.1.0.1.0</option>
 <option>XDB=12.1.0.1.0</option>
 <option>XML=12.1.0.1.0</option>
 <option>XOQ=12.1.0.1.0</option>
 </options>
 <olsoid>0</olsoid>
 <dv>0</dv>
 <ncdb2pdb>1</ncdb2pdb>
 <APEX>4.2.0.00.27:1</APEX>
 <parameters>
 <parameter>processes=300</parameter>
 <parameter>memory_target=1048576000</parameter>
 <parameter>db_block_size=8192</parameter>
 <parameter>compatible=12.1.0.0.0</parameter>
 <parameter>open_cursors=300</parameter>
 </parameters>
 <tzvers>
 <tzver>primary version:18</tzver>
 <tzver>secondary version:0</tzver>
 </tzvers>
 <walletkey>0</walletkey>
 </optional>
</PDB>

3. Shut down the source database and log in to the CDB
4. Check plug-in compatibility

SET SERVEROUTPUT ON
DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/opt/oracle/unplug/db12c1_convert.xml',
           pdb_name       => 'NCDB1')
      WHEN TRUE THEN 'YES'
      ELSE 'NO'
    END;
BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
END;
/
NO

PL/SQL procedure successfully completed.


If the script returns NO, check the PDB_PLUG_IN_VIOLATIONS view:

SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where name='NCDB1';

NAME                           CAUSE                  TYPE      MESSAGE                                                                                              STATUS
------------------------------ ---------------------- --------- ---------------------------------------------------------------------------------------------------- ---------
NCDB1                          OPTION                 WARNING   Database option APS mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.          PENDING
NCDB1                          OPTION                 WARNING   Database option SDO mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.          PENDING
NCDB1                          OPTION                 WARNING   Database option XOQ mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.          PENDING
NCDB1                          Non-CDB to PDB         WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.                                      PENDING
NCDB1                          Parameter              WARNING   CDB parameter sga_target mismatch: Previous 788529152 Current 1979711488                             PENDING
NCDB1                          Parameter              WARNING   CDB parameter pga_aggregate_target mismatch: Previous 262144000 Current 655360000                    PENDING

6 rows selected.

There are only WARNINGs, no ERRORs, so we can continue.
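Only ERROR-type violations block the plug-in; WARNINGs can usually be addressed afterwards. A quick way to check for blockers is to filter the same view by type (a sketch, using the PDB name from this example):

```sql
-- Show only blocking (ERROR) violations for the PDB to be plugged in
SELECT name, cause, message, status
FROM   pdb_plug_in_violations
WHERE  name = 'NCDB1'
AND    type = 'ERROR'
AND    status <> 'RESOLVED';
```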

5. Plug in the non-CDB:
5.1 Using NOCOPY mode:

SQL> CREATE PLUGGABLE DATABASE ncdb1 USING '/opt/oracle/unplug/db12c1_convert.xml' NOCOPY;
*
ERROR at line 1:
ORA-27038: created file already exists
ORA-01119: error in creating database file
'/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_temp_8yjcdrw0_.tmp'

When using NOCOPY, either delete the existing tempfile first or reuse it with TEMPFILE REUSE:

SQL> CREATE PLUGGABLE DATABASE ncdb1 USING '/opt/oracle/unplug/db12c1_convert.xml' NOCOPY TEMPFILE REUSE;

Pluggable database created.

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_system_8xmpwox6_.dbf
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_sysaux_8xmptylz_.dbf
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_undotbs1_8xmpygh6_.dbf
/opt/oracle/app/oradata/pdbseed/system01.dbf
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_users_8xmpyf7f_.dbf
/opt/oracle/app/oradata/pdbseed/sysaux01.dbf
/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_system_8yjcbjpj_.dbf
/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_sysaux_8yjc93j2_.dbf
/opt/oracle/app/oradata/DB12C1/datafile/o1_mf_users_8yjccxy4_.dbf

9 rows selected.
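With NOCOPY the datafiles stay in their original DB12C1 location, as the listing above shows. To confirm which files belong to the new PDB, you can join v$pdbs and v$datafile on the container id (a sketch, assuming the PDB name NCDB1 from this example):

```sql
-- List the datafiles that belong to the newly plugged-in PDB
SELECT p.name AS pdb_name, d.name AS file_name
FROM   v$pdbs p
JOIN   v$datafile d ON d.con_id = p.con_id
WHERE  p.name = 'NCDB1';
```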

5.2 Using COPY mode:

SQL> CREATE PLUGGABLE DATABASE ncdb1 USING '/opt/oracle/unplug/db12c1_convert.xml' COPY;

Pluggable database created.

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_system_8xmpwox6_.dbf
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_sysaux_8xmptylz_.dbf
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_undotbs1_8xmpygh6_.dbf
/opt/oracle/app/oradata/pdbseed/system01.dbf
/opt/oracle/app/oradata/CDB12C/datafile/o1_mf_users_8xmpyf7f_.dbf
/opt/oracle/app/oradata/pdbseed/sysaux01.dbf
/opt/oracle/app/oradata/CDB12C/E1CE01B6E8AB0B36E0430100007F25C7/datafile/o1_mf_system_8yjr6xjs_.dbf
/opt/oracle/app/oradata/CDB12C/E1CE01B6E8AB0B36E0430100007F25C7/datafile/o1_mf_sysaux_8yjr7262_.dbf
/opt/oracle/app/oradata/CDB12C/E1CE01B6E8AB0B36E0430100007F25C7/datafile/o1_mf_users_8yjr766m_.dbf

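Besides NOCOPY and COPY there is a third variant, MOVE, which relocates the datafiles into the target location instead of copying them. A sketch using FILE_NAME_CONVERT (the destination path below is an assumption, not from this example):

```sql
-- MOVE relocates the source datafiles instead of copying them;
-- the destination directory here is purely illustrative
CREATE PLUGGABLE DATABASE ncdb1
  USING '/opt/oracle/unplug/db12c1_convert.xml'
  MOVE
  FILE_NAME_CONVERT = ('/opt/oracle/app/oradata/DB12C1/datafile/',
                       '/opt/oracle/app/oradata/CDB12C/NCDB1/');
```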
6. Switch the container to the PDB and run the conversion script:

SQL> alter session set container = NCDB1;

Session altered.

SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> SET SERVEROUTPUT ON
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL> 
SQL> WHENEVER SQLERROR EXIT;
SQL> 
SQL> DOC
DOC>#######################################################################
DOC>#######################################################################
DOC> The following statement will cause an "ORA-01403: no data found"
DOC> error if we're not in a PDB.
DOC> This script is intended to be run right after plugin of a PDB,
DOC> while inside the PDB.
DOC>#######################################################################
DOC>#######################################################################
DOC>#
SQL> 
SQL> VARIABLE cdbname VARCHAR2(128)
SQL> VARIABLE pdbname VARCHAR2(128)
SQL> BEGIN
 2 SELECT sys_context('USERENV', 'CDB_NAME')
 3 INTO :cdbname
 4 FROM dual
 5 WHERE sys_context('USERENV', 'CDB_NAME') is not null;
 6 SELECT sys_context('USERENV', 'CON_NAME')
 7 INTO :pdbname
 8 FROM dual
 9 WHERE sys_context('USERENV', 'CON_NAME') <> 'CDB$ROOT';
 10 END;
 11 /

PL/SQL procedure successfully completed.

SQL> 
SQL> COLUMN pdbname NEW_VALUE pdbname
SQL> COLUMN pdbid NEW_VALUE pdbid
SQL> 
SQL> select :pdbname pdbname from dual;

PDBNAME
--------------------------------------------------------------------------------
NCDB1

1 row selected.
...
the script performs quite a few interesting steps here (output truncated)
...
SQL> alter session set container = "&pdbname";

Session altered.

SQL> 
SQL> -- leave the PDB in the same state it was when we started
SQL> BEGIN
 2 execute immediate '&open_sql &restricted_state';
 3 EXCEPTION
 4 WHEN OTHERS THEN
 5 BEGIN
 6 IF (sqlcode <> -900) THEN
 7 RAISE;
 8 END IF;
 9 END;
 10 END;
 11 /

PL/SQL procedure successfully completed.

SQL> 
SQL> WHENEVER SQLERROR CONTINUE;
SQL>

7. Now you can open the PDB. An error may occur:

SQL> show con_name

CON_NAME
------------------------------
NCDB1

SQL> alter database open read write;
alter database open read write
*
ERROR at line 1:
ORA-24344: success with compilation error

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

1 row selected.

In the alert log you will see:
****************************************************************
WARNING: Pluggable Database NCDB1 with pdb id - 3 is altered with warnings.
Please look into PDB_PLUG_IN_VIOLATIONS view for more details.
****************************************************************

8. To correct the ORA-24344, check PDB_PLUG_IN_VIOLATIONS, fix the reported problems, and resync the PDB:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> alter pluggable database ncdb1 close;

Pluggable database altered.

SQL> alter pluggable database ncdb1 open restricted;

Pluggable database altered.

SQL> exec dbms_pdb.sync_pdb();

PL/SQL procedure successfully completed.

SQL> alter pluggable database ncdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database ncdb1 open;

Pluggable database altered.
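ORA-24344 ("success with compilation error") is often caused by objects left invalid during the conversion. A sketch of one common remediation, recompiling invalid objects with the standard utlrp.sql script (assuming invalid objects are indeed the cause in your case):

```sql
-- Switch into the PDB and recompile invalid objects
ALTER SESSION SET CONTAINER = NCDB1;
@?/rdbms/admin/utlrp.sql

-- Verify that no invalid objects remain
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
```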

9. Recheck for open problems:

SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where name='NCDB1';

NAME                           CAUSE                  TYPE      MESSAGE                                                                                              STATUS
------------------------------ ---------------------- --------- ---------------------------------------------------------------------------------------------------- ---------
NCDB1                          Non-CDB to PDB         ERROR     PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.                                      RESOLVED
NCDB1                          OPTION                 WARNING   Database option APS mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.          PENDING
NCDB1                          OPTION                 WARNING   Database option SDO mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.          PENDING
NCDB1                          OPTION                 WARNING   Database option XOQ mismatch: PDB installed version NULL. CDB installed version 12.1.0.1.0.          PENDING

10. Done. The PDB is ready to use. Don’t forget to back up the new database.
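A newly plugged-in PDB has never been backed up as part of this CDB, so an immediate backup is a good idea. A minimal RMAN sketch (assuming RMAN is already configured for the CDB; run from an RMAN session connected to the root, e.g. rman target /):

```sql
-- Back up just the new PDB
BACKUP PLUGGABLE DATABASE ncdb1;

-- Or back up the whole CDB including all PDBs and archived logs
BACKUP DATABASE PLUS ARCHIVELOG;
```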