Tablespaces in multitenancy

Note:

  • In a non-CDB, all the tablespaces belong to one database. 
  • In a CDB, one set of tablespaces belongs to the root container, and each PDB has its own set of tablespaces. 
  • Common objects are created, and their data stored, in a tablespace in the root container. The common objects are visible in the PDBs through links. 
  • There are new clauses in the CREATE DATABASE command. The USER_DATA TABLESPACE clause allows you to specify a default tablespace other than USERS when using DBCA to create a database. This tablespace will also be used for XDB options.
  • The UNDO tablespace is common to all PDBs; that is, you can create more than one, but only one UNDO tablespace is active per CDB.

To view all the tablespaces:

SQL> SELECT tablespace_name, DECODE(pdb_name, NULL, 'ROOT',pdb_name) pdb_name
  2  FROM cdb_tablespaces t, cdb_pdbs p
  3  WHERE t.con_id = p.pdb_id(+)
  4  ORDER BY 2,1;

TABLESPACE_NAME                PDB_NAME
------------------------------ --------------------
SYSAUX                         PDB$SEED
SYSTEM                         PDB$SEED
TEMP                           PDB$SEED
SYSAUX                         PDB2
SYSTEM                         PDB2
TEMP                           PDB2
SYSAUX                         ROOT
SYSTEM                         ROOT
TEMP                           ROOT
UNDOTBS1                       ROOT
UNDOTBS2                       ROOT
USERS                          ROOT

12 rows selected.

The CREATE TABLESPACE command should be familiar. The change in its behavior in a CDB is that the tablespace is created in the container where the command is executed. Separating the data files into different directories by PDB is not required, but it helps determine which files belong to which PDB and is good practice from an administration perspective.

Create a tablespace in the CDB root

SQL> connect system/oracle@cdb
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> CREATE TABLESPACE cdb_tbs
  2  DATAFILE '+DATA/ora12c/cdb_tbs01.dbf'
  3  SIZE 5M;

Tablespace created.

Create a tablespace in a PDB

SQL> connect system/oracle@pdb2
Connected.

SQL> show con_name
CON_NAME
------------------------------
PDB2
SQL> CREATE TABLESPACE pdb2_tbs
  2  DATAFILE '+DATA/pdb2/pdb2_tbs01.dbf'
  3  SIZE 5M;

Tablespace created.

Using glogin.sql for a friendlier SQL*Plus multitenancy environment

When working with multitenancy, it is convenient to configure glogin.sql to show container information. I use the following glogin.sql.

You can find the glogin.sql file in $ORACLE_HOME/sqlplus/admin


column id_plus_exp format 990 heading i
column parent_id_plus_exp format 990 heading p
column plan_plus_exp format a60
column object_node_plus_exp format a8
column other_tag_plus_exp format a29
column other_plus_exp format a44
define _editor=notepad
--define _editor=vi
set serveroutput on size 1000000
set trimspool on
set long 5000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
define gname=idle
column global_name new_value gname
select lower(user) || '@' || substr(global_name, 1,
decode( dot, 0,
length(global_name), dot-1)
) || '->'|| host_name global_name
from (select global_name, instr(global_name,'.') dot from global_name)
,v$instance;
set sqlprompt '&gname> '
set termout on

For UNIX or Linux you need to change the editor definition from

define _editor=notepad

to

define _editor=vi

A sample session looks like the following:

C:\>sqlplus system/oracle@pdb2

SQL*Plus: Release 12.1.0.1.0 Production on Tue Dec 17 15:40:38 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Tue Dec 17 2013 15:32:59 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

system@PDB2->ora12cn1>

Cost-Based Optimizer (CBO) Parameter Settings

The parameters that control optimizer behavior are:


  • CURSOR_SHARING

determines what kinds of SQL statements can share the same cursors. The possible values are:

FORCE
FORCE forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement.

SIMILAR
SIMILAR causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized. Forcing cursor sharing among similar (but not identical) statements can have unexpected results in some DSS applications, or applications that use stored outlines.

EXACT
EXACT only allows statements with identical text to share the same cursor. This is the default.
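
For example (a minimal sketch, assuming you hold the ALTER SESSION and ALTER SYSTEM privileges), the parameter can be changed at either level:

-- Session level: share cursors for statements differing only in literals
ALTER SESSION SET cursor_sharing = FORCE;

-- System level: return to the default behavior
ALTER SYSTEM SET cursor_sharing = EXACT SCOPE=BOTH;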



  • DB_FILE_MULTIBLOCK_READ_COUNT

is one of the parameters you can use to minimize I/O during table scans or index fast full scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan or an index fast full scan depends on factors such as the size of the segment, the multiblock read count, and whether parallel execution is being used for the operation.

As of Oracle Database 10gR2, the default value of this parameter corresponds to the maximum I/O size that can be performed efficiently. This value is platform dependent and is 1 MB for most platforms.

Because the parameter is expressed in blocks, the default is automatically computed as the maximum I/O size that can be performed efficiently divided by the standard block size.

Note that if the number of sessions is extremely large, the multiblock read count value is decreased to avoid flooding the buffer cache with too many table scan buffers. Even though the default value may be large, the optimizer does not favor large plans if you do not set this parameter. It would do so only if you explicitly set this parameter to a large value. Basically, if this parameter is not set explicitly (or is set to 0), the optimizer uses a default value of 8 when costing full table scans and index fast full scans.

OLTP and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to select a full table scan over an index if the value of this parameter is high.
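
As a sketch, you can check the current value and, for a DSS workload for example, set it explicitly (the value 64 is illustrative only):

SHOW PARAMETER db_file_multiblock_read_count

-- Illustrative value for a DSS workload
ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE=BOTH;

-- Or remove the explicit setting so the default is computed automatically
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE;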



  • PGA_AGGREGATE_TARGET

specifies the target aggregate PGA memory available to all server processes attached to the instance. Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of automatically setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL work areas used by memory-intensive SQL operators, such as sort, group by, hash join, bitmap merge, and bitmap create, are automatically sized.

A nonzero value for the PGA_AGGREGATE_TARGET parameter is the default: unless you specify otherwise, the system sets it to 20% of the SGA or 10 MB, whichever is greater.

Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL. This means that SQL work areas are sized using the *_AREA_SIZE parameters. When the automatic policy is in effect, the system attempts to keep the amount of private memory below the target specified by this parameter by adapting the size of the work areas.

When increasing the value of this parameter, you indirectly increase the memory allotted to work areas. Consequently, more memory-intensive operations are able to run fully in memory, and fewer of them spill to disk. When setting this parameter, you should examine the total memory on your system that is available to the Oracle instance and subtract the SGA. You can assign the remaining memory to PGA_AGGREGATE_TARGET.
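
A minimal sizing-and-monitoring sketch; the 2G value is illustrative only and should be derived from the memory left after subtracting the SGA:

ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE=BOTH;

-- Monitor how the target is being used
SELECT name, value
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');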



  • STAR_TRANSFORMATION_ENABLED

determines whether a cost-based query transformation is applied to star queries.



  • RESULT_CACHE_MODE

The query optimizer manages the result cache mechanism depending on the settings of the RESULT_CACHE_MODE parameter in the initialization parameter file. You can use this parameter to determine whether or not the optimizer automatically sends the results of queries to the result cache. The possible parameter values are:


MANUAL
When set to MANUAL (the default), you must specify, by using the RESULT_CACHE hint, that a particular result is to be stored in the cache.

FORCE
When set to FORCE, all results are stored in the cache. For the FORCE setting, if the statement contains a [NO_]RESULT_CACHE hint, the hint takes precedence over the parameter setting.
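
For example, under the default MANUAL mode, caching is requested per query through the hint (a sketch reusing the COSTS sample table that appears later in this post):

ALTER SESSION SET result_cache_mode = MANUAL;

-- Request caching of this specific result
SELECT /*+ RESULT_CACHE */ prod_id, AVG(unit_cost)
FROM   costs
GROUP BY prod_id;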



  • RESULT_CACHE_MAX_SIZE
  • RESULT_CACHE_MAX_RESULT

The memory size allocated to the result cache depends on the memory size of the SGA as well as the memory management system. You can change the memory allocated to the result cache by setting the RESULT_CACHE_MAX_SIZE parameter. The result cache is disabled if you set its value to 0. The value of this parameter is rounded to the largest multiple of 32 KB that is not greater than the specified value. If the rounded value is 0, the feature is disabled.

Use the RESULT_CACHE_MAX_RESULT parameter to specify the maximum amount of cache memory that can be used by any single result. The default value is 5%, but you can specify any percentage value between 1 and 100.
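
A sketch of setting both parameters; the values are illustrative only:

-- Give the result cache 32 MB, and let any single result
-- use at most 10% of it
ALTER SYSTEM SET result_cache_max_size = 32M SCOPE=BOTH;
ALTER SYSTEM SET result_cache_max_result = 10 SCOPE=BOTH;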



  • RESULT_CACHE_REMOTE_EXPIRATION

Use the RESULT_CACHE_REMOTE_EXPIRATION parameter to specify the time (in minutes) for which a result that depends on remote database objects remains valid. The default value is 0, which implies that results using remote objects should not be cached. Setting this parameter to a nonzero value can produce stale answers, for example, if the remote table used by a result is modified at the remote database.



  • OPTIMIZER_INDEX_CACHING

OPTIMIZER_INDEX_CACHING controls the costing of an index probe in conjunction with a nested loop or an inlist iterator. The range of values 0 to 100 for OPTIMIZER_INDEX_CACHING indicates the percentage of index blocks in the buffer cache, which modifies the optimizer's assumptions about index caching for nested loops and inlist iterators.

A value of 100 implies that 100% of the index blocks are likely to be found in the buffer cache, and the optimizer adjusts the cost of an index probe or nested loop accordingly. The default for this parameter is 0, which results in default optimizer behavior. Use caution when using this parameter because execution plans can change in favor of index caching.



  • OPTIMIZER_INDEX_COST_ADJ

OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection to be more or less index friendly. That is, to make the optimizer more or less prone to selecting an index access path over a full table scan.

The range of values is 1 to 100. The default for this parameter is 100, at which the optimizer evaluates index access paths at the regular cost. Any other value makes the optimizer evaluate the access path at that percentage of the regular cost. For example, a setting of 50 makes the index access path look half as expensive as normal.
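
As a sketch, both index-related parameters can be tried at the session level before any system-wide change (values illustrative):

-- Assume 90% of index blocks are cached
ALTER SESSION SET optimizer_index_caching = 90;

-- Make index access paths look half as expensive as normal
ALTER SESSION SET optimizer_index_cost_adj = 50;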


  • OPTIMIZER_FEATURES_ENABLE

OPTIMIZER_FEATURES_ENABLE acts as an umbrella parameter for enabling a series of optimizer features based on an Oracle release number.

For example, if you upgrade your database from release 10.1 to release 11.1 but want to keep the release 10.1 optimizer behavior, you can do so by setting this parameter to 10.1.0. At a later time, you can try the enhancements introduced in releases up to and including release 11.1 by setting the parameter to 11.1.0.6.

However, it is not recommended to explicitly set the OPTIMIZER_FEATURES_ENABLE parameter to an earlier release. To avoid possible SQL performance regressions that may result from execution plan changes, consider using SQL plan management instead.
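
A sketch of the upgrade scenario described above; note that the parameter takes a release string:

-- Keep release 10.1 optimizer behavior after an upgrade
ALTER SYSTEM SET optimizer_features_enable = '10.1.0' SCOPE=BOTH;

-- Later, enable the enhancements up to and including release 11.1
ALTER SYSTEM SET optimizer_features_enable = '11.1.0.6' SCOPE=BOTH;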



  • OPTIMIZER_MODE

OPTIMIZER_MODE establishes the default behavior for selecting an optimizing approach for either the instance or your session. The possible values are:

ALL_ROWS
The ALL_ROWS value causes the optimizer to use a cost-based approach for all SQL statements in the session, regardless of the presence of statistics, and optimizes with a goal of best throughput (minimum resource use to complete the entire statement). This is the default value.

FIRST_ROWS_n
When you use the FIRST_ROWS_n value, the optimizer uses a cost-based approach, regardless of the presence of statistics, and optimizes with a goal of best response time to return the first n rows; n can equal 1, 10, 100, or 1000.

FIRST_ROWS
The FIRST_ROWS value causes the optimizer to use a mix of cost and heuristics to find the best plan for fast delivery of the first few rows. Using heuristics sometimes leads the query optimizer to generate a plan with a cost that is significantly larger than the cost of a plan without the heuristic. FIRST_ROWS is available for backward compatibility and plan stability; use FIRST_ROWS_n instead.
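
For example, a session serving an interactive screen can favor fast delivery of the first rows (a sketch):

-- Optimize for fast delivery of the first 10 rows
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Return to the default throughput goal
ALTER SESSION SET optimizer_mode = ALL_ROWS;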



  • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES
  • OPTIMIZER_USE_SQL_PLAN_BASELINES
  • OPTIMIZER_DYNAMIC_SAMPLING

OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES enables or disables the automatic recognition of repeatable SQL statements, as well as the generation of SQL plan baselines for such statements.

OPTIMIZER_USE_SQL_PLAN_BASELINES enables or disables the use of SQL plan baselines stored in the SQL Management Base. When enabled, the optimizer looks for a SQL plan baseline for the SQL statement being compiled. If one is found in the SQL Management Base, the optimizer costs each of the baseline plans and picks the one with the lowest cost.

OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the optimizer. If OPTIMIZER_FEATURES_ENABLE is set to 10.0.0 or later, the default value is 2. If it is set to 9.2.0, the default value is 1, and if it is set to 9.0.1 or earlier, the default value is 0.
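
A brief sketch of setting these parameters (the sampling level is illustrative):

-- Capture SQL plan baselines for repeatable statements
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE SCOPE=BOTH;

-- Raise the dynamic sampling level for this session
ALTER SESSION SET optimizer_dynamic_sampling = 4;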


  • OPTIMIZER_USE_INVISIBLE_INDEXES
  • OPTIMIZER_USE_PENDING_STATISTICS

OPTIMIZER_USE_INVISIBLE_INDEXES enables or disables the use of invisible indexes.


OPTIMIZER_USE_PENDING_STATISTICS specifies whether or not the optimizer uses pending statistics when compiling SQL statements.


SQL Tuning Tools

Several tuning tools are available to detect and rectify SQL errors and performance issues:


Automatic Database Diagnostic Monitor

The Automatic Database Diagnostic Monitor (ADDM) continually analyzes the performance data that is collected from the database instance.


SQL Tuning Advisor

The SQL Tuning Advisor analyzes SQL statements that have been identified as problematic, in an effort to retune them. By default, this is an automated task. You can also, at any time, run the SQL Tuning Advisor on a specific SQL workload to look for ways to improve performance.


SQL Tuning Sets

SQL tuning sets serve as a repository for sets of SQL statements. For example, the SQL Tuning Advisor can run against a workload that is represented by a SQL tuning set. SQL tuning sets can even be transported from database to database, to perform analysis on different machines.


SQL Access Advisor

The SQL Access Advisor analyzes a SQL statement and provides advice on materialized views, indexes, materialized view logs, and partitions.


SQL Performance Analyzer

The SQL Performance Analyzer automates the process of assessing the overall effect of a change, such as upgrading the database or adding new indexes, on the full SQL workload by identifying the performance divergence for each statement.


SQL Monitoring

The real-time SQL monitoring feature of Oracle Database enables you to monitor the performance of SQL statements while they execute.


SQL Plan Management 

The SQL Plan Management tool can be used to control execution plan evolution.


Adaptive SQL Plan Management


SQL plan management (SPM) ensures that runtime performance will never degrade due to the change of an execution plan. To guarantee this, only accepted execution plans are used; any plan evolution that does occur is tracked and evaluated at a later point in time, and accepted only if the new plan shows a noticeable improvement in runtime.
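
As a sketch, you can see which plans have been captured but not yet accepted by querying the standard DBA_SQL_PLAN_BASELINES view (the predicate is illustrative):

SELECT sql_handle, plan_name, origin, enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  accepted = 'NO';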

General Database Tuning Tasks

Many SQL tuning tasks should be performed on a regular basis. You may see a way to rewrite a WHERE clause, but it may depend on a new index being built. The following list gives you a background of some important tasks that must be performed, and an idea of what dependencies you may have as you tune SQL:

1. Identifying high-load SQL statements

Identifying high-load SQL statements is one of the most important tasks you should perform. The ADDM is the ideal tool for this particular task.

2. Scheduling a job to gather optimizer statistics

By default, Oracle Database gathers optimizer statistics automatically. For this, a job is scheduled to run in the maintenance windows.
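
As a quick check (a sketch using the standard autotask view), you can confirm that the automated statistics task is enabled:

SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';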

3. Identifying usage and performance of hardware components

OS statistics provide information about the usage and performance of the main hardware components, as well as the performance of the OS itself.

4. Rebuilding indexes

Often, rebuilding indexes has a beneficial impact on performance: for example, removing nonselective indexes to speed up DML, or adding columns to an index to improve its selectivity.


5. Using stored statistics, outlines, or SQL plan baselines

You can maintain the existing execution plan of SQL statements over time by using stored statistics, outlines, or SQL plan baselines.
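
A minimal sketch of exporting schema statistics with DBMS_STATS; the HR schema and the STATS_BACKUP table name are assumptions for illustration:

-- Create a statistics table, then export the schema statistics into it
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'HR', stattab => 'STATS_BACKUP');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'STATS_BACKUP');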


When you tune your system, it is important that you compare the CPU time with the wait time of your system. By comparing CPU time with wait time, you can determine how much of response time is spent on useful work and how much on waiting for resources potentially held by other processes.
As a general rule, systems where CPU time is dominant usually need less tuning than those where wait time is dominant. On the other hand, high CPU usage can be caused by badly written SQL statements.

High CPU time = possibly needs SQL tuning
High wait time = needs instance/RAC tuning

Although the proportion of CPU time to wait time always tends to decrease as load on the system increases, steep increases in wait time are a sign of contention and must be addressed for good scalability. Scalability is a system's ability to process more workload with a proportional increase in system resource use.

Adding more CPUs to a node, or nodes to a cluster, would provide very limited benefit under contention. Conversely, a system where the proportion of CPU time does not decrease significantly as load increases can scale better, and would most likely benefit from adding CPUs or RAC instances if needed.

AWR reports display CPU time together with wait time in the Top Timed Events section, if the CPU time portion is among the top five events.

Poor application design, implementation, and configuration have a significant impact on scalability. They result in poor SQL and index design, leading to a higher number of logical I/Os for the same number of rows returned. They also reduce availability because database objects take longer to maintain.

However, design is not the only problem. The physical implementation of the application can be the weak link, as in the following examples:
  • Systems can move to production environments with poorly written SQL that causes high I/O.
  • Infrequent transaction COMMITs or ROLLBACKs can cause long locks on resources.
  • The production environment can use different execution plans than those generated in testing.
  • Memory-intensive applications that allocate a large amount of memory without much thought for freeing it can cause excessive memory fragmentation.
  • Inefficient memory usage places high stress on the operating system's virtual memory subsystem, and this affects performance and availability.

There are some common mistakes made on customer systems. Although these mistakes are not directly detected by ADDM, ADDM highlights the resulting high-load SQL. These mistakes are:

1. Bad Connection Management

The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance and is totally unscalable.

2. Bad use of cursors and the shared pool

Not using cursors results in repeated parses. If bind variables are not used, there may be hard parsing of all similar SQL statements. This has an order of magnitude impact on performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.

3. Bad SQL

Bad SQL is SQL that uses more resources than appropriate for the application. This can be a DSS query that runs for more than 24 hours, or a query from an online application that takes more than a minute.
SQL that consumes significant system resources should be investigated for potential improvement. ADDM identifies high-load SQL, and the SQL Tuning Advisor can be used to provide recommendations for improvement.


4. Use of nonstandard initialization parameters

The use of nonstandard initialization parameters might have been implemented based on poor advice or incorrect assumptions. Most systems give acceptable performance using only the set of basic parameters. In particular, undocumented optimizer features can cause a great many problems that may require considerable investigation.
Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.


5. Getting the database I/O wrong

Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly because they configure disks by disk space and not by I/O bandwidth.

6. Redo log setup problems

Many sites run with redo log files that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and the I/O system. If there are very few redo logs, the archiver cannot keep up, and the database waits for the archive process to catch up.
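
A hedged sketch of the usual remedy: add larger groups, switch out of the small ones, then drop them (group numbers and the 1G size are illustrative; the disk group matches the examples in this post):

ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA') SIZE 1G;

-- Once the small group is inactive:
ALTER DATABASE DROP LOGFILE GROUP 1;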

7. Excessive serialization

Serialization of data blocks in the buffer cache due to a shortage of undo segments is particularly common in applications with a large number of active users and few undo segments. Use Automatic Segment Space Management (ASSM) and automatic undo management to solve this problem.
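
A sketch of both remedies; names and sizes are illustrative, and the disk group matches the examples in this post:

-- Locally managed tablespace with ASSM
CREATE TABLESPACE app_data
  DATAFILE '+DATA/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Automatic undo management (the default in recent releases)
ALTER SYSTEM SET undo_management = AUTO SCOPE=SPFILE;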


8. Long full table scans

Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.


9. High amount of recursive (SYS) SQL

Large amounts of recursive SQL executed by SYS could indicate that space management activities, such as extent allocations, are taking place. This is unscalable and impacts user response time. Use locally managed tablespaces to reduce recursive SQL due to extent allocation. Recursive SQL executed under another user ID is probably SQL and PL/SQL, so this is not a problem.


10. Deployment and migration errors

In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to suboptimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package.


Proactive Tuning

Tuning usually implies fixing a performance problem. However, tuning should be part of the life cycle of an application, through the analysis, design, coding, production, and maintenance stages. The tuning phase is often left until the system is in production. At that time, tuning becomes a reactive exercise, where the most important bottleneck is identified and fixed.

Applications are no different from any other designed and engineered product. If the design looks right, it probably is right. This principle should always be kept in mind when building applications. Consider some of the following design issues that should be tuned proactively instead of reactively:


1. Simple Design

If the table design is so complicated that nobody can fully understand it, the table is probably designed badly.

If SQL statements are so long and involved that it would be impossible for any optimizer to effectively optimize them in real time, there is probably a bad statement, underlying transaction, or table design.

If there are many indexes on a table and the same columns are repeatedly indexed, there is probably a bad index design.

If queries are submitted without suitable qualification (the WHERE clause) for rapid response for online users, there is probably a bad user interface or transaction design.


2. Data Modeling

Data modeling is important in successful relational application design. It should be done in a way that quickly and accurately represents the business practices. Apply your greatest modeling efforts to those entities affected by the most frequent business transactions. Modeling tools can rapidly generate schema definitions and can be useful when a fast prototype is required.

Normalizing data prevents duplication. When data is normalized, you have a clear picture of the keys and relationships. It is then easier to perform the next step of creating tables, constraints, and indexes. A good data model ultimately means that your queries are written more efficiently.

3. Tables and Indexes

Table design is largely a compromise between flexibility and performance of core transactions. To keep the database flexible and able to accommodate unforeseen workloads, the table design should be very similar to the data model, and it should be normalized to at least third normal form. However, certain core transactions can require selective denormalization for performance purposes.

Use the features supplied with Oracle Database to simplify table design for performance, such as storing tables prejoined in clusters, adding derived columns and aggregate values, and using materialized views or partitioned tables. Additionally, create check constraints and columns with default values to prevent bad data from getting into the tables.

Design should be focused on business-critical tables so that good performance can be achieved in the areas that are used the most. For noncritical tables, shortcuts in design can be adopted to enable more rapid application development. If, however, a noncore table becomes a performance problem during prototyping and testing, remedial design efforts should be applied immediately.

Index design is also largely an iterative process, based on the SQL that is generated by application designers. However, it is possible to make a sensible start by building indexes that enforce foreign key constraints (to reduce response time on joins between primary key tables and foreign key tables) and creating indexes on frequently accessed data, such as a person's name.

Primary keys and unique keys are automatically indexed, except for DISABLE VALIDATE and DISABLE NOVALIDATE RELY constraints. As the application evolves and testing is performed on realistic sizes of data, certain queries need performance improvements, for which building a better index is a good solution.

When building a new index, one of the easiest ways to speed up a query is to reduce the number of logical I/Os by eliminating a table scan from the execution plan. This can be done by appending to the index all the columns of the table referenced by the query. These columns are the select list columns and any required join or sort columns.

This technique is particularly useful in speeding up an online application's response time when time-consuming I/Os are reduced. It is best applied when testing the application with properly sized data for the first time. The most aggressive form of this technique is to build an index-organized table (IOT).
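
For illustration, a hypothetical order-items table built as an IOT, so the primary key index is the table itself and lookups by key avoid a separate table access:

CREATE TABLE order_items (
  order_id NUMBER,
  line_no  NUMBER,
  prod_id  NUMBER,
  qty      NUMBER,
  CONSTRAINT order_items_pk PRIMARY KEY (order_id, line_no)
) ORGANIZATION INDEX;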

4. Views

Views can speed up and simplify application design. A simple view definition can mask data model complexity from the programmers whose priorities are to retrieve, display, collect, and store data.

However, though views provide clean programming interfaces, they can cause suboptimal, resource-intensive queries when nested too deeply. The worst type of view use is creating joins on views that reference other views, which in turn reference other views. In many cases, developers can satisfy the query directly from the table without using a view. Because of their inherent properties, views usually make it difficult for the optimizer to generate the optimal execution plan.

5. Writing Efficient SQL

An application designed for efficient SQL execution must support the following characteristics:

5.1. Good database connectivity

Connecting to the database is an expensive operation that is highly unscalable. Therefore, the number of concurrent connections to the database should be minimized as much as possible. A simple system, where a user connects at application initialization, is ideal.

However, in a web-based or multi-tiered application, where application servers are used to multiplex database connections to users, this can be difficult. With these types of applications, design efforts should ensure that database connections are pooled and not re-established for each user request.

5.2. Good cursor usage and management

Maintaining user connections is equally important for minimizing the parsing activity on the system. Parsing is the process of interpreting a SQL statement and creating an execution plan for it. This process has many phases, including syntax checking, security checking, execution plan generation, and loading shared structures into the shared pool.

5.3. Minimal parsing

There are two types of parse operations: hard and soft. In hard parsing, a SQL statement is submitted for the first time, and no match is found in the shared pool. Hard parses are the most resource-intensive and unscalable, because they perform all the operations involved in a parse.

In soft parsing, a SQL statement is submitted for the first time, and a match is found in the shared pool. The match can be the result of previous execution by another user. The SQL statement is shared, which is good for performance. However, soft parses are not ideal because they still require syntax and security checking, which consume system resources.

Because parsing should be minimized as much as possible, application developers should design their application to parse SQL statements once and execute them many times. This is done through cursors. Experienced SQL programmers should be familiar with the concept of opening and re-executing cursors.

5.4. Usage of bind variables

Application developers must also ensure that SQL statements are shared within the shared pool. To do this, use bind variables to represent the parts of the query that change from execution to execution. If this is not done, the SQL statement is likely to be parsed once and never reused by other users. To ensure that SQL is shared, use bind variables and do not use string literals with SQL statements.
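
A minimal SQL*Plus sketch using the HR sample schema:

-- Literal version: a distinct cursor for every salary value
SELECT employee_id, last_name FROM employees WHERE salary > 5000;

-- Bind-variable version: one shared cursor for all values
VARIABLE sal NUMBER
EXEC :sal := 5000
SELECT employee_id, last_name FROM employees WHERE salary > :sal;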

6. Cursor Sharing

For Oracle to share cursors, the code must be written exactly the same way, character for character, unless you use special initialization parameters such as CURSOR_SHARING. This allows the system to recognize that two statements are the same and thus can be shared. You should, therefore, develop coding conventions for SQL statements in ad hoc queries, SQL scripts, and Oracle Call Interface (OCI) calls.

7. Bind Variables

Use bind variables.


SQL code readability can be improved by adhering to these guidelines:


1. Developing format standards for all statements, including those in PL/SQL code.
2. Developing rules for the use of uppercase and lowercase characters.
3. Developing rules for the use of whitespace (spaces, tabs, returns).
4. Developing rules for the use of comments (preferably keeping them out of the SQL statements themselves).
5. Using the same names to refer to identical database objects. If possible, prefix each object with a schema name.

You can also create the following performance checklist:

Set minimal initialization parameters

Set the minimal number of initialization parameters. Ideally, most initialization parameters should be left at their defaults. If there is more tuning to perform, this shows up when the system is under load. Set storage options for tables and indexes in appropriate tablespaces.

Verify SQL statements

Verify that all SQL statements are optimal and understand their usage.

Validate middleware and program efficiency

Validate that middleware and programs that connect to the database are efficient in their connection management and do not log on and log off repeatedly.

Validate the use of cursors

Validate that the SQL statements use cursors efficiently. Each SQL statement should be parsed once and then executed multiple times. Most often this does not happen because bind variables are not used properly and WHERE clause predicates are sent as string literals.

Validate migration of schema objects

Validate that all schema objects are correctly migrated from the development environment to the production database. This includes tables, indexes, sequences, triggers, packages, procedures, functions, Java objects, synonyms, grants, and views. Ensure that any modifications made in testing are made to the production system.

Establish a baseline set of statistics

As soon as the system is rolled out, establish a baseline set of statistics from the database and operating system. This first set of statistics validates or corrects any assumptions made in the design and rollout process.


Poorly Performing SQL Statements

SQL statements can perform poorly for a variety of reasons:

1. Stale optimizer statistics

SQL execution plans are generated by the cost-based optimizer (CBO). For the CBO to effectively choose the most efficient plan, it needs accurate information on the data volume and distribution of the tables and indexes referenced in the queries. Without accurate optimizer statistics, the CBO can easily be misled and generate suboptimal execution plans.

2. Missing access structures

The absence of access structures, such as indexes, materialized views, and partitions, is a common reason for poor SQL performance. The right set of access structures can improve SQL performance.

3. Suboptimal execution plan selection

The CBO can sometimes select a suboptimal execution plan for a SQL statement. This happens mostly because of incorrect estimates of some attributes of the SQL statement, such as its cost, cardinality, or predicate selectivity.

4. Poorly constructed SQL

If the SQL is designed poorly, there is not much that the optimizer can do to improve its performance. A missing join condition leading to a Cartesian product, or the use of more expensive SQL constructs like UNION in place of UNION ALL, are just a couple of examples of inefficient SQL design.

These four main causes of poor SQL optimization can have a drastic impact on performance.

Additional reasons for poor performance might be connected with hardware-related issues, such as memory, I/O, CPUs, and so on.

Example of a poorly constructed SQL query:

Consider a common type of business question: determine how many products have list prices less than 15% above the average cost of the product. This statement has a correlated subquery, which means that the subquery is run for every row found in the outer query.
SELECT COUNT(*) FROM product p
WHERE prod_list_price < 1.15 * (SELECT AVG(unit_cost) FROM costs c
     WHERE c.prod_id = p.prod_id);

The following is better written:
SELECT COUNT(*)
FROM product p,
    (SELECT prod_id, AVG(unit_cost) auc FROM costs
     GROUP BY prod_id) c
WHERE p.prod_id = c.prod_id
AND p.prod_list_price < 1.15 * c.auc;

Use simple equality or function-based indexes

The query in this example applies functions to the join columns, restricting the conditions where indexes can be used. Use a simple equality, if you can. Otherwise, a function-based index may be necessary.

SELECT * FROM job_history jh, employees e
  WHERE SUBSTR(TO_CHAR(e.employee_id),2) = SUBSTR(TO_CHAR(jh.employee_id),2);
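
Assuming the intent is simply to join on the employee ID, the simple-equality form lets existing indexes be used:

SELECT * FROM job_history jh, employees e
  WHERE e.employee_id = jh.employee_id;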

In this example, the query has a condition that forces an implicit data type conversion: the ORDER_ID_CHAR column is a character type and the constant is a numeric type. You should make the literal match the column type.

SELECT * FROM orders WHERE order_id_char = 1000;
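
The query is better written with a character literal, so no conversion is applied to the indexed column:

SELECT * FROM orders WHERE order_id_char = '1000';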

The next example uses a data type conversion function to make the data types match in the comparison.

The problem here is that the TO_CHAR function is applied to the column values, rather than to the constant. This means that the function is called for every row in the table. It would be better to convert the literal once, and not convert the column.

SELECT * FROM employees WHERE TO_CHAR(salary) = :sal

The query is better written using this SQL statement:

SELECT * FROM employees WHERE salary = TO_NUMBER(:sal)

Unnecessary unique sort

In this query, the UNION operator, as opposed to the UNION ALL operator, ensures that there are no duplicate rows in the result set. However, this requires an extra step, a unique sort, to eliminate any duplicates. If you know there are no rows in common between the two UNIONed queries, use UNION ALL instead of UNION. This eliminates the unnecessary sort.

SELECT * FROM parts_old
UNION
SELECT * FROM parts_new

The query is better written using this SQL statement:

SELECT * FROM parts_old
UNION ALL
SELECT * FROM parts_new



What is the PGA (Program Global Area)?

PGAs are memory regions that contain data and control information for a server or background process. A PGA is suballocated from the aggregate PGA memory. Only the owning server process can access its PGA.

The content of the PGA memory varies, depending on whether or not the instance runs the shared server option. Generally, the PGA memory is divided into session memory, cursors, and SQL work areas.

Session memory is the memory allocated to hold a session's variables (logon information) and other information related to the session. For a shared server, the session memory is shared and not private.

Cursors are handles to private memory structures for specific SQL statements.

SQL work areas are used by complex queries that contain memory-intensive operators such as the following:

  • Sort-based operators, such as ORDER BY, GROUP BY, ROLLUP, and window functions
  • Hash join
  • Bitmap merge
  • Bitmap create
  • Write buffers used by bulk load operations


Generally, bigger work areas can significantly improve the performance of a particular operator, at the cost of higher memory consumption.
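
As a sketch, you can check how often work areas complete in optimal, one-pass, or multi-pass mode:

SELECT low_optimal_size/1024 AS low_kb,
       (high_optimal_size + 1)/1024 AS high_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
FROM   v$sql_workarea_histogram;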

If you are using automatic memory management to manage the database memory on your system, then you do not need to manually tune the PGA.


Automatically Open PDBs When the CDB Is Started

To automatically open a PDB when the CDB is started, we can use a trigger at the CDB level.

1. Log in to the CDB root

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE TRIGGER open_pdb3
  2  AFTER STARTUP ON DATABASE
  3  BEGIN
  4  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE pdb3 OPEN';
  5  END open_pdb3;
  6  /

Trigger created.

2. Test it 

[oracle@ora12cn1 Desktop]$ srvctl stop database -d ora12c
[oracle@ora12cn1 Desktop]$ srvctl start database -d ora12c
[oracle@ora12cn1 Desktop]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Dec 10 17:53:27 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select name, open_mode from v$pdbs;

NAME                         OPEN_MODE
------------------------------ ----------
PDB$SEED                      READ ONLY
PDB2                          MOUNTED
PDB3                          READ WRITE
PDB3_CLONE                    MOUNTED

Install Oracle 12c Client/Database on Windows [INS-30131]

I needed to upgrade all the client stations to 12c; however, I encountered the following error:

[INS-30131] Initial setup required for the execution of installer validations failed

When I click on Details:

Cause - Failed to access the temporary location.

Action: Ensure that the current user has required permissions to access the temporary location.

This error occurs when I try to install Oracle 12c for Windows.

The solution is to execute the following commands:

1. If you see the following service running, stop it:
OracleRemExecServiceV2

2. Share the C:\ drive with the following command:

C:\>net use \\localhost\C$


2.a. If you encounter the following error:
C:\>net use \\localhost\c$
System error 53 has occurred.

The network path was not found.
try the following to fix the issue:

Right-click My Computer -> Properties -> Advanced system settings -> Remote (tab) -> tick "Allow connections only from computers running Remote Desktop with Network Level Authentication (more secure)" -> Apply

Then try the net use command again; the final result should look like the following:

C:\>net use \\localhost\c$
The command completed successfully.

Try to run the setup.exe again.


List All Users Created

There are two types of users, as follows:

Local User
  • A user in a non-CDB maps to a local user in a PDB.
  • A local user is defined in the PDB's own data dictionary, and so is not known outside of that PDB.
  • A local user can connect only to the PDB where it is defined.
  • A local user is specific to a particular PDB and owns a schema in this PDB.
  • According to the privileges granted, a user can work on the application data within the PDB or with other PDBs' applications using database links. Moreover, there cannot be any local users defined in the root.

Common User
  • A common user is defined in the root’s data dictionary.
  • Only common users can be defined in the root: Creating a common user allows the CDB administrator to create at once a user that is replicated in each PDB.
  • A common user is known not only in the root, where it is defined, but also in every PDB that belongs to the CDB.
  • A common user can perform administrative tasks specific to the root or PDBs, such as plugging and unplugging PDBs, starting up the CDB, or opening a PDB when granted the proper privileges.
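
For example, a common user must be created in the root, and by default its name must begin with C## (a minimal sketch; the name and privilege are illustrative):

-- Run from CDB$ROOT
CREATE USER c##admin1 IDENTIFIED BY oracle CONTAINER=ALL;
GRANT CREATE SESSION TO c##admin1 CONTAINER=ALL;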

List all common users

SQL> set pagesize 50
SQL> col username form a25
SQL> select distinct username from cdb_users where common = 'YES';

USERNAME
-------------------------
SPATIAL_WFS_ADMIN_USR
OUTLN
CTXSYS
SYSBACKUP
ORACLE_OCM
APEX_PUBLIC_USER
MDDATA
GSMADMIN_INTERNAL
SYSDG
ORDDATA
DIP
SYSKM
XS$NULL
OLAPSYS
SPATIAL_CSW_ADMIN_USR
SYSTEM
ORDPLUGINS
DBSNMP
ORDSYS
XDB
GSMCATUSER
SYS
APEX_040200
DVF
MDSYS
GSMUSER
FLOWS_FILES
AUDSYS
DVSYS
OJVMSYS
APPQOSSYS
SI_INFORMTN_SCHEMA
ANONYMOUS
LBACSYS
WMSYS

35 rows selected.

To view local users from the CDB:

SQL> col user_id form 99999
SQL> col username form a15
SQL> col name form a20
SQL> set linesize 250
SQL> SELECT *  
  2  FROM (                  
  3   SELECT user_id, username, common, u.con_id, pdb.name, pdb.open_mode
  4   FROM cdb_users u, v$pdbs pdb
  5   WHERE u.con_id(+) = pdb.con_id)
  6  WHERE common = 'NO'
  7  OR common is null;

  USER_ID USERNAME        COM CON_ID     NAME                 OPEN_MODE
------- --------------- --- ---------- -------------------- ----------
                                       PDB3_CLONE           MOUNTED
                                       PDB3                 MOUNTED
                                       PDB2                 MOUNTED

As you can see, no users are shown and the PDBs are in mounted state; we need to open the PDBs to view their users.

SQL> alter pluggable database pdb2 open;

Pluggable database altered.

SQL> SELECT *  
  2  FROM (
  3   SELECT user_id, username, common, u.con_id, pdb.name, pdb.open_mode
  4   FROM cdb_users u, v$pdbs pdb
  5   WHERE u.con_id(+) = pdb.con_id)
  6  WHERE common = 'NO'
  7  OR common is null;

USER_ID USERNAME        COM CON_ID     NAME                 OPEN_MODE
------- --------------- --- ---------- -------------------- ----------
    102 PDB2_ADMIN      NO           3 PDB2                 READ WRITE
    103 USER1_PDB2      NO           3 PDB2                 READ WRITE
                                       PDB3_CLONE           MOUNTED
                                       PDB3                 MOUNTED


You can switch your session from CDB$ROOT to a pluggable database by using the following command. Once you are in that container, you can perform user administration as usual.

SQL> alter session set container=pdb2;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB2

SQL> alter user user1_pdb2 account lock;

User altered.

SQL> alter user user1_pdb2 account unlock;

User altered.

SQL> create user test identified by oracle;

User created.

SQL> drop user test;

User dropped.

Oracle 12c Data Dictionary Views CDB_xxx, DBA_xxx, ALL_xxx, USER_xxx



In a CDB, for every DBA_* view, a CDB_* view is defined.

  • In the root, CDB views can be used to obtain information about tables, tablespaces, users, privileges, parameters, PDBs and other types of objects contained in the root and all PDBs.
  • In a PDB, the CDB_* views show objects visible through a corresponding DBA_* view only.


In addition to all the columns found in a given DBA_* view, the corresponding CDB_* view also contains the CON_ID column, which identifies the container whose data a given CDB_* row represents. In a non-CDB, the value of the CON_ID column is 0. In a CDB, the value can be either 1, for rows containing data pertaining to the root only, or n, where n is the applicable container ID.

Examples of CDB views:

  • Connected to the root and querying CDB_USERS, you get the list of users, common and local, of each container.
  • Connected to a PDB and querying CDB_USERS or DBA_USERS, you get the same list of users, common and local, of the PDB.
  • Connected to the root and querying the CDB_PDBS view, you get the list of all PDBs.
  • Querying the CDB_TABLESPACES view, you get the list of all tablespaces of all PDBs.
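
For example, from the root, a sketch that shows in which containers SYSTEM is known, one row per container:

SELECT con_id, username, common
FROM   cdb_users
WHERE  username = 'SYSTEM'
ORDER BY con_id;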

Create PDB from existing PDB (Clone PDB)

1. Connect to CDB as root

[oracle@ora12cn1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sun Dec 1 19:12:07 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL>

2. Put the source pluggable database in READ ONLY mode.

SQL> alter pluggable database pdb3 close;

Pluggable database altered.

SQL> alter pluggable database pdb3 open read only;

Pluggable database altered.

3. Create the directory for the clone datafiles.

[grid@ora12cn1 ~]$ asmcmd
ASMCMD> pwd
+
ASMCMD> cd DATA
ASMCMD> ls
ASM/
ORA12C/
ora12c-scan/
orapwasm
pdb$seed_sysaux01.dbf
pdb$seed_system01.dbf
pdb2/
pdb3/
pdbseed_temp01.dbf
ASMCMD> mkdir pdb3_clone
ASMCMD>

4. Identify the data files and the temp files of the source, to learn the directory structure

SQL> select file_name from cdb_pdbs p, cdb_data_files f
  2  where p.pdb_id=f.con_id
  3  and p.pdb_name = 'PDB3';

FILE_NAME
----------------------------------------
+DATA/pdb3/system01.dbf
+DATA/pdb3/sysaux01.dbf

SQL> select file_name from cdb_pdbs p, cdb_temp_files f
  2  where p.pdb_id=f.con_id
  3  and p.pdb_name = 'PDB3';

FILE_NAME
----------------------------------------
+DATA/pdb3/temp01.dbf

SQL>

5. Issue the CREATE PLUGGABLE DATABASE command

SQL> CREATE PLUGGABLE DATABASE pdb3_clone FROM pdb3
  2  FILE_NAME_CONVERT=('+DATA/pdb3', '+DATA/pdb3_clone')
  3  PATH_PREFIX = '+DATA/pdb3_clone';

Pluggable database created.

SQL>

6. Check the result

SQL> alter pluggable database pdb3_clone open;

Pluggable database altered.

SQL>

SQL> select file_name from cdb_pdbs p, cdb_data_files f
  2  where p.pdb_id=f.con_id
  3  and p.pdb_name = 'PDB3_CLONE';

FILE_NAME
----------------------------------------
+DATA/pdb3_clone/system01.dbf
+DATA/pdb3_clone/sysaux01.dbf

SQL> select file_name from cdb_pdbs p, cdb_temp_files f
  2  where p.pdb_id=f.con_id
  3  and p.pdb_name = 'PDB3_CLONE';

FILE_NAME
----------------------------------------
+DATA/pdb3_clone/temp01.dbf

SQL>!
[grid@ora12cn2 ~]$ lsnrctl services
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 01-DEC-2013 20:42:34
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "ora12c" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "ora12cXDB" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:0 refused:0 current:0 max:1022 state:ready
         DISPATCHER <machine: ora12cn2, pid: 3475>
         (ADDRESS=(PROTOCOL=tcp)(HOST=ora12cn2.localdomain)(PORT=50151))
Service "pdb2" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "pdb3" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "pdb3_clone" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully 

7. Add connection string to $ORACLE_HOME/network/admin/tnsnames.ora or $GRID_HOME/network/admin/tnsnames.ora

PDB3_CLONE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-scan)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = pdb3_clone)
    )
  )

Create PDB from PDB$SEED (appropriate PDB directory)

This is an extension of the example from the previous post, Create PDB from PDB$SEED.
The objective is to place the data files and temp files in the appropriate PDB folder.

The previous example's result is as follows.

Check the data and temp files

SQL> alter session set container=pdb2;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB2

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/pdb2/pdb$seed_system01.dbf
+DATA/pdb2/pdb$seed_sysaux01.dbf

SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/pdb2/pdbseed_temp01.dbf


Notice that system01.dbf and sysaux01.dbf have the prefix pdb$seed but temp01.dbf has pdbseed (without the $). We therefore need to convert both patterns.

The following commands rectify my error (I am creating PDB3 from the beginning).

1. Create the folder to hold the PDB data files. In this example I am using ASM; the procedure is the same if you are using non-RAC (an ordinary file system).

[oracle@ora12cn1 ~]$ su - grid
Password:
[grid@ora12cn1 ~]$ asmcmd
ASMCMD> cd +DATA
ASMCMD> mkdir pdb3
ASMCMD> cd pdb3
ASMCMD> pwd
+DATA/pdb3
ASMCMD> 
2. Connect to root, with sys as sysdba

[oracle@ora12cn1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 25 15:55:04 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> 

3. Identify the location of PDB$SEED datafiles.

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB$SEED

SQL> select file_name from dba_data_files;

FILE_NAME
-------------------------------------------
+DATA/pdb$seed_system01.dbf
+DATA/pdb$seed_sysaux01.dbf


4. Connect to root and issue the CREATE PLUGGABLE DATABASE command with the following change:
     FILE_NAME_CONVERT = ('+DATA/pdb$seed_', '+DATA/pdb3/','+DATA/pdbseed_','+DATA/pdb3/')

SQL> conn / as sysdba

Connected.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE pdb3
  2  ADMIN USER pdb3_admin IDENTIFIED BY oracle ROLES=(CONNECT)
  3  FILE_NAME_CONVERT=('+DATA/pdb$seed_', '+DATA/pdb3/','+DATA/pdbseed_','+DATA/pdb3/');

Pluggable database created.

SQL> alter session set container = PDB3;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB3

5. Check the result; it looks better now

SQL> alter pluggable database pdb3 open;

Pluggable database altered.

SQL> alter session set container=pdb3;

Session altered.

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/pdb3/system01.dbf
+DATA/pdb3/sysaux01.dbf

SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/pdb3/temp01.dbf


6. The service is automatically added

[oracle@ora12cn2 Desktop]$ lsnrctl services
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 01-DEC-2013 18:38:17
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "ora12c" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "ora12cXDB" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:0 refused:0 current:0 max:1022 state:ready
         DISPATCHER <machine: ora12cn2, pid: 3475>
         (ADDRESS=(PROTOCOL=tcp)(HOST=ora12cn2.localdomain)(PORT=50151))
Service "pdb2" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "pdb3" has 1 instance(s).
  Instance "ora12c2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

Drop Pluggable Database

To remove a pluggable database permanently from the CDB, use the following commands:

1. Connect to root and close the pluggable database

[oracle@ora12cn1 Desktop]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 25 23:21:29 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> alter pluggable database pdb2 close immediate;

Pluggable database altered.


2. Issue the DROP PLUGGABLE DATABASE command; you can drop the PDB INCLUDING DATAFILES or KEEP DATAFILES

SQL> drop pluggable database pdb2 including datafiles;

Pluggable database dropped.

Create PDB from PDB$SEED

The creation of a new PDB from the seed is nearly instantaneous. The operation copies the
data files from the READ ONLY seed PDB to the target directory defined in the CREATE
PLUGGABLE DATABASE statement.

It creates tablespaces such as SYSTEM, which stores a full catalog including metadata pointing to Oracle-supplied objects, and SYSAUX for local auxiliary data.

It creates the default schemas and common users that exist in the seed PDB: SYS, who continues to have all superuser privileges, and SYSTEM, who can administer the PDB.

It creates a local user (the PDBA) granted the local PDB_DBA role. Until the PDB SYS user grants privileges to the local PDB_DBA role, the new PDBA cannot perform any operation other than connecting to the PDB.
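For example, once the PDB exists, SYS can connect to it and grant privileges to the PDB_DBA role so the PDBA can actually work. A minimal sketch, assuming the pdb2 created below (the privilege list is illustrative; grant what your PDBA really needs):

SQL> alter session set container = pdb2;
SQL> grant create session, create table, create user to pdb_dba;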

A new default service is also created for the PDB.

1. Create the directory that will hold the PDB data files. In this example I am using ASM; the procedure is the same if you are using an ordinary file system instead.

[oracle@ora12cn1 ~]$ su - grid
Password:
[grid@ora12cn1 ~]$ asmcmd
ASMCMD> cd +DATA
ASMCMD> mkdir pdb2
ASMCMD> cd pdb2
ASMCMD> pwd
+DATA/pdb2
ASMCMD> 


2. Connect to root, with sys as sysdba

[oracle@ora12cn1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 25 15:55:04 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> 

3. Identify the location of PDB$SEED datafiles.

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB$SEED

SQL> select file_name from dba_data_files;

FILE_NAME
-------------------------------------------
+DATA/pdb$seed_system01.dbf
+DATA/pdb$seed_sysaux01.dbf


4.  Connect to root and issue the CREATE PLUGGABLE DATABASE command

SQL> conn / as sysdba            
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE pdb2
  2  ADMIN USER pdb2_admin IDENTIFIED BY oracle ROLES=(CONNECT)
  3  FILE_NAME_CONVERT=('+DATA', '+DATA/pdb2');

Pluggable database created.

SQL> alter session set container = PDB2;

Session altered.
SQL> show con_name

CON_NAME
------------------------------
PDB2
5. The service is automatically created

[oracle@ora12cn1 oracle]$ lsnrctl services
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 25-NOV-2013 17:50:25
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "ora12c" has 1 instance(s).
  Instance "ora12c1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "ora12cXDB" has 1 instance(s).
  Instance "ora12c1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:0 refused:0 current:0 max:1022 state:ready
         DISPATCHER <machine: ora12cn1, pid: 4003>
         (ADDRESS=(PROTOCOL=tcp)(HOST=ora12cn1.localdomain)(PORT=52784))
Service "pdb1" has 1 instance(s).
  Instance "ora12c1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "pdb2" has 1 instance(s).
  Instance "ora12c1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

6. Now open the database to test it

SQL> alter pluggable database pdb2 open; 

Pluggable database altered.

SQL> select name, con_id, open_mode from v$pdbs; 

NAME           CON_ID OPEN_MODE
---------- ---------- ----------
PDB$SEED            2 READ ONLY
PDB1                3 MOUNTED
PDB2                4 READ WRITE


7. Check the data and temp files

SQL> alter session set container=pdb2;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB2

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/pdb2/pdb$seed_system01.dbf
+DATA/pdb2/pdb$seed_sysaux01.dbf

SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/pdb2/pdbseed_temp01.dbf


You may want to refer to the modified Create PDB from PDB$SEED (appropriate PDB directory) post, which explains how to place the data files in a per-PDB directory.
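On a related note: if you would rather not repeat FILE_NAME_CONVERT on every CREATE PLUGGABLE DATABASE statement, 12c also provides the PDB_FILE_NAME_CONVERT initialization parameter. A minimal sketch, using the seed file names from this environment and a hypothetical pdb4:

SQL> alter session set pdb_file_name_convert = '+DATA/pdb$seed_', '+DATA/pdb4/';
SQL> CREATE PLUGGABLE DATABASE pdb4
  2  ADMIN USER pdb4_admin IDENTIFIED BY oracle ROLES=(CONNECT);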

Understand Server Pool

When Oracle Clusterware is installed, two server pools are created automatically: Generic and Free. All servers in a new installation are initially assigned to the Free server pool. Servers move from Free to newly defined server pools automatically.

The Free Server Pool

The Free server pool contains servers that are not assigned to any other server pools. The attributes of the Free server pool are restricted, as follows:

  • SERVER_NAMES, MIN_SIZE, and MAX_SIZE cannot be edited by the user
  • IMPORTANCE and ACL can be edited by the user

The Generic Server Pool

The Generic server pool stores any Oracle Database that is not policy managed. Additionally, the Generic server pool contains servers with names you specified in the SERVER_NAMES attribute of the server pools that list the Generic server pool as a parent server pool.

The Generic server pool's attributes are restricted, as follows:

  • No one can modify configuration attributes of the Generic server pool (all attributes are read-only)

When DBCA or SRVCTL specifies a server name in the HOSTING_MEMBERS resource attribute, Oracle Clusterware only allows it if the server is:

  • Online and exists in the Generic server pool
  • Online and exists in the Free server pool, in which case Oracle Clusterware moves the server into the Generic server pool
  • Online and exists in any other server pool and the user is either a cluster administrator or is allowed to use the server pool's servers, in which case the server is moved into the Generic server pool
  • Offline and the user is a cluster administrator



[oracle@ora12cn1 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=ora12cn1 ora12cn2

NAME=Generic
ACTIVE_SERVERS=

[oracle@ora12cn1 ~]$ crsctl status serverpool -f
NAME=Free
IMPORTANCE=0
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r-x
SERVER_CATEGORY=
ACTIVE_SERVERS=ora12cn1 ora12cn2
NAME=Generic
IMPORTANCE=0
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:grid:r-x,pgrp:oinstall:r-x,other::r-x
SERVER_CATEGORY=
ACTIVE_SERVERS=
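Beyond the two built-in pools, you can define your own; servers then move out of Free automatically to satisfy the new pool's MIN_SIZE. A minimal sketch using the 12c srvctl syntax (the pool name and sizes here are illustrative, not from this cluster):

[oracle@ora12cn1 ~]$ srvctl add srvpool -serverpool mypool -min 1 -max 2 -importance 5
[oracle@ora12cn1 ~]$ crsctl status serverpool

After this, one of the servers listed under Free above should appear as an active server of the new pool.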

To View 12c Tablespaces

To view each tablespace and its corresponding container:

SQL> col name form a12
SQL> SELECT file#, tbs.name, tbs.ts#, tbs.con_id
  2  FROM v$datafile dbf, v$tablespace tbs
  3  WHERE dbf.ts#=tbs.ts#
  4  AND dbf.con_id=tbs.con_id;
     FILE# NAME                TS#     CON_ID
---------- ------------ ---------- ----------
         1 SYSTEM                0          1
         2 SYSTEM                0          2
         3 SYSAUX                1          1
         4 SYSAUX                1          2
         5 UNDOTBS1              2          1
         6 USERS                 4          1
         7 SYSTEM                0          3
         8 SYSAUX                1          3
         9 USERS                 3          3

9 rows selected.



Extracted from the Oracle 12c documentation; CON_ID = 0 means the entire CDB.

About Viewing Information When the Current Container Is the Root

When the current container is the root, a common user can view data dictionary information for the root and for PDBs by querying container data objects. A container data object is a table or view that can contain data pertaining to the following:
  • One or more containers
  • The CDB as a whole
  • One or more containers and the CDB as a whole
Container data objects include V$, GV$, CDB_, and some Automatic Workload Repository DBA_HIST* views. A common user's CONTAINER_DATA attribute determines which PDBs are visible in container data objects.
In a CDB, for every DBA_ view, there is a corresponding CDB_ view. All CDB_ views are container data objects, but most DBA_ views are not.
Each container data object contains a CON_ID column that identifies the container for each row returned. Table 43-1 describes the meanings of the values in the CON_ID column.

Table 43-1 CON_ID Column in Container Data Objects
Value in CON_ID Column   Description
----------------------   ---------------------------------------------------
0                        The data pertains to the entire CDB
1                        The data pertains to the root
2                        The data pertains to the seed
3 - 254                  The data pertains to a PDB; each PDB has its own container ID
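
The CONTAINER_DATA attribute mentioned above can be adjusted per common user from the root. A minimal sketch (the common user name c##dba is illustrative):

SQL> alter user c##dba set container_data = all container = current;

After this, container data objects queried by c##dba in the root can return rows for all containers instead of just the root.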

All Files of the CDB

To view the redo log files, we can query v$logfile, which includes a con_id column. In this example I am using the ASM +DATA volume.
SQL> col member format a50
SQL> select GROUP#, CON_ID, MEMBER from v$logfile;
    GROUP#     CON_ID MEMBER
---------- ---------- --------------------------------------------------
         1          0 +DATA/ORA12C/ONLINELOG/group_1.260.831911745
         1          0 +DATA/ORA12C/ONLINELOG/group_1.261.831911755
         2          0 +DATA/ORA12C/ONLINELOG/group_2.262.831911763
         2          0 +DATA/ORA12C/ONLINELOG/group_2.263.831911771
         3          0 +DATA/ORA12C/ONLINELOG/group_3.264.831911779
         3          0 +DATA/ORA12C/ONLINELOG/group_3.265.831911787
6 rows selected.

To view the control files

SQL> col name form a50
SQL> select name, con_id from v$controlfile;
NAME                                                   CON_ID
-------------------------------------------------- ----------
+DATA/ORA12C/CONTROLFILE/current.258.831911735              0
+DATA/ORA12C/CONTROLFILE/current.259.831911741              0



To view all data files of the CDB, including those of the root and all PDBs, we can query cdb_data_files.

SQL> col file_name format A60
SQL> col tablespace_name format A8
SQL> col file_id format 9999
SQL> col con_id format 999
SQL> SELECT file_name, tablespace_name, file_id, con_id
  2  FROM cdb_data_files ORDER BY con_id;
FILE_NAME                                                    TABLESPA FILE_ID CON_ID
------------------------------------------------------------ -------- ------- ------
+DATA/ORA12C/DATAFILE/system.266.831911801                   SYSTEM         1      1
+DATA/ORA12C/DATAFILE/users.273.831912215                    USERS          6      1
+DATA/ORA12C/DATAFILE/undotbs1.270.831912051                 UNDOTBS1       5      1
+DATA/ORA12C/DATAFILE/sysaux.268.831911935                   SYSAUX         3      1
+DATA/ORA12C/PDBSEED/DATAFILE/sysaux.269.831912021           SYSAUX         4      2
+DATA/ORA12C/PDBSEED/DATAFILE/system.267.831911893           SYSTEM         2      2

6 rows selected.


Note that dba_data_files lists only the root's data files when queried from the root.

SQL> col file_name form a50
SQL> select file_name, tablespace_name, file_id
  2  from dba_data_files;
FILE_NAME                                          TABLESPA FILE_ID
-------------------------------------------------- -------- -------
+DATA/ORA12C/DATAFILE/system.266.831911801         SYSTEM         1
+DATA/ORA12C/DATAFILE/sysaux.268.831911935         SYSAUX         3
+DATA/ORA12C/DATAFILE/undotbs1.270.831912051       UNDOTBS1       5
+DATA/ORA12C/DATAFILE/users.273.831912215          USERS          6


List the temp files of the CDB.

SQL> col file_name form a70
SQL> col tablespace_name form a10
SQL> col file_id form 999
SQL> select file_name, tablespace_name, file_id
  2  from cdb_temp_files;
FILE_NAME                                                              TABLESPACE FILE_ID
---------------------------------------------------------------------- ---------- -------
+DATA/ORA12C/PDBSEED/TEMPFILE/temp.272.831912083                       TEMP             2
+DATA/ORA12C/TEMPFILE/temp.271.831912083                               TEMP             1




To Know Your Current Connected Container

You can use SHOW CON_NAME and SHOW CON_ID to see which container you are currently connected to.

SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> show con_id
CON_ID
------------------------------
1
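
If you need the same information from SQL rather than from a SQL*Plus SHOW command, for example inside a script or an application, the USERENV context exposes it as well:

SQL> select sys_context('USERENV', 'CON_NAME') con_name,
  2         sys_context('USERENV', 'CON_ID') con_id
  3  from dual;

For the root session above this returns CDB$ROOT and 1.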


