Oracle 12c new features all in one
- Online migration of an active data file
- Online table partition or sub-partition migration
- Invisible column
- Multiple indexes on the same column
- DDL logging
- Temporary undo ins and outs
- New backup user privilege
- How to execute SQL statement in RMAN
- Table level recovery in RMAN
- Restricting PGA size
- Table partition maintenance enhancements
- Database upgrade improvements
- Restore/Recover data file over the network
- Data Pump enhancements
- Real-time ADDM
- Concurrent statistics gathering
Additions/Enhancements in Automatic Storage Management (ASM)
- Flex ASM
- Increased ASM storage limits
- Tuning ASM rebalance operations
- ASM Disk Scrubbing
- Active Session History (ASH) for ASM
Additions/Enhancements in Grid Infrastructure
- Flex cluster
- OCR backup in ASM diskgroup
- IPv6 Support
Additions/Enhancements in RAC (database)
- Flex cluster
- What-If command evaluation
- Miscellaneous srvctl improvement
- Pluggable database through database consolidation.
- Redaction policy
- In database archiving
1. Online rename and relocation of an active data file
Unlike in previous releases, renaming or relocating a data file in Oracle Database 12c R1 no longer requires a number of steps, i.e. putting the tablespace in READ ONLY mode followed by taking the data file offline. In 12c R1, a data file can be renamed or moved online simply using the ALTER DATABASE MOVE DATAFILE SQL statement. While the data file is being transferred, end users can perform queries, DML and DDL tasks. Additionally, data files can be migrated between storage types, e.g. from non-ASM to ASM and vice versa.

Rename a data file:
SQL> ALTER DATABASE MOVE DATAFILE '/u00/data/users01.dbf' TO '/u00/data/users_01.dbf';

Migrate a data file from non-ASM to ASM:
SQL> ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '+DG_DATA';

Migrate a data file from one ASM disk group to another:
SQL> ALTER DATABASE MOVE DATAFILE '+DG_DATA/DBNAME/DATAFILE/users_01.dbf' TO '+DG_DATA_02';

Overwrite the data file with the same name, if it exists at the new location:
SQL> ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '/u00/data_new/user
2. Online migration of table partition or sub-partition
Moving a table partition or sub-partition to a different tablespace no longer requires a complex procedure in Oracle 12c R1. Similar to how online migration of a heap (non-partitioned) table was achieved in previous releases, a table partition or sub-partition can be moved to a different tablespace online or offline. When the ONLINE clause is specified, all DML operations can be performed without any interruption on the partition or sub-partition involved in the procedure. In contrast, no DML operations are allowed if the partition or sub-partition is moved offline.

Here are some working examples:
SQL> ALTER TABLE table_name MOVE PARTITION|SUBPARTITION partition_name TABLESPACE tablespace_name;
SQL> ALTER TABLE table_name MOVE PARTITION|SUBPARTITION partition_name TABLESPACE tablespace_name UPDATE INDEXES ONLINE;

The first example moves a table partition or sub-partition to a new tablespace offline. The second example moves a table partition or sub-partition online while maintaining any local/global indexes on the table. Additionally, no DML operation is interrupted when the ONLINE clause is specified.

Important notes:
- The UPDATE INDEXES clause prevents any local/global indexes on the table from becoming unusable.
- The restrictions that apply to online table migration apply here too.
- A locking mechanism is involved in completing the procedure; it might lead to performance degradation and can generate a lot of redo, depending on the size of the partition or sub-partition.
3. Invisible columns
In Oracle 11g R1, Oracle introduced a couple of good enhancements in the form of invisible indexes and virtual columns. Carrying that legacy forward, the invisible column concept has been introduced in Oracle 12c R1. In previous releases, to hide important data columns from being displayed in generic queries, we used to create a view hiding the required information or apply some sort of security condition.

In 12c R1, you can now have an invisible column in a table. When a column is defined as invisible, it won't appear in generic queries, unless it is explicitly referred to in the SQL statement or condition, or DESCRIBEd in the table definition. It is pretty easy to add or modify a column to be invisible and vice versa:

SQL> CREATE TABLE emp (eno number(6), ename varchar2(40), sal number(9) INVISIBLE);
SQL> ALTER TABLE emp MODIFY (sal VISIBLE);

You must explicitly refer to the invisible column name in an INSERT statement to insert data into an invisible column. A virtual column or partition column can be defined as invisible too. However, temporary tables, external tables and cluster tables don't support invisible columns.
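To illustrate the behavior described above, here is a minimal sketch against the emp table from the example (values are arbitrary): the invisible column must be named explicitly in the INSERT, and SELECT * omits it.

SQL> INSERT INTO emp (eno, ename, sal) VALUES (100, 'SCOTT', 9000);
SQL> SELECT * FROM emp;                 -- SAL is not displayed
SQL> SELECT eno, ename, sal FROM emp;   -- SAL appears only when referenced explicitly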
4. Multiple indexes on the same column
Before Oracle 12c, you couldn't create multiple indexes on the same column or set of columns in any form. For example, if you have an index on column {a} or columns {a,b}, you can't create another index on the same column or set of columns in the same order. In 12c, you can have multiple indexes on the same column or set of columns as long as the index type is different. However, only one type of index is usable/visible at a given time. In order to test the invisible indexes, you need to set OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE.

Here's an example:
SQL> CREATE INDEX emp_ind1 ON EMP(ENO,ENAME);
SQL> CREATE BITMAP INDEX emp_ind2 ON EMP(ENO,ENAME) INVISIBLE;
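Since only one of the two indexes can be visible at a time, switching which index the optimizer considers is a matter of swapping their visibility (a sketch using the index names above):

SQL> ALTER INDEX emp_ind1 INVISIBLE;
SQL> ALTER INDEX emp_ind2 VISIBLE;
-- or, to test an invisible index without changing its visibility:
SQL> ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;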
5. DDL logging
There was no direct option available to log DDL actions in the previous releases. In 12c R1, you can now log DDL actions into xml and log files. This is very useful for finding out when a drop or create command was executed and by whom. The ENABLE_DDL_LOGGING initialization parameter must be configured in order to turn on this feature. The parameter can be set at the database or session level. When this parameter is enabled, all DDL commands are logged in an xml and a log file under the $ORACLE_BASE/diag/rdbms/DBNAME/log|ddl location. The xml file contains information such as the DDL command, IP address, timestamp etc. This helps to identify when a user or table was dropped or when a DDL statement was triggered.

To enable DDL logging:
SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;

The following DDL statements are recorded in the xml/log file:
- CREATE|ALTER|DROP|TRUNCATE TABLE
- DROP USER
- CREATE|ALTER|DROP PACKAGE|FUNCTION|VIEW|SYNONYM|SEQUENCE
6. Temporary Undo
Each Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO and TEMP, each used for a different purpose within the Oracle database. Before Oracle 12c R1, undo records generated by temporary tables were stored in the undo tablespace, much like the undo records of general/persistent tables. With the temporary undo feature in 12c R1, the temporary undo records can now be stored in the temporary tablespace instead of the undo tablespace. The prime benefits of temporary undo include reduced usage of the undo tablespace and less redo generation, as the information is not logged in the redo logs. You have the flexibility to enable the temporary undo option either at session level or database level.

Enabling temporary undo
To be able to use the new feature, the following needs to be in place:
- The COMPATIBLE parameter must be set to 12.0.0 or higher
- The TEMP_UNDO_ENABLED initialization parameter must be enabled
- Since the temporary undo records are now stored in a temporary tablespace, you need to create the temporary tablespace with sufficient space
- At session level, you can use: ALTER SESSION SET TEMP_UNDO_ENABLED=TRUE;

Querying temporary undo information
The dictionary views listed below are used to view/query the information/statistics about the temporary undo data:
- V$TEMPUNDOSTAT
- DBA_HIST_UNDOSTAT
- V$UNDOSTAT

To disable the feature, you simply need to set the following:
SQL> ALTER SYSTEM|SESSION SET TEMP_UNDO_ENABLED=FALSE;
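For example, to enable temporary undo for the current session and then see how much temporary undo the instance has generated, a quick sketch (V$TEMPUNDOSTAT records statistics per time interval):

SQL> ALTER SESSION SET TEMP_UNDO_ENABLED=TRUE;
SQL> SELECT begin_time, txncount, undoblkcnt FROM v$tempundostat;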
7. Backup specific user privilege
In 11g R2, the SYSASM privilege was introduced to perform ASM specific operations. Similarly, a privilege specific to backup and recovery tasks, SYSBACKUP, has been introduced in 12c to execute backup and recovery commands in Recovery Manager (RMAN). Therefore, you can create a local user in the database and grant it the SYSBACKUP privilege to perform any backup and recovery related tasks in RMAN, without granting the SYSDBA privilege.

$ ./rman target "username/password as SYSBACKUP"
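Creating such a dedicated backup user might look like this (the user name backup_admin is illustrative):

SQL> CREATE USER backup_admin IDENTIFIED BY password;
SQL> GRANT SYSBACKUP TO backup_admin;
$ ./rman target "backup_admin/password as SYSBACKUP"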
8. How to execute SQL statements in RMAN
In 12c, you can execute any SQL and PL/SQL commands directly from RMAN, without the need for a SQL prefix. Here is how you can execute SQL statements in RMAN:

RMAN> SELECT username,machine FROM v$session;
RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 121m;
9. Table or partition recovery in RMAN
Oracle database backups are mainly categorized into two types: logical and physical. Each backup type has its own pros and cons. In previous releases, it was not feasible to restore a table or partition using existing physical backups; in order to restore a particular object, you had to have a logical backup. With 12c R1, you can recover a particular table or partition to a point-in-time or SCN from RMAN backups in the event of a table drop or truncate.

When a table or partition recovery is initiated via RMAN, the following actions are performed:
- The backup sets required to recover the table/partition are identified
- An auxiliary database is temporarily configured to a point-in-time in the process of recovering the table/partition
- The required table/partitions are then exported to a dump file using Data Pump
- Optionally, you can import the table/partitions into the source database
- You can rename the object during recovery

An example of a table point-in-time recovery via RMAN (ensure you already have a full database backup from earlier):

RMAN> connect target "username/password as SYSBACKUP";
RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP…'
        AUXILIARY DESTINATION '/u01/tablerecovery'
        DATAPUMP DESTINATION '/u01/dpump'
        DUMP FILE 'tablename.dmp'
        NOTABLEIMPORT    -- this option avoids importing the table automatically.
        REMAP TABLE 'username.tablename': 'username.new_table_name';    -- can rename table with this option.

Important notes:
- Ensure sufficient free space is available under the /u01 filesystem for the auxiliary database and the Data Pump file
- A full database backup must exist, or at least backups of the SYSTEM-related tablespaces

The following limitations/restrictions apply to table/partition recovery in RMAN:
- SYS user tables/partitions can't be recovered
- Tables/partitions stored in the SYSAUX and SYSTEM tablespaces can't be recovered
- Recovery of a table is not possible when the REMAP option is used to recover a table that contains NOT NULL constraints
10. Restricting PGA size
Before Oracle 12c R1, there was no option to place a hard limit on the PGA size. Although you set a certain size in the PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase/reduce the size of the PGA dynamically based on the workload and requirements. In 12c, you can set a hard limit on the PGA by enabling automatic PGA management and setting the new PGA_AGGREGATE_LIMIT parameter, thereby avoiding excessive PGA usage.

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; --disables the hard limit

Important notes:
When the PGA limit is exceeded, Oracle automatically terminates/aborts the sessions/processes holding the most untunable PGA memory.
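To see how close the instance is running to the limit, you can read the parameter back and check the current PGA allocation from V$PGASTAT, for example:

SQL> SHOW PARAMETER pga_aggregate_limit
SQL> SELECT name, ROUND(value/1024/1024) mb FROM v$pgastat WHERE name = 'total PGA allocated';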
11. Table partition maintenance enhancements
Earlier, I explained how to move a table partition or sub-partition to a different tablespace either offline or online. In this section, you will learn about other enhancements relating to table partitioning.

Adding multiple new partitions
Before Oracle 12c R1, it was only possible to add one new partition at a time to an existing partitioned table. To add more than one new partition, you had to execute an individual ALTER TABLE ADD PARTITION statement for every new partition. Oracle 12c provides the flexibility to add multiple new partitions using a single ALTER TABLE ADD PARTITION command. The following example explains how to add multiple new partitions to an existing partitioned table:
SQL> CREATE TABLE emp_part
(eno number(8), ename varchar2(40), sal number (6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
PARTITION p2 VALUES LESS THAN (20000),
PARTITION p3 VALUES LESS THAN (30000)
);
Now let's add a couple of new partitions:
SQL> ALTER TABLE emp_part ADD
       PARTITION p4 VALUES LESS THAN (35000),
       PARTITION p5 VALUES LESS THAN (40000);
In the same way, you can add multiple new partitions to a list- or system-partitioned table, provided that a MAXVALUE partition doesn't exist.
How to drop and truncate multiple partitions/sub-partitions
As part of data maintenance, you typically use either a drop or a truncate partition maintenance task on a partitioned table. Before 12c R1, it was only possible to drop or truncate one partition at a time on an existing partitioned table. With Oracle 12c, multiple partitions or sub-partitions can be dropped or truncated using a single ALTER TABLE table_name {DROP|TRUNCATE} PARTITIONS command.

The following example explains how to drop or truncate multiple partitions on an existing partitioned table:
SQL> ALTER TABLE emp_part DROP PARTITIONS p4,p5;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4,p5;

To keep indexes up-to-date, use the UPDATE INDEXES or UPDATE GLOBAL INDEXES clause, as shown below:
SQL> ALTER TABLE emp_part DROP PARTITIONS p4,p5 UPDATE GLOBAL INDEXES;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4,p5 UPDATE GLOBAL INDEXES;

If you truncate or drop a partition without the UPDATE GLOBAL INDEXES clause, you can query the ORPHANED_ENTRIES column in the USER_INDEXES or USER_IND_PARTITIONS dictionary views to find out whether the index contains any stale entries.
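For example, after dropping partitions without UPDATE GLOBAL INDEXES, the check on the emp_part table from the examples above might look like this:

SQL> SELECT index_name, orphaned_entries FROM user_indexes WHERE table_name = 'EMP_PART';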
Splitting a single partition into multiple new partitions
The new enhanced SPLIT PARTITION clause in 12c lets you split a particular partition or sub-partition into multiple new partitions using a single command. The following example explains how to split a partition into multiple new partitions:
SQL> CREATE TABLE emp_part
(eno number(8), ename varchar2(40), sal number (6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
PARTITION p2 VALUES LESS THAN (20000),
PARTITION p_max VALUES LESS THAN (MAXVALUE)
);
SQL> ALTER TABLE emp_part SPLIT PARTITION p_max INTO
(PARTITION p3 VALUES LESS THAN (25000),
PARTITION p4 VALUES LESS THAN (30000), PARTITION p_max);
Merge multiple partitions into one partition
You can merge multiple partitions into a single partition using a single ALTER TABLE MERGE PARTITIONS statement:
SQL> CREATE TABLE emp_part
(eno number(8), ename varchar2(40), sal number (6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
PARTITION p2 VALUES LESS THAN (20000),
PARTITION p3 VALUES LESS THAN (30000),
PARTITION p4 VALUES LESS THAN (40000),
PARTITION p5 VALUES LESS THAN (50000),
PARTITION p_max VALUES LESS THAN (MAXVALUE)
);
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3,p4,p5 INTO PARTITION p_merge;
If the partitions to be merged are in consecutive order, you can use the TO syntax instead:
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3 TO p5 INTO PARTITION p_merge;
12. Database upgrade improvements
Whenever a new Oracle version is announced, the immediate challenge every DBA confronts is the upgrade process. In this section, I will explain the two new improvements introduced for upgrading to 12c.

Pre-upgrade script
A new and much improved pre-upgrade information script, preupgrd.sql, replaces the legacy utlu[121]s.sql script in 12c R1. Apart from performing pre-upgrade checks, the script can address the various issues raised during the pre- and post-upgrade process, in the form of fixup scripts.

The fixup scripts that are generated can be executed to resolve problems at different stages, for example pre-upgrade and post-upgrade. When upgrading the database manually, the script must be executed manually before initiating the actual upgrade procedure. However, when the Database Upgrade Assistant (DBUA) tool is used to perform a database upgrade, it automatically executes the pre-upgrade scripts as part of the upgrade procedure and will prompt you to execute the fixup scripts in case any errors are reported.

The following example demonstrates how to execute the script:
SQL> @$ORACLE_12GHOME/rdbms/admin/preupgrd.sql

The above script generates a log file and a [pre/post]upgrade_fixup.sql script. All these files are located under the $ORACLE_BASE/cfgtoollogs directory. Before you continue with the real upgrade procedure, you should run through the recommendations mentioned in the log file and execute the scripts to fix any issues.

Note: Ensure you copy the preupgrd.sql and utluppkg.sql scripts from the 12c Oracle home/rdbms/admin directory to the current Oracle database/rdbms/admin location.
Parallel-upgrade utility
The database upgrade duration is directly proportional to the number of components that are configured on the database, rather than the database size. In previous releases, there was no direct option or workaround available to run the upgrade process in parallel to complete the overall upgrade procedure more quickly.

The catctl.pl (parallel-upgrade utility) that replaces the legacy catupgrd.sql script in 12c R1 comes with an option to run the upgrade procedure in parallel mode, improving the overall time required to complete the procedure.

The following procedure explains how to initiate the parallel upgrade utility (with 3 processes); you need to run this after you STARTUP the database in UPGRADE mode:

cd $ORACLE_12_HOME/perl/bin
$ ./perl catctl.pl -n 3 catupgrd.sql

The above two steps need to be run explicitly when a database is upgraded manually. However, the DBUA incorporates both of these new changes automatically.
13. Restore/Recover data files over the network
Yet another great enhancement in 12c R1: you can now restore or recover a data file, control file, spfile, tablespace or entire database between primary and standby databases using a SERVICE name. This is particularly useful for synchronizing the primary and standby databases.

When there is a significant gap between the primary and standby database, you no longer need the complex roll-forward procedure to close it. RMAN can perform standby recovery by fetching incremental backups over the network and applying them to the physical standby database. Likewise, you can directly copy required data files from the standby site to the primary site using the SERVICE name, e.g. when a data file or tablespace is lost on the primary database, without actually restoring the data files from a backup set.

The following procedure demonstrates how to perform a roll forward using the new features to synchronize the standby database with its primary database:

On the physical standby database:
$ ./rman target "username/password@standby_db_tns as SYSBACKUP"
RMAN> RECOVER DATABASE FROM SERVICE primary_db_tns USING COMPRESSED BACKUPSET;

The above example uses the primary_db_tns connect string defined on the standby database, connects to the primary database, performs an incremental backup, transfers the incremental backups to the standby destination, and then applies the files to the standby database to synchronize it. However, you need to ensure primary_db_tns is configured on the standby database side to point to the primary database.

In the following example, I will demonstrate a scenario in which a lost data file on the primary database is restored by fetching the data file from the standby database:

On the primary database:
$ ./rman target "username/password@primary_db_tns as SYSBACKUP"
RMAN> RESTORE DATAFILE '+DG_DISKGROUP/DBNAME/DATAFILE/filename' FROM SERVICE standby_db_tns;
14. Data Pump enhancements
This section focuses on the important enhancements introduced in Data Pump. There are quite a few useful additions, such as converting a view into a table during export, and turning off logging during import.

Turn off redo log generation
The new TRANSFORM option introduced in Data Pump import provides the flexibility to turn off redo generation for objects during the course of an import. When the DISABLE_ARCHIVE_LOGGING value is specified with the TRANSFORM option, redo generation for the objects in context is turned off for the entire import duration. This feature provides great relief when importing large tables, and reduces excessive redo generation, which results in quicker imports. This attribute applies to tables and indexes.

This example demonstrates the feature:
$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

Transport view as table
This is another improvement in Data Pump. With the new VIEWS_AS_TABLES option, you can unload view data into a table. The following example describes how to unload view data into a table during export:
$ ./expdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log views_as_tables=my_view:my_table
15. Real-time ADDM analysis
Analyzing past and current database health through a set of automatic diagnostic tools such as AWR, ASH and ADDM is part of every DBA's life. Though each individual tool can be used at various levels to measure the database's overall health and performance, no tool can be used when the database is unresponsive or totally hung.

When you encounter an unresponsive database or hung state, and you have Oracle Enterprise Manager 12c Cloud Control configured, you can diagnose serious performance issues. This gives you a good picture of what's currently going on in the database, and might also provide a remedy to resolve the issue.

The following step-by-step procedure demonstrates how to analyze the situation in Oracle EM 12c Cloud Control:
- Select the Emergency Monitoring option from the Performance menu on the Database Home page. This will show the top blocking sessions in the Hang Analysis table.
- Select the Real-Time ADDM option from the Performance menu to perform Real-time ADDM analysis.
- After collecting the performance data, click on the Findings tab to get an interactive summary of all the findings.
16. Gathering statistics concurrently on multiple tables
In previous Oracle database releases, whenever you executed a DBMS_STATS procedure to gather table, index, schema or database level statistics, Oracle collected stats one table at a time. If the table was big enough, increasing the parallelism was recommended. With 12c R1, you can now collect stats on multiple tables, partitions and sub-partitions concurrently. Before you start using it, you must set the following at the database level to enable the feature:

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='DEFAULT_MAIN';
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=4;
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
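You can verify that concurrent statistics gathering is enabled by reading the preference back with DBMS_STATS.GET_PREFS:

SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM dual;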
1. Additions/Enhancements in Automatic Storage Management (ASM)

Flex ASM
In a typical Grid Infrastructure installation, each node has its own ASM instance running, which acts as the storage container for the databases running on that node. This setup carries a single point-of-failure threat: if the ASM instance on a node fails, all the databases and instances running on that node are impacted. To avoid ASM instance single-point-of-failure, Oracle 12c provides the Flex ASM feature. Flex ASM is a different concept and architecture altogether: only a smaller number of ASM instances need to run on a group of servers in the cluster. When an ASM instance fails on a node, Oracle Clusterware automatically starts a surviving (replacement) ASM instance on a different node to maintain availability. In addition, this setup also provides ASM instance load balancing capabilities for the instances running on the node. Another advantage of Flex ASM is that it can be configured on a separate node.

When you choose the Flex Cluster option as part of the cluster installation, Flex ASM configuration is automatically selected, as it is required by the Flex Cluster. You can also have a traditional cluster over Flex ASM. When you decide to use Flex ASM, you must ensure the required networks are available. You can choose the Flex ASM storage option as part of the cluster installation, or use ASMCA to enable Flex ASM in a standard cluster environment.

The following commands show the current ASM mode:
$ ./asmcmd showclustermode
$ ./srvctl config asm

Or connect to the ASM instance and query the INSTANCE_TYPE parameter. If the output value is ASMPROX, then Flex ASM is configured.
Increased ASM storage limits
The hard limits on the maximum number of ASM disk groups and on ASM disk size have been drastically increased. In 12c R1, ASM supports 511 disk groups, against 63 in 11g R2. Also, an ASM disk can now be 32PB in size, against 20PB in 11g R2.
Tuning ASM rebalance operations
The new EXPLAIN WORK FOR statement in 12c measures the amount of work required for a given ASM rebalance operation and puts the result in the V$ASM_ESTIMATE dynamic view. Using the dynamic view, you can adjust the POWER LIMIT clause to improve the rebalancing operation. For example, if you want to measure the amount of work required for adding a new ASM disk before actually running the manual rebalance operation, you can use the following:

SQL> EXPLAIN WORK FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE;
SQL> EXPLAIN WORK SET STATEMENT_ID='ADD_DISK' FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE WHERE STATEMENT_ID = 'ADD_DISK';

You can adjust the POWER limit based on the output you get from the dynamic view to improve the rebalancing operations.
ASM Disk Scrubbing
The new ASM disk scrubbing operation, on an ASM disk group with normal or high redundancy, checks for logical data corruption on all ASM disks of that disk group and automatically repairs any corruption it detects, using the ASM mirror disks. Disk scrubbing can be performed at the disk group level, on a specified disk, or on a file, and its impact is very minimal. The following examples demonstrate disk scrubbing:

SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW|HIGH|AUTO|MAX;
SQL> ALTER DISKGROUP dg_data SCRUB FILE '+DG_DATA/MYDB/DATAFILE/filename.xxxx.xxxx' REPAIR POWER AUTO;
Active Session History (ASH) for ASM
The V$ACTIVE_SESSION_HISTORY dynamic view now provides active session sampling for ASM instances too. However, use of the Diagnostic Pack is subject to licensing.
2. Additions/Enhancements in Grid Infrastructure

Flex Clusters
Oracle 12c supports two types of cluster configuration at the time of Clusterware installation: Traditional Standard Cluster and Flex Cluster. In a traditional standard cluster, all nodes in the cluster are tightly integrated with each other, interact through a private network, and can access the storage directly. The Flex Cluster, on the other hand, introduces two types of nodes, arranged in a Hub and Leaf architecture. Nodes in the Hub category are similar to those in a traditional standard cluster: they are interconnected through a private network and have direct storage read/write access. The Leaf nodes are different from the Hub nodes: they don't need direct access to the underlying storage; rather, they access the storage/data through the Hub nodes.

You can configure up to 64 Hub nodes, while the number of Leaf nodes can be much larger. In an Oracle Flex Cluster, you can have Hub nodes without any Leaf nodes configured, but no Leaf nodes can exist without Hub nodes. You can attach multiple Leaf nodes to a single Hub node. In an Oracle Flex Cluster, only the Hub nodes have direct access to the OCR/voting disks. When you plan large-scale cluster environments, this is a great feature to use: this sort of setup greatly reduces interconnect traffic and provides room to scale the cluster beyond the traditional standard cluster.
There are two ways to deploy a Flex Cluster:
1. While configuring a brand new cluster
2. Upgrading a standard cluster to Flex Cluster mode

If you are configuring a brand new cluster, you need to choose the type of cluster configuration during step 3; select the Configure a Flex Cluster option, and you will then have to categorize the Hub and Leaf nodes in step 6. For each node, select the role, Hub or Leaf, and optionally a Virtual Hostname too.

The following steps are required to convert a standard cluster to Flex Cluster mode:

1. Get the current status of the cluster using the following command:
$ ./crsctl get cluster mode status

2. Run the following commands as the root user:
$ ./crsctl set cluster mode flex
$ ./crsctl stop crs
$ ./crsctl start crs -wait

3. Change the node role as per your design:
$ ./crsctl get node role config
$ ./crsctl set node role hub|leaf
$ ./crsctl stop crs
$ ./crsctl start crs -wait

Note the following:
- You can't revert from Flex to Standard cluster mode
- A cluster node mode change requires a cluster stack stop/start
- Ensure GNS is configured with a fixed VIP
OCR backup in ASM disk group
With 12c, the OCR can now be backed up in an ASM disk group. This simplifies access to the OCR backup files across all nodes. In case of an OCR restore, you don't need to worry about which node holds the latest OCR backup: simply identify the latest backup stored in ASM from any node and perform the restore.

The following example demonstrates how to set an ASM disk group as the OCR backup location:
$ ./ocrconfig -backuploc +DG_OCR
IPv6 support
With Oracle 12c, Oracle now supports IPv4 and IPv6 network protocol configuration on the same network. You can now configure the public network (Public/VIP) on IPv4, IPv6, or a combination of the two. However, ensure you use the same IP protocol configuration across all nodes in the cluster.
3. Additions/Enhancements in RAC (database)

What-If command evaluation
Using the new What-If command evaluation (-eval) option with srvctl, you can now determine the impact of running a command. This new addition to the srvctl command lets you simulate a command without actually executing it or making any changes to the current system. This is particularly useful when you want to make a change to an existing system and you're not sure of the outcome: the command shows the effect of making the change. The -eval option can also be used with the crsctl command.

For example, if you want to know what will happen if you stop a particular database, you can use the following:
$ ./srvctl stop database -d MYDB -eval
$ ./crsctl eval modify resource <resource_name> -attr "value"
Miscellaneous srvctl improvements
There are a few new additions to the srvctl command. The following demonstrates the new options to stop/start database/instance resources on the cluster:

srvctl start database|instance -startoption NOMOUNT|MOUNT|OPEN
srvctl stop database|instance -stopoption NORMAL|TRANSACTIONAL|IMMEDIATE|ABORT