Tuesday, April 12, 2016
How SQL operates in Oracle database internally
– The server process (which is the one that works at server side) scans the DBC
Soft parse- 0.000039 sec
Database as a Service (DBaaS) FAQ
Q – What is Database as a Service?
DBaaS is a cloud model that enables users to request database environments by choosing from a predefined service catalog using a self-service provisioning framework. The key benefits of these database clouds are agility and faster deployment of database services. As databases are provisioned and de-provisioned, the associated computing resources are consumed and then released. Database resources can be consumed for the duration of a project, and then be automatically de-provisioned and returned to the resource pool. Computing costs can be tracked and charged back to the consumer.
Q – What drives organizations to deploy DBaaS environments?
Organizations are drawn to DBaaS because it can simplify IT infrastructures, making it easy to deliver database functionality to many users and multiple divisions from the same hardware and software infrastructure, while automating manual provisioning processes. Provisioning new databases in traditional environments can take days or weeks. DBaaS can compress that down to minutes. A simpler IT infrastructure ensures greater agility for the business with less risk and lower costs, as shown in the figure below.
“Agility is a key motivator here. With DBaaS we can deliver better, faster, more effective IT services to agencies and the citizens that they serve.”
-- Todd Kimbriel, Director of E-Government, State of Texas
“A DBaaS environment empowers our DBAs to innovate. They not only are responsible for keeping the systems up and running but for bringing improvements to the business.”
-- Eric Zonneveld, Oracle Architect, KPN
Q – What are some popular use cases for the DBaaS model?
DBaaS architectures are quickly gaining traction with Dev/Test organizations since they permit developers to set up and tear down databases on an as-needed basis. They can create new instances quickly without IT assistance, avoiding the usual process of procuring and initializing servers. This level of independence and flexibility lets developers and QA experts provision new databases on demand. If they are using a storage environment that has snapshot storage, such as Oracle ZFS, then users can clone an existing pluggable database instantly.
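As an illustration, on Oracle Database 12c a clone of this kind can be requested with a single statement, assuming the source PDB is open read-only and its files sit on snapshot-capable storage; the PDB names below are hypothetical, so treat this as a minimal sketch rather than a prescribed procedure.
-- Minimal sketch (hypothetical PDB names): clone an existing pluggable database
-- using storage snapshots instead of a full file copy, then open the clone.
CREATE PLUGGABLE DATABASE dev_pdb FROM sales_pdb SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE dev_pdb OPEN;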
Q – What are the financial motivations for DBaaS?
One of the most appealing aspects of DBaaS is that consolidation results in a smaller hardware footprint, which lowers costs. In addition, many users and departments share the infrastructure while paying for individual database services, which drives down the unit price of those services. According to Todd Kimbriel, Director of E-Government for the State of Texas, this payment model also makes it easier for customers to obtain database capacity on demand. Smaller agencies and departments can obtain a level of service that they previously could not have afforded. Planning horizons are easier and more predictable, which resonates well with the line of business owners. The State of Texas created a planning tool that lets state agencies compare what it costs to run their current database applications in a traditional technology stack versus a DBaaS environment. They also demonstrated how a simplified pricing structure would be better for each participating organization. For example, each agency could pay a set amount per month for database services, versus the traditional model of paying independently for servers, storage devices, networking gear, software, and support contracts.
Q – How do you sell the concept of DBaaS to the IT department?
IT is continually asked to do more with less. IT leaders are looking for ways to increase agility while reducing cost and risk. To illustrate these business dynamics, KPN created an “Exademo” to demonstrate the performance and resource requirements associated with running various application loads—both in a traditional database environment and in a DBaaS environment based on Oracle Exadata. They emulated users logging in and putting the system through its paces, and also demonstrated how Oracle Exadata could minimize the number of databases, versions and operating platforms. Finally, they simulated disaster recovery scenarios by switching the load between two data centers—first using traditional systems and then using Oracle Exadata systems. KPN performed rigorous testing and demonstrations before going live with a DBaaS environment to ensure that everybody would receive the requisite levels of security and performance. The State of Texas promoted its new DBaaS environment by offering deep-dive indoctrination sessions to acquaint DBAs with the basic concepts of Exadata and DBaaS. Many DBAs who were skeptical when they arrived became evangelists for this new model.
“We looked at the products our customers were already using. Oracle was far and away the largest footprint from a database perspective. Staying with Oracle accelerated DBaaS adoption since our users were already familiar with it.”
-- Todd Kimbriel, Director of E-Government, State of Texas
“We have more than 400 databases and 60 totally distinct applications on six Exadata systems including batch, OLTP, and data warehouse. We have room for many more before we need additional capacity.”
-- Eric Zonneveld, Oracle Architect, KPN
Q – How do you sell DBaaS to management?
Capital Expenditures (CapEx) are incurred to create future benefit, i.e. acquisition of assets that will have a useful life beyond the tax year. Operational expenditures (OpEx) are those expenditures required for the day-to-day functioning of the business. DBaaS generally follows the OpEx model, enabling consumers to avoid large upfront investments in hardware, software and services. This cost structure especially resonates with line of business managers. Not only does the DBaaS model enable more agile database services, it also helps the business to allocate costs. User organizations pay to the degree that they consume each service, incenting those organizations to be more efficient. They can commission databases for particular purposes, then decommission them and return them back to a resource pool that all organizations share. Charging back for database services can transform the IT environment from a cost center to a revenue generator.
Q - What are some of the primary planning considerations for DBaaS?
When planning and designing a DBaaS architecture, Carl Olofson, a research director at IDC, says there are a couple of key things to focus on:
1 – Thorough capacity planning to ensure that you can meet user needs and uphold the service level agreements that you establish, especially as more databases are deployed and the environment gets more popular. Olofson says to be overly generous with capacity. “Instead of doing planning and analysis from the inside out, and then trying to fit everything within those choices, ask yourself: What do users need? What are their performance requirements? Then figure out what kinds of technology you need to meet their future requirements. Once you have DBaaS capabilities and people can request databases, the demand may be higher than you initially expect.”
2 – Selecting the right technology. Many customers rely on engineered systems for DBaaS environments. These integrated hardware and software solutions are designed to work together to facilitate extreme performance and availability, while simplifying deployment and maintenance issues. “You also need a database management system that simplifies virtualization and consolidation of IT resources and an integrated set of management tools to streamline provisioning, monitoring, and chargeback,” Olofson adds.
“DBaaS environments can be complicated, with lots of moving parts. You need database management software that makes it easy to establish the services that you envision. And you need robust management tools that enable you to maintain those databases.”
– Carl Olofson, IDC
“Oracle Multitenant helped us to see the value of offering a DBaaS platform to customers. This has enabled us to avoid what we call the “race to the bottom”—we no longer differentiate ourselves just on price but rather on the value of the capabilities we can offer. We can respond more quickly to customer needs.”
-- James Anthony, Technology Director, e-DBA
Q – How does DBaaS simplify activities for DBAs?
DBaaS represents a paradigm shift in IT, from merely “keeping the lights on” to higher value tuning and development work. “The capabilities we bring through DBaaS allow all DBAs and administrators to move away from managing backups and cloning systems and other routine tasks,” explains James Anthony at e-DBA. “They can work with customers to help move the business forward. It’s more interesting work.” When a user, department, or line of business needs a database in a traditional database environment, they must make a formal request to the IT department, wait for that request to be approved, and then wait for IT to procure the equipment, install the necessary software, and provision the database resources. This cycle involves management approvals, purchasing, and a series of discrete tasks for DBAs, system administrators, and storage administrators. A DBaaS environment speeds up this entire cycle by empowering users to provision their own database resources. IT defines different “tiers” or classes of database service based on standard templates. An online catalog defines varying sizes, service levels and appropriate levels of chargeback for each tier. Authorized users can select the databases they need from this catalog, consume those services for the duration of a project, and then automatically de-provision the databases and return them to the resource pool. According to Kimbriel, DBaaS deployments “polarize” the day-to-day activities of those who are consuming those services and those who are operationalizing those services. “The daily housekeeping chores are handled automatically as part of the basic service,” he explains. “DBAs have a new focus on handling value-added tasks.”
Q – How do traditional database provisioning processes differ under the DBaaS model?
In a traditional database environment, each database typically resides on a dedicated server. Procuring, implementing, and provisioning these hardware/software environments requires careful coordination among the groups of people who use and maintain the software. In addition, many different IT specialties arise to deal with the equipment, operating systems, databases, security, applications, and compliance policies. Different administrators may approach common tasks in different ways, so there may be little consistency or standardization in IT processes. With DBaaS, business users are pre-authorized to select the database services they need from an online catalog. They can create new database services and dispose of resources when they no longer need them—without relying on the IT department for assistance.
“When we started off with DBaaS we had many, many databases and many, many versions. We needed a new platform to drive us forward into the future. By consolidating on to fewer servers we can offer more cost-effective services with higher availability.”
-- Eric Zonneveld, Oracle Architect, KPN
Q – How do you track costs and charge them back to consumers of database services?
System management software such as Oracle Enterprise Manager 12c provides visibility into actual usage so that people only pay for what they use—instead of paying up-front for more capacity than they might otherwise need. This proven cloud management solution includes tools for integrated chargeback and capacity planning. A key component of this solution is Oracle Enterprise Manager 12c Cloud Management Pack for Oracle Database, which lets DBaaS administrators identify pooled resources, configure role-based access, and define service catalogs and chargeback plans. It keeps both users and administrators apprised of the costs of the services delivered, and establishes accountability for consumption of resources. A graphical user interface makes it easy to provision database services.
Q - What is the ideal Oracle private DBaaS configuration?
For much of the Oracle customer base, the desired future state environment includes the following technologies:
Oracle Database 12c has been designed to enable DBaaS as part of a comprehensive cloud strategy. It uses a multitenant architecture in which a single “container” database can handle many “pluggable” databases. Collections of pluggable databases can be easily managed as a single entity, which maximizes consolidation density and simplifies administration.
Oracle Enterprise Manager 12c is a complete DBaaS management solution. It allows administrators to manage the entire lifecycle of a database cloud, from planning, testing and deployment to monitoring performance. It includes tools for integrated chargeback and capacity planning, with visibility into both the physical and virtual IT environment.
Oracle Exadata is an engineered system that provides an exceptional deployment platform for DBaaS. It supports multiple deployment models, multiple workloads, and multiple service levels from one integrated platform. Optimizations in storage, storage-compute interconnects, and I/O resource management allow for flexible tiered server and storage resource configurations.
Q - What software do you need to manage a DBaaS environment?
Oracle Enterprise Manager 12c is a complete cloud lifecycle management solution. It includes tools for self-service provisioning and monitoring with integrated chargeback and capacity planning. Oracle Enterprise Manager 12c offers complete visibility into both the physical and virtual DBaaS environments, from applications to disk. Oracle Enterprise Manager empowers administrators to plan, monitor, and manage DBaaS resource capacity—all from within a single management console.
Oracle Enterprise Manager 12c Cloud Management Pack for Oracle Database lets DBaaS administrators create zones, identify pooled resources, configure role-based access to those resources, and define service catalogs and chargeback plans. Oracle Cloud Management Pack empowers DBaaS users to request database services and scale them up and down in response to changes in the workload. It keeps both users and administrators apprised of the costs of the services delivered and establishes accountability for consumption of resources. A graphical user interface shows the entire topology of the database services being offered, so administrators can monitor the performance and quickly address any issues before they impact business users.
Q - How do you set up a self-service portal for cloud users and administrators?
Oracle Cloud Management Pack for Oracle Database includes a Self Service Portal that enables administrators to configure a database cloud and define policies for how it is used. These policies determine the total amount of resources people can reserve for personal and departmental use. Users can log in to the Self Service Portal and provision the database assets they need, without knowing anything about the inner workings of the database management system. They can even define the time when the database will be retired, and then it will happen automatically.
“There are often situations in which you need to spin up a new database. But in a traditional data center this is a nontrivial task. Being able to do this on demand through an online service in moments is an amazing transformation of that situation.”
– Carl Olofson, Research Director, IDC
“DBaaS raises the level of DBA involvement, so they focus more on the business and less on the mechanics.”
-- James Anthony, Technology Director, e-DBA
Once the Self Service Portal is online, authorized users can request a database service through a simple selection process. As users create databases they can specify a retirement schedule or keep the provisioned database indefinitely. Oracle Enterprise Manager 12c will automatically retire a database once it reaches its expiration date. Throughout this process they can monitor resource consumption through a graphical user interface.
Q – How do you create service catalogs and establish tiers of service?
Oracle Enterprise Manager 12c lets you specify how you wish to set up instances of databases and applications and then configure the environment according to the business need. For example, a bronze tier might include basic database services using pluggable databases within Oracle Database 12c. A silver tier might also include Oracle Real Application Clusters for instantaneous failover to another server in the case of an outage, and a platinum tier uses Oracle Maximum Availability Architecture to enable automatic failover among clusters in two data centers. Other parameters that may come into play when defining service tiers include geography, disaster recovery levels, and security. Most organizations create simple service catalogs with a few basic tiers, as shown in the figure below.
“We are cloning large databases in moments using the snapshot capabilities. We never could have done that before. Once people log into the help desk it’s all automated. Our customers love it. It happens as a service. We cut the operator out of the equation.”
-- James Anthony, Technology Director, e-DBA
When requesting the service, users specify a service tier and storage size. The service catalog might also be set up to identify a specific purpose for the service, such as application development or testing. For example, the requirements of a developer are very different from those of a DBA. And the requirements of a DBA are different from those of a quality assurance engineer. Using this approach, the State of Texas established four different sizes and four different classes of database service, with differentiated levels of availability, disaster recovery, and backup options.
Q – How do you meter and chargeback for database usage?
Oracle Enterprise Manager 12c lets you manage cloud resources as business services rather than as a collection of technical components. You can control not only the database resources, but also the associated network and CPU resources as well. Collecting usage metrics provides hard data that charts usage patterns, enabling better planning and budgeting while identifying underutilized assets. Users and departments can be charged for usage (chargeback) or simply apprised of that usage (showback). Read a white paper on metering and chargeback to learn more.
Q - How does DBaaS facilitate consolidation?
The traditional approach to reducing unused capacity is to migrate the database environment into virtual machines—logical partitions of each physical server. Alternately, you can reduce the number of virtual and physical environments by consolidating databases into a shared database infrastructure. This can be an important component of DBaaS. Combining database assets into one consolidated platform makes the overall IT environment easier to manage and lowers IT operational costs.
Oracle Exadata supports many types of consolidation scenarios including running multiple application schemas in one single database, hosting multiple databases on a single platform, or a hybrid of the two configurations. These converged environments enable a uniform set of IT services that can be deployed quickly and managed easily, reducing operating costs. Individual databases are consolidated on physical servers and clustered together into cloud pools for maximum density. Any server in the pool can host one or multiple database instances.
“Consolidation allows us to pass on performance and manageability benefits to our customers. We have seen an increase in performance and availability even as we reduce the cost of managing and provisioning databases.”
-- James Anthony, Technology Director, e-DBA
Oracle Multitenant, an option of Oracle Database 12c Enterprise Edition, offers all the benefits of managing many databases as one, yet retains the isolation and resource control of separate databases. The multitenant container architecture enables organizations to set up one cloud environment with dozens or even hundreds of pluggable databases in each container database. Each pluggable database in a multitenant container database appears to applications as a single database. This high-density architecture enables servers, operating systems, and databases to be shared and lets system administrators manage many databases as one database.
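As an illustration of the multitenant model, the following is a minimal sketch (the PDB name, admin user, password, and file paths are hypothetical) that provisions a pluggable database from the seed, opens it, and later removes it when the project ends.
-- Minimal sketch (hypothetical names and paths): create a new pluggable
-- database from the seed inside an existing container database.
CREATE PLUGGABLE DATABASE hr_pdb
  ADMIN USER hr_admin IDENTIFIED BY "ChangeMe_123"
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/', '/u01/oradata/CDB1/hr_pdb/');
ALTER PLUGGABLE DATABASE hr_pdb OPEN;
-- When the project ends, close the PDB and drop it along with its datafiles.
ALTER PLUGGABLE DATABASE hr_pdb CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE hr_pdb INCLUDING DATAFILES;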
Q - How do you monitor usage once database instances have been provisioned?
Oracle Enterprise Manager 12c helps you manage the entire DBaaS lifecycle including performance, consolidation and capacity planning, patching, self-service provisioning, monitoring, and metering/chargeback. It includes tools for efficient, end-to-end monitoring and management of the entire hardware and software stack, with a discrete level of control over DBaaS environments based on common metrics such as CPU consumption and storage consumption. You can easily allocate CPU and I/O bandwidth to different workloads or databases, which simplifies administration of a consolidated database environment. The primary capabilities of Oracle Enterprise Manager for DBaaS environments are summarized in the following figure.
“User expectations change when everything is virtualized. You end up creating a more demanding customer base, along with customer-focused DBAs who can act as advocates and work together to solve problems.”
-- Carl Olofson, Research Advisor, IDC
Oracle Enterprise Manager 12c makes it easy to allocate CPU resources between database containers, providing fine-grained control over the processing workload. For example, you can shift resources to accommodate customer-facing applications during the day and large reporting jobs at night. Oracle Enterprise Manager can also monitor and charge for the types of technologies being utilized, such as Oracle RAC. Click here to read more.
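This kind of CPU allocation between pluggable databases is typically implemented with Oracle Resource Manager CDB plans, which Enterprise Manager can configure for you. The sketch below, with hypothetical plan and PDB names and illustrative share values, shows roughly what the PL/SQL equivalent of favoring a customer-facing PDB during the day might look like; it is not the author's procedure.
-- Minimal sketch (hypothetical plan and PDB names): give the customer-facing
-- PDB the larger CPU share within the container database.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'daytime_plan',
    comment => 'Favor customer-facing OLTP during business hours');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'daytime_plan',
    pluggable_database => 'sales_pdb',
    shares             => 3,
    utilization_limit  => 80);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'daytime_plan',
    pluggable_database => 'reporting_pdb',
    shares             => 1,
    utilization_limit  => 40);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'daytime_plan';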
Q – How can Enterprise Architecture (EA) help with DBaaS Deployments?
For successful adoption and sustainable execution, it is important to understand how DBaaS fits into the overall environment. You will achieve the greatest value when the DBaaS strategy is considered along with your organization’s overall enterprise architecture and IT strategy. Enterprise architecture helps to structure DBaaS projects through a series of overlapping steps, or iterations. A systematic process determines how these environments should be created, with attention to the business, technical, and operational implications. For example, if the HR department wants to implement a new payroll service and share it with other departments, the EA team begins by creating use-cases that reflect the needs of each department. They consider the profiles of the various departments and develop a service catalog that lists the functions these agencies need. Then each department can provision as little or as much of each service as is needed and also specify the degree of availability, disaster recovery, and uptime requirements.
Oracle has developed a complete Database as a Service Reference Architecture that consists of artifacts, tools and samples to help you understand how to operationalize DBaaS with attention to the correct strategy, people, processes, and technology. Click here to read more.
“The sheer labor involved in maintaining many different databases and servers can become overwhelming. Once you standardize on a single environment you can take it to the next level and make it easier to spin up databases when users need them. Then you are well on your way to a DBaaS scenario. Simplicity is the operative word.”
-- Carl Olofson, IDC
Q - How does the DBaaS model save time or simplify activities for DBAs?
Creating database configurations based on standard templates enables a self-service model that relieves IT professionals and DBAs from having to manually configure and tear down databases in response to each individual request. Once the DBaaS environment is established, users can perform simple provisioning activities without engaging a DBA to allocate the resources, set access restrictions, and handle other common tasks.
In Oracle Exadata environments, instead of the usual division of labor between storage people and networking people and OS people, there is the opportunity to cultivate a new type of administrator that knows about all of these things. This integrated level of knowledge not only diversifies their work but also makes these IT professionals responsible for end-to-end management of the entire environment. For example, they no longer have to consult with many different people to troubleshoot issues. “One person understands how to resolve issues,” remarks James Anthony at e-DBA. “This enables them to be more in control of their own destiny. DBAs can turn away from the technology and face the customer.”
Monday, April 11, 2016
10 practical cut command examples
The Linux command cut is used for text processing. You can use this command to extract a portion of text from a file by selecting columns.
This tutorial provides a few practical examples of the cut command that you can use in your day-to-day command line activities.
For most of the examples, we’ll be using the following test file.
$ cat test.txt
cat command for file oriented operations.
cp command for copy files or directories.
ls command to list out files and directories with its attributes.
1. Select Column of Characters
To extract only a desired column from a file, use the -c option. The following example displays the 2nd character from each line of the file test.txt.
$ cut -c2 test.txt
a
p
s
As seen above, the characters a, p, s are the second characters from each line of the test.txt file.
2. Select Column of Characters using Range
A range of characters can also be extracted from a file by specifying the start and end positions delimited with -. The following example extracts the first 3 characters of each line from the file test.txt.
$ cut -c1-3 test.txt
cat
cp
ls
3. Select Column of Characters using either Start or End Position
Either the start position or the end position can be passed to the cut command with the -c option.
The following specifies only the start position before the ‘-‘. This example extracts from the 3rd character to the end of each line of test.txt.
$ cut -c3- test.txt
t command for file oriented operations.
 command for copy files or directories.
 command to list out files and directories with its attributes.
The following specifies only the end position after the ‘-‘. This example extracts the first 8 characters of each line of test.txt.
$ cut -c-8 test.txt
cat comm
cp comma
ls comma
The entire line gets printed when you don’t specify a number before or after the ‘-‘, as shown below.
$ cut -c- test.txt
cat command for file oriented operations.
cp command for copy files or directories.
ls command to list out files and directories with its attributes.
4. Select a Specific Field from a File
Instead of selecting x number of characters, if you would like to extract a whole field, you can combine the options -f and -d. The option -f specifies which field you want to extract, and the option -d specifies the field delimiter used in the input file.
The following example displays only the first field of each line from the /etc/passwd file using the field delimiter : (colon). In this case, the 1st field is the username.
$ cut -d':' -f1 /etc/passwd
root
daemon
bin
sys
sync
games
bala
5. Select Multiple Fields from a File
You can also extract more than one field from a file or stdout. The example below displays the username and home directory of users whose login shell is “/bin/bash”.
$ grep "/bin/bash" /etc/passwd | cut -d':' -f1,6
root:/root
bala:/home/bala
To display a range of fields, specify the start and end fields as shown below. In this example, we select fields 1 through 4, 6 and 7.
$ grep "/bin/bash" /etc/passwd | cut -d':' -f1-4,6,7
root:x:0:0:/root:/bin/bash
bala:x:1000:1000:/home/bala:/bin/bash
6. Select Fields Only When a Line Contains the Delimiter
In our /etc/passwd example, if you pass a delimiter other than : (colon), cut will just display the whole line.
In the following example, we’ve specified the delimiter as | (pipe), and the cut command simply displays the whole line, since it doesn’t find any line that has | (pipe) as the delimiter.
$ grep "/bin/bash" /etc/passwd | cut -d'|' -f1
root:x:0:0:root:/root:/bin/bash
bala:x:1000:1000:bala,,,:/home/bala:/bin/bash
But it is possible to filter and display only the lines that contain the specified delimiter using the -s option.
The following example doesn’t display any output, as the cut command didn’t find any lines that have | (pipe) as the delimiter in the /etc/passwd file.
$ grep "/bin/bash" /etc/passwd | cut -d'|' -s -f1
7. Select All Fields Except the Specified Fields
To complement the selection field list, use the option --complement.
The following example displays all the fields from the /etc/passwd file except field 7.
$ grep "/bin/bash" /etc/passwd | cut -d':' --complement -s -f7
root:x:0:0:root:/root
bala:x:1000:1000:bala,,,:/home/bala
8. Change Output Delimiter for Display
By default, the output delimiter is the same as the input delimiter specified with cut -d.
To change the output delimiter, use the option --output-delimiter as shown below. In this example, the input delimiter is : (colon), but the output delimiter is # (hash).
$ grep "/bin/bash" /etc/passwd | cut -d':' -s -f1,6,7 --output-delimiter='#'
root#/root#/bin/bash
bala#/home/bala#/bin/bash
9. Change Output Delimiter to Newline
In this example, each field of the cut command output is displayed on a separate line. We still use --output-delimiter, but the value is $'\n', which indicates that a newline should be used as the output delimiter.
$ grep bala /etc/passwd | cut -d':' -f1,6,7 --output-delimiter=$'\n'
bala
/home/bala
/bin/bash
10. Combine Cut with Other Unix Command Output
The power of the cut command is realized when you combine it with the stdout of some other Unix command.
Once you master the basic usage of the cut command explained above, you can use it to solve many of your text manipulation requirements.
The following example shows how you can extract only the useful information from the ps command output. Note how the output of ps is filtered with grep and sed before it is given to the cut command. Here we use the cut options -d and -f explained in the examples above.
$ ps axu | grep python | sed 's/\s\+/ /g' | cut -d' ' -f2,11-
2231 /usr/bin/python /usr/lib/unity-lens-video/unity-lens-video
2311 /usr/bin/python /usr/lib/unity-scope-video-remote/unity-scope-video-remote
2414 /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
2463 /usr/bin/python /usr/lib/system-service/system-service-d
3274 grep --color=auto python
Sunday, April 10, 2016
Difference between UNDO and REDO in Oracle
UNDO
Undo tablespaces are special tablespaces used solely for storing undo information. You cannot create any other segment types (for example, tables or indexes) in undo tablespaces. Each database contains zero or more undo tablespaces. In automatic undo management mode, each Oracle instance is assigned one (and only one) undo tablespace. Undo data is managed within an undo tablespace using undo segments that are automatically created and maintained by Oracle.
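For illustration, here is a minimal sketch of creating an undo tablespace and switching the instance to it; the tablespace name, file path, and size are hypothetical.
-- Minimal sketch (hypothetical name, path and size): create an undo tablespace
-- and make it the active one under automatic undo management.
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/ORCL/undotbs02.dbf' SIZE 500M AUTOEXTEND ON;
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2;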
Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.
Undo records are used to:
· Roll back transactions when a ROLLBACK statement is issued
· Recover the database
· Provide read consistency
· Analyze data as of an earlier point in time by using Oracle Flashback Query
· Recover from logical corruptions using Oracle Flashback features
When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
How long is undo data stored in the database?
Oracle provides flexibility in how long undo data is retained through the UNDO_RETENTION parameter. With the UNDO_MANAGEMENT parameter set to AUTO (the default), Oracle manages undo automatically; you can also tune UNDO_RETENTION manually, and its value should be greater than the time taken by the longest running query in your database.
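For example, a minimal sketch of checking the current undo settings and raising the retention target (the 900-second value is purely illustrative):
-- Minimal sketch: check the current undo settings, then raise the retention
-- target to 900 seconds (an illustrative value, not a recommendation).
SHOW PARAMETER undo
ALTER SYSTEM SET UNDO_RETENTION = 900 SCOPE=BOTH;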
REDO
Redo log files record changes to the database as a result of transactions and internal Oracle server actions. (A transaction is a logical unit of work, consisting of one or more SQL statements run by a user.)
Redo log files protect the database from the loss of integrity because of system failures caused by power outages, disk failures, and so on.
Redo log files must be multiplexed to ensure that the information stored in them is not lost in the event of a disk failure.
The redo log consists of groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of that group, and each group is identified by a number. The LogWriter (LGWR) process writes redo records from the redo log buffer to all members of a redo log group until the file is filled or a log switch operation is requested. Then, it switches and writes to the files in the next group. Redo log groups are used in a circular fashion.
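As an illustration, the following minimal sketch (the group number, file paths, and size are hypothetical) inspects the current groups and adds a new group with two multiplexed members:
-- Minimal sketch (hypothetical group number, paths and size): inspect the
-- current groups, then add a new group with two multiplexed members.
SELECT group#, members, bytes, status FROM v$log;
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/ORCL/redo04a.log', '/u02/oradata/ORCL/redo04b.log') SIZE 200M;
ALTER SYSTEM SWITCH LOGFILE;  -- force a log switch so the new group comes into use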
REDO logs are used in recovery. Every uncommitted or committed change made to the data is stored in the redo logs. The log writer (LGWR) process writes these changes from the redo log buffer to the redo log files on disk. The main advantage of redo logs is that we can prevent data loss; for example, if we dropped a table by mistake we can recover it by restoring a backup and applying redo up to a point in time before the drop, or we can even recover a data file in case of losing it.
In this way UNDO and REDO guarantee the ACID (Atomicity, Consistency, Isolation, Durability) properties in an RDBMS.
UNDO
· Record of how to undo a change
· Used for rollback and read consistency
· Stored in undo segments
· Protects against inconsistent reads in multiuser systems
REDO
· Record of how to reproduce a change
· Used for rolling forward database changes
· Stored in redo log files
· Protects against data loss
Undo data vs. Redo log
The main difference is that UNDO data can make a change go away, and REDO can make the change happen again if necessary. UNDO holds a copy of the previous data until the transaction commits and for some time afterwards (depending on the undo retention period specified), thus providing read consistency: while a transaction is still in progress, other users can read the data as it was at the time the transaction started. Capturing the undo data enables Oracle to roll back uncommitted data. Undo data is used both for read consistency and to recover from failed transactions.
UNDO segments store BEFORE images of a record, while REDO logs record redo entries.
For example, the user adds a record to the database - an employee named John Doe with ID of 3. Currently the database contains two records - with IDs of 1 and 2. So UNDO captures the information needed to undo the insert (i.e. remove the new record) in case we need to roll back the insertion of the record with ID 3.
Meanwhile the REDO log records all the changes to the database as they happen. If the database crashes, Oracle will first read the redo logs and apply all committed changes (the ones that didn't end up in the data files yet, due to the crash) before opening the database for regular use.
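To see this from a session's point of view, here is a minimal sketch assuming a hypothetical emp table that already holds the two existing rows; the insert can be reversed through undo, and other sessions keep a read-consistent view until a commit.
-- Minimal sketch (hypothetical emp table): undo lets the change be reversed,
-- and gives other sessions a read-consistent view until we commit.
INSERT INTO emp (id, name) VALUES (3, 'John Doe');
SELECT * FROM emp;   -- this session sees IDs 1, 2, 3; other sessions still see only 1 and 2
ROLLBACK;            -- undo removes the uncommitted row
SELECT * FROM emp;   -- back to IDs 1 and 2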
Here is a simple comparison of the two:
Undo | Redo
Makes a change go away | Reproduces a change
Used for rollback and read consistency | Used for rolling forward the changes
Protects the database from inconsistent reads | Protects from data loss
Data stored in undo segments in the undo tablespace | Entries buffered in the redo log buffer (SGA) and written to redo log files on disk
Example:
-------------------------------------------------------------
An undo segment is just a segment, like a table or an index or a hash cluster or a materialized view is a segment. The clue is in the name. And the rule is that if you modify part of a segment, any segment, regardless of its type, you must generate redo so that the modification can be recovered in the event of media or instance failure. Therefore, you modify EMP; the changes to the EMP blocks are recorded in redo. The modification to EMP also has to be recorded in UNDO, because you might change your mind and want to reverse the transaction before you commit. Therefore, the modification to EMP causes entries to be made in an undo segment. But that’s a modification to a segment - this time, an undo segment. Therefore, the changes to the undo segment also have to be recorded in redo, in case you suffer a media or instance failure.
If your database now crashed and you had to restore a set of datafiles, including those for the undo tablespace, from 10 days ago, you would of course do what Oracle always does: start reading from your archived redo, rolling the 10 day old files forward in time until they were 9, then 8, then 7, then 6 and so on days old, until you get to the time where the only record of the changes to segments (any segment) was contained in the current online redo log, and then you’d use that redo log to roll the files forward until they were 8 minutes old, 7 minutes, 6, 5, 4, and so on, right up until all changes to all segments that had ever been recorded in the redo had been applied. At which point, your undo segments have been re-populated. So now you can start rolling back those transactions which were recorded in the redo logs, but which weren’t committed at the time of the database failure.
I can’t emphasize enough, really, that undo segments are just slightly special tables. They’re fundamentally not very different from EMP or DEPT, except that new inserts into them can over-write a previous record, which never happens to EMP, of course. If you generate change vectors when you update EMP, you generate change vectors when you generate undo. Why do we store the before and after image in redo and then duplicate half of that by repeating the store of the before image in undo? Because redo is written and generated sequentially and isn’t cached for long in memory (most log buffers are a few megabytes in size, tops).
Therefore, using redo to roll back a mere mistake or as a result of a change of mind, whilst theoretically do-able, would involve wading through huge amounts of redo sequentially, looking for one little before image in a sea of changes made by lots of people, and all of that wading would be done by reading stuff off disk (like it is in a recovery scenario). Undo, on the other hand, is stored in the buffer cache (just as EMP is stored in the buffer cache), so there’s a good chance that reading it will only require logical I/O, not physical. And your transaction is dynamically linked to where it’s written its undo, so you and your transaction can jump straight to where your undo is, without having to wade through the entire undo generated by other transactions. In performance terms, there is no comparison. Splitting ‘you need this for recovery’ from ‘you need this for changes of mind’ was a stroke of genius on the part of Oracle: other databases merely have ‘transaction logs’ which serve both purposes, and suffer in performance and flexibility terms accordingly.
--------------------------------------------------------
USEFUL SCRIPTS:
To see the redo generated since instance started:
col name format a30 heading 'Statistic|Name'
col value heading 'Statistic|Value'
start title80 "Redo Log Statistics"
spool rep_out\&db\red_stat
SELECT name, value
FROM v$sysstat
WHERE name like '%redo%'
order by name
/
spool off
pause Press enter to continue
ttitle off
The redo generated during my session since the session started:
select value redo_size
from v$mystat, v$statname
where v$mystat.STATISTIC# = v$statname.STATISTIC#
and name = 'redo size'
/
The redo generated by current user sessions:
select v$session.sid, username, value redo_size
from v$sesstat, v$statname, v$session
where v$sesstat.STATISTIC# = v$statname.STATISTIC#
and v$session.sid = v$sesstat.sid
and name = 'redo size'
and value > 0
and username is not null
order by value
/
Provide a current status for redo logs:
column first_change# format 999,999,999 heading Change#
column group# format 9,999 heading Grp#
column thread# format 999 heading Th#
column sequence# format 999,999 heading Seq#
column members format 999 heading Mem
column archived format a4 heading Arc?
column first_time format a25 heading First|Time
break on thread#
set pages 60 lines 132 feedback off
start title132 'Current Redo Log Status'
spool rep_out\&db\log_stat
select thread#, group#, sequence#, bytes, members, archived, status, first_change#,
       to_char(first_time,'dd-mon-yyyy hh24:mi') first_time
from sys.v_$log
order by thread#, group#;
spool off
pause Press Enter to continue
set pages 22 lines 80 feedback on
clear breaks
clear columns
ttitle off
HOW TO DETECT UNDO:
There are some views that show information related to undo activity:
· V$UNDOSTAT: histogram-like view that shows statistics for 10-minute intervals.
· V$TRANSACTION: present-time view providing information on current transactions.
· V$SESSTAT: individual session statistics, which include one for undo usage.
· V$UNDOSTAT will provide a hint about who did it, recording the longest running query for each 10-minute interval through the MAXQUERYID column, which may be linked to V$SQL; use the columns PARSING_USER_ID or PARSING_SCHEMA_NAME to get a grip on the suspect.
· V$TRANSACTION linked with V$SESSION will show the undo blocks currently used by ongoing transactions. This query may help:
SELECT a.sid, a.username, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr
ORDER BY b.used_ublk DESC
/
· V$SESSTAT provides another, per-session view; to avoid getting lost in the maze of Oracle statistics, focus on just one: 'undo change vector size', which accumulates the bytes of undo generated during the session lifetime. The following query is designed to pinpoint sessions with high undo activity.
SELECT a.sid, b.name, a.value
FROM v$sesstat a, v$statname b
WHERE a.statistic# = b.statistic#
AND b.name = 'undo change vector size'  -- statistic# 176 in the original; joining on the name is more portable
ORDER BY a.value DESC
/
HOW TO DETECT REDO:
To find sessions generating lots of redo, you can use either of the following methods. Both methods examine the amount of undo generated; when a transaction generates undo, it will automatically generate redo as well.
1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.
The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
FROM v$session s, v$sess_io i
WHERE s.sid = i.sid
ORDER BY 5 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
2) Query V$TRANSACTION. This view contains information about the amount of undo blocks and undo records accessed by the transaction (as found in the USED_UBLK and USED_UREC columns).
The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program,
t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by the session. You use the first query when you need to check for programs generating lots of redo when these programs activate more than one transaction. The latter query can be used to find out which particular transactions are generating redo.