
1. If you have an expdp dumpfile and the client missed telling you which schemas or tablespaces are in the dump, how will you find out?

We can generate the DDL from the dumpfile using the syntax below and then check the generated SQL file for the schemas and tablespaces that need to be imported:

impdp \'/ as sysdba\' DIRECTORY=directory_name DUMPFILE=dumpfile_name.dmp SQLFILE=ddl.sql
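Once the SQLFILE has been generated, the schema and tablespace names can be pulled out of it with grep and awk. A minimal sketch (the ddl.sql content below is mocked up purely for illustration; a real SQLFILE is much larger and varies by dump):

```shell
# Mock SQLFILE resembling impdp SQLFILE output (illustrative only)
cat > ddl.sql <<'EOF'
CREATE USER "SCOTT" IDENTIFIED BY VALUES 'XXXX' DEFAULT TABLESPACE "USERS";
CREATE USER "HR" IDENTIFIED BY VALUES 'XXXX' DEFAULT TABLESPACE "HR_DATA";
CREATE TABLESPACE "USERS" DATAFILE SIZE 100M;
CREATE TABLESPACE "HR_DATA" DATAFILE SIZE 100M;
EOF

# Schemas present in the dump
grep -o 'CREATE USER "[^"]*"' ddl.sql | awk -F'"' '{print $2}'

# Tablespaces referenced in the dump
grep -o 'TABLESPACE "[^"]*"' ddl.sql | awk -F'"' '{print $2}' | sort -u
```

The same two one-liners work unchanged on a real SQLFILE.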

2. What is the UNASSIGNED state in logfiles?

If a standby redo logfile is in the UNASSIGNED state, the online redo log has never been written to. This is the state of a redo log that was just added, or just after a RESETLOGS when it is not the current redo log.

3. Suppose your database is hung, what will you do?

When the database is hung, we as DBAs may not even be able to log in to the database with SYSDBA privileges.

Our challenge is to troubleshoot what is causing the database to hang and then fix the problem ASAP.

Step 1 :  Check the Database alert log and check for any errors.

Step 2 : Try to generate an AWR or ASH report, or query some of the ASH views. You may notice events such as hard parses at the top of the load profile section of the AWR report, indicating what is slowing down the database.

Step 3 : See if a large expdp/impdp operation running in parallel mode is consuming all database resources.

Step 4 : Check the database for blocking locks and latch contention. 

Step 5 : Check the server memory usage and CPU usage. 

Step 6 : See if it is caused by the archive log destination filling up. If the archive destination is full, the database will hang.
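Steps 1 and 6 lend themselves to a quick shell triage. A minimal sketch (the alert log content is mocked here for illustration; point the commands at your real alert log and archive destination):

```shell
# Mock alert log; the real one lives under
# $ORACLE_BASE/diag/rdbms/<db>/<sid>/trace/alert_<sid>.log
cat > alert_TEST.log <<'EOF'
Thread 1 advanced to log sequence 1042
ORA-00257: archiver error. Connect internal only, until freed.
ORA-19809: limit exceeded for recovery files
EOF

# Step 1: recent ORA- errors in the alert log
grep -n 'ORA-' alert_TEST.log | tail -20

# Step 6: free space on the archive destination
# ("." is a stand-in; use your actual archive log destination path)
df -h . | tail -1
```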

If you are still unable to troubleshoot the exact cause, you need to use the PRELIM option:

[oracle@jack ~]$ sqlplus /nolog

SQL> set _prelim on
SQL> connect / as sysdba
Prelim connection established
SQL>
You can also use:

[oracle@jack ~]$ sqlplus -prelim "/ as sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Wed Apr 25 11:55:39 2012

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Now you can use the oradebug hanganalyze command to analyze the hung database.
SQL> oradebug hanganalyze 3

in RAC env

SQL> oradebug setinst all
SQL> oradebug -g def hanganalyze 3

 

You can repeat the oradebug hanganalyze command multiple times to generate dump files for varying process states.

When you call Oracle Support about this issue, they may ask you to attach logs for a process state dump, also called a systemstate dump, to analyze the hung database.

The systemstate dump report shows what processes are doing and the resources they are currently holding.
Commands to get a systemstate dump (non-RAC environment):
SQL>oradebug setmypid

SQL>oradebug dump systemstate 266

RAC ENV
SQL>oradebug setmypid    -- to target a process other than your own, use: oradebug setpid <pid>
SQL>oradebug ulimit
SQL>oradebug -g all dump systemstate 266

If you try to issue this command without setting the PID, you get ORA-00074.

4. What patches would you need to apply to go from 19.3 to 19.12?

A Release Update (RU) patch. From 12.2 onward the quarterly PSUs were replaced by Release Updates, so moving from 19.3 to 19.12 means applying the 19.12 Release Update.

5. What is awk and sed?

AWK : 

The awk command allows searching for data in a file and printing it on the console. If a file contains multiple columns, it is also possible to find data in specific columns. Furthermore, it helps with tasks such as searching, conditional execution, updating and filtering.

awk '{ print }' file1.txt

The above command will display all the content in the file1.txt.

Assume that the file1.txt file contains multiple columns.

awk '{print $2}' file1.txt

This command prints the second column of the file1.txt. By default, the awk command considers a white space as the separator.

awk '{print $1 " " $4}' file1.txt

The above command prints the first and the fourth column of the file1.txt with a space in between.

awk '{print $1 $4}' file1.txt

The above command prints the first and fourth columns concatenated (in awk, placing fields side by side concatenates them).

awk '/infra/ {print}' file1.txt

The above command prints all the lines in the file1.txt file that contains the word infra.

awk '/[0-9]/ {print}' file1.txt

The above command prints all the lines in the file1.txt file that contain a digit from 0 to 9.

awk '/^[0-9]/ {print}' file1.txt

This will print the lines in file1.txt that start with a number.

awk '/[0-9]$/ {print}' file1.txt

This will print the lines in file1.txt that end with a number.

It is also possible to check the conditionals as follows.

awk '{ if ($1 ~ /123/) print }' file1.txt

This will print the line if the first column contains 123.

awk '{ if ($2 ~ /^[0-9]/) print }' file1.txt

This will print the lines whose second column starts with a number.
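The awk one-liners above can be tried end-to-end on a throwaway file (the sample data below is invented for illustration):

```shell
# Throwaway sample: three whitespace-separated columns
cat > file1.txt <<'EOF'
alice 123 pune
bob 456 infra9
123go 789 delhi
EOF

awk '{print $2}' file1.txt                 # second column of every line
awk '{print $1 " " $3}' file1.txt          # two columns joined with a space
awk '/infra/ {print}' file1.txt            # lines containing "infra"
awk '/^[0-9]/ {print}' file1.txt           # lines starting with a digit
awk '/[0-9]$/ {print}' file1.txt           # lines ending with a digit
awk '{ if ($1 ~ /123/) print }' file1.txt  # first column contains 123
```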

SED :

The command sed allows modifying and filtering text files.

sed '1,3 d' test1.txt

The above command deletes lines 1, 2 and 3 of test1.txt and displays the remaining lines.

sed 's/abc/cab/' test1.txt

The above command replaces the first occurrence of abc on each line of test1.txt with cab.

sed 's/one/1/' test1.txt

The above command replaces the first occurrence of one on each line of test1.txt with 1.

sed '1,2 ! d' test1.txt

This will only display line 1 and line 2.
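Likewise, the sed commands can be verified on a small test file (sample data invented for illustration):

```shell
# Throwaway sample file
cat > test1.txt <<'EOF'
line one abc
line two abc
line three
line four
EOF

sed '1,3 d' test1.txt       # delete lines 1-3, print the rest
sed 's/abc/cab/' test1.txt  # replace first "abc" on each line with "cab"
sed 's/one/1/' test1.txt    # replace first "one" on each line with "1"
sed '1,2!d' test1.txt       # keep only lines 1 and 2
```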

For a full course on shell scripting for Oracle DBAs, please check our YouTube playlist

6. How to resolve gaps in Dataguard?

We can resolve archive log gaps in Data Guard with the methods below:

a. If there is a minimal archive log gap, we can transfer the missing archive logs from the primary to the standby, register them there, and let them be applied.

b. By performing an incremental (SCN-based) restore. For the full process, you can check on the given LINK

 

7. What are the methods of Database Upgrade?

a. Silent upgrade

b. Upgrade using DBUA

c. Using Datapump

d. Using transportable tablespace

 

8. Difference between maximum protection and maximum performance mode in Dataguard Environment?

Maximum Protection mode guarantees that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover each transaction must be written to both the local online redo log and to a standby redo log on at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down if a fault prevents it from writing its redo stream to at least one synchronized standby database.

When Data Guard is in MAXIMUM PERFORMANCE mode, data can potentially be lost if the primary database crashes and the redo logs have not been transferred to any archival destination. This protection mode provides the highest level of data protection that is possible without affecting the performance of the primary database. This is accomplished by allowing a transaction to commit as soon as the redo data needed to recover that transaction is written to the local redo log. The primary database’s redo data stream is also written to at least one standby database, but that redo stream is written asynchronously with respect to the commitment of the transactions that create the redo data.

9. ASM isn't coming up automatically after a server reboot, but you can manually start ASM. What could be the reason?

Check the ASM auto_start value with the below command :

crsctl stat res ora.asm -p

Possible AUTO_START values / constants are:

always (1)
Causes the resource to restart when the node restarts, regardless of the resource's state when the node stopped.

restore (0)
Does not start the resource at restart time if it was in an offline state (such as STATE=OFFLINE, TARGET=OFFLINE) when the node stopped. The resource is restored to the state it was in when the node went down: it is started only if it was online before, and not otherwise.

never (2)
Oracle Clusterware never restarts the resource, regardless of the resource's state when the node stopped.
If the AUTO_START value is restore (0) or never (2), then ASM won't start automatically and you need to start it manually.

With the command below you can change the AUTO_START value for ASM:

crsctl modify resource "ora.asm" -attr "AUTO_START=always" -unsupported

 

10. What is split brain syndrome?

Split Brain Syndrome occurs when the instances in a RAC cluster fail to ping/connect to each other via the private interconnect while the servers are all physically up and the database instance on each server is also running. The individual nodes are running fine and can conceptually accept user connections and work independently. Because of the lack of communication, each instance thinks the instance it cannot reach is down and that it needs to do something about the situation. The problem is that if we leave these instances running, the same block might be read and updated in each individual instance, causing data integrity issues: blocks changed in one instance will not be locked and could be overwritten by another instance. This situation is termed Split Brain Syndrome.

 

11. What all things can you check from TOP command?

System time, uptime and user sessions
At the very top left of the screen, top displays the current time. This is followed by the system uptime, which tells us how long the system has been running.
Next comes the number of active user sessions.

Memory usage
The “memory” section shows information regarding the memory usage of the system. The lines marked “Mem” and “Swap” show information about RAM and swap space respectively.

Tasks
The “Tasks” section shows statistics regarding the processes running on your system. The “total” value is simply the total number of processes currently present in the system.

CPU usage
The CPU usage section shows the percentage of CPU time spent on various tasks. The us value is the time the CPU spends executing processes in user space. Similarly, the sy value is the time spent on running kernel space processes.
Linux uses a "nice" value to determine the priority of a process. A process with a high "nice" value is "nicer" to other processes and gets a lower priority, while processes with a lower "nice" value get higher priority.
This is followed by id, which is the time the CPU remains idle. Most operating systems put the CPU on a power saving mode when it is idle. Next comes the wa value, which is the time the CPU spends waiting for I/O to complete.

Load average

The load average section represents the average “load” over one, five and fifteen minutes. “Load” is a measure of the amount of computational work a system performs. On Linux, the load is the number of processes in the R and D states at any given moment. The “load average” value gives you a relative measure of how long you must wait for things to get done.
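On Linux, the same three load figures come from /proc/loadavg. A small sketch (the line is mocked so the snippet runs anywhere; on a live server read the real file instead):

```shell
# /proc/loadavg format: 1-min 5-min 15-min running/total last-pid
# Mocked sample line; on Linux use: loadavg=$(cat /proc/loadavg)
loadavg="0.52 0.58 0.64 2/467 12345"

echo "$loadavg" | awk '{printf "1m=%s 5m=%s 15m=%s\n", $1, $2, $3}'
```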

12. What things need to be checked before adding a disk in a disk group in asm?

a. Check whether the Disk is visible from all nodes

b. Check whether the Disk to be added is a RAW Disk

13. If the Disk size is different, will the disk be added to the diskgroup in Oracle ASM?

Yes, Oracle supports different-sized disks within a diskgroup, but notes that there may be performance issues if you do this.

Oracle ASM load-balances file activity by uniformly distributing file extents across all of the disks in a disk group. For this technique to be effective, it is important that the disks in a disk group have similar performance characteristics. Hence, you should add ASM disks of different sizes into a separate ASM diskgroup.

14. What is header status in asm?

Per-instance status of the disk as seen by discovery:

  • UNKNOWN – Automatic Storage Management disk header has not been read
  • CANDIDATE – Disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement
  • INCOMPATIBLE – Version number in the disk header is not compatible with the Automatic Storage Management software version.
  • PROVISIONED – Disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement. The PROVISIONED header status is different from the CANDIDATE header status in that PROVISIONED implies that an additional platform-specific action has been taken by an administrator to make the disk available for Automatic Storage Management.
  • MEMBER – Disk is a member of an existing disk group. No attempt should be made to add the disk to a different disk group. The ALTER DISKGROUP statement will reject such an addition unless overridden with the FORCE option
  • FORMER – Disk was once part of a disk group but has been dropped cleanly from the group. It may be added to a new disk group with the ALTER DISKGROUP statement.
  • CONFLICT – Automatic Storage Management disk was not mounted due to a conflict
  • FOREIGN – Disk contains data created by an Oracle product other than ASM. This includes datafiles, logfiles, and OCR disks

15. In a query, which is good to have: UNION or UNION ALL?

UNION ALL is generally preferable when duplicate rows are acceptable (or cannot occur): UNION performs an implicit sort and de-duplication of the combined result set, which adds overhead, whereas UNION ALL simply appends the rows from each query.

16. In a query, which is good to have: subqueries or a JOIN?

What are Joins?

A join is a query that combines records from two or more tables. A join will be performed whenever multiple tables appear in the FROM clause of the query. The select list of the query can select any columns from any of these tables.

What is Subquery?

A subquery (also called an inner query or nested query) is a SELECT statement embedded in a clause of another SQL statement. Subqueries are very useful for selecting rows from a table with a condition that depends on data in the same or another table. A subquery returns data that the main query uses as a condition to further restrict the rows retrieved. The subquery can be placed in the WHERE clause, the HAVING clause, or the FROM clause.

Advantages of Joins:
a. It executes faster.
b. The retrieval time of a query using joins will almost always be faster than that of a subquery.
c. By using joins, you can push the computational burden onto the database, i.e., use one join query instead of multiple queries. This means you can make better use of the database's abilities to search through, filter, sort, etc.

Disadvantages of Joins:

a. They are not as easy to read as subqueries.
b. More joins in a query means the database server has to do more work, which makes retrieving the data more time-consuming.
c. As there are different types of joins, it can be confusing as to which join is the appropriate type of join to use to yield the correct desired result set.
d. Joins cannot be avoided when retrieving data from a normalized database, but it is important that joins are performed correctly, as incorrect joins can result in serious performance degradation and inaccurate query results.

Advantages of Subquery:

a. Subqueries divide a complex query into isolated parts, so that a complex query can be broken down into a series of logical steps.
b. They are easy to understand, and code maintenance is also easier.
c. Subqueries allow you to use the results of another query in the outer query.
d. In some cases, subqueries can replace complex joins and unions.

Disadvantages of Subquery:

a. The optimizer in MySQL is more mature for joins than for subqueries, so in many cases a statement that uses a subquery can be executed more efficiently if you rewrite it as a join.
b. We cannot modify a table and select from the same table within a subquery in the same SQL statement.

Conclusion :
A subquery is easier to write, but a join can often be better optimized by the server. For example, a LEFT OUTER JOIN typically works faster because servers optimize it.

 

17. Discuss on the high level steps in Patch.

a. Download the Patch

b. Check whether an OPatch update is needed and, if so, update OPatch

c. Check for any conflicting patches and resolve them if needed

d. Take a backup of your Oracle Home

e. Apply the Patch

For full steps in Patching a RAC instance, please click on the LINK

 

18. Can you upgrade from Oracle 10g to 19c?

Direct upgrade is not supported from Oracle 10g to 19c, so you need to upgrade to a minimum of 11.2.0.4 first, after which you can upgrade your database to 19c.

Alternatively, you can use the Data Pump utility to migrate the data into a new 19c database.

The Oracle upgrade documentation includes a table showing which versions support direct upgrade to 19c.

19. How to stop a scan listener from 1 Node?

srvctl stop scan_listener -n <node name>

20. What is Library Cache Contention?

The Oracle shared pool contains Oracle’s library cache, which is responsible for collecting, parsing, interpreting, and executing all of the SQL statements that go against the Oracle database.

The library cache is used like the data buffers, a place to store ready-to-run SQL which has already been parsed, semantically checked, optimized and has a load-and-go execution plan in-place.

Contentions for the shared pool and library cache latches are mainly due to intense hard parsing. A hard parse applies to new cursors and cursors that are aged out and must be re-executed. Excessive hard parsing is common among applications that primarily use SQL statements with literal values. A hard parse is a very expensive operation, and a child library cache latch must be held for the duration of the parse.

21. What is SQL Baseline and SQL Profile?

  • SQL Plan baseline

SQL plan management is a preventative mechanism that enables the optimizer to automatically manage execution plans, ensuring that the database uses only known or verified plans.

SQL plan management uses a mechanism called a SQL plan baseline. A plan baseline is a set of accepted plans that the optimizer is allowed to use for a SQL statement. In the typical use case, the database accepts a plan into the plan baseline only after verifying that the plan performs well. In this context, a plan includes all plan-related information (for example, SQL plan identifier, set of hints, bind values, and optimizer environment) that the optimizer needs to reproduce an execution plan.

When a SQL statement is hard parsed, the cost based optimizer produces several execution plans and selects the one with the lowest cost. If a SQL plan baseline is present, the optimizer compares the plan it just produced with the plans in the SQL plan baseline. If a matching plan is found that is flagged as accepted the plan is used. If the SQL plan baseline doesn’t contain an accepted plan matching the one it just created, the optimizer evaluates the accepted plans in the SQL plan baseline and uses the one with the lowest cost. If the execution plan originally produced by the optimizer has a lower cost than those present in the SQL plan baseline, it is added to the baseline as a not-accepted plan, so it is not used until it is verified not to cause a reduction in performance. If a system change affects all existing accepted plans, so they are considered non-reproducible, the optimizer will use the plan with the lowest cost.

The use of SQL plan baselines is controlled by the OPTIMIZER_USE_SQL_PLAN_BASELINES parameter, which is set to TRUE by default. 

 

  • SQL Profile

A SQL profile is a set of auxiliary information specific to a SQL statement. Conceptually, a SQL profile is to a SQL statement what statistics are to a table or index. The database can use the auxiliary information to improve execution plans.

A SQL profile contains corrections for poor optimizer estimates discovered by the SQL Tuning Advisor. This information can improve optimizer cardinality and selectivity estimates, which in turn leads the optimizer to select better plans.

The SQL profile does not contain information about individual execution plans. Rather, the optimizer has the following sources of information when choosing plans:

  • The environment, which contains the database configuration, bind variable values, optimizer statistics, data set, etc.
  • The supplemental statistics in the SQL profile

Therefore, SQL profiles just guide the optimizer to a better plan.

 

22. What is Checkpoint Process?

The checkpoint process is responsible for updating file headers in the database datafiles. A checkpoint occurs when Oracle moves new or updated blocks (called dirty blocks) from the RAM buffer cache to the database datafiles.

Oracle checkpoints keep the database buffer cache and the database datafiles synchronized. This synchronization is part of the mechanism that Oracle uses to ensure that your database can always be recovered.

Oracle Database uses checkpoints to achieve the following goals:

  • Reduce the time required for recovery in case of an instance or media failure
  • Ensure that dirty buffers in the buffer cache are written to disk regularly
  • Ensure that all committed data is written to disk during a consistent shutdown

Oracle Database uses the following types of checkpoints:

  • Thread checkpoints: The database writes to disk all buffers modified by redo in a specific thread before a certain target. The set of thread checkpoints on all instances in a database is a database checkpoint. Thread checkpoints occur in the following situations:
    • Consistent database shutdown
    • ALTER SYSTEM CHECKPOINT statement
    • Online redo log switch
    • ALTER DATABASE BEGIN BACKUP statement
  • Tablespace and data file checkpoints: The database writes to disk all buffers modified by redo before a specific target. A tablespace checkpoint is a set of data file checkpoints, one for each data file in the tablespace. These checkpoints occur in a variety of situations, including making a tablespace read-only or taking it offline normal, shrinking a data file, or executing ALTER TABLESPACE BEGIN BACKUP.
  • Incremental checkpoints: An incremental checkpoint is a type of thread checkpoint partly intended to avoid writing large numbers of blocks at online redo log switches. DBWn checks at least every three seconds to determine whether it has work to do. When DBWn writes dirty buffers, it advances the checkpoint position, causing CKPT to write the checkpoint position to the control file, but not to the data file headers.

Other types of checkpoints include instance and media recovery checkpoints and checkpoints when schema objects are dropped or truncated.

 

23. What is parsing?

When Oracle receives a SQL query, it must perform several tasks before it can actually execute the query. These tasks make up what is called parsing. To do this work, and to limit the amount of data it must rebuild each time, Oracle uses a shared memory area named the shared pool.

24. Suppose you are restricted to take backups from your Production database to restore them in non prod. How will you proceed with non prod restore then?

There are numerous ways to proceed with this. Below are a few methods:

a. Restore with RMAN Duplicate command

b. Restore with the service name of your Production Database

 

25. Can you restore the same way from standby database?

Yes, we can.

a. restore primary controlfile from service 'service_name';

b. restore database from service 'service_name' section size 64G;

 

26. How to generate explain plan?

a. SQL> explain plan for
  2  select count(*) from infraxpertzz;

Explained.

-- DISPLAY THE EXPLAIN PLAN

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 7047308117

------------------------------------------------------------------------
| Id | Operation           | Name         | Rows | Cost (%CPU)| Time     |
------------------------------------------------------------------------
|  0 | SELECT STATEMENT    |              |    1 |    42   (1)| 00:00:01 |
|  1 |  SORT AGGREGATE     |              |    1 |            |          |
|  2 |   TABLE ACCESS FULL | INFRAXPERTZZ |  939 |    42   (1)| 00:00:01 |
------------------------------------------------------------------------

Note

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
dynamic statistics used: dynamic sampling (level=2)

13 rows selected.

 

b. Explain plan of a sql_id from AWR:

SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));

c. Explain plan for a sql_id from the cursor cache:

SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id'));

d. Explain plan of a SQL plan baseline:

SELECT * FROM TABLE(DBMS_XPLAN.display_sql_plan_baseline(plan_name=>'SQL_PLAN_agd921ut5s4jg30a4b4a9'));

27. What is a snapshot standby database?

A snapshot standby database is a fully updateable standby database created by converting a physical standby database into a snapshot standby database.

A snapshot standby database receives and archives, but does not apply, redo data from a primary database. The redo data received from the primary database is applied once a snapshot standby database is converted back into a physical standby database, after discarding all local updates to the snapshot standby database.

This means that once you convert your physical standby database to a snapshot standby, redo from the primary is still received but no longer applied. The application can connect to the snapshot standby database and perform WRITE operations on it. But once it is converted back to a physical standby, those local changes are discarded and the physical standby database is again brought in SYNC with the primary.

For steps to convert your Physical Standby Database to a Snapshot Standby database, please click on the LINK

 

28. Can you open the database in READ-WRITE mode in snapshot standby database?

Yes. Since Snapshot Standby database is a fully updateable database, you can open it in READ-WRITE mode.

 

29. From where do you check whether a parameter to be changed in a database is dynamic or static?

In v$parameter there is a column ISSYS_MODIFIABLE:

select name, value, issys_modifiable from v$parameter ;

Values :-

IMMEDIATE – the parameter can be changed with ALTER SYSTEM and the change takes effect immediately

DEFERRED – the parameter can be changed with ALTER SYSTEM DEFERRED and the change takes effect in subsequent sessions

FALSE – the parameter cannot be changed dynamically with ALTER SYSTEM; it is static and can only be changed in the SPFILE, taking effect after an instance restart

30. Suppose our FRA is full, what will you do? 

You can try taking a backup of the archive logs and then deleting them, but often the backup itself won't run until some space is available.

In that case, move some of the most recent archive logs out of the FRA to another location, take an RMAN backup of the archive logs up to the sequence still present in the FRA, and delete the backed-up files. Once this is done, move the archive logs back into the FRA and again take an archive log backup and delete.
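At the file level, the shuffle described above looks roughly like this. It is purely illustrative (mock file names and paths); in practice the backup and deletion of archive logs must go through RMAN so that the control file's records stay consistent:

```shell
# Mock FRA archive log directory with a few "archive logs"
mkdir -p fra staging
for seq in 101 102 103 104 105; do touch "fra/arch_1_${seq}.arc"; done

# Free up space fast: move the most recent logs aside to a staging area
mv fra/arch_1_105.arc fra/arch_1_104.arc staging/

# (Here you would run, inside RMAN:
#   backup archivelog until sequence 103 delete input;)

# After the backup completes, move the recent logs back into the FRA
mv staging/arch_1_104.arc staging/arch_1_105.arc fra/
ls fra
```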

31. Details on guaranteed restore points and restore point

  • Guaranteed restore points: A guaranteed restore point enables you to flash the database back deterministically to the restore point regardless of the DB_FLASHBACK_RETENTION_TARGET initialization parameter setting. The guaranteed ability to flash back depends on sufficient space being available in the flash recovery area. Guaranteed restore points guarantee only that the database will keep enough flashback logs to flash the database back to the guaranteed restore point; they do not guarantee that the database will have enough undo to flash back any table to the same restore point. Guaranteed restore points must be dropped explicitly by the user using the DROP RESTORE POINT statement; they do not age out. Because guaranteed restore points can use considerable space in the flash recovery area, Oracle recommends that you create them only after careful consideration.
  • Normal restore points: A normal restore point enables you to flash the database back to a restore point within the time period determined by the DB_FLASHBACK_RETENTION_TARGET initialization parameter. The database automatically manages normal restore points: when the maximum number of restore points is reached, the database automatically drops the oldest restore point. However, you can explicitly drop a normal restore point using the DROP RESTORE POINT statement.

32. How to resolve patch conflicts?

All patches may not be compatible with one another. For example, if a patch has been applied, all the bugs fixed by that patch could reappear after another patch is applied. This is called a conflict situation. OPatch detects such situations and raises an error when it detects a conflict.

For more information, please click on the Oracle Link

33. Which parameter is mandatory to change during an upgrade?

You need to check your pre-upgrade log and find what modifications need to be made prior to the upgrade.

34. What is the use of kernel parameters while installing the database?

Kernel parameters (for example, those controlling shared memory and semaphores) define the operating system resources the database needs. Setting them correctly while installing the database ensures that the instance can allocate its SGA and background processes, and they are also what you adjust when tuning performance at the OS level.

For more information, please follow the LINK

 

35. In which conditions you will recommend oracle data guard to client?

Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Oracle Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions.

 

36. When you connect via RMAN to auxiliary database both target and auxiliary before performing the Cloning, your RMAN is able to connect to the target database perfectly fine, but in the auxiliary, it is giving invalid username and password. What could be the issue?

The password file has most likely not been copied to the auxiliary (clone) server.

37. What is the difference between auxiliary channel and maintenance channel?

Auxiliary Channel : An auxiliary database is a standby database that will be created as a result of the duplication of the target database. In RMAN's terminology, the auxiliary instance is the instance RMAN connects to in order to execute the DUPLICATE command. An auxiliary channel is a communication pipeline between the RMAN executable and an auxiliary database. A channel consists of a server session on the target or auxiliary database and a data stream from the database to the backup device or vice versa.

Maintenance Channel :  A maintenance channel is used in preparation for issuing a CHANGE, DELETE, or CROSSCHECK command. As a rule, allocate one maintenance channel for each device.

38. How many channels can be allocated in RMAN?

You can allocate up to 255 channels; each channel can read up to 64 files in parallel. You can control the degree of parallelism within a job by the number of channels that you allocate. Allocating multiple channels simultaneously allows a single job to read or write multiple backup sets or disk copies in parallel, with each channel operating on a separate backup set or copy.

39. When do more undo generates and what action required?

Oracle Database creates and manages information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.

Undo records are used to:

  • Roll back transactions when a ROLLBACK statement is issued
  • Recover the database
  • Provide read consistency
  • Analyze data as of an earlier point in time by using Oracle Flashback Query
  • Recover from logical corruptions using Oracle Flashback features

When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the data files. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.

Now coming to the question, any operations where data changes occur, also Data Pump Export and Import consume more undo tablespace. This is due to additional metadata queries during export and some relatively long-running master table queries during import. As a result, for databases with large amounts of metadata, you may receive an ORA-01555: snapshot too old error. To avoid this, consider adding additional undo tablespace or increasing the value of the UNDO_RETENTION initialization parameter for the database.

40. Why do you copy the password file to the clone server?

Connections to the auxiliary instance can be established by using operating system authentication or password file authentication. For backup-based duplication, you can either create a password file or use operating system authentication to connect to the auxiliary instance. For active database duplication, you must use password file authentication.

To connect to a database by using password file authentication, you must create a password file for the database. When duplicating to a remote host, setting up a password file is mandatory. 
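The password file can be created on the clone (auxiliary) server with the orapwd utility; the file name must follow the orapw&lt;SID&gt; convention (the SID and path below are illustrative):

```
$ orapwd file=$ORACLE_HOME/dbs/orapwCLONEDB password=<sys_password> entries=5
```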

41.  Can we have a schema level backup in RMAN?

No. But if all of the schema's objects reside in a single tablespace, an equivalent backup can be taken by backing up that tablespace.
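To check whether this workaround applies, first confirm that all of the schema's segments live in one tablespace, then back that tablespace up (the schema and tablespace names below are illustrative):

```sql
-- Which tablespaces hold the schema's segments?
SELECT DISTINCT tablespace_name
FROM   dba_segments
WHERE  owner = 'SCOTT';

-- If only one tablespace is returned, back it up in RMAN:
-- RMAN> BACKUP TABLESPACE users;
```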

 

42. As a dba what all steps to configure the Dataguard?

Please click the LINK to get details 

43. Which script drops the stdbyperf user and tables?

In the new Active Data Guard option, users can now use statspack from the primary database to collect data from a standby database that is opened read-only and performing recovery. The standby statspack is installed in a separate schema on the primary database, STDBYPERF, which is then propagated to the standby. The STDBYPERF user does not have DBA privileges and has no access to local V$ tables.

The script sbdrop.sql drops the stdbyperf user and tables. The script must be run when connected to SYS (or internal). Example:

connect / as sysdba
@sbdrop.sql

 

44. Which script is used to purge a set of snapshots in the data guard?

The script sbpurge.sql purges a set of snapshots. It asks for the database id, instance number, and low and high snapshot ids, then purges all snapshots between those ids for the given instance. Example:

@sbpurge

 

45. Which script generates the Standby statistics report?

The script sbreport.sql generates the standby statistics report. It asks for the database id, instance number, and low and high snapshot ids to create the report. Example:

SQL> @sbreport

 

46. Which script creates the Standby stats pack schema to hold the Standby snapshots?

The perfstat schema and statspack related objects must be present on the primary and standby prior to installing standby statspack. The standby statspack installation script (sbcreate.sql) creates the standby statspack schema to hold the standby snapshots. The script asks for:

    1. A password for stdbyperf user
    2. Default tablespace
    3. Temporary tablespace

The script creates the STDBYPERF user and assigns it the selected default and temporary tablespaces. After the user has been created, it calls sbaddins.sql to add the first standby instance to the configuration.
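Example invocation, following the same pattern as the other standby statspack scripts:

```
SQL> connect / as sysdba
SQL> @sbcreate.sql
```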

47. What are the recovery process wait events in Data guard?

Typical wait events for the recovery processes on a physical standby (visible in V$SESSION_WAIT and AWR) include: log file sequential read (reading redo from standby redo logs or archived logs), parallel recovery read buffer free and parallel recovery change buffer free (recovery slaves waiting for buffers), recovery apply pending, and checkpoint completed.

48. How to monitor and assess redo apply performance?

To monitor and assess redo apply performance, query the V$RECOVERY_PROGRESS view.

The most relevant columns in this view are ITEM, UNITS, SOFAR, TOTAL, and TIMESTAMP; the items include the Active Apply Rate, Average Apply Rate, and Apply Time per Log.

The most useful statistic is the Active Apply Rate, because the Average Apply Rate includes idle time spent waiting for redo to arrive, making it less indicative of true apply performance.
In a Data Guard physical standby environment, it is important to determine if the standby database can recover redo as fast as, or faster than, the primary database can generate redo. The simplest way to determine application throughput in terms of redo volume is to collect Automatic Workload Repository (AWR) reports on the primary database during normal and peak workloads, and determine the number of bytes per second of redo data the production database is producing. You can then compare the speed at which redo is being generated with the Active Apply Rate columns in the V$RECOVERY_PROGRESS view to determine if the standby database is able to maintain the pace.
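For example, the apply-rate items can be queried on the standby like this (item names as documented for physical standby recovery):

```sql
SELECT item, units, sofar
FROM   v$recovery_progress
WHERE  item IN ('Active Apply Rate', 'Average Apply Rate', 'Apply Time per Log');
```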

49. How you will find out the history of apply lag values since the Standby instance was last started?

To obtain a histogram that shows the history of apply lag values since the standby instance was last started, query the V$STANDBY_EVENT_HISTOGRAM view. For example:


 
SQL> SELECT * FROM V$STANDBY_EVENT_HISTOGRAM WHERE NAME = 'apply lag' - 
> AND COUNT > 0;

NAME             TIME       UNIT         COUNT        LAST_TIME_UPDATED
---------     ---------   --------    -----------    ------------------------
apply lag         0        seconds        79681          06/18/2009 10:05:00
apply lag         1        seconds         1006          06/18/2009 10:03:56
apply lag         2        seconds           96          06/18/2009 09:51:06
apply lag         3        seconds            4          06/18/2009 04:12:32
apply lag         4        seconds            1          06/17/2009 11:43:51
apply lag         5        seconds            1          06/17/2009 11:43:52

6 rows selected

 

To evaluate the apply lag over a time period, take a snapshot of V$STANDBY_EVENT_HISTOGRAM at the beginning of the time period and compare that snapshot with one taken at the end of the time period.
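A minimal sketch of such a comparison, using a scratch table (the table name is illustrative) to hold the first snapshot:

```sql
-- Snapshot at the start of the period
CREATE TABLE lag_snap_start AS
  SELECT time, unit, count AS occurrences
  FROM   v$standby_event_histogram
  WHERE  name = 'apply lag';

-- ...after the observation window, report only the buckets that grew
SELECT h.time, h.unit, h.count - s.occurrences AS new_occurrences
FROM   v$standby_event_histogram h
JOIN   lag_snap_start s ON (s.time = h.time AND s.unit = h.unit)
WHERE  h.name = 'apply lag'
AND    h.count > s.occurrences;
```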

 

50. What are the commands to perform switchover?

 


Primary :

1. alter database commit to switchover to standby;

2. shut immediate (if the database is not shut down automatically)

3. startup nomount

4. alter database mount standby database;

select name, open_mode, database_role from v$database;

Standby :

5. alter database commit to switchover to primary;

6. shut immediate

7. startup

select name, open_mode, database_role from v$database;

Old primary (now the new standby) :

8. alter database recover managed standby database disconnect from session;