Please click on the INDEX and browse for more interesting posts.
Below are 50 Oracle DBA interview questions that you may encounter.
We will be constantly updating our blog with more questions. Please check the INDEX link provided above for more posts regarding interviews and other technical topics.
1. What is SQL Elapsed Time?
The SQL execute elapsed time Oracle metric is the amount of elapsed time SQL statements are executing. Note that for SQL select statements this also includes the amount of time spent performing fetches of query results.
This query will show the SQL execution elapsed time duration (in hours) for long-running SQL statements:
col program format a30
select query_runs.*,
round ( (end_time - start_time) * 24, 2) as duration_hrs
from ( select u.username,
ash.program,
ash.sql_id,
ash.sql_plan_hash_value as plan_hash_value,
ash.session_id as sess#,
ash.session_serial# as sess_ser,
cast (min (ash.sample_time) as date) as start_time,
cast (max (ash.sample_time) as date) as end_time
from dba_hist_active_sess_history ash, dba_users u
where u.user_id = ash.user_id and ash.sql_id = lower(trim('&sql_id'))
group by u.username,
ash.program,
ash.sql_id,
ash.sql_plan_hash_value,
ash.session_id,
ash.session_serial#) query_runs
order by sql_id, start_time;
2. Have you worked on cloning a PDB?
Steps:
a. Start the source PDB (the one to be cloned) in read-only mode.
b. From the root container, execute the command below:
create pluggable database <targetpdb> from <sourcepdb> FILE_NAME_CONVERT=('source_datafile_path','target_datafile_path');
3. What about the memory parameters in CDB/PDB? Are they shared?
If you are running a lone-PDB setup, there is no point using these settings, as you want the PDB to use all the memory assigned to the instance.
However, if there are multiple PDBs running in the container, the parameters below can be set individually for each PDB:
- DB_CACHE_SIZE: The minimum buffer cache size for the PDB.
- SHARED_POOL_SIZE: The minimum shared pool size for the PDB.
- PGA_AGGREGATE_LIMIT: The maximum PGA size for the PDB.
- PGA_AGGREGATE_TARGET: The target PGA size for the PDB.
- SGA_MIN_SIZE: The minimum SGA size for the PDB.
- SGA_TARGET: The maximum SGA size for the PDB.
There are a number of restrictions regarding what values can be used, which are explained in the Oracle documentation.
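As a sketch (the PDB name and sizes are illustrative, and the values must respect the CDB-level settings), the parameters can be set from inside the PDB after switching containers:

```sql
-- Switch into the PDB (name is hypothetical) as a privileged common user
ALTER SESSION SET CONTAINER = pdb1;

-- Set per-PDB memory limits (illustrative sizes)
ALTER SYSTEM SET DB_CACHE_SIZE = 1G SCOPE = BOTH;
ALTER SYSTEM SET SHARED_POOL_SIZE = 512M SCOPE = BOTH;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 512M SCOPE = BOTH;
```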
4. Are system datafiles shared or separate in CDB/PDB?
- There is a separate SYSTEM and SYSAUX tablespace for the root container of the CDB and for each PDB.
- By default there is only one UNDO tablespace for the entire CDB (from Oracle 12.2 onward, local undo mode allows each PDB to have its own).
- There is only one set of control files and online redo log files for the entire CDB. Individual PDBs have their own data files (which contain the user data), but do not have distinct redo logs or control files.
- We can create one default temporary tablespace for the entire CDB, or each PDB can have its own additional temporary tablespaces.
- There is a single set of network administration files (listener.ora, tnsnames.ora, and sqlnet.ora) for the entire CDB. All of the PDBs in the CDB use the same files.
- There is only one set of background processes, shared by the root and all PDBs.
- There is a single SGA shared by all PDBs.
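To see this layout for yourself, you can list the data files per container from the root; the query below is a simple sketch using the CDB_* views:

```sql
-- Run from CDB$ROOT: data files grouped by container
SELECT c.name AS container, d.file_name
FROM   cdb_data_files d
       JOIN v$containers c ON c.con_id = d.con_id
ORDER  BY c.con_id, d.file_name;
```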
5. What are some parameters exclusive to a PDB?
Let us know in comments if you have the answer
6. How will you check the RAW disks associated with your ASM?
[root@jack ~]# oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
DATA5
FRA1
FRA2
FRA3
[root@jack ~]# oracleasm querydisk -d DATA1
Disk "DATA1" is a valid ASM disk on device [8,17]
[root@jack ~]# ls -l /dev/* | grep "8, *17" | awk '{print $10}'
/dev/sdb1
7. What is the concept of CDB and PDB?
The multitenant option introduced in Oracle Database 12c allows a single container database (CDB) to host multiple separate pluggable databases (PDB).
A CDB includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non-CDB.
Every CDB has the following containers:
- Exactly one root, named CDB$ROOT. The root stores Oracle-supplied metadata and common users. An example of metadata is the source code for Oracle-supplied PL/SQL packages. A common user is a database user known in every container.
- Exactly one seed PDB, named PDB$SEED. The seed PDB is a system-supplied template that the CDB can use to create new PDBs. You cannot add or modify objects in PDB$SEED.
- Zero or more user-created PDBs. A PDB is a user-created entity that contains the data and code required for a specific set of features. For example, a PDB can support a specific application, such as a human resources or sales application. No PDBs exist at creation of the CDB; you add PDBs based on your business requirements.
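You can verify these containers on any CDB; for example:

```sql
-- Lists CDB$ROOT (con_id 1), PDB$SEED (con_id 2), and any user PDBs
SELECT con_id, name, open_mode
FROM   v$containers
ORDER  BY con_id;
```

From SQL*Plus, SHOW PDBS gives a similar listing of the PDBs only.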
8. If an application user says the database is running slow, what will you do?
This is a troubleshooting question, so describe a methodical approach: confirm the scope of the problem (one session, one application, or the whole database), check the alert log for errors, generate an AWR or ASH report for the affected time window, review the top wait events and top SQL, and check OS-level CPU, memory, and I/O statistics.
9. Consider a server where 10 databases are running, and 2-3 of them are reported slow. What will you do?
Same approach as question 8, but since the databases share the host, also check server-wide CPU, memory, and I/O usage to see whether one database is starving the others of resources.
10. What are the foreground events in AWR?
DB File Scattered Read. This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing or suppressed indexes.
- Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits.
- Try to cache small tables to avoid reading them in repeatedly, since a full table scan is put at the cold end of the LRU (Least Recently Used) list.
DB File Sequential Read. This event generally indicates a single block read (an index read, for example). A large number of waits here could indicate poor joining orders of tables, or unselective indexing. It is normal for this number to be large for a high-transaction, well-tuned system, but it can indicate problems in some circumstances. You should correlate this wait statistic with other known issues within the Statspack report, such as inefficient SQL.
- Check to ensure that index scans are necessary.
- Check join orders for multiple table joins.
- The DB_CACHE_SIZE will also be a determining factor in how often these waits show up. Problematic hash-area joins should show up in the PGA memory, but they’re also memory hogs that could cause high wait numbers for sequential reads.
- They can also show up as direct path read/write waits.
These circumstances are usually interrelated. When db file scattered read and db file sequential read appear together in the Top 5 Wait Events section, take the following steps:
- First you should examine the SQL Ordered by Physical Reads section of the report, to see if it might be helpful to tune the statements with the highest resource usage.
- Then, to determine whether there is a potential I/O bottleneck, examine the OS I/O statistics for corresponding symptoms.
- Also look at the average time per read in the Tablespace and File I/O sections of the report. If many I/O-related events appear high in the Wait Events list, re-examine the host hardware for disk bottlenecks and check the host-hardware statistics for indications that a disk reconfiguration may be of benefit.
Free Buffer. This indicates your system is waiting for a buffer in memory, because none is currently available.
It happens because of dirty buffers. For example, if a query tries to fetch the DEPT table and there are dirty buffers for DEPT, those dirty buffers must first be written to disk by DBWR before they are available to the query. It can also happen if the buffer cache is too small.
Waits in this category may indicate that you need to increase DB_CACHE_SIZE, provided all your SQL is tuned.
Free buffer waits could also indicate that unselective SQL is causing data to flood the buffer cache with index blocks, leaving none for this particular statement that is waiting for the system to process. This normally indicates that there is a substantial amount of DML (insert/update/delete) being done and that the Database Writer (DBWR) is not writing quickly enough; the buffer cache could be full of multiple versions of the same buffer, causing great inefficiency.
To address this, you may want to consider accelerating incremental checkpointing, using more DBWR processes, or increasing the number of physical disks.
To investigate if this is an I/O problem, look at the File I/O Statistics.
Buffer Busy. This is a wait for a buffer that is being used in an unsharable way or is being read into the buffer cache.
It occurs when one session is already using a buffer and another session tries to access that same buffer.
This is a concurrency issue, where two different users need to access the same buffer.
Latches protect buffers from getting corrupted; for example, if two users try to write to a buffer at the same time, it could become corrupted. Frequent buffer busy waits often point to poor application design.
Buffer busy waits should not be greater than 1%.
Check the Buffer Wait Statistics section (or V$WAITSTAT) to find out the type of wait:
- Segment header ==> Increase the freelist groups or increase the pctused to pctfree gap.
- Undo header ==> Add rollback segments.
- Undo block ==> Reduce the data density on the table driving this consistent read or increase the value of the initialization parameter DB_CACHE_SIZE.
- Data block ==>
- Move data to another block to avoid this hot block
- Increase the freelists on the table
- Use Locally Managed Tablespaces (LMTs).
- Index block ==> Rebuild the index, partition the index, or use a reverse key index.
To prevent buffer busy waits related to data blocks, you can also use a smaller block size: fewer records fall within a single block in this case, so it’s not as “hot.”
When a DML occurs, Oracle Database writes information into the block, including all users who are “interested” in the state of the block (Interested Transaction List, ITL).
To decrease waits in this area:
- Increase the INITRANS, which will create the space in the block to allow multiple ITL slots.
- Increase the PCTFREE on the table where this block exists (this writes the ITL information up to the number specified by maxtrans, when there are not enough slots built with the initrans that is specified).
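To find out which class of block is causing the buffer busy waits, query V$WAITSTAT directly; for example:

```sql
-- Buffer busy waits broken down by block class
SELECT class, count, time
FROM   v$waitstat
WHERE  count > 0
ORDER  BY count DESC;
```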
Library Cache Lock
The library cache load lock Oracle metric occurs when the process is waiting for the opportunity to load an object or a piece of an object into the library cache. The loads of SQL statements are serialized and locked in exclusive mode, such that only one process can load an object or a piece of an object at a time. In general, all library cache waits are associated with non-reentrant SQL or an undersized shared pool.
In general, the library cache load lock occurs during periods of high activity within the library cache, especially in databases that do not use bind variables and suffer heavy hard parsing due to non-reusable SQL.
Latch Free. Latches are low-level serialization mechanisms (more accurately, mutual exclusion mechanisms) used to protect shared memory structures in the system global area (SGA).
Latches are like locks on memory that are very quickly obtained and released. They are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded.
Most latch problems are related to:
- Failure to use bind variables (library cache latch)
- Redo generation issues (redo allocation latch)
- Buffer cache contention issues (cache buffers LRU chain)
- Hot blocks in the buffer cache (cache buffers chain).
There are also latch waits related to bugs; check MetaLink for bug reports if you suspect this is the case.
When latch miss ratios are greater than 0.5%, you should investigate the issue.
If latch free waits are in the Top 5 Wait Events or high in the complete Wait Events list, look at the latch-specific sections of the AWR report to see which latches are contended for.
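Outside of the AWR report, you can also check latch contention directly; a simple sketch:

```sql
-- Latches with the highest miss counts; miss ratios above 0.5% warrant investigation
SELECT name, gets, misses,
       ROUND(misses / gets * 100, 3) AS miss_pct
FROM   v$latch
WHERE  gets > 0 AND misses > 0
ORDER  BY misses DESC;
```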
Enqueue. An enqueue is a lock that protects a shared resource.
Locks protect shared resources, such as data in a record, to prevent two people from updating the same data at the same time.
An enqueue includes a queuing mechanism, which is FIFO (first in, first out).
Note: Oracle’s latching mechanism is not FIFO. Enqueue waits usually point to the following enqueues:
- ST: Used for space management and extent allocation in dictionary-managed tablespaces. Use LMTs, try to preallocate extents, or at least make the next extent larger for problematic dictionary-managed tablespaces.
- HW: The HW enqueue is used to manage the allocation of space beyond the high water mark of a segment, which is the boundary between used and unused space in that segment. If contention is occurring for "enq: HW – contention", it is possible that automatic extension is occurring to allow extra data to be stored, since the high water mark has been reached. Frequent allocation of extents, reclaiming of chunks, and sometimes poor I/O performance may be causing contention on the high water mark of LOB segments.
- SQ: Indicates sequence (SQ) enqueue contention. When you select from a sequence, the NEXTVAL is generated from the seq$ table if it is not cached; if it is cached, the value is available in a memory structure and there is no need for this contention.
- TM: Occurs during DML to prevent DDL on the affected object. If you have foreign keys, be sure to index them to avoid this general locking issue.
- TX4: These are the most common enqueue waits, and they are usually the result of one of three issues:
- Duplicates in a unique index; you need to commit/rollback to free the enqueue.
- Multiple updates to the same bitmap index fragment. Since a single bitmap fragment may contain multiple rowids, you need to issue a commit or rollback to free the enqueue when multiple users are trying to update the same fragment.
- Multiple users are updating the same block. If there are no free ITL slots, a block-level lock could occur. You can easily avoid this scenario by increasing the initrans and/or maxtrans to allow multiple ITL slots and/or by increasing the pctfree on the table.
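To see which enqueues are responsible for waits on your system, you can query the cumulative system events; for example:

```sql
-- Enqueue waits since instance startup (time_waited is in centiseconds)
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'enq:%'
ORDER  BY time_waited DESC;
```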
Log Buffer Space. This wait occurs because you are writing the log buffer faster than LGWR can write it to the redo logs, or because log switches are too slow.
To address this problem:
- Increase the size of the log files
- Increase the size of the log buffer
- Move the redo logs to faster disks; you might even consider using solid-state disks for their high speed.
Log File Switch
All commit requests are waiting for log file switch (archiving needed) or log file switch (checkpoint incomplete).
- Ensure that the archive disk is not full or slow
- DBWR may be too slow because of I/O
- Add more or larger redo logs
- Add database writers if the DBWR is the problem.
Log File Sync. When a user commits or rolls back data, the LGWR flushes the session’s redo from the log buffer to the redo logs. The log file sync process must wait for this event to successfully complete.
To reduce wait events here:
- Slow disk I/O: Segregating the redo log files onto separate disk spindles can reduce log file sync waits, as can moving the online redo logs to fast SSD storage and increasing the log_buffer size above 10 megabytes (it is set automatically in 11g and beyond). If I/O is slow (timings in AWR or STATSPACK reports > 15 ms), the only real solution for log file sync waits is to improve I/O bandwidth.
- LGWR is not getting enough CPU: If the vmstat runqueue column is greater than cpu_count, then the instance is CPU-bound and this can manifest itself in high log file sync waits. The solution is to tune SQL (to reduce CPU overhead), to add processors, or to ‘nice’ the dispatching priority of the LGWR process.
- High COMMIT activity: A poorly written application issuing COMMITs too frequently causes high LGWR activity and high log file sync waits. The solution is to reduce the frequency of COMMIT statements in the application.
- LGWR is paged out: Check the server for RAM swapping, and add RAM if the instance processes are getting paged-out.
- Bugs: There is also the possibility that bugs can cause high log file sync waits.
In summary, high log file sync waits can be caused either by too-high COMMIT frequency in the application, or by exhausted CPU, disk, or RAM resources.
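To check whether LGWR I/O is the culprit, compare log file sync against log file parallel write; a sketch:

```sql
-- If log file parallel write is also slow, the redo disks are the bottleneck
SELECT event, total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```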
Idle Event. There are several idle wait events listed at the end of the output; you can ignore them. Idle events are generally listed at the bottom of each section and include such things as SQL*Net message to/from client and other background-related timings. Idle events are listed in the stats$idle_event table.
Direct path writes: They occur when performing appends or data loads. In addition, they occur when you see full table scans or fast full index scans as of Oracle 11g
Direct path reads: Occur when there is a lot of parallel query activity
db file parallel writes / read: Occurs when there is a lot of partition activity; it could be a table or index partition
db file single write: Occurs if there is a high number of data files
Direct path read temp or Direct path write temp: Often this event is the result of an undersized temporary tablespace.
These two events are triggered when a session is reading or writing buffers to or from temporary segments. These reads and writes go directly to or from the PGA. They occur because work areas are too large to fit in memory and spill to disk.
These are the biggest waits for large data warehouse sites.
However, if the workload is not a DSS workload then examine why this is happening.
You can do so by taking the following actions:
- Look at the SQL statement currently being run by the session experiencing waits to see what is causing the sorts. Query V$TEMPSEG_USAGE to find the SQL statement that is generating the sort.
- Query the statistics from the session to determine the size of the sort.
- Investigate whether it is possible to reduce the use of work areas by tuning the SQL statement.
- Investigate whether to increase PGA_AGGREGATE_TARGET.
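The V$TEMPSEG_USAGE lookup mentioned above can be sketched as follows:

```sql
-- Sessions and SQL currently using temporary segments, largest first
SELECT s.sid, s.serial#, u.sql_id, u.tablespace, u.segtype, u.blocks
FROM   v$tempseg_usage u
       JOIN v$session s ON s.saddr = u.session_addr
ORDER  BY u.blocks DESC;
```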
SQL*Net message from dblink: A session waits while the server process receives messages over a database link from another server process.
This time could go up because:
- Network bottleneck.
- Time taken to execute the SQL on the remote node.
- Note: This wait is also used when waiting for data from “extproc” or from a remote gateway process.
Buffer Type Waits: what's going on in your memory.
- cache buffer chains: Check for hot objects.
- free buffer waits: Insufficient buffers, a process holding buffers too long, or an overloaded I/O subsystem. Also check whether DBWR writes are getting clogged up.
- buffer busy waits: See what is causing them further along in the report; most of the time it's data block related.
- gc buffer busy: Seen in RAC environments. It may be caused by insufficient memory on your nodes or an overloaded interconnect. Also look at the RAC-specific section of the report.
- cache buffers lru chain: Freelist issues, hot blocks.
- cache buffer handles: Freelist issues, hot blocks.
- buffer busy: See what is causing them further along in the report.
- no free buffers: Insufficient buffers or DBWR contention.
Log Type Waits
- log file parallel write: Look for log file contention.
- log buffer space: Look at increasing the log buffer size.
- log file switch (checkpoint incomplete): May indicate excessive db files or a slow I/O subsystem.
- log file switch (archiving needed): Indicates archive files are written too slowly.
- log file switch completion: May indicate that more or larger redo log files are needed.
- log file sync: Could indicate excessive commits.
11. My database was running fine yesterday but is really slow today. What has changed?
Use AWR Compare Periods Report to Identify
Changes in Database Performance
• AWR Compare Periods Report
– awrddrpt.sql – single instance
– awrgdrpt.sql – RAC
• Compares database performance over two time periods
• Good for identifying what changed in performance
Check whether any long-running SQL query is present, and find out whether there is any change in PLAN_HASH_VALUE for the query.
Check whether any bulk inserts occurred in the database.
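To check for a plan change on a specific statement, DBA_HIST_SQLSTAT can be queried across snapshots (the &sql_id substitution is supplied at runtime):

```sql
-- A change in plan_hash_value between snapshots indicates the plan flipped
SELECT snap_id, sql_id, plan_hash_value, executions_delta,
       ROUND(elapsed_time_delta / NULLIF(executions_delta, 0) / 1000, 2) AS avg_elapsed_ms
FROM   dba_hist_sqlstat
WHERE  sql_id = '&sql_id'
ORDER  BY snap_id;
```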
12. A user complains that his session seemed to hang for a few minutes. What happened?
Use ASH for Targeted Performance Diagnostics
• AWR snapshots and reports cover entire system
• Transient events can be averaged over a snapshot and be nonobvious from an AWR report
• ASH can be used for examining:
– Targeted time range
– A specific
• session
• service
• wait_class
• client_id
• SQL_ID
– A targeted time range in combination with the above
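A targeted ASH query along these lines might look like the following sketch (the session id and time window are supplied at runtime):

```sql
-- Top wait events for one session over the last 10 minutes
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  session_id = &sid
AND    sample_time > SYSDATE - 10/1440
GROUP  BY event
ORDER  BY samples DESC;
```

Note that rows with a NULL event indicate the session was on CPU rather than waiting.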
13. Can a table have multiple unique, foreign, and/or primary keys?
A table can have multiple unique and foreign keys. However, a table can have only one primary key.
14. Can a unique key have NULL values? Can a primary key have NULL values?
Unique key columns are allowed to hold NULL values. The values in a primary key column, however, can never be NULL.
15. Can a foreign key reference a non-primary key?
Yes, a foreign key can actually reference a key that is not the primary key of a table. But, a foreign key must reference a unique key.
16. Can a foreign key contain null values?
Yes, a foreign key can hold NULL values. Because foreign keys can reference unique, non-primary keys – which can hold NULL values – this means that foreign keys can themselves hold NULL values as well.
17. When is Index Rebuild required ?
Every so often we need to rebuild indexes in Oracle, because indexes become fragmented over time. This causes their performance, and by extension that of your database queries, to degrade. Hence, rebuilding indexes every now and again can be quite beneficial.
Having said that, indexes should not be rebuilt too often, because it is a resource-intensive task. Worse, while an index is being rebuilt, locks are placed on the index, preventing anyone from accessing it while the rebuild occurs. Any queries trying to access this index in order to return the required results will be temporarily blocked until the rebuild is complete.
In Oracle, you can get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE command.
SQL> ANALYZE INDEX IDX_INFRA_TEST VALIDATE STRUCTURE;
Statement processed.
SQL> SELECT name, height, lf_rows, lf_blks, del_lf_rows FROM INDEX_STATS;
NAME            HEIGHT  LF_ROWS  LF_BLKS  DEL_LF_ROWS
--------------  ------  -------  -------  -----------
IDX_INFRA_TEST       2        1        3            6
1 row selected.
There are two rules of thumb to help determine if the index needs to be rebuilt:
- If the index has height greater than four, rebuild the index.
- The deleted leaf rows should be less than 20%.
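If the statistics indicate a rebuild is warranted, it can be done online so that DML is not blocked for the duration (ONLINE rebuild is an Enterprise Edition feature; the index name is the one from the example above):

```sql
-- Rebuild the fragmented index without locking out DML
ALTER INDEX idx_infra_test REBUILD ONLINE;
```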
18. When does checkpoint occur in oracle?
The checkpoint process is responsible for updating file headers in the database datafiles. A checkpoint occurs when Oracle moves new or updated blocks (called dirty blocks) from the RAM buffer cache to the database datafiles.
Oracle checkpoints keeps the database buffer cache and the database datafiles synchronized. This synchronization is part of the mechanism that Oracle uses to ensure that your database can always be recovered.
A checkpoint occurs automatically:
1. At a log switch.
2. As driven by the FAST_START_MTTR_TARGET=<number of seconds> parameter.
3. When a datafile is taken offline.
4. On a consistent database shutdown.
5. On an ALTER SYSTEM CHECKPOINT statement.
6. On an ALTER DATABASE BEGIN BACKUP statement.
19. When does dbwr write to the data file?
DBWn writes when:
1)checkpoint occurs
2)dirty buffers reach threshold
3)there are no free buffers
4)timeout occurs
5)RAC ping request is made
6)when a tablespace is taken offline, made read-only, dropped, or truncated, and when BEGIN BACKUP is issued
20. When does LGWR write to the redo log files?
• When a user process commits a transaction
• When the redo log buffer is one-third full
• Before a DBWn process writes modified buffers to disk (if necessary)
• Every three seconds
21. What is deadlock and what will u do?
Deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process.
When Oracle encounters a deadlock, it automatically resolves it by rolling back one of the statements involved, releasing one set of locks. The session whose statement is rolled back receives an ORA-00060 error; the other transaction goes on to complete successfully.
When a deadlock is detected, Oracle also writes a message to the alert log and generates a trace file containing the deadlock graph, which can be analyzed to find the sessions and SQL involved.
22. Where is the cssd and crsd logfiles location in oracle rac?
From Oracle 12c onwards, the CRSD and CSSD logfiles can be found at:
$ORACLE_BASE/diag/crs/host_name/crs/trace
This directory holds the CRS alert log as well as the trace files for cssd.
Up to and including 11g, the locations are as follows:
Cluster Ready Services Daemon (crsd) Log Files:
$CRS_HOME/log/hostname/crsd
Cluster Synchronization Services (CSS):
$CRS_HOME/log/hostname/cssd
Event Manager (EVM) information generated by evmd:
$CRS_HOME/log/hostname/evmd
Oracle RAC RACG:
$CRS_HOME/log/hostname/racg
$ORACLE_HOME/log/hostname/racg
23. What is SID and service name?
A SID is a unique name that identifies the database instance, whereas a service name is the database TNS alias used when users remotely connect to the database. The service name is recorded in the tnsnames.ora file on the clients; it can be the same as the SID, or it can be given any other name.
A service name is a feature in which a database can register itself with the listener. If a database is registered with a listener using a service name then the service name can be used as a parameter in the tnsnames.ora file. Otherwise a SID can be used in the tnsnames.ora file.
With Oracle Real Application Clusters (RAC) there will be different service names for each database instance.
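The difference shows up in the tnsnames.ora entries; the host and names below are illustrative:

```
ORCL_BY_SERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl.example.com))
  )

ORCL_BY_SID =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SID = orcl))
  )
```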
24. What does a voting disk do?
The voting disk file is a file on shared cluster storage or a shared raw device. The voting disk is akin to a quorum disk and helps avoid split-brain syndrome. Older RAC releases called it a quorum disk; today it is called a voting disk. In essence, the voting disks determine which RAC nodes are members of the cluster. Nodes (instances) can be "evicted", and there is always one "master" node that controls the other nodes.
– The voting disk is used to arbitrate in cases where the old master has crashed.
– You must have an odd number of voting disks. For high availability, Oracle recommends a minimum of three.
– If a node cannot access the minimum required number of voting disks, it is evicted (removed) from the cluster.
– All nodes in the RAC cluster register their heartbeat information on the voting disks. The RAC heartbeat is a polling mechanism sent over the cluster interconnect to ensure that all RAC nodes are available.
– Since ASM was introduced, they are also called "voting files".
25. What is smon and pmon?
SMON (System Monitor) performs recovery after instance failure, monitors temporary segments and extents, cleans up temporary segments, and coalesces free space. It is a mandatory process and starts by default.
PMON (Process Monitor) is an Oracle background process created when you start a database instance. PMON frees up resources if a user process fails (e.g. it releases database locks).
PMON normally wakes up every 3 seconds to perform its housekeeping activities. PMON must always be running for an instance; if it dies, the instance terminates.
26. What are bind variables?
With bind variables, Oracle can parse a SQL statement once and cache it in the shared SQL area, reusing the cached cursor on subsequent executions. This avoids a hard parse each time, which saves the locking and latching resources Oracle uses to check object existence and so on.
If you create a bind variable in SQL*Plus, you can use the variable as you would a declared variable in your PL/SQL subprogram and then access the variable from SQL*Plus
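A minimal SQL*Plus sketch (the EMP table and DEPTNO column are the classic SCOTT schema example, used here only for illustration):

```sql
VARIABLE deptno NUMBER
EXEC :deptno := 10

-- Repeated executions with different :deptno values reuse the same cached cursor
SELECT ename FROM emp WHERE deptno = :deptno;
```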
27. What is a system change number (SCN)?
The SCN (System Change Number) is a value that is incremented whenever database changes are made. It acts as Oracle's internal logical clock: Oracle uses SCNs to order changes, provide read consistency, and drive recovery.
28. What are logfile states?
CURRENT
ACTIVE
INACTIVE
UNUSED
"CURRENT" means that redo records are currently being written to that group; it remains current until a log switch occurs. Only one redo group can be current at a time.
A redo group is in the "ACTIVE" state if it still contains redo belonging to a dirty buffer. Log files record changes made to data blocks, and those blocks are modified in the buffer cache (dirty blocks); until the dirty blocks are written from RAM to permanent media, the redo group is still needed for recovery.
A redo log group that contains no redo records belonging to a dirty buffer is in the "INACTIVE" state. Inactive redo logs can be overwritten.
One more state is "UNUSED": when you create a new redo log group, its log file is initially empty, and at that point it is unused. Later it can move into any of the states mentioned above.
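The current state of each group can be checked in V$LOG; for example:

```sql
-- One row per redo log group, with its state and archive status
SELECT group#, thread#, sequence#, status, archived
FROM   v$log
ORDER  BY group#;
```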
29. Why do you run orainstRoot.sh and root.sh once you finalize the installation?
orainstRoot.sh needs to be run to set the permissions on the Oracle inventory to 770 and change its group ownership to dba.
root.sh (run from the ORACLE_HOME location) needs to be run to create the oratab entry in /etc/oratab (or /var/opt/oracle/oratab on Solaris) and to copy the dbhome, oraenv, and coraenv scripts to /usr/local/bin.
30. If any one of the background processes is killed/not running, what will happen to the instance ?
Background processes are started automatically when the instance is started.
Mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and will be invoked only if the corresponding feature is activated.
If any one of these 6 mandatory background processes is killed/not running, the instance will be aborted.
Any issues related to background processes should be monitored and analyzed from the trace files generated and the alert log.
31. What is RESULT Cache?
A result cache is an area of memory, either in the Shared Global Area (SGA) or client application memory, that stores the results of a database query or query block for reuse. The cached rows are shared across SQL statements and sessions unless they become stale.
32. What are dirty buffers?
A dirty buffer is a buffer whose contents have been modified. Dirty buffers are freed for reuse when DBWR has written the blocks to disk.
33. What is soft parse?
A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.
This metric represents the percentage of parse requests where the cursor was already in the cursor cache compared to the number of total parses.
34. If soft parse is less, is that good or bad? How will u troubleshoot further?
It is obviously bad. Less ratio of soft parse means the database is performing hard parsing.
This means the database is consuming IO. You need to check the queries responsible for hard parsing.
To find SQL with excessive hard parses, look for SQL statements that have executed only once; you can count the non-reentrant SQL in your library cache as follows:
select count(*) from v$sql where executions=1;
If there appears to be excessive time spent parsing, evaluate SQL statements to determine those that can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The Top Sessions page sorted by Hard Parses will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.
By showing you which sessions are incurring the most hard parses, this page may lead you to the application or programs that are the best candidates for SQL rewrites.
The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.
To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
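If the application cannot be changed to use bind variables, one stopgap (a sketch, not a blanket recommendation) is to let Oracle replace literals with system-generated bind variables via the CURSOR_SHARING parameter; test this carefully, as it can change execution plans:

```sql
-- Check overall parse activity first (statistic names from V$SYSSTAT)
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');

-- Force literal replacement so statements differing only in literals share cursors
ALTER SYSTEM SET cursor_sharing = FORCE SCOPE = BOTH;
```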
35. What is a cursor?
A database cursor is a mechanism that enables traversal over the records in a database.
When Oracle processes a SQL statement, it allocates a context area in memory. A cursor is a pointer to this context area; PL/SQL controls the context area through the cursor. A cursor holds the rows (one or more) returned by a SQL statement, and the set of rows the cursor holds is referred to as the active set.
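A minimal PL/SQL sketch of an explicit cursor (the table and column names are hypothetical):

```sql
DECLARE
  CURSOR c_emp IS
    SELECT ename, sal FROM emp WHERE deptno = 10;   -- hypothetical table
  v_ename emp.ename%TYPE;
  v_sal   emp.sal%TYPE;
BEGIN
  OPEN c_emp;                          -- allocate the context area
  LOOP
    FETCH c_emp INTO v_ename, v_sal;   -- fetch one row of the active set
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_ename || ': ' || v_sal);
  END LOOP;
  CLOSE c_emp;                         -- release the context area
END;
/
```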
36. Difference between AWR, ASH and ADDM reports in Oracle
AWR gathers, processes and maintains performance statistics used for problem detection and self-tuning of the database.
ADDM analyzes the AWR data on a regular basis to give you an overview of the root cause of any problem affecting your database's performance. It provides recommendations for rectifying the problems identified and lists the areas that have no issues. The problem areas ADDM reports on include:
- CPU bottlenecks
- Undersized memory structures
- I/O capacity issues
- High load SQL statements
- RAC specific issues
- Database configuration issues
ADDM also provides recommendations on hardware changes, database configuration and schema changes.
ASH samples active session activity from in-memory performance tables; these samples are used to track session activity and simplify performance tuning.
ASH reports give the following information:
- Top User Events (frequent wait events)
- Details to the wait events
- Top Queries
- Top Sessions
- Top Blocking Sessions
- Top DB Object.
- Activity Over Time
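All three reports can be generated from SQL*Plus with the standard scripts shipped under $ORACLE_HOME/rdbms/admin (you are prompted for the snapshot range and report format):

```sql
-- AWR report
@?/rdbms/admin/awrrpt.sql
-- ADDM report
@?/rdbms/admin/addmrpt.sql
-- ASH report
@?/rdbms/admin/ashrpt.sql
```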
37. Difference between Differential and Cumulative Backups
Differential Backup : RMAN looks for the most recent level 1 or level 0 backup, and backs up only the data blocks changed since that backup. Differential backups are faster to take because fewer changes are stored, but they take longer at recovery time.
Cumulative Backup : RMAN backs up all data blocks changed since the most recent level 0 backup. The main advantage of cumulative over differential backups is faster recovery, at the expense of a longer daily backup window and more disk usage.
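A sketch of the corresponding RMAN commands (a level 0 baseline must exist before level 1 backups are useful):

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              -- baseline
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              -- differential (the default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   -- cumulative
```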
38. What are the different patches available in Oracle?
Interim patch
An interim patch provides a single bug fix, a collection of bug fixes, or a customer-specific security fix. Interim patches generally address specific bugs for a particular customer and should not be applied unless Oracle Support instructs you to do so.
Diagnostic patch
A diagnostic patch is designed to help diagnose or verify a fix or collection of bug fixes.
Bundle patch
A bundle patch is a cumulative collection of fixes for a specific product or component.
Critical patch update (CPU)
These are cumulative patches consisting of security fixes. The new name for critical patch updates is security patch update.
Critical patch updates, or CPUs, are released on a quarterly basis, usually in the third week of January, April, July and October.
A CPU or SPU cannot be applied to a database once a PSU has been applied, until the database is upgraded to a new base release.
Security patch update (SPU)
SPU is the new name for CPU. The program that delivers SPUs is still called the Critical Patch Update program.
Patch set update (PSU)
These are quarterly cumulative patches that contain the security fixes of the CPU plus additional fixes; sometimes a PSU includes feature enhancements as well. The patch documentation tells you which CPU is included in the PSU.
Patch Set Updates are released on a quarterly basis, usually in the third week of January, April, July and October.
Once a PSU is applied, only PSUs can be applied in future quarters until the database is upgraded to a new base release.
39. PSU, SPU or CPU? Which one should I apply to my database?
This depends on the kind of fixes you are looking for. If you want only the security fixes, apply the CPU or SPU. If you want the security fixes plus additional fixes and minor enhancements, apply the Patch Set Update.
40. How do you recover from Block Corruption?
a. Start SQL*Plus and connect to the target database.
b. Query V$DATABASE_BLOCK_CORRUPTION to determine whether corrupt blocks exist. For example, execute the following statement:
SQL> SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;
c. Start RMAN and connect to the target database.
d. Recover all blocks marked corrupt in V$DATABASE_BLOCK_CORRUPTION.
The following command repairs all physically corrupted blocks recorded in the view:
RMAN> RECOVER CORRUPTION LIST;
After the blocks are recovered, the database removes them from V$DATABASE_BLOCK_CORRUPTION.
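Individual corrupt blocks can also be repaired directly if you know the file and block numbers (the numbers below are hypothetical):

```sql
RMAN> RECOVER DATAFILE 8 BLOCK 13;
RMAN> RECOVER DATAFILE 8 BLOCK 13 DATAFILE 2 BLOCK 19;   -- several blocks at once
```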
41. What is DB Time in AWR?
DB Time = the amount of time sessions spent executing on CPU + the amount of time sessions spent in non-idle waits.
42. What is User Response Time?
The total time taken to get a response back after the user clicks. It includes the application and database response time, the network time, and the browser rendering time from the user's perspective.
43. What are the Wait Metrics
Information about wait events is displayed in three dynamic performance views:
V$SESSION_WAIT displays the events for which sessions have just completed waiting or are currently waiting.
V$SYSTEM_EVENT displays the total number of times all the sessions have waited for the events in that view.
V$SESSION_EVENT is similar to V$SYSTEM_EVENT, but displays all waits for each session.
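For example, to see where a given session has spent its wait time (the SID here is hypothetical):

```sql
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = 123              -- hypothetical session id
ORDER  BY time_waited DESC;
```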
44. What is ROW Migration and ROW Chaining?
Row Migration :
It happens during an UPDATE.
Row migration occurs when an update increases a row's size and the block does not have enough free space (PCTFREE) to hold it. The entire row is moved (migrated) to another block, leaving behind a pointer in the original block. Reading migrated rows therefore costs extra block I/O, and insufficient PCTFREE makes migration more likely.
The statistic "table fetch continued row" in the AWR report indicates whether row migration (or chaining) is taking place.
Row Chaining :
Row chaining happens at INSERT time, when a single row is larger than the database block size. The row is then stored across two or more blocks. To avoid this:
Set block size appropriately
Design to have small row size
45. What is PCTFree and PCTUsed?
PCTFREE – states how much free space inside the block is reserved for future updates of rows already present in the block.
For example, if PCTFREE = 50, then when free space in the block drops to 50% or less, the block is removed from the freelist and no more inserts can happen into it; that 50% is reserved for future updates.
PCTUSED – determines when a block that has been removed from the freelist can be brought back onto it.
For example, if PCTUSED = 30, then when the used space in the block falls to 30%, the block is added back to the freelist, and new inserts can happen into it.
Freelist – contains the list of blocks into which inserts are allowed.
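These settings are specified in the segment's physical attributes clause. The sketch below uses a hypothetical table and illustrative values; note that PCTUSED is honored only under manual segment space management:

```sql
CREATE TABLE orders_demo (      -- hypothetical table
  order_id   NUMBER,
  order_data VARCHAR2(4000)
)
PCTFREE 20    -- reserve 20% of each block for future updates
PCTUSED 40;   -- return the block to the freelist when usage falls below 40%
```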
46. What is Index Organized Table?
An IOT stores the table itself inside a B-tree index structure.
An ordinary index is ordered by its key columns; an IOT is ordered by the table's primary key, with the full rows stored in the index leaves.
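A minimal IOT sketch (the table name is hypothetical); ORGANIZATION INDEX requires a primary key:

```sql
CREATE TABLE iot_demo (
  id    NUMBER PRIMARY KEY,   -- the IOT is ordered by this key
  value VARCHAR2(100)
)
ORGANIZATION INDEX;
```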
47. What will happen if undo is being used by one application, another heavy query is fired, and you don't have sufficient undo?
If the undo tablespace is full, unexpired undo may be overwritten and the long-running query fails with the ORA-01555 "snapshot too old" error. This can be avoided by enabling RETENTION GUARANTEE with an ALTER TABLESPACE statement. Use this option with caution, because it can cause DML operations to fail if the undo tablespace is not big enough. With proper settings, however, long-running queries can complete without risk of ORA-01555, and you can guarantee a time window in which the execution of Flashback features will succeed.
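The guarantee is set at the undo tablespace level (the tablespace name below is assumed):

```sql
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
-- revert with:
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
```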
48. How memory parameters are defined in database?
The goal of memory management is to reduce the physical I/O overhead as much as possible, either by making it more likely that the required data is in memory, or by making the process of retrieving the required data more efficient. To achieve this goal, proper sizing and effective use of Oracle Database memory caches is essential.
Methods to manage database memory :
Automatic Memory Management
Automatic memory management enables Oracle Database to manage and tune the database memory automatically. In this mode, management of the system global area (SGA) and instance program global area (PGA) memory is handled completely by Oracle Database. This method is the most automated, and Oracle strongly recommends considering it before setting any memory pool sizes manually.
With larger databases, however, AMM can run into "Out of Memory" errors, so plan carefully before moving to AMM.
MEMORY_TARGET and MEMORY_MAX_TARGET are the initialization parameters that need to be set for this.
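A sketch of enabling AMM (the sizes are only illustrative; MEMORY_MAX_TARGET is a static parameter, so a restart is required):

```sql
ALTER SYSTEM SET memory_max_target = 4G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 3G SCOPE = SPFILE;
-- zero the manually sized targets so AMM can manage them
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
```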
Automatic Shared Memory Management
If automatic memory management is disabled, then Oracle Database uses automatic shared memory management to manage SGA memory. In this mode, Oracle Database automatically distributes memory to individual SGA components based on a target size that you set for the total SGA memory.
SGA_TARGET and SGA_MAX_SIZE are the initialization parameters to be set for this.
Manual Shared Memory Management
If both automatic memory management and automatic shared memory management are disabled, then you must manage SGA memory manually by sizing the individual memory pools in the SGA. Although this mode enables you to exercise complete control over how SGA memory is distributed, it requires the most effort because the SGA components must be manually tuned on an ongoing basis.
DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, STREAMS_POOL_SIZE and INMEMORY_SIZE are the initialization parameters that need to be set for this.
Automatic PGA Memory Management
If automatic memory management is disabled, then Oracle Database uses automatic PGA memory management to manage PGA memory. In this mode, Oracle Database automatically distributes memory to work areas in the instance PGA based on a target size that you set for the total PGA memory.
PGA_AGGREGATE_TARGET and PGA_AGGREGATE_LIMIT are the initialization parameters that need to be set for this.
Manual PGA Memory Management
If both automatic memory management and automatic PGA memory management are disabled, then you must manage PGA memory manually by adjusting the portion of PGA memory dedicated to each work area. This method can be very difficult because the workload is always changing and is not recommended by Oracle. Although manual PGA memory management is supported by Oracle Database, Oracle strongly recommends using automatic memory management or automatic PGA memory management instead.
49. How many IPs are required in RAC?
SCAN IPs : 3
Private IPs : 1 per node
Public IPs : 1 per node
Virtual IPs : 1 per node
50. For a 2 hour AWR report, will the DB time be more or less?
It can be either. If DB Time is more than the available CPU time, the instance is CPU bound.
Suppose you have a DB server with 4 CPUs. In 2 hours (120 mins), the available CPU time is 4 * 120 = 480 mins.
If the reported DB Time is more than 480 mins, you can say the instance is CPU bound.