Thursday, May 19, 2016

Performance Tuning


An application user is complaining that the database is slow. How would you find the performance issue of SQL queries?
High performance is a common expectation of end users. In most cases the database itself is neither slow nor fast; rather, a session connected to the database slows down when it hits an unexpected wait. To solve the issue you need to find those waits. To know exactly what a session is doing, join V$SESSION with V$SESSION_WAIT.
SELECT NVL(s.username,'(oracle)') AS username, s.sid, s.serial#, sw.event, sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.sid = sw.sid
AND s.username = '&username'
ORDER BY sw.seconds_in_wait DESC;
1. Check the events the session is waiting on.
2. Try to find out the object locks held or requested by that particular session.
3. Locking is not the only cause of poor performance; disk I/O contention is another. When a session retrieves data from the datafiles on disk into the buffer cache, it has to wait until the disk sends the data. The wait shows up for the session as "db file sequential read" (for an index scan) or "db file scattered read" (for a full table scan). When you see these events, you know the session is waiting for I/O from disk to complete. To improve session performance you have to reduce that waiting period. The exact step depends on the specific situation, but the first technique, reducing the number of blocks retrieved by the SQL statement, almost always works: examine the SQL statement to see if it is doing a full table scan when it should be using an index, if it is using the wrong index, or if it can be rewritten to retrieve less data.
4. Run SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql), providing the SQL_ID as input, to generate findings and recommendations.
SQL Tuning Advisor seems to be doing logical optimization mainly by checking your SQL structure and statistics.
SQL Tuning Advisor suggests indexes that might be very useful.
SQL Tuning Advisor suggests query rewrites.
SQL Tuning Advisor suggests SQL profile.
More:
1. Run the top command in Linux to check CPU usage.
2. Run vmstat, sar, or prstat to get more information on CPU and memory usage and possible blocking.
3. Enable SQL trace before running your queries, then process the trace file with tkprof to create a readable output file (see the sketch after this list).
According to the execution plan, check the elapsed time for each query, then tune them accordingly.
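A minimal sketch of enabling a 10046 trace for one session and formatting it with tkprof; the trace file name, directory, and identifier below are examples, not fixed values:
SQL> ALTER SESSION SET tracefile_identifier = 'slow_query';
SQL> ALTER SESSION SET events '10046 trace name context forever, level 12';
-- run the slow statement(s) here
SQL> ALTER SESSION SET events '10046 trace name context off';
$ tkprof /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_1234_slow_query.trc /tmp/slow_query.out sort=exeela,fchela
Then check the elapsed time and row source plan of each statement in the tkprof output and tune accordingly.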
What is the use of the iostat/vmstat/netstat commands in Linux?
iostat – reports on terminal, disk, and tape I/O activity.
vmstat – reports on virtual memory statistics for processes, disk, tape, and CPU activity.
netstat – reports on the contents of network data structures.
If you are getting high "buffer busy waits", how can you find the reason behind them?
A buffer busy wait means that a session is waiting for a block that is in use: the block is either being read into the buffer cache by another session or is busy in the cache while another session modifies it. The wait can be on an undo block, a data block, or a segment header.
Run the first query below to find the P1, P2, and P3 values of a session causing buffer busy waits, then run the second query, substituting those P1, P2, and P3 values, to identify the segment involved.
SQL> Select p1 "File #", p2 "Block #", p3 "Reason Code"
     from v$session_wait
     where event = 'buffer busy waits';
SQL> Select owner, segment_name, segment_type
     from dba_extents
     where file_id = &P1 and &P2 between block_id and block_id + blocks - 1;
What to Look for in AWR Report and STATSPACK Report?
Many DBAs already know how to use STATSPACK but are not always sure what to check regularly.
Remember to separate OLTP and Batch activity when you run STATSPACK, since they usually
generate different types of waits. The SQL script “spauto.sql” can be used to run STATSPACK
every hour on the hour. See the script in $ORACLE_HOME/rdbms/admin/spauto.sql for more
information (note that JOB_QUEUE_PROCESSES must be set > 0). Since every system is different,
this is only a general list of things you should regularly check in your STATSPACK output:
• Top 5 wait events (timed events)
• Load profile
• Instance efficiency hit ratios
• Wait events
• Latch waits
• Top SQL
• Instance activity
• File I/O and segment statistics
• Memory allocation
• Buffer waits
What is the difference between DB file sequential read and DB File Scattered Read?
DB file sequential read is associated with index reads, whereas db file scattered read occurs with full table scans.
A db file sequential read reads a block into a contiguous area of memory, while a db file scattered read reads multiple blocks and scatters them into the buffer cache.
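A minimal sketch of a query to see which sessions are accumulating time on these two events (based on V$SESSION_EVENT):
SQL> SELECT se.sid, s.username, se.event, se.total_waits, se.time_waited
     FROM v$session_event se, v$session s
     WHERE se.sid = s.sid
     AND se.event IN ('db file sequential read','db file scattered read')
     ORDER BY se.time_waited DESC;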
Which factors are to be considered for creating an index on a table? How do you select a column for an index?
Whether to create an index on a table depends on the size of the table and the volume of data. If the table is large and queries or reports select only a small subset of its rows, an index is worth creating. The basic criteria for choosing a column to index are its cardinality (selectivity) and how frequently it appears in the WHERE clause of SELECT statements. Business rules also force index creation: defining a primary key or unique key automatically creates a unique index.
It is important to note that creating too many indexes affects the performance of DML on the table, because a single transaction must then maintain the table and all of its index segments simultaneously.
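For illustration only (the table and column names are hypothetical), a large table whose queries filter on a selective column is a good candidate:
-- ORDERS is large and most reports filter on CUSTOMER_ID, so index that column
SQL> CREATE INDEX orders_cust_idx ON orders (customer_id);
-- confirm the optimizer picks it up for a typical query
SQL> EXPLAIN PLAN FOR SELECT * FROM orders WHERE customer_id = 1001;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);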
Is creating index online possible?
YES. You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index building is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
How to recover a password in Oracle 10g?
You can query the SYS.USER_HISTORY$ table; the password history is stored in this table.
How can you track the password change for a user in oracle?
Oracle only tracks the date on which the password will expire, based on when it was last changed. So by listing DBA_USERS.EXPIRY_DATE and subtracting PASSWORD_LIFE_TIME you can determine when the password was last changed. You can also check the last password change time directly from the PTIME column in the SYS.USER$ table (on which the DBA_USERS view is based). If PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX are set in a profile assigned to the user account, you can also reference the dictionary table USER_HISTORY$ to see when the password was changed for that account.
SELECT user$.NAME, user$.PASSWORD, user$.ptime, user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;
What is Secure External password Store (SEPS)?
Through the use of SEPS you can store password credentials for connecting to databases in a client-side Oracle wallet; the wallet stores the signing credentials. This feature was introduced in Oracle 10g. Application code, scheduled jobs, and scripts no longer need embedded usernames and passwords. This reduces risk because the passwords are no longer exposed, and password management policies are more easily enforced without changing application code whenever the username or password changes.
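A minimal sketch of setting up SEPS with mkstore; the wallet directory, TNS alias, and credentials are examples only:
$ mkstore -wrl /u01/app/oracle/wallet -create
$ mkstore -wrl /u01/app/oracle/wallet -createCredential orcl_payroll app_user app_password
Then point sqlnet.ora at the wallet (WALLET_LOCATION with DIRECTORY=/u01/app/oracle/wallet and SQLNET.WALLET_OVERRIDE=TRUE), after which scripts can connect without an embedded password:
$ sqlplus /@orcl_payroll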
Why do we need the CASCADE option with the DROP USER command when dropping a user, and why does DROP USER fail without it?
If the user owns any objects, then yes, you cannot drop that user without the CASCADE option. DROP USER with the CASCADE option drops the user along with all of its associated objects. Remember that this is a DDL command, so it cannot be rolled back after execution.
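A quick illustration (the user name SCOTT is just an example):
SQL> SELECT object_type, COUNT(*) FROM dba_objects WHERE owner = 'SCOTT' GROUP BY object_type;
SQL> DROP USER scott;            -- fails with ORA-01922 if SCOTT owns objects
SQL> DROP USER scott CASCADE;    -- drops the user and all of its objects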
What is the difference between Redo,Rollback and Undo?
I find there is always some confusion when talking about Redo, Rollback and Undo. They all sound like pretty much the same thing or at least pretty close.
Redo: Every Oracle database has a set of (two or more) redo log files. The redo log records all changes made to data, including both uncommitted and committed changes. In addition to the online redo logs Oracle also stores archive redo logs. All redo logs are used in recovery situations.
Rollback: More specifically rollback segments. Rollback segments store the data as it was before changes were made. This is in contrast to the redo log which is a record of the insert/update/deletes.
Undo: Rollback segments. Undo and rollback segments are really one and the same. Undo data is stored in the undo tablespace. Undo is helpful in building a read-consistent view of data.
You have more than 3 instances running on a Linux server. How can you determine which shared memory segments and semaphores are associated with which instance?
oradebug is an undocumented utility supplied by Oracle. The oradebug help command lists the available commands.
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
Why drop table is not going into Recycle bin?
If you drop a table as the SYS user, the object will not go to the recycle bin, because there is no recycle bin for the SYSTEM tablespace, even if the recyclebin parameter is enabled.
SQL> select * from v$parameter where name = 'recyclebin';
SQL> show parameter recyclebin;
Temp Tablespace is 100% FULL and there is no space available to add datafiles to increase temp tablespace. What can you do in that case to free up TEMP tablespace?
Try closing some of the idle sessions connected to the database; this will help free some TEMP space. Otherwise you can also use 'ALTER TABLESPACE temp DEFAULT STORAGE (PCTINCREASE 1)' followed by 'ALTER TABLESPACE temp DEFAULT STORAGE (PCTINCREASE 0)'.
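On 11g and later you can also see who is consuming TEMP and shrink the temporary tablespace directly; a sketch, assuming the tablespace is named TEMP:
SQL> SELECT s.sid, s.username, u.tablespace, u.blocks
     FROM v$tempseg_usage u, v$session s
     WHERE u.session_addr = s.saddr
     ORDER BY u.blocks DESC;
SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 1G;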
What is Row Chaining and Row Migration?
Row Migration:
A row migrates when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently). A migration means that the entire row will move and we just leave behind the "forwarding address". So, the original block just has the rowid of the new block and the entire row is moved.
Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB blocksize for your database, and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces.
Some conditions that will cause row chaining are: tables whose row size exceeds the block size; tables with LONG and LONG RAW columns, which are prone to having chained rows; and tables with more than 255 columns, as Oracle breaks such wide tables up into pieces.
So, instead of just having a forwarding address on one block and the data on another we have data on two or more blocks.
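One common way to detect chained or migrated rows is ANALYZE ... LIST CHAINED ROWS (the table name EMP is an example; the CHAINED_ROWS table comes from $ORACLE_HOME/rdbms/admin/utlchain.sql):
SQL> @?/rdbms/admin/utlchain.sql
SQL> ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;
SQL> SELECT COUNT(*) FROM chained_rows WHERE table_name = 'EMP';
-- CHAIN_CNT in DBA_TABLES is populated by ANALYZE TABLE ... COMPUTE STATISTICS
SQL> SELECT table_name, chain_cnt FROM dba_tables WHERE table_name = 'EMP';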
How to find out background processes?
SQL> select SID, PROGRAM from v$session where TYPE='BACKGROUND';
SQL> select name, description from v$bgprocess;
How to find out background processes from the OS:
$ ps -ef | grep ora_ | grep <SID>
To find and delete larger and older files in Linux
To find files larger than 5MB:
find . -size +5000k -exec ls -ltr {} \;
To remove files larger than 5MB:
find . -size +5000k -exec rm -rf {} \;
To find files older than 30 days:
find . -mtime +30 -exec ls -ltr {} \;
To remove files older than 30 days:
find . -mtime +30 -exec rm -rf {} \;

Saturday, May 7, 2016

Oracle RMAN

##########################################################

What is RMAN?
Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery activities.

What is the difference between using recovery catalog and control file?
When a new incarnation happens, the old backup information in the control file will be lost; it is preserved in the recovery catalog.
In the recovery catalog we can store RMAN scripts.
The recovery catalog is central and can hold information about many databases.

Can we use the same target database as the catalog?
No. The recovery catalog should not reside in the target database (the database to be backed up), because then the database could not be recovered in the mounted state.

How do you know that how much RMAN task has been completed?
By querying v$rman_status or v$session_longops
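For example, a sketch of a V$SESSION_LONGOPS query showing the percentage completed for running RMAN operations:
SQL> SELECT sid, serial#, opname, sofar, totalwork,
            ROUND(sofar/totalwork*100, 2) AS pct_done
     FROM v$session_longops
     WHERE opname LIKE 'RMAN%'
     AND totalwork > 0
     AND sofar <> totalwork;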

From where list & report commands will get input?
Both the commands command quering v$ and recovery catalog views. V$BACKUP_FILES or many of the recovery catalog views such asRC_DATAFILE_COPY or RC_ARCHIVED_LOG.

Command to delete archive logs older than 7days?
RMAN> delete archivelog all completed before 'sysdate-7';

How many times does oracle ask before dropping a catalog?
The default is two times: once for the actual command and once for confirmation.

How to view the current RMAN configuration defaults for the database?
RMAN> show all;

What is the use of crosscheck command in RMAN?
Crosscheck will be useful to check whether the catalog information is intact with OS level information. This command only updates repository records with the status of the backups.
e.g. If user removes archived logs from disk with an operating system command, the repository still indicates that the logs are on disk, when in fact they are not.

What are the differences between crosscheck and validate commands?
Validate command is to examine a backup set and report whether it can be restored. RMAN scans all of the backup pieces in the specified backup sets and looks at the checksum to verify that the contents are intact so that backup can be successfully restored if necessary.
Crosscheck command is to verify the status of backups and copies recorded in the RMAN repository against media such as disk or tape. The crosscheck command only processes files created on the same device type as the channel running crosscheck.
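Typical usage of both, as a sketch (backup set number 5 is just an example):
RMAN> VALIDATE BACKUPSET 5;
RMAN> RESTORE DATABASE VALIDATE;
RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK ARCHIVELOG ALL;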

Which one is good, differential (incremental) backup or cumulative (incremental) backup?
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
Cumulative backups are preferable to differential backups when recovery time is more important than disk space, because during recovery each differential backup must be applied in succession. Use cumulative incremental backups instead of differential, if enough disk space is available to store cumulative incremental backups.
This is command for taking Level 0 backup.
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

What is the difference between backup set and backup piece?
Backup set is logical and backup piece is physical.
RMAN command to back up for creating a standby database:
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;

You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is a week (or a day) old, and you don't have a backup of this (newly created) datafile. How do you restore/recover the file?
Create the datafile and recover it; the archived redo rebuilds its contents.
SQL> alter database create datafile '/u01/app/oracle/oradata/xyz.dbf' size 2G;
RMAN> recover datafile file_id;

What is obsolete backup & expired backup?
A status of “expired” means that the backup piece or backup set is not found in the backup destination.
A status of “obsolete” means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.
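The related housekeeping commands (a sketch):
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;
RMAN> CROSSCHECK BACKUP;
RMAN> DELETE EXPIRED BACKUP;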

What is the difference between hot backup & RMAN backup?
For a hot backup, we have to put the database in begin backup mode, then take the backup.
RMAN does not put the database in backup mode.

How to put manual/user-managed backup in RMAN (recovery catalog)?
By using catalog command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';

What are the Architectural components of RMAN?
  1. RMAN Executables
  2. Server processes
  3. Channels
  4. Target database
  5. Recovery catalog database (optional)
  6. Media management Layer (optional)
  7. Backups, backup sets and backup pieces
What are channels?
A channel is an RMAN server process started when there is a need to communicate with an I/O device, such as a disk or a tape. A channel is what reads and writes RMAN backup files. It is through the allocation of channels that you govern I/O characteristics (see the sketch after this list):
  • Type of I/O device being read or written to, either a disk or an sbt_tape
  • Number of processes simultaneously accessing an I/O device
  • Maximum size of files created on I/O devices
  • Maximum rate at which database files are read
  • Maximum number of files open at a time
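A sketch of configuring and allocating channels; the piece size, format path, and rate below are illustrative:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G FORMAT '/backup/%U';
RMAN> RUN {
2> ALLOCATE CHANNEL c1 DEVICE TYPE DISK RATE 100M;
3> BACKUP DATABASE;
4> RELEASE CHANNEL c1;
5> }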

Why is the catalog optional?
Because RMAN manages backup and recovery operations, it requires a place to store necessary information about the database. RMAN always stores this information in the target database control file. You can also store RMAN metadata in a recovery catalog schema contained in a separate database. The recovery catalog schema must be stored in a database other than the target database.

What is a Backup set?
A logical grouping of backup files — the backup pieces — that are created when you issue an RMAN backup command. A backup set is RMAN’s name for a collection of files associated with a backup. A backup set is composed of one or more backup pieces.

What are the benefits of using RMAN?
  1. Incremental backups that only copy data blocks that have changed since the last backup.
  2. Tablespaces are not put in backup mode, thus there is no extra redo log generation during online backups.
  3. Detection of corrupt blocks during backups.
  4. Parallelization of I/O operations.
  5. Automatic logging of all backup and recovery operations.
  6. Built-in reporting and listing commands.
What are the various reports available with RMAN?
RMAN> LIST BACKUP;
RMAN> LIST ARCHIVELOG ALL;

In catalog database, if some of the blocks are corrupted due to system crash, How will you recover?
using RMAN BLOCK RECOVER command

How do you enable the autobackup for the controlfile using RMAN?
Issue command at RMAN prompt.
RMAN> configure controlfile autobackup on;
Also we can configure controlfile backup format.
RMAN> configure controlfile autobackup format for device type disk to
2> '$HOME/BACKUP/RMAN/%F.bkp';

How do you identify all the target databases that are being backed up with RMAN?
There is no view that directly identifies whether a target database is being backed up. The only option is to connect to each target database and run LIST BACKUP; this shows the backup information with dates and times.

How do you identify the block corruption in RMAN database? How do you fix it?
Using the V$DATABASE_BLOCK_CORRUPTION view you can find which blocks are corrupted:
SQL> select file#, block# from v$database_block_corruption;
FILE#  BLOCK#
    2     507
Here block 507 of datafile 2 is corrupted. Connect to RMAN and recover it:
RMAN> blockrecover datafile <fileid> block <blockid>;
RMAN> blockrecover datafile 2 block 507;
To recover every block currently listed in V$DATABASE_BLOCK_CORRUPTION in one step:
RMAN> blockrecover corruption list;

How do you clone the database using RMAN software? Give brief steps? When do you use crosscheck command?
Crosscheck is used to check whether backup pieces, proxy copies, or disk copies recorded in the repository still exist on disk or tape.
Two commands available in RMAN to clone database:
1) Duplicate
2) Restore.

List some of the RMAN catalog view names which contain the catalog information?
RC_DATABASE_INCARNATION
RC_BACKUP_COPY_DETAILS
RC_BACKUP_CORRUPTION
RC_BACKUP_DATAFILE_SUMMARY

How do you install the RMAN recovery catalog?
Steps to be followed:
1) Create a connection string (TNS alias) for the catalog database.
2) In the catalog database, create a new user (or use an existing user) and grant that user the RECOVERY_CATALOG_OWNER role.
3) Log in to RMAN with the connection string (a fuller sketch follows):
a) export ORACLE_SID
b) rman target / catalog <catalog_user>@<connection_string>
4) RMAN> create catalog;
5) RMAN> register database;
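Put together, a sketch of the whole sequence (user, tablespace, and TNS alias names are examples):
SQL> CREATE USER rcat_owner IDENTIFIED BY rcat_pwd DEFAULT TABLESPACE rcat_ts QUOTA UNLIMITED ON rcat_ts;
SQL> GRANT recovery_catalog_owner TO rcat_owner;
$ rman target / catalog rcat_owner/rcat_pwd@catdb
RMAN> CREATE CATALOG;
RMAN> REGISTER DATABASE;
RMAN> REPORT SCHEMA;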

What is the difference between physical and logical backups?
In Oracle, a logical backup is one taken using either traditional export/import or the newer Data Pump, whereas a physical backup is taken by copying the physical operating-system files that make up the database.

What is RAID? What is RAID0? What is RAID1? What is RAID 10?
RAID: Redundant Array of Independent Disks.
RAID 0: Concatenation and striping.
RAID 1: Mirroring.
RAID 10: A stripe of mirrors, i.e. mirroring (RAID 1) combined with striping (RAID 0).

How to enable Fast Incremental Backup to backup only those data blocks that have changed?
SQL> ALTER DATABASE enable BLOCK CHANGE TRACKING;
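You can also name the tracking file explicitly and verify that tracking is enabled; the file path is an example:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/oradata/orcl/bct.chg';
SQL> SELECT status, filename FROM v$block_change_tracking;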

How do you set the flash recovery area?
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';

What is auxiliary channel in RMAN? When do you need this?
An auxiliary channel is a link to auxiliary instance. If you do not have automatic channels configured, then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN command.

How do you use the V$RECOVERY_FILE_DEST view to display information regarding the flash recovery area?
SQL> SELECT name, space_limit, space_used, space_reclaimable, number_of_files FROM v$recovery_file_dest;

How can you display warning messages?
SQL> SELECT object_type, message_type, message_level, reason, suggested_action FROM dba_outstanding_alerts;

How do you backup the entire database?
RMAN> BACKUP DATABASE;

How do you backup an individual tablespace?
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> BACKUP TABLESPACE system;

How do you backup datafiles and control files?
RMAN> BACKUP DATAFILE 3;
RMAN> BACKUP CURRENT CONTROLFILE;
Use a fast recovery without restoring all backups from their backup location to the location specified in the controlfile.
RMAN> SWITCH DATABASE TO COPY;


Interview Question List

General Questions

1) what is block, extent, segment?
2) what are the main background processes of Oracle?
3) What are main components of Oracle Database instance?
4) How the query will be executed in Oracle database?
5) what is the function of SMON and PMON?
6) what is SCN and what is use of SCN?
7) what is checkpoint and what is use of it?
8) what are redo log file states? (active, inactive, current and unused)
9) Why do you run orainstRoot.sh and root.sh once you finalize the installation?
10)What are new features of 11g?
11) What is SGA_TARGET and SGA_MAX_SIZE?
12) How oracle database recovery is done ie transaction recovery and rollback?
13) How to increase library pool size?
14) how to add redo log member ?     
15) How to add undo tablespace and remove old undo tablespace?
16) What is MEMORY_TARGET and MEMORY_MAX_TARGET ?
17) what is ora-1555 error? why it is raised and how we can overcome it?
18) what is  Quiesce and suspend state of database?
19) How user connection is established to database?   

RMAN

1) Difference between differential and cumulative incremental backups?
2) Rman convert datafile command?
3) catalog and auxiliary database
4) rman duplicate database command
5) obsolete, expired and crosscheck command
6) alter system archive log current vs alter system switch logfile;
7) how to catalog backup piece?
8) how do you know what all databases are registered in catalog database?
9) views to monitor rman task status?
10)How to recover corrupted block? v$block_corruption
11)How to create catalog database and register database in catalog?
12)what is block change tracking?
13)How to check the status of backup? whether it is running or hanged?

Dataguard

1) What is failover, switchover and how to perform it?
2) Data guard configuration parameters?
3) How to setup data-guard?
4) How to recover standby database, if one of archived log corrupted?
5) Dataguard background processes?
6) Dataguard protection modes?
7) Dataguard data dictionary views?
8) why force logging is needed?          
9) Difference between physical and logical standby?
10) DGMGRL utility?
11) What is active dataguard?
12) How to open standby database in readonly and managed recovery mode?
13) what is snapshot standby?

ASM

1) How to check status of re-balance activity? or stripping?
2) how to find mirrored blocks?
3) Coarse-Grained Striping vs Fine-Grained Striping
4) what are the background processes of ASM?
5) what is mirroring and striping?
6) How to create tablespace in ASM?
7) How to add space to datafile in ASM?
8) what is use of asm_power_limit and au_size?
9) Parameters specific to ASM? or Parameters which are different than database?
10) Advantages of ASM?
11) How to copy asm files to normal disk?
12) How to create diskgroups?
13) Upgrading/patching ASM?
14) adding disk to diskgroup?
15) what is ACFS?
16) Important ASM data dictionary views?
17) ACFS command line tools or commands?

RAC

1) what is voting disk and OCR?
2) why odd number of voting disks?
3) can we store voting disk and OCR in ASM?-- Yes since 11gR2. Also we can access OCR file even if cluster services are down.
4) How to backup voting disk and OCR?
5) Background processes of RAC?
6) How cache fusion work? 
7) How SCAN(Single Client Access Name) listener work?
8) what are other vendor to provide cluster solutions?
9) what is TAF (Transparent Application Failover)?
10) How to backup RAC database? RMAN backup?
11) How to upgrade/apply patch to RAC database?
12) Adding new node to RAC?
13) Removing node from RAC database?
14) OCR and Voting disk backup restore, check status?
15) srvctl and crsctl commands?
16) How to refresh single instance database from RAC database backup?
17) Advantages of RAC?     
18) What is VIP? and its use?                    


Oracle Data Guard

#####################################################

What is Dataguard?
Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as copies of the production database. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.

What is DG Broker?
DG Broker is the management and monitoring tool.
Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of a Data Guard configuration.
All management operations can be performed either through OEM, which uses the broker, or through the broker's command-line interface, DGMGRL.

What is the difference between Dataguard and Standby?
Dataguard :
Data Guard is the mechanism/tool used to maintain a standby database.
Data Guard is set up between a primary and a standby instance.
Data Guard is only available on Enterprise Edition.
Standby Database :
Physical standby database provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis.
Standby capability is available on Standard Edition.

What are the differences between Physical/Logical standby databases? How would you decide which one is best suited for your environment?
Physical standby DB:
As the name suggests, it is a physically identical copy (datafiles, schema, and other physical structures) of the primary database.
It is synchronized with the primary database by applying redo (Redo Apply) on the standby DB.
Logical Standby DB:
As the name suggests, the logical information is the same as in the production database, but the physical structure can be different.
It is synchronized with the primary database through SQL Apply: redo received from the primary database is transformed into SQL statements, which are then executed on the standby DB.
We can open a physical standby DB "read only" and make it available to application users (only SELECT is allowed during this period); we cannot apply redo logs received from the primary database at this time.
We do not see such issues with a logical standby database. We can open the database in normal mode and make it available to users, and at the same time apply archived logs received from the primary database.
For an OLTP large-transaction database it is better to choose a logical standby database.

Explain Active Dataguard?
11g Active Data Guard
Oracle Active Data Guard enables read-only access to a physical standby database for queries, sorting, reporting, web-based access, etc., while continuously applying changes received from the production database.
Oracle Active Data Guard also enables the use of fast incremental backups when offloading backups to a standby database, and can provide additional benefits of high availability and disaster protection against planned or unplanned outages at the production site.
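A sketch of enabling it on a physical standby (requires the Active Data Guard option license):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;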

What is a Snapshot Standby Database?
11g Snapshot Standby Database
Oracle 11g introduces the Snapshot Standby database which essentially is an updateable standby database which has been created from a physical standby database.
We can convert a physical standby database to a snapshot standby database, do some kind of testing on a database which is a read write copy of the current primary or production database and then finally revert it to it’s earlier state as a physical standby database.
While the snapshot standby database is open in read-write mode, redo is being received from the primary database, but is not applied.
After converting it back to a physical standby database, it is resynchronized with the primary by applying the accumulated redo data which was shipped from the primary database in the meantime but not applied.
Using a snapshot standby, we are able to do real-time application testing using near real-time production data. Very often we are required to create production clones for the purpose of testing; using snapshot standby databases we can meet the same requirement while sparing the effort, time, resources, and disk space.

Snapshot Standby Database (UPDATEABLE SNAPSHOT FOR TESTING)
A snapshot standby database is a fully updatable standby database that is created by converting a physical standby database into a snapshot standby database.
Like a physical or logical standby database, a snapshot standby database receives and archives redo data from a primary database. Unlike a physical or logical standby database, a snapshot standby database does not apply the redo data that it receives. The redo data received by a snapshot standby database is not applied until the snapshot standby is converted back into a physical standby database, after first discarding any local updates made to the snapshot standby database.
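The conversion itself is a pair of ALTER DATABASE commands on the standby (it must be mounted with redo apply stopped); a sketch:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;
-- after testing, discard local changes and resynchronize
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;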

What is the Default mode will the Standby will be, either SYNC or ASYNC?
ASYNC

Dataguard Architechture?
Data Guard Configurations:
A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other.
Dataguard Architecture
The Oracle 9i Data Guard architecture incorporates the following items:
• Primary Database – A production database that is used to create standby databases. The archive logs from the primary database are transferred and applied to standby databases. Each standby can only be associated with a single primary database, but a single primary database can be associated with multiple standby databases.
• Standby Database – A replica of the primary database.
• Log Transport Services – Control the automatic transfer of archive redo log files from the primary database to one or more standby destinations.
• Network Configuration – The primary database is connected to one or more standby databases using Oracle Net.
• Log Apply Services – Apply the archived redo logs to the standby database. The Managed Recovery Process (MRP) actually does the work of maintaining and applying the archived redo logs.
• Role Management Services – Control the changing of database roles from primary to standby. The services include switchover, switchback and failover.
• Data Guard Broker – Controls the creation and monitoring of Data Guard. It comes with a GUI and command line interface.
Primary Database:
A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.
Standby Database:
A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.
The types of standby databases are as follows:
Physical standby database:
Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database, through Redo Apply, which recovers the redo data received from the primary database and applies the redo to the physical standby database.
Logical standby database:
Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.

What are the services required on the primary and standby database ?
The services required on the primary database are:
• Log Writer Process (LGWR) – Collects redo information and updates the online redo logs. It can also create local archived redo logs and transmit online redo to standby databases.
• Archiver Process (ARCn) – One or more archiver processes make copies of online redo logs either locally or remotely for standby databases.
• Fetch Archive Log (FAL) Server – Services requests for archive redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request.
The services required on the standby database are:
• Fetch Archive Log (FAL) Client – Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence.
• Remote File Server (RFS) – Receives archived and/or standby redo logs from the primary database.
• Archiver (ARCn) Processes – Archives the standby redo logs applied by the managed recovery process (MRP).
• Managed Recovery Process (MRP) – Applies archive redo log information to the standby database.

What is RTS (Redo Transport Services) in Dataguard?
It controls the automated transfer of redo data from the production database to one or more archival destinations. The redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and automatically retrieve replacement archived redo log files from the
primary database or another standby database.

What are the Protection Modes in Dataguard?
Data Guard Protection Modes
This section describes the Data Guard protection modes.
In these descriptions, a synchronized standby database is one that meets the minimum requirements of the configured data protection mode and that does not have a redo gap.
Maximum Availability
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.
This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Maximum Performance
This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).
This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.
This is the default protection mode.
Maximum Protection
This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.
Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.
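The protection mode is changed on the primary database, for example:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> SELECT protection_mode, protection_level FROM v$database;
(Use TO MAXIMIZE PERFORMANCE or TO MAXIMIZE PROTECTION for the other modes.)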

How to delay the application of logs to a physical standby?
A standby database automatically applies redo logs when they arrive from the primary database. But in some cases, we want to create a time lag between the archiving of a redo log at the primary site, and the application of the log at the standby site.
Modify the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set a delay for the standby database.
Example: For 60min Delay:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc DELAY=60';
The DELAY attribute is expressed in minutes.
The archived redo logs are still automatically copied from the primary site to the standby site, but the logs are not immediately applied to the standby database. The logs are applied when the specified time interval expires.

Steps to create Physical Standby database?
1. Take a full hot backup of the primary database.
2. Create a standby control file.
3. Transfer the full backup, init.ora, and standby control file to the standby node.
4. Modify the init.ora file on the standby node.
5. Restore the database.
6. Recover the standby database.
(Alternatively, RMAN DUPLICATE ... FOR STANDBY DORECOVER can be used; see the sketch after this list.)
7. Set up the FAL_CLIENT and FAL_SERVER parameters on both sides.
8. Put the standby database in managed recovery mode.
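As an alternative to the manual restore/recover steps above, the standby can be built in one step with RMAN active duplication (11g and later); the service names chicago and boston are the examples used in the parameter section below:
$ rman target sys@chicago auxiliary sys@boston
RMAN> DUPLICATE TARGET DATABASE
2> FOR STANDBY
3> FROM ACTIVE DATABASE
4> DORECOVER
5> SPFILE SET db_unique_name='boston'
6> NOFILENAMECHECK;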

What are the DATAGUARD PARAMETERS in Oracle?
Set Primary Database Initialization Parameters
———————————————-
On the primary database, you define initialization parameters that control redo transport services while the database is in the primary role. There are additional parameters you need to add that control the receipt of the redo data and log apply services when the primary database is transitioned to the standby role.
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl', '/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/chicago/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
'SERVICE=boston LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
Primary Database: Standby Role Initialization Parameters
FAL_SERVER=boston
FAL_CLIENT=chicago
DB_FILE_NAME_CONVERT='boston','chicago'
LOG_FILE_NAME_CONVERT='/arch1/boston/','/arch1/chicago/','/arch2/boston/','/arch2/chicago/'
STANDBY_FILE_MANAGEMENT=AUTO
Prepare an Initialization Parameter File for the Standby Database
—————————————————————–
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) used by the primary database; a text initialization parameter file can be copied to the standby location and modified. For example:
CREATE PFILE='/tmp/initboston.ora' FROM SPFILE;
Modifying Initialization Parameters for a Physical Standby Database.

DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT='/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
FAL_CLIENT=boston