UGA (User Global Area) & PGA (Program Global Area)
Before explaining the UGA and PGA areas of the Oracle server, one should have a good understanding of the Shared Server (Multi-Threaded Server, or MTS) and Dedicated Server architectures. Let's start with basic knowledge about the MTS and Dedicated Server architectures.
Each user process is associated with a single server process. The server process can be dedicated or shared. A dedicated server process has a one-to-one relationship with its user process and occupies a certain amount of memory; in Oracle Database 10g, a dedicated server process consumes about 5 MB. In large-scale applications, the memory requirement for dedicated servers can be a serious issue. A shared server process, on the other hand, is shared by multiple user processes. We do not need a dedicated server process for every user process, so less memory is required and scalability improves. At any given time, a shared server process can serve only one user process, so multiple shared server processes are configured to handle multiple user processes.
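You can check which type of connection a session is using; a quick sanity check, assuming access to V$SESSION:

SQL> select username, server from v$session where username is not null;

SERVER reports DEDICATED for dedicated connections; shared server sessions show SHARED while a request is being serviced and NONE while idle.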
Shared server architecture consists of Listener Process, Dispatcher Process, Request Queue, Shared server process and Response Queue.
After deciding MTS is right for your environment, it is now time to dig into the implementation details. The DBA will need to consider the following two issues: User Session-Specific Data (UGA) and Routing Client Requests to Shared Server Processes.
User Session-Specific Data (UGA)
Every connection to the Oracle database has a “session-specific” memory associated with it. This memory is referred to as the User Global Area (UGA) and is used to hold the values of PL/SQL variables, the values of bind variables and other items specific to a session.
With a dedicated server connection, the UGA is stored within the dedicated server process's Program Global Area (PGA). When an MTS connection is made, however, the UGA is stored in either the large pool or the shared pool.
In an MTS environment, the UGA must live in a common memory structure such as the large pool or shared pool, because the "session-specific" data it contains may need to be accessed by any of the shared server processes.
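To see how much UGA and PGA a session is actually using, you can join V$SESSTAT to V$STATNAME; an illustrative query:

SQL> select s.sid, n.name, st.value
     from v$sesstat st, v$statname n, v$session s
     where st.statistic# = n.statistic#
       and st.sid = s.sid
       and n.name in ('session uga memory', 'session pga memory');

Under a dedicated server, the UGA figure is a subset of the PGA figure; under MTS the UGA lives in the SGA instead.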
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Oracle Server = Oracle Database + Oracle Instance
Oracle Database = Control files + Redo Log files + Data files
Oracle Instance = Memory Structures + Background Process
AMM & ASMM in Oracle
Evolution of Memory Management Features:
Memory management has evolved with each database release:
Oracle Database 10g
Automatic Shared Memory Management (ASMM) was introduced in 10g. You enable the automatic shared memory management feature by setting the SGA_TARGET parameter to a non-zero value.
Oracle Database 11g
Automatic Memory Management (AMM) was introduced in 11g. It enables automatic tuning of both the PGA and the SGA through two new parameters, MEMORY_MAX_TARGET and MEMORY_TARGET.
Oracle Database 11g supports various memory management methods, which are chosen by initialization parameter settings. Oracle recommends that you enable the automatic memory management method.
1. Automatic Memory Management – For Both the SGA and Instance PGA
2. Automatic Shared Memory Management – For the SGA
3. Manual Shared Memory Management – For the SGA
4. Automatic PGA Memory Management – For the Instance PGA
5. Manual PGA Memory Management – For the Instance PGA
Automatic Memory Management – For Both the SGA and Instance PGA
Beginning with Oracle Database 11g, Oracle Database can manage the SGA memory and instance PGA memory completely automatically. You designate only the total memory size to be used by the instance, and Oracle Database dynamically exchanges memory between the SGA and the instance PGA as needed to meet processing demands.
This capability is referred to as automatic memory management. With this method, the database also dynamically tunes the sizes of the individual SGA components and of the individual PGAs. To enable it (on most platforms), you set only a target memory size initialization parameter (MEMORY_TARGET) and, optionally, a maximum memory size initialization parameter (MEMORY_MAX_TARGET).
Switching to Automatic Memory Management
1) Check the current values configured for SGA_TARGET and PGA_AGGREGATE_TARGET.
SQL>SHOW PARAMETER TARGET
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
archive_lag_target                   integer     0
db_flashback_retention_target        integer     1440
fast_start_io_target                 integer     0
fast_start_mttr_target               integer     0
memory_max_target                    big integer 0
memory_target                        big integer 0
pga_aggregate_target                 big integer 200M
sga_target                           big integer 500M
Add the values of pga_aggregate_target and sga_target. In our case they sum to 700M.
2) Decide on the maximum amount of memory you want to allocate to the database; this determines the maximum value for the sum of the SGA and instance PGA sizes. In our case we decided on 808M.
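To sanity-check these numbers against actual usage before switching, you can query the current SGA and PGA allocations; illustrative queries:

SQL> select sum(value)/1024/1024 "SGA_MB" from v$sga;
SQL> select value/1024/1024 "PGA_ALLOC_MB" from v$pgastat where name = 'total PGA allocated';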
3) Change the parameters in the initialization parameter file.
Using Spfile
==============
SQL>ALTER SYSTEM SET MEMORY_MAX_TARGET = 808M SCOPE = SPFILE;
SQL>ALTER SYSTEM SET MEMORY_TARGET = 808M SCOPE = SPFILE;
SQL>ALTER SYSTEM SET SGA_TARGET =0 SCOPE = SPFILE;
SQL>ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE = SPFILE;
==============
Using Pfile
==============
If you have started the instance with Pfile, then edit the pfile and set the parameters manually
MEMORY_MAX_TARGET = 808M
MEMORY_TARGET = 808M
SGA_TARGET =0
PGA_AGGREGATE_TARGET = 0
==============
In case you do not specify any value for MEMORY_MAX_TARGET and only use MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.
MEMORY_MAX_TARGET is a static parameter, i.e., it cannot be changed dynamically; the instance has to be bounced to modify its value. So ensure that you set it to an appropriate value.
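For example, once the instance is back up you can resize MEMORY_TARGET on the fly, as long as you stay within the MEMORY_MAX_TARGET ceiling (value illustrative):

SQL> ALTER SYSTEM SET MEMORY_TARGET = 700M SCOPE = BOTH;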
4) Shut down and start up the database.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 845348864 bytes
Fixed Size 1303188 bytes
Variable Size 469765484 bytes
Database Buffers 369098752 bytes
Redo Buffers 5181440 bytes
SQL> show parameter target
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
archive_lag_target                   integer     0
db_flashback_retention_target        integer     1440
fast_start_io_target                 integer     0
fast_start_mttr_target               integer     0
memory_max_target                    big integer 808M
memory_target                        big integer 808M
pga_aggregate_target                 big integer 0
sga_target                           big integer 0
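With AMM active, you can watch how the instance distributes memory between the SGA and PGA over time; an illustrative 11g query:

SQL> select component, current_size/1024/1024 "MB"
     from v$memory_dynamic_components
     where current_size > 0;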
Automatic Shared Memory Management – For the SGA
If you want to exercise more direct control over the size of the SGA, you can disable automatic memory management and enable automatic shared memory management. This feature was introduced in 10g with a parameter known as SGA_TARGET. When automatic SGA memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of the current workload without requiring any additional configuration.
In case you have enabled Automatic Memory Management, follow the procedure below to switch to Automatic Shared Memory Management:
SQL>Alter system set MEMORY_TARGET=0 scope=both;
SQL>Alter system set SGA_TARGET=500M scope=both;
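Under ASMM you can still set explicit values for individual components; they then act as minimum sizes that the auto-tuner will not shrink below (values illustrative):

SQL> Alter system set SHARED_POOL_SIZE=150M scope=both;
SQL> Alter system set DB_CACHE_SIZE=100M scope=both;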
expdp/impdp: export and import between releases using datapump
Expdp / Impdp
Data Pump is a new feature in Oracle 10g that provides fast parallel data load. With direct path and parallel execution, Data Pump is several times faster than the traditional exp/imp. Traditional exp/imp runs on the client side, but expdp/impdp runs on the server side, so we have much more control with expdp/impdp than with traditional exp/imp. Compared to exp/imp, Data Pump startup time is longer because it has to set up the jobs, queues, and master table. Also, at the end of the export operation the master table data is written to the dump file set, and at the beginning of the import job the master table is located and loaded into the schema of the user.
The following processes are involved in a Data Pump operation:
Client Process: This process is initiated by the client utility and makes calls to the Data Pump API. Once the Data Pump job is initiated, this process is not necessary for the progress of the job.
Shadow Process: When a client logs in to the database, a foreground process is created. It services the client's Data Pump API requests, creates the master table, and creates the Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.
Master Control Process: The MCP controls the execution of the Data Pump job. There is one MCP per job. The MCP divides the Data Pump job into various metadata and data load or unload jobs and hands them over to the worker processes.
Worker Process: The MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.
Advantages of Data Pump
1. We can perform exports in parallel. Data Pump can also write to multiple files on different disks. (Specify PARALLEL=2 and two directory names with file specifications: DUMPFILE=ddir1:/file1.dmp, DDIR2:/file2.dmp)
2. Ability to attach to and detach from a job, and to monitor the job progress remotely.
3. More options to filter metadata objects, e.g. EXCLUDE, INCLUDE.
4. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job.
5. Data can be exported from a remote database by using a database link.
6. An explicit DB version can be specified, so only supported object types are exported.
7. During impdp, we can change the target file names, schema, and tablespace, e.g. REMAP_SCHEMA, REMAP_DATAFILE, REMAP_TABLESPACE.
8. Option to filter data rows during impdp. In traditional exp/imp, we have this filter option only in exp; here we have the filter option in both impdp and expdp.
9. Data can be imported from one DB to another without writing to a dump file, using the NETWORK_LINK parameter.
10. Data access methods are decided automatically. In traditional exp/imp, we specify a value for the DIRECT parameter; here, where direct path cannot be used, conventional path is used.
11. Job status can be queried directly from the data dictionary (for example, dba_datapump_jobs, dba_datapump_sessions, etc.).
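Besides parfiles, Data Pump parameters can also be passed directly on the command line; a minimal sketch, assuming the directory object and demo credentials used later in this article:

C:\> expdp system/password@orcl schemas=scott directory=dumplocation dumpfile=scott.dmp logfile=scott.log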
Exp & Expdp common parameters: The parameters below exist in both the traditional exp and the expdp utility.
FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES(exp value is Y/N, expdp value is name of the tablespace)
Comparing exp & expdp parameters: The parameters below are equivalent between exp and expdp. Exp parameter and corresponding expdp parameter...
FEEDBACK => STATUS
FILE => DUMPFILE
LOG => LOGFILE
OWNER => SCHEMAS
TTS_FULL_CHECK => TRANSPORT_FULL_CHECK
New parameters in expdp Utility
ATTACH Attach the client session to existing data pump jobs
CONTENT Specify what to export(ALL, DATA_ONLY, METADATA_ONLY)
DIRECTORY Location to write the dump file and log file.
ESTIMATE Show how much disk space each table in the export job consumes.
ESTIMATE_ONLY Estimates the space but does not perform the export
EXCLUDE List of objects to be excluded
INCLUDE List of objects to be included
JOB_NAME Name of the export job
KEEP_MASTER Specify Y not to drop the master table after export
NETWORK_LINK Specify dblink to export from remote database
NOLOGFILE Specify Y if you do not want to create log file
PARALLEL Specify the maximum number of threads for the export job
VERSION DB objects that are incompatible with the specified version will not be exported.
ENCRYPTION_PASSWORD If a table column is encrypted and no password is specified, the data is written as clear text in the dump file set. We can define any string as the password for this parameter.
COMPRESSION Specifies whether to compress metadata before writing to the dump file set. The default is METADATA_ONLY. There are two values (METADATA_ONLY, NONE). Use NONE to disable compression during the expdp.
SAMPLE Allows you to specify a percentage of data to be sampled and unloaded from the source database. The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
Imp & Impdp common parameters: The parameters below exist in both the traditional imp and the impdp utility.
FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES
Comparing imp & impdp parameters: The parameters below are equivalent between imp and impdp. imp parameter and corresponding impdp parameter...
DATAFILES => TRANSPORT_DATAFILES
DESTROY =>REUSE_DATAFILES
FEEDBACK =>STATUS
FILE =>DUMPFILE
FROMUSER => SCHEMAS, REMAP_SCHEMA
IGNORE =>TABLE_EXISTS_ACTION(SKIP,APPEND,TRUNCATE,REPLACE)
INDEXFILE, SHOW=>SQLFILE
LOG =>LOGFILE
TOUSER =>REMAP_SCHEMA
New parameters in impdp Utility
FLASHBACK_SCN Performs import operation that is consistent with the SCN specified from the source database. Valid only when NETWORK_LINK parameter is used.
FLASHBACK_TIME Similar to FLASHBACK_SCN, but oracle finds the SCN close to the time specified.
NETWORK_LINK Performs the import directly from a source database using the database link name specified in the parameter. The dump file will not be created on the server when we use this parameter. To get a consistent export from the source database, we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters; these two parameters are only valid when we use the NETWORK_LINK parameter.
REMAP_DATAFILE Changes name of the source DB data file to a different name in the target.
REMAP_SCHEMA Loads objects to a different target schema name.
REMAP_TABLESPACE Changes name of the source tablespace to a different name in the target.
TRANSFORM We can specify that the storage clause should not be generated in the DDL for import. This is useful if the storage characteristics of the source and target database are different. The valid values are SEGMENT_ATTRIBUTES, STORAGE. STORAGE removes the storage clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and storage attributes.
TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.
For instance, TRANSFORM=storage:N:table
ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified on the export operation.
CONTENT, INCLUDE, and EXCLUDE are the same as in the expdp utility.
Prerequisite for expdp/impdp:
Set up the dump location in the database.
system@orcl> create directory dumplocation
2 as 'c:/dumplocation';
Directory created.
system@orcl> grant read,write on directory dumplocation to scott;
Grant succeeded.
system@orcl>
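You can verify the directory object and its operating-system path afterwards; a quick check, assuming DBA privileges:

system@orcl> select directory_name, directory_path from dba_directories
  where directory_name = 'DUMPLOCATION';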
Let us exercise the expdp & impdp utilities in different scenarios. We have two databases, orcl and ordb. All the scenarios below were tested on Oracle 10g R2.
Scenario 1: Export the whole orcl database.
Export Parfile content:
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation
Scenario 2: Export the scott schema from orcl and import it into the ordb database. During import, exclude some objects (sequence, view, package, cluster, table). Load the objects that came from the RES tablespace into the USERS tablespace in the target database.
Export Parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott
Import parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"
Scenario 3: Export the emp table from the scott schema at the orcl instance and import it into the ordb instance.
Expdp parfile content:
userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.part_emp
dumpfile=tableexpdb.dmp
Impdp parfile content:
userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE
Scenario 4: Export only specific partitions of the emp table from the scott schema at orcl and import them into the ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20
Impdp parfile content: If we want to overwrite the exported data in the target database, then we need to delete the part_emp rows for deptno in (10,20).
scott@ordb> delete part_emp where deptno=10;
786432 rows deleted.
scott@ordb> delete part_emp where deptno=20;
1310720 rows deleted.
scott@ordb> commit;
Commit complete.
userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=append
Scenario 5: Export only the tables in the scott schema at orcl and import them into the ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
Scenario 6: Export only the rows belonging to departments 10 and 20 in the emp and dept tables from the orcl database. Import the dump file into the ordb database, loading only deptno 10 into the target.
Expdp parfile content:
userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"
Impdp parfile content:
userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND
Scenario 7: Export the scott schema from the orcl database and split the dump file into 50M pieces. Import the dump files into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile, schemaexp_split_01.dmp is created first. Once that file reaches 50MB, the next file, schemaexp_split_02.dmp, is created. Say the total dump size is 500MB; it then creates 10 dump files, each 50MB in size.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
Scenario 8: Export the scott schema from the orcl database and split the dump file into four files. Import the dump files into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table
As per the above parfile content, four files will be created initially - schemaexp_split_01.dmp, schemaexp_split_02.dmp, schemaexp_split_03.dmp, schemaexp_split_04.dmp. Notice that every occurrence of the substitution variable is incremented each time. Since there is no FILESIZE parameter, no more files will be created.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
Scenario 9: Export the scott schema from the orcl database and split the dump file into three files, stored in three different locations. This method is especially useful if you do not have enough space in one file system for the complete expdp job. After the export succeeds, import the dump files into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile content, the dump files are placed in three different locations. Say the entire expdp dump size is 1500MB; it then creates 30 dump files (each 50MB in size) and places 10 files in each file system.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace
Scenario 10: We are on the orcl database server. Export the ordb data and place the dump file on the orcl database server. After expdp succeeds, import the dump file into the orcl database. When we use network_link, the expdp user and the source database schema user should have identical privileges; if the privileges are not identical, we get the error below.
C:\impexpdp>expdp parfile=networkexp1.par
Export: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 12:06:40
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user
Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb
As per the above parfile, the expdp utility exports the ordb database data and places the dump file on the orcl server, since we are running expdp on the orcl server. This is basically exporting data from a remote database.
Impdp parfile content:
userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
Scenario 11: Export the scott schema in orcl and import it into ordb, but do not write a dump file on the server. The expdp and impdp should complete without writing a dump file to the server. Here we do not need to export the data at all; we can import it without creating a dump file.
Here we run impdp on the ordb server; it contacts the orcl DB, extracts the data, and imports it into the ordb database. If we do not have much space in the file system to place a dump file, we can use this option to load the data.
Impdp parfile content:
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace
Scenario 12: Expdp the scott schema in orcl and impdp the dump file into the training schema in the ordb database.
Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
Impdp parfile content:
userid=system/password@ordb
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
remap_schema=scott:training
Scenario 13: Expdp a table in the orcl database and impdp it into ordb. When we export the data, export only 20 percent of the table data; we use the SAMPLE parameter to accomplish this task.
The SAMPLE parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a block of rows will be selected as part of the sample; it does not mean that the database will retrieve exactly that number of rows from the table. The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.
If no table is specified, then the sample_percent value applies to the entire export job. The SAMPLE parameter is not valid for network exports.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20
As per the above expdp parfile, it exports only 20 percent of the data in the part_emp table.
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
Managing Data Pump jobs
The Data Pump clients expdp and impdp provide an interactive command interface. Since each expdp and impdp operation has a job name, you can attach to that job from any computer and monitor it or make adjustments to it.
Here are the data pump interactive commands.
ADD_FILE Adds another file or a file set to the DUMPFILE set.
CONTINUE_CLIENT Changes mode from interactive client to logging mode.
EXIT_CLIENT Leaves the client session and discontinues logging, but leaves the current job running.
KILL_JOB Detaches all currently attached client sessions and terminates the job.
PARALLEL Increases or decreases the number of threads.
START_JOB Starts (or resumes) a job that is not currently running. The SKIP_CURRENT option can skip the recent failed DDL statement that caused the job to stop.
STOP_JOB Stops the current job; the job can be restarted later.
STATUS Displays detailed status of the job; the refresh interval can be specified in seconds. The detailed status is displayed on the output screen but not written to the log file.
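For instance, pressing Ctrl+C in a running client drops you to the Import> (or Export>) prompt, where these commands can be issued directly; an illustrative exchange:

Import> status
Import> parallel=4
Import> stop_job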
Scenario 14: Let us start a job, stop it in the middle, and then resume it. After some time, we will kill the job, checking the job status after every step.
We can find which jobs are currently running in the database by using the query below.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01
SQL>
C:\impexpdp>impdp parfile=schemaimp1.par
Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:06:51
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Import> stop_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
When we want to stop the job, we need to press Ctrl+C to return to the Import> prompt. Once it has returned to the prompt (Import>), we can stop the job with the stop_job command as above.
After the job is stopped, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
NOT RUNNING SYS_IMPORT_FULL_01
SQL>
Now we attach to the job again. Attaching to the job does not restart it.
C:\impexpdp>impdp system/password@ordb attach=SYS_IMPORT_FULL_01
Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:17:11
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Job: SYS_IMPORT_FULL_01
Owner: SYSTEM
Operation: IMPORT
Creator Privs: FALSE
GUID: 54AD9D6CF9B54FC4823B1AF09C2DC723
Start Time: Sunday, 17 May, 2009 14:17:12
Mode: FULL
Instance: ordb
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND parfile=schemaexp1.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND parfile=schemaimp1.par
TABLE_EXISTS_ACTION REPLACE
State: IDLING
Bytes Processed: 1,086,333,016
Percent Done: 44
Current Parallelism: 1
Job Error Count: 0
Dump File: c:/impexpdp\networkexp1.dmp
Worker 1 Status:
State: UNDEFINED
Import>
After attaching the job, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
IDLING SYS_IMPORT_FULL_01
SQL>
Attaching to the job does not resume it. Now we resume the job again.
Import> continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Sunday, 17 May, 2009 14:17
Restarting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01
SQL>
Now we kill the same job. Before we kill it, we need to press Ctrl+C to return to the Import> prompt.
Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
Now the job has disappeared from the database.
SQL> select state,job_name from dba_datapump_jobs;
no rows selected.
Different RMAN Recovery Scenarios
In this article we will discuss different types of database recovery scenarios. The target database name as well as the catalog database can be different. Assume you have sufficient backups for this example: daily incremental backups for all target databases (Saturday to Thursday) and a weekly full backup (Friday). The daily and weekly backup scripts include the datafiles, archive logs, and control file & spfile autobackup. The target database versions are Oracle 9i/10g, and the platform can be Windows 2003 or Linux. The motive of this article is to give freshers and junior DBAs the confidence to apply different recovery scenarios in different situations. Some of the scenarios' recovery logs are from our production database and some were taken from other DBAs.
Suppose you lost all the data files, control files, and redo log files, but luckily the current archived redo log files were intact. In that case you are not able to mount the database, since you have lost your control file too.
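A rough sketch of that recovery with control file autobackups in place (connection strings and steps illustrative; without a recovery catalog you would first SET DBID so RMAN can locate the autobackup):

C:\> rman target sys/****@sadhan catalog catalog/catalog@rman
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;

Since the online redo logs are gone, recovery stops at the end of the available archived logs and the database must be opened with RESETLOGS.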
One fine morning our DB server restarted due to a crashed hard disk and an OS issue. Once the disk was repaired and the OS restored, the database mounted successfully but we could not open it; it needed media recovery, failing with the following errors. As most of the tablespace datafiles were corrupted, we decided to do a complete database recovery.
SQL> startup;
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'D:\ORACLE\ORADATA\SADHAN\SYSTEM01.DBF'
The corruption happened in the Wednesday morning session, so we have the daily incremental night backups of Sun-Wed, the full backup of Saturday (29th September 2012), and the entire month's archive logs on disk. We just mounted the database.
C:\rman target sys/****@sadhan catalog catalog/catalog@rman
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> SHUTDOWN IMMEDIATE;
database dismounted
Oracle instance shut down
RMAN> STARTUP MOUNT;
connected to target database (not started)
RMAN> RESTORE DATABASE;
Starting restore at 03-OCT-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=13 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE\ORADATA\SADHAN\SYSTEM01.DBF
restoring datafile 00002 to D:\ORACLE\ORADATA\SADHAN\UNDOTBS01.DBF
restoring datafile 00009 to G:\ORA_DBF_EXTENDED\SDH_HRMS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\MONTHLY_20120929_FULL_SADHAN-2981_1.DB tag=SADHAN_FULL_D BBACKUP params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00005 to D:\ORACLE\ORADATA\SADHAN\TOOLS01.DBF
restoring datafile 00011 to G:\ORA_DBF_EXTENDED\SDH_SHTR01.DBF
restoring datafile 00012 to G:\ORA_DBF_EXTENDED\SDH_FIN01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\MONTHLY_20120929_FULL_SADHAN-2983_1.DB tag=SADHAN_FULL_D BBACKUP params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00007 to D:\ORACLE\ORADATA\SADHAN\SDH_TIMS01.DBF
restoring datafile 00013 to F:\ORACLE\SADHAN\SDH_EDSS02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\MONTHLY_20120929_FULL_SADHAN-2985_1.DB tag=SADHAN_FULL_D BBACKUP params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00004 to D:\ORACLE\ORADATA\SADHAN\INDX01.DBF
restoring datafile 00010 to F:\ORACLE\SADHAN\SDH_EDSS01.DBF
restoring datafile 00014 to D:\ORACLE\ORADATA\SADHAN\INDX02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\MONTHLY_20120929_FULL_SADHAN-2982_1.DB tag=SADHAN_FULL_D BBACKUP params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00003 to D:\ORACLE\ORADATA\SADHAN\EXAMPLE01.DBF
restoring datafile 00006 to D:\ORACLE\ORADATA\SADHAN\USERS01.DBF
restoring datafile 00008 to D:\ORACLE\ORADATA\SADHAN\SDH_TIMS02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\MONTHLY_20120929_FULL_SADHAN-2984_1.DB tag=SADHAN_FULL_DBBACKUP params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 03-OCT-12
RMAN> RECOVER DATABASE;
Starting recover at 03-OCT-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: D:\ORACLE\ORADATA\SADHAN\SYSTEM01.DBF
destination for restore of datafile 00002: D:\ORACLE\ORADATA\SADHAN\UNDOTBS01.DBF
destination for restore of datafile 00009: G:\ORA_DBF_EXTENDED\SDH_HRMS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_CUMUL_SADHAN-3036_1.DB tag=CUMULATIVE_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00005: D:\ORACLE\ORADATA\SADHAN\TOOLS01.DBF
destination for restore of datafile 00011: G:\ORA_DBF_EXTENDED\SDH_SHTR01.DBF
destination for restore of datafile 00012: G:\ORA_DBF_EXTENDED\SDH_FIN01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_CUMUL_SADHAN-3038_1.DB tag=CUMULATIVE_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: D:\ORACLE\ORADATA\SADHAN\INDX01.DBF
destination for restore of datafile 00010: F:\ORACLE\SADHAN\SDH_EDSS01.DBF
destination for restore of datafile 00014: D:\ORACLE\ORADATA\SADHAN\INDX02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_CUMUL_SADHAN-3037_1.DB tag=CUMULATIVE_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00007: D:\ORACLE\ORADATA\SADHAN\SDH_TIMS01.DBF
destination for restore of datafile 00013: F:\ORACLE\SADHAN\SDH_EDSS02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_CUMUL_SADHAN-3040_1.DB tag=CUMULATIVE_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00003: D:\ORACLE\ORADATA\SADHAN\EXAMPLE01.DBF
destination for restore of datafile 00006: D:\ORACLE\ORADATA\SADHAN\USERS01.DBF
destination for restore of datafile 00008: D:\ORACLE\ORADATA\SADHAN\SDH_TIMS02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_CUMUL_SADHAN-3039_1.DB tag=CUMULATIVE_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: D:\ORACLE\ORADATA\SADHAN\SYSTEM01.DBF
destination for restore of datafile 00002: D:\ORACLE\ORADATA\SADHAN\UNDOTBS01.DBF
destination for restore of datafile 00009: G:\ORA_DBF_EXTENDED\SDH_HRMS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_INCR_SADHAN-3041_1.DB tag=DIFFERENTIAL_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00005: D:\ORACLE\ORADATA\SADHAN\TOOLS01.DBF
destination for restore of datafile 00011: G:\ORA_DBF_EXTENDED\SDH_SHTR01.DBF
destination for restore of datafile 00012: G:\ORA_DBF_EXTENDED\SDH_FIN01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_INCR_SADHAN-3043_1.DB tag=DIFFERENTIAL_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: D:\ORACLE\ORADATA\SADHAN\INDX01.DBF
destination for restore of datafile 00010: F:\ORACLE\SADHAN\SDH_EDSS01.DBF
destination for restore of datafile 00014: D:\ORACLE\ORADATA\SADHAN\INDX02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_INCR_SADHAN-3042_1.DB tag=DIFFERENTIAL_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00007: D:\ORACLE\ORADATA\SADHAN\SDH_TIMS01.DBF
destination for restore of datafile 00013: F:\ORACLE\SADHAN\SDH_EDSS02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_INCR_SADHAN-3045_1.DB tag=DIFFERENTIAL_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00003: D:\ORACLE\ORADATA\SADHAN\EXAMPLE01.DBF
destination for restore of datafile 00006: D:\ORACLE\ORADATA\SADHAN\USERS01.DBF
destination for restore of datafile 00008: D:\ORACLE\ORADATA\SADHAN\SDH_TIMS02.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\DAILY_20121003_INCR_SADHAN-3044_1.DB tag=DIFFERENTIAL_L1_DATAFILE params=NULL
channel ORA_DISK_1: restore complete
starting media recovery
media recovery failed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 10/03/2012 20:00:52
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed start
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [37833581], [1], [4504], [70575], [244], [], []
ORA-10567: Redo is inconsistent with data block (file# 9, block# 84845)
ORA-10564: tablespace SDH_HRMS_DBF
ORA-01110: data file 9: 'G:\ORA_DBF_EXTENDED\SDH_HRMS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 26918
RMAN> ALTER DATABASE OPEN;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 10/03/2012 20:01:30
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: 'D:\ORACLE\ORADATA\SADHAN\SYSTEM01.DBF'
At this stage there is no need to panic; the database has already been restored successfully. The recovery failed because this is a very old database that was created in noresetlogs mode, and we cannot open it without performing incomplete recovery.
Now log in with SQL*Plus while the database is in the mount phase:
SQL> recover database until cancel;
Press Enter to apply each suggested archived log until you reach the missing log, then type:
CANCEL
SQL> alter database open resetlogs;
Note: You must take a fresh full backup after opening the database with the resetlogs option, and you must run "reset database" once RMAN is connected to the target and catalog before taking that backup.
C:\>rman target sys/sysman@sadhan.world catalog catalog/catalog@rman
RMAN> RESET DATABASE;
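With the incarnation reset, the fresh full backup can be taken right away. A minimal sketch using standard RMAN commands (this step is implied by the note above but was not part of the original log):
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;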
One of the junior DBAs was working in the Production environment on a Thursday afternoon when, due to a media failure, one of the data files became corrupted while all the other data files remained fine. When the DBA restarted the database, that data file started complaining, so the DBA decided to recover that particular datafile.
SQL> startup;
ORACLE instance started.
Database mounted.
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: 'D:\ORACLE\ORA92\MUJAZORC\USERS02.DBF'
He took the corrupted data file OFFLINE and opened the database, so users could keep working while that particular data file was recovered (but in the case of system01.dbf you need sufficient downtime to recover).
SQL> alter database datafile 'D:\ORACLE\ORA92\MUJAZORC\USERS02.DBF' OFFLINE;
Database altered.
SQL> alter database open;
Database altered.
SQL> select file_id from dba_data_files where file_name = 'D:\ORACLE\ORA92\MUJAZORC\USERS02.DBF';
FILE_ID
----------
6
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production With the Partitioning, OLAP and Data Mining options
Now connect RMAN with the catalog and restore and recover datafile 6:
C:\>rman target sys/****@mujazhr.world catalog catalog/catalog@rman
Recovery Manager: Release 10.2.0.1.0 - Production on Thu Jun 25 14:30:09 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: MUJAZORC (DBID=1165034825)
connected to recovery catalog database
RMAN> run
{
restore datafile 6;
recover datafile 6;
}
Starting restore at 25-JUN-12
starting full resync of recovery catalog
full resync complete
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=135 devtype=DISK
creating datafile fno=6 name=D:\ORACLE\ORA92\MUJAZORC\USERS02.DBF
restore not done; all files readonly, offline, or already restored
Finished restore at 25-JUN-12
Starting recover at 25-JUN-12
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:01
Finished recover at 25-JUN-12
RMAN> exit
Recovery Manager complete.
C:\>sqlplus sys/oracle as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 25 14:34:10 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> alter database datafile 'D:\ORACLE\ORA92\MUJAZORC\USERS02.DBF' ONLINE;
Database altered.
SQL> Select distinct status from dba_data_files;
STATUS
---------
AVAILABLE
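Incidentally, before taking a file offline it is worth confirming which files actually need media recovery. A minimal check against the standard v$recover_file view (this query is our addition, not part of the original session):
SQL> select file#, online_status, error from v$recover_file;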
One of our Production databases belongs to a Retail & Trading firm and runs on a Windows 2003 environment. On Friday morning the database went down. When we restarted it, the database complained that one of the datafiles was corrupted, possibly due to the hard disk repair work done on Thursday evening, whereas the datafiles of the other tablespaces had no issue. So we decided to recover only that particular tablespace.
SQL> startup;
Database mounted.
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: 'D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF01.DBF'
We took this tablespace's datafiles offline and opened the database, so that end users could use the database while the tablespace was being recovered (as this is not the SYSTEM tablespace).
SQL> alter database datafile 'D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF01.DBF' OFFLINE;
Database altered.
SQL> alter database open;
alter database open
* ERROR at line 1:
ORA-01157: cannot identify/lock data file 7 - see DBWR trace file
ORA-01110: data file 7: 'D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF02.DBF'
SQL> alter database datafile 'D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF02.DBF' OFFLINE;
Database altered.
SQL> alter database open;
Database altered.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 –
Production With the Partitioning, OLAP and Data Mining options
C:\>rman target sys/****@ISSCOHR.world catalog catalog/catalog@rman
Recovery Manager: Release 10.2.0.1.0 - Production on Fri Jan 12 10:13:26 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ISSCOHR (DBID=2613999945)
connected to recovery catalog database
RMAN> run
{
restore tablespace EDSS_DBF;
recover tablespace EDSS_DBF;
}
Starting restore at 12-JAN-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=128 devtype=DISK
creating datafile fno=7 name= D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF02.DBF
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00006 to D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF01.DBF
channel ORA_DISK_1: reading from backup piece H:\ORABACK\WEEKLY_20130112_FULL_ISSCOHR-2587_1.DB
channel ORA_DISK_1: restored backup piece 1
piece handle=H:\ORABACK\WEEKLY_20130112_FULL_ISSCOHR-2588_1.DB
channel ORA_DISK_1: restore complete, elapsed time: 00:13:11
Finished restore at 12-JAN-13
Starting recover at 12-JAN-13
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 36 is already on disk as file E:\ORACLE\ARCHIVE\ARC02244.001
archive log thread 1 sequence 37 is already on disk as file E:\ORACLE\ARCHIVE\ARC02245.001
archive log thread 1 sequence 38 is already on disk as file E:\ORACLE\ARCHIVE\ARC02246.001
archive log thread 1 sequence 39 is already on disk as file E:\ORACLE\ARCHIVE\ARC02247.001
media recovery complete, elapsed time: 00:01:23
Finished recover at 12-JAN-13
SQL> connect sys/password as sysdba
Connected.
SQL> alter database datafile 'D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF01.DBF' ONLINE;
Database altered.
SQL> alter database datafile 'D:\ORACLE\ORADATA\ISSCOHR\EDSS_DBF02.DBF' ONLINE;
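Once both files are back online, it is worth confirming their status. A minimal check against the data dictionary (our addition, not part of the original session):
SQL> select file_name, status from dba_data_files where tablespace_name = 'EDSS_DBF';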
If the SYSTEM tablespace gets corrupted while the others are intact, you need sufficient downtime to recover it, as you cannot open the database without recovering the SYSTEM tablespace.
SQL> startup;
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF'
Take the SYSTEM tablespace offline and recover that tablespace only. The recovery log below was taken from a Test database.
SQL> alter database datafile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF' OFFLINE;
Database altered.
SQL> ALTER DATABASE OPEN;
ALTER DATABASE OPEN
* ERROR at line 1:
ORA-01147: SYSTEM tablespace file 1 is offline
ORA-01110: data file 1: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF'
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options
C:\>rman catalog=rman/rman@catdb target=sys/password@orcl
Recovery Manager: Release 10.2.0.1.0 - Production on Fri May 22 10:21:04 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ORCL (DBID=1215124933, not open)
connected to recovery catalog database
RMAN> run
{
restore tablespace system;
recover tablespace system;
}
Starting restore at 22-MAY-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
channel ORA_DISK_1: reading from backup piece C:\RMANBACKUP\BACKUPORCL_DB_0JKFLE9Q_1_1_%S_%P
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\RMANBACKUP\BACKUPORCL_DB_0JKFLE9Q_1_1_%S_%P tag=TAG20090522T094738
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 22-MAY-09
Starting recover at 22-MAY-09
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:04
Finished recover at 22-MAY-09
RMAN> exit
Recovery Manager complete.
C:\>sqlplus sys/password as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri May 22 10:22:20 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production With the Partitioning, OLAP and Data Mining options
SQL> alter database open;
alter database open
* ERROR at line 1:
ORA-01147: SYSTEM tablespace file 1 is offline
ORA-01110: data file 1: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF'
SQL> alter database datafile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF' online;
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL> select distinct status from dba_tablespaces;
STATUS
---------
ONLINE
SQL> select distinct status from dba_data_files;
STATUS
---------
AVAILABLE
One afternoon, you restart your database and realize that all the redo log files are corrupted, but fortunately you have not lost the control file, so you are able to mount the database. With the redo log files gone, you decide to perform incomplete recovery.
SQL> startup;
ORACLE instance started.
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1:'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG'
SQL> exit
As you took a backup in the morning, mount the database and run the RMAN backup first, just to make sure all the archived redo log files are backed up before starting the actual recovery process.
Once the RMAN backup is complete, perform incomplete recovery, recovering up to the last archived redo log. The recovery logs below are taken from another source.
C:\>rman catalog=rman/rman@catdb target=sys/password@orcl
Recovery Manager: Release 10.2.0.1.0 - Production on Sat May 23 20:36:24 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ORCL (DBID=1215124933, not open)
connected to recovery catalog database
RMAN> backup archivelog all;
Starting backup at 23-MAY-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=152 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=2 recid=31 stamp=687541158
input archive log thread=1 sequence=3 recid=32 stamp=687542042
input archive log thread=1 sequence=4 recid=33 stamp=687542062
comment=NONE
Starting Control File and SPFILE Autobackup at 23-MAY-09
piece handle=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\C-1215124933-20090523-0A comment=NONE
Finished Control File and SPFILE Autobackup at 23-MAY-09
RMAN> list backup of archivelog from time='sysdate-1';
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
10692 37.00K DISK 00:00:01 23-MAY-09
BP Key: 10697 Status: AVAILABLE Compressed: NO Tag: TAG20090523T224042
Piece Name: C:\RMANBACKUP\BACKUPORCL_DB_07KFPFVA_7_1
List of Archived Logs in backup set 10692
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 6 551206 23-MAY-09 551263 23-MAY-09
RMAN> restore database until sequence=6 thread=1 force;
Starting restore at 23-MAY-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
restoring datafile 00002 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
restoring datafile 00004 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\02KFPFLV_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\02KFPFLV_1_1 tag=TAG20090523T223542
channel ORA_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 23-MAY-09
RMAN> recover database until sequence=6 thread=1;
Starting recover at 23-MAY-09
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 4 is already on disk as file C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_4_1_687649381.ARC
archive log thread 1 sequence 5 is already on disk as file C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_5_1_687649381.ARC
archive log filename=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_4_1_687649381.ARC thread=1 sequence=4
archive log filename=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_5_1_687649381.ARC thread=1 sequence=5
media recovery complete, elapsed time: 00:00:03
Finished recover at 23-MAY-09
RMAN> alter database open resetlogs;
database opened
new incarnation of database registered in recovery catalog
The database is recovered successfully, but this is an incomplete recovery. Prior to Oracle 10g, Oracle strongly recommended taking a full database backup whenever an incomplete recovery was performed. In Oracle 10g it is optional, but it is still good practice to take a full database backup to be on the safe side.
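Since the resetlogs here was issued from within RMAN, the new incarnation is registered in the catalog automatically, as the log above shows. It can be confirmed with a standard catalog command before taking the fresh backup (our addition, not part of the original log):
RMAN> list incarnation of database;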
Unfortunately, suppose you lose all the data files, control files and redo log files, but luckily the current archived redo log files are intact. In that case you are not able to mount the database, since you have lost the control file too.
SQL> startup;
ORACLE instance started.
ORA-00205: error in identifying control file, check alert log for more info
You first need to restore the control file from the RMAN backup. Once the control file is restored, you are able to mount the database. After the database is mounted, run an RMAN backup to make sure all the current archived log files are backed up, then recover the database up to the last archived log sequence (the v$database query below, archivelog_change#-1, gives the highest SCN recoverable from the archived logs).
C:\>set oracle_sid=orcl
C:\>rman catalog=rman/rman@catdb target=sys/password
Recovery Manager: Release 10.2.0.1.0 - Production on Sun May 24 15:26:51 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: orcl (not mounted)
connected to recovery catalog database
RMAN> restore controlfile;
Starting restore at 24-MAY-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=157 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: reading from backup piece C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\C-1215492928-20090524-00
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\C-1215492928-20090524-00 tag=TAG20090524T152409
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
output filename=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL01.CTL
output filename=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL02.CTL
output filename=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL03.CTL
Finished restore at 24-MAY-09
SQL> alter database mount;
Database altered.
SQL> select archivelog_change#-1 from v$database;
ARCHIVELOG_CHANGE#-1
--------------------
547010
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options
C:\>rman catalog=rman/rman@catdb target=sys/password@orclRecovery Manager: Release 10.2.0.1.0 - Production on Sun May 24 15:29:33 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ORCL (DBID=1215492928, not open)
connected to recovery catalog database
RMAN> run
{
set until scn 547010;
restore database;
recover database;
alter database open resetlogs;
}
executing command: SET until clause
Starting restore at 24-MAY-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
restoring datafile 00002 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
restoring datafile 00004 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\RMANBACKUP\BACKUPORCL_DB_02KFRAN7_1_1_%S_%P
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\RMANBACKUP\BACKUPORCL_DB_02KFRAN7_1_1_%S_%P tag=TAG20090524T1523
channel ORA_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 24-MAY-09
Starting recover at 24-MAY-09
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 3 is already on disk as file C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_3_1_687712197.ARC
archive log filename=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_3_1_687712197.ARC thread=1 sequence=3
media recovery complete, elapsed time: 00:00:04
Finished recover at 24-MAY-09
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
One of the application programmers truncated a critical table in the evening, at around 3:46 PM, and the team needed to recover it. They decided to perform time-based incomplete recovery to exactly 3:45 PM. Find the exact time for recovery and set the date and time format along with the time-based recovery scripts.
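A note on the date format: instead of wrapping the timestamp in to_date() as done in the script below, RMAN can also accept a plain date string if NLS_DATE_FORMAT is set in the shell before starting RMAN. A minimal sketch for the Windows environment used here (the variable setting is an assumption on our part, not taken from the original session):
C:\>set NLS_DATE_FORMAT=DD-MM-YYYY:HH24:MI:SS
C:\>rman catalog=rman/rman@catdb target=sys/password@orcl
RMAN> run { set until time '24-05-2009:15:45:42'; restore database; recover database; }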
SQL> Select count(*) from employee;
COUNT(*)
----------
14
SQL> select to_char(sysdate,'DD-MM-YYYY:HH24:MI:SS')
from dual;
TO_CHAR(SYSDATE,'DD
-------------------
24-05-2009:15:45:42
SQL> truncate table employee;
Table truncated.
SQL> select count(*) from employee;
COUNT(*)
----------
0
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.
SQL>
C:\>rman catalog=rman/rman@catdb target=sys/password@orcl
Recovery Manager: Release 10.2.0.1.0 - Production on Sun May 24 15:58:56 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ORCL (DBID=1215492928, not open)
connected to recovery catalog database
RMAN> run
{
set until time "to_date('24-05-2009:15:45:42','DD-MM-YYYY:HH24:MI:SS')";
restore database;
recover database;
}
executing command: SET until clause
Starting restore at 24-MAY-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
restoring datafile 00002 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
restoring datafile 00004 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\RMANBACKUP\BACKUPORCL_DB_09KFRBBR_1_1_%S_%P
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\RMANBACKUP\BACKUPORCL_DB_09KFRBBR_1_1_%S_%P tag=TAG20090524T1534
channel ORA_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 24-MAY-09
Starting recover at 24-MAY-09
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 2 is already on disk as file C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_2_1_687713476.ARC
archive log thread 1 sequence 3 is already on disk as file C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_3_1_687713476.ARC
archive log filename=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\ARCHIVE\LOG_2_1_687713476.ARC thread=1 sequence=2
media recovery complete, elapsed time: 00:00:05
Finished recover at 24-MAY-09
RMAN> exit
Recovery Manager complete.
C:\>sqlplus sys/password as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Sun May 24 16:01:31 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production With the Partitioning, OLAP and Data Mining options
SQL> alter database open resetlogs;
Database altered.
SQL> connect scott/tiger@orcl
Connected.
SQL> select count(*) from employee;
COUNT(*)
----------
14
Note: If you use the 'alter database open resetlogs' command from SQL*Plus, then you must use the 'reset database' command after connecting RMAN to the catalog before taking a fresh backup.
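A minimal sketch of that reset, reusing the connection string from this walkthrough (standard RMAN commands; the fresh backup at the end is the recommended follow-up rather than part of the original log):
C:\>rman catalog=rman/rman@catdb target=sys/password@orcl
RMAN> reset database;
RMAN> backup database plus archivelog;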