Monday, April 15, 2013

Datapump tips


Data Pump is a utility for unloading and loading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility, either on the same system or after being moved to another system.
In this post, let us see some tips and tricks that can be done with Data Pump.

Tip #1 : Using PARALLEL parameter
The PARALLEL parameter is used to improve the speed of the export, but it is most effective when you also split the dump files across filesystems with the DUMPFILE parameter.
Create 2 or 3 directories on different filesystems and use the commands effectively, for example:
expdp / dumpfile=dir1:test_1.dmp,dir1:test_2.dmp,dir2:test_3.dmp,dir3:test_4.dmp logfile=dir1:test.log full=y parallel=4
where dir1, dir2 and dir3 are directory names created in the database.
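For reference, a directory object maps a name inside the database to a filesystem path; a minimal sketch of creating them (the paths and the grantee here are placeholders for your own):
SQL> CREATE DIRECTORY dir1 AS '/u01/dpump';
SQL> CREATE DIRECTORY dir2 AS '/u02/dpump';
SQL> CREATE DIRECTORY dir3 AS '/u03/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dir1 TO scott;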

Tip #2 : Using FILESIZE parameter
The FILESIZE parameter is used to limit the size of each dumpfile. For example, if you want to limit your dumpfiles to 5 GB, you can issue a command as below:
expdp / directory=dir1 dumpfile=test1.dmp,test2.dmp,test3.dmp logfile=test.log filesize=5120m
or
expdp / directory=dir1 dumpfile=test_%U.dmp logfile=test.log filesize=5120m full=y
where %U assigns two-digit numbers automatically from 01 to 99.
Note: If you use %U, dumpfile number 100 can't be created; once all 99 files are full, the export fails with ORA-39095 (dump file space exhausted).
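If you expect to need more than 99 files, one workaround is to supply more than one %U template; each template expands independently, so two templates allow up to 198 files (a sketch):
expdp / directory=dir1 dumpfile=testa_%U.dmp,testb_%U.dmp logfile=test.log filesize=5120m full=y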

Tip #3 : Usage of VERSION parameter
The VERSION parameter is used during export when the dumpfile must be imported into a DB of a lower version than the source DB.
For example, if your source DB is 11g and your target DB is 10g, you can't use a dumpfile taken with the 11g expdp utility to import into the 10g DB.
The import throws the below error.
ORA-39142: incompatible version number 3.1 in dump file "/u02/dpump/test.dmp"
To overcome this we can use the VERSION parameter.
VERSION={COMPATIBLE | LATEST | version_string}
Example: expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log VERSION=10.2.0

Tip #4 : PARALLEL with single DUMPFILE
When you use the PARALLEL parameter with only one dumpfile to unload data from the DB, you may get the below error.
expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log parallel=4
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Job "USER"."TABLE_UNLOAD" stopped due to fatal error at 00:37:29
A simple workaround is to remove the PARALLEL parameter or to add more dumpfiles. This will overcome the error.
expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log
or
expdp / directory=dir1 dumpfile=test_1.dmp,test_2.dmp,test_3.dmp,test_4.dmp logfile=test.log parallel=4
or
expdp / directory=dir1 dumpfile=test_%U.dmp logfile=test.log parallel=4

Tip #5 : Drop dba_datapump_jobs rows
Sometimes an export job stops before it completes: it hits a resumable wait, or you stop the job in between. If the dump file has then been removed from the directory location, you cannot attach to the stopped job when you try to restart it.
You will get an error like this.
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/oracle/product/10.2.0/db_2/rdbms/log/test.dmp" for read
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
But you will still see the job's row in the view dba_datapump_jobs:
SQL> select * from dba_datapump_jobs;
OWNER JOB_NAME                       OPERATI JOB_M STATE                              DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
----- ------------------------------ ------- ----- ------------------------------ ---------- ----------------- -----------------
SYS   SYS_EXPORT_FULL_01             EXPORT  FULL  NOT RUNNING                             0        ##########                 0
You cannot remove the row from dba_datapump_jobs because you cannot attach to the export job with the expdp client to kill it.
In this case you can remove the row by dropping the master table created by the Data Pump export.
SQL> drop table SYS_EXPORT_FULL_01 purge;
Table dropped.
SQL> select * from dba_datapump_jobs;
no rows selected
Now you can see the row is deleted from the dba_datapump_jobs view.
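For the general case (for example, when the job is owned by another user or there are several orphaned jobs), you can first confirm which master table belongs to which job by joining dba_objects with dba_datapump_jobs (a sketch using the same views as above):
SQL> SELECT o.owner, o.object_name, o.object_type
       FROM dba_objects o, dba_datapump_jobs j
      WHERE o.owner = j.owner_name
        AND o.object_name = j.job_name;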

Tip #6 : FLASHBACK_SCN and FLASHBACK_TIME
Do not use FLASHBACK_SCN and FLASHBACK_TIME unless you really need them, as these parameters slow down the performance of the export.
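For reference, these parameters are only worth the cost when you need an export consistent as of a point in time. A typical usage on 11g looks like the sketch below (FLASHBACK_TIME=SYSTIMESTAMP is accepted there; on 10g you would pass a quoted TO_TIMESTAMP string instead):
expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log flashback_time=systimestamp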

Tip #7 : Effective EXCLUDE
A full database import should be split in two passes: tables first and indexes next. Use the EXCLUDE parameter effectively to improve the speed of the import.
EXCLUDE=INDEX,STATISTICS
This skips the indexes and statistics so that only the tables are imported, which improves performance.

Tip #8 : SQLFILE=<filename> usage
After the import of the tables has completed, you can create the indexes and collect statistics on the tables. To get the index creation DDL, use the SQLFILE=<filename> parameter together with INCLUDE=INDEX in a separate impdp run; SQLFILE is Data Pump's counterpart of the original imp INDEXFILE parameter, and such a run writes DDL only, without importing anything.
Example of an effective import, tables first:
impdp / dumpfile=dir1:test_1.dmp,dir1:test_2.dmp,dir2:test_3.dmp,dir3:test_4.dmp logfile=dir1:test.log EXCLUDE=INDEX,STATISTICS full=y
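Then a second pass pulls the index DDL into index_ddl.sql without loading any data (a sketch using the same dump files):
impdp / dumpfile=dir1:test_1.dmp,dir1:test_2.dmp,dir2:test_3.dmp,dir3:test_4.dmp logfile=dir1:index_ddl.log INCLUDE=INDEX SQLFILE=dir1:index_ddl.sql full=y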

Tip #9 : Contents of Dump file
If you are not sure which schemas or tablespaces are present inside the dumpfile, you can easily check the dumpfile for that information using the commands below:
grep -a "CREATE USER" test_1.dmp
grep -a "CREATE TABLESPACE" test_1.dmp
The -a flag forces grep to treat the binary dumpfile as text; on operating systems where -a is not a recognised flag, the command may work without it. Mind, the dumpfile created is a binary file.
The above commands give all the CREATE USER and CREATE TABLESPACE statements, which is useful in many cases. You can get the INDEX and TABLE creation DDL from the dumpfile in the same way.
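If grep refuses to scan the binary file at all, piping the output of the strings utility (where available) into grep is an alternative:
strings test_1.dmp | grep "CREATE USER"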

Tip #10 : init.ora parameter cursor_sharing
Always set the init.ora parameter CURSOR_SHARING to EXACT, which has a good effect on the import's performance.
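A quick sketch of setting it dynamically (assuming an spfile is in use):
SQL> ALTER SYSTEM SET cursor_sharing = 'EXACT' SCOPE=BOTH;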

Tip #11 : STATUS parameter usage
You can check an ongoing Data Pump export/import operation with the STATUS parameter and track the progress yourself. You can attach to an export/import session and check the status.
For example:
[oracle@ini8115l3aa2ba-136018207027 ~]$ expdp attach=SYS_EXPORT_FULL_01
Export: Release 11.2.0.1.0 - Production on Mon May 21 10:56:28 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_FULL_01
  Owner: SYS
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: C08622D4FB5571E4E04012881BCF4C92
  Start Time: Monday, 21 May, 2012 10:55:55
  Mode: FULL
  Instance: newdb
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        sys/******** AS SYSDBA directory=dmpdir full=y
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u02/dpump/expdat.dmp
    bytes written: 4,096
Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
  Completed Objects: 400
  Worker Parallelism: 1
Export> status
Job: SYS_EXPORT_FULL_01
  Operation: EXPORT
  Mode: FULL
  State: COMPLETING
  Bytes Processed: 37,121
  Percent Done: 100
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u02/dpump/expdat.dmp
    bytes written: 561,152
Worker 1 Status:
  Process Name: DW00
  State: WORK WAITING

Here you can see the bytes written increasing, so you can track the export/import job easily.
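Another way to track a running job from SQL*Plus is v$session_longops; for Data Pump the OPNAME matches the job name, so a query like this (a sketch, assuming the default SYS_EXPORT job name prefix) shows the progress:
SQL> SELECT opname, sofar, totalwork,
            ROUND(sofar/totalwork*100, 2) pct_done
       FROM v$session_longops
      WHERE opname LIKE 'SYS_EXPORT%'
        AND totalwork != 0;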

Note: When the ATTACH parameter is used, it cannot be combined with any parameter other than the USERID parameter.

$ expdp ATTACH=<job_name>

Standby database on Oracle 8i

Adding Tablespaces or Datafiles to the Primary Database
To add a tablespace or datafile to the primary database and create the datafile in the standby database:
  1. Create a tablespace on the primary database as usual. For example, to create the new datafile t_db2.f in tablespace tbs_2, issue:
SQL> CREATE TABLESPACE tbs_2 DATAFILE 't_db2.f' SIZE 2M; 

  2. Switch redo logs on the primary database to initiate redo archival to the standby database:
SQL> ALTER SYSTEM SWITCH LOGFILE;

If the recovery process on the standby database tries to apply the redo containing the CREATE TABLESPACE statement, it stops because the new datafile does not exist on the standby site.
  3. Either wait for the standby database to cancel recovery because it cannot find the new datafile, or manually cancel managed recovery:
SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;


  4. Note that CREATE TABLESPACE redo adds the new filename to the standby control file. The following alert.log entry is generated:
WARNING! Recovering datafile 2 from a fuzzy file. If not the current file it
might be an online backup taken without entering the begin backup command.
Successfully added datafile 2 to media recovery
Datafile #2: '/private1/stby/t_db2.f'

  5. Create the datafile on the standby database. For example, issue:
SQL> ALTER DATABASE CREATE DATAFILE '/private1/stby/t_db2.f'
                                 AS '/private1/stby/t_db2.f';

  6. Place the standby database in managed recovery mode:
SQL> RECOVER MANAGED STANDBY DATABASE;

Continue normal processing on the primary database. The primary and standby databases are now synchronized.
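As a quick sanity check on the standby, you can confirm that the new datafile is now known to its control file (a simple query, assuming the path used above):
SQL> SELECT name FROM v$datafile WHERE name LIKE '%t_db2.f';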

Tuesday, April 9, 2013

Wrapping (obfuscating) PL/SQL code

Wrapping Oracle PL/SQL code with the wrap utility:

The wrap utility processes an input SQL file and obfuscates only the PL/SQL units in the file,
such as a package specification, package body, function, procedure, type specification, or type body.

It does not obfuscate PL/SQL content in anonymous blocks, triggers, or non-PL/SQL code.
To run the wrap utility, enter the wrap command at your operating system prompt using the following syntax:
         wrap iname=input_file [oname=output_file]
Do not use any spaces around the equal signs.
Example:

1.      Write a procedure abc in a file abc.sql (do not compile it in the database) and save it in C:\
create procedure abc is
begin
  dbms_output.put_line('Hello World');
end abc;
/
2.      Generate the wrapped code for this SQL file using the following command:
     wrap iname=C:\abc.sql
You will get the following message:

PL/SQL Wrapper: Release 9.0.1.3.1 - Production on Sat Jun 04 12:59:59 2011
Copyright (c) Oracle Corporation 1993, 2001.  All Rights Reserved.
Processing C:\abc.sql to abc.plb


3.      The wrapped file abc.plb is created in the same directory (C:\).

On Windows you can also call the wrap executable by its full path, for example:

C:\oracle11\product\11.2.0\dbhome_1\BIN\wrap.exe iname=C:\temp\XX\xx_proc_new.sql
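4.      Run the generated .plb file in SQL*Plus like any other script to compile the (now obfuscated) procedure in the database:
     SQL> @C:\abc.plb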

Restarting a failed Data Pump job


When an import job stops, for example because a tablespace has run out of space, the solution is simple enough. All you need to do is use the Data Pump restart capability:

1)    In the IMPDP window, press CTRL-C to switch to the interactive command prompt.


2)    In the command line type:
Import> stop_job=immediate


3)    Use SQL*Plus to make the required changes to the tablespace. You can:
a)    Add a datafile:
ALTER TABLESPACE <tablespace name> ADD DATAFILE '<path to file name and file name>' SIZE <integer>M;

b)    Resize the datafile:
ALTER DATABASE DATAFILE '<path to file name and file name>' RESIZE <integer>M;

c)    Enable autoextend:
ALTER DATABASE DATAFILE '<path to file name and file name>' AUTOEXTEND ON MAXSIZE UNLIMITED;

4)    Attach to the job:
$ impdp system/manager attach=<job_name>
If you did not provide a specific job name when you first ran IMPDP, Oracle assigned a default name to it. To find the system-assigned name for the IMPDP job, run the following query:

SELECT * FROM DBA_DATAPUMP_JOBS;
The result will probably be something like SYS_IMPORT_FULL_<number>.

5)     Restart the job
Import> start_job

6)    You can check the status of the job by simply typing STATUS in the utility command prompt.
Import> status

Monday, April 8, 2013

Cross checking archive logs


In Oracle 10g you can run the following commands to crosscheck the archive logs and delete any expired ones:

Example for Oracle 10g use:

RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;

However, in Oracle 8, 8i and 9i you need to do the following.

Example for Oracle 8 use:
RMAN> Change Archivelog All Validate;

Example for Oracle 8i use:
RMAN> change Archivelog All Crosscheck;

Example for Oracle 9i use:
RMAN> crosscheck archivelog all;


Deleting archive logs already backed up

DELETE ARCHIVELOG ALL BACKED UP 2 TIMES TO DEVICE TYPE sbt;

Friday, April 5, 2013

Connection script

To start this blog I am sharing a script I like to use to verify which database I am connected to.

Most of the time you are connected to more than one database. To be sure you are testing your scripts in a non-production environment you need to query V$INSTANCE or something similar.

I keep this script in $OH/dbs.

Once the script is there, it is visible to SQL*Plus whenever your working directory or SQLPATH includes that location.
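If SQL*Plus does not pick the script up from your current directory, one common approach is to add the location to the SQLPATH environment variable (a bash example; adjust for your shell):
export SQLPATH=$ORACLE_HOME/dbs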

wdb.sql (which database)

set serveroutput on
set linesize 500
alter session set nls_date_format = 'dd/mm/yyyy hh24:mi:ss'
/
set head off
Select '************************************'||chr(10)||
       ' PLATFORM : '||platform_name||chr(10)||
       ' HOST     : '||host_name||chr(10)||
       ' SID      : '||instance_name||chr(10)||
       ' VERSION  : '||version||chr(10)||
       ' STARTED  : '||startup_time||'('||status||')'||chr(10)||
       ' CREATED  : '||created||chr(10)||
       ' ROLE     : '||DATABASE_ROLE||chr(10)||
       ' ARC MODE : '||LOG_MODE||chr(10)||
       ' USER     : '||sys_context('USERENV','SESSION_USER')||chr(10)||
       '************************************'
from v$instance, v$database
/
set head on
set serveroutput off

This is the output: 


connect system@oab_dev
Enter password:
Connected.
system@oabde> @wdb
Session altered.

************************************
 PLATFORM : Linux x86 64-bit
 HOST     : duck.com
 SID      : OABDE
 VERSION  : 11.2.0.3.0
 STARTED  : 04/10/2012 01:17:11(OPEN)
 CREATED  : 05/04/2012 13:04:44
 ROLE     : PRIMARY
 ARC MODE : NOARCHIVELOG
 USER     : SYSTEM
************************************


You can see a lot of information in a single command.

Thank you.