How large is the database?
col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/
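The byte arithmetic this query performs can be sketched outside the database. A minimal Python illustration, with invented byte counts standing in for the rows of v$datafile, v$tempfile, v$log and dba_free_space:

```python
# Sketch of the size arithmetic used by the query above.
# Byte counts are invented for illustration; in the database they
# come from v$datafile, v$tempfile, v$log and dba_free_space.
GB = 1024 ** 3

datafile_bytes = [8 * GB, 4 * GB]   # v$datafile
tempfile_bytes = [2 * GB]           # v$tempfile
redo_bytes     = [1 * GB, 1 * GB]   # v$log
free_bytes     = 3 * GB             # sum(bytes) from dba_free_space

total = sum(datafile_bytes) + sum(tempfile_bytes) + sum(redo_bytes)

db_size    = f"{round(total / GB)} GB"                            # "Database Size"
used_space = f"{round(total / GB) - round(free_bytes / GB)} GB"   # "Used space"
free_space = f"{round(free_bytes / GB)} GB"                       # "Free space"

print(db_size, used_space, free_space)  # 16 GB 13 GB 3 GB
```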
Distribution of objects and data
Which schemas are taking up all of the space
set pages 999
col "size MB" format 999,999,999
col "Objects" format 999,999,999
select obj.owner "Owner"
, obj_cnt "Objects"
, decode(seg_size, NULL, 0, seg_size) "size MB"
from (select owner, count(*) obj_cnt from dba_objects group by owner) obj
, (select owner, ceil(sum(bytes)/1024/1024) seg_size
from dba_segments group by owner) seg
where obj.owner = seg.owner(+)
order by 3 desc ,2 desc, 1
/
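The outer join here keeps every owner from dba_objects even when it owns no segments, reporting 0 MB for them. A small Python sketch of the same merge-and-sort logic, with invented owners and sizes:

```python
# Sketch of the outer-join logic above: every owner from dba_objects
# appears in the result, with size 0 when it owns no segments.
# Owner names and numbers are invented for illustration.
obj_counts = {"SCOTT": 40, "HR": 25, "APP_RO": 5}   # dba_objects per owner
seg_sizes  = {"SCOTT": 1200, "HR": 300}             # dba_segments, MB per owner

report = [
    (owner, cnt, seg_sizes.get(owner, 0))   # decode(seg_size, NULL, 0, seg_size)
    for owner, cnt in obj_counts.items()
]
# order by 3 desc, 2 desc, 1
report.sort(key=lambda r: (-r[2], -r[1], r[0]))
print(report)
```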
Show the ten largest objects in the database
col owner format a15
col segment_name format a30
col segment_type format a15
col mb format 999,999,999
select owner
, segment_name
, segment_type
, mb
from (
select owner
, segment_name
, segment_type
, bytes / 1024 / 1024 "MB"
from dba_segments
order by bytes desc
)
where rownum < 11
/
Is java installed in the database?
This will return 9000'ish if it is...
select count(*)
from all_objects
where object_type like '%JAVA%'
and owner = 'SYS'
/
Display character set information
select * from nls_database_parameters
/
Show all used features
select name
, detected_usages
from dba_feature_usage_statistics
where detected_usages > 0
/
Tuesday, July 23, 2013
Database Size query
Tuesday, July 16, 2013
Adding Redo Log members to Standby database
Checking Redo log files
col member for a90
set linesize 200
select * from v$logfile order by 1;
Adding Redo Logs
alter system set standby_file_management=manual;
ALTER DATABASE ADD LOGFILE MEMBER '&arq' to group &x;
Check v$logfile to see whether any redo member shows status INVALID, then check which redo group is not CURRENT:
select * from v$log;
alter database clear logfile group &X;
If you want to drop a member
ALTER DATABASE DROP LOGFILE MEMBER '&member';
If the Broker configuration fails, disable and then re-enable the configuration.
Adding Standby Redo Logs
alter system set standby_file_management=manual;
To check whether the standby redo logs are in use:
select * from v$standby_log;
alter system set standby_file_management=manual;
alter database recover managed standby database cancel;
To add a member:
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '&arq' to group &x;
To Drop a member
alter database clear logfile group &X;
ALTER DATABASE DROP LOGFILE MEMBER '&arq';
Or to add standby redo log groups
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 101
('/oracle/db1/XXXX/DATA/standby101a_XXXX.log', '/oracle/db2/XXXXX/DATA/standby101b_XXXX.log') SIZE 50M REUSE;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 102
('/oracle/db1/XXXX/DATA/standby102a_XXXX.log', '/oracle/db2/XXXXX/DATA/standby102b_XXXX.log') SIZE 50M REUSE;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 103
('/oracle/db2/XXXX/DATA/standby103b_XXXX.log', '/oracle/db1/XXXX/DATA/standby103a_XXXX.log') SIZE 50M REUSE;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 104
('/oracle/db2/XXXX/DATA/standby104b_XXXX.log', '/oracle/db1/XXXX/DATA/standby104a_XXXX.log') SIZE 50M REUSE;
To drop a group
ALTER DATABASE DROP LOGFILE GROUP &x;
alter system set standby_file_management=auto;
alter database recover managed standby database through all switchover disconnect nodelay;
If the Broker configuration fails, disable and then re-enable the configuration.
To check whether the standby redo logs are in use:
select * from v$standby_log;
Wednesday, June 19, 2013
Recovering from a Failed DG Broker Switchover
Problem Description
Should you find yourself in a situation where a Data Guard Broker switchover to Standby has
failed and left your environment with 2 Physical Standby Databases, follow this simple
procedure to switch the failed switchover Standby Database back to Primary.
You may also see the following error from a DGMGRL "show configuration" command:
ORA-16816: incorrect database role
Solution
1. Logon (as sysdba) to the instance that was your Primary database instance before
the switchover.
2. Confirm the database role.
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PHYSICAL STANDBY
3. Shutdown the instance.
SQL> shutdown immediate;
4. Mount the database.
SQL> startup mount;
5. Cancel the MRP process. You will receive "ORA-16136: Managed Standby Recovery not active" if it is not running, but you can safely ignore it.
SQL> alter database recover managed standby database cancel;
6. Terminate the current switchover to Standby that never completed fully.
SQL> alter database recover managed standby database finish;
7. Now switchover to Primary.
SQL> alter database commit to switchover to primary with
session shutdown;
8. Open the database.
SQL> alter database open;
9. Confirm the database role.
SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PRIMARY
Additional Steps
When attempting to open the Primary Database you may suffer the following error:
SQL> alter database open
*
ERROR at line 1:
ORA-16649: possible failover to another database prevents this
database being opened
In this case, before you can open the database, you must disable Data Guard Broker as
follows:
SQL> alter system set dg_broker_start=false scope=both
sid=’*’;
System altered.
SQL> alter database open;
Database altered.
Now re-install Data Guard Broker.
Tuesday, June 11, 2013
Refresh materialized views
- using refresh groups:
views:
select * from dba_rgroup
select * from DBA_RCHILD
select * from DBA_REFRESH_CHILDREN
select * from DBA_REFRESH
Friday, June 7, 2013
Starting OMS - OEM
Stopping OMS
From the OMS_HOME directory, run the following to stop the OMS and WebTier services:
OMS_HOME/bin/emctl stop oms -all [-force]
Starting OMS
OMS_HOME/bin/emctl start oms
Checking status
OMS_HOME/bin/emctl status oms [-details]
Monday, April 15, 2013
Datapump tips
Datapump - some tips
Data Pump is a utility for unloading/loading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and loaded there.
In this post, let us see some tips and tricks that can be done with Datapump.
Tip #1 : Using PARALLEL parameter
The PARALLEL parameter improves export speed, but it is most effective when you also split the dump files across filesystems with the DUMPFILE parameter.
Create two or three directory objects on different filesystems and spread the files across them:
expdp / dumpfile=dir1:test_1.dmp,dir1:test_2.dmp,dir2:test_3.dmp,dir3:test_4.dmp logfile=dir1:test.log full=y parallel=4
where dir1, dir2 and dir3 are directory names created in the database.
Tip #2 : Using FILESIZE parameter
The FILESIZE parameter limits the size of each dump file. For example, to limit your dump files to 5 GB each, issue a command like:
expdp / directory=dir1 dumpfile=test1.dmp,test2.dmp,test3.dmp logfile=test.log filesize=5120m
or
expdp / directory=dir1 dumpfile=test_%U.dmp logfile=test.log filesize=5120m full=y
where %U assigns two-digit numbers automatically, from 01 to 99.
Note: with %U, dump file number 100 cannot be created, and the export fails with a "dump file exhausted" error.
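The %U limit and FILESIZE interact: at most 99 files can be generated, so a dump set is capped at 99 × FILESIZE. A sketch of the arithmetic in Python, assuming filesize=5120m and an invented 18 GB export:

```python
import math

# Sketch of the %U naming scheme and the 99-file ceiling.
# The export size is invented for illustration.
filesize_mb = 5120                      # filesize=5120m (5 GB per file)
export_mb   = 18 * 1024                 # assume an 18 GB export

files_needed = math.ceil(export_mb / filesize_mb)
names = [f"test_{n:02d}.dmp" for n in range(1, files_needed + 1)]
print(names)  # ['test_01.dmp', 'test_02.dmp', 'test_03.dmp', 'test_04.dmp']

# %U stops at 99, so the dump set can never exceed 99 * FILESIZE:
max_set_mb = 99 * filesize_mb
assert files_needed <= 99, "would fail with 'dump file exhausted'"
```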
Tip #3 : Usage of VERSION parameter
Use the VERSION parameter on export when the dump file must be imported into a database of a lower version than the source.
For example, if your source DB is 11g and the target DB is 10g, a dump file taken with the 11g expdp defaults cannot be imported into the 10g DB.
This throws the below error.
ORA-39142: incompatible version number 3.1 in dump file "/u02/dpump/test.dmp"
To overcome this we can use the VERSION parameter.
VERSION={COMPATIBLE | LATEST | version_string}
Eg.: expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log VERSION=10.2.0
Tip #4 : PARALLEL with single DUMPFILE
When you use the PARALLEL parameter with only one dump file to unload data from the DB, you may get the error below.
expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log parallel=4
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Job "USER"."TABLE_UNLOAD" stopped due to fatal error at 00:37:29
A simple workaround is to remove the PARALLEL parameter or to add more dump files. This will overcome the error.
expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log
or
expdp / directory=dir1 dumpfile=test_1.dmp,test_2.dmp,test_3.dmp,test_4.dmp logfile=test.log parallel=4
or
expdp / directory=dir1 dumpfile=test_%U.dmp logfile=test.log parallel=4
Tip #5 : Drop dba_datapump_jobs rows
Sometimes an export stops before it completes, perhaps because it hit a resumable wait or because you stopped the job yourself, and the dump file is then removed from the directory. When you later try to restart the stopped Data Pump job, you cannot attach to it.
You will get an error like this.
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/oracle/product/10.2.0/db_2/rdbms/log/test.dmp" for read
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
But the job still has a row in the dba_datapump_jobs view:
SQL> select * from dba_datapump_jobs;
OWNER JOB_NAME OPERATI JOB_M STATE DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
----- ------------------------------ ------- ----- ------------------------------ ---------- ----------------- -----------------
SYS SYS_EXPORT_FULL_01 EXPORT FULL NOT RUNNING 0 ########## 0
You cannot clear the row from dba_datapump_jobs, because you cannot attach to the export job with the expdp client to kill it.
In this case you can remove the row by dropping the master table created by the Data Pump export.
SQL> drop table SYS_EXPORT_FULL_01 purge;
Table dropped.
SQL> select * from dba_datapump_jobs;
no rows selected
Now you can see the row is deleted from the dba_datapump_jobs view.
Tip #6 : FLASHBACK_SCN and FLASHBACK_TIME
Avoid FLASHBACK_SCN and FLASHBACK_TIME where possible, as these parameters slow down export performance.
Tip #7 : Effective EXCLUDE
A full database import can be split into two passes: tables first, indexes next. Use the EXCLUDE parameter to speed up the first pass:
EXCLUDE = INDEX,STATISTICS
This skips the indexes and statistics so that only the tables are imported, improving performance.
Tip #8 : INDEXFILE=<filename> usage
After the import of tables has completed, you can create the indexes and gather statistics separately. Use the INDEXFILE=<filename> parameter to write out all the index creation statements involved in the import operation.
Example of effective import
impdp / directory=dir1,dir2,dir3 dumpfile=test_%U.dmp logfile=test.log EXCLUDE=INDEX,STATISTICS Full=Y INDEXFILE=index_ddl.sql
Tip #9 : Contents of Dump file
If you are not sure which schemas or tablespaces are present inside a dump file, you can check it for that information with the commands below:
grep -a "CREATE USER" test_1.dmp
grep -a "CREATE TABLESPACE" test_1.dmp
On some operating systems -a is not a recognised flag, in which case the command may work without it. Bear in mind that the dump file is a binary file.
The above command gives all the CREATE USER statements and CREATE TABLESPACE statements which will be useful in many cases. You can also get the INDEXES and TABLES creation ddl from the dumpfile as well.
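Where grep -a is not available, the same scan can be approximated in a few lines of Python: read the dump file as raw bytes and keep the printable runs that contain the statement of interest. The sample bytes below are invented; a real dump would be read with open("test_1.dmp", "rb"):

```python
# Sketch: scan a binary dump file for embedded DDL text, like grep -a.
# The sample bytes are invented stand-ins for real dump file contents.
sample = b"\x00\x01CREATE USER scott IDENTIFIED BY ...\x00\xffCREATE TABLESPACE users ...\x00"

def find_ddl(raw: bytes, keyword: bytes) -> list[str]:
    hits = []
    for chunk in raw.split(b"\x00"):
        if keyword in chunk:
            # keep only printable ASCII so binary noise is dropped
            hits.append(bytes(c for c in chunk if 32 <= c < 127).decode())
    return hits

print(find_ddl(sample, b"CREATE USER"))        # user DDL fragments
print(find_ddl(sample, b"CREATE TABLESPACE"))  # tablespace DDL fragments
```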
Tip #10 : init.ora parameter cursor_sharing
Set the init.ora parameter cursor_sharing to EXACT, which has a good effect on import performance.
Tip #11 : STATUS parameter usage
You can monitor an ongoing Data Pump export/import with the STATUS parameter and track the progress yourself: attach to the export/import session and check its status.
For example:
[oracle@ini8115l3aa2ba-136018207027 ~]$ expdp attach=SYS_EXPORT_FULL_01
Export: Release 11.2.0.1.0 - Production on Mon May 21 10:56:28 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_FULL_01
Owner: SYS
Operation: EXPORT
Creator Privs: TRUE
GUID: C08622D4FB5571E4E04012881BCF4C92
Start Time: Monday, 21 May, 2012 10:55:55
Mode: FULL
Instance: newdb
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND sys/******** AS SYSDBA directory=dmpdir full=y
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u02/dpump/expdat.dmp
bytes written: 4,096
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Completed Objects: 400
Worker Parallelism: 1
Export> status
Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: COMPLETING
Bytes Processed: 37,121
Percent Done: 100
Current Parallelism: 1
Job Error Count: 0
Dump File: /u02/dpump/expdat.dmp
bytes written: 561,152
Worker 1 Status:
Process Name: DW00
State: WORK WAITING
The "bytes written" figure advances as the job runs, so you can track the export/import job easily.
Note: when used, the ATTACH parameter cannot be combined with any parameter other than USERID.
$ expdp ATTACH=JOB_NAME
Standby database on Oracle 8i
Adding Tablespaces or Datafiles to the Primary Database
To add a tablespace or datafile to the primary database and create the datafile in the standby database:
- Create a tablespace on the primary database as usual. For example, to create the new datafile
t_db2.f in tablespace tbs_2, issue:
SQL> CREATE TABLESPACE tbs_2 DATAFILE 't_db2.f' SIZE 2M;
If the recovery process on the standby database tries to apply the redo containing the CREATE TABLESPACE statement, it stops because the new datafile does not exist on the standby site.
- Either wait for the standby database to cancel recovery because it cannot find the new datafile, or manually cancel managed recovery:
- Note that CREATE TABLESPACE redo adds the new filename to the standby control file. The following alert.log entry is generated:
... might be an online backup taken without entering the begin backup command.
Tuesday, April 9, 2013
Wrapping (obfuscating) PL/SQL code
Wrapping Oracle PL/SQL code with the wrap utility:
The wrap utility processes an input SQL file and obfuscates only the PL/SQL units in the file,
such as a package specification, package body, function, procedure, type specification, or type body.
It does not obfuscate PL/SQL content in anonymous blocks, triggers, or non-PL/SQL code.
To run the wrap utility, enter the wrap command at your operating system prompt using the following syntax:
wrap iname=input_file [oname=output_file]
Do not use any spaces around the equal signs.
Example:
1. Write a procedure abc in a file abc.sql (do not compile it in the database) and save it in C:\
create procedure abc is
begin
dbms_output.put_line('Hello World');
end abc;
2. First generate the wrapped code for this sql file using the following command.
wrap iname=C:\abc.sql
You will get the following message:
PL/SQL Wrapper: Release 9.0.1.3.1- Production on Sat Jun 04 12:59:59 2011
Copyright (c) Oracle Corporation 1993, 2001. All Rights Reserved.
Processing C:\abc.sql to abc.plb
3. The wrapped code file abc.plb is created in the same directory (C:\). A full-path invocation looks like this:
C:\oracle11\product\11.2.0\dbhome_1\BIN\wrap.exe iname=C:\temp\XX\xx_proc_new.sql
Restarting a failed Data Pump job
The solution is simple enough: use the Data Pump restart capability.
1) In the IMPDP window, press CTRL-C to stop the job.
2) In the command line type:
Import> stop_job=immediate
3) Use SQL*Plus to make the required changes to the tablespace. You can:
a) Add a datafile:
ALTER TABLESPACE <tablespace name> ADD DATAFILE '<path to file name and file name>' SIZE <integer>M;
b) Resize the datafile:
ALTER DATABASE DATAFILE '<path to file name and file name>' RESIZE <integer>M;
c) Enable autoextend:
ALTER DATABASE DATAFILE '<path to file name and file name>' AUTOEXTEND ON MAXSIZE UNLIMITED;
4) Attach the Job.
>impdp system/manager attach=Job_Name
If you did not provide a specific job name when you first ran IMPDP, Oracle assigned a default name for it. To find the system-assigned name, run the following query:
SELECT * FROM DBA_DATAPUMP_JOBS;
The result will probably be something like SYS_IMPORT_FULL_number.
5) Restart the job
Import> start_job
6) You can check the status of the job by simply typing STATUS in the utility command prompt.
Import> status
Monday, April 8, 2013
Cross checking archive logs
In Oracle 10g you can run the following commands to crosscheck the archive logs and delete any that are expired:
Example for Oracle 10g use:
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
However, in Oracle 8, 8i, and 9i the syntax differs:
Example for Oracle 8 use:
RMAN> Change Archivelog All Validate;
Example for Oracle 8i use:
RMAN> change Archivelog All Crosscheck;
Example for Oracle 9i use:
RMAN> crosscheck archivelog all;
Deleting archive logs already backed up
DELETE ARCHIVELOG ALL BACKED UP 2 TIMES TO DEVICE TYPE sbt;
Friday, April 5, 2013
Connection script
To start this blog I am sharing a script I like to use to verify which database I am connected to.
Most of the time you are connected to more than one database. To be sure you are testing your scripts in a non-production environment, you need to query v$instance or something similar.
I keep this script in $ORACLE_HOME/dbs.
Once the script is there, it is visible to SQL*Plus.
wdb.sql (which database)
set serveroutput on
set linesize 500
alter session set nls_date_format = 'dd/mm/yyyy hh24:mi:ss'
/
set head off
Select '************************************'||chr(10)||
' PLATFORM : '||platform_name||chr(10)||
' HOST : '||host_name||chr(10)||
' SID : '||instance_name||chr(10)||
' VERSION : '||version||chr(10)||
' STARTED : '||startup_time||'('||status||')'||chr(10)||
' CREATED : '||created||chr(10)||
' ROLE : '||DATABASE_ROLE||chr(10)||
' ARC MODE : '||LOG_MODE||chr(10)||
' USER : '||sys_context('USERENV','SESSION_USER')||chr(10)||
'************************************'
from v$instance, v$database
/
set head on
set serveroutput off
This is the output:
Session altered.
************************************
PLATFORM : Linux x86 64-bit
HOST : duck.com
SID : OABDE
VERSION : 11.2.0.3.0
STARTED : 04/10/2012 01:17:11(OPEN)
CREATED : 05/04/2012 13:04:44
ROLE : PRIMARY
ARC MODE : NOARCHIVELOG
USER : SYSTEM
************************************
You can see a lot of information with a single command.
Thank you.