Tuesday, 18 March 2014

How to kill, cancel, restart, or stop Data Pump jobs on Windows

Data Pump jobs (expdp, impdp) can be stopped, killed, or resumed at the database level. Killing or stopping a Data Pump job at the OS level does not actually kill or stop it; we have to do it from the database, as shown below:

Exporting a Full database backup :-

C:\Users\NewAdmin>expdp system/orcl directory=datapump dumpfile=expdp_full_29062013.dmp logfile=expdp_full_29062013.log compression=all full=y &

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 19:46:41 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** directory=datapump dumpfile=expdp_full_29062013.dmp logfile=expdp_full_29062013.log compression=all full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

Stop a datapump job :-

Data Pump jobs can be stopped in two ways: either press CTRL+C in the running client to get the Export> prompt, or attach to an already running job from a new session. Either way, issue STOP_JOB at the prompt.

If the Data Pump job is running in the foreground, pressing CTRL+C drops you straight to the Export> prompt:

Export> STOP_JOB
Are you sure you wish to stop this job ([yes]/no): y

The Data Pump job is now stopped.

If the job is running in the background, you first have to attach to it. Find the job name with the query below:

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME                     JOB_NAME                       OPERATION                           JOB_MODE                  STATE
------------------------------ ------------------------------ ----------------------------------- ------------------------- -----------------------------
SYSTEM                         SYS_EXPORT_FULL_02             EXPORT                              FULL                      EXECUTING
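
Since killing the expdp client or its OS process does not terminate the job itself, it can also help to see which database sessions belong to a job. Below is a minimal sketch (assuming access to the DBA views) that joins DBA_DATAPUMP_SESSIONS to V$SESSION:

-- Map Data Pump jobs to their database sessions (sketch only)
SELECT s.sid, s.serial#, dj.owner_name, dj.job_name, dj.state
FROM dba_datapump_jobs dj
JOIN dba_datapump_sessions ds
  ON ds.owner_name = dj.owner_name AND ds.job_name = dj.job_name
JOIN v$session s
  ON s.saddr = ds.saddr;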

Now attach to a running job:-

C:\Users\NewAdmin>expdp system/orcl attach=SYS_EXPORT_FULL_02

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 20:01:45 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_02
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 42902560C22D46BDA6924724126F59FD
  Start Time: Saturday, 29 June, 2013 20:01:08
  Mode: FULL
  Instance: orcl
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** directory=datapump dumpfile=expdp_full_29062013_1.dmp logfile=expdp_full_29062013_1.log compression=all full=y
     COMPRESSION           ALL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\DATAPUMP\EXPDP_FULL_29062013_1.DMP
    bytes written: 4,096

Worker 1 Status:
  Process Name: DW00
  State: EXECUTING

Export> STOP_JOB
Are you sure you wish to stop this job ([yes]/no): y

You will get a message like this :-

Job "SYSTEM"."SYS_EXPORT_FULL_02" stopped by user request at 20:02:01

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME                     JOB_NAME                       OPERATION                           JOB_MODE                  STATE
------------------------------ ------------------------------ ----------------------------------- ------------------------- ------------------------------
SYSTEM                         SYS_EXPORT_FULL_02             EXPORT                              FULL                      NOT RUNNING
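
The stopped job shows STATE = NOT RUNNING rather than disappearing because its master table (a table named after the job, created in the job owner's schema) is retained; that table is what makes the restart below possible. A quick sketch to confirm it still exists (assumes DBA privileges):

-- The retained master table is what keeps a stopped job restartable
SELECT owner, object_name, object_type
FROM dba_objects
WHERE object_name = 'SYS_EXPORT_FULL_02';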

Restart a Stopped datapump Job :-

Attach to a stopped datapump job :-

C:\Users\NewAdmin>expdp system/orcl attach=SYS_EXPORT_FULL_02

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 20:08:17 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_02
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 42902560C22D46BDA6924724126F59FD
  Start Time: Saturday, 29 June, 2013 20:08:20
  Mode: FULL
  Instance: orcl
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** directory=datapump dumpfile=expdp_full_29062013_1.dmp logfile=expdp_full_29062013_1.log compression=all full=y
     COMPRESSION           ALL
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\datapump\expdp_full_29062013_1.dmp
    bytes written: 4,096

Worker 1 Status:
  Process Name: DW00
  State: UNDEFINED

Export> START_JOB

Export> CONTINUE_CLIENT
Job SYS_EXPORT_FULL_02 has been reopened at Saturday, 29 June, 2013 20:08
Restarting "SYSTEM"."SYS_EXPORT_FULL_02":  system/******** directory=datapump dumpfile=expdp_full_29062013_1.dmp logfile=expdp_full_29062013_1.log compression=all full=y
Processing object type DATABASE_EXPORT/TABLESPACE

The job has been restarted.

Kill a datapump job :-

Attach to a running datapump job :-

C:\Users\NewAdmin>expdp system/orcl attach=SYS_EXPORT_FULL_01

Export: Release 11.2.0.1.0 - Production on Sat Jun 29 20:12:55 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_01
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 452263D9B00047A9BCD6E97DE83878F4
  Start Time: Saturday, 29 June, 2013 20:12:57
  Mode: FULL
  Instance: orcl
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** directory=datapump dumpfile=expdp_full_29062013.dmp logfile=expdp_full_29062013.log compression=all full=y
     COMPRESSION           ALL
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\datapump\expdp_full_29062013.dmp
    bytes written: 5,570,560

Worker 1 Status:
  Process Name: DW00
  State: RUNNING

Export> KILL_JOB
Are you sure you wish to stop this job ([yes]/no): y

The difference between killing and stopping a Data Pump job is that once you kill a job you cannot restart it, whereas a stopped job can be restarted. Killing a Data Pump job also deletes its dump and log files.
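
The same operations can be driven from PL/SQL with the DBMS_DATAPUMP package instead of the interactive client. Below is a minimal sketch (assuming you connect as a suitably privileged user and that SYS_EXPORT_FULL_02 is the job you want to stop):

-- Stop (not kill) a running job via DBMS_DATAPUMP; sketch only
DECLARE
  h NUMBER;
BEGIN
  -- Attach to the existing job by name and owner
  h := DBMS_DATAPUMP.ATTACH(job_name  => 'SYS_EXPORT_FULL_02',
                            job_owner => 'SYSTEM');
  -- immediate => 1 stops the workers at once; keep_master => 1 retains
  -- the master table so the job can still be restarted later
  DBMS_DATAPUMP.STOP_JOB(handle => h, immediate => 1, keep_master => 1);
END;
/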

I hope this article helped you.

Kill, cancel and resume or restart datapump expdp and impdp jobs


The expdp and impdp utilities are command-line driven, but when you start them from the OS prompt you hardly notice it. When you want to kill, cancel, start or resume a job, you will end up in the Data Pump command prompt… now what?!
All commands shown here can be used with both expdp and impdp.

Identifying datapump jobs

Do a select from dba_datapump_jobs in sqlplus to get the job name:
> expdp system full=y

SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE
---------- -------------------- ---------- ---------- ------------
SYSTEM     SYS_EXPORT_FULL_01   EXPORT     FULL       EXECUTING
Or, if you use the JOB_NAME parameter when starting the job, you have already identified it with a name and don't need to look it up afterwards:
expdp system full=y JOB_NAME=EXP_FULL

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE
---------- -------------------- ---------- ---------- ------------
SYSTEM     EXP_FULL             EXPORT     FULL       EXECUTING

Killing or stopping a running datapump job

The difference between Kill and Stop is simple to explain. When you kill a job, you won't be able to resume or start it again, and its logs and dump files will be removed!
When exporting (or importing), press Ctrl-c to show the Data Pump prompt and type KILL_JOB or STOP_JOB[=IMMEDIATE]. You will be prompted to confirm if you are sure…
Adding ‘=IMMEDIATE‘ to STOP_JOB does not wait for the currently running task to finish; that task will have to be redone when the job is restarted.
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
[Ctrl-c]
Export> KILL_JOB
..or..
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([yes]/no): yes

Resuming a stopped job

Identify your job with SQL, or you already know it because you used ‘JOB_NAME=‘ ;)
SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE      
---------- -------------------- ---------- ---------- ------------
SYSTEM     EXP_FULL             EXPORT     FULL       NOT RUNNING
Now we can ATTACH to the job using it as a parameter to the expdp or impdp command, and a lot of gibberish is shown:
> expdp system ATTACH=EXP_FULL

Job: EXP_FULL
 Owner: SYSTEM
 Operation: EXPORT
 Creator Privs: TRUE
 GUID: A5441357B472DFEEE040007F0100692A
 Start Time: Thursday, 08 June, 2011 20:23:39
 Mode: FULL
 Instance: db1
 Max Parallelism: 1
 EXPORT Job Parameters:
 Parameter Name      Parameter Value:
 CLIENT_COMMAND        system/******** full=y JOB_NAME=EXP_FULL
 State: IDLING
 Bytes Processed: 0
 Current Parallelism: 1
 Job Error Count: 0
 Dump File: /u01/app/oracle/admin/db1/dpdump/expdat.dmp
 bytes written: 520,192

Worker 1 Status:
 Process Name: DW00
 State: UNDEFINED
(Re)start the job with START_JOB; use ‘=SKIP_CURRENT‘ if you want to skip the task that was in progress when the job stopped. To show progress again, type CONTINUE_CLIENT (the job will be restarted if idle).
Export> START_JOB[=SKIP_CURRENT]
Export> CONTINUE_CLIENT
Job EXP_FULL has been reopened at Thursday, 09 June, 2011 10:26
Restarting "SYSTEM"."EXP_FULL":  system/******** full=y JOB_NAME=EXP_FULL

Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Done…

9i Export/Import Parameters


USERID: username/password
BUFFER: size of data buffer
FILE: output file (EXPDAT.DMP)
COMPRESS: import into one extent (Y)
GRANTS: export grants (Y)
INDEXES: export indexes (Y)
ROWS: export data rows (Y)
CONSTRAINTS: export table constraints (Y)
CONSISTENT: cross-table consistency (N)
LOG: log file of screen output (None)
STATISTICS: analyze objects (ESTIMATE)
DIRECT: bypass the SQL command processing layer (N) (new in Oracle8)
FEEDBACK: show a progress meter (a dot) every X rows exported (0 – X value)
HELP: shows help listing
MLS, MLS_LABEL_FORMAT: used with secure Oracle; we won't cover these
FULL: export entire file (N)
OWNER: list of owner usernames
TABLES: list of table names
RECORDLENGTH: length of IO record
INCTYPE: incremental export type
RECORD: track incremental export (Y)
PARFILE: parameter file name
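
As an illustration, a classic 9i-style full export using a few of these parameters (a sketch; the password and file names are placeholders):

exp system/password FULL=Y FILE=expdat.dmp LOG=exp_full.log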

11g data pump parameters


Export Parameters :

Parameter               Description
----------------------  ------------------------------------------------------------
abort_step              Undocumented feature
access_method           Data access method; default is AUTOMATIC
attach                  Attach to existing job; no default
cluster                 Start workers across cluster; default is YES
compression             Compression to use; default is METADATA_ONLY
content                 Content to export; default is ALL
current_edition         Current edition; default is ORA$BASE
data_options            Export data layer options
directory               Default directory specification
dumpfile                Dump file names; format is (file1,...); default is expdat.dmp
encryption              Encryption type to be used; default varies
encryption_algorithm    Encryption algorithm to be used; default is AES128
encryption_mode         Encryption mode to be used; default varies
encryption_password     Encryption key to be used
estimate                Calculate size estimate; default is BLOCKS
estimate_only           Only estimate the length of the job; default is N
exclude                 Export exclude option; no default
filesize                Size of export dump files
flashback_time          Database time to be used for flashback export; no default
flashback_scn           System change number to be used for flashback export; no default
full                    Indicates a full-mode export
include                 Export include option; no default
ip_address              IP address for PL/SQL debugger
help                    Display descriptions of export parameters; default is N
job_name                Job name; no default
keep_master             Retain job (master) table upon completion
log_entry               logentry
logfile                 Log export messages to specified file
metrics                 Enable/disable object metrics reporting
mp_enable               Enable/disable multi-processing for current session
network_link            Network mode export
nologfile               No export log file created
package_load            Specify how to load PL/SQL objects
parallel                Degree of parallelism; default is 1
parallel_threshold      Degree of DML parallelism
parfile                 Name of file that contains parameter specifications
query                   Query used to select a subset of rows for a table
remap_data              Transform data in user tables
reuse_dumpfiles         Reuse existing dump files; default is N
sample                  Specify percentage of data to be sampled
schemas                 Schemas to export; format is (schema1,...,schemaN)
service_name            Service name that the job will charge against
silent                  Display information; default is NONE
status                  Interval between status updates
tables                  Tables to export; format is (table1, table2,...,tableN)
tablespaces             Tablespaces to transport/recover; format is (ts1,...,tsN)
trace                   Enable sql_trace and timed_statistics; default is 0
transport_full_check    TTS: perform test for objects in recovery set; default is N
transportable           Use transportable data movement; default is NEVER
transport_tablespaces   Transportable tablespace option; default is N
tts_closure_check       Enable/disable transportable containment check; default is Y
userid                  user/password to connect to Oracle; no default
version                 Job version; default is COMPATIBLE
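
To tie several of these together, below is a hypothetical schema-level export driven by a parameter file (the schema, directory, and file names are placeholder assumptions). Contents of exp_hr.par:

SCHEMAS=HR
DIRECTORY=datapump
DUMPFILE=hr_%U.dmp
LOGFILE=exp_hr.log
PARALLEL=2
EXCLUDE=STATISTICS

Run it with:

expdp system/orcl parfile=exp_hr.par

The %U substitution variable generates numbered dump files, so each of the two parallel workers can write to its own file.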



Import Parameters :

Parameter               Description
----------------------  ------------------------------------------------------------
abort_step              Undocumented feature
access_method           Data access method; default is AUTOMATIC
attach                  Attach to existing job; no default
cluster                 Start workers across cluster; default is Y
content                 Content to import; default is ALL
data_options            Import data layer options
current_edition         Applications edition to be used on local database
directory               Default directory specification
dumper_directory        Directory for stream dumper
dumpfile                Import dump file names; format is (file1, file2,...)
encryption_password     Encryption key to be used
estimate                Calculate size estimate; default is BLOCKS
exclude                 Import exclude option; no default
flashback_scn           System change number to be used for flashback import; no default
flashback_time          Database time to be used for flashback import; no default
full                    Indicates a full-mode import
help                    Display descriptions of import parameters; default is N
include                 Import include option; no default
ip_address              IP address for PL/SQL debugger
job_name                Job name; no default
keep_master             Retain job (master) table upon completion
logfile                 Log import messages to specified file
master_only             Only import the master table associated with this job
metrics                 Enable/disable object metrics reporting
mp_enable               Enable/disable multi-processing for current session
network_link            Network mode import
nologfile               No import log file created
package_load            Specify how to load PL/SQL objects
parallel                Degree of parallelism; default is 1
parallel_threshold      Degree of DML parallelism
parfile                 Name of file that contains parameter specifications
partition_options       Determine how partitions should be handled; default is NONE
query                   Query used to select a subset of rows for a table
remap_data              Transform data in user tables
remap_datafile          Change the name of the source datafile
remap_schema            Remap source schema objects to a new schema
remap_table             Remap tables to a different name
remap_tablespace        Remap objects to a different tablespace
reuse_datafiles         Re-initialize existing datafiles (replaces DESTROY)
schemas                 Schemas to import; format is (schema1,...,schemaN)
service_name            Service name that the job will charge against
silent                  Display information; default is NONE
skip_unusable_indexes   Skip indexes that are in the unusable state
source_edition          Applications edition to be used on remote database
sqlfile                 Write appropriate SQL DDL to specified file
status                  Interval between status updates
streams_configuration   Import Streams configuration metadata
table_exists_action     Action taken if the table to import already exists
tables                  Tables to import; format is (table1, table2,...,tableN)
tablespaces             Tablespaces to transport; format is (ts1,...,tsN)
trace                   Enable sql_trace and timed_statistics; default is 0
transform               Metadata transforms
transportable           Use transportable data movement; default is NEVER
transport_datafiles     List of datafiles to be plugged into target system
transport_tablespaces   Transportable tablespace option; default is N
transport_full_check    Verify that tablespaces to be used do not have dependencies
tts_closure_check       Enable/disable transportable containment check; default is Y
userid                  user/password to connect to Oracle; no default
version                 Job version; default is COMPATIBLE
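
As a usage example, a hypothetical import that remaps the exported schema to a new one and replaces any tables that already exist (the schema, directory, and file names are placeholders):

impdp system/orcl directory=datapump dumpfile=hr_%U.dmp logfile=imp_hr.log remap_schema=HR:HR_TEST table_exists_action=REPLACE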


The following commands are valid while in interactive mode.


Command            Description
-----------------  ----------------------------------------------------------
ADD_FILE           Add dump file to dump file set.
CONTINUE_CLIENT    Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT        Quit client session and leave job running.
FILESIZE           Default file size (bytes) for subsequent ADD_FILE commands.
HELP               Summarize interactive commands.
KILL_JOB           Detach from and delete the job.
PARALLEL           Change the number of active workers for the current job.
START_JOB          Start/resume the current job.
STATUS             Frequency (secs) at which job status is monitored; the
                   default (0) shows new status when available.
                   STATUS[=interval]
STOP_JOB           Orderly shutdown of job execution; exits the client.
                   STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                   Data Pump job.
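
As a final illustration, a hypothetical interactive session that raises the degree of parallelism, requests a status update every 30 seconds, and then detaches while leaving the job running:

[Ctrl-c]
Export> PARALLEL=4
Export> STATUS=30
Export> EXIT_CLIENT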