cleaning up orphaned data pump jobs in DBA_DATAPUMP_JOBS

today, while restructuring a database (moving tables and indexes from one tablespace to another), i noticed two odd objects: sys_export_full_01 and sys_export_full_02. a search suggested they might be leftovers from an expdp export that terminated abnormally, but the available information was sparse, so i could not confirm their exact purpose or whether the tables could be safely dropped.

as shown below, sys_export_full_01 and sys_export_full_02 are full database export jobs whose state is not running. that state means the job is temporarily stopped, but a job that failed also shows not running.


these are jobs that were stopped at some point in the past (possibly long ago) and will never be restarted, so their master tables can safely be dropped:

drop table admin.sys_export_full_01;

drop table admin.sys_export_full_02;

the jobs used in this example:

- export job scott.expdp_20051121 is a schema level export that is running

- export job scott.sys_export_table_01 is an orphaned table level export job

- export job scott.sys_export_table_02 is a table level export job that was stopped

- export job system.sys_export_full_01 is a full database export job that is temporarily stopped

step 1. determine in sql*plus which data pump jobs exist in the database:

%sqlplus /nolog

connect / as sysdba

set lines 200

col owner_name format a10;

col job_name format a20

col state format a12

col operation like state

col job_mode like state

col "owner.object" for a50

-- locate data pump jobs:

select owner_name, job_name, rtrim(operation) "operation",

       rtrim(job_mode) "job_mode", state, attached_sessions

  from dba_datapump_jobs

where job_name not like 'bin$%'

order by 1,2;

owner_name job_name            operation job_mode  state       attached

---------- ------------------- --------- --------- ----------- --------

scott      expdp_20051121      export    schema    executing          1

scott      sys_export_table_01 export    table     not running        0

scott      sys_export_table_02 export    table     not running        0

system     sys_export_full_01  export    full      not running        0

step 2. ensure that the jobs listed in dba_datapump_jobs are not active export/import data pump jobs: their state should be 'not running'.

step 3. check with the job owner that a job with state 'not running' in dba_datapump_jobs is a job that actually failed, and not an export/import data pump job that was merely stopped temporarily. (e.g. the full database export job owned by system is not a failed job; it was deliberately paused with stop_job.)
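one way to double-check that nothing is still attached to a 'not running' job is to look at dba_datapump_sessions, which lists the sessions attached to each data pump job. a sketch (column names per the standard dictionary view; an orphaned job should return no rows here):

```sql
-- sessions currently attached to data pump jobs;
-- a truly orphaned job will not appear in this output
select j.owner_name, j.job_name, s.saddr
  from dba_datapump_jobs j, dba_datapump_sessions s
 where j.owner_name = s.owner_name
   and j.job_name = s.job_name
 order by 1, 2;
```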

step 4. determine in sql*plus the related master tables:

-- locate data pump master tables:

select o.status, o.object_id, o.object_type,

       o.owner||'.'||object_name "owner.object"

  from dba_objects o, dba_datapump_jobs j

where o.owner=j.owner_name and o.object_name=j.job_name

   and j.job_name not like 'bin$%' order by 4,2;

status   object_id object_type  owner.object

------- ---------- ------------ -------------------------

valid        85283 table        scott.expdp_20051121

valid        85215 table        scott.sys_export_table_02

valid        85162 table        system.sys_export_full_01

step 5. for jobs that were stopped in the past and won't be restarted anymore, delete the master table. e.g.:

drop table scott.sys_export_table_02;

-- for systems with recycle bin additionally run:

purge dba_recyclebin;

note:

in case the table name is mixed case, you can get errors on the drop, e.g.:

sql> drop table system.impdp_schema_stgmdm_10202014_0;

drop table system.impdp_schema_stgmdm_10202014_0

                *

error at line 1:

ora-00942: table or view does not exist

because the table has a mixed case, try using these statements with double quotes around the table name, for instance:

drop table system."impdp_schema_stgmdm_04102015_1";

drop table system."impdp_schema_stgmdm_10202014_0";
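when several orphaned master tables have to be removed, the drop statements can also be generated from the dictionary instead of typed by hand. a sketch (the filter mirrors the query in step 1; the dictionary stores state and recycle-bin names in upper case, so the literals here are uppercase — review the generated output carefully before executing any of it):

```sql
-- generate drop statements with double quotes around the table name,
-- so mixed-case master table names are handled as well
select 'drop table "' || owner_name || '"."' || job_name || '";' as ddl
  from dba_datapump_jobs
 where state = 'NOT RUNNING'
   and job_name not like 'BIN$%';
```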

step 6. re-run the queries on dba_datapump_jobs and dba_objects (steps 1 and 4). if jobs are still listed in dba_datapump_jobs but no longer have a master table, clean up the job while connected as the job owner. e.g.:

connect scott/tiger

set serveroutput on

set lines 100

declare

   h1 number;

begin

   h1 := dbms_datapump.attach('sys_export_table_01','scott');

   dbms_datapump.stop_job (h1);

end;

/

note that after the call to the stop_job procedure, it may take some time for the job to be removed. query the view user_datapump_jobs to check whether the job has been removed:

select * from user_datapump_jobs;

step 7. confirm that the job has been removed:

set lines 200 

col owner_name format a10; 

col job_name format a20 

col state format a12 

col operation like state 

col job_mode like state 

-- locate data pump jobs:

select owner_name, job_name, rtrim(operation) "operation",
       rtrim(job_mode) "job_mode", state, attached_sessions
  from dba_datapump_jobs
 where job_name not like 'bin$%'
 order by 1,2;

remarks:

1. orphaned data pump jobs have no impact on new data pump jobs. dba_datapump_jobs is a view based on gv$datapump_job, obj$, com$, and user$. it shows the data pump jobs that are still running, jobs whose master table was kept in the database, and jobs that ended abnormally (the orphaned jobs). if a new data pump job is started, a new entry is created that has no relation to the old data pump jobs.

2. when starting a new data pump job with a system-generated name, the names of existing data pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. naturally, there needs to be enough free space for the new master table in the schema that started the new data pump job.
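if in doubt about that free space, a quick look at the default tablespace of the job owner can help. a sketch using the standard dba_users and dba_free_space views (the schema name scott is only an example):

```sql
-- free space in the default tablespace of the schema that will own
-- the new master table (scott is just an example owner)
select u.default_tablespace,
       round(sum(f.bytes) / 1024 / 1024) as free_mb
  from dba_users u, dba_free_space f
 where u.username = 'SCOTT'
   and f.tablespace_name = u.default_tablespace
 group by u.default_tablespace;
```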

3. a data pump job is not the same as a job defined with dbms_jobs. jobs created with dbms_jobs use their own processes. data pump jobs use a master process and worker process(es). when a data pump job is temporarily stopped (stop_job while in interactive command mode), the data pump job still exists in the database (state: not running), while the master and worker process(es) are stopped and no longer exist. the client can attach to the job at a later time and continue the job execution (start_job).
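resuming such a temporarily stopped job can be done from the expdp client (attach=...) or through the api. a minimal pl/sql sketch, assuming the stopped full export from the example above (job name and owner are those from step 1):

```sql
-- reattach to the temporarily stopped job and resume it
declare
   h1 number;
begin
   h1 := dbms_datapump.attach('SYS_EXPORT_FULL_01', 'SYSTEM');
   dbms_datapump.start_job(h1);
   dbms_datapump.detach(h1);
end;
/
```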

4. whether dropping the master table of an active data pump job can cause corruption depends on the type of job.

4.a. if the job is an export job, corruption is unlikely: dropping the master table only causes the data pump master and worker processes to abort. this situation is similar to aborting an export with the original export client.

4.b. if the job is an import job, the situation is different. when the master table is dropped, the data pump worker and master processes abort, which will probably leave an incomplete import: e.g. not all table data was imported, and/or tables were imported incompletely, and indexes, views, etc. are missing. this situation is similar to aborting an import with the original import client.

dropping the master table itself does not lead to any data dictionary corruption. if you keep the master table after the job completes (using the undocumented parameter keep_master=y), then dropping the master table afterwards will not cause any corruption.

5. instead of 'not running', the state of a failed job could also be 'defining'. trying to attach to such a job fails with:

$ expdp system/manager attach=system.sys_export_schema_01

export: release 11.2.0.4.0 - production on tue jan 27 10:14:27 2015

copyright (c) 1982, 2011, oracle and/or its affiliates.  all rights reserved.

connected to: oracle database 11g enterprise edition release 11.2.0.4.0 - 64bit production

with the partitioning, olap, data mining and real application testing options

ora-31626: job does not exist

ora-06512: at "sys.dbms_sys_error", line 79

ora-06512: at "sys.kupv$ft", line 405

ora-31638: cannot attach to job sys_export_schema_01 for user system

ora-31632: master table "system.sys_export_schema_01" not found, invalid, or inaccessible

the steps to cleanup these failed/orphaned jobs are the same as mentioned above.