Originally reposted from: http://blog.csdn.net/tianlesoftware/article/details/5604497
1. LogMiner Overview
Official Oracle documentation for LogMiner:
Using LogMiner to Analyze Redo Log Files
LogMiner is a tool for analyzing redo log information that Oracle has provided since 8i. It consists of two packages, DBMS_LOGMNR and DBMS_LOGMNR_D (the trailing D stands for dictionary). It can analyze both online redo log files and archived log files. The analysis requires a data dictionary; normally you generate a data dictionary file first and then use it, and as of 10g you can also use the online data dictionary.
LogMiner can also analyze redo log files from other databases, but it must use the data dictionary of the database that produced those logs; otherwise the output is unreadable garbage. In addition, the operating system platform of the analyzed database should ideally be the same as that of the database running LogMiner, and the block sizes should match.
LogMiner is a tool provided with the Oracle database for analyzing the transaction operations recorded in redo logs and archived logs. Typical uses:
(1) Pinpointing the time of logical corruption. Suppose a user accidentally drops an important table, sales, with drop table. LogMiner can determine the exact execution time and SCN of the mistake, after which time-based or SCN-based recovery can fully restore the table.
(2) Performing fine-grained, transaction-level logical recovery. Suppose several users ran a series of DML operations on a table and committed, and one user's DML was wrong. LogMiner can retrieve any user's DML operations together with the corresponding UNDO statements; executing the UNDO statements reverses the erroneous changes.
(3) Auditing after the fact. LogMiner can trace all DML, DDL and DCL operations in an Oracle database, yielding the time order of the operations, the users who executed them, and so on.
LogMiner is installed by the following two scripts:
(1) Create DBMS_LOGMNR: $ORACLE_HOME/rdbms/admin/dbmslm.sql
SQL> @dbmslm.sql -- the path must be correct; on my system: @D:\oracle\product\10.1.0\Db_1\RDBMS\ADMIN\dbmslm.sql
Package created.
Grant succeeded.
(2) Create DBMS_LOGMNR_D: $ORACLE_HOME/rdbms/admin/dbmslmd.sql
SQL> @dbmslmd.sql -- on my system: @D:\oracle\product\10.1.0\Db_1\RDBMS\ADMIN\dbmslmd.sql
Procedure created.
No errors.
PL/SQL procedure successfully completed.
1.1 Data Types and Table Storage Attributes Supported by LogMiner
(1). CHAR
(2). NCHAR
(3). VARCHAR2 and VARCHAR
(4). NVARCHAR2
(5). NUMBER
(6). DATE
(7). TIMESTAMP
(8). TIMESTAMP WITH TIME ZONE
(9). TIMESTAMP WITH LOCAL TIME ZONE
(10). INTERVAL YEAR TO MONTH
(11). INTERVAL DAY TO SECOND
(12). RAW
(13). CLOB
(14). NCLOB
(15). BLOB
(16). LONG
(17). LONG RAW
(18). BINARY_FLOAT
(19). BINARY_DOUBLE
(20). Index-organized tables (IOTs), including those with overflows or LOB columns
(21). Function-based indexes
(22). XMLTYPE data when it is stored in CLOB format
(23). Tables using basic table compression and OLTP table compression
Support for multibyte CLOBs is available only for redo logs generated by a database with compatibility set to a value of 10.1 or higher.
Support for LOB and LONG datatypes is available only for redo logs generated by a database with compatibility set to a value of 9.2.0.0 or higher.
Support for index-organized tables without overflow segment or with no LOB columns in them is available only for redo logs generated by a database with compatibility set to 10.0.0.0 or higher.
Support for index-organized tables with overflow segment or with LOB columns is available only for redo logs generated by a database with compatibility set to 10.2.0.0 or higher.
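The compatibility value that the notes above refer to can be checked on the source database. A minimal check might look like this (run from any SQL*Plus session on the source):
SQL> SELECT name, value FROM v$parameter WHERE name = 'compatible';
The COMPATIBLE parameter governs the redo record format the database generates, which is why it determines which of the datatype and storage features above LogMiner can mine.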
1.2 Data Types and Table Storage Attributes Not Supported by LogMiner
(1). BFILE datatype
(2). Simple and nested abstract datatypes (ADTs)
(3). Collections (nested tables and VARRAYs)
(4). Object refs
(5). SecureFiles (unless database compatibility is set to 11.2 or higher)
1.3 Basic LogMiner Objects
There are four basic objects in a LogMiner configuration: the source database, the mining database, the LogMiner dictionary, and the redo log files containing the data of interest:
(1) The source database is the database that produces all the redo log files that you want LogMiner to analyze.
(2) The mining database is the database that LogMiner uses when it performs the analysis.
(3) The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.
-- The LogMiner dictionary translates internal object ID numbers and datatypes into object names and external data formats. When using LogMiner to analyze redo and archived logs, you should generate a LogMiner dictionary; otherwise the results will be unreadable.
For example, consider the following SQL statement:
INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);
Without the dictionary, LogMiner will display:
insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),HEXTORAW('c229'),HEXTORAW('c3020b'));
(4) The redo log files contain the changes made to the database or database dictionary.
1.4 LogMiner Configuration Requirements
The following are requirements for the source and mining database, the data dictionary, and the redo log files that LogMiner will mine:
Source and mining database:
(1) Both the source database and the mining database must be running on the same hardware platform.
(2) The mining database can be the same as, or completely separate from, the source database.
(3) The mining database must run the same release or a later release of the Oracle Database software as the source database.
(4) The mining database must use the same character set (or a superset of the character set) used by the source database.
LogMiner dictionary:
(1) The dictionary must be produced by the same source database that generates the redo log files that LogMiner will analyze.
Redo log files:
(1) Must be produced by the same source database.
(2) Must be associated with the same database RESETLOGS SCN.
(3) Must be from a release 8.0 or later Oracle Database. However, several of the LogMiner features introduced as of release 9.0.1 work only with redo log files produced on an Oracle9i or later database.
LogMiner does not allow you to mix redo log files from different databases or to use a dictionary from a different database than the one that generated the redo log files to be analyzed.
1.5 Supplemental Logging
You must enable supplemental logging before generating log files that will be analyzed by LogMiner.
When you enable supplemental logging, additional information is recorded in the redo stream that is needed to make the information in the redo log files useful to you. Therefore, at the very least, you must enable minimal supplemental logging, as the following SQL statement shows:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
To determine whether supplemental logging is enabled, query the V$DATABASE view, as the following SQL statement shows:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
Redo logs support instance recovery and media recovery, and the data needed for those operations is recorded in the redo stream automatically. Applications of the redo data may, however, require additional columns to be logged; recording these additional columns is known as supplemental logging.
By default, Oracle performs no supplemental logging, which means that by default LogMiner cannot support the following features:
(1) Index clusters, chained rows and migrated rows;
(2) Direct-path inserts;
(3) Extracting the LogMiner dictionary into the redo logs;
(4) Tracking DDL;
(5) Generating SQL_REDO and SQL_UNDO information with key columns;
(6) LONG and LOB datatypes.
Therefore, to take full advantage of LogMiner's features, supplemental logging must be enabled. For example, enabling it at the database level:
SQL> conn /as sysdba
Connected.
SQL> alter database add supplemental log data;
Database altered.
Note: enabling it does not require a restart; the database can stay online.
2. A Typical LogMiner Session
To run LogMiner, you use the DBMS_LOGMNR PL/SQL package. Additionally, you might also use the DBMS_LOGMNR_D package if you choose to extract a LogMiner dictionary rather than use the online catalog.
The DBMS_LOGMNR package contains the procedures used to initialize and run LogMiner, including interfaces to specify the names of redo log files, filter criteria, and session characteristics.
The DBMS_LOGMNR_D package queries the database dictionary tables of the current database to create a LogMiner dictionary file.
The LogMiner PL/SQL packages are owned by the SYS schema. Therefore, if you are not connected as user SYS, then:
(1)You must include SYS in your call. For example:
EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR;
(2)You must have been granted the EXECUTE_CATALOG_ROLE role.
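If you plan to run LogMiner from a non-SYS account, the grant might look like the following (scott is just an example grantee):
SQL> GRANT EXECUTE_CATALOG_ROLE TO scott;
Grant succeeded.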
2.1 Enable the type of supplemental logging you want to use. At the very least, you must enable minimal supplemental logging, as follows:
SQL>ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
2.2 To use LogMiner, you must supply it with a dictionary by doing one of the following:
(1). Specify use of the online catalog by using the DICT_FROM_ONLINE_CATALOG option when you start LogMiner.
(2). Extract database dictionary information to the redo log files.
(3). Extract database dictionary information to a flat file.
2.3 Specify the redo log files that you want to analyze, using the DBMS_LOGMNR.ADD_LOGFILE procedure, as demonstrated in the following steps. You can add and remove redo log files in any order.
Note:
If you will be mining in the database instance that is generating the redo log files, then you only need to specify the CONTINUOUS_MINE option and one of the following when you start LogMiner:
(1). The STARTSCN parameter
(2). The STARTTIME parameter
2.3.1 Use SQL*Plus to start an Oracle instance, with the database either mounted or unmounted. For example, enter the STARTUP statement at the SQL prompt:
SQL>STARTUP
2.3.2 Create a list of redo log files. Specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure to signal that this is the beginning of a new list. For example, enter the following to specify the /oracle/logs/log1.f
redo log file:
execute dbms_logmnr.add_logfile( logfilename => '/oracle/logs/log1.f', options => dbms_logmnr.new);
2.3.3 If desired, add more redo log files by specifying the ADDFILE option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure. For example, enter the following to add the /oracle/logs/log2.f redo log file:
execute dbms_logmnr.add_logfile( logfilename => '/oracle/logs/log2.f', options => dbms_logmnr.addfile);
The OPTIONS parameter is optional when you are adding additional redo log files. For example, you could simply enter the following:
execute dbms_logmnr.add_logfile( logfilename=>'/oracle/logs/log2.f');
2.3.4 If desired, remove redo log files by using the DBMS_LOGMNR.REMOVE_LOGFILE PL/SQL procedure. For example, enter the following to remove the /oracle/logs/log2.f redo log file:
execute dbms_logmnr.remove_logfile( logfilename => '/oracle/logs/log2.f');
2.4 Start LogMiner. (Once the redo log file list has been built, you can start LogMiner.)
2.4.1 Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.
Oracle recommends that you specify a LogMiner dictionary option. If you do not, then LogMiner cannot translate internal object identifiers and datatypes to object names and external data formats. Therefore, it would
return internal object IDs and present data as binary data. Additionally, the MINE_VALUE and COLUMN_PRESENT functions cannot be used without a dictionary.
(1) If you are specifying the name of a flat-file LogMiner dictionary, then you must supply a fully qualified file name for the dictionary file. For example, to start LogMiner using /oracle/database/dictionary.ora,
issue the following statement:
execute dbms_logmnr.start_logmnr( dictfilename =>'/oracle/database/dictionary.ora');
(2)If you are not specifying a flat file dictionary name, then use the OPTIONS parameter to specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG option.
If you specify DICT_FROM_REDO_LOGS, then LogMiner expects to find a dictionary in the redo log files that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure. To determine which redo log files contain a dictionary, look at the V$ARCHIVED_LOG view.
If you add additional redo log files after LogMiner has been started, you must restart LogMiner.
LogMiner will not retain options that were included in the previous call to DBMS_LOGMNR.START_LOGMNR; you must respecify the options you want to use. However, LogMiner will retain the dictionary specification from the previous call if you do not specify a dictionary in the current call to DBMS_LOGMNR.START_LOGMNR.
2.4.2 Optionally, you can filter your query by time or by SCN.
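For example, a time-bounded session might be started with something like the following (the time window shown is illustrative; STARTSCN/ENDSCN can be used the same way to filter by SCN):
SQL> execute dbms_logmnr.start_logmnr( -
starttime => to_date('25-08-2009 16:00:00','DD-MM-YYYY HH24:MI:SS'), -
endtime   => to_date('25-08-2009 17:00:00','DD-MM-YYYY HH24:MI:SS'), -
options   => dbms_logmnr.dict_from_online_catalog);
Only redo records that fall inside the window then appear in V$LOGMNR_CONTENTS.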
2.4.3 You can also use the OPTIONS parameter to specify additional characteristics of your LogMiner session.
For example, you might decide to use the online catalog as your LogMiner dictionary and to have only committed transactions shown in the V$LOGMNR_CONTENTS view, as follows:
execute dbms_logmnr.start_logmnr(options=>
dbms_logmnr.dict_from_online_catalog + dbms_logmnr.committed_data_only);
You can execute the DBMS_LOGMNR.START_LOGMNR procedure multiple times, specifying different options each time. This can be useful, for example, if you did not get the desired results from
a query of V$LOGMNR_CONTENTS, and want to restart LogMiner with different options. Unless you need to respecify the LogMiner dictionary, you do not need to add redo log files if they were already added with a previous call to DBMS_LOGMNR.START_LOGMNR.
At this point, LogMiner is started and you can perform queries against the V$LOGMNR_CONTENTS view.
2.5 End the LogMiner session by executing the DBMS_LOGMNR.END_LOGMNR procedure. This procedure closes all the redo log files and allows all the database and system resources allocated by LogMiner to be released.
If this procedure is not executed, then LogMiner retains all its allocated resources until the end of the Oracle session in which it was invoked. It is particularly important to use this procedure to end the LogMiner
session if either the DDL_DICT_TRACKING option or the DICT_FROM_REDO_LOGS option was used.
3. The LogMiner Dictionary and Redo Log Files
Before you begin using LogMiner, it is important to understand how LogMiner works with the LogMiner dictionary file (or files) and redo log files. This will help you to get accurate results and to plan the use of your system resources.
3.1 LogMiner Dictionary Options
LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:
(1) Using the online catalog. Oracle recommends this option when you will have access to the source database from which the redo log files were created, and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.
(2) Extracting a LogMiner dictionary to the redo log files. Oracle recommends this option when you do not expect to have access to the source database, or when you anticipate that changes will be made to the column definitions in the tables of interest.
(3) Extracting the LogMiner dictionary to a flat file. This option is maintained for backward compatibility and does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or a dictionary extracted from redo log files instead.
3.1.1 Using the Online Catalog
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
In addition to using the online catalog to analyze online redo log files, you can use it to analyze archived redo log files, if you are on the same system that generated the archived redo log files.
The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables are somewhat rare, the online catalog generally contains the information you
need for your analysis.
Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog no longer reflects the previous version of the table. This means that
LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table.
Instead, LogMiner generates nonexecutable SQL (including hexadecimal-to-raw formatting of binary values) in the SQL_REDO column of the V$LOGMNR_CONTENTS view, similar to the example shown earlier.
-- When analyzing redo or archived logs on the source database, Oracle recommends this option if the structure of the tables being analyzed has not changed.
The online catalog option requires that the database be open.
The online catalog option is not valid with the DDL_DICT_TRACKING option of DBMS_LOGMNR.START_LOGMNR.
-- dbms_logmnr.dict_from_online_catalog requires the database to be open, and this option can only be used to track DML operations, not DDL.
3.1.2 Extracting a LogMiner Dictionary to the Redo Log Files
To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode, and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).
To extract dictionary information to the redo log files, execute the PL/SQL DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a file name or location.
The process of extracting the dictionary to the redo log files does consume database resources, but if you limit the extraction to off-peak hours, then this should not be a problem, and it is faster than extracting to a flat file. Depending on the size of the dictionary, it may be contained in multiple redo log files. If the relevant redo log files have been archived, then you can find out which redo log files contain the start and end of an extracted dictionary. To do so, query the V$ARCHIVED_LOG view, as follows:
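The query in question uses the DICTIONARY_BEGIN and DICTIONARY_END columns of V$ARCHIVED_LOG:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES';
The first query returns the archived log where the extracted dictionary starts, the second the log where it ends.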
Specify the names of the start and end redo log files, and possibly other logs in between them, with the ADD_LOGFILE procedure when you are preparing to begin a LogMiner session.
Oracle recommends that you periodically back up the redo log files so that the information is saved and available at a later date. Ideally, this will not involve any extra steps because if your database is being properly managed, then there should
already be a process in place for backing up and restoring archived redo log files. Again, because of the time required, it is good practice to do this during off-peak hours.
When a separate mining database is used to analyze the redo or archived logs, or when the structure of the tables being analyzed has changed, Oracle recommends this option. To extract the LogMiner dictionary to the redo logs, the source database must be in ARCHIVELOG mode and open.
-- This option is useful for reviewing DDL operation records.
3.1.3 Extracting the LogMiner Dictionary to a Flat File
When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.
To extract database dictionary information to a flat file, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_FLAT_FILE option.
Be sure that no DDL operations occur while the dictionary is being built.
The following steps describe how to extract a dictionary to a flat file. Steps 1 and 2 are preparation steps. You only need to do them once, and then you can extract a dictionary to a flat file as many times as you want to.
(1) The DBMS_LOGMNR_D.BUILD procedure requires access to a directory where it can place the dictionary file. Because PL/SQL procedures do not normally access user directories, you must specify a directory for use by the DBMS_LOGMNR_D.BUILD procedure or the procedure will fail. To specify a directory, set the initialization parameter UTL_FILE_DIR in the initialization parameter file.
For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, place the following in the initialization parameter file:
UTL_FILE_DIR = /oracle/database
Remember that for the changes to the initialization parameter file to take effect, you must stop and restart the database.
(2) If the database is closed, then use SQL*Plus to mount and open the database whose redo log files you want to analyze. For example, entering the SQL STARTUP command mounts and opens the database:
SQL>STARTUP
(3)Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a file name for the dictionary and a directory path name for the file.
This procedure creates the dictionary file.
For example, enter the following to create the file dictionary.ora in /oracle/database:
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/',
DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
You could also specify a file name and location without specifying the STORE_IN_FLAT_FILE option. The result would be the same.
3.2 Redo Log File Options
To mine data in the redo log files, LogMiner needs information about which redo log files to mine. Changes made to the database that are found in these redo log files are delivered to you through the V$LOGMNR_CONTENTS view.
You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:
3.2.1 Automatically
In the following example the dictionary is taken from the online catalog, but any LogMiner dictionary can be used.
The CONTINUOUS_MINE option requires that the database be mounted and that archiving be enabled.
LogMiner will use the database control file to find and add redo log files that satisfy your specified time or SCN range to the LogMiner redo log file list. For example:
EXECUTE DBMS_LOGMNR.START_LOGMNR(
STARTTIME => '01-Jan-2003 08:30:00',
ENDTIME => '01-Jan-2003 08:45:00',
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
You can also direct LogMiner to automatically build a list of redo log files to analyze by specifying just one redo log file using DBMS_LOGMNR.ADD_LOGFILE, and then specifying the CONTINUOUS_MINE option when you start LogMiner. The previously described
method is more typical, however.
3.2.2 Manually
To manually create a list of redo log files for LogMiner to analyze, use the DBMS_LOGMNR.ADD_LOGFILE procedure before starting LogMiner. The files must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.
For example, to start a new list of redo log files, specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE procedure; the following statement adds /oracle/logs/log1.f:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
To determine which redo log files are being analyzed in the current LogMiner session, you can query the V$LOGMNR_LOGS view, which contains one row for each redo log file.
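For example, a quick check of the current file list might be:
SQL> SELECT FILENAME FROM V$LOGMNR_LOGS;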
4. LogMiner Examples
Before running the examples, check supplemental logging:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
--------
YES
If the value is YES or IMPLICIT, it is already enabled; otherwise enable it:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.
4.1 Analyzing DML Operations Using the Source Database Dictionary (Online Catalog)
1. First perform some DML and DDL operations so there is something to analyze.
SQL> conn /as sysdba
Connected.
SQL> show parameter utl;
NAME TYPE VALUE
------------------------------------ ----------- --------
create_stored_outlines string
utl_file_dir string
SQL> insert into scott.dept values('80','Dave','AnQing');
1 row created.
SQL> update scott.dept set loc='shang hai' where deptno=70;
1 row updated.
SQL> commit;
Commit complete.
SQL> delete from scott.dept where deptno=40;
1 row deleted.
SQL> alter table scott.dept add(phone varchar2(32));
Table altered.
SQL> insert into scott.dept values(50,'David','Dai','13888888888');
SQL> alter table scott.dept add(address varchar2(300));
2. Switch the online redo log so it is archived; then we only need to analyze the archived log.
SQL> alter system switch logfile;
System altered.
3. Build the list of logs to analyze:
---- Add the log files to be analyzed
SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_32_597FQD7B_.ARC',options=>dbms_logmnr.new);
--- Add more files as needed; dbms_logmnr.remove_logfile removes a file from the list
SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_30_597B5P7B_.ARC',options=>dbms_logmnr.addfile);
4. Start LogMiner
SQL> execute dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
5. View the analysis results:
SQL> col username format a8
SQL> col sql_redo format a50
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where seg_name='DEPT';
USERNAME SCN TIMESTAMP SQL_REDO
-------- ---------- ------------------- -----------------------------------
1645927 2009-08-25 16:54:56 delete from "SCOTT"."DEPT" where "DEPTNO" = '40' and "DNAME" = 'OPERATIONS' and "LOC" = 'BOSTON' and "PHONE" IS NULL and "ADDRESS" IS NULL and ROWID = 'AAAMfNAAEAAAAAQAAD';
SYS 1645931 2009-08-25 16:54:57 alter table scott.dept add(phone varchar2(32)) ;
SYS 1645992 2009-08-25 16:56:33 alter table scott.dept add(address varchar2(300)) ;
6. End LogMiner
SQL> execute dbms_logmnr.end_logmnr;
4.2 Extracting the LogMiner Dictionary to a Dictionary File to Analyze DDL Operations
1. Perform some DDL operations for analysis
SQL> conn scott/admin
SQL> drop table emp;
Table dropped.
SQL> drop table dept;
2. To use a dictionary file, check whether the database has utl_file_dir configured; this parameter is the directory for the dictionary file. After setting it, the database must be restarted.
SQL> show user;
USER is "SYS"
SQL> show parameter utl;
NAME TYPE VALUE
------------------------------------ ----------- ------------
create_stored_outlines string
utl_file_dir string
SQL> alter system set utl_file_dir='D:/oracle/logminer' scope=spfile;
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
SQL> show parameter utl
NAME TYPE VALUE
------------------------------------ ----------- -----------
utl_file_dir string D:/oracle/logminer
3. Create the dictionary file:
SQL> execute dbms_logmnr_d.build ('dict.ora','D:/oracle/logminer',dbms_logmnr_d.store_in_flat_file);
4. Build the log analysis list:
5. Start LogMiner
SQL> execute dbms_logmnr.start_logmnr(dictfilename=>'D:/oracle/logminer/dict.ora',options=>dbms_logmnr.ddl_dict_tracking);
6. Query the analysis results:
SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where lower(sql_redo) like '%table%';
USERNAME SCN TIMESTAMP SQL_REDO
-------- ---------- -------------- -----------------------------------
1647538 25-8月 -09 ALTER TABLE "SCOTT"."EMP" RENAME CONSTRAINT "PK_EMP" TO "BIN$f/mFjN+nTmaYjrb17YU80w==$0" ;
1647550 25-8月 -09 ALTER TABLE "SCOTT"."EMP" RENAME TO "BIN$E5UujHaTR+uItpLtzN0Ddw==$0" ;
1647553 25-8月 -09 drop table emp AS "BIN$E5UujHaTR+uItpLtzN0Ddw==$0" ;
1647565 25-8月 -09 ALTER TABLE "SCOTT"."DEPT" RENAME CONSTRAINT "PK_DEPT" TO "BIN$3imFL+/1SqONFCB7LoPcCg==$0" ;
1647571 25-8月 -09 ALTER TABLE "SCOTT"."DEPT" RENAME TO "BIN$kYKBLvltRb+vYaT6RkaRiA==$0";
1647574 25-8月 -09 drop table dept AS "BIN$kYKBLvltRb+vYaT6RkaRiA==$0" ;
Other queries are possible as well:
SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where username='SYS';
USERNAME SCN TIMESTAMP SQL_REDO
-------- ---------- -------------- -----------------------------------
SYS 1647487 25-8月 -09 set transaction read write;
SYS 1647488 25-8月 -09 alter user scott account unlock;
SYS 1647490 25-8月 -09 Unsupported
SYS 1647492 25-8月 -09 commit;
7. End LogMiner
SQL> execute dbms_logmnr.end_logmnr;
Note: v$logmnr_contents holds the analyzed log contents and is only valid in the current session. If you want to keep the analysis around, persist it from the current session with a statement of the form create table tablename as select * from v$logmnr_contents.
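A minimal sketch of that persistence step (logmnr_backup is a hypothetical table name; the statement must be run in the same session that started LogMiner, before dbms_logmnr.end_logmnr):
SQL> create table logmnr_backup as select * from v$logmnr_contents;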