
Configuring One-Way Streams Replication in Oracle 10g

With the introduction of DBMS_STREAMS_ADM in Oracle 10g, Streams replication has become much easier to configure.

What follows is the simplest possible one-way Streams replication setup; it does not cover complex rule configuration.

Environment:

OS:Redhat Linux

Oracle 10.2.0.2

source: stream1

destination: stream2

== Preparation ==

1. Archive log configuration (both databases must be running in archivelog mode):

stream1:

SQL> archive log list;

Database log mode              Archive Mode

Automatic archival             Enabled

Archive destination            /u01/oradata/stream1_arch

stream2:

SQL> archive log list;

Database log mode              Archive Mode

Automatic archival             Enabled

Archive destination            /u01/oradata/stream2_arch

--- Unless otherwise noted, the following commands are run as the sys user ---

2. Configure supplemental logging on the source, stream1:

On the source:

SQL> alter database add supplemental log data;

Database altered.

(For tables without a primary key or unique key, create an unconditional supplemental log group, e.g.:

alter table sms.mo_flow_2009_05 add supplemental log group log_group_mo_flow_2009_05(innerid) always;

)

Verify the supplemental logging mode on the source:

    Database level:

    SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM V$DATABASE;

    Check the supplemental log groups:

    Select log_group_name, table_name, decode(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') ALWAYS from DBA_LOG_GROUPS;

    Check the columns of a supplemental log group:

    Select log_group_name, column_name, position from dba_log_group_columns where table_name = 'DEPARTMENTS' and owner='HR';

3. Set global_name on both machines:

stream1:

SQL> alter system set global_names=true scope=both;

SQL> alter database rename global_name to stream1.com;

stream2:

SQL> alter system set global_names=true scope=both;

SQL> alter database rename global_name to stream2.com;

4. Set the initialization parameters:

Consult the official documentation to determine which parameters need to be checked and set.
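As a sketch of what that check might look like (the parameter names are from the 10gR2 documentation; the commented-out values are illustrative, not recommendations):

```sql
-- Parameters that matter most for Streams in 10gR2
SELECT name, value
  FROM v$parameter
 WHERE name IN ('compatible', 'global_names', 'job_queue_processes',
                'streams_pool_size');

-- Illustrative settings; size the pool for your own workload
-- ALTER SYSTEM SET streams_pool_size = 200M SCOPE = both;
-- ALTER SYSTEM SET job_queue_processes = 10 SCOPE = both;
```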

5. Create the Streams administrator user, its tablespace, and its privileges, on both source and dest:

5.1 Create a dedicated tablespace for the strmadmin user

stream1:

SQL> CREATE TABLESPACE streams_tbs DATAFILE '/u01/oradata/stream1/streams_tbs.dbf'

2    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Tablespace created.

stream2:

SQL> CREATE TABLESPACE streams_tbs DATAFILE '/u01/oradata/stream2/streams_tbs.dbf'

2    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Tablespace created.

5.2 Create the same user and privileges on both source and dest:

SQL> CREATE USER strmadmin IDENTIFIED BY strmadminpw

2     DEFAULT TABLESPACE streams_tbs

3     QUOTA UNLIMITED ON streams_tbs;

User created.

SQL> GRANT DBA TO strmadmin;

Grant succeeded.

SQL> BEGIN

2    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(

3      grantee          => 'strmadmin',   

4      grant_privileges => true);

5 END;

6 /

PL/SQL procedure successfully completed.

6. Configure listener.ora and tnsnames.ora. Skipping the details, in the end you must be able to run:

on stream1: sqlplus system/system@stream2

on stream2: sqlplus system/system@stream1

and both connect without problems.
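For reference, a minimal tnsnames.ora entry on stream1 might look like the following sketch (the hostname and port are assumptions; adjust them to your environment):

```
STREAM2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stream2-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = stream2))
  )
```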

--- Unless otherwise noted, everything below is run as the strmadmin/strmadminpw user ---

7. Create the DB link:

One-way replication needs only a single database link, from source to dest: stream1 --> stream2

On stream1:

sqlplus strmadmin/strmadminpw

SQL> CREATE DATABASE LINK stream2.com CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'stream2';

Database link created.

Test it:

SQL> select * from global_name@stream2.com;

GLOBAL_NAME

-----------------------------------------

STREAM2.COM

== Configuring Streams ==

8. Create the queues on source and dest:

Create on both stream1 and stream2:

SQL> EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();

PL/SQL procedure successfully completed.

The command above creates a queue with the default name streams_queue, in a queue table with the default name STREAMS_QUEUE_TABLE.

The object type stored in the queue is SYS.ANYDATA.
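Equivalently, those defaults can be spelled out explicitly; a sketch using the documented parameters of SET_UP_QUEUE:

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',  -- default queue table
    queue_name  => 'strmadmin.streams_queue',        -- default queue name
    queue_user  => 'strmadmin');                     -- user allowed to enqueue/dequeue
END;
/
```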

This can be checked by querying dba_queues and dba_queue_tables:

SQL> select owner,queue_table,name from dba_queues where owner='STRMADMIN';

OWNER           QUEUE_TABLE                     NAME

----------------   ----------------------------------   ---------------------------------------------

STRMADMIN    STREAMS_QUEUE_TABLE    STREAMS_QUEUE

STRMADMIN    STREAMS_QUEUE_TABLE    AQ$_STREAMS_QUEUE_TABLE_E

SQL> select owner,queue_table,object_type from dba_queue_tables where owner='STRMADMIN';

OWNER            QUEUE_TABLE                        OBJECT_TYPE

-----------------   ---------------------------------      -------------------------------

STRMADMIN    STREAMS_QUEUE_TABLE       SYS.ANYDATA

9. Create the Streams propagation on the source, stream1:

SQL> BEGIN

2    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(

3      schema_name            => 'hr',

4      streams_name           => 'stream1_to_stream2',

5      source_queue_name      => 'strmadmin.streams_queue',

6      destination_queue_name => 'strmadmin.streams_queue@stream2.com',

7      include_dml            => true,

8      include_ddl            => true,

9      source_database        => 'stream1.com',

10     inclusion_rule         => true,

11     queue_to_queue         => true);

12 END;

13 /

PL/SQL procedure successfully completed.

The result can be checked in dba_propagation:

SQL> select PROPAGATION_NAME,SOURCE_QUEUE_NAME,DESTINATION_QUEUE_NAME,DESTINATION_DBLINK,STATUS from dba_propagation;

PROPAGATION_NAME      SOURCE_QUEUE_NAME   DESTINATION_QUEUE_NAME   DESTINATION_DBL   STATUS

-------------------------------- ------------------------------    --------------------------------- --------------- --------

STREAM1_TO_STREAM2 STREAMS_QUEUE            STREAMS_QUEUE              STREAM2.COM     ENABLED

10. Create the capture process on the source, stream1:

SQL> BEGIN

2    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(

3      schema_name     => 'hr',  

4      streams_type      => 'capture',

5      streams_name     => 'capture_stream1',

6      queue_name       => 'strmadmin.streams_queue',

7      include_dml         => true,

8      include_ddl         => true,

9      inclusion_rule     => true);

10 END;

11 /

PL/SQL procedure successfully completed.

Check it via dba_capture:

SQL> select CAPTURE_NAME,QUEUE_NAME,START_SCN,STATUS,CAPTURE_TYPE from dba_capture;

CAPTURE_NAME            QUEUE_NAME            START_SCN   STATUS   CAPTURE_TY

----------------------------   -------------------------- --------------- ------------ ----------

CAPTURE_STREAM1     STREAMS_QUEUE      504733         DISABLED   LOCAL

SQL> select * from ALL_CAPTURE_PREPARED_SCHEMAS;                           

SCHEMA_NAME   TIMESTAMP SUPPLEME SUPPLEME SUPPLEME SUPPLEME

-------------------- ---------------    -------------- -------------- -------------   --------

HR                      12-JUN-08      IMPLICIT       IMPLICIT     IMPLICIT     NO     

11. Copy the hr schema data from stream1 to stream2:

On stream2:

sqlplus system/system

SQL> create public database link stream1.com connect to system identified by system using 'stream1';

Database link created.

SQL> select * from global_name@stream1.com;

GLOBAL_NAME

------------------------------------------------------

STREAM1.COM

Import the data:

[oracle@stream2 ~]$ impdp system/system network_link=stream1.com schemas=hr

Import: Release 10.2.0.2.0 - Production on Thursday, 12 June, 2008 17:47:59

Copyright (c) 2003, 2005, Oracle. All rights reserved.

.....

......

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at 17:50:32

12. On stream1, set the instantiation SCN for the hr schema on stream2:

sqlplus strmadmin/strmadminpw

SQL> DECLARE

2    iscn NUMBER;        

3    BEGIN

4    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();

5    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@stream2.com(

6     source_schema_name    => 'hr',

7    source_database_name => 'stream1.com',

8    instantiation_scn     => iscn,

9     recursive             => true);

10 END;

11 /

PL/SQL procedure successfully completed.

13. Create the apply process apply_stream2 on stream2:

SQL> BEGIN

2   DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(

3      schema_name      => 'hr',

4      streams_type    => 'apply',

5      streams_name    => 'apply_stream2',

6      queue_name      => 'strmadmin.streams_queue',

7      include_dml     => true,

8      include_ddl     => true,

9      source_database => 'stream1.com',

10      inclusion_rule => true);

11 END;

12 /

PL/SQL procedure successfully completed.

The status can be checked through:

dba_apply

v$streams_apply_reader

v$streams_apply_coordinator

v$streams_apply_server

SQL> select apply_name,queue_name,status from dba_apply;

APPLY_NAME          QUEUE_NAME            STATUS

---------------------      ------------------------   --------

APPLY_STREAM2    STREAMS_QUEUE       DISABLED

== Startup ==

14. Start capture and apply:

14.1 Start the apply process on stream2

CONNECT strmadmin/strmadminpw

For tables without a primary key, tell the apply process which columns to treat as the substitute key with DBMS_APPLY_ADM.SET_KEY_COLUMNS; the column list should match the unconditional supplemental log group created on the source, e.g. (using the sms.mo_flow_2009_05 example from step 2):

SQL> BEGIN

2    DBMS_APPLY_ADM.SET_KEY_COLUMNS(

3      object_name => 'sms.mo_flow_2009_05',

4      column_list => 'innerid');

5 END;

6 /

SQL> BEGIN

2    DBMS_APPLY_ADM.SET_PARAMETER(

3      apply_name => 'apply_stream2',

4      parameter   => 'disable_on_error',

5      value       => 'n');

6 END;

7 /

PL/SQL procedure successfully completed.

SQL> BEGIN

2    DBMS_APPLY_ADM.START_APPLY(

3      apply_name => 'apply_stream2');

4 END;

5 /

PL/SQL procedure successfully completed.

SQL> select apply_name,queue_name,status from dba_apply;

APPLY_NAME                   QUEUE_NAME               STATUS

------------------------------ --------------------------      --------

APPLY_STREAM2            STREAMS_QUEUE         ENABLED

At this point the alert log shows:

Thu Jun 12 18:00:36 2008

Streams APPLY A001 started with pid=25, OS id=30819

Streams Apply Reader started P000 with pid=26 OS id=30821

Streams Apply Server started P001 with pid=27 OS id=30823

If there is a problem and the apply process does not start, check the ERROR_MESSAGE column of dba_apply.
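Separately, since disable_on_error was set to 'n' above, a failing transaction will not stop the apply process; it is recorded in the error queue instead. A sketch for inspecting and retrying such transactions:

```sql
-- Inspect transactions that failed to apply
SELECT local_transaction_id, source_database, message_count, error_message
  FROM dba_apply_error;

-- Retry everything in the error queue for this apply process
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'apply_stream2');
END;
/
```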

14.2 Start the capture process on stream1:

sqlplus strmadmin/strmadminpw

(On 10.2.0.1 there is Bug 4571978 - Logminer memory leak in a logical standby configuration, which leads to the error ORA-01341: LogMiner out-of-memory.

The workaround is to give the capture process more memory: raise it from the default 10M to 100M (or more) and restart it. Note that this memory is allocated directly from the SGA, not from the streams_pool; _SGA_SIZE is a hidden capture parameter.)

SQL> execute dbms_capture_adm.set_parameter('capture_stream1','_SGA_SIZE','100');

SQL> BEGIN

2    DBMS_CAPTURE_ADM.START_CAPTURE(

3      capture_name => 'capture_stream1');

4 END;

5 /

PL/SQL procedure successfully completed.

SQL> select capture_name,status from dba_capture;

CAPTURE_NAME              STATUS

------------------------------    ------------

CAPTURE_STREAM1       ENABLED

The alert log shows:

Thu Jun 12 18:04:46 2008

Streams CAPTURE C001 started with pid=27, OS id=11884

Thu Jun 12 18:04:49 2008

LOGMINER: Parameters summary for session# = 1

LOGMINER: Number of processes = 3, Transaction Chunk Size = 1

LOGMINER: Memory Size = 10M, Checkpoint interval = 10M

LOGMINER: session# = 1, builder process P001 started with pid=32 OS id=11966

LOGMINER: session# = 1, reader process P000 started with pid=29 OS id=11964

LOGMINER: session# = 1, preparer process P002 started with pid=33 OS id=11968

Thu Jun 12 18:04:53 2008

LOGMINER: Begin mining logfile: /u01/oradata/stream1_arch/1_13_657197948.dbf

Thu Jun 12 18:06:40 2008

Thread 1 advanced to log sequence 15

Current log# 2 seq# 15 mem# 0: /u01/oradata/stream1/redo02.log

Thu Jun 12 18:06:53 2008

Thread 1 advanced to log sequence 16

Current log# 3 seq# 16 mem# 0: /u01/oradata/stream1/redo03.log

...

LOGMINER: Begin mining logfile: /u01/oradata/stream1_arch/1_13_657197948.dbf

Thu Jun 12 18:07:34 2008

LOGMINER: End mining logfile: /u01/oradata/stream1_arch/1_13_657197948.dbf

Thu Jun 12 18:07:34 2008

LOGMINER: Begin mining logfile: /u01/oradata/stream1_arch/1_14_657197948.dbf

These messages show that it has begun mining the logfiles.

== Testing ==

15. Test

On stream1:

SQL> conn hr/hr

SQL> select employee_id,salary from employees where employee_id=197;

EMPLOYEE_ID     SALARY

--------------          ----------

        197              3000

SQL> update employees set salary=salary+1 where employee_id=197;

1 row updated.

SQL> commit;

Commit complete.

Check the result on stream2 (there may be a delay):

SQL> conn hr/hr

Connected.

SQL> select employee_id,salary from employees where employee_id=197;

EMPLOYEE_ID     SALARY

-----------------       ----------

        197              3001

The length of this delay depends on performance and other factors, including:

the speed at which capture mines the logfiles

the propagation interval (3 seconds by default) and transfer time

the speed at which apply processes the changes
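A rough way to put numbers on those stages, as a sketch against the 10gR2 dynamic performance views (run the first query on stream1 and the second on stream2):

```sql
-- On stream1: how far capture is behind the redo it is mining
SELECT capture_name, state,
       (SYSDATE - capture_message_create_time) * 86400 AS capture_lag_seconds
  FROM v$streams_capture;

-- On stream2: creation time of the most recently applied message
SELECT apply_name, state, hwm_message_create_time
  FROM v$streams_apply_coordinator;
```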

Source: http://jingh3209.blog.163.com/blog/static/15696672008758543390/