WARNING: This web is not used anymore. Please use PDBService.DownstreamDatabaseConfiguration instead!
 

Downstream Database Configuration

The objective of this configuration is to isolate the source database against network or database problems at the destination sites.

Downstream Capture

The capture process runs on a database other than the source database, called the downstream database. Archived redo log files from the source database are copied to the downstream database, where the capture process captures the changes. The files can be copied using log transport services or even manually.

Advantages

  • Use of fewer resources on the source database because the downstream database performs most of the required work
  • Copying redo log files to one or more downstream databases provides improved protection against data loss

Disadvantages

  • Additional latency introduced into the replication process
  • Downstream database is an additional expense

Restrictions / Requirements

  • Source database must be running at least Oracle Database 10g, and the downstream capture database must be running the same version as the source database or higher
  • Operating system on the source and downstream capture sites must be the same (release and directory structure can be different)
  • Hardware architecture on the source and downstream capture sites must be the same

Creating a Real Time Downstream Capture Environment

Preparing to Copy Redo Data

The following initialization parameters must be set to configure the redo transport services. (Downstream database is dwsdb.cern.ch)
  • At the source database:
    • LOG_ARCHIVE_DEST_N specifies the site running the downstream capture process
            LOG_ARCHIVE_DEST_2 = 'SERVICE=dwsdb.cern.ch LGWR ASYNC OPTIONAL 
                                  NOREGISTER VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
            LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE'
            
    • The setting for the LOG_ARCHIVE_CONFIG parameter must include the send value

  • At the downstream database:
    • LOG_ARCHIVE_DEST_N specifies a directory on the system running the downstream database for the archived log files written from the standby redo logs
            LOG_ARCHIVE_DEST_2 = 'LOCATION=/ORA/dbs02/oracle/admin/arch/logfrom_rls1/ 
                                  MANDATORY VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
            LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE'
            
    • The setting for the LOG_ARCHIVE_CONFIG parameter must include the receive value
    • Optionally set the LOG_ARCHIVE_FORMAT parameter to generate filenames in a specified format for the archived redo log files

  • Create the standby redo log files at the downstream database:
    • Determine the log file size used on the source database and the number of log file groups
            SQL> select GROUP#, BYTES from V$LOG;
      
                GROUP#      BYTES
            ---------- ----------
                     1  104857600
                     2  104857600
                     3  104857600
            
    • Add the standby log file groups to the downstream database
      • the number of standby log file groups must be at least one more than the number of online log file groups on the source database (in the example: 3 online log file groups => 4 standby log file groups)
      • the standby log file size must be at least as large as the source database log file size (100M in the example)
              ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
              ('/path/sdwsdb_redo_04a.log','/path/sdwsdb_redo_04b.log')
              SIZE 100M;
              ....
              ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
              ('/path/sdwsdb_redo_07a.log','/path/sdwsdb_redo_07b.log')
              SIZE 100M;
              

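The LOG_ARCHIVE_CONFIG settings mentioned above can be sketched as follows. The DB_UNIQUE_NAME values (srcdb and dwsdb here) are placeholders for illustration; use the actual unique names of the source and downstream databases.

```sql
-- At the source database: allow sending redo to the downstream database
-- (srcdb and dwsdb are placeholder DB_UNIQUE_NAMEs)
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='SEND, DG_CONFIG=(srcdb,dwsdb)';

-- At the downstream database: allow receiving redo from the source database
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='RECEIVE, DG_CONFIG=(srcdb,dwsdb)';
```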
Creating the Downstream Capture Process

The attached document setupDownstreamRealTime_example.sql contains an example of setting up a Streams environment using Real Time Downstream capture.
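As a rough sketch of what such a setup involves (the attached script remains the authoritative example), a real-time downstream capture process is typically created on the downstream database with DBMS_CAPTURE_ADM.CREATE_CAPTURE and then switched to real-time mining; the queue, capture, user and database names below are placeholders:

```sql
-- Run on the downstream database as the Streams administrator.
-- Queue/capture names and the source database name are placeholders.
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.streams_queue',
    capture_name      => 'real_time_capture',
    rule_set_name     => NULL,
    source_database   => 'srcdb.cern.ch',
    use_database_link => TRUE);
END;
/

-- Enable real-time mining of the standby redo logs
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'real_time_capture',
    parameter    => 'downstream_real_time_mine',
    value        => 'Y');
END;
/
```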

You can find more information in the Oracle® Streams Concepts and Administration guide (10g).

Split & Merge solution

The objective is to isolate the replica sites from each other. When one site goes down, the source queue fills with the LCRs/events that have not been consumed by the down site, affecting the performance of the whole configuration. To avoid this, the following solution is proposed:
  • The idea is to split the capture process into two capture processes:
    • The original one (real-time downstream capture in our setup) for the sites that are working well
    • A new capture process for the unstable site(s)
      • A new capture queue and a new propagation job are defined
      • The original propagation job is dropped so the spilled LCRs are removed from the original source queue
  • Once the site is stable again, both capture processes are merged back into the original one
(see the attached script for more details on how to implement this solution)
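The split step above might be sketched roughly as follows; all queue, capture, propagation and database link names are placeholders, and the attached siteDown_splitmerge_general.txt is the reference for the exact procedure:

```sql
-- On the capture database: create a separate queue and capture process
-- for the unstable site (all names are placeholders)
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.split_queue_table',
    queue_name  => 'strmadmin.split_queue');
END;
/

BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name      => 'strmadmin.split_queue',
    capture_name    => 'split_capture',
    source_database => 'srcdb.cern.ch');
END;
/

-- Propagate only to the unstable site from the new queue
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'split_propagation',
    source_queue       => 'strmadmin.split_queue',
    destination_queue  => 'strmadmin.streams_queue',
    destination_dblink => 'unstable_site.cern.ch');
END;
/

-- Drop the original propagation to the unstable site, so spilled LCRs
-- are removed from the original source queue
BEGIN
  DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
    propagation_name => 'orig_propagation_to_unstable');
END;
/
```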

  • Solution tested successfully with the help of Volker Kuhr (Oracle consulting)
Topic attachments
  • setupDownstreamRealTime_example.sql (r2, 6.6 K, 2006-04-24, EvaDafonte)
  • siteDown_splitmerge_general.txt (r1, 5.9 K, 2006-04-10, EvaDafonte): how to split the capture process when one site is unstable and merge it again when it is available
Topic revision: r4 - 2006-04-24 - EvaDafonte