2026-03-12 22:01:38 +01:00
parent 3bd1db26cc
commit 26296b6d6a
336 changed files with 27507 additions and 0 deletions


@@ -0,0 +1,296 @@
## Context
- set up an extract/replicat for 3 tables: ORDERS, PRODUCTS and USERS
- add 2 new tables, TRANSACTIONS and TASKS, to this extract/replicat pair
The aim is to minimize the downtime of the existing extract/replicat pair, so we will proceed in 2 steps:
- create a second, parallel extract/replicat for the 2 new tables
- merge the second extract/replicat into the initial extract/replicat
## Extract setup
Add trandata to tables:
dblogin useridalias YODA
add trandata GREEN.ORDERS
add trandata GREEN.PRODUCTS
add trandata GREEN.USERS
list tables GREEN.*
Define params file for extract:
edit params EXTRAA
extract EXTRAA
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/aa
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m
table GREEN.ORDERS;
table GREEN.PRODUCTS;
table GREEN.USERS;
Add, register and start extract:
dblogin useridalias JEDIPRD
add extract EXTRAA, integrated tranlog, begin now
add exttrail ./dirdat/aa, extract EXTRAA
register extract EXTRAA, database container (YODA)
start extract EXTRAA
info extract EXTRAA detail
## Initial load
Note down the current SCN on source database.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
10138382
On the target DB, create the table structures for ORDERS, PRODUCTS and USERS, then do the initial load:
SCN=10138382
impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_01.log remap_schema=GREEN:RED tables=GREEN.ORDERS,GREEN.PRODUCTS,GREEN.USERS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN
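The same import can be driven from a Data Pump parameter file, which is easier to review and rerun. A sketch equivalent to the one-liner above (the file name `import_01.par` is an assumption; the `MY` log directory object and SCN come from this doc):

```text
# import_01.par -- hypothetical parameter file for the initial load
network_link=GREEN_AT_YODA
remap_schema=GREEN:RED
tables=GREEN.ORDERS,GREEN.PRODUCTS,GREEN.USERS
table_exists_action=TRUNCATE
flashback_scn=10138382
logfile=MY:import_01.log
```

It would then be invoked as `impdp userid=admin/"Secret00!"@togoria/MAUL parfile=import_01.par`.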
## Replicat setup
Define params file for replicat.
Take care with the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clause: it must be set to the SCN of the initial load.
edit params REPLAA
replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10
map YODA.GREEN.ORDERS, target MAUL.RED.ORDERS, filter(@GETENV ('TRANSACTION','CSN') > 10138382);
map YODA.GREEN.PRODUCTS, target MAUL.RED.PRODUCTS, filter(@GETENV ('TRANSACTION','CSN') > 10138382);
map YODA.GREEN.USERS, target MAUL.RED.USERS, filter(@GETENV ('TRANSACTION','CSN') > 10138382);
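A minimal shell sketch of what this filter does: any change committed at or before the instantiation SCN is discarded, because the Data Pump load already contains it. `apply_record` is a hypothetical helper for illustration, not a GoldenGate command:

```shell
#!/bin/sh
# Instantiation SCN captured before the Data Pump export (value from this doc).
INSTANTIATION_SCN=10138382

# Hypothetical helper mimicking filter(@GETENV('TRANSACTION','CSN') > 10138382):
# apply a trail record only if its commit SCN is newer than the initial load.
apply_record() {
  csn=$1
  if [ "$csn" -gt "$INSTANTIATION_SCN" ]; then
    echo "apply CSN $csn"
  else
    echo "skip CSN $csn (already included in the initial load)"
  fi
}

apply_record 10138000   # committed before the export: skipped
apply_record 10140000   # committed after the export: applied
```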
Add and start replicat:
add replicat REPLAA, integrated, exttrail ./dirdat/aa
dblogin useridalias SITHPRD
register replicat REPLAA database
start replicat REPLAA
info all
Wait for the replicat to catch up on the lag:
lag replicat
Once the lag is caught up, you can remove the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clauses:
edit params REPLAA
replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10
map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;
restart replicat REPLAA
## Add 2 new tables to extract/replicat
Add trandata to tables:
dblogin useridalias YODA
add trandata GREEN.TRANSACTIONS
add trandata GREEN.TASKS
list tables GREEN.*
Create a second extract EXTRAB to manage the new tables.
Define extract parameters:
edit params EXTRAB
extract EXTRAB
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/ab
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m
table GREEN.TRANSACTIONS;
table GREEN.TASKS;
Add, register and start extract:
dblogin useridalias JEDIPRD
add extract EXTRAB, integrated tranlog, begin now
add exttrail ./dirdat/ab, extract EXTRAB
register extract EXTRAB, database container (YODA)
start extract EXTRAB
info extract EXTRAB detail
## Initial load for new tables
Note down the current SCN on source database.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
10284191
On the target DB, create the table structures for TRANSACTIONS and TASKS, then do the initial load:
SCN=10284191
impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_02.log remap_schema=GREEN:RED tables=GREEN.TRANSACTIONS,GREEN.TASKS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN
## New replicat setup
Define the replicat parameters.
Pay attention to the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clause: it must be set to the SCN of the initial Data Pump load.
edit params REPLAB
replicat REPLAB
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAB.dsc, purge, megabytes 10
map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 10284191);
map YODA.GREEN.TASKS, target MAUL.RED.TASKS, filter(@GETENV ('TRANSACTION','CSN') > 10284191);
Add and start new replicat:
add replicat REPLAB, integrated, exttrail ./dirdat/ab
dblogin useridalias SITHPRD
register replicat REPLAB database
start replicat REPLAB
info all
Check that the new replicat is running and wait for the lag to reach 0.
## Integrate the 2 new tables to initial extract/replicat: EXTRAA/REPLAA
Add the new tables to the initial extract so that both extracts capture them in parallel (a **double run**):
edit params EXTRAA
extract EXTRAA
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/aa
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m
table GREEN.ORDERS;
table GREEN.PRODUCTS;
table GREEN.USERS;
table GREEN.TRANSACTIONS;
table GREEN.TASKS;
Restart extract EXTRAA:
restart extract EXTRAA
Stop the extracts in this **strict order**:
- **first** extract: EXTRAA
- **second** extract: EXTRAB
> It is **mandatory** to stop extracts in this order.
> **The SCN applied by the first replicat must be less than the SCN applied by the second replicat**, so that the first replicat can start at its last applied position in the trail file. This way, the first replicat does not have to be repositioned in the past.
stop EXTRACT EXTRAA
stop EXTRACT EXTRAB
Now stop both replicats as well:
stop replicat REPLAA
stop replicat REPLAB
Note down the SCN for each extract and prepare a new params file for the initial replicat.
info extract EXTRAA detail
info extract EXTRAB detail
In my case:
- EXTRAA: SCN=10358472
- EXTRAB: SCN=10358544
> The SCN of EXTRAB should be greater than the SCN of EXTRAA
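Before editing the replicat parameters, this ordering can be sanity-checked. A hedged sketch, assuming the two SCNs were read manually from the `info extract ... detail` output above:

```shell
#!/bin/sh
# SCNs noted from `info extract EXTRAA detail` / `info extract EXTRAB detail`.
SCN_EXTRAA=10358472
SCN_EXTRAB=10358544

# The first-stopped extract (EXTRAA) must have the lower SCN; otherwise the
# merged replicat would have to be repositioned in the past.
if [ "$SCN_EXTRAA" -lt "$SCN_EXTRAB" ]; then
  echo "OK: safe to merge (EXTRAA SCN < EXTRAB SCN)"
else
  echo "ERROR: stop order violated, do not merge" >&2
  exit 1
fi
```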
Update the REPLAA replicat parameter file in accordance with the latest SCN applied to the new tables (the SCN of EXTRAB):
edit params REPLAA
replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10
map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;
map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 10358544);
map YODA.GREEN.TASKS , target MAUL.RED.TASKS, filter(@GETENV ('TRANSACTION','CSN') > 10358544);
Start the first extract/replicat:
start extract EXTRAA
start replicat REPLAA
When the lag is zero, you can remove the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clauses:
edit params REPLAA
replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10
map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;
map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;
Restart the first replicat so the new parameters are picked up:
stop replicat REPLAA
start replicat REPLAA
Now all tables are integrated into the first extract/replicat.
## Remove second extract/replicat
dblogin useridalias JEDIPRD
unregister extract EXTRAB database
delete extract EXTRAB
dblogin useridalias MAUL
unregister replicat REPLAB database
delete replicat REPLAB


@@ -0,0 +1,12 @@
select 'ORDERS (target)='||count(1) as "#rows" from RED.ORDERS union
select 'ORDERS (source)='||count(1) as "#rows" from GREEN.ORDERS@GREEN_AT_YODA union
select 'PRODUCTS (target)='||count(1) as "#rows" from RED.PRODUCTS union
select 'PRODUCTS (source)='||count(1) as "#rows" from GREEN.PRODUCTS@GREEN_AT_YODA union
select 'USERS (target)='||count(1) as "#rows" from RED.USERS union
select 'USERS (source)='||count(1) as "#rows" from GREEN.USERS@GREEN_AT_YODA union
select 'TRANSACTIONS (target)='||count(1) as "#rows" from RED.TRANSACTIONS union
select 'TRANSACTIONS (source)='||count(1) as "#rows" from GREEN.TRANSACTIONS@GREEN_AT_YODA union
select 'TASKS (target)='||count(1) as "#rows" from RED.TASKS union
select 'TASKS (source)='||count(1) as "#rows" from GREEN.TASKS@GREEN_AT_YODA
order by 1 asc
/


@@ -0,0 +1,83 @@
-- Create sequences for primary key generation
CREATE SEQUENCE seq_products START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_orders START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_users START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_transactions START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_tasks START WITH 1 INCREMENT BY 1;
-- Create tables with meaningful names and relevant columns
CREATE TABLE products (
id NUMBER PRIMARY KEY,
name VARCHAR2(100),
category VARCHAR2(20),
quantity INTEGER
);
CREATE TABLE orders (
id NUMBER PRIMARY KEY,
description VARCHAR2(255),
status VARCHAR2(20)
);
CREATE TABLE users (
id NUMBER PRIMARY KEY,
created_at DATE DEFAULT SYSDATE,
username VARCHAR2(20),
age INTEGER,
location VARCHAR2(20)
);
CREATE TABLE transactions (
id NUMBER PRIMARY KEY,
amount NUMBER(10,2),
currency VARCHAR2(20)
);
CREATE TABLE tasks (
id NUMBER PRIMARY KEY,
status VARCHAR2(50),
priority INTEGER,
type VARCHAR2(20),
assigned_to VARCHAR2(20)
);
-- Create triggers to auto-generate primary key values using sequences
CREATE OR REPLACE TRIGGER trg_products_pk
BEFORE INSERT ON products
FOR EACH ROW
BEGIN
SELECT seq_products.NEXTVAL INTO :NEW.id FROM dual;
END;
/
CREATE OR REPLACE TRIGGER trg_orders_pk
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
SELECT seq_orders.NEXTVAL INTO :NEW.id FROM dual;
END;
/
CREATE OR REPLACE TRIGGER trg_users_pk
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
SELECT seq_users.NEXTVAL INTO :NEW.id FROM dual;
END;
/
CREATE OR REPLACE TRIGGER trg_transactions_pk
BEFORE INSERT ON transactions
FOR EACH ROW
BEGIN
SELECT seq_transactions.NEXTVAL INTO :NEW.id FROM dual;
END;
/
CREATE OR REPLACE TRIGGER trg_tasks_pk
BEFORE INSERT ON tasks
FOR EACH ROW
BEGIN
SELECT seq_tasks.NEXTVAL INTO :NEW.id FROM dual;
END;
/


@@ -0,0 +1,16 @@
## Delete an integrated replicat
dblogin useridalias SITHPRD
stop replicat REPLAB
unregister replicat REPLAB database
delete replicat REPLAB
info all
## Delete an integrated extract
dblogin useridalias JEDIPRD
stop extract EXTRAB
unregister extract EXTRAB database
delete extract EXTRAB
info all


@@ -0,0 +1,20 @@
--Stop the job (Disable)
BEGIN
DBMS_SCHEDULER.disable('JOB_MANAGE_DATA');
END;
/
--Restart the job
BEGIN
DBMS_SCHEDULER.enable('JOB_MANAGE_DATA');
END;
/
--Fully Remove the Job
BEGIN
DBMS_SCHEDULER.drop_job('JOB_MANAGE_DATA');
END;
/


@@ -0,0 +1,195 @@
## Context
The replicat is ABENDED because of a data issue.
The aim is to re-establish the replicat while minimizing the downtime.
## Provoke a failure on replicat
On the target database, truncate the RED.TRANSACTIONS table:
truncate table RED.TRANSACTIONS;
The replicat will abend because of update/delete operations arriving for the truncated table:
status replicat REPLAA
REPLICAT REPLAA: ABENDED
## Remove the table from the replicat
Comment out the MAP line for the TRANSACTIONS table in the replicat parameters and restart the replicat.
edit params REPLAA
replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10
map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;
-- map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;
start replicat REPLAA
At this moment replicat should be **RUNNING**.
## Create a dedicated extract/replicat for the failed table
Create a second extract EXTRAB to manage the failed table.
Define extract parameters:
edit params EXTRAB
extract EXTRAB
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/ab
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m
table GREEN.TRANSACTIONS;
Add, register and start extract:
dblogin useridalias JEDIPRD
add extract EXTRAB, integrated tranlog, begin now
add exttrail ./dirdat/ab, extract EXTRAB
register extract EXTRAB, database container (YODA)
start extract EXTRAB
info extract EXTRAB detail
> Start a **distribution path** (aka **PUMP**) if the replicat is running on a distant site (GoldenGate deployment).
## Initial load
Note down the current SCN on source database.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
12234159
On the target DB, create the table structure for TRANSACTIONS, then do the initial load:
SCN=12234159
impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_03.log remap_schema=GREEN:RED tables=GREEN.TRANSACTIONS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN
## New replicat setup
Define the replicat parameters.
Pay attention to the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clause: it must be set to the SCN of the initial Data Pump load.
edit params REPLAB
replicat REPLAB
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAB.dsc, purge, megabytes 10
map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 12234159);
Add and start new replicat:
add replicat REPLAB, integrated, exttrail ./dirdat/ab
dblogin useridalias SITHPRD
register replicat REPLAB database
start replicat REPLAB
info all
Check that the new replicat is running and wait for the lag to reach 0.
## Reintegrate table to initial extract/replicat
Now, the TRANSACTIONS table is replicated by EXTRAB/REPLAB, but not by the initial replication EXTRAA/REPLAA.
Let's reintegrate TRANSACTIONS into the initial replication EXTRAA/REPLAA.
Note that TRANSACTIONS was not removed from the EXTRAA definition, so all table changes are still recorded in the EXTRAA trail files.
Stop the extracts in this **strict order**:
- **first** extract: EXTRAA
- **second** extract: EXTRAB
> It is **mandatory** to stop extracts in this order.
> **The SCN applied by the first replicat must be less than the SCN applied by the second replicat**, so that the first replicat can start at its last applied position in the trail file. This way, the first replicat does not have to be repositioned in the past.
stop EXTRACT EXTRAA
stop EXTRACT EXTRAB
Now stop both replicats as well:
stop replicat REPLAA
stop replicat REPLAB
Note down the SCN for each extract and prepare a new params file for the initial replicat.
info extract EXTRAA detail
info extract EXTRAB detail
In my case:
- EXTRAA: SCN=12245651
- EXTRAB: SCN=12245894
> The SCN of EXTRAB should be greater than the SCN of EXTRAA
Update the REPLAA replicat parameter file in accordance with the latest SCN applied to the TRANSACTIONS table (the SCN of EXTRAB):
edit params REPLAA
replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10
map YODA.GREEN.ORDERS, target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS, target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS, target MAUL.RED.USERS ;
map YODA.GREEN.TASKS, target MAUL.RED.TASKS ;
map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 12245894);
Start the first extract/replicat:
start extract EXTRAA
start replicat REPLAA
When the lag is zero, you can remove the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clause from REPLAA.
stop replicat REPLAA
edit params REPLAA
replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10
map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;
map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;
map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
Restart REPLAA replicat:
start replicat REPLAA
Now all tables are integrated into the first extract/replicat.
## Remove second extract/replicat
dblogin useridalias JEDIPRD
unregister extract EXTRAB database
delete extract EXTRAB
dblogin useridalias MAUL
unregister replicat REPLAB database
delete replicat REPLAB
Stop and delete the **distribution path** (aka **PUMP**) if the replicat is running on a distant site (GoldenGate deployment).


@@ -0,0 +1,91 @@
-- Step 1: Create the stored procedure
CREATE OR REPLACE PROCEDURE manage_data IS
new_products INTEGER default 3;
new_orders INTEGER default 10;
new_users INTEGER default 2;
new_transactions INTEGER default 20;
new_tasks INTEGER default 5;
BEGIN
FOR i IN 1..new_products LOOP
INSERT INTO products (id, name, category, quantity)
VALUES (seq_products.NEXTVAL,
DBMS_RANDOM.STRING('A', 10),
DBMS_RANDOM.STRING('A', 20),
TRUNC(DBMS_RANDOM.VALUE(1, 100)));
END LOOP;
FOR i IN 1..new_orders LOOP
INSERT INTO orders (id, description, status)
VALUES (seq_orders.NEXTVAL,
DBMS_RANDOM.STRING('A', 50),
DBMS_RANDOM.STRING('A', 20));
END LOOP;
FOR i IN 1..new_users LOOP
INSERT INTO users (id, created_at, username, age, location)
VALUES (seq_users.NEXTVAL, SYSDATE,
DBMS_RANDOM.STRING('A', 15),
TRUNC(DBMS_RANDOM.VALUE(18, 60)),
DBMS_RANDOM.STRING('A', 20));
END LOOP;
FOR i IN 1..new_transactions LOOP
INSERT INTO transactions (id, amount, currency)
VALUES (seq_transactions.NEXTVAL,
ROUND(DBMS_RANDOM.VALUE(1, 10000), 2),
DBMS_RANDOM.STRING('A', 3));
END LOOP;
FOR i IN 1..new_tasks LOOP
INSERT INTO tasks (id, status, priority, type, assigned_to)
VALUES (seq_tasks.NEXTVAL,
DBMS_RANDOM.STRING('A', 20),
TRUNC(DBMS_RANDOM.VALUE(1, 10)),
DBMS_RANDOM.STRING('A', 20),
DBMS_RANDOM.STRING('A', 15));
END LOOP;
-- Update 2 random rows in each table
UPDATE products SET quantity = TRUNC(DBMS_RANDOM.VALUE(1, 200))
WHERE id IN (SELECT id FROM products ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);
UPDATE orders SET status = DBMS_RANDOM.STRING('A', 20)
WHERE id IN (SELECT id FROM orders ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);
UPDATE users SET age = TRUNC(DBMS_RANDOM.VALUE(18, 75))
WHERE id IN (SELECT id FROM users ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);
UPDATE transactions SET amount = ROUND(DBMS_RANDOM.VALUE(1, 5000), 2)
WHERE id IN (SELECT id FROM transactions ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);
UPDATE tasks SET priority = TRUNC(DBMS_RANDOM.VALUE(1, 10))
WHERE id IN (SELECT id FROM tasks ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);
-- Delete 1 random row from each table
DELETE FROM products WHERE id = (SELECT id FROM products ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
DELETE FROM orders WHERE id = (SELECT id FROM orders ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
DELETE FROM users WHERE id = (SELECT id FROM users ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
DELETE FROM transactions WHERE id = (SELECT id FROM transactions ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
DELETE FROM tasks WHERE id = (SELECT id FROM tasks ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
COMMIT;
END;
/
-- Step 2: Create a scheduled job to run every 10 seconds
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'JOB_MANAGE_DATA',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN manage_data; END;',
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=SECONDLY; INTERVAL=10',
enabled => TRUE
);
END;
/