2026-03-12 21:01:38

AI_generated/PostgreSQL_TLS_01.md (new file)
# 📄 Technical Guide: Setting Up TLS for PostgreSQL

This document consolidates the group’s discussion into a practical, production‑ready reference for configuring and using TLS with PostgreSQL, including server setup, client configuration, password management, and example application code.

---

## 1. Overview

PostgreSQL supports encrypted connections using TLS (still referred to as SSL in its configuration parameters). Enabling TLS ensures secure client–server communication and can optionally enforce client certificate authentication. This guide provides step‑by‑step instructions for server and client configuration, common pitfalls, and usage examples.

---

## 2. Server-Side Configuration

### Certificates

* Required files:
  * `server.key` → private key
  * `server.crt` → server certificate
  * `root.crt` → CA certificate (recommended)
* Sources: internal PKI, Let’s Encrypt, or a self‑signed CA.
* Permissions: `server.key` must be `0600`, or root‑owned with restricted group access (`0640`).

### Placement

* Default paths:
  * `$PGDATA/server.key`
  * `$PGDATA/server.crt`
* Override with `ssl_cert_file`, `ssl_key_file`, `ssl_ca_file` in `postgresql.conf`.

### Configuration

Enable TLS in `postgresql.conf`:

```conf
ssl = on
ssl_ciphers = 'HIGH:!aNULL:!MD5'
ssl_prefer_server_ciphers = on
ssl_min_protocol_version = 'TLSv1.2'
ssl_max_protocol_version = 'TLSv1.3'
```

### Access Control

Configure `pg_hba.conf` (the examples use `md5`; on modern servers prefer `scram-sha-256`):

* Allow TLS but do not require it:

```
host all all 0.0.0.0/0 md5
```
* Require TLS:

```
hostssl all all 0.0.0.0/0 md5
```
* Require TLS + client certificate:

```
hostssl all all 0.0.0.0/0 cert
```

---

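As a sketch, the enforcement change can be scripted; the path here is a stand‑in for the real `$PGDATA/pg_hba.conf`, and the rule uses `scram-sha-256`:

```shell
# Hypothetical path: pg_hba.conf normally lives under $PGDATA.
PG_HBA="./pg_hba.conf.demo"

# Append a rule that requires TLS for all remote connections.
echo "hostssl all all 0.0.0.0/0 scram-sha-256" >> "$PG_HBA"

# Apply without a restart (on the server host, with a running instance):
# pg_ctl reload -D "$PGDATA"    # or, from SQL: SELECT pg_reload_conf();
```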
## 3. Client-Side Configuration

### Basic TLS

```bash
psql "host=db.example.com sslmode=require"
```

### Verify server certificate

```bash
psql "host=db.example.com sslmode=verify-full sslrootcert=/etc/ssl/myca/root.crt"
```

* `sslrootcert` is a **client-side path** to the CA certificate.

### Mutual TLS

```bash
psql "host=db.example.com sslmode=verify-full \
      sslrootcert=/etc/ssl/myca/root.crt \
      sslcert=/etc/ssl/myca/client.crt \
      sslkey=/etc/ssl/myca/client.key"
```
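
A sketch of issuing a client certificate for the mutual‑TLS case above from a private CA (file names and the user `myuser` are assumptions; by default PostgreSQL expects the certificate CN to match the database user name, or a mapping in `pg_ident.conf`):

```shell
# Private CA for testing only.
openssl req -new -x509 -days 365 -nodes \
  -subj "/CN=my-test-ca" -keyout root.key -out root.crt

# Client key and CSR; the CN should match the PostgreSQL user name.
openssl req -new -nodes \
  -subj "/CN=myuser" -keyout client.key -out client.csr

# Sign the CSR with the CA.
openssl x509 -req -in client.csr -days 365 \
  -CA root.crt -CAkey root.key -CAcreateserial -out client.crt

# libpq refuses a group- or world-readable client key.
chmod 600 client.key
```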

### Modes comparison

| Mode | Encrypts | Validates CA | Validates Hostname | Typical Use |
| --- | --- | --- | --- | --- |
| `require` | Yes | No | No | Basic encryption |
| `verify-ca` | Yes | Yes | No | Internal/IP-based |
| `verify-full` | Yes | Yes | Yes | Production |

---

## 4. Password Management

### `.pgpass` file (recommended)

Format:

```
hostname:port:database:username:password
```

Example:

```
db.example.com:5432:mydb:myuser:SuperSecretPassword123
localhost:5432:*:postgres:localdevpass
*:5432:*:replicator:replicaPassword
```

* Location: `~/.pgpass`
* Permissions: `chmod 600 ~/.pgpass`
* Supports wildcards (`*`).
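
A provisioning sketch (path and credentials are placeholders): the file must not be group- or world‑readable or libpq silently ignores it, and a non‑default location can be selected with the `PGPASSFILE` environment variable:

```shell
# Write a pgpass file and lock down its permissions.
PGPASS_FILE="./pgpass.demo"   # normally ~/.pgpass
cat > "$PGPASS_FILE" <<'EOF'
db.example.com:5432:mydb:myuser:SuperSecretPassword123
EOF
chmod 600 "$PGPASS_FILE"

# Point libpq at the non-default location:
# PGPASSFILE="$PGPASS_FILE" psql "host=db.example.com dbname=mydb user=myuser"
```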

### Environment variable

```bash
PGPASSWORD='secret123' psql -U myuser -h localhost -d mydb
```

Less secure; use only for quick commands.

---

## 5. Testing & Verification

* Check server TLS status:

```sql
SHOW ssl;
```
* Inspect negotiated protocol & cipher:

```sql
SELECT * FROM pg_stat_ssl;
```
* External test:

```bash
openssl s_client -connect db.example.com:5432 -starttls postgres
```

---

## 6. Common Pitfalls

| Issue | Cause | Fix |
| --- | --- | --- |
| `FATAL: private key file has group or world access` | Wrong permissions | `chmod 600 server.key` |
| Client rejects certificate | CN/SAN mismatch | Ensure proper DNS SANs |
| TLS not enforced | Used `host` instead of `hostssl` | Update `pg_hba.conf` |
| Backup tools fail | Key readable only by postgres | Store certs outside `$PGDATA` if group-readable access is needed |

---

## 7. Application Example (Python)

Minimal psycopg2 script with TLS `verify-full` and bind variables:

```python
import psycopg2
import psycopg2.extras


def main():
    conn = psycopg2.connect(
        host="db.example.com",
        port=5432,
        dbname="mydb",
        user="myuser",
        password="SuperSecretPassword123",  # better: omit and rely on ~/.pgpass
        sslmode="verify-full",
        sslrootcert="/etc/ssl/myca/root.crt",
        sslcert="/etc/ssl/myca/client.crt",  # optional (mutual TLS)
        sslkey="/etc/ssl/myca/client.key",   # optional (mutual TLS)
    )

    # "with conn" commits on success and rolls back on error;
    # it does not close the connection.
    with conn:
        with conn.cursor(cursor_factory=psycopg2.extras.DictCursor) as cur:
            cur.execute("SELECT id, name FROM demo WHERE id = %s", (1,))
            print("SELECT:", cur.fetchone())

            cur.execute("INSERT INTO demo (id, name) VALUES (%s, %s)", (2, "Inserted Name"))
            cur.execute("UPDATE demo SET name = %s WHERE id = %s", ("Updated Name", 2))
            cur.execute("DELETE FROM demo WHERE id = %s", (2,))

    conn.close()


if __name__ == "__main__":
    main()
```

---

## Appendix / Future Considerations

* Hardened production templates (PKI layout, Ansible roles, CI/CD verification checklist).
* Alternative drivers: psycopg3, SQLAlchemy, async examples.
* Integration with secret management (Kubernetes secrets, systemd, Ansible vault).
* Directory layout best practices for server vs client PKI.

---

✅ This document now serves as a consolidated technical guide for setting up and using TLS with PostgreSQL, including secure password handling and client application integration.
ASPM/aspm_01.md (new file)

```sql
set lines 256

column client_name format a35
column task_name format a30

column last_try_date format a20
column last_good_date format a20
column next_try_date format a20

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24:mi:ss';

select
  client_name, task_name, status,
  to_char(last_try_date,'yyyy-mm-dd hh24:mi:ss') as last_try_date,
  to_char(last_good_date,'yyyy-mm-dd hh24:mi:ss') as last_good_date,
  to_char(next_try_date,'yyyy-mm-dd hh24:mi:ss') as next_try_date
from dba_autotask_task;
```

```
SQL> show parameter optimizer%baselines

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_capture_sql_plan_baselines boolean     FALSE
optimizer_use_sql_plan_baselines     boolean     TRUE
```

```sql
set lines 200
set pages 1000
col parameter_name for a35
col parameter_value for a30
col last_modified for a30
col modified_by for a30

select * from dba_sql_management_config where parameter_name like 'AUTO_SPM_EVOLVE_TASK%';

exec dbms_spm.configure('AUTO_SPM_EVOLVE_TASK','ON');
exec dbms_spm.configure('AUTO_SPM_EVOLVE_TASK','OFF');
```

The list of parameters tunable with `DBMS_SPM.CONFIGURE`:

```sql
col description FOR a40 word_wrapped
SET pages 1000

select parameter_name, parameter_value, description
from dba_advisor_parameters
where task_name = 'SYS_AUTO_SPM_EVOLVE_TASK'
and parameter_value != 'UNUSED';

set lines 256

col DBID noprint
col TASK_ID noprint
col TASK_NAME noprint

select *
from dba_autotask_schedule_control
where dbid = sys_context('userenv','con_dbid')
and task_name = 'Auto SPM Task';

-- last task details
SET LONG 1000000 PAGESIZE 1000 LONGCHUNKSIZE 256 LINESIZE 256

SELECT DBMS_SPM.report_auto_evolve_task
FROM dual;

CREATE TABLE test1(id NUMBER, descr VARCHAR(50)) TABLESPACE users;

DECLARE
  i NUMBER;
  nbrows NUMBER;
BEGIN
  i:=1;
  nbrows:=50000;
  LOOP
    EXIT WHEN i>nbrows;
    IF (i=1) THEN
      INSERT INTO test1 VALUES(1,RPAD('A',49,'A'));
    ELSE
      INSERT INTO test1 VALUES(nbrows,RPAD('A',49,'A'));
    END IF;
    i:=i+1;
  END LOOP;
  COMMIT;
END;
/

CREATE INDEX test1_idx_id ON test1(id) TABLESPACE users;

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname=>user, tabname=>'test1', estimate_percent=>NULL, method_opt=>'FOR ALL INDEXED COLUMNS SIZE 2');

ALTER SYSTEM flush shared_pool;

SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM test1 WHERE id=1;

SELECT sql_id,child_number,plan_hash_value,is_bind_sensitive,is_bind_aware,is_shareable,is_obsolete,sql_plan_baseline
FROM v$sql
WHERE sql_id='4q7zcj8kp9q2r';

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname=>user, tabname=>'test1', estimate_percent=>NULL, method_opt=>'FOR ALL INDEXED COLUMNS SIZE 1');

SELECT
  plan_hash_value,
  cpu_time,
  buffer_gets,
  disk_reads,
  direct_writes,
  rows_processed,
  fetches,
  executions,
  optimizer_cost,
  TO_CHAR(plan_timestamp,'dd-mon-yyyy hh24:mi:ss') AS plan_timestamp
FROM dba_sqlset_statements
WHERE sqlset_name='SYS_AUTO_STS'
AND sql_id='4q7zcj8kp9q2r'
ORDER BY plan_timestamp DESC;

select * from SYS.WRI$_ADV_EXECUTIONS where exec_type='SPM EVOLVE' order by exec_start desc;

select * from SYS.WRI$_ADV_EXECUTIONS
where exec_type='SPM EVOLVE'
and exec_start between timestamp'2025-05-25 09:00:00' and timestamp'2025-05-25 19:00:00'
order by exec_start desc;

select
  sql_id
  ,plan_hash_value
  ,LAST_MODIFIED
from(
  select
    dbms_sql_translator.sql_id(sql_text) sql_id,
    (select to_number(regexp_replace(plan_table_output,'^[^0-9]*'))
     from table(dbms_xplan.display_sql_plan_baseline(sql_handle,plan_name))
     where plan_table_output like 'Plan hash value: %') plan_hash_value,
    bl.*
  from dba_sql_plan_baselines bl
)
;
```
ASPM/asts_01.md (new file)

## Setup

Check whether the Automatic SQL Tuning Set (ASTS) capture task is enabled and get the last execution time of the automatic schedule:

```sql
set lines 200
col task_name for a22

select * from dba_autotask_schedule_control where task_name = 'Auto STS Capture Task';
```

To enable:

```sql
exec dbms_auto_task_admin.enable(client_name => 'Auto STS Capture Task', operation => NULL, window_name => NULL);
```

> There is no way to change the interval or the maximum run time.

To disable:

```sql
exec dbms_auto_task_admin.disable(client_name => 'Auto STS Capture Task', operation => NULL, window_name => NULL);
```

To run the job manually:

```sql
exec dbms_scheduler.run_job('ORA$_ATSK_AUTOSTS');
```

List the last job executions:

```sql
col ACTUAL_START_DATE for a45

select ACTUAL_START_DATE,STATUS from dba_scheduler_job_run_details where JOB_NAME='ORA$_ATSK_AUTOSTS'
order by ACTUAL_START_DATE desc fetch first 10 rows only;
```

More statistics on the task job:

```sql
WITH dsjrd AS
(
  SELECT (TO_DATE('1','j')+run_duration-TO_DATE('1','j'))* 86400 duration_sec,
         (TO_DATE('1','j')+cpu_used-TO_DATE('1','j'))* 86400 cpu_used_sec
  FROM dba_scheduler_job_run_details
  WHERE job_name = 'ORA$_ATSK_AUTOSTS'
)
SELECT MIN(duration_sec) ASTS_Min_Time_Sec,
       MAX(duration_sec) ASTS_Max_Time_Sec,
       AVG(duration_sec) ASTS_Average_Time_Sec,
       AVG(cpu_used_sec) ASTS_Average_CPU_Sec
FROM dsjrd;
```

How many SQL statements are currently in the SYS_AUTO_STS SQL Tuning Set (STS):

```sql
set lines 200
col name for a15
col description for a30
col owner for a10

select name, owner, description, created, last_modified, statement_count from dba_sqlset where name='SYS_AUTO_STS';
```

To purge all statements:

```sql
exec dbms_sqlset.drop_sqlset(sqlset_name => 'SYS_AUTO_STS', sqlset_owner => 'SYS');
```

How much space it takes in your SYSAUX tablespace:

```sql
col table_name for a30
col table_size_mb for 999999.99
col total_size_mb for 999999.99

select
  table_name,
  round(sum(size_b) / 1024 / 1024, 3) as table_size_mb,
  round(max(total_size_b) / 1024 / 1024, 3) as total_size_mb
from
(
  select
    table_name,
    size_b,
    sum(size_b) over() as total_size_b
  from
  (
    select
      segment_name as table_name,
      bytes as size_b
    from dba_segments
    where
      segment_name not like '%WORKSPA%'
      and owner = 'SYS'
      and (segment_name like 'WRI%SQLSET%' or segment_name like 'WRH$_SQLTEXT')
    union all
    select
      t.table_name,
      bytes as size_b
    from dba_segments s,
         (select
            table_name,
            segment_name
          from dba_lobs
          where table_name in ('WRI$_SQLSET_PLAN_LINES', 'WRH$_SQLTEXT')
          and owner = 'SYS'
         ) t
    where s.segment_name = t.segment_name
  )
)
group by table_name
order by table_size_mb desc;
```

## Test case

```sql
DROP TABLE test01 purge;
CREATE TABLE test01(id NUMBER, descr VARCHAR(50)) TABLESPACE users;

DECLARE
  i NUMBER;
  nbrows NUMBER;
BEGIN
  i:=1;
  nbrows:=50000;
  LOOP
    EXIT WHEN i>nbrows;
    IF (i=1) THEN
      INSERT INTO test01 VALUES(1,RPAD('A',49,'A'));
    ELSE
      INSERT INTO test01 VALUES(nbrows,RPAD('A',49,'A'));
    END IF;
    i:=i+1;
  END LOOP;
  COMMIT;
END;
/

CREATE INDEX test01_idx_id ON test01(id);

exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'test01', method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');
```

No histogram will be calculated:

```sql
col column_name for a20

select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='TEST01';
```

```sql
select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1;
```

```
        ID DESCR
---------- --------------------------------------------------
         1 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
```

The optimizer will choose a full scan:

```
SQL_ID  28stunrv2985c, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1

Plan hash value: 262542483

-----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name   | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |      1 |        |       |   136 (100)|      1 |00:00:00.01 |     443 |
|*  1 |  TABLE ACCESS FULL| TEST01 |      1 |  25000 |   732K|   136   (0)|      1 |00:00:00.01 |     443 |
-----------------------------------------------------------------------------------------------------------
```

Wait for the next Auto STS Capture Task schedule or run the job manually.
The SQL_ID will be captured by ASTS:

```sql
col SQLSET_NAME for a30
col PARSING_SCHEMA_NAME for a30

select SQLSET_NAME,PLAN_HASH_VALUE,PARSING_SCHEMA_NAME,BUFFER_GETS from DBA_SQLSET_STATEMENTS where SQL_ID='28stunrv2985c';
```

```
SQLSET_NAME                    PLAN_HASH_VALUE PARSING_SCHEMA_NAME            BUFFER_GETS
------------------------------ --------------- ------------------------------ -----------
SYS_AUTO_STS                         262542483 RED                                    453
```

Gather the stats again:

```sql
exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'test01', method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');
```

Oracle learned from its mistake and will calculate histograms:

```
COLUMN_NAME          NUM_DISTINCT    DENSITY  NUM_NULLS NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
-------------------- ------------ ---------- ---------- ----------- ----------- ---------------
ID                              2     .00001          0           2       50000 FREQUENCY
```

Flush the shared pool and re-execute the query:

```sql
alter system flush shared_pool;

select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1;
```

As expected, the index has been used:

```
SQL_ID  28stunrv2985c, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1

Plan hash value: 4138272685

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |      1 |        |       |     2 (100)|      1 |00:00:00.01 |       4 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST01        |      1 |      1 |     30 |     2   (0)|      1 |00:00:00.01 |       4 |
|*  2 |   INDEX RANGE SCAN                  | TEST01_IDX_ID |      1 |      1 |       |     1   (0)|      1 |00:00:00.01 |       3 |
------------------------------------------------------------------------------------------------------------------------------------
```

Wait for the next Auto STS Capture Task schedule and check that the SQL_ID is now captured with both plans.

> For me, a manual execution of the job does not add the second plan to ASTS.

```sql
select SQLSET_NAME,PLAN_HASH_VALUE,PARSING_SCHEMA_NAME,BUFFER_GETS from DBA_SQLSET_STATEMENTS where SQL_ID='28stunrv2985c';
```

```
SQLSET_NAME                    PLAN_HASH_VALUE PARSING_SCHEMA_NAME            BUFFER_GETS
------------------------------ --------------- ------------------------------ -----------
SYS_AUTO_STS                         262542483 RED                                    453
SYS_AUTO_STS                        4138272685 RED                                    203
```
FDA/ORA-55622.txt (new file)

SQL> show user
USER is "USR"


SQL> delete from SYS_FBA_DDL_COLMAP_26338;
delete from SYS_FBA_DDL_COLMAP_26338
            *
ERROR at line 1:
ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on
table "USR"."SYS_FBA_DDL_COLMAP_26338"


SQL> delete from SYS_FBA_HIST_26338;
delete from SYS_FBA_HIST_26338
            *
ERROR at line 1:
ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on
table "USR"."SYS_FBA_HIST_26338"


ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on table "string"."string"

Reason for the Error:
An attempt was made to write to, alter, or create a unique index on a Flashback Archive internal table.

Solution:
No action required. Only Oracle is allowed to perform such operations on Flashback Archive internal tables.
FDA/fda_01.txt (new file)

alias adm_NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba'


create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret;
alter pluggable database NIHILUS open;
alter pluggable database NIHILUS save state;


alter session set container=NIHILUS;

create bigfile tablespace LIVE_TS datafile size 32M autoextend on next 32M;
create bigfile tablespace ARCHIVE_TS datafile size 32M autoextend on next 32M;

create user adm identified by "secret";
grant sysdba to adm;


create user usr identified by "secret";
grant CONNECT,RESOURCE to usr;
grant alter session to usr;

alter user usr default tablespace LIVE_TS;

alter user usr quota unlimited on LIVE_TS;
alter user usr quota unlimited on ARCHIVE_TS;

alias adm_NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba'
alias usr_NIHILUS='rlwrap sqlplus usr/"secret"@bakura:1521/NIHILUS'


create flashback archive default ARCHIVE_7_DAY
  tablespace ARCHIVE_TS
  quota 1G
  retention 7 DAY;

grant flashback archive on ARCHIVE_7_DAY to usr;
grant flashback archive administer to usr;
grant execute on dbms_flashback_archive to usr;


------------------------------------------------------------------------------
SET LINESIZE 150

COLUMN owner_name FORMAT A20
COLUMN flashback_archive_name FORMAT A22
COLUMN create_time FORMAT A20
COLUMN last_purge_time FORMAT A20

SELECT owner_name,
       flashback_archive_name,
       flashback_archive#,
       retention_in_days,
       TO_CHAR(create_time, 'YYYY-MM-DD HH24:MI:SS') AS create_time,
       TO_CHAR(last_purge_time, 'YYYY-MM-DD HH24:MI:SS') AS last_purge_time,
       status
FROM dba_flashback_archive
ORDER BY owner_name, flashback_archive_name;
------------------------------------------------------------------------------


------------------------------------------------------------------------------
SET LINESIZE 150

COLUMN flashback_archive_name FORMAT A22
COLUMN tablespace_name FORMAT A20
COLUMN quota_in_mb FORMAT A11

SELECT flashback_archive_name,
       flashback_archive#,
       tablespace_name,
       quota_in_mb
FROM dba_flashback_archive_ts
ORDER BY flashback_archive_name;
------------------------------------------------------------------------------


------------------------------------------------------------------------------
SET LINESIZE 150

COLUMN owner_name FORMAT A20
COLUMN table_name FORMAT A20
COLUMN flashback_archive_name FORMAT A22
COLUMN archive_table_name FORMAT A20

SELECT owner_name,
       table_name,
       flashback_archive_name,
       archive_table_name,
       status
FROM dba_flashback_archive_tables
ORDER BY owner_name, table_name;
------------------------------------------------------------------------------


-- Example 1
-------------

create table TAB1 (
  ID number,
  DESCRIPTION varchar2(50),
  constraint TAB_1_PK primary key (id)
);

alter table TAB1 flashback archive ARCHIVE_7_DAY;

insert into TAB1 values (1, 'one');
commit;

update TAB1 set description = 'two' where id = 1;
commit;

update TAB1 set description = 'three' where id = 1;
commit;


------------------------------------------------------------------------------
SET LINESIZE 200

COLUMN versions_startscn FORMAT 99999999999999999
COLUMN versions_starttime FORMAT A32
COLUMN versions_endscn FORMAT 99999999999999999
COLUMN versions_endtime FORMAT A32
COLUMN versions_xid FORMAT A16
COLUMN versions_operation FORMAT A1
COLUMN description FORMAT A11

SELECT versions_startscn,
       versions_starttime,
       versions_endscn,
       versions_endtime,
       versions_xid,
       versions_operation,
       description
FROM tab1
VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP-(1/24) AND SYSTIMESTAMP
WHERE id = 1
ORDER BY versions_startscn;
------------------------------------------------------------------------------


create table TAB1 (d date);
alter table TAB1 flashback archive ARCHIVE_7_DAY;

insert into TAB1 values (sysdate);
commit;

-- infinite_update1.sql
begin
  loop
    update TAB1 set d=sysdate;
    commit;
    dbms_session.sleep(1);
  end loop;
end;
/


alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';

SET LINESIZE 200

COLUMN versions_startscn FORMAT 99999999999999999
COLUMN versions_starttime FORMAT A32
COLUMN versions_endscn FORMAT 99999999999999999
COLUMN versions_endtime FORMAT A32
COLUMN versions_xid FORMAT A16
COLUMN versions_operation FORMAT A1
COLUMN description FORMAT A25

SELECT
  versions_startscn,
  versions_starttime,
  versions_endscn,
  versions_endtime,
  versions_xid,
  versions_operation,
  d
FROM
  TAB1
VERSIONS BETWEEN TIMESTAMP TIMESTAMP'2023-06-17 17:20:10' and TIMESTAMP'2023-06-17 17:20:40'
ORDER BY versions_startscn;


SELECT * from TAB1
AS OF TIMESTAMP TIMESTAMP'2023-06-17 17:05:10';


SELECT * from TAB1
AS OF TIMESTAMP TIMESTAMP'2023-06-17 17:30:49';


EXEC DBMS_SYSTEM.set_ev(si=>163, se=>24797, ev=>10046, le=>8, nm=>'');


-- Example 2
-------------

alter table TAB2 no flashback archive;
drop table TAB2 purge;


create table TAB2 (
  n1 number,
  c1 varchar2(10),
  d1 DATE
);

alter table TAB2 flashback archive ARCHIVE_7_DAY;

insert into TAB2 values(1,'One',TIMESTAMP'2023-01-01 00:00:00');
commit;
insert into TAB2 values(2,'Two',TIMESTAMP'2023-01-01 00:00:00');
commit;
insert into TAB2 values(3,'Three',TIMESTAMP'2023-01-01 00:00:00');
commit;


alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';

SET LINESIZE 200
COLUMN versions_startscn FORMAT 99999999999999999
COLUMN versions_starttime FORMAT A32
COLUMN versions_endscn FORMAT 99999999999999999
COLUMN versions_endtime FORMAT A32
COLUMN versions_xid FORMAT A16
COLUMN versions_operation FORMAT A1
COLUMN description FORMAT A25

SELECT
  versions_startscn,
  versions_starttime,
  versions_endscn,
  versions_endtime,
  versions_xid,
  versions_operation,
  T.*
FROM
  TAB2 VERSIONS BETWEEN TIMESTAMP (systimestamp-3/24) and systimestamp T
where
  N1=1
ORDER BY versions_startscn;

update TAB2 set d1=TIMESTAMP'2023-12-31 23:59:59' where n1=1;
commit;


select * from TAB2 as of timestamp TIMESTAMP'2023-06-18 08:47:20' where N1=1;

select * from TAB2 as of timestamp systimestamp where N1=1;
select * from TAB2 as of scn 4335762 where N1=1;
select * from TAB2 as of scn 4335824 where N1=1;

->
alter table TAB2 add C2 varchar2(3);

update TAB2 set C2='abc' where n1=1;
update TAB2 set C2='***' where n1=1;
commit;
update TAB2 set C2='def' where n1=1;
commit;


alter table TAB2 drop column C2;
alter table TAB2 rename column C1 to C3;

update TAB2 set d1=systimestamp where n1=1;
commit;

update TAB2 set d1=TIMESTAMP'1973-10-05 10:00:00',C3='birthday' where n1=1;
commit;

update TAB2 set d1=systimestamp,C3='right now' where n1=1;
commit;


4336404 18-JUN-23 03.14.59
select * from TAB2 as of timestamp TIMESTAMP'2023-06-18 03:15:00' where N1=1;

select * from TAB2 as of scn 4336403 where N1=1;
select * from TAB2 as of scn 4336404 where N1=1;
select * from TAB2 as of scn 4337054 where N1=1;

select * from TAB2 as of scn 4282896 where N1=1;
select * from TAB2 as of scn 4283027 where N1=1;


-- cleanup
alter table TAB2 no flashback archive;
drop table TAB2 purge;

alter table TAB1 no flashback archive;
drop table TAB1 purge;

drop user USR cascade;

drop flashback archive ARCHIVE_7_DAY;

drop tablespace LIVE_TS including contents and datafiles;
drop tablespace ARCHIVE_TS including contents and datafiles;


-- cleanup
alter pluggable database NIHILUS close instances=ALL;
drop pluggable database NIHILUS including datafiles;
FDA/fda_02.txt (new file)
|
||||
create bigfile tablespace ARCHIVE_TS datafile size 32M autoextend on next 32M;
|
||||
|
||||
create flashback archive default ARCHIVE_7_DAY
|
||||
tablespace ARCHIVE_TS
|
||||
quota 1G
|
||||
retention 7 DAY;
|
||||
|
||||
|
||||
create table TAB1 (
|
||||
ID number,
|
||||
DESCRIPTION varchar2(50),
|
||||
constraint TAB_1_PK primary key (id)
|
||||
);
|
||||
|
||||
alter table TAB2 flashback archive ARCHIVE_7_DAY;
|
||||
|
||||
|
||||
insert into TAB2 values(1,'One',TIMESTAMP'2023-01-01 00:00:00');
|
||||
commit;
|
||||
insert into TAB2 values(2,'Two',TIMESTAMP'2023-01-01 00:00:00');
|
||||
commit;
|
||||
insert into TAB2 values(3,'Three',TIMESTAMP'2023-01-01 00:00:00');
|
||||
commit;
|
||||
|
||||
alter table TAB2 add C2 varchar2(3);
|
||||
|
||||
update TAB2 set C2='abc' where n1=1;
|
||||
update TAB2 set C2='***' where n1=1;
|
||||
commit;
|
||||
update TAB2 set C2='def' where n1=1;
|
||||
commit;
|
||||
|
||||
|
||||
alter table TAB2 drop column C2;
|
||||
alter table TAB2 rename column C1 to C3;
|
||||
|
||||
update TAB2 set d1=systimestamp where n1=1;
|
||||
commit;
|
||||
|
||||
update TAB2 set d1=TIMESTAMP'1973-10-05 10:00:00',C3='birthday' where n1=1;
|
||||
commit;
|
||||
|
||||
update TAB2 set d1=systimestamp,C3='right now' where n1=1;
|
||||
commit;
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
Query: select * from TAB2 as of timestamp systimestamp-1/24+21/24/60 where N1=1;

SQL> @desc SYS_FBA_DDL_COLMAP_26338
    Name                           Null?    Type
    ------------------------------ -------- ----------------------------
 1  STARTSCN                                NUMBER
 2  ENDSCN                                  NUMBER
 3  XID                                     RAW(8)
 4  OPERATION                               VARCHAR2(1)
 5  COLUMN_NAME                             VARCHAR2(255)
 6  TYPE                                    VARCHAR2(255)
 7  HISTORICAL_COLUMN_NAME                  VARCHAR2(255)

SQL> @desc SYS_FBA_HIST_26338
    Name                           Null?    Type
    ------------------------------ -------- ----------------------------
 1  RID                                     VARCHAR2(4000)
 2  STARTSCN                                NUMBER
 3  ENDSCN                                  NUMBER
 4  XID                                     RAW(8)
 5  OPERATION                               VARCHAR2(1)
 6  N1                                      NUMBER
 7  C3                                      VARCHAR2(10)
 8  D1                                      DATE
 9  D_4335990_C2                            VARCHAR2(3)

set lines 200
col STARTSCN for 9999999999
col ENDSCN for 9999999999
col HISTORICAL_COLUMN_NAME for a30
col COLUMN_NAME for a30
col XID noprint
col TYPE for a20

select * from SYS_FBA_DDL_COLMAP_26338 order by STARTSCN;

   STARTSCN      ENDSCN O COLUMN_NAME                    TYPE                 HISTORICAL_COLUMN_NAME
----------- ----------- - ------------------------------ -------------------- ------------------------------
    4297455               N1                             NUMBER               N1
    4297455     4336109   C3                             VARCHAR2(10)         C1
    4297455               D1                             DATE                 D1
    4335662     4335990   D_4335990_C2                   VARCHAR2(3)          C2
    4336109               C3                             VARCHAR2(10)         C3

col RID noprint
col XID noprint
col OPERATION noprint

select * from SYS_FBA_HIST_26338 order by STARTSCN;

   STARTSCN      ENDSCN XID              O         N1 C3         D1                  D_4
----------- ----------- ---------------- - ---------- ---------- ------------------- ---
    4336404     4336452 08000200AE020000 U          1 birthday   1973-10-05 10:00:00
    4298014     4335762 08000700A5020000 U          1 One        2023-12-31 23:59:59
    4336266     4336404 06000400A2020000 U          1 One        2023-06-18 15:12:51
    4335762     4335824 09000A00B4020000 U          1 One        2023-12-31 23:59:59 ***
    4335996     4336266                  U          1 One        2023-12-31 23:59:59
    4335824     4335996 02000300AE020000 U          1 One        2023-12-31 23:59:59 def
    4297497     4335996 0300190095020000 I          2 Two        2023-01-01 00:00:00
    4297630     4335996 0600200092020000 I          3 Three      2023-01-01 00:00:00
    4297491     4298014 0400180090020000 I          1 One        2023-01-01 00:00:00
    4336452     4337054 07001200A1020000 U          1 birthday   2023-06-18 15:15:13

226
FDA/fda_asof_01.txt
Executable file
@@ -0,0 +1,226 @@

TKPROF: Release 21.0.0.0.0 - Development on Sun Jun 18 15:33:28 2023

Copyright (c) 1982, 2021, Oracle and/or its affiliates. All rights reserved.

Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3396.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: 2ajc7pwz9jsx3 Plan Hash: 2536448058

select max(scn)
from
smon_scn_time


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        3      0.00       0.00          0          0          0          0
Execute      3      0.00       0.00          0          0          0          0
Fetch        3      0.00       0.00          0          3          0          3
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        9      0.00       0.00          0          3          0          3

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 3

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=1 pr=0 pw=0 time=20 us starts=1)
         1          1          1   INDEX FULL SCAN (MIN/MAX) SMON_SCN_TIME_SCN_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=6 card=1)(object id 425)

********************************************************************************

SQL ID: 41dzdw7ca24a1 Plan Hash: 1159443182

select count(*)
from
"USR".SYS_FBA_DDL_COLMAP_26338


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1      0.00       0.00          0          6          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3      0.00       0.00          0          6          0          1

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=6 pr=0 pw=0 time=95 us starts=1)
         5          5          5   TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=6 pr=0 pw=0 time=92 us starts=1 cost=3 size=0 card=3)

********************************************************************************

SQL ID: 15fqvf9xff3hm Plan Hash: 3966719185

select HISTORICAL_COLUMN_NAME, COLUMN_NAME
from
"USR".SYS_FBA_DDL_COLMAP_26338 where (STARTSCN<=4336404 or STARTSCN is NULL)
and (ENDSCN > 4336404 or ENDSCN is NULL) order by STARTSCN, ROWID


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        4      0.00       0.00          0          6          0          3
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        6      0.00       0.00          0          6          0          3

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         3          3          3  SORT ORDER BY (cr=6 pr=0 pw=0 time=61 us starts=1 cost=4 size=60 card=3)
         3          3          3   TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=6 pr=0 pw=0 time=42 us starts=1 cost=3 size=60 card=3)

********************************************************************************

SQL ID: 5ty7pv13y930m Plan Hash: 1347681019

select count(*)
from
sys.col_group_usage$ where obj# = :1 and cols = :2 and trunc(sysdate) =
trunc(timestamp) and bitand(flags, :3) = :3 and (cols_range is null and
length(:4) = 0 or cols_range is not null and cols_range =
dbms_auto_index_internal.merge_cols_str(cols_range, :4))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1      0.00       0.00          0          2          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.00       0.00          0          2          0          1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                           67        0.00          0.00
********************************************************************************

SQL ID: g0181my81qz4x Plan Hash: 303836101

select *
from
TAB2 as of scn 4336404 where N1=1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0         16          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0         93          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.01       0.01          0        109          0          1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 84
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  VIEW (cr=110 pr=0 pw=0 time=102 us starts=1 cost=282 size=58 card=2)
         1          1          1   UNION-ALL (cr=110 pr=0 pw=0 time=100 us starts=1)
         1          1          1    PARTITION RANGE SINGLE PARTITION: 1 1 (cr=100 pr=0 pw=0 time=100 us starts=1 cost=274 size=29 card=1)
         1          1          1     TABLE ACCESS FULL SYS_FBA_HIST_26338 PARTITION: 1 1 (cr=100 pr=0 pw=0 time=94 us starts=1 cost=274 size=29 card=1)
         0          0          0    FILTER (cr=10 pr=0 pw=0 time=571 us starts=1)
         1          1          1     NESTED LOOPS OUTER (cr=10 pr=0 pw=0 time=571 us starts=1 cost=8 size=44 card=1)
         1          1          1      TABLE ACCESS FULL TAB2 (cr=7 pr=0 pw=0 time=528 us starts=1 cost=6 size=16 card=1)
         1          1          1      TABLE ACCESS BY INDEX ROWID BATCHED SYS_FBA_TCRV_26338 (cr=3 pr=0 pw=0 time=24 us starts=1 cost=2 size=28 card=1)
         3          3          3       INDEX RANGE SCAN SYS_FBA_TCRV_IDX1_26338 (cr=1 pr=0 pw=0 time=8 us starts=1 cost=1 size=0 card=1)(object id 26344)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                            4        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2       12.15         12.16


********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0         16          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0         93          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.01       0.01          0        109          0          1

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       3        0.00          0.00
  SQL*Net message from client                     3       47.08         59.24
  PGA memory operation                            4        0.00          0.00

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        5      0.00       0.00          0          0          0          0
Execute      6      0.00       0.00          0          0          0          0
Fetch        9      0.00       0.00          0         17          0          8
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       20      0.00       0.00          0         17          0          8

Misses in library cache during parse: 3

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                           67        0.00          0.00

1 user SQL statements in session.
6 internal SQL statements in session.
7 SQL statements in session.
********************************************************************************
Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3396.trc
Trace file compatibility: 12.2.0.0
Sort options: default

1 session in tracefile.
1 user SQL statements in trace file.
6 internal SQL statements in trace file.
7 SQL statements in trace file.
5 unique SQL statements in trace file.
218 lines in trace file.
12 elapsed seconds in trace file.

324
FDA/fda_asof_02.txt
Executable file
@@ -0,0 +1,324 @@

TKPROF: Release 21.0.0.0.0 - Development on Sun Jun 18 15:55:49 2023

Copyright (c) 1982, 2021, Oracle and/or its affiliates. All rights reserved.

Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3510.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: 41dzdw7ca24a1 Plan Hash: 1159443182

select count(*)
from
"USR".SYS_FBA_DDL_COLMAP_26338


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1      0.00       0.00          0          6          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3      0.00       0.00          0          6          0          1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=6 pr=0 pw=0 time=67 us starts=1)
         5          5          5   TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=6 pr=0 pw=0 time=59 us starts=1 cost=3 size=0 card=3)

********************************************************************************

SQL ID: 2syvqzbxp4k9z Plan Hash: 533170135

select u.name, o.name, a.interface_version#, o.obj#
from
association$ a, user$ u, obj$ o where a.obj# = :1
and a.property = :2
and a.statstype# = o.obj# and
u.user# = o.owner#


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        6      0.00       0.00          0          0          0          0
Execute      6      0.00       0.00          0          0          0          0
Fetch        6      0.00       0.00          0         12          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       18      0.00       0.00          0         12          0          0

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  HASH JOIN (cr=2 pr=0 pw=0 time=39 us starts=1 cost=5 size=62 card=1)
         0          0          0   NESTED LOOPS (cr=2 pr=0 pw=0 time=35 us starts=1 cost=5 size=62 card=1)
         0          0          0    STATISTICS COLLECTOR (cr=2 pr=0 pw=0 time=33 us starts=1)
         0          0          0     HASH JOIN (cr=2 pr=0 pw=0 time=26 us starts=1 cost=4 size=44 card=1)
         0          0          0      NESTED LOOPS (cr=2 pr=0 pw=0 time=26 us starts=1 cost=4 size=44 card=1)
         0          0          0       STATISTICS COLLECTOR (cr=2 pr=0 pw=0 time=26 us starts=1)
         0          0          0        TABLE ACCESS FULL ASSOCIATION$ (cr=2 pr=0 pw=0 time=24 us starts=1 cost=2 size=16 card=1)
         0          0          0       TABLE ACCESS BY INDEX ROWID BATCHED OBJ$ (cr=0 pr=0 pw=0 time=0 us starts=0 cost=2 size=28 card=1)
         0          0          0        INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=0 card=1)(object id 36)
         0          0          0      INDEX FAST FULL SCAN I_OBJ2 (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=28 card=1)(object id 37)
         0          0          0    TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=18 card=1)
         0          0          0     INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us starts=0 cost=0 size=0 card=1)(object id 11)
         0          0          0   TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=18 card=1)

********************************************************************************

SQL ID: 2xyb5d6xg9srh Plan Hash: 785096182

select a.default_cpu_cost, a.default_io_cost
from
association$ a where a.obj# = :1
and a.property = :2


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        6      0.00       0.00          0          0          0          0
Execute      6      0.00       0.00          0          0          0          0
Fetch        6      0.00       0.00          0         12          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       18      0.00       0.00          0         12          0          0

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  TABLE ACCESS FULL ASSOCIATION$ (cr=2 pr=0 pw=0 time=16 us starts=1 cost=2 size=18 card=1)

********************************************************************************

SQL ID: 476v06tzdhkhc Plan Hash: 3966719185

select HISTORICAL_COLUMN_NAME, COLUMN_NAME
from
"USR".SYS_FBA_DDL_COLMAP_26338 where (STARTSCN<=
TIMESTAMP_TO_SCN(systimestamp-1/24+21/24/60) or STARTSCN is NULL) and
(ENDSCN > TIMESTAMP_TO_SCN(systimestamp-1/24+21/24/60) or ENDSCN is NULL)
order by STARTSCN, ROWID


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        4      0.00       0.00          0          6          0          3
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        6      0.00       0.00          0          6          0          3

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         3          3          3  SORT ORDER BY (cr=13 pr=0 pw=0 time=1075 us starts=1 cost=4 size=20 card=1)
         3          3          3   TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=13 pr=0 pw=0 time=1061 us starts=1 cost=3 size=20 card=1)

********************************************************************************

SQL ID: 4jrkd9ymavb8x Plan Hash: 3631124065

select max(time_mp)
from
smon_scn_time


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse       20      0.00       0.00          0          0          0          0
Execute     20      0.00       0.00          0          0          0          0
Fetch       20      0.00       0.00          0         20          0         20
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       60      0.00       0.00          0         20          0         20

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=1 pr=0 pw=0 time=19 us starts=1)
         1          1          1   INDEX FULL SCAN (MIN/MAX) SMON_SCN_TIME_TIM_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=7 card=1)(object id 424)

********************************************************************************

SQL ID: 2ajc7pwz9jsx3 Plan Hash: 2536448058

select max(scn)
from
smon_scn_time


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        2      0.00       0.00          0          0          0          0
Execute      2      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0          2          0          2
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        6      0.00       0.00          0          2          0          2

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=1 pr=0 pw=0 time=12 us starts=1)
         1          1          1   INDEX FULL SCAN (MIN/MAX) SMON_SCN_TIME_SCN_IDX (cr=1 pr=0 pw=0 time=6 us starts=1 cost=1 size=6 card=1)(object id 425)

********************************************************************************

SQL ID: 5ty7pv13y930m Plan Hash: 1347681019

select count(*)
from
sys.col_group_usage$ where obj# = :1 and cols = :2 and trunc(sysdate) =
trunc(timestamp) and bitand(flags, :3) = :3 and (cols_range is null and
length(:4) = 0 or cols_range is not null and cols_range =
dbms_auto_index_internal.merge_cols_str(cols_range, :4))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1      0.00       0.00          0          2          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.00       0.00          0          2          0          1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                           75        0.00          0.00
********************************************************************************

SQL ID: 36g2pydn13abk Plan Hash: 2739728740

select *
from
TAB2 as of timestamp systimestamp-1/24+21/24/60 where N1=1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0        109          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.01       0.01          0        109          0          1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 84
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  VIEW (cr=123 pr=0 pw=0 time=1775 us starts=1 cost=282 size=58 card=2)
         1          1          1   UNION-ALL (cr=123 pr=0 pw=0 time=1772 us starts=1)
         1          1          1    FILTER (cr=112 pr=0 pw=0 time=1769 us starts=1)
         1          1          1     PARTITION RANGE SINGLE PARTITION: 1 1 (cr=111 pr=0 pw=0 time=1530 us starts=1 cost=274 size=29 card=1)
         1          1          1      TABLE ACCESS FULL SYS_FBA_HIST_26338 PARTITION: 1 1 (cr=111 pr=0 pw=0 time=1526 us starts=1 cost=274 size=29 card=1)
         0          0          0    FILTER (cr=11 pr=0 pw=0 time=410 us starts=1)
         1          1          1     NESTED LOOPS OUTER (cr=10 pr=0 pw=0 time=245 us starts=1 cost=8 size=44 card=1)
         1          1          1      TABLE ACCESS FULL TAB2 (cr=7 pr=0 pw=0 time=215 us starts=1 cost=6 size=16 card=1)
         1          1          1      TABLE ACCESS BY INDEX ROWID BATCHED SYS_FBA_TCRV_26338 (cr=3 pr=0 pw=0 time=20 us starts=1 cost=2 size=28 card=1)
         3          3          3       INDEX RANGE SCAN SYS_FBA_TCRV_IDX1_26338 (cr=1 pr=0 pw=0 time=7 us starts=1 cost=1 size=0 card=1)(object id 26344)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                            3        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        2.20          2.20


********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0        109          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.01       0.01          0        109          0          1

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       3        0.00          0.00
  SQL*Net message from client                     3        5.67          7.88
  PGA memory operation                            3        0.00          0.00

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse       36      0.00       0.00          0          0          0          0
Execute     37      0.00       0.00          0          0          0          0
Fetch       40      0.00       0.00          0         60          0         27
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total      113      0.01       0.01          0         60          0         27

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PGA memory operation                           75        0.00          0.00

1 user SQL statements in session.
7 internal SQL statements in session.
8 SQL statements in session.
********************************************************************************
Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3510.trc
Trace file compatibility: 12.2.0.0
Sort options: default

1 session in tracefile.
1 user SQL statements in trace file.
7 internal SQL statements in trace file.
8 SQL statements in trace file.
8 unique SQL statements in trace file.
509 lines in trace file.
2 elapsed seconds in trace file.

9
FDA/infinite_update1.sql
Normal file
@@ -0,0 +1,9 @@
begin
  loop
    update TAB1 set d=sysdate;
    commit;
    dbms_session.sleep(1);
  end loop;
end;
/

BIN
Golden_Gate/.DS_Store
vendored
Normal file
Binary file not shown.
47
Golden_Gate/Clean_up_old_Extracts_01.txt
Normal file
@@ -0,0 +1,47 @@
Clean up old Extracts
---------------------
https://www.dbasolved.com/2022/04/clean-up-old-extracts/

0. Identify captures and log miner sessions
-------------------------------------------
set linesize 150
col capture_name format a20
select capture_name from dba_capture;

set linesize 130
col session_name format a20
col global_db_name format a45
select SESSION#,CLIENT#,SESSION_NAME,DB_ID,GLOBAL_DB_NAME from system.LOGMNR_SESSION$;

1. Drop the extracts
---------------------
exec DBMS_CAPTURE_ADM.DROP_CAPTURE ('<MY_CAPTURE_01>');

2. Drop queue tables from log miner
-----------------------------------

set linesize 250
col owner format a30
col name format a30
col queue_table format a30
select owner, name, queue_table from dba_queues where owner = 'OGGADMIN';

# delete in automatic mode
declare
  v_queue_name varchar2(65);  -- owner (30) + '.' + queue_table (30) can exceed 60
begin
  for i in (select queue_table, owner from dba_queues where owner = 'OGGADMIN')
  loop
    v_queue_name := i.owner||'.'||i.queue_table;
    DBMS_AQADM.DROP_QUEUE_TABLE(queue_table => v_queue_name, force => TRUE);
  end loop;
end;
/

# or delete one by one
exec DBMS_AQADM.DROP_QUEUE_TABLE(queue_table => '<OWNER>.<TABLE_NAME>', force => TRUE);
# note that queue tables with the AQ$_ prefix will be deleted automatically
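
# sanity check after the cleanup (a sketch; assumes the standard
# DBA_CAPTURE and DBA_QUEUE_TABLES dictionary views)
select capture_name from dba_capture;
select owner, queue_table from dba_queue_tables where owner = 'OGGADMIN';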
234
Golden_Gate/distrib_certif_01.md
Normal file
@@ -0,0 +1,234 @@
### Sources

- [OGG Documentation](https://docs.oracle.com/en/middleware/goldengate/core/19.1/securing/securing-deployments.html#GUID-472E5C9C-85FC-4B87-BB90-2CE877F41DC0)
- [Markdown Basic Syntax](https://www.markdownguide.org/basic-syntax/)

### Creating a Self-Signed Root Certificate

Create an automatic login wallet

orapki wallet create \
-wallet /app/oracle/staging_area/wallet_dir/rootCA \
-pwd "LuxAeterna12;" \
-auto_login

Create self-signed certificate

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/rootCA \
-pwd "LuxAeterna12;" \
-dn "CN=RootCA" \
-keysize 2048 \
-self_signed \
-validity 7300 \
-sign_alg sha256

Check the contents of the wallet

orapki wallet display \
-wallet /app/oracle/staging_area/wallet_dir/rootCA \
-pwd "LuxAeterna12;"

Export the certificate to a .pem file

orapki wallet export \
-wallet /app/oracle/staging_area/wallet_dir/rootCA \
-pwd "LuxAeterna12;" \
-dn "CN=RootCA" \
-cert /app/oracle/staging_area/export/rootCA_Cert.pem

### Creating Server Certificates

#### For [exegol] server

Create an automatic login wallet

orapki wallet create \
-wallet /app/oracle/staging_area/wallet_dir/exegol \
-pwd "TabulaRasa32;" \
-auto_login

Add a Certificate Signing Request (CSR) to the server’s wallet

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/exegol \
-pwd "TabulaRasa32;" \
-dn "CN=exegol.swgalaxy" \
-keysize 2048

Export the CSR to a .pem file

orapki wallet export \
-wallet /app/oracle/staging_area/wallet_dir/exegol \
-pwd "TabulaRasa32;" \
-dn "CN=exegol.swgalaxy" \
-request /app/oracle/staging_area/export/exegol_req.pem

Using the CSR, create a signed server or client certificate and sign it using the root certificate.
Assign a unique serial number to each certificate.
orapki cert create \
|
||||
-wallet /app/oracle/staging_area/wallet_dir/rootCA \
|
||||
-pwd "LuxAeterna12;" \
|
||||
-request /app/oracle/staging_area/export/exegol_req.pem \
|
||||
-cert /app/oracle/staging_area/export/exegol_Cert.pem \
|
||||
-serial_num 20 \
|
||||
-validity 375 \
|
||||
-sign_alg sha256
|
||||
|
||||
Add the root certificate into the client’s or server’s wallet as a trusted certificate.
|
||||
|
||||
orapki wallet add \
|
||||
-wallet /app/oracle/staging_area/wallet_dir/exegol \
|
||||
-pwd "TabulaRasa32;" \
|
||||
-trusted_cert \
|
||||
-cert /app/oracle/staging_area/export/rootCA_Cert.pem
|
||||
|
||||
Add the server or client certificate as a user certificate into the client’s or server’s wallet
|
||||
|
||||
orapki wallet add \
|
||||
-wallet /app/oracle/staging_area/wallet_dir/exegol \
|
||||
-pwd "TabulaRasa32;" \
|
||||
-user_cert \
|
||||
-cert /app/oracle/staging_area/export/exegol_Cert.pem
|
||||
|
||||
Check the contents of the wallet
|
||||
|
||||
orapki wallet display \
|
||||
-wallet /app/oracle/staging_area/wallet_dir/exegol \
|
||||
-pwd "TabulaRasa32;"
|
||||
|
||||
|
||||
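The CSR round-trip above can be smoke-tested with openssl alone (a sketch with hypothetical throwaway files; the orapki wallet itself is not involved):

```shell
# Hypothetical openssl mirror of the flow: root CA, server CSR, signing, verification.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=RootCA" -days 1
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=exegol.swgalaxy"
# Sign the CSR with the root: unique serial, 375-day validity, SHA-256,
# matching the orapki cert create options above.
openssl x509 -req -in srv.csr -CA ca.pem -CAkey ca.key \
  -set_serial 20 -days 375 -sha256 -out srv.pem
# The signed certificate must chain to the root:
openssl verify -CAfile ca.pem srv.pem
```

`openssl verify` prints `srv.pem: OK` when the chain is intact.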
#### For [helska] server

Create an automatic login wallet

orapki wallet create \
-wallet /app/oracle/staging_area/wallet_dir/helska \
-pwd "SicSemper81;" \
-auto_login

Add a Certificate Signing Request (CSR) to the server’s wallet

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/helska \
-pwd "SicSemper81;" \
-dn "CN=helska.swgalaxy" \
-keysize 2048

Export the CSR to a .pem file

orapki wallet export \
-wallet /app/oracle/staging_area/wallet_dir/helska \
-pwd "SicSemper81;" \
-dn "CN=helska.swgalaxy" \
-request /app/oracle/staging_area/export/helska_req.pem

Using the CSR, create a server or client certificate signed by the root certificate.
Assign a unique serial number to each certificate.

orapki cert create \
-wallet /app/oracle/staging_area/wallet_dir/rootCA \
-pwd "LuxAeterna12;" \
-request /app/oracle/staging_area/export/helska_req.pem \
-cert /app/oracle/staging_area/export/helska_Cert.pem \
-serial_num 21 \
-validity 375 \
-sign_alg sha256

Add the root certificate into the client’s or server’s wallet as a trusted certificate.

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/helska \
-pwd "SicSemper81;" \
-trusted_cert \
-cert /app/oracle/staging_area/export/rootCA_Cert.pem

Add the server or client certificate as a user certificate into the client’s or server’s wallet

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/helska \
-pwd "SicSemper81;" \
-user_cert \
-cert /app/oracle/staging_area/export/helska_Cert.pem

Check the contents of the wallet

orapki wallet display \
-wallet /app/oracle/staging_area/wallet_dir/helska \
-pwd "SicSemper81;"

### Creating a Distribution Server User Certificate

Create an automatic login wallet

orapki wallet create \
-wallet /app/oracle/staging_area/wallet_dir/dist_client \
-pwd "LapsusLinguae91" \
-auto_login

Add a Certificate Signing Request (CSR) to the wallet

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/dist_client \
-pwd "LapsusLinguae91" \
-dn "CN=dist_client" \
-keysize 2048

Export the CSR to a .pem file

orapki wallet export \
-wallet /app/oracle/staging_area/wallet_dir/dist_client \
-pwd "LapsusLinguae91" \
-dn "CN=dist_client" \
-request /app/oracle/staging_area/export/dist_client_req.pem

Using the CSR, create a certificate signed by the root certificate.
Assign a unique serial number to each certificate.

orapki cert create \
-wallet /app/oracle/staging_area/wallet_dir/rootCA \
-pwd "LuxAeterna12;" \
-request /app/oracle/staging_area/export/dist_client_req.pem \
-cert /app/oracle/staging_area/export/dist_client_Cert.pem \
-serial_num 22 \
-validity 375 \
-sign_alg sha256

Add the root certificate into the client’s or server’s wallet as a trusted certificate.

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/dist_client \
-pwd "LapsusLinguae91" \
-trusted_cert \
-cert /app/oracle/staging_area/export/rootCA_Cert.pem

Add the server or client certificate as a user certificate into the client’s or server’s wallet

orapki wallet add \
-wallet /app/oracle/staging_area/wallet_dir/dist_client \
-pwd "LapsusLinguae91" \
-user_cert \
-cert /app/oracle/staging_area/export/dist_client_Cert.pem

Check the contents of the wallet

orapki wallet display \
-wallet /app/oracle/staging_area/wallet_dir/dist_client \
-pwd "LapsusLinguae91"

### Trusted Certificates

Both the Distribution Server and the Receiver Server need certificates.
- The Distribution Server uses the certificate in the client wallet configured in the outbound section
- The Receiver Server uses the certificate in the wallet at the inbound wallet location

For self-signed certificates, choose one of the following:
- Have both certificates signed by the same root certificate
- Add the other side’s certificate to the local wallet as a trusted certificate

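Before distributing wallets it is worth confirming each certificate’s validity window; a hedged openssl sketch (throwaway file names, not the wallets above):

```shell
# Hypothetical pre-distribution check: generate a 375-day certificate,
# print its validity window, and fail if it expires within 30 days.
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.pem \
  -subj "/CN=check" -days 375
openssl x509 -in t.pem -noout -dates
openssl x509 -in t.pem -noout -checkend $((30*24*3600)) && echo "certificate OK"
```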

---

Golden_Gate/example_01/add_2_tables.md

## Context

- setup extract/replicat for 3 tables: ORDERS, PRODUCTS and USERS
- add 2 new tables TRANSACTIONS and TASKS to this extract/replicat pair

The aim is to minimize the downtime for the extract/replicat pair, so we will proceed in 2 steps:
- create a second parallel extract/replicat for the 2 new tables
- merge the second extract/replicat into the initial extract/replicat

## Extract setup

Add trandata to tables:

dblogin useridalias YODA
add trandata GREEN.ORDERS
add trandata GREEN.PRODUCTS
add trandata GREEN.USERS
list tables GREEN.*

Define params file for extract:

edit params EXTRAA

extract EXTRAA
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/aa
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m
------------------------------------
table GREEN.ORDERS;
table GREEN.PRODUCTS;
table GREEN.USERS;

Add, register and start extract:

dblogin useridalias JEDIPRD
add extract EXTRAA, integrated tranlog, begin now
add exttrail ./dirdat/aa, extract EXTRAA
register extract EXTRAA, database container (YODA)
start extract EXTRAA
info extract EXTRAA detail

## Initial load

Note down the current SCN on the source database.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
10138382

On the target DB create the table structures for ORDERS, PRODUCTS, USERS and do the initial load:

SCN=10138382
impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_01.log remap_schema=GREEN:RED tables=GREEN.ORDERS,GREEN.PRODUCTS,GREEN.USERS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN

## Replicat setup

Define the params file for the replicat.
Pay attention to the `filter(@GETENV ('TRANSACTION','CSN')` clause: it must be set to the SCN of the initial load.

edit params REPLAA

replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10

map YODA.GREEN.ORDERS, target MAUL.RED.ORDERS, filter(@GETENV ('TRANSACTION','CSN') > 10138382);
map YODA.GREEN.PRODUCTS, target MAUL.RED.PRODUCTS, filter(@GETENV ('TRANSACTION','CSN') > 10138382);
map YODA.GREEN.USERS, target MAUL.RED.USERS, filter(@GETENV ('TRANSACTION','CSN') > 10138382);

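The CSN filter can be read as a simple predicate: only transactions committed strictly after the instantiation SCN are applied, since anything at or below that SCN is already contained in the Data Pump export. A sketch of that predicate (plain awk over sample lines, not GoldenGate syntax):

```shell
# Model of filter(@GETENV('TRANSACTION','CSN') > 10138382):
# input lines are "<csn> <operation>"; only rows with csn > load SCN pass.
LOAD_SCN=10138382
printf '%s\n' '10138382 update' '10138383 insert' '10140000 delete' |
  awk -v scn="$LOAD_SCN" '$1 > scn'
```

The first sample row is dropped: a change committed at exactly the load SCN was already captured by the flashback export.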
Add and start replicat:

add replicat REPLAA, integrated, exttrail ./dirdat/aa

dblogin useridalias SITHPRD
register replicat REPLAA database
start replicat REPLAA
info all

Wait for the replicat to catch up on the lag:

lag replicat

Once caught up, you can remove the `filter(@GETENV ('TRANSACTION','CSN')` clause

edit params REPLAA

replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10

map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;

restart replicat REPLAA

## Add 2 new tables to extract/replicat

Add trandata to tables:

dblogin useridalias YODA
add trandata GREEN.TRANSACTIONS
add trandata GREEN.TASKS
list tables GREEN.*

Create a second extract EXTRAB to manage the new tables.
Define extract parameters:

edit params EXTRAB

extract EXTRAB
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/ab
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m

table GREEN.TRANSACTIONS;
table GREEN.TASKS;

Add, register and start extract:

dblogin useridalias JEDIPRD
add extract EXTRAB, integrated tranlog, begin now
add exttrail ./dirdat/ab, extract EXTRAB
register extract EXTRAB, database container (YODA)
start extract EXTRAB
info extract EXTRAB detail

## Initial load for new tables

Note down the current SCN on the source database.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
10284191

On the target DB create the table structures for TRANSACTIONS, TASKS and do the initial load:

SCN=10284191
impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_02.log remap_schema=GREEN:RED tables=GREEN.TRANSACTIONS,GREEN.TASKS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN

## New replicat setup

Define the replicat parameters.
Pay attention to the `filter(@GETENV ('TRANSACTION','CSN')` clause: it must be set to the SCN of the initial Data Pump load.

edit params REPLAB

replicat REPLAB
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAB.dsc, purge, megabytes 10

map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 10284191);
map YODA.GREEN.TASKS, target MAUL.RED.TASKS, filter(@GETENV ('TRANSACTION','CSN') > 10284191);

Add and start new replicat:

add replicat REPLAB, integrated, exttrail ./dirdat/ab
dblogin useridalias SITHPRD
register replicat REPLAB database
start replicat REPLAB
info all

Check that the new replicat is running and wait for the lag to reach 0.

## Integrate the 2 new tables to initial extract/replicat: EXTRAA/REPLAA

Add the new tables to the initial extract for a **double run**:

edit params EXTRAA

extract EXTRAA
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/aa
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m

table GREEN.ORDERS;
table GREEN.PRODUCTS;
table GREEN.USERS;
table GREEN.TRANSACTIONS;
table GREEN.TASKS;

Restart extract EXTRAA:

restart extract EXTRAA

Stop the extracts in **strictly this order**:
- **first** extract: EXTRAA
- **second** extract: EXTRAB

> It is **mandatory** to stop the extracts in this order.
> **The SCN applied to the first replicat's tables must be less than the SCN on the second replicat**, so that the first replicat can start at the last applied position in the trail file. This way, the first replicat never has to be repositioned into the past.

stop EXTRACT EXTRAA
stop EXTRACT EXTRAB

Now stop both replicats as well:

stop replicat REPLAA
stop replicat REPLAB

Note down the SCN for each extract and prepare a new params file for the initial replicat.

info extract EXTRAA detail
info extract EXTRAB detail

In my case:
- EXTRAA: SCN=10358472
- EXTRAB: SCN=10358544

> The SCN of EXTRAB should be greater than the SCN of EXTRAA

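The ordering invariant can be checked mechanically before switching over; a sketch using the SCNs noted above:

```shell
# Invariant from the stop order: EXTRAB (stopped second) must have the higher SCN.
SCN_EXTRAA=10358472
SCN_EXTRAB=10358544
if [ "$SCN_EXTRAB" -gt "$SCN_EXTRAA" ]; then
  echo "OK: safe to filter the new tables with CSN > $SCN_EXTRAB"
else
  echo "ERROR: stop order violated" >&2
  exit 1
fi
```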
Update the REPLAA parameter file in accordance with the latest SCN applied to the new tables (the SCN of EXTRAB):

edit params REPLAA

replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10

map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;

map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 10358544);
map YODA.GREEN.TASKS , target MAUL.RED.TASKS, filter(@GETENV ('TRANSACTION','CSN') > 10358544);

Start the first extract/replicat:

start extract EXTRAA
start replicat REPLAA

When the lag is zero you can remove the `filter(@GETENV ('TRANSACTION','CSN')` clause

edit params REPLAA

replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10

map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;

map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;

Restart the first replicat:

start replicat REPLAA

Now all tables are integrated in the first extract/replicat.

## Remove second extract/replicat

dblogin useridalias JEDIPRD
unregister extract EXTRAB database
delete extract EXTRAB

dblogin useridalias MAUL
unregister replicat REPLAB database
delete replicat REPLAB


---

Golden_Gate/example_01/count_lines.sql

select 'ORDERS (target)='||count(1) as "#rows" from RED.ORDERS union
select 'ORDERS (source)='||count(1) as "#rows" from GREEN.ORDERS@GREEN_AT_YODA union
select 'PRODUCTS (target)='||count(1) as "#rows" from RED.PRODUCTS union
select 'PRODUCTS (source)='||count(1) as "#rows" from GREEN.PRODUCTS@GREEN_AT_YODA union
select 'USERS (target)='||count(1) as "#rows" from RED.USERS union
select 'USERS (source)='||count(1) as "#rows" from GREEN.USERS@GREEN_AT_YODA union
select 'TRANSACTIONS (target)='||count(1) as "#rows" from RED.TRANSACTIONS union
select 'TRANSACTIONS (source)='||count(1) as "#rows" from GREEN.TRANSACTIONS@GREEN_AT_YODA union
select 'TASKS (target)='||count(1) as "#rows" from RED.TASKS union
select 'TASKS (source)='||count(1) as "#rows" from GREEN.TASKS@GREEN_AT_YODA
order by 1 asc
/
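The query's output pairs can also be compared mechanically; a hedged sketch over sample lines in the same `<TABLE> (<side>)=<count>` format (the counts here are hypothetical):

```shell
# Hypothetical post-check: pair up source/target counts per table
# from lines like "ORDERS (source)=42"; print any mismatched table.
printf '%s\n' 'ORDERS (source)=42' 'ORDERS (target)=42' \
              'TASKS (source)=10' 'TASKS (target)=9' |
  awk -F'[()=]' '{ gsub(/ $/, "", $1); tab[$1]=1; cnt[$1,$2]=$4 }
       END { for (t in tab)
               if (cnt[t,"source"] != cnt[t,"target"])
                 print "MISMATCH: " t }'
```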

---

Golden_Gate/example_01/cr_tables.sql

-- Create sequences for primary key generation
CREATE SEQUENCE seq_products START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_orders START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_users START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_transactions START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE seq_tasks START WITH 1 INCREMENT BY 1;

-- Create tables with meaningful names and relevant columns
CREATE TABLE products (
id NUMBER PRIMARY KEY,
name VARCHAR2(100),
category VARCHAR2(20),
quantity INTEGER
);

CREATE TABLE orders (
id NUMBER PRIMARY KEY,
description VARCHAR2(255),
status VARCHAR2(20)
);

CREATE TABLE users (
id NUMBER PRIMARY KEY,
created_at DATE DEFAULT SYSDATE,
username VARCHAR2(20),
age INTEGER,
location VARCHAR2(20)
);

CREATE TABLE transactions (
id NUMBER PRIMARY KEY,
amount NUMBER(10,2),
currency VARCHAR2(20)
);

CREATE TABLE tasks (
id NUMBER PRIMARY KEY,
status VARCHAR2(50),
priority INTEGER,
type VARCHAR2(20),
assigned_to VARCHAR2(20)
);

-- Create triggers to auto-generate primary key values using sequences
CREATE OR REPLACE TRIGGER trg_products_pk
BEFORE INSERT ON products
FOR EACH ROW
BEGIN
SELECT seq_products.NEXTVAL INTO :NEW.id FROM dual;
END;
/

CREATE OR REPLACE TRIGGER trg_orders_pk
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
SELECT seq_orders.NEXTVAL INTO :NEW.id FROM dual;
END;
/

CREATE OR REPLACE TRIGGER trg_users_pk
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
SELECT seq_users.NEXTVAL INTO :NEW.id FROM dual;
END;
/

CREATE OR REPLACE TRIGGER trg_transactions_pk
BEFORE INSERT ON transactions
FOR EACH ROW
BEGIN
SELECT seq_transactions.NEXTVAL INTO :NEW.id FROM dual;
END;
/

CREATE OR REPLACE TRIGGER trg_tasks_pk
BEFORE INSERT ON tasks
FOR EACH ROW
BEGIN
SELECT seq_tasks.NEXTVAL INTO :NEW.id FROM dual;
END;
/

---

Golden_Gate/example_01/delete_extr_repl.md

## Delete an integrated replicat

dblogin useridalias SITHPRD
stop replicat REPLAB
unregister replicat REPLAB database
delete replicat REPLAB
info all

## Delete an integrated extract

dblogin useridalias JEDIPRD
stop extract EXTRAB
unregister extract EXTRAB database
delete extract EXTRAB
info all


---

Golden_Gate/example_01/job_actions.sql

--Stop the job (Disable)
BEGIN
DBMS_SCHEDULER.disable('JOB_MANAGE_DATA');
END;
/

--Restart the job
BEGIN
DBMS_SCHEDULER.enable('JOB_MANAGE_DATA');
END;
/

--Fully Remove the Job
BEGIN
DBMS_SCHEDULER.drop_job('JOB_MANAGE_DATA');
END;
/


---

Golden_Gate/example_01/repair_failed_table.md

## Context
The replicat is ABENDED because of a data issue.
The aim is to re-establish the replicat and minimize the downtime.

## Provoke a failure on replicat
On the target database truncate the RED.TRANSACTIONS table:

truncate table RED.TRANSACTIONS;

The replicat will abend because of update/delete operations:

status replicat REPLAA
REPLICAT REPLAA: ABENDED

## Remove the table from the replicat

Comment out the MAP line for the TRANSACTIONS table in the replicat parameters and restart the replicat.

edit params REPLAA

replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10

map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;

-- map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;

start replicat REPLAA

At this moment the replicat should be **RUNNING**.

## Create a dedicated extract/replicat for the failed table

Create a second extract EXTRAB to manage the failed table.
Define extract parameters:

edit params EXTRAB

extract EXTRAB
useridalias JEDIPRD
sourcecatalog YODA
exttrail ./dirdat/ab
purgeoldextracts
checkpointsecs 1
ddl include mapped
warnlongtrans 1h, checkinterval 30m
table GREEN.TRANSACTIONS;

Add, register and start extract:

dblogin useridalias JEDIPRD
add extract EXTRAB, integrated tranlog, begin now
add exttrail ./dirdat/ab, extract EXTRAB
register extract EXTRAB, database container (YODA)
start extract EXTRAB
info extract EXTRAB detail

> Start the **distribution path** (aka **PUMP**) if the replicat is running on a distant site (GoldenGate deployment)

## Initial load

Note down the current SCN on the source database.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
12234159

On the target DB create the table structure for TRANSACTIONS and do the initial load:

SCN=12234159
impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_03.log remap_schema=GREEN:RED tables=GREEN.TRANSACTIONS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN

## New replicat setup

Define the replicat parameters.
Pay attention to the `filter(@GETENV ('TRANSACTION','CSN')` clause: it must be set to the SCN of the initial Data Pump load.

edit params REPLAB

replicat REPLAB
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAB.dsc, purge, megabytes 10

map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 12234159);

Add and start the new replicat:

add replicat REPLAB, integrated, exttrail ./dirdat/ab
dblogin useridalias SITHPRD
register replicat REPLAB database
start replicat REPLAB
info all

Check that the new replicat is running and wait for the lag to reach 0.

## Reintegrate table to initial extract/replicat

Now, the TRANSACTIONS table is replicated by EXTRAB/REPLAB, but not by the initial replication EXTRAA/REPLAA.
Let's reintegrate TRANSACTIONS into the initial replication EXTRAA/REPLAA.
Note that TRANSACTIONS was not removed from the EXTRAA definition, so all table changes are still recorded in the EXTRAA trail files.

Stop the extracts in **strictly this order**:
- **first** extract: EXTRAA
- **second** extract: EXTRAB

> It is **mandatory** to stop the extracts in this order.
> **The SCN applied to the first replicat's tables must be less than the SCN on the second replicat**, so that the first replicat can start at the last applied position in the trail file. This way, the first replicat never has to be repositioned into the past.

stop EXTRACT EXTRAA
stop EXTRACT EXTRAB

Now stop both replicats as well:

stop replicat REPLAA
stop replicat REPLAB

Note down the SCN for each extract and prepare a new params file for the initial replicat.

info extract EXTRAA detail
info extract EXTRAB detail

In my case:
- EXTRAA: SCN=12245651
- EXTRAB: SCN=12245894

> The SCN of EXTRAB should be greater than the SCN of EXTRAA

Update the REPLAA parameter file in accordance with the latest SCN applied to the TRANSACTIONS table (the SCN of EXTRAB):

edit params REPLAA

replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10

map YODA.GREEN.ORDERS, target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS, target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS, target MAUL.RED.USERS ;
map YODA.GREEN.TASKS, target MAUL.RED.TASKS ;

map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 12245894);

Start the first extract/replicat:

start extract EXTRAA
start replicat REPLAA

When the lag is zero you can remove the `filter(@GETENV ('TRANSACTION','CSN')` clause from REPLAA.

stop replicat REPLAA

edit params REPLAA

replicat REPLAA
useridalias MAUL
dboptions enable_instantiation_filtering
discardfile REPLAA.dsc, purge, megabytes 10

map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
map YODA.GREEN.USERS , target MAUL.RED.USERS ;
map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;

map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;

Restart the REPLAA replicat:

start replicat REPLAA

Now all tables are integrated in the first extract/replicat.

## Remove second extract/replicat

dblogin useridalias JEDIPRD
unregister extract EXTRAB database
delete extract EXTRAB

dblogin useridalias MAUL
unregister replicat REPLAB database
delete replicat REPLAB

Stop and delete the **distribution path** (aka **PUMP**) if the replicat is running on a distant site (GoldenGate deployment).


---

Golden_Gate/example_01/worlkoad_as_job.sql

-- Step 1: Create the stored procedure
CREATE OR REPLACE PROCEDURE manage_data IS
  new_products INTEGER default 3;
  new_orders INTEGER default 10;
  new_users INTEGER default 2;
  new_transactions INTEGER default 20;
  new_tasks INTEGER default 5;
BEGIN
  FOR i IN 1..new_products LOOP
    INSERT INTO products (id, name, category, quantity)
    VALUES (seq_products.NEXTVAL,
            DBMS_RANDOM.STRING('A', 10),
            DBMS_RANDOM.STRING('A', 20),
            TRUNC(DBMS_RANDOM.VALUE(1, 100)));
  END LOOP;

  FOR i IN 1..new_orders LOOP
    INSERT INTO orders (id, description, status)
    VALUES (seq_orders.NEXTVAL,
            DBMS_RANDOM.STRING('A', 50),
            DBMS_RANDOM.STRING('A', 20));
  END LOOP;

  FOR i IN 1..new_users LOOP
    INSERT INTO users (id, created_at, username, age, location)
    VALUES (seq_users.NEXTVAL, SYSDATE,
            DBMS_RANDOM.STRING('A', 15),
            TRUNC(DBMS_RANDOM.VALUE(18, 60)),
            DBMS_RANDOM.STRING('A', 20));
  END LOOP;

  FOR i IN 1..new_transactions LOOP
    INSERT INTO transactions (id, amount, currency)
    VALUES (seq_transactions.NEXTVAL,
            ROUND(DBMS_RANDOM.VALUE(1, 10000), 2),
            DBMS_RANDOM.STRING('A', 3));
  END LOOP;

  FOR i IN 1..new_tasks LOOP
    INSERT INTO tasks (id, status, priority, type, assigned_to)
    VALUES (seq_tasks.NEXTVAL,
            DBMS_RANDOM.STRING('A', 20),
            TRUNC(DBMS_RANDOM.VALUE(1, 10)),
            DBMS_RANDOM.STRING('A', 20),
            DBMS_RANDOM.STRING('A', 15));
  END LOOP;

  -- Update 2 random rows in each table
  UPDATE products SET quantity = TRUNC(DBMS_RANDOM.VALUE(1, 200))
  WHERE id IN (SELECT id FROM products ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);

  UPDATE orders SET status = DBMS_RANDOM.STRING('A', 20)
  WHERE id IN (SELECT id FROM orders ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);

  UPDATE users SET age = TRUNC(DBMS_RANDOM.VALUE(18, 75))
  WHERE id IN (SELECT id FROM users ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);

  UPDATE transactions SET amount = ROUND(DBMS_RANDOM.VALUE(1, 5000), 2)
  WHERE id IN (SELECT id FROM transactions ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);

  UPDATE tasks SET priority = TRUNC(DBMS_RANDOM.VALUE(1, 10))
  WHERE id IN (SELECT id FROM tasks ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY);

  -- Delete 1 random row from each table
  DELETE FROM products WHERE id = (SELECT id FROM products ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
  DELETE FROM orders WHERE id = (SELECT id FROM orders ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
  DELETE FROM users WHERE id = (SELECT id FROM users ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
  DELETE FROM transactions WHERE id = (SELECT id FROM transactions ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);
  DELETE FROM tasks WHERE id = (SELECT id FROM tasks ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY);

  COMMIT;
END;
/

-- Step 2: Create a scheduled job to run every 10 seconds
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'JOB_MANAGE_DATA',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN manage_data; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=SECONDLY; INTERVAL=10',
    enabled         => TRUE
  );
END;
/


---

Golden_Gate/ogg_01.txt

https://www.dbi-services.com/blog/setting-up-a-sample-replication-with-goldengate/
|
||||
|
||||
|
||||
# source: 19c database, schema OTTER, NON-CDB //togoria:1521/ANDOPRD
|
||||
# target: 21c database, schema BEAVER, PDB //bakura:1521/WOMBAT
|
||||
|
||||
|
||||
-- on source DB
|
||||
create user OTTER identified by "K91@9kLorg1j_7OxV";
|
||||
grant connect,resource to OTTER;
|
||||
alter user OTTER quota unlimited on USERS;
|
||||
|
||||
-- on target DB
|
||||
create user BEAVER identified by "Versq99#LerB009aX";
|
||||
grant connect,resource to BEAVER;
|
||||
alter user BEAVER quota unlimited on USERS;
|
||||
|
||||
# on BOTH databases
|
||||
###################
|
||||
|
||||
# check if ARCHIVELOG mode is ON
|
||||
archive log list;
|
||||

# activate integrated OGG replication
alter system set enable_goldengate_replication=TRUE scope=both sid='*';

# put databases in FORCE LOGGING mode
alter database force logging;

# add supplemental log data
alter database add supplemental log data;

# create a GoldenGate admin user
create user OGGADMIN identified by "eXtpam!ZarghOzVe81p@1";
grant create session to OGGADMIN;
grant select any dictionary to OGGADMIN;
exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE ('OGGADMIN');
grant flashback any table to OGGADMIN;

# test GoldenGate admin user connections
sqlplus /nolog
connect OGGADMIN/"eXtpam!ZarghOzVe81p@1"@//togoria:1521/ANDOPRD
connect OGGADMIN/"eXtpam!ZarghOzVe81p@1"@//bakura:1521/WOMBAT


# create tables to replicate on source DB
create table OTTER.T1(d date);


ggsci
create wallet
add credentialstore
alter credentialstore add user OGGADMIN@//togoria:1521/ANDOPRD password "eXtpam!ZarghOzVe81p@1" alias ANDOPRD
info credentialstore

dblogin useridalias ANDOPRD
add trandata OTTER.T1
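# verify supplemental logging is now enabled for the table (GGSCI)
info trandata OTTER.T1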

# cleanup
#########
# on source DB
drop user OTTER cascade;
drop user OGGADMIN cascade;
# on target DB
drop user BEAVER cascade;
drop user OGGADMIN cascade;
128
Golden_Gate/ogg_02.txt
Normal file
@@ -0,0 +1,128 @@
alias gg='rlwrap /app/oracle/product/ogg21/ggsci'

create user OGGADMIN identified by "eXtpam!ZarghOzVe81p@1";
# maybe too much
grant DBA to OGGADMIN;

add credentialstore
info credentialstore domain admin
alter credentialstore add user OGGADMIN@//togoria:1521/ANDOPRD password "eXtpam!ZarghOzVe81p@1" alias ANDOPRD domain admin
dblogin useridalias ANDOPRD domain admin
list tables OTTER.*
# delete trandata OTTER.*
add trandata OTTER.*

edit params ./GLOBALS
#-->
GGSCHEMA OGGADMIN
#<--

edit params myextr1
#-->
EXTRACT myextr1
USERID OGGADMIN@//togoria:1521/ANDOPRD, PASSWORD "eXtpam!ZarghOzVe81p@1"
EXTTRAIL ./dirdat/ex
CHECKPOINTSECS 1
TABLE OTTER.*;
#<--

ADD EXTRACT myextr1, TRANLOG, BEGIN now
REGISTER EXTRACT myextr1, DATABASE
ADD EXTTRAIL ./dirdat/ex, EXTRACT myextr1
START EXTRACT myextr1
info myextr1

edit params mypump1
#-->
EXTRACT mypump1
PASSTHRU
RMTHOST bakura, MGRPORT 7809
RMTTRAIL ./dirdat/rt
CHECKPOINTSECS 1
TABLE OTTER.*;
#<--

ADD EXTRACT mypump1, EXTTRAILSOURCE ./dirdat/ex
ADD RMTTRAIL ./dirdat/rt, EXTRACT mypump1
START EXTRACT mypump1
info mypump1

add checkpointtable OGGADMIN.checkpointtable

add credentialstore
info credentialstore domain admin
alter credentialstore add user OGGADMIN@//bakura:1521/EWOKPRD password "eXtpam!ZarghOzVe81p@1" alias EWOKPRD domain admin
dblogin useridalias EWOKPRD domain admin

add checkpointtable OGGADMIN.checkpointtable

edit params myrepl1
#-->
REPLICAT myrepl1
USERID OGGADMIN@//bakura:1521/EWOKPRD, PASSWORD "eXtpam!ZarghOzVe81p@1"
DISCARDFILE ./dirdsc/myrepl1.dsc, PURGE
ASSUMETARGETDEFS
MAP OTTER.*, TARGET OTTER.*;
#<--

add replicat myrepl1, EXTTRAIL ./dirdat/rt, checkpointtable OGGADMIN.checkpointtable

start MYREPL1

create spfile='/app/oracle/base/admin/EWOKPRD/spfile/spfileEWOKPRD.ora' from pfile='/mnt/yavin4/tmp/_oracle_/tmp/ANDO.txt';

# create a static listener to connect as sysdba in NOMOUNT state

oracle@bakura[EWOKPRD]:/mnt/yavin4/tmp/_oracle_/tmp$ cat listener.ora

MYLSNR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = bakura)(PORT = 1600))
    )
  )

SID_LIST_MYLSNR =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = EWOKPRD_STATIC)
      (SID_NAME = EWOKPRD)
      (ORACLE_HOME = /app/oracle/product/19)
    )
  )

export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/tmp
lsnrctl start MYLSNR
lsnrctl status MYLSNR

connect sys/"Secret00!"@//bakura:1600/EWOKPRD_STATIC as sysdba
connect sys/"Secret00!"@//togoria:1521/ANDOPRD as sysdba

rman target=sys/"Secret00!"@//togoria:1521/ANDOPRD auxiliary=sys/"Secret00!"@//bakura:1600/EWOKPRD_STATIC
run {
  allocate channel pri1 device type DISK;
  allocate channel pri2 device type DISK;
  allocate channel pri3 device type DISK;
  allocate channel pri4 device type DISK;
  allocate auxiliary channel aux1 device type DISK;
  allocate auxiliary channel aux2 device type DISK;
  allocate auxiliary channel aux3 device type DISK;
  allocate auxiliary channel aux4 device type DISK;
  duplicate target database to 'EWOK'
    from active database
    using compressed backupset section size 1G;
}
147
Golden_Gate/ogg_03.txt
Normal file
@@ -0,0 +1,147 @@
-- https://www.dbi-services.com/blog/performing-an-initial-load-with-goldengate-1-file-to-replicat/
-- https://www.dbi-services.com/blog/performing-an-initial-load-with-goldengate-2-expdpimpdp/

Source DB: ANDOPRD@togoria
Target DB: EWOKPRD@bakura

alias gg='rlwrap /app/oracle/product/ogg21/ggsci'

# install HR schema on source database
@install.sql

# install HR schema on target database, disable constraints and delete all data
@install.sql

connect / as sysdba
declare
  lv_statement varchar2(2000);
begin
  for r in ( select c.CONSTRAINT_NAME, c.TABLE_NAME
               from dba_constraints c
                  , dba_tables t
              where c.owner = 'HR'
                and t.table_name = c.table_name
                and t.owner = 'HR'
                and c.constraint_type != 'P'
           )
  loop
    lv_statement := 'alter table hr.'||r.TABLE_NAME||' disable constraint '||r.CONSTRAINT_NAME;
    execute immediate lv_statement;
  end loop;
  for r in ( select table_name
               from dba_tables
              where owner = 'HR'
           )
  loop
    execute immediate 'delete hr.'||r.table_name;
  end loop;
end;
/

select count(*) from hr.employees;
select count(*) from hr.jobs;

# create OGGADMIN user on both databases
create user OGGADMIN identified by "Chan8em11fUwant!";
grant dba to OGGADMIN;

# on source machine
add credentialstore
info credentialstore domain admin
alter credentialstore add user OGGADMIN@//togoria:1521/ANDOPRD password "Chan8em11fUwant!" alias ANDOPRD domain admin
info credentialstore domain admin
dblogin useridalias ANDOPRD domain admin

# on target machine
add credentialstore
info credentialstore domain admin
alter credentialstore add user OGGADMIN@//bakura:1521/EWOKPRD password "Chan8em11fUwant!" alias EWOKPRD domain admin
info credentialstore domain admin
dblogin useridalias EWOKPRD domain admin

# on source machine
dblogin useridalias ANDOPRD domain admin
list tables HR.*
add trandata HR.*

# on source, in order to catch transactions during the initial load, we will create an extract for Change Data Capture

edit params extrcdc1
-------------------------------->
EXTRACT extrcdc1
useridalias ANDOPRD domain admin
EXTTRAIL ./dirdat/gg
LOGALLSUPCOLS
UPDATERECORDFORMAT compact
TABLE HR.*;
TABLEEXCLUDE HR.EMP_DETAILS_VIEW;
<--------------------------------

dblogin useridalias ANDOPRD domain admin
register extract extrcdc1 database

add extract extrcdc1, integrated tranlog, begin now
EXTRACT added.

add exttrail ./dirdat/gg, extract extrcdc1, megabytes 5

# on source, configure the datapump
edit params dppump1
-------------------------------->
EXTRACT dppump1
PASSTHRU
RMTHOST bakura, MGRPORT 7809
RMTTRAIL ./dirdat/jj
TABLE HR.*;
TABLEEXCLUDE HR.EMP_DETAILS_VIEW;
<--------------------------------

add extract dppump1, exttrailsource ./dirdat/gg
add rmttrail ./dirdat/jj, extract dppump1, megabytes 5

# on source, start extracts CDC capture and datapump
start extract dppump1
start extract extrcdc1
info *

# on target, configure replicat for CDC

edit params replcdd
-------------------------------->
REPLICAT replcdd
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/replcdd.dsc, purge
useridalias EWOKPRD domain admin
MAP HR.*, TARGET HR.*;
<--------------------------------

dblogin useridalias EWOKPRD domain admin
add replicat replcdd, integrated, exttrail ./dirdat/jj

# We will NOT START the replicat right now, as we want to do the initial load before

# Note down the current scn of the source database
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
3968490

# on destination, import HR schema
create public database link ANDOPRD connect to OGGADMIN identified by "Chan8em11fUwant!" using '//togoria:1521/ANDOPRD';
select * from DUAL@ANDOPRD;

impdp userid=OGGADMIN/"Chan8em11fUwant!"@//bakura:1521/EWOKPRD logfile=MY:HR.log network_link=ANDOPRD schemas=HR flashback_scn=3968490

start replicat replcdd, aftercsn 3968490
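# to monitor the replicat while it catches up (standard GGSCI commands)
info replicat replcdd, detail
lag replicat replcdd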
416
Golden_Gate/ogg_04.txt
Normal file
@@ -0,0 +1,416 @@
# setup source schema
#####################

create user WOMBAT identified by "NDbGvewNHVj8@#2FFGfz!De";
grant connect, resource to WOMBAT;
alter user WOMBAT quota unlimited on USERS;

connect WOMBAT/"NDbGvewNHVj8@#2FFGfz!De";

drop table T0 purge;
drop table T1 purge;
drop table T2 purge;
drop table T3 purge;

create table JOB (
  id NUMBER GENERATED ALWAYS AS IDENTITY,
  d  DATE not null
);
alter table JOB add constraint JOB_PK_ID primary key (ID);

create table T0 (
  id NUMBER GENERATED ALWAYS AS IDENTITY,
  d  DATE not null,
  c  VARCHAR2(20),
  n  NUMBER
)
partition by range (d)
interval (interval '1' MONTH) (
  partition p0 values less than (DATE'2000-01-01')
)
;

alter table T0 add constraint T0_PK_ID primary key (ID);

create table T1 (
  d  DATE not null,
  c  VARCHAR2(10),
  n1 NUMBER,
  n2 NUMBER
)
partition by range (d)
interval (interval '1' MONTH) (
  partition p0 values less than (DATE'2000-01-01')
)
;

create table T2 (
  d  DATE not null,
  n1 NUMBER,
  n2 NUMBER,
  n3 NUMBER
)
partition by range (d)
interval (interval '1' MONTH) (
  partition p0 values less than (DATE'2000-01-01')
)
;

create table T3 (
  d  DATE not null,
  n  NUMBER,
  c1 VARCHAR2(10),
  c2 VARCHAR2(10),
  c3 VARCHAR2(10)
)
partition by range (d)
interval (interval '1' MONTH) (
  partition p0 values less than (DATE'2000-01-01')
)
;

CREATE OR REPLACE FUNCTION random_date(
  p_from IN DATE,
  p_to   IN DATE
) RETURN DATE
IS
BEGIN
  RETURN p_from + DBMS_RANDOM.VALUE() * (p_to - p_from);
END random_date;
/

CREATE OR REPLACE FUNCTION random_string(
  maxsize IN NUMBER
) RETURN VARCHAR2
IS
BEGIN
  RETURN dbms_random.string('x',maxsize);
END random_string;
/

CREATE OR REPLACE FUNCTION random_integer(
  maxvalue IN NUMBER
) RETURN NUMBER
IS
BEGIN
  RETURN trunc(dbms_random.value(1,maxvalue));
END random_integer;
/
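# quick sanity check of the helper functions before loading data (sketch)
select random_date(DATE'2000-01-01',SYSDATE), random_string(10), random_integer(100) from dual;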

# add some data into tables
###########################

set timing ON

DECLARE
  imax NUMBER default 100000;
  i    number;
begin
  dbms_random.seed (val => 0);
  for i in 1 .. imax loop
    insert /*+ APPEND */ into T0 (d,c,n) values (random_date(DATE'2000-01-01',SYSDATE),random_string(20),random_integer(999999999));
  end loop;
  commit;
end;
/

DECLARE
  imax NUMBER default 100000;
  i    number;
begin
  dbms_random.seed (val => 0);
  for i in 1 .. imax loop
    insert /*+ APPEND */ into T1 (d,c,n1,n2) values (random_date(DATE'2000-01-01',SYSDATE),random_string(10),random_integer(999999999),random_integer(999999999));
  end loop;
  commit;
end;
/

DECLARE
  imax NUMBER default 100000;
  i    number;
begin
  dbms_random.seed (val => 0);
  for i in 1 .. imax loop
    insert /*+ APPEND */ into T2 (d,n1,n2,n3) values (random_date(DATE'2000-01-01',SYSDATE),random_integer(999999999),random_integer(999999999),random_integer(999999999));
  end loop;
  commit;
end;
/

DECLARE
  imax NUMBER default 100000;
  i    number;
begin
  dbms_random.seed (val => 0);
  for i in 1 .. imax loop
    insert /*+ APPEND */ into T3 (d,n,c1,c2,c3) values (random_date(DATE'2000-01-01',SYSDATE),random_integer(999999999),random_string(10),random_string(10),random_string(10));
  end loop;
  commit;
end;
/


# run this PL/SQL block to generate living data
###############################################
connect WOMBAT/"NDbGvewNHVj8@#2FFGfz!De";

DECLARE
  i number;
begin
  loop
    sys.dbms_session.sleep(5);
    dbms_random.seed (val => 0);
    i := random_integer(999999999);
    insert into JOB (d) values (sysdate);

    update T0 set c=random_string(20) where n=i;
    update T1 set c=random_string(10) where n2 between i-1000 and i+1000;
    update T2 set d=random_date(DATE'2000-01-01',SYSDATE) where n1 between i-1000 and i+1000;
    update T3 set c1=random_string(10),d=random_date(DATE'2000-01-01',SYSDATE) where n between i-1000 and i+1000;

    insert into T0 (d,c,n) values (random_date(DATE'2000-01-01',SYSDATE),random_string(20),random_integer(999999999));
    insert into T1 (d,c,n1,n2) values (random_date(DATE'2000-01-01',SYSDATE),random_string(10),random_integer(999999999),random_integer(999999999));
    insert into T2 (d,n1,n2,n3) values (random_date(DATE'2000-01-01',SYSDATE),random_integer(999999999),random_integer(999999999),random_integer(999999999));
    insert into T3 (d,c1,c2,c3) values (random_date(DATE'2000-01-01',SYSDATE),random_string(10),random_string(10),random_string(10));

    commit;
    exit when 1=0;
  end loop;
end;
/


## Golden Gate setup
####################

# on source & destination
alias gg='rlwrap /app/oracle/product/ogg21/ggsci'

create user OGGADMIN identified by "eXtpam!ZarghOzVe81p@1";
# maybe too much
grant DBA to OGGADMIN;

edit params ./GLOBALS
#-->
GGSCHEMA OGGADMIN
#<--

# on source
add credentialstore
info credentialstore domain admin
alter credentialstore add user OGGADMIN@//togoria:1521/ANDOPRD password "eXtpam!ZarghOzVe81p@1" alias ANDOPRD domain admin
dblogin useridalias ANDOPRD domain admin

# on destination
add credentialstore
info credentialstore domain admin
alter credentialstore add user OGGADMIN@//bakura:1521/EWOKPRD password "Chan8em11fUwant!" alias EWOKPRD domain admin
info credentialstore domain admin
dblogin useridalias EWOKPRD domain admin


# setup replication only for tables T0, T1 and T2
#################################################

# on source machine
dblogin useridalias ANDOPRD domain admin
list tables WOMBAT.*
add trandata WOMBAT.T0
add trandata WOMBAT.T1
add trandata WOMBAT.T2

edit params extr_w1
-------------------------------->
EXTRACT extr_w1
useridalias ANDOPRD domain admin
EXTTRAIL ./dirdat/w1
LOGALLSUPCOLS
UPDATERECORDFORMAT compact
table WOMBAT.T0;
table WOMBAT.T1;
table WOMBAT.T2;
<--------------------------------

dblogin useridalias ANDOPRD domain admin
register extract extr_w1 database

add extract extr_w1, integrated tranlog, begin now
add exttrail ./dirdat/w1, extract extr_w1, megabytes 5

start extr_w1
info extr_w1

# on source, configure the datapump
edit params dpump_w1
-------------------------------->
EXTRACT dpump_w1
PASSTHRU
RMTHOST bakura, MGRPORT 7809
RMTTRAIL ./dirdat/w1
table WOMBAT.T0;
table WOMBAT.T1;
table WOMBAT.T2;
<--------------------------------

add extract dpump_w1, exttrailsource ./dirdat/w1
add rmttrail ./dirdat/w1, extract dpump_w1, megabytes 5

start dpump_w1
info dpump_w1

# on target, setup replicat but do not start it
edit params repl_w1
-------------------------------->
REPLICAT repl_w1
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/repl_w1.dsc, purge
useridalias EWOKPRD domain admin
MAP WOMBAT.T0, TARGET OTTER.T0;
MAP WOMBAT.T1, TARGET OTTER.T1;
MAP WOMBAT.T2, TARGET OTTER.T2;
<--------------------------------

dblogin useridalias EWOKPRD domain admin
add replicat repl_w1, integrated, exttrail ./dirdat/w1

# perform the initial LOAD
#########################

# Note down the current scn of the source database
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
4531616

# on destination, import tables
create public database link ANDOPRD connect to OGGADMIN identified by "Chan8em11fUwant!" using '//togoria:1521/ANDOPRD';
select * from DUAL@ANDOPRD;

# create target schema using the same DDL definition as on source database
create user OTTER identified by "50DbGvewN00K@@)2FFGfzKg";
grant connect, resource to OTTER;
alter user OTTER quota unlimited on USERS;

impdp userid=OGGADMIN/"Chan8em11fUwant!"@//bakura:1521/EWOKPRD logfile=MY:WOMBAT_01.log network_link=ANDOPRD tables=WOMBAT.T0,WOMBAT.T1,WOMBAT.T2 flashback_scn=4531616 remap_schema=WOMBAT:OTTER

start repl_w1, aftercsn 4531616

# when the LAG is caught up, restart the replicat
stop repl_w1
start repl_w1
info repl_w1

# add 2 tables to SYNC
######################

# on source, add 2 tables to extract & datapump
stop dpump_w1
stop extr_w1

# add new tables in extract & datapump parameter files
edit params extr_w1
-------------------------------->
EXTRACT extr_w1
useridalias ANDOPRD domain admin
EXTTRAIL ./dirdat/w1
LOGALLSUPCOLS
UPDATERECORDFORMAT compact
table WOMBAT.T0;
table WOMBAT.T1;
table WOMBAT.T2;
table WOMBAT.JOB;
table WOMBAT.T3;
<--------------------------------

# add trandata for new tables
dblogin useridalias ANDOPRD domain admin
list tables WOMBAT.*
add trandata WOMBAT.JOB
add trandata WOMBAT.T3

start extr_w1
info extr_w1

edit params dpump_w1
-------------------------------->
EXTRACT dpump_w1
PASSTHRU
RMTHOST bakura, MGRPORT 7809
RMTTRAIL ./dirdat/w1
table WOMBAT.T0;
table WOMBAT.T1;
table WOMBAT.T2;
table WOMBAT.JOB;
table WOMBAT.T3;
<--------------------------------

start dpump_w1
info dpump_w1

# once extract & datapump are up and running, we will proceed with the initial load of the new tables using expdp/impdp
# Note down the current scn of the source database
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
4675686

impdp userid=OGGADMIN/"Chan8em11fUwant!"@//bakura:1521/EWOKPRD logfile=MY:WOMBAT_02.log network_link=ANDOPRD tables=WOMBAT.JOB,WOMBAT.T3 flashback_scn=4675686 remap_schema=WOMBAT:OTTER

# on target, stop replicat, add new tables and start FROM THE GOOD SCN ON NEW TABLES
stop repl_w1

edit params repl_w1
-------------------------------->
REPLICAT repl_w1
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/repl_w1.dsc, purge
useridalias EWOKPRD domain admin
MAP WOMBAT.T0, TARGET OTTER.T0;
MAP WOMBAT.T1, TARGET OTTER.T1;
MAP WOMBAT.T2, TARGET OTTER.T2;
MAP WOMBAT.JOB, TARGET OTTER.JOB, filter(@GETENV ('TRANSACTION','CSN') > 4633243);
MAP WOMBAT.T3, TARGET OTTER.T3, filter(@GETENV ('TRANSACTION','CSN') > 4633243);
<--------------------------------

start repl_w1
info repl_w1

# when the lag is caught up, remove the SCN clauses from the replicat and restart

stop repl_w1

edit params repl_w1
-------------------------------->
REPLICAT repl_w1
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/repl_w1.dsc, purge
useridalias EWOKPRD domain admin
MAP WOMBAT.T0, TARGET OTTER.T0;
MAP WOMBAT.T1, TARGET OTTER.T1;
MAP WOMBAT.T2, TARGET OTTER.T2;
MAP WOMBAT.JOB, TARGET OTTER.JOB;
MAP WOMBAT.T3, TARGET OTTER.T3;
<--------------------------------

start repl_w1
info repl_w1
141
Golden_Gate/setup.md
Normal file
@@ -0,0 +1,141 @@
## Articles

https://www.dbi-services.com/blog/how-to-create-an-oracle-goldengate-extract-in-multitenant/
http://blog.data-alchemy.org/posts/oracle-goldengate-pluggable/

## Topology

Databases:
- source: CDB: JEDIPRD@wayland, PDB: YODA
- target: CDB: SITHPRD@togoria, PDB: MAUL

## Databases setup for Golden Gate

In **both** databases, create the Golden Gate admin user in `CDB$ROOT`:

    create user c##oggadmin identified by "Secret00!";
    alter user c##oggadmin quota unlimited on USERS;
    grant create session, connect, resource, alter system, select any dictionary, flashback any table to c##oggadmin container=all;
    exec dbms_goldengate_auth.grant_admin_privilege(grantee => 'c##oggadmin', container => 'all');
    alter user c##oggadmin set container_data=all container=current;
    grant alter any table to c##oggadmin container=ALL;
    alter system set enable_goldengate_replication=true scope=both;
    alter database force logging;
    alter database add supplemental log data;
    select supplemental_log_data_min, force_logging from v$database;

> On the **target** database I had to add extra grants:

    grant select any table to c##oggadmin container=ALL;
    grant insert any table to c##oggadmin container=ALL;
    grant update any table to c##oggadmin container=ALL;
    grant delete any table to c##oggadmin container=ALL;

Create schemas for the replicated tables on the source and target PDBs:

    alter session set container=YODA;
    create user GREEN identified by "Secret00!";
    alter user GREEN quota unlimited on USERS;
    grant connect,resource to GREEN;
    connect GREEN/"Secret00!"@wayland/YODA;

    alter session set container=MAUL;
    create user RED identified by "Secret00!";
    alter user RED quota unlimited on USERS;
    grant connect,resource to RED;
    connect RED/"Secret00!"@togoria/MAUL;

## Setup `exegol` Golden Gate deployment

> My root CA (added to the host truststore) was not recognized by `adminclient`, resulting in an OGG-12982 error, while `curl` works perfectly.

Solution: point the `OGG_CLIENT_TLS_CAPATH` environment variable at my root CA certificate before using `adminclient`:

    export OGG_CLIENT_TLS_CAPATH=/etc/pki/ca-trust/source/anchors/rootCA.pem

Connect to the deployment with `adminclient`:

    adminclient
    connect https://exegol.swgalaxy:2000 deployment ogg_exegol_deploy as OGGADMIN password "Secret00!"

Optionally store credentials to connect to the deployment:

    add credentials admin user OGGADMIN password "Secret00!"

Now we can hide the password when connecting to the deployment:

    connect https://exegol.swgalaxy:2000 deployment ogg_exegol_deploy as admin

Add credential store entries for the database connections:

    create wallet
    add credentialstore
    alter credentialstore add user c##oggadmin@wayland/JEDIPRD password "Secret00!" alias JEDIPRD
    alter credentialstore add user c##oggadmin@wayland/YODA password "Secret00!" alias YODA
    info credentialstore

Test database connections:

    dblogin useridalias JEDIPRD
    dblogin useridalias YODA

To delete a user from the credential store:

    alter credentialstore delete user JEDIPRD

> IMPORTANT: in a **MULTITENANT** database architecture, Golden Gate works at the `CDB$ROOT` level.
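In practice this means logging in with the CDB-level alias and scoping objects to the PDB with a three-part name or a `CONTAINER` clause; a sketch (the extract name `extyoda` is hypothetical):

    dblogin useridalias JEDIPRD
    add trandata YODA.GREEN.*
    register extract extyoda database container (YODA)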
Create the checkpoint table:

    dblogin useridalias JEDIPRD
    add checkpointtable YODA.c##oggadmin.checkpt

Set **global** parameters:

    edit GLOBALS

Put:

    ggschema c##oggadmin
    checkpointtable YODA.c##oggadmin.checkpt

## Setup `helska` Golden Gate deployment

    adminclient
    connect https://helska.swgalaxy:2000 deployment ogg_helska_deploy as OGGADMIN password "Secret00!"

Optionally store credentials to connect to the deployment:

    add credentials admin user OGGADMIN password "Secret00!"

Now we can hide the password when connecting to the deployment:

    connect https://helska.swgalaxy:2000 deployment ogg_helska_deploy as admin

Add credential store entries for the database connections:

    alter credentialstore add user c##oggadmin@togoria/SITHPRD password "Secret00!" alias SITHPRD
    alter credentialstore add user c##oggadmin@togoria/MAUL password "Secret00!" alias MAUL
    info credentialstore

Test database connections:

    dblogin useridalias SITHPRD
    dblogin useridalias MAUL

Create the checkpoint table:

    dblogin useridalias SITHPRD
    add checkpointtable MAUL.c##oggadmin.checkpt

Set **global** parameters:

    edit GLOBALS

Put:

    ggschema c##oggadmin
    checkpointtable MAUL.c##oggadmin.checkpt
39
Oracle_26_AI/install_01.md
Normal file
@@ -0,0 +1,39 @@
Packages to install before executing `runInstaller`:

```bash
dnf install fontconfig.x86_64 compat-openssl11.x86_64 -y
```

Script for **stand-alone** database creation:

```bash
#!/bin/bash

DB_NAME=DEFENDER
ORACLE_UNQNAME=DEFENDERPRD
PDB_NAME=SENTINEL
SYS_PWD="Secret00!"
PDB_PWD="Secret00!"

dbca -silent \
  -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname ${ORACLE_UNQNAME} \
  -sid ${DB_NAME} \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 \
  -pdbName ${PDB_NAME} \
  -pdbAdminPassword ${PDB_PWD} \
  -sysPassword ${SYS_PWD} \
  -systemPassword ${SYS_PWD} \
  -datafileDestination /data \
  -storageType FS \
  -useOMF true \
  -recoveryAreaDestination /reco \
  -recoveryAreaSize 10240 \
  -characterSet AL32UTF8 \
  -nationalCharacterSet AL16UTF16 \
  -databaseType MULTIPURPOSE \
  -automaticMemoryManagement false \
  -totalMemory 3072
```
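A quick sanity check after creation (a sketch; assumes the environment is set for the new SID):

    export ORACLE_SID=DEFENDER
    sqlplus -S / as sysdba
    select name, cdb from v$database;
    show pdbs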
268
Oracle_TLS/oracle_tls_01.md
Normal file
@@ -0,0 +1,268 @@
|
||||
# Setup 1: self signed certificated and certificates exchange
|
||||
|
||||
## Server side (togoria)
|
||||
|
||||
Create the wallet:
|
||||
|
||||
orapki wallet create \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "C0mpl1cated#Ph|rase" \
|
||||
-auto_login_local
|
||||
|
||||
|
||||
Create certificate in wallet:
|
||||
|
||||
orapki wallet add \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "C0mpl1cated#Ph|rase" \
|
||||
-dn "CN=togoria.swgalaxy" -keysize 1024 -self_signed -validity 3650
|
||||
|
||||
Display wallet contents (wallet password is not required):
|
||||
|
||||
orapki wallet display \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet"
|
||||
|
||||
Export certificate:
|
||||
|
||||
orapki wallet export \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "C0mpl1cated#Ph|rase" \
|
||||
-dn "CN=togoria.swgalaxy" \
|
||||
-cert /app/oracle/staging_area/TLS_poc/exports/togoria.swgalaxy.crt
|
||||
|
||||
## Client side (wayland)
|
||||
|
||||
Create the wallet:
|
||||
|
||||
orapki wallet create \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "Dont1Try@toGuessth1s" \
|
||||
-auto_login_local
|
||||
|
||||
Create certificate in wallet:
|
||||
|
||||
orapki wallet add \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "Dont1Try@toGuessth1s" \
|
||||
-dn "CN=wayland.swgalaxy" -keysize 1024 -self_signed -validity 3650
|
||||
|
||||
Display wallet contents (wallet password is not required):
|
||||
|
||||
orapki wallet display \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet"
|
||||
|
||||
Export certificate:
|
||||
|
||||
orapki wallet export \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "Dont1Try@toGuessth1s" \
|
||||
-dn "CN=wayland.swgalaxy" \
|
||||
-cert /app/oracle/staging_area/TLS_poc/exports/wayland.swgalaxy.crt
|
||||
|
||||
## Exchange certificates between server and client
|
||||
|
||||
Load client certificate into server wallet as **trusted** certificate:
|
||||
|
||||
orapki wallet add \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "C0mpl1cated#Ph|rase" \
|
||||
-trusted_cert -cert /app/oracle/staging_area/TLS_poc/exports/wayland.swgalaxy.crt
|
||||
|
||||
Load server certificate into client wallet as **trusted** certificate:
|
||||
|
||||
orapki wallet add \
|
||||
-wallet "/app/oracle/staging_area/TLS_poc/wallet" \
|
||||
-pwd "Dont1Try@toGuessth1s" \
|
||||
-trusted_cert -cert /app/oracle/staging_area/TLS_poc/exports/togoria.swgalaxy.crt

## Server side (togoria)

> It is not possible to use a custom `TNS_ADMIN` for the listener. `sqlnet.ora` and `listener.ora` should be placed under `$(orabasehome)/network/admin` for a **read-only** `ORACLE_HOME`, or under `$ORACLE_HOME/network/admin` for a **read-write** `ORACLE_HOME`.

File `sqlnet.ora`:

    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = /app/oracle/staging_area/TLS_poc/wallet)
        )
      )

    SQLNET.AUTHENTICATION_SERVICES = (TCPS,NTS,BEQ)
    SSL_CLIENT_AUTHENTICATION = FALSE
    SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA)

File `listener.ora`:

    SSL_CLIENT_AUTHENTICATION = FALSE

    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = /app/oracle/staging_area/TLS_poc/wallet)
        )
      )

    LISTENER_SECURE =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCPS)(HOST = togoria.swgalaxy)(PORT = 24000))
        )
      )

Start listener:

    lsnrctl start LISTENER_SECURE

Register listener in database:

    alter system set local_listener="(DESCRIPTION_LIST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCPS)(HOST = togoria.swgalaxy)(PORT = 24000))
      )
    )"
    scope=both sid='*';

    alter system register;

## Client network configuration

    export TNS_ADMIN=/app/oracle/staging_area/TLS_poc/tnsadmin

File `$TNS_ADMIN/sqlnet.ora`:

    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = /app/oracle/staging_area/TLS_poc/wallet)
        )
      )

    SQLNET.AUTHENTICATION_SERVICES = (TCPS,NTS)
    SSL_CLIENT_AUTHENTICATION = FALSE
    SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA)

File `$TNS_ADMIN/tnsnames.ora`:

    MAUL_24000=
      (DESCRIPTION=
        (ADDRESS=
          (PROTOCOL=TCPS)(HOST=togoria.swgalaxy)(PORT=24000)
        )
        (CONNECT_DATA=
          (SERVICE_NAME=MAUL)
        )
      )

Check **TCPS** connection:

    connect vpl/*****@MAUL_24000

    select SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') from dual;

# Setup 2: use certificates signed by a CA Root

Stop the listener:

    lsnrctl stop LISTENER_SECURE

Remove trusted/user certificates and certificate requests on the **server** side:

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "C0mpl1cated#Ph|rase" \
      -trusted_cert \
      -alias 'CN=togoria.swgalaxy'

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "C0mpl1cated#Ph|rase" \
      -trusted_cert \
      -alias 'CN=wayland.swgalaxy'

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "C0mpl1cated#Ph|rase" \
      -user_cert \
      -dn 'CN=togoria.swgalaxy'

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "C0mpl1cated#Ph|rase" \
      -cert_req \
      -dn 'CN=togoria.swgalaxy'

Remove trusted/user certificates and certificate requests on the **client** side:

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "Dont1Try@toGuessth1s" \
      -trusted_cert \
      -alias 'CN=togoria.swgalaxy'

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "Dont1Try@toGuessth1s" \
      -trusted_cert \
      -alias 'CN=wayland.swgalaxy'

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "Dont1Try@toGuessth1s" \
      -user_cert \
      -dn 'CN=wayland.swgalaxy'

    orapki wallet remove \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "Dont1Try@toGuessth1s" \
      -cert_req \
      -dn 'CN=wayland.swgalaxy'

Check that the wallets are empty on both client and server side:

    orapki wallet display \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet"

We will use certificates signed by the same CA Root for the client and for the server.

Create an export file using the server certificate, server private key and CA Root certificate:

    openssl pkcs12 -export \
      -in /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.crt \
      -inkey /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.key \
      -certfile /app/oracle/staging_area/TLS_poc/openssl_files/rootCA.pem \
      -out /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.p12
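The PKCS#12 export can be rehearsed without the real CA material. The sketch below creates a throwaway self-signed key pair in a scratch directory (standing in for the `togoria.swgalaxy` files), bundles it into a PKCS#12 file, and reads it back with `openssl`; the paths and the `scratch` password are illustrative only, and the real CA chain (`-certfile`) is omitted:

```shell
set -e
WORK=$(mktemp -d)
# throwaway self-signed certificate standing in for togoria.swgalaxy.crt/.key
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=togoria.swgalaxy" \
  -keyout "$WORK/server.key" -out "$WORK/server.crt" 2>/dev/null
# bundle key + cert into a PKCS#12 file
openssl pkcs12 -export \
  -in "$WORK/server.crt" -inkey "$WORK/server.key" \
  -passout pass:scratch -out "$WORK/server.p12"
# verify the bundle can be read back: prints the certificate subject
openssl pkcs12 -in "$WORK/server.p12" -passin pass:scratch -nokeys \
  | openssl x509 -noout -subject
rm -rf "$WORK"
```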

Import into Oracle wallet:

    orapki wallet import_pkcs12 \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "C0mpl1cated#Ph|rase" \
      -pkcs12file /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.p12

The server certificate will be imported as a **user** certificate and the CA Root certificate will be imported as a **trusted** certificate.

Perform the same certificate export-import operation on the client side:

    openssl pkcs12 -export \
      -in /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.crt \
      -inkey /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.key \
      -certfile /app/oracle/staging_area/TLS_poc/openssl_files/rootCA.pem \
      -out /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.p12

    orapki wallet import_pkcs12 \
      -wallet "/app/oracle/staging_area/TLS_poc/wallet" \
      -pwd "Dont1Try@toGuessth1s" \
      -pkcs12file /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.p12

Start the listener:

    lsnrctl start LISTENER_SECURE

PDB_clone/clone_PDB_from_non-CDB_01.txt (Normal file)
@@ -0,0 +1,40 @@
# clone non-CDB to PDB using database link
##########################################

# Note: source is in ARCHIVELOG mode and READ WRITE state

# on source (non-CDB) database, create the user to use for the database link
create user CLONE_USER identified by "m007jgert221PnH@A";
grant create session, create pluggable database to CLONE_USER;

# on target (CDB) database, create the database link
create database link CLONE_NON_CDB
  connect to CLONE_USER identified by "m007jgert221PnH@A"
  using '//togoria:1521/ANDOPRD';

select * from dual@CLONE_NON_CDB;

# drop target database if it exists
alter pluggable database WOMBAT close immediate instances=ALL;
drop pluggable database WOMBAT including datafiles;

# clone PDB from database link
create pluggable database WOMBAT from NON$CDB@CLONE_NON_CDB parallel 4;

# the PDB should be in MOUNTED state
show pdbs

# if the version of the TARGET DB > version of the SOURCE DB, the PDB must be upgraded
dbupgrade -l /home/oracle/tmp -c "WOMBAT"

# convert to PDB before opening
alter session set container=WOMBAT;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

# after conversion, open the PDB and save state
alter pluggable database WOMBAT open instances=ALL;
alter pluggable database WOMBAT save state;

RAC_on_OEL8/OEL8_standalone_taris_install.txt (Normal file)
@@ -0,0 +1,87 @@
qemu-img create -f raw /vm/ssd0/taris/boot_01.img 4G
qemu-img create -f raw /vm/ssd0/taris/root_01.img 30G
qemu-img create -f raw /vm/ssd0/taris/swap_01.img 20G
qemu-img create -f raw /vm/ssd0/taris/app_01.img 60G

virt-install \
  --graphics vnc,password=secret,listen=0.0.0.0 \
  --name=taris \
  --vcpus=4 \
  --memory=16384 \
  --network bridge=br0 \
  --network bridge=br0 \
  --cdrom=/vm/hdd0/_kit_/OracleLinux-R8-U7-x86_64-dvd.iso \
  --disk /vm/ssd0/taris/boot_01.img \
  --disk /vm/ssd0/taris/root_01.img \
  --disk /vm/ssd0/taris/swap_01.img \
  --disk /vm/ssd0/taris/app_01.img \
  --os-variant=ol8.5

dd if=/dev/zero of=/vm/ssd0/taris/data_01.img bs=1G count=20
dd if=/dev/zero of=/vm/ssd0/taris/data_02.img bs=1G count=20
dd if=/dev/zero of=/vm/ssd0/taris/reco_01.img bs=1G count=20

virsh domblklist taris --details

virsh attach-disk taris --source /vm/ssd0/taris/data_01.img --target vde --persistent
virsh attach-disk taris --source /vm/ssd0/taris/data_02.img --target vdf --persistent
virsh attach-disk taris --source /vm/ssd0/taris/reco_01.img --target vdg --persistent

# Enable EPEL Repository on Oracle Linux 8
tee /etc/yum.repos.d/ol8-epel.repo<<EOF
[ol8_developer_EPEL]
name=Oracle Linux \$releasever EPEL (\$basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/developer/EPEL/\$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
EOF

dnf makecache

# on host install packages
dnf install -y bind-utils zip.x86_64 unzip.x86_64 gzip.x86_64 pigz.x86_64 net-tools.x86_64 unixODBC wget lsof.x86_64 rlwrap.x86_64 cifs-utils.x86_64
dnf install -y oracle-database-preinstall-19c.x86_64
dnf install -y oracle-database-preinstall-21c.x86_64

# disable firewall
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld

# disable selinux
getenforce
# update /etc/selinux/config
# restart the server and check that it is disabled
getenforce

groupadd smbuser --gid 1502
useradd smbuser --uid 1502 -g smbuser -G smbuser

mkdir -p /mnt/yavin4

# test CIFS mount
mount -t cifs //192.168.0.9/share /mnt/yavin4 -o vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,user=vplesnila
umount /mnt/yavin4

# create credentials file for automount: /root/.smbcred
# username=vplesnila
# password=*****

# add in /etc/fstab
# //192.168.0.9/share /mnt/yavin4 cifs vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred 0 0

# mount
mount -a
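The `/root/.smbcred` file holds a clear-text password, so it should be readable by root only. A sketch of creating it safely (a scratch path stands in for `/root/.smbcred`, and the username/password values are the placeholders from above):

```shell
# write the credentials file with restrictive permissions from the start
CRED=$(mktemp -d)/.smbcred   # stand-in for /root/.smbcred
umask 077
printf 'username=%s\npassword=%s\n' "vplesnila" "*****" > "$CRED"
# verify: only the owner can read it
stat -c '%a' "$CRED"   # -> 600
```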

cat /etc/sysconfig/network-scripts/ifcfg-* | grep IPADD
cat /etc/sysconfig/network-scripts/ifcfg-* | grep NAME

RAC_on_OEL8/rename_host_and_scan_RAC_OEL8.txt (Normal file)
@@ -0,0 +1,55 @@
# based on note: How to rename the hostname in RAC (Doc ID 2341779.1)
# rodia-db01 -> ylesia-db01
# rodia-db02 -> ylesia-db02
# rodia-scan -> ylesia-scan

# on rodia-db02: stop CRS
# on rodia-db02: deconfigure CRS
# on rodia-db02: uninstall GI
# on rodia-db01: remove rodia-db02 from the cluster
# on rodia-db02: change IPs and change hostname to ylesia-db02
# on rodia-db01: add ylesia-db02 to the cluster

# on rodia-db02 as root
$ORACLE_HOME/bin/crsctl stop crs
$ORACLE_HOME/crs/install/rootcrs.sh -deconfig -force
# on rodia-db02 as grid
$ORACLE_HOME/deinstall/deinstall -local

# on rodia-db01 as root
$ORACLE_HOME/bin/crsctl delete node -n rodia-db02
olsnodes
crsctl status res -t

# change IPs and hostname rodia-db02 -> ylesia-db02

# on rodia-db01 as grid, using the graphical interface
$ORACLE_HOME/addnode/addnode.sh
olsnodes
crsctl status res -t

# repeat the operations to remove rodia-db01, rename rodia-db01 -> ylesia-db01, and add ylesia-db01 to the cluster

# now change SCAN & SCAN listener
srvctl config scan

srvctl stop scan_listener
srvctl stop scan -f

srvctl status scan
srvctl status scan_listener

srvctl modify scan -n ylesia-scan
srvctl config scan

srvctl start scan
srvctl start scan_listener

RAC_on_OEL8/ylesia_RAC_OEL8_install.txt (Normal file)
@@ -0,0 +1,468 @@
# DNS config
############

# config file swgalaxy.zone

ylesia-db01 IN A 192.168.0.114
ylesia-db01-vip IN A 192.168.0.115
ylesia-db01-priv IN A 192.168.1.114
ylesia-db01-asm IN A 192.168.2.114

ylesia-db02 IN A 192.168.0.116
ylesia-db02-vip IN A 192.168.0.117
ylesia-db02-priv IN A 192.168.1.116
ylesia-db02-asm IN A 192.168.2.116

ylesia-scan IN A 192.168.0.108
ylesia-scan IN A 192.168.0.109
ylesia-scan IN A 192.168.0.110

rodia-db01 IN A 192.168.0.93
rodia-db01-vip IN A 192.168.0.95
rodia-db01-priv IN A 192.168.1.93
rodia-db01-asm IN A 192.168.2.93

rodia-db02 IN A 192.168.0.94
rodia-db02-vip IN A 192.168.0.96
rodia-db02-priv IN A 192.168.1.94
rodia-db02-asm IN A 192.168.2.94

rodia-scan IN A 192.168.0.97
rodia-scan IN A 192.168.0.98
rodia-scan IN A 192.168.0.99

# config file 0.168.192.in-addr.arpa

114 IN PTR ylesia-db01.
116 IN PTR ylesia-db02.
115 IN PTR ylesia-db01-vip.
117 IN PTR ylesia-db02-vip.

108 IN PTR ylesia-scan.
109 IN PTR ylesia-scan.
110 IN PTR ylesia-scan.

93 IN PTR rodia-db01.swgalaxy.
94 IN PTR rodia-db02.swgalaxy.
95 IN PTR rodia-db01-vip.swgalaxy.
96 IN PTR rodia-db02-vip.swgalaxy.

97 IN PTR rodia-scan.swgalaxy.
98 IN PTR rodia-scan.swgalaxy.
99 IN PTR rodia-scan.swgalaxy.
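Each SCAN name is expected to resolve to three addresses round-robin. A quick sanity check over the zone extract (zone data is inlined here for illustration; against a live cluster you would query DNS with `dig`/`nslookup` instead):

```shell
# count the A records declared for each SCAN name in the zone extract
zone='ylesia-scan IN A 192.168.0.108
ylesia-scan IN A 192.168.0.109
ylesia-scan IN A 192.168.0.110
rodia-scan IN A 192.168.0.97
rodia-scan IN A 192.168.0.98
rodia-scan IN A 192.168.0.99'
for scan in ylesia-scan rodia-scan; do
  n=$(printf '%s\n' "$zone" | grep -c "^$scan ")
  echo "$scan: $n A records"
done
```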

qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/boot_01.img 4G
qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/root_01.img 30G
qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/swap_01.img 20G
qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/app_01.img 60G

# get os-variant as Short ID from the OS info database
osinfo-query os | grep -i oracle | sort

virt-install \
  --graphics vnc,password=secret,listen=0.0.0.0 \
  --name=ylesia-db01 \
  --vcpus=4 \
  --memory=40960 \
  --network bridge=br0 \
  --network bridge=br0 \
  --network bridge=br0 \
  --cdrom=/mnt/yavin4/kit/Oracle/OEL8/OracleLinux-R8-U7-x86_64-dvd.iso \
  --disk /vm/hdd0/ylesia-rac/ylesia-db01/boot_01.img \
  --disk /vm/hdd0/ylesia-rac/ylesia-db01/root_01.img \
  --disk /vm/hdd0/ylesia-rac/ylesia-db01/swap_01.img \
  --disk /vm/hdd0/ylesia-rac/ylesia-db01/app_01.img \
  --os-variant=ol8.5

# on host install packages
dnf install bind-utils
dnf install zip.x86_64 unzip.x86_64 gzip.x86_64
dnf install pigz.x86_64
dnf install net-tools.x86_64
dnf install oracle-database-preinstall-19c.x86_64
dnf install oracle-database-preinstall-21c.x86_64
dnf install unixODBC
dnf install wget
dnf install lsof.x86_64

# Enable EPEL Repository on Oracle Linux 8
tee /etc/yum.repos.d/ol8-epel.repo<<EOF
[ol8_developer_EPEL]
name=Oracle Linux \$releasever EPEL (\$basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/developer/EPEL/\$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
EOF

dnf makecache

# Install rlwrap
dnf install rlwrap.x86_64

# disable firewall
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld

# disable selinux
getenforce
# update /etc/selinux/config
# restart the server and check that it is disabled
getenforce

# grid infrastructure users and groups
groupadd -g 54327 asmoper
groupadd -g 54328 asmdba
groupadd -g 54329 asmadmin

useradd -g oinstall -G asmoper,asmdba,asmadmin -c "Grid Infrastructure Owner" grid
usermod -g oinstall -G asmdba,dba,oper -c "Oracle Software Owner" oracle

# install ASMLib
# see Metalink note: Oracle Linux 8: How To Install ASMLib (Doc ID 2720215.1)

# this will install oracleasm-support & oracleasmlib
cd /tmp
wget https://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.17-1.el8.x86_64.rpm
wget https://public-yum.oracle.com/repo/OracleLinux/OL8/addons/x86_64/getPackage/oracleasm-support-2.1.12-1.el8.x86_64.rpm
dnf localinstall ./oracleasm-support-2.1.12-1.el8.x86_64.rpm ./oracleasmlib-2.0.17-1.el8.x86_64.rpm

# in Dom0, create virtual disks for ASM
dd if=/dev/zero of=/vm/ssd0/ylesia-rac/disk-array/asm_data_01.img bs=1G count=30
dd if=/dev/zero of=/vm/ssd0/ylesia-rac/disk-array/asm_data_02.img bs=1G count=30
dd if=/dev/zero of=/vm/ssd0/ylesia-rac/disk-array/asm_data_03.img bs=1G count=30
dd if=/dev/zero of=/vm/ssd0/ylesia-rac/disk-array/asm_data_04.img bs=1G count=30
dd if=/dev/zero of=/vm/ssd0/ylesia-rac/disk-array/asm_data_05.img bs=1G count=30

dd if=/dev/zero of=/vm/hdd0/ylesia-rac/disk-array/asm_reco_01.img bs=1G count=20
dd if=/dev/zero of=/vm/hdd0/ylesia-rac/disk-array/asm_reco_02.img bs=1G count=20
dd if=/dev/zero of=/vm/hdd0/ylesia-rac/disk-array/asm_reco_03.img bs=1G count=20
dd if=/dev/zero of=/vm/hdd0/ylesia-rac/disk-array/asm_reco_04.img bs=1G count=20

# list the block devices of the VM
virsh domblklist ylesia-db01 --details

# attach disks to the VM (with the VM stopped when attaching more than one disk, reason unknown)
# vdX device names will be renamed automatically at VM start so that there are no gaps

virsh attach-disk ylesia-db01 --source /vm/ssd0/ylesia-rac/disk-array/asm_data_01.img --target vdi --persistent
virsh attach-disk ylesia-db01 --source /vm/ssd0/ylesia-rac/disk-array/asm_data_02.img --target vdj --persistent
virsh attach-disk ylesia-db01 --source /vm/ssd0/ylesia-rac/disk-array/asm_data_03.img --target vdk --persistent
virsh attach-disk ylesia-db01 --source /vm/ssd0/ylesia-rac/disk-array/asm_data_04.img --target vdl --persistent
virsh attach-disk ylesia-db01 --source /vm/ssd0/ylesia-rac/disk-array/asm_data_05.img --target vdm --persistent

virsh attach-disk ylesia-db01 --source /vm/hdd0/ylesia-rac/disk-array/asm_reco_01.img --target vdn --persistent
virsh attach-disk ylesia-db01 --source /vm/hdd0/ylesia-rac/disk-array/asm_reco_02.img --target vdo --persistent
virsh attach-disk ylesia-db01 --source /vm/hdd0/ylesia-rac/disk-array/asm_reco_03.img --target vdp --persistent
virsh attach-disk ylesia-db01 --source /vm/hdd0/ylesia-rac/disk-array/asm_reco_04.img --target vdr --persistent

# edit the VM XML config file and add the <shareable/> element to each disk-array disk

oracleasm configure -i
# choose grid for user and asmdba for group
oracleasm init

# if you need to use an older kernel prior to the last kernel update:
# https://www.golinuxcloud.com/change-default-kernel-version-rhel-centos-8/

# create ASM disks
oracleasm status
oracleasm scandisks
oracleasm listdisks

# list block devices
lsblk

# use the following shell script to create all the new partitions

---------------------------------------------------------------------------------------
#!/bin/sh
# fdisk answers: new partition, primary, number 1, default first/last sector (blank lines), write
hdd="/dev/vde /dev/vdf /dev/vdg /dev/vdh /dev/vdi /dev/vdj /dev/vdk /dev/vdl /dev/vdm"
for i in $hdd;do
echo "n
p
1


w
"|fdisk $i;done
---------------------------------------------------------------------------------------

# if ASMLib is used
oracleasm createdisk DATA_01 /dev/vde1
oracleasm createdisk DATA_02 /dev/vdf1
oracleasm createdisk DATA_03 /dev/vdg1
oracleasm createdisk DATA_04 /dev/vdh1
oracleasm createdisk DATA_05 /dev/vdi1

oracleasm createdisk RECO_01 /dev/vdj1
oracleasm createdisk RECO_02 /dev/vdk1
oracleasm createdisk RECO_03 /dev/vdl1
oracleasm createdisk RECO_04 /dev/vdm1

# without ASMLib
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="vde1",NAME="asm_data_01",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="vdf1",NAME="asm_data_02",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="vdg1",NAME="asm_data_03",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="vdh1",NAME="asm_data_04",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="vdi1",NAME="asm_data_05",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="vdj1",NAME="asm_reco_01",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="vdk1",NAME="asm_reco_02",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="vdl1",NAME="asm_reco_03",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="vdm1",NAME="asm_reco_04",OWNER="grid",GROUP="asmadmin",MODE="0660"

# at this moment clone the VM
# on Dom0
virsh dumpxml ylesia-db01 > /tmp/myvm.xml
# modify the XML file:
# replace ylesia-db01 by ylesia-db02
# remove the <uuid>...</uuid> line
# generate new MAC addresses for the network interfaces

date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/'

virsh define /tmp/myvm.xml

# start the cloned ylesia-db02 VM and change the IP addresses and host name
vi /etc/sysconfig/network-scripts/ifcfg-enp1s0
vi /etc/sysconfig/network-scripts/ifcfg-enp2s0
vi /etc/sysconfig/network-scripts/ifcfg-enp3s0

hostnamectl set-hostname ylesia-db02.swgalaxy

# mount CIFS share on both VMs
dnf install cifs-utils.x86_64

groupadd smbuser --gid 1502
useradd smbuser --uid 1502 -g smbuser -G smbuser

mkdir -p /mnt/yavin4

# test CIFS mount
mount -t cifs //192.168.0.9/share /mnt/yavin4 -o vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,user=vplesnila
umount /mnt/yavin4

# create credentials file for automount: /root/.smbcred
# username=vplesnila
# password=*****

# add in /etc/fstab
# //192.168.0.9/share /mnt/yavin4 cifs vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred 0 0

# mount
mount -a

# oracle user profile
---------------------------------------------------------------------------------------
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
alias listen='lsof -i -P | grep -i "listen"'
alias s='rlwrap sqlplus / as sysdba'
alias r='rlwrap rman target /'

PS1='\u@\h[$ORACLE_SID]:$PWD\$ '
umask 022

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
---------------------------------------------------------------------------------------

# grid user profile
---------------------------------------------------------------------------------------
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
alias listen='lsof -i -P | grep -i "listen"'
alias asmcmd='rlwrap asmcmd'
alias s='rlwrap sqlplus / as sysasm'
PS1='\u@\h[$ORACLE_SID]:$PWD\$ '
umask 022

GRID_HOME=/app/grid/product/21.3
ORACLE_SID=+ASM1
ORACLE_BASE=/app/grid/base
ORACLE_HOME=$GRID_HOME
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

export ORACLE_BASE
export ORACLE_HOME
export LD_LIBRARY_PATH
export ORACLE_SID
export PATH
---------------------------------------------------------------------------------------

# generate SSH keys on both VMs and add the public keys to .ssh/authorized_keys in order to connect locally and cross-connect without a password
ssh-keygen
cd
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
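For scripted setups, the key generation and authorization can be done non-interactively; a sketch using a scratch directory standing in for the real `~/.ssh` (the empty passphrase mirrors the password-less cross-connect goal):

```shell
set -e
SSHDIR=$(mktemp -d)        # stand-in for ~/.ssh
# generate an RSA key pair with no passphrase, no prompts
ssh-keygen -q -t rsa -N '' -f "$SSHDIR/id_rsa"
# authorize the key and lock down permissions as sshd requires
cat "$SSHDIR/id_rsa.pub" >> "$SSHDIR/authorized_keys"
chmod 700 "$SSHDIR" && chmod 600 "$SSHDIR/authorized_keys"
ls "$SSHDIR"
```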

# as root on both VMs
mkdir -p /app/grid/product/21.3
mkdir -p /app/grid/base
mkdir -p /app/grid/oraInventory

chown -R grid:oinstall /app/grid/product/21.3
chown -R grid:oinstall /app/grid/base
chown -R grid:oinstall /app/grid/oraInventory

# on the 1st VM, unzip the grid infrastructure distribution ZIP file
cd /app/grid/product/21.3
unzip /mnt/yavin4/kit/Oracle/Oracle_Database_21/LINUX.X64_213000_grid_home.zip

# from an X11 terminal, proceed with software installation
/app/grid/product/21.3/gridSetup.sh

# same command to use after software installation in order to configure the new Oracle Cluster
/app/grid/product/21.3/gridSetup.sh

# if grid setup fails with the error PRVG-11250 The Check "RPM Package Manager Database" Was Not Performed,
# consider applying the following MOS note: Cluvfy Fail with PRVG-11250 The Check "RPM Package Manager Database" Was Not Performed (Doc ID 2548970.1)
/app/grid/product/21.3/runcluvfy.sh stage -pre crsinst -n ylesia-db01,ylesia-db02 -method root

# from an X11 terminal, run the ASM configuration assistant in order to create the RECO diskgroup
/app/grid/product/21.3/bin/asmca

# check cluster status
crsctl status res -t

# Apply the latest GIRU patch using the out-of-place method
#######################################################

# as root, create a staging area for patches on the first VM
mkdir -p /app/staging_area
chown -R grid:oinstall /app/staging_area
chmod g+w /app/staging_area

# as grid user, unzip the GI patch in the staging area on the first VM
su - grid
cd /app/staging_area
unzip /mnt/yavin4/kit/Oracle/Oracle_Database_21/patch/GI_RU_AVR23/p35132566_210000_Linux-x86-64.zip

# as root, on both VMs, prepare the directory for the new GI
export NEW_GRID_HOME=/app/grid/software/21.10

mkdir -p $NEW_GRID_HOME
chown -R grid:oinstall $NEW_GRID_HOME

# as grid, only on the first VM, unzip the base distribution of the GI
su - grid
export NEW_GRID_HOME=/app/grid/software/21.10
cd $NEW_GRID_HOME
unzip /mnt/yavin4/kit/Oracle/Oracle_Database_21/LINUX.X64_213000_grid_home.zip

# very IMPORTANT
# deploy the latest version of OPatch in the new GI home before proceeding with the GI install with RU apply
# as grid user
cd $NEW_GRID_HOME
rm -rf OPatch
ls OPatch
unzip /mnt/yavin4/kit/Oracle/opatch/p6880880_210000_Linux-x86-64.zip

# at this moment, just simulate an install of the base GI, software only
# do not install, just put the response file aside

# set up the new GI HOME and install the GIRU
export NEW_GRID_HOME=/app/grid/software/21.10
export ORACLE_HOME=$NEW_GRID_HOME
$ORACLE_HOME/gridSetup.sh -executePrereqs -silent

cd $ORACLE_HOME
./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
  -applyRU /app/staging_area/35132566 \
  -responseFile /home/grid/grid.rsp

# once the new GI homes are installed and updated to the latest GIRU,
# switch CRS to the new GI HOME, on each VM one by one (rolling mode)

export NEW_GRID_HOME=/app/grid/software/21.10
export ORACLE_HOME=$NEW_GRID_HOME
export CURRENT_NODE=$(hostname)

$ORACLE_HOME/gridSetup.sh \
  -silent -switchGridHome \
  oracle.install.option=CRS_SWONLY \
  ORACLE_HOME=$ORACLE_HOME \
  oracle.install.crs.config.clusterNodes=$CURRENT_NODE \
  oracle.install.crs.rootconfig.executeRootScript=false

# check that grid:oinstall is the owner of the GI HOME, otherwise fix it:
chown grid /app/grid/product/21.10

# IMPORTANT: do not remove the old GI HOME before switching to the new GI HOME on all nodes

# update the grid .bash_profile with the new GI home and check CRS
crsctl status res -t

# display registered ORACLE_HOME's
cat /app/grid/oraInventory/ContentsXML/inventory.xml | grep "HOME NAME"

# as grid user, on both VMs, remove the OLD ORACLE_HOME
export OLD_GRID_HOME=/app/grid/product
export ORACLE_HOME=$OLD_GRID_HOME
$ORACLE_HOME/deinstall/deinstall -local

# misc
######

# if some install/deinstall operations for the 19c RDBMS fail while checking OEL8.7 compatibility, use:
export CV_ASSUME_DISTID=OL7

# the following libs may also be needed
dnf install libstdc++-devel.x86_64
dnf install libaio-devel.x86_64
dnf install libcap.x86_64 libcap-devel.x86_64

# potential issue with the Oracle 19 RDBMS binary
# check permissions (-rwsr-s--x) and owner (oracle:asmadmin) of the 19c oracle binary
ls -l /app/oracle/product/19/bin/oracle
# if they are not correct, issue as root:
chown oracle:asmadmin /app/oracle/product/19/bin/oracle
chmod 6751 /app/oracle/product/19/bin/oracle

# if CLSRSC-762: Empty site GUID for the local site name (Doc ID 2878740.1)
# update $GRID_HOME/crs/install/crsgenconfig_params
# put the name of the RAC and generate a new UUID using the linux uuid command

# Enabling a Read-Only Oracle Home
$ORACLE_HOME/bin/roohctl -enable

RMAN/rman_duplicate_from_location_hot_backup_01.txt (Normal file)
@@ -0,0 +1,20 @@
# set duplicate target database to <DB_NAME>

rman auxiliary /

run
{
  allocate auxiliary channel aux01 device type disk;
  allocate auxiliary channel aux02 device type disk;
  allocate auxiliary channel aux03 device type disk;
  allocate auxiliary channel aux04 device type disk;
  allocate auxiliary channel aux05 device type disk;
  allocate auxiliary channel aux06 device type disk;
  allocate auxiliary channel aux07 device type disk;
  allocate auxiliary channel aux08 device type disk;
  allocate auxiliary channel aux09 device type disk;
  allocate auxiliary channel aux10 device type disk;
  duplicate target database to ANDO backup location '/mnt/yavin4/tmp/_oracle_/orabackup/19_non_CDB/backupset/';
}

Time_Zone_upgrade/ts_upgrade_01.txt (Normal file)
@@ -0,0 +1,143 @@
# https://oracle-base.com/articles/misc/update-database-time-zone-file#upgrade-time-zone-file-multiteanant

# check current time zone version

select * from V$TIMEZONE_FILE;
select TZ_VERSION from REGISTRY$DATABASE;

-- query dst_check.sql -----------
COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20

select
  property_name, property_value
from
  DATABASE_PROPERTIES
where
  property_name like 'DST_%'
order by
  property_name;
----------------------------------

# latest available version of the timezone
select DBMS_DST.GET_LATEST_TIMEZONE_VERSION from dual;

# prepare for the upgrade (optional)

DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  l_tz_version := DBMS_DST.get_latest_timezone_version;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_prepare(l_tz_version);
END;
/

# execute dst_check.sql
# DST_UPGRADE_STATE should change from NONE to PREPARE

# clean technical tables
truncate table SYS.DST$AFFECTED_TABLES;
truncate table SYS.DST$ERROR_TABLE;

# find tables and errors affected by the upgrade
exec DBMS_DST.FIND_AFFECTED_TABLES;

select * from SYS.DST$AFFECTED_TABLES;
select * from SYS.DST$ERROR_TABLE;

# perform the necessary checks and finish the prepare step if you want to go ahead with the upgrade
exec DBMS_DST.END_PREPARE;

# Note: for a CDB, the TZ should be upgraded in each container

# restart the database in UPGRADE mode

# BEGIN upgrade
###############
SET SERVEROUTPUT ON
DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  SELECT DBMS_DST.get_latest_timezone_version
  INTO l_tz_version
  FROM dual;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/

# restart the database

# END upgrade
#############
SET SERVEROUTPUT ON
DECLARE
  l_failures PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
|
||||
DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
|
||||
DBMS_DST.end_upgrade(l_failures);
|
||||
DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
|
||||
END;
|
||||
/
|
||||
|
||||
# restart the database
|
||||
|
||||
|
||||
# following queries can be used to check the progress of the TZ upgrade table by table
|
||||
-- CDB
|
||||
COLUMN owner FORMAT A30
|
||||
COLUMN table_name FORMAT A30
|
||||
|
||||
SELECT con_id,
|
||||
owner,
|
||||
table_name,
|
||||
upgrade_in_progress
|
||||
FROM cdb_tstz_tables
|
||||
ORDER BY 1,2,3;
|
||||
|
||||
-- Non-CDB
|
||||
COLUMN owner FORMAT A30
|
||||
COLUMN table_name FORMAT A30
|
||||
|
||||
SELECT owner,
|
||||
table_name,
|
||||
upgrade_in_progress
|
||||
FROM dba_tstz_tables
|
||||
ORDER BY 1,2;
|
||||
|
||||
# Note: in 21c, the following parameter is supposed tu avoid database restart during TZ upgrade
|
||||
# in my test it does not worked
|
||||
alter system set timezone_version_upgrade_online=true scope=both sid='*';
|
||||
|
||||
|
||||
-- restart PDB$SEED in UPGRADE mode
|
||||
alter pluggable database PDB$SEED close immediate instances=ALL;
|
||||
alter pluggable database PDB$SEED open upgrade instances=ALL;
|
||||
show pdbs
|
||||
alter session set container=PDB$SEED;
|
||||
-- run BEGIN TZ upgrade procedure
|
||||
|
||||
-- restart PDB$SEED in READ-WITE mode
|
||||
alter session set container=CDB$ROOT;
|
||||
alter pluggable database PDB$SEED close immediate instances=ALL;
|
||||
alter pluggable database PDB$SEED open read write instances=ALL;
|
||||
alter session set container=PDB$SEED;
|
||||
-- run END TZ upgrade procedure
|
||||
|
||||
-- restart PDB$SEED in READ-WITE mode
|
||||
alter session set container=CDB$ROOT;
|
||||
alter pluggable database PDB$SEED close immediate instances=ALL;
|
||||
alter pluggable database PDB$SEED open instances=ALL;
|
||||
|
||||
-- check TZ and close PDB$SEED
|
||||
alter session set container=CDB$ROOT;
|
||||
alter pluggable database PDB$SEED close immediate instances=ALL;
|
||||
|
||||
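The PDB$SEED close/open pairs above repeat with only the open mode changing; a minimal sketch that emits such a pair for a given mode (PDB name and INSTANCES=ALL as in the note; an empty mode means a plain open, which defaults to read write):

```shell
# Minimal sketch: print the close/open statement pair for a PDB in a given open mode.
pdb_restart_sql() {
    pdb="${1:-PDB\$SEED}"
    mode="$2"   # e.g. "upgrade", "read write", or empty for a plain open
    echo "alter pluggable database $pdb close immediate instances=ALL;"
    if [ -n "$mode" ]; then
        echo "alter pluggable database $pdb open $mode instances=ALL;"
    else
        echo "alter pluggable database $pdb open instances=ALL;"
    fi
}
# pdb_restart_sql 'PDB$SEED' upgrade
```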
18
artcles.txt
Normal file
@@ -0,0 +1,18 @@
https://www.databasejournal.com/oracle/hybrid-histograms-in-oracle-12c/
https://hourim.wordpress.com/2016/01/20/natural-and-adjusted-hybrid-histogram/
https://chinaraliyev.wordpress.com/2018/11/06/understanding-hybrid-histogram/

http://www.br8dba.com/store-db-credentials-in-oracle-wallet/
https://backendtales.blogspot.com/2023/02/santas-little-index-helper.html

# Asymmetric Dataguard with multitenant
https://oracleandme.com/2023/10/31/asymmetric-dataguard-with-multitenant-part-1/

https://github.com/GoogleCloudPlatform/community/blob/master/archived/setting-up-postgres-hot-standby.md

# cursor: pin S wait on X
https://svenweller.wordpress.com/2018/05/23/tackling-cursor-pin-s-wait-on-x-wait-event-issue/

# pin cursor in shared pool
https://dbamarco.wordpress.com/2015/10/29/high-parse-time-in-oracle-12c/

78
automatic_SPM/automatic_SPM_01.txt
Normal file
@@ -0,0 +1,78 @@
show parameter optimizer_capture_sql_plan_baselines

-- should be FALSE for automatic SPM

col parameter_name for a40
col parameter_value for a20

SELECT parameter_name,parameter_value
FROM dba_sql_management_config;

-- check for parameter_name='AUTO_SPM_EVOLVE_TASK'

col task_name for a40

SELECT task_name,enabled
FROM dba_autotask_schedule_control
WHERE dbid = sys_context('userenv','con_dbid');

-- check for task_name = 'Auto SPM Task';

------------
-- to ENABLE
------------
BEGIN
DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK','ON');
END;
/

-- For non-autonomous systems only, in the relevant PDB
-- execute the following as SYS to ensure the correct plan source
-- and that ACCEPT_PLANS has its default value, TRUE
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_SOURCE',
value => 'SQL_TUNING_SET');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ACCEPT_PLANS',
value => 'TRUE');
END;
/

-------------
-- to DISABLE
-------------
BEGIN
DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK','OFF');
END;
/

-- For non-autonomous systems only,
-- execute the following as SYS if you want to return
-- parameters to 'manual' SPM values - for example
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_BASELINE',
value => 'EXISTING');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_SOURCE',
value => 'AUTO');
END;
/

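The SET_EVOLVE_TASK_PARAMETER blocks above differ only in parameter and value; a minimal sketch that emits one block per "param=value" argument (task name SYS_AUTO_SPM_EVOLVE_TASK as in the note):

```shell
# Minimal sketch: emit one DBMS_SPM.SET_EVOLVE_TASK_PARAMETER block per "param=value" argument.
spm_set_params() {
    for kv in "$@"; do
        param="${kv%%=*}"
        value="${kv#*=}"
        cat <<EOF
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => '$param',
value => '$value');
END;
/
EOF
    done
}
# spm_set_params ALTERNATE_PLAN_SOURCE=SQL_TUNING_SET ACCEPT_PLANS=TRUE | sqlplus / as sysdba
```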
22
btrfs/btrfs_install_rocky8_01.txt
Normal file
@@ -0,0 +1,22 @@
# based on: https://www.unixmen.com/install-btrfs-tools-on-ubuntu-linux-to-manage-btrfs-operations/

dnf install -y git automake asciidoc.noarch xmlto.x86_64
dnf --enablerepo=powertools install python3-sphinx
dnf install -y e2fsprogs-devel.x86_64 e2fsprogs-libs.x86_64 e2fsprogs.x86_64 libblkid-devel.x86_64
dnf install -y libzstd.x86_64 libzstd-devel.x86_64
dnf install -y systemd-devel.x86_64
dnf install -y python39.x86_64 python36-devel.x86_64
dnf install -y lzo.x86_64 lzo-devel.x86_64

git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
cd btrfs-progs/
automake
./configure
make
# if make fails on "Making documentation", add master_doc = 'index' to Documentation/conf.py
make install

# test
btrfs version

lsblk

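Before building, it can help to confirm the toolchain actually landed; a minimal sketch (the exact command list to check is an assumption based on the packages above):

```shell
# Minimal sketch: verify that required build tools are on PATH before running the build.
check_build_tools() {
    missing=0
    for cmd in "$@"; do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "found: $cmd"
        else
            echo "missing: $cmd"
            missing=1
        fi
    done
    return "$missing"
}
# check_build_tools git automake make gcc || echo "install the missing packages first"
```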
135
clustoring_factor/clustering_factor_01.txt
Normal file
@@ -0,0 +1,135 @@
-- https://easyteam.fr/limpact-du-facteur-dordonnancement-sur-les-performances-clustering-factor/

create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret;
alter pluggable database NIHILUS open;
alter pluggable database NIHILUS save state;


alter session set container=NIHILUS;

create tablespace USERS datafile size 32M autoextend ON next 32M;
alter database default tablespace USERS;

create user adm identified by "secret";
grant sysdba to adm;

create user usr identified by "secret";
grant CONNECT,RESOURCE to usr;
grant alter session to usr;
alter user usr quota unlimited on USERS;

alias adm_NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba'
alias usr_NIHILUS='rlwrap sqlplus usr/"secret"@ba


create table USR.TABLE_LIST_DISPLAY_PATTERNS (
LIST_ID number not null,
DISPLAY_PATTERN_ID varchar(1000) not null
);

begin
for i in 1..100 loop
insert into USR.TABLE_LIST_DISPLAY_PATTERNS select i, lpad('x',1000,'x') from dba_objects where rownum < 35 order by 1;
end loop;
end;
/
commit;


create index USR.LIST_DISPLAY_PATTERNS_IDX on USR.TABLE_LIST_DISPLAY_PATTERNS(LIST_ID);


create table USR.TABLE_LIST_DISPLAY_RAND as
select * from USR.TABLE_LIST_DISPLAY_PATTERNS order by DBMS_RANDOM.RANDOM;

create index USR.LIST_DISPLAY_RAND_IDX on USR.TABLE_LIST_DISPLAY_RAND(LIST_ID);

exec dbms_stats.gather_table_stats('USR','TABLE_LIST_DISPLAY_PATTERNS', method_opt=>'for all columns size AUTO');
exec dbms_stats.gather_table_stats('USR','TABLE_LIST_DISPLAY_RAND', method_opt=>'for all columns size AUTO');


SQL> @tab USR.TABLE_LIST_DISPLAY_PATTERNS
Show tables matching condition "%USR.TABLE_LIST_DISPLAY_PATTERNS%" (if schema is not specified then current user's tables only are shown)...

OWNER                TABLE_NAME                     TYPE     NUM_ROWS        BLOCKS     EMPTY AVGSPC ROWLEN TAB_LAST_ANALYZED   DEGREE COMPRESS
-------------------- ------------------------------ ---- ------------ ------------- --------- ------ ------ ------------------- ------ --------
USR                  TABLE_LIST_DISPLAY_PATTERNS    TAB          3400           496         0      0   1004 2023-06-25 15:41:27      1 DISABLED

1 row selected.

SQL> @ind USR.LIST_DISPLAY_PATTERNS_IDX
Display indexes where table or index name matches %USR.LIST_DISPLAY_PATTERNS_IDX%...

TABLE_OWNER          TABLE_NAME                     INDEX_NAME                     POS# COLUMN_NAME                    DSC
-------------------- ------------------------------ ------------------------------ ---- ------------------------------ ----
USR                  TABLE_LIST_DISPLAY_PATTERNS    LIST_DISPLAY_PATTERNS_IDX         1 LIST_ID

INDEX_OWNER          TABLE_NAME                     INDEX_NAME                     IDXTYPE    UNIQ STATUS   PART TEMP  H     LFBLKS           NDK   NUM_ROWS       CLUF LAST_ANALYZED       DEGREE VISIBILIT
-------------------- ------------------------------ ------------------------------ ---------- ---- -------- ---- ---- -- ---------- ------------- ---------- ---------- ------------------- ------ ---------
USR                  TABLE_LIST_DISPLAY_PATTERNS    LIST_DISPLAY_PATTERNS_IDX      NORMAL     NO   VALID    NO   N     2          7           100       3400        551 2023-06-25 15:41:27      1 VISIBLE


-- each LIST_ID is stored in how many distinct blocks?

alter session set current_schema=USR;

select
norm.list_id, norm.cnt normanized_blocks, random.cnt randomanized_blocks
from
(select list_id, count(distinct(dbms_rowid.ROWID_BLOCK_NUMBER(rowid))) cnt
from TABLE_LIST_DISPLAY_PATTERNS
group by list_id) norm
,
(select list_id, count(distinct(dbms_rowid.ROWID_BLOCK_NUMBER(rowid))) cnt
from TABLE_LIST_DISPLAY_RAND
group by list_id) random
where norm.list_id = random.list_id
order by list_id;


set lines 256 pages 999

var LID NUMBER;
execute :LID:=20;

select /*+ GATHER_PLAN_STATISTICS */
* from USR.TABLE_LIST_DISPLAY_PATTERNS where LIST_ID=:LID;

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                        | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                             |      1 |        |       |     7 (100)|     34 |00:00:00.01 |      14 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TABLE_LIST_DISPLAY_PATTERNS |      1 |     34 | 34136 |     7   (0)|     34 |00:00:00.01 |      14 |
|*  2 |   INDEX RANGE SCAN                  | LIST_DISPLAY_PATTERNS_IDX   |      1 |     34 |       |     1   (0)|     34 |00:00:00.01 |       5 |
--------------------------------------------------------------------------------------------------------------------------------------------------


select /*+ GATHER_PLAN_STATISTICS */
* from USR.TABLE_LIST_DISPLAY_RAND where LIST_ID=:LID;

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                    | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                         |      1 |        |       |    35 (100)|     34 |00:00:00.01 |      39 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TABLE_LIST_DISPLAY_RAND |      1 |     34 | 34136 |    35   (0)|     34 |00:00:00.01 |      39 |
|*  2 |   INDEX RANGE SCAN                  | LIST_DISPLAY_RAND_IDX   |      1 |     34 |       |     1   (0)|     34 |00:00:00.01 |       5 |
----------------------------------------------------------------------------------------------------------------------------------------------

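The distinct-blocks-per-LIST_ID idea behind the clustering factor can be illustrated outside the database; a minimal sketch on "key block" pairs (the sample data is made up, not from the test above):

```shell
# Minimal sketch: count distinct blocks per key from "key block" lines,
# mirroring the dbms_rowid.ROWID_BLOCK_NUMBER query above.
distinct_blocks_per_key() {
    sort -u | awk '{ cnt[$1]++ } END { for (k in cnt) print k, cnt[k] }' | sort -n
}
# Well-clustered: a key confined to few blocks; scattered: spread over many.
printf '1 10\n1 10\n1 11\n2 11\n2 12\n2 13\n' | distinct_blocks_per_key
# → prints "1 2" then "2 3"
```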
117
divers/ADB_free_install_01.txt
Normal file
@@ -0,0 +1,117 @@
-- https://github.com/oracle/adb-free/pkgs/container/adb-free

dd if=/dev/zero of=/vm/ssd0/ithor/app_02.img bs=1G count=8
dd if=/dev/zero of=/vm/ssd0/ithor/app_03.img bs=1G count=8
virsh domblklist ithor --details
virsh attach-disk ithor /vm/ssd0/ithor/app_03.img vde --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk ithor /vm/ssd0/ithor/app_02.img vdf --driver qemu --subdriver raw --targetbus virtio --persistent

lsblk
pvs
pvcreate /dev/vde1
pvcreate /dev/vdf1
vgs
vgextend vgapp /dev/vde1
vgextend vgapp /dev/vdf1
lvs
lvextend -l +100%FREE /dev/vgapp/app
xfs_growfs /app
df -hT

# disable selinux
/etc/selinux/config
SELINUX=disabled

# install podman
dnf install podman.x86_64

# change storage path for pods
/etc/containers/storage.conf

# create a volume to use later for DATAPUMP / persistent storage across containers
podman volume create adb_container_volume

# build pod
podman run -d \
-p 1521:1522 \
-p 1522:1522 \
-p 8443:8443 \
-p 27017:27017 \
-e DATABASE_NAME=ITHOR \
-e WORKLOAD_TYPE=ATP \
-e WALLET_PASSWORD=Remotecontrol1 \
-e ADMIN_PASSWORD=Remotecontrol1 \
--cap-add SYS_ADMIN \
--device /dev/fuse \
--name adb-free \
--volume adb_container_volume:/u01/data \
ghcr.io/oracle/adb-free:latest-23ai

# list pods and logs
podman ps -a
podman logs -f --names adb-free

# generate systemd unit to manage pod startup
podman generate systemd --restart-policy=always -t 1 adb-free > /etc/systemd/system/adb-free.service
systemctl list-unit-files | grep adb

systemctl enable adb-free.service
systemctl stop adb-free.service
systemctl start adb-free.service

# extract certificates from pod
mkdir /app/adb-free
podman cp adb-free:/u01/app/oracle/wallets/tls_wallet /app/adb-free/

# setup SQL*Plus connections from a linux machine
# client 23 required
# from umbara
scp -rp ithor:/app/adb-free/tls_wallet adb-free_tls_wallet
chown -R oracle:oinstall adb-free_tls_wallet

su - oracle
export TNS_ADMIN=/app/oracle/adb-free_tls_wallet
sed -i 's/localhost/ithor.swgalaxy/g' $TNS_ADMIN/tnsnames.ora

# connect with SQLcl (the "sql" client)
sql admin/Remotecontrol1@ithor_low_tls
sql admin/Remotecontrol1@ithor_low

# create another ADMIN user
-----------------------------------------------------------------
-- USER SQL
CREATE USER LIVESQL IDENTIFIED BY Remotecontrol1;

-- ADD ROLES
GRANT CONNECT TO LIVESQL;
GRANT CONSOLE_DEVELOPER TO LIVESQL;
GRANT GRAPH_DEVELOPER TO LIVESQL;
GRANT RESOURCE TO LIVESQL;
ALTER USER LIVESQL DEFAULT ROLE CONSOLE_DEVELOPER,GRAPH_DEVELOPER;

-- REST ENABLE
BEGIN
ORDS_ADMIN.ENABLE_SCHEMA(
p_enabled => TRUE,
p_schema => 'LIVESQL',
p_url_mapping_type => 'BASE_PATH',
p_url_mapping_pattern => 'livesql',
p_auto_rest_auth => TRUE
);
-- ENABLE DATA SHARING
C##ADP$SERVICE.DBMS_SHARE.ENABLE_SCHEMA(
SCHEMA_NAME => 'LIVESQL',
ENABLED => TRUE
);
commit;
END;
/

-- ENABLE GRAPH
ALTER USER LIVESQL GRANT CONNECT THROUGH GRAPH$PROXY_USER;

-- QUOTA
ALTER USER LIVESQL QUOTA UNLIMITED ON DATA;
-----------------------------------------------------------------
-- extra
GRANT PDB_DBA TO LIVESQL;

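The sed rewrite of the wallet's tnsnames.ora above can be sanity-checked on a copy before touching the real file; a minimal sketch (host name ithor.swgalaxy as in the note; the single-entry tnsnames.ora is made up for the demo):

```shell
# Minimal sketch: rewrite the tnsnames.ora host on a copy and show the diff.
rewrite_tns_host() {
    file="$1"; new_host="$2"
    sed "s/localhost/$new_host/g" "$file" > "$file.new"
    diff "$file" "$file.new" || true   # diff exits 1 when files differ
}
# Demo on a made-up single-entry tnsnames.ora:
printf 'ithor_low = (description=(address=(host=localhost)(port=1522)))\n' > /tmp/tnsnames.ora
rewrite_tns_host /tmp/tnsnames.ora ithor.swgalaxy
```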
105
divers/FK_indexing_01.txt
Normal file
@@ -0,0 +1,105 @@
drop table SUPPLIER purge;

create table SUPPLIER(
id INTEGER generated always as identity
,name varchar2(30) not null
,primary key(id)
)
;

insert /*+ APPEND */ into SUPPLIER(name)
select
dbms_random.string('x',30)
from
xmltable('1 to 100')
;

commit;

drop table PRODUCT purge;
create table PRODUCT(
id integer generated always as identity
,supplier_id integer
,product_name varchar2(30)
,price NUMBER
,primary key(id)
,constraint fk_prod_suppl foreign key(supplier_id) references SUPPLIER(id) on delete cascade
)
;

alter table PRODUCT drop constraint fk_prod_suppl;
alter table PRODUCT add constraint fk_prod_suppl foreign key(supplier_id) references SUPPLIER(id) on delete cascade;

insert /*+ APPEND */ into PRODUCT(supplier_id,product_name,price)
select
trunc(dbms_random.value(1,90))
,dbms_random.string('x',30)
,dbms_random.value(1,10000)
from
xmltable('1 to 10000000')
;

commit;

-- grant execute on dbms_job to POC;
-- grant create job to POC;

create or replace procedure delete_supplier(suppl_id integer) as
begin
DBMS_APPLICATION_INFO.set_module(module_name => 'delete_supplier', action_name => 'Delete supplier');
delete from SUPPLIER where id=suppl_id;
commit;
end;
/

create or replace procedure parallel_delete_supplier as
v_jobno number:=0;
begin
for i in 51..100 loop
dbms_job.submit(v_jobno,'delete_supplier('||to_char(i)||');', sysdate);
end loop;
commit;
end;
/

-- create a huge locking situation ;)
exec parallel_delete_supplier;

SQL> @ash/ashtop inst_id,session_id,sql_id,event2,blocking_inst_id,blocking_session,blocking_session_status,P1text,p2,p3 "username='POC'" sysdate-1/24/20 sysdate

    Total                                                                                                                                                                                                              Distinct   Distinct
  Seconds     AAS %This   INST_ID SESSION_ID SQL_ID        EVENT2                                     BLOCKING_INST_ID BLOCKING_SESSION BLOCKING_SE P1TEXT                                 P2         P3 FIRST_SEEN          LAST_SEEN                Execs Seen  Tstamps
--------- ------- ------- ---------- ---------- ------------- ------------------------------------------ ---------------- ---------------- ----------- ------------------------------ ---------- ---------- ------------------- ------------------- ---------- --------
       15      .1   2%  |          1         19 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         20 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         21 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         23 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         25 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         27 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         29 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         30 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         31 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         33 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         35 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1         38 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1        158 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1        159 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15
       15      .1   2%  |          1        160 2b4hjy6xfb76s enq: TM - contention [mode=5]                             1              450 VALID       name|mode                           42238          0 2024-02-11 19:09:40 2024-02-11 19:09:54          1       15


-- find the enq mode from the P1 column of gv$session
SQL> select distinct' [mode='||BITAND(p1, POWER(2,14)-1)||']' from gv$session where username='POC' and event like 'enq%';

'[MODE='||BITAND(P1,POWER(2,14)-1)||']'
------------------------------------------------
 [mode=5]


-- index the FK on the child table
create index IDX_PRODUCT_SUPPL_ID on PRODUCT(supplier_id);

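The BITAND(p1, POWER(2,14)-1) trick above (the low bits of the enqueue P1 value hold the lock mode) can be checked with plain shell arithmetic; a minimal sketch (the sample P1 value is an assumption: 'TM' in the high 16 bits, mode 5 in the low bits, matching the name|mode pair in the ashtop output):

```shell
# Minimal sketch: extract the lock mode (low 14 bits) from an enqueue P1 value,
# the same mask as BITAND(p1, POWER(2,14)-1) in the SQL above.
enq_mode() {
    p1="$1"
    echo $(( p1 & (16384 - 1) ))   # 16384 = 2^14
}
# 'TM' mode 5 encodes as 0x544D0005:
enq_mode $(( 0x544D0005 ))
# → 5
```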
11
divers/KVM_VM_create_Windows_11.txt
Normal file
@@ -0,0 +1,11 @@
qemu-img create -f raw /vm/ssd0/utapau/hdd_01.img 200G

virt-install \
--graphics vnc,password=secret,listen=0.0.0.0 \
--name=utapau \
--vcpus=2 \
--memory=4096 \
--network bridge=br0 \
--cdrom=/vm/hdd0/_kit_/Win10_1809Oct_v2_French_x64.iso \
--disk=/vm/ssd0/utapau/hdd_01.img \
--os-variant=win10

13
divers/KVM_VM_create_linux.txt
Normal file
@@ -0,0 +1,13 @@
qemu-img create -f raw /vm/ssd0/topawa/hdd_01.img 200G

virt-install \
--graphics vnc,password=secret,listen=0.0.0.0 \
--name=topawa \
--vcpus=4 \
--memory=8192 \
--network bridge=br0 \
--network bridge=br0 \
--cdrom=/vm/hdd0/_kit_/extix-23.4-64bit-deepin-23-refracta-3050mb-230403.iso \
--disk=/vm/ssd0/topawa/hdd_01.img \
--os-variant=ubuntu22.04

95
divers/KVM_install_Rocky9_01.txt
Normal file
@@ -0,0 +1,95 @@
-- Network setup
----------------

nmcli connection show --active

nmcli connection modify enp4s0 ipv4.address 192.168.0.4/24
nmcli connection modify enp4s0 ipv4.method manual ipv6.method ignore
nmcli connection modify enp4s0 ipv4.gateway 192.168.0.1
nmcli connection modify enp4s0 ipv4.dns 192.168.0.8
nmcli connection modify enp4s0 ipv4.dns-search swgalaxy

hostnamectl set-hostname naboo.swgalaxy

# SELINUX=disabled
/etc/selinux/config

systemctl stop firewalld
systemctl disable firewalld

-- KVM install
--------------

dnf install -y qemu-kvm libvirt virt-manager virt-install virtio-win.noarch
dnf install -y epel-release
dnf -y install bridge-utils virt-top libguestfs-tools virt-viewer
dnf -y install at wget bind-utils

systemctl start atd
systemctl enable atd
systemctl status atd

lsmod | grep kvm

systemctl start libvirtd
systemctl enable libvirtd

brctl show
nmcli connection show

# This section should be scripted and run from the server console, or run under an at-script as a background command
#---->

export BR_NAME="br0"
export BR_INT="enp4s0"
export SUBNET_IP="192.168.0.4/24"
export GW="192.168.0.1"
export DNS1="192.168.0.8"

nmcli connection add type bridge autoconnect yes con-name ${BR_NAME} ifname ${BR_NAME}

nmcli connection modify ${BR_NAME} ipv4.addresses ${SUBNET_IP} ipv4.method manual
nmcli connection modify ${BR_NAME} ipv4.gateway ${GW}
nmcli connection modify ${BR_NAME} ipv4.dns ${DNS1}

nmcli connection delete ${BR_INT}
nmcli connection add type bridge-slave autoconnect yes con-name ${BR_INT} ifname ${BR_INT} master ${BR_NAME}

nmcli connection show
nmcli connection up br0
nmcli connection show br0

ip addr show

systemctl restart libvirtd
# <-----


# Install other stuff: Cockpit, bind-utils, cifs-utils etc.
dnf install cockpit cockpit-machines.noarch -y

systemctl start cockpit
systemctl enable --now cockpit.socket

# reboot the system

dnf install -y lsof bind-utils cifs-utils.x86_64

# setup CIFS mounts
groupadd smbuser --gid 1502
useradd smbuser --uid 1502 -g smbuser -G smbuser

-- create credentials file for automount: /root/.smbcred
username=vplesnila
password=*****

mkdir -p /mnt/yavin4
mkdir -p /mnt/unprotected

-- add in /etc/fstab
//192.168.0.9/share /mnt/yavin4 cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0
//192.168.0.9/unprotected /mnt/unprotected cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0

systemctl daemon-reload
mount -a

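A quick syntax check of the added fstab lines before `mount -a` can save a failed boot; a minimal sketch (pure shell, only checks the 6-field shape of each line, not the mount options themselves):

```shell
# Minimal sketch: verify each non-comment fstab line has the 6 expected fields
# (device, mountpoint, type, options, dump, pass).
check_fstab() {
    awk 'NF && $1 !~ /^#/ { if (NF != 6) { print "bad line " NR ": " $0; bad = 1 } } END { exit bad }' "$1"
}
# check_fstab /etc/fstab && echo "fstab shape OK"
```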
2
divers/KVM_save_all_domain_XML.txt
Normal file
@@ -0,0 +1,2 @@
# print one "virsh dumpxml <domain> > <domain>.xml" command per domain (pipe the output to sh to run it)
virsh list --all --name | awk 'NF { print "virsh dumpxml " $1 " > " $1 ".xml" }'

144
divers/OEL9_install_01.txt
Normal file
144
divers/OEL9_install_01.txt
Normal file
@@ -0,0 +1,144 @@
|
||||
dd if=/dev/zero of=system_01.img bs=1G count=10
|
||||
dd if=/dev/zero of=swap_01.img bs=1G count=4
|
||||
|
||||
# create new domain
|
||||
virt-install \
|
||||
--graphics vnc,password=secret,listen=0.0.0.0 \
|
||||
--name=seedmachine \
|
||||
--vcpus=4 \
|
||||
--memory=8192 \
|
||||
--network bridge=br0 \
|
||||
--network bridge=br0 \
|
||||
--cdrom=/mnt/yavin4/kit/Linux/OracleLinux-R9-U4-x86_64-boot-uek.iso \
|
||||
--disk /vm/ssd0/seedmachine/system_01.img \
|
||||
--disk /vm/ssd0/seedmachine/swap_01.img \
|
||||
--os-variant=ol9.3
|
||||
|
||||
dnf install -y lsof bind-utils cifs-utils.x86_64
|
||||
dnf -y install at wget bind-utils tar.x86_64
|
||||
|
||||
systemctl start atd
|
||||
systemctl enable atd
|
||||
systemctl status atd
|
||||
|
||||
-- Network setup
|
||||
----------------
|
||||
|
||||
nmcli connection show --active
|
||||
|
||||
nmcli connection modify enp1s0 ipv4.address 192.168.0.66/24
|
||||
nmcli connection modify enp1s0 ipv4.method manual ipv6.method ignore
|
||||
nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1
|
||||
nmcli connection modify enp1s0 ipv4.dns 192.168.0.8
nmcli connection modify enp1s0 ipv4.dns-search swgalaxy

nmcli connection modify enp2s0 ipv4.address 192.168.1.66/24
nmcli connection modify enp2s0 ipv4.method manual ipv6.method ignore

hostnamectl set-hostname seedmachine.swgalaxy

# set SELINUX=disabled in /etc/selinux/config

systemctl stop firewalld
systemctl disable firewalld

dnf install oracle-epel-release-el9.x86_64 oracle-database-preinstall-19c.x86_64
dnf install -y rlwrap.x86_64

# setup CIFS mounts
groupadd smbuser --gid 1502
useradd smbuser --uid 1502 -g smbuser -G smbuser

# create credentials file for automount: /root/.smbcred
username=vplesnila
password=*****

mkdir -p /mnt/yavin4
mkdir -p /mnt/unprotected

# add in /etc/fstab
//192.168.0.9/share /mnt/yavin4 cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0
//192.168.0.9/unprotected /mnt/unprotected cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0

systemctl daemon-reload
mount -a

# add oracle user to the smbuser group, then check
grep smbuser /etc/group

smbuser:x:1502:smbuser,oracle

# add /app FS
dd if=/dev/zero of=app_01.img bs=1G count=40
dd if=/dev/zero of=data_01.img bs=1G count=20
dd if=/dev/zero of=data_02.img bs=1G count=20
dd if=/dev/zero of=reco_01.img bs=1G count=20

virsh domblklist seedmachine --details
virsh attach-disk seedmachine /vm/ssd0/seedmachine/app_01.img vdc --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk seedmachine /vm/ssd0/seedmachine/data_01.img vdd --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk seedmachine /vm/ssd0/seedmachine/data_02.img vde --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk seedmachine /vm/ssd0/seedmachine/reco_01.img vdf --driver qemu --subdriver raw --targetbus virtio --persistent

# create one partition on each disk
fdisk /dev/vdc
fdisk /dev/vdd
fdisk /dev/vde
fdisk /dev/vdf

pvs
pvcreate /dev/vdc1
pvcreate /dev/vdd1
pvcreate /dev/vde1
pvcreate /dev/vdf1

vgs
vgcreate vgapp /dev/vdc1
vgcreate vgdata /dev/vdd1 /dev/vde1
vgcreate vgreco /dev/vdf1

lvs
lvcreate -n app -l 100%FREE vgapp
lvcreate -n data -l 100%FREE vgdata
lvcreate -n reco -l 100%FREE vgreco

mkfs.xfs /dev/mapper/vgapp-app
mkfs.xfs /dev/mapper/vgdata-data
mkfs.xfs /dev/mapper/vgreco-reco

mkdir -p /app /data /reco

# add in /etc/fstab
/dev/mapper/vgapp-app /app xfs defaults 0 0
/dev/mapper/vgdata-data /data xfs defaults 0 0
/dev/mapper/vgreco-reco /reco xfs defaults 0 0

systemctl daemon-reload
mount -a

chown -R oracle:oinstall /app /data /reco
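The two `mount -a` steps above can be sanity-checked with a small helper that looks a mount point up in `/proc/mounts`. A sketch (the `check_mounted` function name is mine):

```shell
#!/usr/bin/env bash
# Return 0 if the given path is currently a mount point, 1 otherwise.
# Reads /proc/mounts directly, so it works without the mountpoint(1) utility.
check_mounted() {
    local target=$1
    awk -v t="$target" '$2 == t { found = 1 } END { exit !found }' /proc/mounts
}

# Example: verify the CIFS and LVM mount points after `mount -a`
for mp in /mnt/yavin4 /mnt/unprotected /app /data /reco; do
    check_mounted "$mp" && echo "OK      $mp" || echo "MISSING $mp"
done
```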
divers/PC_boot_menu.txt  (new file, 2 lines)

AMD Ryzen - F7
divers/PDB$SEED_recompile_all.sql  (new file, 9 lines)

alter pluggable database PDB$SEED close immediate instances=ALL;
alter pluggable database PDB$SEED open read write instances=ALL;
alter session set container=PDB$SEED;
alter session set "_ORACLE_SCRIPT"=true;
@?/rdbms/admin/utlrp
alter session set "_ORACLE_SCRIPT"=false;
alter session set container=CDB$ROOT;
alter pluggable database PDB$SEED close immediate instances=ALL;
alter pluggable database PDB$SEED open read only instances=ALL;
divers/PDB_PITR_scratch_01.txt  (new file, 157 lines)

rman target /

run
{
set nocfau;
allocate channel ch01 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck';
allocate channel ch02 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck';
allocate channel ch03 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck';
allocate channel ch04 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck';
backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input;
release channel ch01;
release channel ch02;
release channel ch03;
release channel ch04;
allocate channel ch01 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.controlfile';
backup current controlfile;
release channel ch01;
}

sqlplus 'sys/"Secret00!"'@wayland.swgalaxy:1555/ADNAPRD_DGMGRL as sysdba
sqlplus 'sys/"Secret00!"'@togoria.swgalaxy:1555/ADNADRP_DGMGRL as sysdba

configure archivelog deletion policy to applied on all standby;

rman target='sys/"Secret00!"'@wayland.swgalaxy:1555/ADNAPRD_DGMGRL auxiliary='sys/"Secret00!"'@togoria.swgalaxy:1555/ADNADRP_DGMGRL

run
{
allocate channel pri01 device type disk;
allocate channel pri02 device type disk;
allocate channel pri03 device type disk;
allocate channel pri04 device type disk;
allocate channel pri05 device type disk;
allocate channel pri06 device type disk;
allocate channel pri07 device type disk;
allocate channel pri08 device type disk;
allocate channel pri09 device type disk;
allocate channel pri10 device type disk;

allocate auxiliary channel aux01 device type disk;
allocate auxiliary channel aux02 device type disk;
allocate auxiliary channel aux03 device type disk;
allocate auxiliary channel aux04 device type disk;
allocate auxiliary channel aux05 device type disk;
allocate auxiliary channel aux06 device type disk;
allocate auxiliary channel aux07 device type disk;
allocate auxiliary channel aux08 device type disk;
allocate auxiliary channel aux09 device type disk;
allocate auxiliary channel aux10 device type disk;

duplicate database 'ADNA' for standby
from active database using compressed backupset section size 512M;
}
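The twenty `allocate channel` lines in the duplicate block follow one pattern; a small bash sketch can generate them so the channel count lives in one place (the `gen_channels` helper is mine, the RMAN syntax is the one used above):

```shell
#!/usr/bin/env bash
# Emit the primary and auxiliary channel allocations for an RMAN run block.
gen_channels() {
    local count=$1
    for i in $(seq -f '%02g' 1 "$count"); do
        echo "allocate channel pri${i} device type disk;"
    done
    for i in $(seq -f '%02g' 1 "$count"); do
        echo "allocate auxiliary channel aux${i} device type disk;"
    done
}

gen_channels 10
```

Paste the output into the `run { … }` block, or redirect it into a script file passed to `rman cmdfile=`.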

alter system set dg_broker_config_file1='/app/oracle/base/admin/ADNAPRD/dgmgrl/dr1ADNAPRD.dat' scope=both sid='*';
alter system set dg_broker_config_file2='/app/oracle/base/admin/ADNAPRD/dgmgrl/dr2ADNAPRD.dat' scope=both sid='*';
alter system set dg_broker_start=TRUE scope=both sid='*';

alter system set dg_broker_config_file1='/app/oracle/base/admin/ADNADRP/dgmgrl/dr1ADNADRP.dat' scope=both sid='*';
alter system set dg_broker_config_file2='/app/oracle/base/admin/ADNADRP/dgmgrl/dr2ADNADRP.dat' scope=both sid='*';
alter system set dg_broker_start=TRUE scope=both sid='*';

rlwrap dgmgrl 'sys/"Secret00!"'@wayland.swgalaxy:1555/ADNAPRD_DGMGRL

create configuration ADNA as
primary database is ADNAPRD
connect identifier is 'wayland.swgalaxy:1555/ADNAPRD_DGMGRL';

add database ADNADRP
as connect identifier is 'togoria.swgalaxy:1555/ADNADRP_DGMGRL'
maintained as physical;

enable configuration;

edit database 'adnaprd' set property ArchiveLagTarget=0;
edit database 'adnaprd' set property LogArchiveMaxProcesses=2;
edit database 'adnaprd' set property LogArchiveMinSucceedDest=1;
edit database 'adnaprd' set property StandbyFileManagement='AUTO';

edit database 'adnadrp' set property ArchiveLagTarget=0;
edit database 'adnadrp' set property LogArchiveMaxProcesses=2;
edit database 'adnadrp' set property LogArchiveMinSucceedDest=1;
edit database 'adnadrp' set property StandbyFileManagement='AUTO';

edit instance 'ADNAPRD' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=wayland.swgalaxy)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=ADNAPRD_DGMGRL)(INSTANCE_NAME=ADNAPRD)(SERVER=DEDICATED)))';
edit instance 'ADNADRP' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=togoria.swgalaxy)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=ADNADRP_DGMGRL)(INSTANCE_NAME=ADNADRP)(SERVER=DEDICATED)))';

show configuration
validate database 'adnadrp'
validate database 'adnaprd'

create table heartbeat (ts TIMESTAMP);
insert into heartbeat values (CURRENT_TIMESTAMP);
commit;

CREATE OR REPLACE PROCEDURE update_heartbeat AS
BEGIN
  UPDATE heartbeat
  SET ts = SYSTIMESTAMP;
  COMMIT;
END;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'HEARTBEAT_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'UPDATE_HEARTBEAT',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=SECONDLY; INTERVAL=1',
    enabled         => FALSE
  );
END;
/

BEGIN
  DBMS_SCHEDULER.ENABLE('HEARTBEAT_JOB');
END;
/

BEGIN
  DBMS_SCHEDULER.DISABLE('HEARTBEAT_JOB');
END;
/

BEGIN
  DBMS_SCHEDULER.DROP_JOB('HEARTBEAT_JOB');
END;
/

drop procedure update_heartbeat;

drop table heartbeat purge;

run
{
set until time "TIMESTAMP'2026-02-21 15:50:00'";
alter pluggable database RYLS close immediate instances=all;
restore pluggable database RYLS;
recover pluggable database RYLS;
alter pluggable database RYLS open resetlogs instances=all;
}

divers/Purines_vs_Omega‑3.md  (new file, 20 lines)

# Cross-ranking: Purines vs Omega‑3

| Food | Purines (mg/100 g) | Purine category | Omega‑3 (g/100 g) | Omega‑3 category | Combined verdict |
|--------------------------|--------------------|-----------------|-------------------|------------------|------------------|
| Chicken (breast/thigh) | 150–175 | Moderate | ~0.05 | Poor | ⚠️ Little nutritional interest: moderate purines, almost no omega‑3 |
| Beef (muscle) | ~120 | Moderate | ~0.04 | Poor | ⚠️ Same, low in omega‑3 |
| Beef liver | ~300 | Very high | ~0.10 | Poor | 🚫 Avoid (very high purines, little omega‑3) |
| Sardine | ~210 | High | ~0.80–0.90 | Medium | ⚖️ Good omega‑3 intake but high purines |
| Anchovy | ~300 | Very high | ~0.90 | Medium | 🚫 Gout risk, despite the omega‑3 |
| Salmon | ~170 | Moderate | ~2.3–2.6 | Rich | ✅ Excellent trade-off (rich in omega‑3, moderate purines) |
| Mackerel | ~145 | Moderate | ~1.4–1.8 | Rich | ✅ Very good trade-off |
| Herring | ~170 | Moderate | ~1.6–2.2 | Rich | ✅ Very good trade-off |
| Trout | ~150 | Moderate | ~1.2–1.3 | Rich | ✅ Good trade-off |
| Tuna (bluefin) | ~150 | Moderate | ~1.6–1.7 | Rich | ✅ Good trade-off |
| Shrimp | ~150 | Moderate | ~0.30 | Medium | ⚖️ Acceptable but not exceptional |
| Crab | ~150 | Moderate | ~0.45 | Medium | ⚖️ Acceptable |
| Lobster / spiny lobster | ~135 | Moderate | ~0.52 | Medium | ⚖️ Acceptable |
| Mussels | ~150 | Moderate | ~0.59–0.85 | Medium | ⚖️ Acceptable |
| Razor clams | ~150 | Moderate | ~0.6 | Medium | ⚖️ Acceptable |
| Scallops | ~150–180 | Moderate | ~0.5–0.6 | Medium | ⚖️ Acceptable |
divers/RAC_19_OEL9_ASMLIB3_setup_01.txt  (new file, 256 lines)

# network setup on each node
nmcli connection show --active

nmcli connection modify enp1s0 ipv4.address 192.168.0.95/24
nmcli connection modify enp1s0 ipv4.method manual ipv6.method ignore
nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1
nmcli connection modify enp1s0 ipv4.dns 192.168.0.8
nmcli connection modify enp1s0 ipv4.dns-search swgalaxy

nmcli connection modify enp2s0 ipv4.address 192.168.1.95/24
nmcli connection modify enp2s0 ipv4.method manual ipv6.method ignore

nmcli connection modify enp10s0 ipv4.address 192.168.2.95/24
nmcli connection modify enp10s0 ipv4.method manual ipv6.method ignore

hostnamectl set-hostname rodia-db03.swgalaxy

# attach disks on each node
virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_01.img vdd --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_02.img vde --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_03.img vdf --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_04.img vdg --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_05.img vdh --driver qemu --subdriver raw --targetbus virtio --persistent

# Grid Infrastructure install plan:
- unzip the distrib in the grid home
- unzip the latest GIRU in a temporary location
- apply the GIRU in silent mode over the base GI distrib

# on each node
##############

mkdir -p /app/oracle
chmod 775 /app/oracle
chown -R oracle:oinstall /app/oracle

cd /app/oracle/
mkdir -p admin base grid oraInventory rdbms staging_area
chmod 775 admin base grid oraInventory rdbms staging_area

chown -R oracle:oinstall admin rdbms staging_area
chown -R grid:oinstall grid oraInventory base

su - grid
mkdir -p /app/oracle/grid/product/19

# on first node
###############

# unzip distrib
cd /app/oracle/grid/product/19
unzip /mnt/yavin4/kit/Oracle/Grid_Infra/19/distrib/LINUX.X64_193000_grid_home.zip

# update OPatch
rm -rf OPatch
unzip /mnt/yavin4/kit/Oracle/opatch/p6880880_190000_Linux-x86-64.zip

cd /app/oracle/staging_area/
unzip /mnt/yavin4/kit/Oracle/Grid_Infra/19/GIRU/GIRU_19.27/p37641958_190000_Linux-x86-64.zip

# apply the RU on this ORACLE_HOME
# on first node, as grid

export ORACLE_BASE=/app/oracle/base
export ORACLE_HOME=/app/oracle/grid/product/19
export PATH=$ORACLE_HOME/bin:$PATH

$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/36758186
$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37642901
$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37643161
$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37654975
$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37762426

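The five `-applyRU` invocations differ only in the sub-patch directory; a loop over the patch IDs keeps the list in one place. A sketch, shown as a dry run that only prints the commands:

```shell
#!/usr/bin/env bash
# Print the gridSetup -applyRU commands for each sub-patch of the GIRU.
# Drop the `echo` to actually run them, one at a time, as the grid user.
ORACLE_HOME=/app/oracle/grid/product/19
STAGING=/app/oracle/staging_area/37641958

for patch in 36758186 37642901 37643161 37654975 37762426; do
    echo "$ORACLE_HOME/gridSetup.sh -silent -applyRU $STAGING/$patch"
done
```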
# to satisfy ALL prerequisites, to do on ALL nodes

dnf install -y $ORACLE_HOME/cv/rpm/cvuqdisk-1.0.10-1.rpm

# in /etc/security/limits.conf

# Oracle
oracle soft stack 10240
grid soft stack 10240
grid soft memlock 3145728
grid hard memlock 3145728

# in /etc/sysctl.conf

# other oracle settings
kernel.panic = 1

# temporary SWAP
dd if=/dev/zero of=/mnt/unprotected/tmp/oracle/swap_20g.img bs=1G count=20
mkswap /mnt/unprotected/tmp/oracle/swap_20g.img
swapon /mnt/unprotected/tmp/oracle/swap_20g.img
free -h

##############

# pre-check as grid
export ORACLE_BASE=/app/oracle/base
export ORACLE_HOME=/app/oracle/grid/product/19
export PATH=$ORACLE_HOME/bin:$PATH

$ORACLE_HOME/runcluvfy.sh stage -pre crsinst -n ylesia-db01,ylesia-db02,ylesia-db03

# ASM disks
lsblk --list | egrep "vdd|vde|vdf|vdg|vdh"
ls -ltr /dev/vd[d-h]

# partition each of the five disks with fdisk

lsblk --list | egrep "vdd|vde|vdf|vdg|vdh"
ls -ltr /dev/vd[d-h]1

# install asmlib on all nodes
dnf install -y oracleasm-support-3.1.0-10.el9.x86_64.rpm
dnf install -y oracleasmlib-3.1.0-6.el9.x86_64.rpm

systemctl start oracleasm.service

oracleasm configure -i

# (answers: grid, asmdba and all defaults)

echo "kernel.io_uring_disabled = 0" >> /etc/sysctl.conf
sysctl -p

# create ASM disks on first node
oracleasm createdisk DATA_01 /dev/vdd1
oracleasm createdisk DATA_02 /dev/vde1
oracleasm createdisk DATA_03 /dev/vdf1
oracleasm createdisk RECO_01 /dev/vdg1
oracleasm createdisk RECO_02 /dev/vdh1

oracleasm scandisks
oracleasm listdisks

# on the other nodes, only scan and list the ASM disks

# on first node, grid setup
$ORACLE_HOME/gridSetup.sh

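The disk-name to device mapping above can be declared in one table; a bash sketch that prints the `oracleasm createdisk` calls as a dry run (names and devices copied from the list above, the array is mine):

```shell
#!/usr/bin/env bash
# One place to declare the ASM disk layout; prints the createdisk commands.
declare -A asm_disks=(
    [DATA_01]=/dev/vdd1
    [DATA_02]=/dev/vde1
    [DATA_03]=/dev/vdf1
    [RECO_01]=/dev/vdg1
    [RECO_02]=/dev/vdh1
)

for name in "${!asm_disks[@]}"; do
    echo "oracleasm createdisk $name ${asm_disks[$name]}"
done | sort
```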
# RDBMS install
###############

# unzip distrib
mkdir -p /app/oracle/rdbms/product/19
cd /app/oracle/rdbms/product/19
unzip /mnt/yavin4/kit/Oracle/Oracle_Database_19/distrib/LINUX.X64_193000_db_home.zip

# update OPatch
rm -rf OPatch
unzip /mnt/yavin4/kit/Oracle/opatch/p6880880_190000_Linux-x86-64.zip

# apply the RU on this ORACLE_HOME
# on first node, as oracle

export ORACLE_BASE=/app/oracle/base
export ORACLE_HOME=/app/oracle/rdbms/product/19
export PATH=$ORACLE_HOME/bin:$PATH

$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/36758186
$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37642901
$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37643161
$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37654975
$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37762426

# install from an X session
$ORACLE_HOME/runInstaller

# on all nodes
chmod -R 775 /app/oracle/base/admin /app/oracle/base/diag

cat <<EOF! >> /etc/oratab
SET19:/app/oracle/rdbms/product/19:N
EOF!

# using DBCA to create/delete a database

export ORACLE_DB_NAME=AERON
export ORACLE_UNQNAME=AERONPRD
export PDB_NAME=REEK
export NODE1=ylesia-db01
export NODE2=ylesia-db02
export NODE3=ylesia-db03
export SYS_PASSWORD="Secret00!"
export PDB_PASSWORD="Secret00!"

# create MULTITENANT database
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -sid ${ORACLE_UNQNAME} \
  -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \
  -characterSet AL32UTF8 \
  -sysPassword ${SYS_PASSWORD} \
  -systemPassword ${SYS_PASSWORD} \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 \
  -pdbName ${PDB_NAME} \
  -pdbAdminPassword ${PDB_PASSWORD} \
  -databaseType MULTIPURPOSE \
  -automaticMemoryManagement false \
  -totalMemory 3072 \
  -redoLogFileSize 128 \
  -emConfiguration NONE \
  -ignorePreReqs \
  -nodelist ${NODE1},${NODE2},${NODE3} \
  -storageType ASM \
  -diskGroupName +DATA \
  -recoveryGroupName +RECO \
  -useOMF true \
  -initparams db_name=${ORACLE_DB_NAME},db_unique_name=${ORACLE_UNQNAME},sga_max_size=3G,sga_target=3G,pga_aggregate_target=512M \
  -enableArchive true \
  -recoveryAreaDestination +RECO \
  -recoveryAreaSize 30720 \
  -asmsnmpPassword ${SYS_PASSWORD}

# create NON-CDB database
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -sid ${ORACLE_UNQNAME} \
  -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \
  -characterSet AL32UTF8 \
  -sysPassword ${SYS_PASSWORD} \
  -systemPassword ${SYS_PASSWORD} \
  -createAsContainerDatabase false \
  -databaseType MULTIPURPOSE \
  -automaticMemoryManagement false \
  -totalMemory 3072 \
  -redoLogFileSize 128 \
  -emConfiguration NONE \
  -ignorePreReqs \
  -nodelist ${NODE1},${NODE2},${NODE3} \
  -storageType ASM \
  -diskGroupName +DATA \
  -recoveryGroupName +RECO \
  -useOMF true \
  -initparams db_name=${ORACLE_DB_NAME},db_unique_name=${ORACLE_UNQNAME},sga_max_size=3G,sga_target=3G,pga_aggregate_target=512M \
  -enableArchive true \
  -recoveryAreaDestination +RECO \
  -recoveryAreaSize 30720 \
  -asmsnmpPassword ${SYS_PASSWORD}

# delete database
dbca -silent -deleteDatabase \
  -sourceDB AERONPRD \
  -sysPassword ${SYS_PASSWORD} \
  -forceArchiveLogDeletion

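The three `NODE*` variables above only exist to build dbca's comma-separated `-nodelist`; with a bash array the list falls out of one join helper. A sketch (the `join_csv` helper name is mine):

```shell
#!/usr/bin/env bash
# Build the comma-separated -nodelist argument from an array of node names.
nodes=(ylesia-db01 ylesia-db02 ylesia-db03)

join_csv() {
    local IFS=','   # "$*" joins its arguments with the first character of IFS
    echo "$*"
}

NODELIST=$(join_csv "${nodes[@]}")
echo "$NODELIST"    # ylesia-db01,ylesia-db02,ylesia-db03
```

Pass `-nodelist ${NODELIST}` to dbca; adding or removing a node then touches only the array.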
divers/SuSE_install_01.txt  (new file, 86 lines)

#############
# VM creation
#############

mkdir /vm/ssd0/aquaris

qemu-img create -f raw /vm/ssd0/aquaris/root_01.img 64G

virt-install \
  --graphics vnc,password=secret,listen=0.0.0.0 \
  --name=aquaris \
  --vcpus=4 \
  --memory=4096 \
  --network bridge=br0 \
  --network bridge=br0 \
  --cdrom=/vm/hdd0/_kit_/openSUSE-Leap-15.5-NET-x86_64-Build491.1-Media.iso \
  --disk /vm/ssd0/aquaris/root_01.img \
  --os-variant=opensuse15.4

####################
# SuSE configuration
####################

# network interfaces
/etc/sysconfig/network/ifcfg-eth0
/etc/sysconfig/network/ifcfg-eth1

# DNS
/run/netconfig/resolv.conf
# set NETCONFIG_DNS_POLICY="auto" in /etc/sysconfig/network/config

# gateway
/etc/sysconfig/network/routes

# delete unwanted static entries in /etc/hosts

##############
# VM customize
##############

qemu-img create -f raw /vm/ssd0/aquaris/app_01.img 60G
dd if=/dev/zero of=/vm/ssd0/aquaris/data_01.img bs=1G count=30
dd if=/dev/zero of=/vm/ssd0/aquaris/backup_01.img bs=1G count=20

virsh domblklist aquaris --details

virsh attach-disk aquaris /vm/ssd0/aquaris/app_01.img vdb --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk aquaris /vm/ssd0/aquaris/data_01.img vdc --driver qemu --subdriver raw --targetbus virtio --persistent
virsh attach-disk aquaris /vm/ssd0/aquaris/backup_01.img vdd --driver qemu --subdriver raw --targetbus virtio --persistent

btrfs device scan
btrfs filesystem show

mkfs.btrfs /dev/vdb
mkfs.btrfs /dev/vdc
mkfs.btrfs /dev/vdd

# create mount points
mkdir /app /data /backup

# add in /etc/fstab
UUID=fe1756c7-a062-40ed-921a-9fb1c12d8d51 /app btrfs defaults 0 0
UUID=3b147a0d-ca13-46f5-aa75-72f5a2b9fd4c /data btrfs defaults 0 0
UUID=d769e88b-5ec4-4e0a-93cd-1f2a9deecc8b /backup btrfs defaults 0 0

# mount all
mount -a

btrfs subvolume create /backup/current
mkdir /backup/.snapshots

btrfs subvolume snapshot /backup/current /backup/.snapshots/01
btrfs subvolume snapshot /backup/current /backup/.snapshots/02

btrfs subvolume list /backup/current

btrfs subvolume show /backup/.snapshots/01
btrfs subvolume show /backup/.snapshots/02

tree -a /backup

btrfs subvolume delete /backup/.snapshots/01
btrfs subvolume delete /backup/.snapshots/02
btrfs subvolume delete /backup/current

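Numbered snapshots like `01`/`02` above roll over quickly; timestamped names sort chronologically for free. A sketch (the naming scheme is mine; shown as a dry run that only prints the command):

```shell
#!/usr/bin/env bash
# Generate a timestamped snapshot name and print the btrfs command for it.
snap_name() {
    date +%Y%m%d-%H%M%S
}

NAME=$(snap_name)
echo "btrfs subvolume snapshot /backup/current /backup/.snapshots/$NAME"
```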
divers/TLS_connection_01.txt  (new file, 222 lines)

# https://wadhahdaouehi.tn/2023/05/oracle-database-server-client-certificate-tcps-oracle-19c/

#############
# Server side
#############

# Create a new auto-login wallet
export WALLET_DIRECTORY=/home/oracle/poc_tls/wallet
export WALLET_PASSWORD="VaeVictis00!"

orapki wallet create -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -auto_login_local

# Create a self-signed certificate and load it into the wallet
export CERT_VALIDITY_DAYS=3650

orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN=`hostname`" -keysize 2048 -self_signed -validity ${CERT_VALIDITY_DAYS}

# Check the contents of the wallet
orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD}

# Note: the self-signed certificate is both a user and a trusted certificate

# Export the certificate to load it into the client wallet later
export CERT_EXPORT_PATH=/home/oracle/poc_tls/export
orapki wallet export -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN=`hostname`" -cert ${CERT_EXPORT_PATH}/`hostname`-certificate.crt

#############
# Client side
#############

# Create a new auto-login wallet
export WALLET_DIRECTORY=/mnt/yavin4/tmp/00000/wayland/wallet
export WALLET_PASSWORD="AdVictoriam00!"

orapki wallet create -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -auto_login_local

# Create a self-signed certificate and load it into the wallet
export CERT_VALIDITY_DAYS=3650

orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN=`hostname`" -keysize 2048 -self_signed -validity ${CERT_VALIDITY_DAYS}

# Check the contents of the wallet
orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD}

# Note: the self-signed certificate is both a user and a trusted certificate

# Export the certificate to load it into the client wallet later
export CERT_EXPORT_PATH="/mnt/yavin4/tmp/00000/wayland/cert_expo"
orapki wallet export -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN=`hostname`" -cert ${CERT_EXPORT_PATH}/`hostname`-certificate.crt

######################
# Certificate exchange
######################

# Note: server and client must trust each other

# Load the client certificate into the server wallet
export WALLET_DIRECTORY=/mnt/yavin4/tmp/00000/bakura/wallet
export WALLET_PASSWORD="VaeVictis00!"
export CERT_EXPORT_FILE="/mnt/yavin4/tmp/00000/wayland/cert_expo/wayland.swgalaxy-certificate.crt"

orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -trusted_cert -cert ${CERT_EXPORT_FILE}
# Check the contents of the wallet
orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD}

# Load the server certificate into the client wallet
export WALLET_DIRECTORY=/mnt/yavin4/tmp/00000/wayland/wallet
export WALLET_PASSWORD="AdVictoriam00!"
export CERT_EXPORT_FILE="/mnt/yavin4/tmp/00000/bakura/cert_expo/bakura.swgalaxy-certificate.crt"

orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -trusted_cert -cert ${CERT_EXPORT_FILE}
# Check the contents of the wallet
orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD}

################
# Listener setup
################

# Note: I didn't succeed in setting up the LISTENER using a custom TNS_ADMIN or an /etc/listener.ora file

rm -rf /etc/listener.ora
rm -rf /etc/tnsnames.ora

# I'm using a read-only ORACLE_HOME
cat $(orabasehome)/network/admin/sqlnet.ora

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /mnt/yavin4/tmp/00000/bakura/wallet)
    )
  )

SQLNET.AUTHENTICATION_SERVICES = (TCPS,BEQ,NTS)
SSL_CLIENT_AUTHENTICATION = FALSE

cat $(orabasehome)/network/admin/listener.ora

SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /mnt/yavin4/tmp/00000/bakura/wallet)
    )
  )

LISTENER_DEMO =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = bakura.swgalaxy)(PORT = 1600))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = bakura.swgalaxy)(PORT = 1700))
    )
  )

# start the specific listener
lsnrctl start LISTENER_DEMO

# register the database in the listener; note that the TCPS address was not required
alter system set local_listener='(DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = bakura.swgalaxy)(PORT = 1600)) ) )' scope=both sid='*';
alter system register;

# Note: the TCPS address is not explicitly specified, but TCPS connections still work

##############
# Client setup
##############

# Note: on the client side, a custom TNS_ADMIN worked

export TNS_ADMIN=/mnt/yavin4/tmp/00000/wayland/tns_admin

cd $TNS_ADMIN

cat sqlnet.ora

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /mnt/yavin4/tmp/00000/wayland/wallet)
    )
  )

SQLNET.AUTHENTICATION_SERVICES = (TCPS,BEQ,NTS)
SSL_CLIENT_AUTHENTICATION = FALSE

cat tnsnames.ora

HUTTPRD_tcp =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = bakura.swgalaxy)(PORT = 1600))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HUTTPRD)
    )
  )

HUTTPRD_tcps =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = bakura.swgalaxy)(PORT = 1700))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HUTTPRD)
    )
  )

# JABBA is a PDB inside HUTTPRD
JABBA_tcps =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = bakura.swgalaxy)(PORT = 1700))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JABBA)
    )
  )

# check connections
connect c##globaldba/"secret"@HUTTPRD_tcp
connect c##globaldba/"secret"@HUTTPRD_tcps
connect c##globaldba/"secret"@JABBA_tcps

# check the connection protocol: tcp/tcps
select SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') from dual;
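What `orapki ... -self_signed` builds can be illustrated with plain OpenSSL (assuming the `openssl` CLI is available; the key size and validity match the wallet commands above, everything else here is a local throwaway):

```shell
#!/usr/bin/env bash
# Generate a throwaway self-signed key + certificate with a CN,
# then read the subject and expiry back, mirroring `orapki wallet display`.
set -e
workdir=$(mktemp -d)

openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=$(hostname)" \
    -keyout "$workdir/server.key" -out "$workdir/server.crt" 2>/dev/null

openssl x509 -in "$workdir/server.crt" -noout -subject -enddate

# Probing the live TCPS listener above would look like:
#   openssl s_client -connect bakura.swgalaxy:1700 </dev/null
rm -rf "$workdir"
```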
93
divers/ash_plsql_01.txt
Normal file
93
divers/ash_plsql_01.txt
Normal file
@@ -0,0 +1,93 @@
connect user1/secret@//bakura.swgalaxy:1521/WOMBAT

create table tpl1 as select * from dba_extents;
create table tpl2 as (select * from tpl1 union all select * from tpl1);
create table tpl3 as (select * from tpl2 union all select * from tpl2);

select /* MYQ1 */
  count(*)
from
  tpl1
  join tpl2 on tpl1.bytes=tpl2.bytes
  join tpl3 on tpl1.segment_name=tpl3.segment_name
/

--------------------------------------------------------
-- DDL for Package PACKAGE1
--------------------------------------------------------

CREATE OR REPLACE EDITIONABLE PACKAGE "USER1"."PACKAGE1" AS

  PROCEDURE PROC1;
  PROCEDURE PROC2;
  PROCEDURE PROC3;

END PACKAGE1;
/

--------------------------------------------------------
-- DDL for Package Body PACKAGE1
--------------------------------------------------------

CREATE OR REPLACE EDITIONABLE PACKAGE BODY "USER1"."PACKAGE1" AS

  PROCEDURE proc1 AS
    rr NUMBER;
  BEGIN
    SELECT /* MYQ2 */
      COUNT(*)
    INTO rr
    FROM
      tpl1
      JOIN tpl2 ON tpl1.bytes = tpl2.bytes
      JOIN tpl3 ON tpl1.segment_name = tpl3.segment_name;
  END;

  PROCEDURE proc2 AS
    z NUMBER;
  BEGIN
    SELECT /* MYQ3 */
      COUNT(*)
    INTO z
    FROM
      tpl1
      JOIN tpl2 ON tpl1.bytes = tpl2.bytes
      JOIN tpl3 ON tpl1.segment_name = tpl3.segment_name;
  END;

  PROCEDURE proc3 AS
    v NUMBER;
  BEGIN
    SELECT /* MYQ4 */
      COUNT(*)
    INTO v
    FROM
      tpl1
      JOIN tpl2 ON tpl1.bytes = tpl2.bytes
      JOIN tpl3 ON tpl1.segment_name = tpl3.segment_name;
  END;

END package1;
/

SQL> @ash/ashtop sql_id,TOP_LEVEL_SQL_ID,PLSQL_ENTRY_OBJECT_ID,PLSQL_ENTRY_SUBPROGRAM_ID "username='USER1'" sysdate-1/24 sysdate

    Total                                                                                                                           Distinct   Distinct
  Seconds     AAS %This   SQL_ID        TOP_LEVEL_SQL PLSQL_ENTRY_OBJECT_ID PLSQL_ENTRY_SUBPROGRAM_ID FIRST_SEEN          LAST_SEEN           Execs Seen Tstamps
--------- ------- ------- ------------- ------------- --------------------- ------------------------- ------------------- ------------------- ---------- --------
      105      .0   41% | a0dhc0nj62mk1 8ybf2rvtac57c                 33008                         3 2023-07-19 20:45:23 2023-07-19 20:47:07          1      105
      104      .0   41% | a0dhc0nj62mk1 25ju18ztqn751                 33008                         1 2023-07-19 20:34:23 2023-07-19 20:36:06          1      104
       42      .0   16% | a0dhc0nj62mk1 cum98j5xfkk62                 33008                         2 2023-07-19 20:44:37 2023-07-19 20:45:18          1       42
8
divers/certbot_renew_01.txt
Normal file
@@ -0,0 +1,8 @@
certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/memo.dbaoracle.fr -d memo.dbaoracle.fr
certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/support.dbaoracle.fr -d support.dbaoracle.fr
certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/public.dbaoracle.fr -d public.dbaoracle.fr

certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/sabnzbd.dbaoracle.fr -d sabnzbd.dbaoracle.fr
certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/lidarr.dbaoracle.fr -d lidarr.dbaoracle.fr
certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/sonarr.dbaoracle.fr -d sonarr.dbaoracle.fr
certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/radarr.dbaoracle.fr -d radarr.dbaoracle.fr
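# The per-domain invocations differ only in the subdomain; a small loop can
# generate them. A sketch: the subdomain list and webroot base are taken from
# the commands above; it only prints the commands (dry run), it does not run certbot.

```shell
base=/app/persistent_docker/nginx/www
cmds=""
for sub in memo support public sabnzbd lidarr sonarr radarr; do
  d="${sub}.dbaoracle.fr"
  # same shape as the hand-written commands above
  cmds="${cmds}certbot certonly --webroot --webroot-path ${base}/${d} -d ${d}
"
done
printf '%s' "$cmds"
```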
88
divers/clone_oracle_home_golden_image_01.txt
Normal file
@@ -0,0 +1,88 @@
-- https://rene-ace.com/how-to-clone-an-oracle-home-in-19c/
-----------------------------------------------------------

cd $ORACLE_HOME/rdbms/lib/
cat config.c | grep define

---------------------------->
#define SS_DBA_GRP "dba"
#define SS_OPER_GRP "oper"
#define SS_ASM_GRP ""
#define SS_BKP_GRP "backupdba"
#define SS_DGD_GRP "dgdba"
#define SS_KMT_GRP "kmdba"
#define SS_RAC_GRP "racdba"
<----------------------------

$ORACLE_HOME/runInstaller -silent -createGoldImage -destinationLocation /app/oracle/staging_area

cd /app/oracle/staging_area
unzip -v db_home_2023-08-16_02-20-39PM.zip

mkdir -p /app/oracle/product/19.20
cd /app/oracle/product/19.20
unzip /app/oracle/staging_area/db_home_2023-08-16_02-20-39PM.zip

unset ORACLE_HOME ORACLE_SID ORACLE_RSID ORACLE_UNQNAME ORACLE_BASE

export ORACLE_HOME=/app/oracle/product/19.20
export ORACLE_HOSTNAME=togoria
export ORA_INVENTORY=/app/oracle/oraInventory
export NODE1_HOSTNAME=togoria
# export NODE2_HOSTNAME=reneace02
export ORACLE_BASE=/app/oracle/base

# current
# required only if OS is OEL8
export CV_ASSUME_DISTID=OEL7.8

${ORACLE_HOME}/runInstaller -ignorePrereq -waitforcompletion -silent \
    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
    oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
    UNIX_GROUP_NAME=oinstall \
    INVENTORY_LOCATION=${ORA_INVENTORY} \
    ORACLE_HOME=${ORACLE_HOME} \
    ORACLE_BASE=${ORACLE_BASE} \
    oracle.install.db.OSDBA_GROUP=dba \
    oracle.install.db.OSOPER_GROUP=oper \
    oracle.install.db.OSBACKUPDBA_GROUP=backupdba \
    oracle.install.db.OSDGDBA_GROUP=dgdba \
    oracle.install.db.OSKMDBA_GROUP=kmdba \
    oracle.install.db.OSRACDBA_GROUP=racdba

# original
${ORACLE_HOME}/runInstaller -ignorePrereq -waitforcompletion -silent \
    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
    oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
    UNIX_GROUP_NAME=oinstall \
    INVENTORY_LOCATION=${ORA_INVENTORY} \
    SELECTED_LANGUAGES=en \
    ORACLE_HOME=${ORACLE_HOME} \
    ORACLE_BASE=${ORACLE_BASE} \
    oracle.install.db.InstallEdition=EE \
    oracle.install.db.OSDBA_GROUP=dba \
    oracle.install.db.OSOPER_GROUP=dba \
    oracle.install.db.OSBACKUPDBA_GROUP=dba \
    oracle.install.db.OSDGDBA_GROUP=dba \
    oracle.install.db.OSKMDBA_GROUP=dba \
    oracle.install.db.OSRACDBA_GROUP=dba \
    oracle.install.db.CLUSTER_NODES=${NODE1_HOSTNAME},${NODE2_HOSTNAME} \
    oracle.install.db.isRACOneInstall=false \
    oracle.install.db.rac.serverpoolCardinality=0 \
    oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
    oracle.install.db.ConfigureAsContainerDB=false \
    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
    DECLINE_SECURITY_UPDATES=true

# check ORACLE homes in inventory
cat /app/oracle/oraInventory/ContentsXML/inventory.xml | grep "HOME NAME"
115
divers/dataguard_21_RAC_01.txt
Normal file
@@ -0,0 +1,115 @@
rman target /

run
{
set nocfau;
allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck';
allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck';
allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck';
allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck';
backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input;
release channel ch01;
release channel ch02;
release channel ch03;
release channel ch04;
allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.controlfile';
backup current controlfile;
release channel ch01;
}

srvctl add database -d HUTTPRD -o /app/oracle/product/19 -p '+DATA/HUTTPRD/spfile.ora'

# create passwordfile on ASM; if the DB is not yet registered on CRS, you will get a WARNING
orapwd FILE='+DATA/HUTTPRD/orapwHUTTPRD' ENTRIES=10 DBUNIQUENAME='HUTTPRD' password="Secret00!"
srvctl modify database -d HUTTPRD -pwfile '+DATA/HUTTPRD/orapwHUTTPRD'

srvctl add instance -d HUTTPRD -i HUTTPRD1 -n ylesia-db01
srvctl add instance -d HUTTPRD -i HUTTPRD2 -n ylesia-db02

alias HUTTPRD='rlwrap sqlplus sys/"Secret00!"@ylesia-scan/HUTTPRD as sysdba'
alias HUTTDRP='rlwrap sqlplus sys/"Secret00!"@rodia-scan/HUTTDRP as sysdba'

run
{
allocate auxiliary channel aux01 device type disk;
allocate auxiliary channel aux02 device type disk;
allocate auxiliary channel aux03 device type disk;
allocate auxiliary channel aux04 device type disk;
duplicate database 'HUTT' for standby backup location '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/';
}

srvctl add database -d HUTTDRP -o /app/oracle/product/21 -p '+DATA/HUTTDRP/spfile.ora'
srvctl modify database -d HUTTDRP -r physical_standby -n HUTT -s MOUNT

srvctl add instance -d HUTTDRP -i HUTTDRP1 -n rodia-db01
srvctl add instance -d HUTTDRP -i HUTTDRP2 -n rodia-db02

# copy passwordfile from primary to standby
ASMCMD [+DATA/HUTTPRD] > pwcopy +DATA/HUTTPRD/PASSWORD/pwdhuttprd.274.1137773649 /tmp
scp /tmp/pwdhuttprd.274.1137773649 rodia-db02:/tmp
ASMCMD [+DATA/HUTTDRP] > pwcopy /tmp/pwdhuttprd.274.1137773649 +DATA/HUTTDRP/orapwhuttdrp

srvctl modify database -db HUTTDRP -pwfile '+DATA/HUTTDRP/orapwhuttdrp'

alter system set dg_broker_config_file1='+DATA/HUTTPRD/dg_broker_01.dat' scope=both sid='*';
alter system set dg_broker_config_file2='+DATA/HUTTPRD/dg_broker_02.dat' scope=both sid='*';
alter system set dg_broker_start=TRUE scope=both sid='*';

alter system set dg_broker_config_file1='+DATA/HUTTDRP/dg_broker_01.dat' scope=both sid='*';
alter system set dg_broker_config_file2='+DATA/HUTTDRP/dg_broker_02.dat' scope=both sid='*';
alter system set dg_broker_start=TRUE scope=both sid='*';

select GROUP#,THREAD#,MEMBERS,STATUS, BYTES/(1024*1024) Mb from v$log;
select GROUP#,THREAD#,STATUS, BYTES/(1024*1024) Mb from v$standby_log;

set lines 256
col MEMBER for a80
select * from v$logfile;

-- create standby redologs
select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log;
select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log;

-- clear / drop standby redologs
select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log;
select 'ALTER DATABASE DROP STANDBY LOGFILE GROUP '||GROUP#||';' from v$standby_log;

dgmgrl
DGMGRL> connect sys/"Secret00!"@ylesia-scan:1521/HUTTPRD
DGMGRL> create configuration HUTT as primary database is HUTTPRD connect identifier is ylesia-scan:1521/HUTTPRD;
DGMGRL> add database HUTTDRP as connect identifier is rodia-scan:1521/HUTTDRP;

DGMGRL> enable configuration;
DGMGRL> show configuration;

DGMGRL> edit database 'huttdrp' set property ArchiveLagTarget=0;
DGMGRL> edit database 'huttdrp' set property LogArchiveMaxProcesses=2;
DGMGRL> edit database 'huttdrp' set property LogArchiveMinSucceedDest=1;
DGMGRL> edit database 'huttdrp' set property StandbyFileManagement='AUTO';

DGMGRL> edit database 'huttprd' set property ArchiveLagTarget=0;
DGMGRL> edit database 'huttprd' set property LogArchiveMaxProcesses=2;
DGMGRL> edit database 'huttprd' set property LogArchiveMinSucceedDest=1;
DGMGRL> edit database 'huttprd' set property StandbyFileManagement='AUTO';

DGMGRL> show configuration;

RMAN> configure archivelog deletion policy to applied on all standby;

# if incremental recover from source is required
RMAN> recover database from service 'ylesia-scan/HUTTPRD' using compressed backupset section size 2G;
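# The ADD STANDBY LOGFILE statements are built by string concatenation over
# v$log rows. The same construction can be sketched outside the database;
# the thread:bytes pairs below are made-up example values, not real v$log rows.

```shell
ddl=""
for row in 1:1073741824 2:1073741824; do
  thread=${row%%:*}   # part before the colon
  bytes=${row##*:}    # part after the colon
  ddl="${ddl}ALTER DATABASE ADD STANDBY LOGFILE THREAD ${thread} size ${bytes};
"
done
printf '%s' "$ddl"
```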
162
divers/dataguard_cascade_routes_01.txt
Normal file
@@ -0,0 +1,162 @@
Primary: ylesia-scan:1521/HUTTPRD
Dataguard: rodia-scan:1521/HUTTDRP
Cascade 1: kamino:1521/HUTTCA1
Far sync: mandalore:1521/HUTTFAR
Remote dataguard: taris:1521/HUTTREM

alias HUTTPRD='rlwrap sqlplus sys/"Secret00!"@ylesia-scan:1521/HUTTPRD as sysdba'
alias HUTTPRD1='rlwrap sqlplus sys/"Secret00!"@ylesia-db01-vip:1521/HUTTPRD as sysdba'
alias HUTTPRD2='rlwrap sqlplus sys/"Secret00!"@ylesia-db02-vip:1521/HUTTPRD as sysdba'
alias HUTTDRP='rlwrap sqlplus sys/"Secret00!"@rodia-scan:1521/HUTTDRP as sysdba'
alias HUTTCA1='rlwrap sqlplus sys/"Secret00!"@kamino:1521/HUTTCA1 as sysdba'
alias HUTTFAR='rlwrap sqlplus sys/"Secret00!"@mandalore:1521/HUTTFAR as sysdba'
alias HUTTREM='rlwrap sqlplus sys/"Secret00!"@taris:1521/HUTTREM as sysdba'

run
{
allocate auxiliary channel aux01 device type disk;
allocate auxiliary channel aux02 device type disk;
allocate auxiliary channel aux03 device type disk;
allocate auxiliary channel aux04 device type disk;
duplicate database 'HUTT' for standby backup location '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/';
}

run
{
allocate channel pri01 device type disk;
allocate channel pri02 device type disk;
allocate channel pri03 device type disk;
allocate channel pri04 device type disk;
recover database from service 'ylesia-scan:1521/HUTTPRD' using compressed backupset section size 1G;
}

alter database create standby controlfile as '/mnt/yavin4/tmp/00000/HUTTPRD1.stdby';
alter database create far sync instance controlfile as '/mnt/yavin4/tmp/00000/HUTTPRD1.far';

dgmgrl
DGMGRL> connect sys/"Secret00!"@ylesia-scan:1521/HUTTPRD
DGMGRL> add database HUTTCA1 as connect identifier is kamino:1521/HUTTCA1;
DGMGRL> add database HUTTREM as connect identifier is taris:1521/HUTTREM;
DGMGRL> add far_sync HUTTFAR as connect identifier is mandalore:1521/HUTTFAR;

DGMGRL> show database 'huttprd' redoroutes;
DGMGRL> show database 'huttdrp' redoroutes;

# routes config ###########################################################################

# without FAR SYNC: the main dataguard relays redo to the cascade
DGMGRL> edit database huttprd set property redoroutes = '(local:huttdrp)(huttdrp:huttca1)';
DGMGRL> edit database huttdrp set property redoroutes = '(huttprd:huttca1)(local:huttprd)';

# FAR SYNC built but not activated: the main dataguard relays redo to the cascade and to the remote dataguard
DGMGRL> edit database huttprd set property redoroutes = '(local:huttdrp)(huttdrp:huttca1)';
DGMGRL> edit database huttdrp set property redoroutes = '(huttprd:huttca1,huttrem)(local:huttprd)';

# FAR SYNC activated: the main dataguard relays redo to the cascade and FAR SYNC relays redo to the remote dataguard
DGMGRL> edit database huttprd set property redoroutes = '(local:huttdrp,huttfar SYNC)(huttdrp:huttca1 ASYNC)';
DGMGRL> edit database huttdrp set property redoroutes = '(huttprd:huttca1 ASYNC)(local:huttprd,huttfar SYNC)';
DGMGRL> edit far_sync huttfar set property redoroutes = '(huttprd:huttrem ASYNC)(huttdrp:huttrem ASYNC)';

# #########################################################################################

DGMGRL> edit database huttprd set property StandbyFileManagement='AUTO';
DGMGRL> edit database huttdrp set property StandbyFileManagement='AUTO';
DGMGRL> edit database huttca1 set property StandbyFileManagement='AUTO';
DGMGRL> edit database huttrem set property StandbyFileManagement='AUTO';
DGMGRL> edit far_sync huttfar set property StandbyFileManagement='AUTO';

# unless the configuration protection mode is set to MaxAvailability, the cascade standby redologs were not used and the broker showed warnings
# after setting MaxAvailability, switching back to MaxPerformance did not affect the situation: the cascade standby still uses
# its standby redologs and the broker status no longer displays warnings

DGMGRL> edit configuration set protection mode as MaxAvailability;
DGMGRL> edit configuration set protection mode as MaxPerformance;

# not sure this helps with:
# ORA-16853: apply lag has exceeded specified threshold
# ORA-16855: transport lag has exceeded specified threshold

DGMGRL> edit database huttprd set property TransportDisconnectedThreshold=0;
DGMGRL> edit database huttdrp set property TransportDisconnectedThreshold=0;
DGMGRL> edit database huttca1 set property TransportDisconnectedThreshold=0;

DGMGRL> edit database huttprd set property ApplyLagThreshold=0;
DGMGRL> edit database huttdrp set property ApplyLagThreshold=0;
DGMGRL> edit database huttca1 set property ApplyLagThreshold=0;

# otherwise, to reset:

DGMGRL> edit database huttprd reset property ApplyLagThreshold;
DGMGRL> edit database huttdrp reset property ApplyLagThreshold;
DGMGRL> edit database huttca1 reset property ApplyLagThreshold;

DGMGRL> edit database huttprd reset property TransportDisconnectedThreshold;
DGMGRL> edit database huttdrp reset property TransportDisconnectedThreshold;
DGMGRL> edit database huttca1 reset property TransportDisconnectedThreshold;

DGMGRL> enable database huttca1;
DGMGRL> edit database huttca1 set state='APPLY-OFF';
DGMGRL> edit database huttca1 set state='ONLINE';

-- create standby redologs
select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log union all
select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log;

-- clear / drop standby redologs
select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log;
select 'ALTER DATABASE DROP STANDBY LOGFILE GROUP '||GROUP#||';' from v$standby_log;

alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
set lines 200

-- on PRIMARY database
----------------------
select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log group by THREAD#;

-- on STANDBY database
----------------------
select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log
where APPLIED='YES' group by THREAD#;

set lines 155 pages 9999
col thread# for 9999990
col sequence# for 999999990
col grp for 990
col fnm for a50 head "File Name"
col "First SCN Number" for 999999999999990
break on thread

select
 a.thread#
,a.sequence#
,a.group# grp
,a.bytes/1024/1024 Size_MB
,a.status
,a.archived
,a.first_change# "First SCN Number"
,to_char(FIRST_TIME,'YYYY-MM-DD HH24:MI:SS') "First SCN Time"
,to_char(LAST_TIME,'YYYY-MM-DD HH24:MI:SS') "Last SCN Time"
from
 gv$standby_log a order by 1,2,3,4
/

# https://www.dba-scripts.com/articles/dataguard-standby/data-guard-far-sync/

edit database huttdrp set property redoroutes = '(huttprd:huttca1)(huttprd:huttrem)(local:huttprd)';
enable database huttrem;

create pluggable database JABBA admin user admin identified by "Secret00!";
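# A RedoRoutes value is a sequence of (source:destination[,destination ...])
# rules. A quick sketch to split one of the route strings used above into its
# individual rules, just to make the syntax visible:

```shell
routes='(local:huttdrp)(huttdrp:huttca1)'
# drop opening parens, turn closing parens into newlines: one rule per line
rules=$(printf '%s' "$routes" | tr -d '(' | tr ')' '\n')
printf '%s\n' "$rules"
```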
11
divers/dg.txt
Normal file
@@ -0,0 +1,11 @@
alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
set lines 200

-- on PRIMARY database
----------------------
select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log group by THREAD#;

-- on STANDBY database
----------------------
select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log
where APPLIED='YES' group by THREAD#;
17
divers/disable_IPV6.md
Normal file
@@ -0,0 +1,17 @@
Create a sysctl config file:
```bash
tee /etc/sysctl.d/99-disable-ipv6.conf >/dev/null <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
```

Apply the settings:
```bash
sudo sysctl --system
```

Verify:
```bash
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```
9
divers/dnsmanager_api_example_01.txt
Normal file
@@ -0,0 +1,9 @@
curl -s https://app.dnsmanager.io/api/v1/user/domains \
  -u "9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy" | jq

curl -s https://app.dnsmanager.io/api/v1/user/domain/151914/records \
  -u "9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy" | jq

curl -s -X PUT -d content="1.1.1.1" https://app.dnsmanager.io/api/v1/user/domain/151914/record/16572810 \
  -u "9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy" | jq
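# The API follows a /user/domain/<domain_id>/record/<record_id> pattern.
# Composing the update call from its parts can be sketched as follows; the
# domain/record IDs come from the calls above, while API_AUTH is a placeholder
# for the real uuid:api_key pair. The command is only printed, not executed.

```shell
API_AUTH="uuid:api_key"   # placeholder, not a real credential
domain_id=151914
record_id=16572810
url="https://app.dnsmanager.io/api/v1/user/domain/${domain_id}/record/${record_id}"
cmd="curl -s -X PUT -d content=1.1.1.1 ${url} -u ${API_AUTH}"
printf '%s\n' "$cmd"
```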
19
divers/import_certificate_RHEL9.md
Normal file
@@ -0,0 +1,19 @@
# How to Import Your Own CA Root on RHEL 9

## Place your CA certificate in the correct directory

```bash
cp /mnt/unprotected/tmp/oracle/swgalaxy_root_ca.cert.pem /etc/pki/ca-trust/source/anchors/
```

## Update the system trust store

```bash
update-ca-trust extract
```

## Verify that your CA is now trusted

```bash
openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt /etc/pki/ca-trust/source/anchors/swgalaxy_root_ca.cert.pem
```
8
divers/issue_after_swap_lv_destroy_01.txt
Normal file
@@ -0,0 +1,8 @@
# after destroying a SWAP LV to create a new one, a stale reference remains in /etc/default/grub
# in GRUB_CMDLINE_LINUX

# remove the stale swap reference from GRUB_CMDLINE_LINUX in /etc/default/grub
vi /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

# restart the machine
5
divers/linux_change_machine_id.md
Normal file
@@ -0,0 +1,5 @@
Commands to generate a new machine ID:
```bash
cat /dev/null > /etc/machine-id
systemd-machine-id-setup
```
27
divers/linux_cleanup_boot_partition.txt
Normal file
@@ -0,0 +1,27 @@
Technical Tip: Clean up /boot in CentOS, RHEL or Rocky Linux 8 and up

1) Check the current kernel being used:

sudo uname -sr

2) List all kernels installed on the system:

sudo rpm -q kernel

3) Delete old kernels and only leave <X> number of kernels:

sudo dnf remove --oldinstallonly --setopt installonly_limit=<X> kernel

Note: <X> can be set to 1, 2, 3 or other numeric values. Carefully check the running kernel in step 2 and any other kernels used before running this command. Alternatively, use the following command to delete kernels one by one:

rpm -e <kernel_name>

Kernel names can be obtained through step 2.
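# --setopt installonly_limit=<X> keeps only the X newest installed kernels.
# The selection logic can be sketched on a static version list; the versions
# below are made-up examples, and nothing is actually removed.

```shell
kernels="kernel-4.18.0-425.3.1.el8
kernel-4.18.0-477.10.1.el8
kernel-4.18.0-513.5.1.el8"
# sort by version, keep the 2 newest (what installonly_limit=2 preserves)
keep=$(printf '%s\n' "$kernels" | sort -V | tail -n 2)
oldest=$(printf '%s\n' "$kernels" | sort -V | head -n 1)
printf 'would remove: %s\n' "$oldest"
```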
26
divers/linux_create_swap_partition_01.txt
Normal file
@@ -0,0 +1,26 @@
# create swap partition on /dev/vdb
###################################

# create PV, VG and LV
lsblk
fdisk /dev/vdb        # create the /dev/vdb1 partition
pvs
pvcreate /dev/vdb1
vgcreate vgswap /dev/vdb1
vgs
lvs
lvcreate -n swap -l 100%FREE vgswap
ls /dev/mapper/vgswap-swap

# format LV as swap
mkswap /dev/mapper/vgswap-swap

# add swap entry in /etc/fstab
/dev/mapper/vgswap-swap swap swap defaults 0 0

# activate swap
swapon -va

# check swap
cat /proc/swaps
free -h
6
divers/linux_remove_old_kernel_01.txt
Normal file
@@ -0,0 +1,6 @@
# remove old kernel from /boot
# https://community.fortinet.com/t5/FortiSOAR-Knowledge-Base/Technical-Tip-Clean-up-boot-in-CentOS-RHEL-or-Rocky-Linux-8-and/ta-p/257565

uname -sr
rpm -q kernel
dnf remove --oldinstallonly --setopt installonly_limit=2 kernel
96
divers/my_root_CA_generate_certificate.md
Normal file
@@ -0,0 +1,96 @@
# Issue a Server Certificate

> Based on https://medium.com/@sureshchand.rhce/how-to-build-a-root-ca-intermediate-ca-with-openssl-eba1c73d1591

## Create server key
``` bash
openssl genpkey -algorithm RSA \
  -out exegol.swgalaxy.key.pem \
  -pkeyopt rsa_keygen_bits:2048
```

## Create CSR with SAN

Define a configuration file for the CSR `exegol.swgalaxy.cnf`:
```
[ req ]
distinguished_name = req_distinguished_name
req_extensions     = req_ext
prompt             = no

[ req_distinguished_name ]
C  = FR
ST = Yvelines
L  = Le Vesinet
O  = swgalaxy
OU = swgalaxy servers
CN = exegol.swgalaxy

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = exegol.swgalaxy
DNS.2 = exegol
```

Create the CSR:

``` bash
openssl req -new -key exegol.swgalaxy.key.pem \
  -out exegol.swgalaxy.csr.pem \
  -config exegol.swgalaxy.cnf
```

## Sign with Intermediate CA

Update the `server_cert` extension in the **intermediate CA** configuration file `/app/pki/intermediate/openssl.cnf`:
```
[ server_cert ]
# Basic identity
subjectKeyIdentifier   = hash
authorityKeyIdentifier = keyid,issuer

# Server certificates must NOT be CA certificates
basicConstraints = critical, CA:FALSE

# Key usage: what the certificate is allowed to do
keyUsage = critical, digitalSignature, keyEncipherment

# Extended key usage: define this as a TLS server certificate
extendedKeyUsage = serverAuth

# Allow SANs (modern TLS requires SANs)
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = exegol.swgalaxy
DNS.2 = exegol
```

Sign the certificate with the **intermediate CA**:

``` bash
openssl ca -config /app/pki/intermediate/openssl.cnf \
  -extensions server_cert \
  -days 3650 -notext -md sha256 \
  -in exegol.swgalaxy.csr.pem \
  -out /app/pki/intermediate/certs/exegol.swgalaxy.cert.pem
```

## Verify the chain

``` bash
openssl verify \
  -CAfile /app/pki/intermediate/certs/ca-chain.cert.pem \
  /app/pki/intermediate/certs/exegol.swgalaxy.cert.pem
```

## Verify the certificate

``` bash
openssl x509 -text -noout \
  -in /app/pki/intermediate/certs/exegol.swgalaxy.cert.pem
```
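The `[ alt_names ]` section has to enumerate every DNS name the server answers to. Building it from a name list can be sketched as follows (a sketch only; the names are taken from the CSR configuration above):

``` shell
sans="exegol.swgalaxy exegol"
block="[ alt_names ]"
i=1
for n in $sans; do
  # one numbered DNS.<i> entry per SAN, as openssl expects
  block="${block}
DNS.${i} = ${n}"
  i=$((i+1))
done
printf '%s\n' "$block"
```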
2
divers/oracle_resource_manager_01.txt
Normal file
@@ -0,0 +1,2 @@
# CPU usage limit with resource manager in Oracle
# https://smarttechways.com/2021/05/12/cpu-usage-limit-with-resource-manager-in-oracle/
178
divers/patch_standby_first_01.txt
Normal file
@@ -0,0 +1,178 @@
select force_logging from v$database;

set lines 256 pages 999

col MEMBER for a60
select * from v$logfile;

-- create standby redologs
select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log;
select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log;

-- clear / drop standby redologs
select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log;
select 'ALTER DATABASE DROP STANDBY LOGFILE GROUP '||GROUP#||';' from v$standby_log;

*.audit_file_dest='/app/oracle/base/admin/ANDODRP/adump'
*.audit_trail='OS'
*.compatible='19.0.0.0'
*.control_files='/data/ANDODRP/control01.ctl'
*.db_block_size=8192
*.db_create_file_dest='/data'
*.db_create_online_log_dest_1='/data'
*.db_name='ANDO'
*.db_recovery_file_dest_size=10G
*.db_recovery_file_dest='/reco'
*.db_unique_name='ANDODRP'
*.diagnostic_dest='/app/oracle/base/admin/ANDODRP'
*.enable_goldengate_replication=TRUE
*.enable_pluggable_database=FALSE
*.instance_name='ANDODRP'
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_format='%t_%s_%r.arc'
*.open_cursors=300
*.pga_aggregate_target=512M
*.processes=350
*.remote_login_passwordfile='exclusive'
*.sga_max_size=3G
*.sga_target=3G
*.undo_tablespace='TS_UNDO'

create spfile='/app/oracle/base/admin/ANDODRP/spfile/spfileANDODRP.ora' from pfile='/mnt/yavin4/tmp/_oracle_/tmp/ANDO.txt';

/mnt/yavin4/tmp/_oracle_/tmp/bakura/listener.ora

STATIC =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = bakura)(PORT = 1600))
    )
  )

SID_LIST_STATIC =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ANDODRP_STATIC)
      (SID_NAME = ANDODRP)
      (ORACLE_HOME = /app/oracle/product/19)
    )
  )

export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/tmp/bakura
lsnrctl start STATIC
lsnrctl status STATIC

/mnt/yavin4/tmp/_oracle_/tmp/togoria/listener.ora

STATIC =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = togoria)(PORT = 1600))
    )
  )

SID_LIST_STATIC =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ANDOPRD_STATIC)
      (SID_NAME = ANDOPRD)
      (ORACLE_HOME = /app/oracle/product/19)
    )
  )

export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/tmp/togoria
lsnrctl start STATIC
lsnrctl status STATIC

connect sys/"Secret00!"@//togoria:1600/ANDOPRD_STATIC as sysdba
connect sys/"Secret00!"@//bakura:1600/ANDODRP_STATIC as sysdba

rman target=sys/"Secret00!"@//togoria:1600/ANDOPRD_STATIC auxiliary=sys/"Secret00!"@//bakura:1600/ANDODRP_STATIC
run {
allocate channel pri1 device type DISK;
allocate channel pri2 device type DISK;
allocate channel pri3 device type DISK;
allocate channel pri4 device type DISK;
allocate auxiliary channel aux1 device type DISK;
allocate auxiliary channel aux2 device type DISK;
allocate auxiliary channel aux3 device type DISK;
allocate auxiliary channel aux4 device type DISK;
duplicate target database
  for standby
  dorecover
  from active database
  nofilenamecheck
  using compressed backupset section size 1G;
}

alter system set dg_broker_config_file1='/app/oracle/base/admin/ANDOPRD/divers/dr1ANDOPRD.dat' scope=both sid='*';
alter system set dg_broker_config_file2='/app/oracle/base/admin/ANDOPRD/divers/dr2ANDOPRD.dat' scope=both sid='*';
alter system set dg_broker_start=TRUE scope=both sid='*';

alter system set dg_broker_config_file1='/app/oracle/base/admin/ANDODRP/divers/dr1ANDODRP.dat' scope=both sid='*';
alter system set dg_broker_config_file2='/app/oracle/base/admin/ANDODRP/divers/dr2ANDODRP.dat' scope=both sid='*';
alter system set dg_broker_start=TRUE scope=both sid='*';

dgmgrl
connect sys/"Secret00!"@//togoria:1600/ANDOPRD_STATIC

create configuration ANDO as
  primary database is ANDOPRD
  connect identifier is "//togoria:1600/ANDOPRD_STATIC";

add database ANDODRP
  as connect identifier is "//bakura:1600/ANDODRP_STATIC"
  maintained as physical;

enable configuration;
show configuration;

edit database 'andoprd' set property ArchiveLagTarget=0;
edit database 'andoprd' set property LogArchiveMaxProcesses=2;
edit database 'andoprd' set property LogArchiveMinSucceedDest=1;
edit database 'andoprd' set property StandbyFileManagement='AUTO';

edit database 'andodrp' set property ArchiveLagTarget=0;
edit database 'andodrp' set property LogArchiveMaxProcesses=2;
edit database 'andodrp' set property LogArchiveMinSucceedDest=1;
edit database 'andodrp' set property StandbyFileManagement='AUTO';

edit database 'andoprd' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=togoria)(PORT=1600))(CONNECT_DATA=(SERVICE_NAME=ANDOPRD_STATIC)(SERVER=DEDICATED)))';
|
||||
edit database 'andodrp' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=bakura)(PORT=1600))(CONNECT_DATA=(SERVICE_NAME=ANDODRP_STATIC)(SERVER=DEDICATED)))';
|
||||
|
||||
validate database 'andoprd'
|
||||
validate database 'andodrp'
|
||||
|
||||
switchover to 'andodrp'
|
||||
switchover to 'andoprd'
|
||||
switchover to 'andodrp'
|
||||
|
||||
convert database 'andodrp' to snapshot standby;
|
||||
convert database 'andodrp' to physical standby;
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
24
divers/purines.md
Normal file
@@ -0,0 +1,24 @@
|
||||
| Fish / Meat / Seafood            | Purines (mg/100 g) |
|
||||
|----------------------------------|--------------------|
|
||||
| Ground turkey, raw               | ~96 |
|
||||
| Cod                              | ~98 |
|
||||
| Haddock                          | ~110 |
|
||||
| Coley (pollock)                  | ~110 |
|
||||
| Hake                             | ~110 |
|
||||
| Halibut                          | ~120 |
|
||||
| Scallops                         | ~135 |
|
||||
| Sea bream                        | ~140 |
|
||||
| Sea bass                         | ~150 |
|
||||
| Ground chicken, raw              | ~158.7 |
|
||||
| Salmon                           | ~170 |
|
||||
| Trout                            | ~170 |
|
||||
| Shrimp                           | ~200 |
|
||||
| Pork                             | ~230 |
|
||||
| Beef                             | ~250 |
|
||||
| Tuna                             | ~290 |
|
||||
| Raw sardine                      | 345 |
|
||||
| Canned herring                   | 378 |
|
||||
| Cooked monkfish liver            | 398.7 |
|
||||
| Raw anchovies                    | 411 |
|
||||
| Dried sakura shrimp              | 748.9 |
|
||||
| Japanese mackerel                | 1,175 |
|
||||
3
divers/random_string_bash.txt
Normal file
@@ -0,0 +1,3 @@
|
||||
# generating random string in bash
|
||||
echo $RANDOM | md5sum | head -c 20; echo;
|
||||
cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20; echo;
|
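For completeness, the same thing in Python (an aside, not part of the original snippet; assumes Python 3.6+ for the `secrets` module):

```python
import secrets

# 10 random bytes rendered as 20 hex characters,
# same length as the md5sum / uuid tricks above
print(secrets.token_hex(10))
```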
||||
19
divers/rocky9_nmcli_example_01.txt
Normal file
@@ -0,0 +1,19 @@
|
||||
# Rocky 9 example: change a network interface IP address and the host name
|
||||
###################################################################
|
||||
|
||||
nmcli connection show
|
||||
nmcli connection show --active
|
||||
|
||||
nmcli connection modify enp1s0 ipv4.address 192.168.0.52/24
|
||||
nmcli connection modify enp1s0 ipv4.method manual ipv6.method ignore
|
||||
nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1
|
||||
nmcli connection modify enp1s0 ipv4.dns 192.168.0.8
|
||||
nmcli connection modify enp1s0 ipv4.dns-search swgalaxy
|
||||
|
||||
nmcli connection modify enp2s0 ipv4.address 192.168.1.52/24 ipv4.method manual ipv6.method ignore
|
||||
|
||||
# list host interfaces
|
||||
hostname -I
|
||||
|
||||
# set host name
|
||||
hostnamectl hostname ithor.swgalaxy
|
||||
8
divers/screen_command.md
Normal file
@@ -0,0 +1,8 @@
|
||||
## Screen configuration
|
||||
|
||||
Configuration file `~/.screenrc`:
|
||||
|
||||
termcapinfo xterm* ti@:te@
|
||||
caption always
|
||||
caption string "%{= bW}%3n %{y}%t %{-}%= %{m}%H%?%{-} -- %{c}%l%?%{-} -- %D %M %d %{y}%c"
|
||||
|
||||
34
divers/split_string_in_words_01.sql
Normal file
@@ -0,0 +1,34 @@
|
||||
/*
|
||||
vplesnlia: split input string in words
|
||||
*/
|
||||
|
||||
|
||||
DECLARE
|
||||
TYPE v_arr IS
|
||||
VARRAY(100) OF VARCHAR2(60);
|
||||
var v_arr;
|
||||
return_value VARCHAR2(60);
|
||||
BEGIN
|
||||
var := v_arr();
|
||||
FOR c1 IN (
|
||||
SELECT
|
||||
regexp_substr(
|
||||
'&&1', '[^ ]+', 1, level
|
||||
) AS string_parts
|
||||
FROM
|
||||
dual
|
||||
CONNECT BY
|
||||
regexp_substr(
|
||||
'&&1', '[^ ]+', 1, level
|
||||
) IS NOT NULL
|
||||
) LOOP
|
||||
var.extend;
|
||||
var(var.last) := c1.string_parts;
|
||||
END LOOP;
|
||||
|
||||
FOR i IN var.first..var.last LOOP
|
||||
return_value := var(i);
|
||||
dbms_output.put_line(return_value);
|
||||
END LOOP;
|
||||
|
||||
END;
|
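The same split can be sketched in plain Python (an analogy, not part of the PL/SQL above): `regexp_substr('&&1', '[^ ]+', 1, level)` extracts the level-th run of non-space characters, which `re.findall` does in one call:

```python
import re

def split_words(s: str) -> list[str]:
    # every maximal run of non-space characters,
    # like regexp_substr(s, '[^ ]+', 1, level) iterated over level
    return re.findall(r'[^ ]+', s)

print(split_words("alpha  beta gamma"))  # ['alpha', 'beta', 'gamma']
```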
||||
236
divers/sql_analytic_01.txt
Normal file
@@ -0,0 +1,236 @@
|
||||
https://livesql.oracle.com/apex/livesql/file/tutorial_GNRYA4548AQNXC0S04DXVEV08.html
|
||||
https://oracle-base.com/articles/misc/rank-dense-rank-first-last-analytic-functions#rank
|
||||
|
||||
drop table CARS purge;
|
||||
create table CARS (
|
||||
id INTEGER GENERATED ALWAYS AS IDENTITY
|
||||
,brand VARCHAR2(15) not null
|
||||
,model VARCHAR2(10) not null
|
||||
,year NUMBER(4) not null
|
||||
,color VARCHAR2(10) not null
|
||||
,category VARCHAR2(12) not null
|
||||
,price NUMBER not null
|
||||
,power NUMBER(4) not null
|
||||
,fuel VARCHAR2(8) not null
|
||||
)
|
||||
;
|
||||
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','A4','2001','gray','city','5400','150','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','A6','2012','gray','limousine','12000','204','DIESEL');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 4','2020','white','sport','16000','240','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','X6','2018','blue','SUV','15000','280','DIESEL');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Volkswagen','Polo','2014','gray','city','4800','90','DIESEL');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Arkana','2023','green','SUV','35000','220','ELECTRIC');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Porche','Cayenne','2021','black','SUV','41000','280','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Tesla','Model 3','2023','black','city','30500','250','ELECTRIC');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Tesla','Model 3','2023','white','city','30500','250','ELECTRIC');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Tesla','Model 3','2022','black','city','24000','250','ELECTRIC');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','A4','2022','red','city','26000','200','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','Q5','2021','gray','SUV','38000','260','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 3','2022','white','city','46000','240','ELECTRIC');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 3','2023','white','city','44000','240','ELECTRIC');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 3','2021','white','city','42000','240','ELECTRIC');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Clio','2019','black','city','8900','110','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Clio','2020','black','city','9600','110','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Twingo','2019','red','city','7800','90','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Twingo','2022','green','city','9200','90','SP');
|
||||
Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Porche','911','2022','gray','sport','61000','310','SP');
|
||||
|
||||
commit;
|
||||
|
||||
|
||||
-- display cars and total cars count
|
||||
select
|
||||
c.*
|
||||
,count(*) over() as Total_count
|
||||
from
|
||||
CARS c
|
||||
;
|
||||
|
||||
-- display cars and the number of cars by brand
|
||||
select
|
||||
c.*
|
||||
,count(*) over(partition by (brand)) as Brand_count
|
||||
from
|
||||
CARS c
|
||||
;
|
||||
|
||||
|
||||
-- number of cars and sum of prices grouped by color
|
||||
select color, count(*), sum(price)
|
||||
from CARS
|
||||
group by color;
|
||||
|
||||
-- integrating the last GROUP BY query as analytic functions
|
||||
-- adding those aggregates "inline" on each row
|
||||
select
|
||||
c.*
|
||||
,count(*) over(partition by (color)) as count_by_color
|
||||
,sum(price) over(partition by (color)) as SUM_price_by_color
|
||||
from
|
||||
CARS c
|
||||
;
|
||||
|
||||
|
||||
|
||||
-- average price by category
|
||||
select CATEGORY, avg(price)
|
||||
from CARS
|
||||
group by CATEGORY;
|
||||
|
||||
-- for each car, price as a percentage of the average price of its category
|
||||
select
|
||||
c.*
|
||||
,100*c.price/avg(c.price) over (partition by (category)) Price_by_avg_category_PERCENT
|
||||
from
|
||||
CARS c
|
||||
;
|
||||
|
||||
|
||||
select CATEGORY, avg(price)
|
||||
from CARS
|
||||
group by CATEGORY;
|
||||
|
||||
|
||||
-- ORDER BY in analytic functions: running value from the FIRST key up to the CURRENT key
|
||||
select b.*,
|
||||
count(*) over (
|
||||
order by brick_id
|
||||
) running_total,
|
||||
sum ( weight ) over (
|
||||
order by brick_id
|
||||
) running_weight
|
||||
from bricks b;
|
||||
|
||||
|
||||
BRICK_ID COLOUR SHAPE WEIGHT RUNNING_TOTAL RUNNING_WEIGHT
|
||||
---------- ---------- ---------- ---------- ------------- --------------
|
||||
1 blue cube 1 1 1
|
||||
2 blue pyramid 2 2 3
|
||||
3 red cube 1 3 4
|
||||
4 red cube 2 4 6
|
||||
5 red pyramid 3 5 9
|
||||
6 green pyramid 1 6 10
|
||||
|
||||
6 rows selected.
|
||||
|
||||
|
||||
select
|
||||
c.*
|
||||
,sum(c.price) over (order by c.id)
|
||||
from
|
||||
cars c;
|
||||
|
||||
|
||||
|
||||
ID BRAND MODEL YEAR COLOR CATEGORY PRICE POWER FUEL SUM(C.PRICE)OVER(ORDERBYC.ID)
|
||||
---------- --------------- ---------- ---------- ---------- ------------ ---------- ---------- -------- -----------------------------
|
||||
1 Audi A4 2001 gray city 5400 150 SP 5400
|
||||
2 Audi A6 2012 gray limousine 12000 204 DIESEL 17400
|
||||
3 BMW Serie 4 2020 white sport 16000 240 SP 33400
|
||||
4 BMW X6 2018 blue SUV 15000 280 DIESEL 48400
|
||||
5 Volkswagen Polo 2014 gray city 4800 90 DIESEL 53200
|
||||
6 Renault Arkana 2023 green SUV 35000 220 ELECTRIC 88200
|
||||
7 Porche Cayenne 2021 black SUV 41000 280 SP 129200
|
||||
8 Tesla Model 3 2023 black city 30500 250 ELECTRIC 159700
|
||||
9 Tesla Model 3 2023 white city 30500 250 ELECTRIC 190200
|
||||
10 Tesla Model 3 2022 black city 24000 250 ELECTRIC 214200
|
||||
11 Audi A4 2022 red city 26000 200 SP 240200
|
||||
12 Audi Q5 2021 gray SUV 38000 260 SP 278200
|
||||
13 BMW Serie 3 2022 white city 46000 240 ELECTRIC 324200
|
||||
14 BMW Serie 3 2023 white city 44000 240 ELECTRIC 368200
|
||||
15 BMW Serie 3 2021 white city 42000 240 ELECTRIC 410200
|
||||
16 Renault Clio 2019 black city 8900 110 SP 419100
|
||||
17 Renault Clio 2020 black city 9600 110 SP 428700
|
||||
18 Renault Twingo 2019 red city 7800 90 SP 436500
|
||||
19 Renault Twingo 2022 green city 9200 90 SP 445700
|
||||
20 Porche 911 2022 gray sport 61000 310 SP 506700
|
||||
|
||||
20 rows selected.
|
||||
|
||||
|
||||
-- adding PARTITION BY EXPR will "group by EXPR" and reset the FIRST key for each group
|
||||
select
|
||||
c.*
|
||||
,sum(c.price) over (partition by brand order by c.id)
|
||||
from
|
||||
cars c;
|
||||
|
||||
|
||||
ID BRAND MODEL YEAR COLOR CATEGORY PRICE POWER FUEL SUM(C.PRICE)OVER(PARTITIONBYBRANDORDERBYC.ID)
|
||||
---------- --------------- ---------- ---------- ---------- ------------ ---------- ---------- -------- ---------------------------------------------
|
||||
1 Audi A4 2001 gray city 5400 150 SP 5400
|
||||
2 Audi A6 2012 gray limousine 12000 204 DIESEL 17400
|
||||
11 Audi A4 2022 red city 26000 200 SP 43400
|
||||
12 Audi Q5 2021 gray SUV 38000 260 SP 81400
|
||||
3 BMW Serie 4 2020 white sport 16000 240 SP 16000
|
||||
4 BMW X6 2018 blue SUV 15000 280 DIESEL 31000
|
||||
13 BMW Serie 3 2022 white city 46000 240 ELECTRIC 77000
|
||||
14 BMW Serie 3 2023 white city 44000 240 ELECTRIC 121000
|
||||
15 BMW Serie 3 2021 white city 42000 240 ELECTRIC 163000
|
||||
7 Porche Cayenne 2021 black SUV 41000 280 SP 41000
|
||||
20 Porche 911 2022 gray sport 61000 310 SP 102000
|
||||
6 Renault Arkana 2023 green SUV 35000 220 ELECTRIC 35000
|
||||
16 Renault Clio 2019 black city 8900 110 SP 43900
|
||||
17 Renault Clio 2020 black city 9600 110 SP 53500
|
||||
18 Renault Twingo 2019 red city 7800 90 SP 61300
|
||||
19 Renault Twingo 2022 green city 9200 90 SP 70500
|
||||
8 Tesla Model 3 2023 black city 30500 250 ELECTRIC 30500
|
||||
9 Tesla Model 3 2023 white city 30500 250 ELECTRIC 61000
|
||||
10 Tesla Model 3 2022 black city 24000 250 ELECTRIC 85000
|
||||
5 Volkswagen Polo 2014 gray city 4800 90 DIESEL 4800
|
||||
|
||||
20 rows selected.
|
||||
|
||||
|
||||
|
||||
-- when the ORDER BY keys are not distinct, with over (order by KEY) the analytic value does not change across rows sharing the same KEY value
|
||||
-- to force row-by-row computation from the first row to the current one, add: rows between unbounded preceding and current row
|
||||
|
||||
|
||||
|
||||
select b.*,
|
||||
count(*) over (
|
||||
order by weight
|
||||
) running_total,
|
||||
sum ( weight ) over (
|
||||
order by weight
|
||||
) running_weight
|
||||
from bricks b
|
||||
order by weight;
|
||||
|
||||
|
||||
BRICK_ID COLOUR SHAPE WEIGHT RUNNING_TOTAL RUNNING_WEIGHT
|
||||
---------- ---------- ---------- ---------- ------------- --------------
|
||||
1 blue cube 1 3 3
|
||||
3 red cube 1 3 3
|
||||
6 green pyramid 1 3 3
|
||||
4 red cube 2 5 7
|
||||
2 blue pyramid 2 5 7
|
||||
5 red pyramid 3 6 10
|
||||
|
||||
|
||||
select b.*,
|
||||
count(*) over (
|
||||
order by weight
|
||||
rows between unbounded preceding and current row
|
||||
) running_total,
|
||||
sum ( weight ) over (
|
||||
order by weight
|
||||
rows between unbounded preceding and current row
|
||||
) running_weight
|
||||
from bricks b
|
||||
order by weight;
|
||||
|
||||
|
||||
|
||||
BRICK_ID COLOUR SHAPE WEIGHT RUNNING_TOTAL RUNNING_WEIGHT
|
||||
---------- ---------- ---------- ---------- ------------- --------------
|
||||
1 blue cube 1 1 1
|
||||
3 red cube 1 2 2
|
||||
6 green pyramid 1 3 3
|
||||
4 red cube 2 4 5
|
||||
2 blue pyramid 2 5 7
|
||||
5 red pyramid 3 6 10
|
||||
|
||||
6 rows selected.
|
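The difference can be sketched in plain Python (an illustration, not Oracle code): the default windowing treats rows with equal ORDER BY keys as peers that all receive their group's final running value, while ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW advances one row at a time:

```python
def running_rows(values):
    # ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW: row-by-row running sum
    total, out = 0, []
    for v in values:
        total += v
        out.append(total)
    return out

def running_range(values):
    # default windowing: rows sharing a key are peers and all receive
    # the running value of the last row of their peer group
    rows = running_rows(values)
    return [max(r for v2, r in zip(values, rows) if v2 == v) for v in values]

weights = [1, 1, 1, 2, 2, 3]   # brick weights, ordered by weight
print(running_rows(weights))   # [1, 2, 3, 5, 7, 10]
print(running_range(weights))  # [3, 3, 3, 7, 7, 10]
```

The two printed lists match the RUNNING_WEIGHT columns of the two queries above.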
||||
18
divers/swingbench_01.md
Normal file
@@ -0,0 +1,18 @@
|
||||
Setup (schema creation).
|
||||
This creates the SOE schema (password *secret*) in the PDB YODA, connecting as the admin user with sysdba privileges.
|
||||
|
||||
./oewizard -v -cl -create \
|
||||
-cs wayland/YODA -u soe -p secret \
|
||||
-scale 1 -tc 2 -dba "admin as sysdba" -dbap "Secret00!" \
|
||||
-ts ts_swingbench
|
||||
|
||||
Check:
|
||||
|
||||
./sbutil -soe -cs wayland/YODA -soe -u soe -p secret -val
|
||||
|
||||
Run benchmark:
|
||||
|
||||
./charbench -c ../configs/SOE_Server_Side_V2.xml \
|
||||
-u soe -p secret -uc 5 -cs wayland/YODA \
|
||||
-min 0 -max 10 -intermin 200 -intermax 500 -mt 5000 -mr -v users,tpm,tps,errs,vresp
|
||||
|
||||
21
divers/tanel_update.txt
Normal file
@@ -0,0 +1,21 @@
|
||||
delete mode 100644 tpt/ash/ash_wait_chains2.sql
|
||||
create mode 100644 tpt/ash/cashtop.sql
|
||||
delete mode 100644 tpt/ash/dash_wait_chains2.sql
|
||||
create mode 100644 tpt/ash/dashtopsum.sql
|
||||
create mode 100644 tpt/ash/dashtopsum_pga.sql
|
||||
delete mode 100644 tpt/ash/example_ash_report.html
|
||||
create mode 100644 tpt/ash/sqlexec_duration_buckets.sql
|
||||
create mode 100644 tpt/awr/awr_sqlid_binds.sql
|
||||
create mode 100644 tpt/awr/perfhub.html
|
||||
create mode 100644 tpt/create_sql_baseline_awr.sql
|
||||
create mode 100644 tpt/descpartxx.sql
|
||||
create mode 100644 tpt/descxx11.sql
|
||||
create mode 100644 tpt/lpstat.sql
|
||||
create mode 100644 tpt/netstat.sql
|
||||
create mode 100644 tpt/netstat2.sql
|
||||
create mode 100644 tpt/npstat.sql
|
||||
create mode 100644 tpt/oerrh.sql
|
||||
create mode 100644 tpt/oerrign.sql
|
||||
create mode 100644 tpt/setup/grant_snapper_privs.sql
|
||||
create mode 100644 tpt/setup/logon_trigger_ospid.sql
|
||||
create mode 100644 tpt/tabhisthybrid.sql
|
||||
231
divers/timescaledb_01.txt
Normal file
@@ -0,0 +1,231 @@
|
||||
CREATE TABLE t (
|
||||
id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
|
||||
i INTEGER,
|
||||
c VARCHAR(30),
|
||||
ts TIMESTAMP
|
||||
);
|
||||
|
||||
INSERT INTO t (i, c, ts)
|
||||
SELECT
|
||||
(random() * 9999 + 1)::int AS i,
|
||||
md5(random()::text)::varchar(30) AS c,
|
||||
(
|
||||
timestamp '2000-01-01'
|
||||
+ random() * (timestamp '2025-12-31' - timestamp '2000-01-01')
|
||||
) AS ts
|
||||
FROM generate_series(1, 200000000);
|
||||
|
||||
|
||||
-- export standard table to CSV
|
||||
COPY t
|
||||
TO '/mnt/unprotected/tmp/postgres/t.csv'
|
||||
DELIMITER ','
|
||||
CSV HEADER;
|
||||
|
||||
-- import standard table from CSV
|
||||
|
||||
CREATE TABLE t (
|
||||
id INTEGER,
|
||||
i INTEGER,
|
||||
c TEXT,
|
||||
ts TIMESTAMPTZ
|
||||
);
|
||||
|
||||
COPY t
|
||||
FROM '/mnt/unprotected/tmp/postgres/t.csv'
|
||||
DELIMITER ','
|
||||
CSV HEADER;
|
||||
|
||||
CREATE INDEX IF NOT EXISTS T_TS ON T (TS);
|
||||
|
||||
|
||||
------------
|
||||
-- Oracle --
|
||||
------------
|
||||
|
||||
CREATE TABLE t (
|
||||
id INTEGER,
|
||||
i INTEGER,
|
||||
c VARCHAR2(30),
|
||||
ts TIMESTAMP
|
||||
);
|
||||
|
||||
|
||||
|
||||
-- file t.ctl
|
||||
|
||||
LOAD DATA
|
||||
INFILE 't.csv'
|
||||
INTO TABLE t
|
||||
APPEND
|
||||
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
|
||||
TRAILING NULLCOLS
|
||||
(
|
||||
id INTEGER EXTERNAL,
|
||||
i INTEGER EXTERNAL,
|
||||
c CHAR(30),
|
||||
ts TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF"
|
||||
)
|
||||
|
||||
sqlldr "'/ as sysdba'" \
|
||||
control=t.ctl \
|
||||
log=t.log \
|
||||
bad=t.bad \
|
||||
rows=50000
|
||||
|
||||
|
||||
------------------
|
||||
-- TimescaleDB --
|
||||
------------------
|
||||
|
||||
Install & config from sources:
|
||||
https://www.tigerdata.com/docs/self-hosted/latest/install/installation-source
|
||||
|
||||
CREATE TABLE ht (
|
||||
id INTEGER,
|
||||
i INTEGER,
|
||||
c TEXT,
|
||||
ts TIMESTAMPTZ
|
||||
);
|
||||
|
||||
SELECT create_hypertable(
|
||||
'ht', -- table name
|
||||
'ts', -- time column
|
||||
chunk_time_interval => INTERVAL '1 month'
|
||||
);
|
||||
|
||||
SELECT add_retention_policy(
|
||||
'ht',
|
||||
INTERVAL '25 years'
|
||||
);
|
||||
|
||||
SELECT * FROM timescaledb_information.jobs
|
||||
WHERE proc_name = 'policy_retention';
|
||||
|
||||
SELECT alter_job(
|
||||
job_id => <your_job_id>,
|
||||
schedule_interval => INTERVAL '6 hours'
|
||||
);
|
||||
|
||||
timescaledb-parallel-copy --connection "postgres://postgres@localhost/db01" --table ht --file '/mnt/unprotected/tmp/postgres/t.csv' \
|
||||
--workers 16 --reporting-period 30s -skip-header
|
||||
|
||||
|
||||
SELECT show_chunks('ht');
|
||||
|
||||
-----------
|
||||
-- Bench --
|
||||
-----------
|
||||
|
||||
-- q1
|
||||
select * from t where ts between timestamp '2015-04-01 09:00:00' and timestamp '2015-04-01 09:00:20';
|
||||
|
||||
-- q2
|
||||
select count(*) from t;
|
||||
|
||||
|
||||
|
||||
|
||||
Classic PostgreSQL
|
||||
|
||||
Table load: 5 min
|
||||
|
||||
q1: 52 sec
|
||||
q2: 45 sec
|
||||
|
||||
|
||||
|
||||
|
||||
TimescaleDB
|
||||
|
||||
Table load: 5 min
|
||||
|
||||
|
||||
db01=# SELECT pg_size_pretty(pg_total_relation_size('public.t'));
|
||||
pg_size_pretty
|
||||
----------------
|
||||
18 GB
|
||||
(1 row)
|
||||
|
||||
db01=# SELECT pg_size_pretty(hypertable_size('public.ht'));
|
||||
pg_size_pretty
|
||||
----------------
|
||||
19 GB
|
||||
(1 row)
|
||||
|
||||
|
||||
ALTER TABLE ht
|
||||
SET (
|
||||
timescaledb.compress
|
||||
);
|
||||
|
||||
|
||||
SELECT add_compression_policy(
|
||||
'ht',
|
||||
INTERVAL '2 years'
|
||||
);
|
||||
|
||||
SELECT job_id
|
||||
FROM timescaledb_information.jobs
|
||||
WHERE proc_name = 'policy_compression'
|
||||
AND hypertable_name = 'ht';
|
||||
|
||||
CALL run_job(1002);
|
||||
|
||||
|
||||
SELECT
|
||||
chunk_schema || '.' || chunk_name AS chunk,
|
||||
is_compressed,
|
||||
range_start,
|
||||
range_end
|
||||
FROM timescaledb_information.chunks
|
||||
WHERE hypertable_name = 'ht'
|
||||
ORDER BY range_start;
|
||||
|
||||
|
||||
|
||||
|
||||
-----------------------------------------
|
||||
|
||||
CREATE MATERIALIZED VIEW ht_hourly_avg
|
||||
WITH (timescaledb.continuous) AS
|
||||
SELECT
|
||||
time_bucket('1 hour', ts) AS bucket,
|
||||
AVG(i) AS avg_i
|
||||
FROM ht
|
||||
GROUP BY bucket;
|
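What `time_bucket('1 hour', ts)` computes can be sketched in plain Python (an analogy with made-up sample rows, not TimescaleDB code): truncate each timestamp down to its hour, then aggregate per bucket:

```python
from collections import defaultdict
from datetime import datetime

def time_bucket_hour(ts: datetime) -> datetime:
    # truncate to the start of the hour, like time_bucket('1 hour', ts)
    return ts.replace(minute=0, second=0, microsecond=0)

rows = [  # (ts, i) sample rows standing in for the ht hypertable
    (datetime(2024, 1, 1, 9, 5), 10),
    (datetime(2024, 1, 1, 9, 55), 30),
    (datetime(2024, 1, 1, 10, 1), 7),
]
buckets = defaultdict(list)
for ts, i in rows:
    buckets[time_bucket_hour(ts)].append(i)
avg_i = {b: sum(v) / len(v) for b, v in buckets.items()}
print(avg_i)  # bucket 09:00 -> 20.0, bucket 10:00 -> 7.0
```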
||||
|
||||
SELECT add_continuous_aggregate_policy('ht_hourly_avg',
|
||||
start_offset => INTERVAL '2 days',
|
||||
end_offset => INTERVAL '0 hours',
|
||||
schedule_interval => INTERVAL '5 minutes'
|
||||
);
|
||||
|
||||
SELECT add_continuous_aggregate_policy('ht_hourly_avg',
|
||||
start_offset => INTERVAL '7 days',
|
||||
end_offset => INTERVAL '0 hours',
|
||||
schedule_interval => INTERVAL '30 minutes'
|
||||
);
|
||||
|
||||
|
||||
|
||||
SELECT *
|
||||
FROM ht_hourly_avg
|
||||
WHERE bucket >= now() - INTERVAL '7 days'
|
||||
ORDER BY bucket;
|
||||
|
||||
|
||||
|
||||
SELECT job_id, proc_name, config
|
||||
FROM timescaledb_information.jobs;
|
||||
|
||||
|
||||
SELECT pid, query, state, backend_type
|
||||
FROM pg_stat_activity
|
||||
WHERE query LIKE '%run_job%'
|
||||
AND query LIKE '%' || <job_id> || '%';
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
41
divers/tiny_root_CA_01.md
Normal file
@@ -0,0 +1,41 @@
|
||||
> Based on article https://www.baeldung.com/openssl-self-signed-cert
|
||||
|
||||
## Build a homemade root CA
|
||||
|
||||
mkdir -p /app/CA
|
||||
cd /app/CA
|
||||
|
||||
Create rootCA private key:
|
||||
|
||||
openssl genrsa -des3 -out rootCA.key 4096
|
||||
|
||||
Create rootCA certificate:
|
||||
|
||||
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 7300 -out rootCA.pem
|
||||
|
||||
|
||||
## Generate client root CA signed certificate for a client
|
||||
|
||||
Client private key:
|
||||
|
||||
openssl genrsa -out raxus.swgalaxy.key 2048
|
||||
|
||||
Client certificate signing request (CSR):
|
||||
|
||||
openssl req -new -key raxus.swgalaxy.key -out raxus.swgalaxy.csr
|
||||
|
||||
The root CA creates a signed certificate from the certificate signing request:
|
||||
|
||||
openssl x509 -req -CA rootCA.pem -CAkey rootCA.key -in raxus.swgalaxy.csr -out raxus.swgalaxy.crt -days 365 -CAcreateserial
|
||||
|
||||
Optionally create the full chain:
|
||||
|
||||
cat raxus.swgalaxy.crt rootCA.pem > raxus.swgalaxy.fullchain.crt
|
||||
|
||||
Optionally create a PKCS#12 export that can be imported into an Oracle wallet:
|
||||
|
||||
openssl pkcs12 -export \
|
||||
-in raxus.swgalaxy.crt \
|
||||
-inkey raxus.swgalaxy.key \
|
||||
-certfile rootCA.pem \
|
||||
-out raxus.swgalaxy.p12
|
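A quick way to check the chain end to end is `openssl verify` (a sketch with throwaway names under /tmp, not the original files; the demo keys are unencrypted and `-subj` skips the interactive prompts):

```shell
# build a disposable root CA, sign a leaf cert, then verify the chain
d=/tmp/ca_demo; rm -rf "$d"; mkdir -p "$d"; cd "$d"
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 30 \
    -subj "/CN=demoRootCA" -out rootCA.pem
openssl genrsa -out leaf.key 2048
openssl req -new -key leaf.key -subj "/CN=leaf.example" -out leaf.csr
openssl x509 -req -CA rootCA.pem -CAkey rootCA.key -in leaf.csr \
    -out leaf.crt -days 30 -CAcreateserial
openssl verify -CAfile rootCA.pem leaf.crt   # leaf.crt: OK
```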
||||
9
divers/use_cp_to_copy_hidden_files_01.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Use cp to copy hidden files
|
||||
|
||||
cp -r from/.[^.]* to/
|
||||
|
||||
Example:
|
||||
|
||||
cd /root
|
||||
cp -r ./.[^.]* /mnt/unprotected/tmp/reinstall_coruscant/dom0/slash_root/
|
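A minimal check of the glob (throwaway /tmp paths, not from the original): `.[^.]*` picks up dot-files while skipping the `.` and `..` entries (edge case: it also skips names starting with two dots):

```shell
# copy only the hidden entries from one directory to another
d=/tmp/glob_demo; rm -rf "$d"; mkdir -p "$d/from" "$d/to"
touch "$d/from/.bashrc" "$d/from/visible"
cp -r "$d"/from/.[^.]* "$d/to/"
ls -A "$d/to"   # only .bashrc was copied
```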
||||
|
||||
8
divers/windows_11_auto_login_01
Normal file
@@ -0,0 +1,8 @@
|
||||
# create local admin user
|
||||
net user vplesnila secret /add
|
||||
net localgroup administrators vplesnila /add
|
||||
|
||||
# setup autologin
|
||||
REG ADD "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
|
||||
REG ADD "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d vplesnila /f
|
||||
REG ADD "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d secret /f
|
||||
3
divers/windows_11_create_local_admin_01.txt
Normal file
@@ -0,0 +1,3 @@
|
||||
net user USER-NAME PASSWORD /add
|
||||
net localgroup administrators USER-ACCOUNT /add
|
||||
|
||||
555
divers/xtts_non-cdb_to_cdb_01.md
Normal file
@@ -0,0 +1,555 @@
|
||||
## Context
|
||||
|
||||
- Source: non-CDB = GREEDO@rodia-scan
|
||||
- Target: PDB = REEK, CDB=AERONPRD@ylesia-scan
|
||||
|
||||
## Setup
|
||||
|
||||
Create tablespaces and users:
|
||||
|
||||
```
|
||||
create tablespace TS1 datafile size 16M autoextend on next 16M;
|
||||
create tablespace TS2 datafile size 16M autoextend on next 16M;
|
||||
create tablespace TS3 datafile size 16M autoextend on next 16M;
|
||||
|
||||
alter tablespace TS1 add datafile size 16M autoextend on next 16M;
|
||||
alter tablespace TS1 add datafile size 16M autoextend on next 16M;
|
||||
alter tablespace TS2 add datafile size 16M autoextend on next 16M;
|
||||
alter tablespace TS3 add datafile size 16M autoextend on next 16M;
|
||||
alter tablespace TS3 add datafile size 16M autoextend on next 16M;
|
||||
alter tablespace TS3 add datafile size 16M autoextend on next 16M;
|
||||
|
||||
create user U1 identified by secret;
|
||||
grant connect, resource, create view,create job to U1;
|
||||
alter user U1 quota unlimited on TS1;
|
||||
alter user U1 quota unlimited on TS2;
|
||||
alter user U1 quota unlimited on TS3;
|
||||
|
||||
create user U2 identified by secret;
|
||||
grant connect, resource, create view,create job to U2;
|
||||
alter user U2 quota unlimited on TS1;
|
||||
alter user U2 quota unlimited on TS2;
|
||||
alter user U2 quota unlimited on TS3;
|
||||
```
|
||||
|
||||
For each user, create objects:
|
||||
|
||||
connect U1/secret
|
||||
-- create objects
|
||||
connect U2/secret
|
||||
-- create objects
|
||||
|
||||
Create objects script:
|
||||
|
||||
```
|
||||
-- TABLE 1 in TS1
|
||||
CREATE TABLE table1_ts1 (
|
||||
id NUMBER PRIMARY KEY,
|
||||
data VARCHAR2(100),
|
||||
created_at DATE DEFAULT SYSDATE
|
||||
) TABLESPACE TS1;
|
||||
|
||||
CREATE SEQUENCE table1_seq
|
||||
START WITH 1
|
||||
INCREMENT BY 1
|
||||
NOCACHE
|
||||
NOCYCLE;
|
||||
|
||||
CREATE OR REPLACE TRIGGER trg_table1_id
|
||||
BEFORE INSERT ON table1_ts1
|
||||
FOR EACH ROW
|
||||
BEGIN
|
||||
IF :NEW.id IS NULL THEN
|
||||
SELECT table1_seq.NEXTVAL INTO :NEW.id FROM dual;
|
||||
END IF;
|
||||
END;
|
||||
/
|
||||
|
||||
-- TABLE 2 in TS2
|
||||
CREATE TABLE table2_ts2 (
|
||||
id NUMBER PRIMARY KEY,
|
||||
data VARCHAR2(100),
|
||||
updated_at DATE
|
||||
) TABLESPACE TS2;
|
||||
|
||||
CREATE SEQUENCE table2_seq
|
||||
START WITH 1
|
||||
INCREMENT BY 1
|
||||
NOCACHE
|
||||
NOCYCLE;
|
||||
|
||||
CREATE OR REPLACE TRIGGER trg_table2_id
|
||||
BEFORE INSERT ON table2_ts2
|
||||
FOR EACH ROW
|
||||
BEGIN
|
||||
IF :NEW.id IS NULL THEN
|
||||
SELECT table2_seq.NEXTVAL INTO :NEW.id FROM dual;
|
||||
END IF;
|
||||
END;
|
||||
/
|
||||
|
||||
-- TABLE 3 in TS3
|
||||
CREATE TABLE table3_ts3 (
|
||||
id NUMBER PRIMARY KEY,
|
||||
info VARCHAR2(100),
|
||||
status VARCHAR2(20)
|
||||
) TABLESPACE TS3;
|
||||
|
||||
CREATE SEQUENCE table3_seq
|
||||
START WITH 1
|
||||
INCREMENT BY 1
|
||||
NOCACHE
|
||||
NOCYCLE;
|
||||
|
||||
CREATE OR REPLACE TRIGGER trg_table3_id
|
||||
BEFORE INSERT ON table3_ts3
|
||||
FOR EACH ROW
|
||||
BEGIN
|
||||
IF :NEW.id IS NULL THEN
|
||||
SELECT table3_seq.NEXTVAL INTO :NEW.id FROM dual;
|
||||
END IF;
|
||||
END;
|
||||
/
|
||||
|
||||
|
||||
CREATE OR REPLACE VIEW combined_view AS
|
||||
SELECT id, data, created_at, NULL AS updated_at, NULL AS status FROM table1_ts1
|
||||
UNION ALL
|
||||
SELECT id, data, updated_at, NULL AS created_at, NULL AS status FROM table2_ts2
|
||||
UNION ALL
|
||||
SELECT id, info AS data, NULL, NULL, status FROM table3_ts3;
|
||||
|
||||
|
||||
CREATE OR REPLACE PACKAGE data_ops AS
|
||||
PROCEDURE insert_random_data;
|
||||
PROCEDURE update_random_data;
|
||||
PROCEDURE delete_random_data;
|
||||
END data_ops;
|
||||
/
|
||||
|
||||
CREATE OR REPLACE PACKAGE BODY data_ops AS
|
||||
PROCEDURE insert_random_data IS
|
||||
BEGIN
|
||||
FOR i IN 1..10 LOOP
|
||||
INSERT INTO table1_ts1 (data)
|
||||
VALUES (DBMS_RANDOM.STRING('A', 10));
|
||||
END LOOP;
|
||||
|
||||
FOR i IN 1..3 LOOP
|
||||
INSERT INTO table3_ts3 (info, status)
|
||||
VALUES (DBMS_RANDOM.STRING('A', 10), 'NEW');
|
||||
END LOOP;
|
||||
END;
|
||||
|
||||
PROCEDURE update_random_data IS
|
||||
BEGIN
|
||||
FOR i IN 1..7 LOOP
|
||||
INSERT INTO table2_ts2 (data)
|
||||
VALUES (DBMS_RANDOM.STRING('A', 10));
|
||||
END LOOP;
|
||||
FOR rec IN (
|
||||
SELECT id FROM (
|
||||
SELECT id FROM table2_ts2 ORDER BY DBMS_RANDOM.VALUE
|
||||
) WHERE ROWNUM <= 5
|
||||
) LOOP
|
||||
UPDATE table2_ts2
|
||||
SET data = DBMS_RANDOM.STRING('A', 10), updated_at = SYSDATE
|
||||
WHERE id = rec.id;
|
||||
END LOOP;
|
||||
END;
|
||||
|
||||
PROCEDURE delete_random_data IS
|
||||
BEGIN
|
||||
FOR rec IN (
|
||||
SELECT id FROM (
|
||||
SELECT id FROM table3_ts3 ORDER BY DBMS_RANDOM.VALUE
|
||||
) WHERE ROWNUM <= 2
|
||||
) LOOP
|
||||
DELETE FROM table3_ts3 WHERE id = rec.id;
|
||||
END LOOP;
|
||||
END;
|
||||
END data_ops;
|
||||
/
|
||||
```
|
||||
|
||||
Create job to run every 1 minute:
|
||||
|
||||
```
|
||||
BEGIN
|
||||
DBMS_SCHEDULER.CREATE_JOB (
|
||||
job_name => 'random_ops_job',
|
||||
job_type => 'PLSQL_BLOCK',
|
||||
job_action => '
|
||||
BEGIN
|
||||
data_ops.insert_random_data;
|
||||
data_ops.update_random_data;
|
||||
data_ops.delete_random_data;
|
||||
END;',
|
||||
start_date => SYSTIMESTAMP,
|
||||
repeat_interval => 'FREQ=MINUTELY; INTERVAL=1',
|
||||
enabled => TRUE,
|
||||
comments => 'Job to insert, update and delete random data every minute.'
|
||||
);
|
||||
END;
|
||||
/
|
||||
```
|
||||
|
||||
To restart the job:
|
||||
|
||||
```
|
||||
--Restart the job
|
||||
BEGIN
|
||||
DBMS_SCHEDULER.enable('random_ops_job');
|
||||
END;
|
||||
/
|
||||
```
|
||||
|
||||
Count the rows in the tables:

```
select
  'u1.table1_ts1:'||count(*) from u1.table1_ts1
union select
  'u1.table2_ts2:'||count(*) from u1.table2_ts2
union select
  'u1.table3_ts3:'||count(*) from u1.table3_ts3
union select
  'u2.table1_ts1:'||count(*) from u2.table1_ts1
union select
  'u2.table2_ts2:'||count(*) from u2.table2_ts2
union select
  'u2.table3_ts3:'||count(*) from u2.table3_ts3
order by 1 asc
/
```

To ensure the PDB opens automatically, create a service that starts automatically in the PDB:

```
srvctl add service -s adm_reek -db AERONPRD -preferred AERONPRD1,AERONPRD2,AERONPRD3 -pdb REEK -role PRIMARY
srvctl start service -s adm_reek -db AERONPRD
```

## XTTS

> Note MOS: V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

### Initial setup

Identify the tablespaces to transport, probably all non-"administrative" tablespaces:

```
select
  listagg(tablespace_name, ',')
    within group
    (order by tablespace_name) as non_sys_ts
from
  dba_tablespaces
where
  contents not in ('UNDO','TEMPORARY') and
  tablespace_name not in ('SYSTEM','SYSAUX');
```

For the source and target servers, define folders to be used for scripts, backup sets, Data Pump dumps, etc.
In our case, that will be a shared NFS folder, `/mnt/unprotected/tmp/oracle/xtts`.

> The size of this folder should be greater than the size of the full database.

Unzip the xtts scripts:

```
cd /mnt/unprotected/tmp/oracle/xtts
unzip /mnt/yavin4/kit/Oracle/XTTS/rman_xttconvert_VER4.3.zip
```

Configure the `xtt.properties` file:

```
tablespaces=TS1,TS2,TS3,USERS
src_scratch_location=/mnt/unprotected/tmp/oracle/xtts/scratch
dest_datafile_location=+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts/
dest_scratch_location=/mnt/unprotected/tmp/oracle/xtts/scratch
asm_home=/app/oracle/grid/product/19
asm_sid=+ASM1
destconnstr=sys/"Secret00!"@ylesia-scan/adm_reek
usermantransport=1
```

On the target server, create the ASM directory where the datafiles will be restored:

```
mkdir +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts
```

On **both source and target** servers, set the `TMPDIR` environment variable to the path of the xtts scripts:

```
export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
```

### Prepare Phase

This step corresponds to the initial full backup of the source database and its restore on the target system.

Initial backup on the source server:

```
export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3
```

Initial restore on the target server:

```
export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3
```

> The `--debug` argument is optional.

### Roll Forward Phase

We can repeat incremental backup/restore operations for as long as necessary.

> New datafiles added to the source database are automatically handled by this step.

The commands are exactly the same (with or without debug mode).

For backup:

```
export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
```

For restore:

```
export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
```

> Running successive backup or successive restore operations does not pose a problem.

### Final Incremental Backup

On the **source** database, put the tablespaces in **read-only** mode:

```
select
  'alter tablespace '||tablespace_name||' read only;' as COMMAND
from
  dba_tablespaces
where
  contents not in ('UNDO','TEMPORARY') and
  tablespace_name not in ('SYSTEM','SYSAUX');
```

Check:

```
select distinct status
from
  dba_tablespaces
where
  contents not in ('UNDO','TEMPORARY') and
  tablespace_name not in ('SYSTEM','SYSAUX');
```

Take the final incremental backup:

```
export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
```

Restore the final incremental backup:

```
export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
```

### Metadata export

Create a DATAPUMP directory on **both** the source and destination databases.
On the source (non-CDB):

```
SQL> create or replace directory XTTS as '/mnt/unprotected/tmp/oracle/xtts';
```

On the destination (PDB):

```
export ORACLE_PDB_SID=REEK
SQL> create or replace directory XTTS as '/mnt/unprotected/tmp/oracle/xtts';
```

Export the metadata:

```
expdp userid="'/ as sysdba'" dumpfile=XTTS:metadata.dmp logfile=XTTS:metadata.log FULL=y TRANSPORTABLE=always
```

### Optionally: on target, put the target datafiles read-only at the OS level

Identify the OMF target datafiles:

```
asmcmd -p
cd +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts
ls --permission
```

For each datafile, set read-only permissions, for example:

```
chmod 444 +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts/*
```

If you get:

```
ORA-15304: operation requires ACCESS_CONTROL.ENABLED attribute to be TRUE (DBD ERROR: OCIStmtExecute)
```

then set the following diskgroup attributes and retry.

```
column dg_name format a20
column name format a50
column VALUE format a30

set lines 120

select
  dg.name dg_name, attr.name, attr.value
from
  v$asm_attribute attr
  join v$asm_diskgroup dg on attr.group_number=dg.group_number
where
  attr.name in ('compatible.rdbms','access_control.enabled')
order by dg.name, attr.name
/


alter diskgroup DATA set attribute 'compatible.rdbms' = '19.0.0.0.0';
alter diskgroup RECO set attribute 'compatible.rdbms' = '19.0.0.0.0';

alter diskgroup DATA set attribute 'access_control.enabled' = 'TRUE';
alter diskgroup RECO set attribute 'access_control.enabled' = 'TRUE';
```

> Compare the number of datafiles transported with the number of datafiles of the non-Oracle tablespaces.
> Check whether the transported tablespaces already exist on the target database.

### Metadata import and tablespace plug-in

Create the impdp parfile `impo_metadata.par`:

```
userid="/ as sysdba"
dumpfile=XTTS:metadata.dmp
logfile=XTTS:impo_metadata.log
transport_datafiles=
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.290.1205059373,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.291.1205059373,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.298.1205060113,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.289.1205059373,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS2.293.1205059375,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS2.300.1205060113,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS2.292.1205059375,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.294.1205059381,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.295.1205059381,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.296.1205059381,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.297.1205059381,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.299.1205060113,
+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/USERS.302.1205084171
```

Run the import:

```
impdp parfile=impo_metadata.par
```

Bounce the PDB (or the CDB); otherwise we can get errors like:

```
ORA-01114: IO error writing block to file 33 (block # 1)
ORA-01110: data file 33:
'+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/ts1.298.1205060113'
ORA-27009: cannot write to file opened for read
```

Put the plugged-in tablespaces in read/write mode:

```
select
  'alter tablespace '||tablespace_name||' read write;' as COMMAND
from
  dba_tablespaces
where
  contents not in ('UNDO','TEMPORARY') and
  tablespace_name not in ('SYSTEM','SYSAUX');
```

Remove the aliases in order to use only OMF datafiles:

```
cd +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts
rmalias ts1_8.dbf ts2_13.dbf... .... ...
cd ..
rm -rf xtts
```

## Unexpected issues

During the metadata import step I realized I had forgotten to include the USERS tablespace in `xtt.properties`, and impdp failed with the error:

```
ORA-39352: Wrong number of TRANSPORT_DATAFILES specified: expected 13, received 12
```

Since the USERS tablespace was already in read-only mode, I copied its datafile manually to the target database.

Identify the file number:

```
SQL> select FILE_ID from dba_data_files where TABLESPACE_NAME='USERS';

   FILE_ID
----------
         7
```

Back up the datafile on the source:

```
run{
  set nocfau;
  backup datafile 7 format '/mnt/unprotected/tmp/oracle/xtts/%d_%U_%s_%t.bck';
}
```

Restore the datafile on the target:

```
run {
  restore from platform 'Linux x86 64-bit'
  foreign datafile 7 format '+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts//USERS.dbf'
  from backupset '/mnt/unprotected/tmp/oracle/xtts/GREEDO_0i3t87ss_18_1_1_18_1205084060.bck';
}
```

Put the datafile read-only at the ASM level:

```
chmod 444 +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/USERS.302.1205084171
```

Run the impdp again.

## Troubleshooting

Having the datafiles to plug in set read-only at the ASM level allows repeating the impdp operation as many times as necessary.
For example, to completely re-execute the metadata impdp from the initial conditions:
- drop the newly plugged tablespaces
- drop the non-Oracle-maintained users
- run the metadata impdp again

```
drop tablespace TS1 including contents;
drop tablespace TS2 including contents;
drop tablespace TS3 including contents;
drop tablespace USERS including contents;

select 'drop user '||USERNAME||' cascade;' from dba_users where ORACLE_MAINTAINED='N';
```

## histograms/histogram_01.txt

# Tracking column histogram modifications by M.Houri
# https://hourim.wordpress.com/2020/08/06/historical-column-histogram/


create table T1 tablespace TS1 as
select rownum id, decode(mod(rownum,10),0,2,1) c_freq, nvl(blocks,999) c_hb
from dba_tables ;

update T1 set c_freq=3 where rownum<=10;
commit;

create index idx_freq on T1(C_FREQ) tablespace TS1;
create index idx_hb on T1(C_HB) tablespace TS1;


select c_freq,count(*) from T1 group by c_freq order by 2 desc;


exec dbms_stats.gather_table_stats (user, 'T1', method_opt=>'for all columns size 1');

col column_name for a20

select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T1' and column_name='C_FREQ';



select /*+ GATHER_PLAN_STATISTICS */ * from T1 where C_FREQ=3;

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T1' and column_name='C_HB';

select /*+ GATHER_PLAN_STATISTICS */ * from T1 where C_HB=999;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


---------------- FREQ

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns C_FREQ size AUTO');

select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T1' and column_name='C_FREQ';

select endpoint_value as column_value,
       endpoint_number as cumulative_frequency,
       endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
from user_tab_histograms
where table_name = 'T1' and column_name = 'C_FREQ';

alter system flush shared_pool;

select /*+ GATHER_PLAN_STATISTICS */ * from T1 where C_FREQ=3;

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


--------------- WEIGHT

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns C_HB size 254');

select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T1' and column_name='C_HB';


select endpoint_value as column_value,
       endpoint_number as cumulative_frequency,
       endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
from user_tab_histograms
where table_name = 'T1' and column_name = 'C_HB';



create table T1 tablespace TS1 as
select rownum id, decode(mod(rownum,10),0,2,1) c_freq, nvl(blocks,999) c_hb
from dba_extents ;

update T1 set c_freq=3 where rownum<=10;
commit;

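In `user_tab_histograms` a frequency histogram stores cumulative row counts in `endpoint_number`; the per-value frequency is the difference from the previous endpoint, which is exactly what the `lag(...)` expression in the queries above computes. A small Python sketch with made-up endpoint data:

```python
# Hypothetical (endpoint_value, endpoint_number) rows, ordered by
# endpoint_number, as a frequency histogram would return them.
endpoints = [(1, 45), (2, 50), (3, 60)]  # value -> cumulative row count

def per_value_frequency(endpoints):
    """Mimic: endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number)."""
    freqs = {}
    prev = 0
    for value, cum in endpoints:
        freqs[value] = cum - prev
        prev = cum
    return freqs

print(per_value_frequency(endpoints))  # {1: 45, 2: 5, 3: 10}
```

The last cumulative endpoint equals the total number of sampled rows, so the per-value frequencies always sum back to it.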
## histograms/histogram_02.txt

drop table T1 purge;

create table T1 tablespace TS1 as
select
  rownum id,
  decode(mod(rownum,10),0,10,1) col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100000
     )
/


---------

drop table T1 purge;

create table T1 tablespace TS1 as
select
  rownum id,
  decode(mod(rownum,3),0,'m3',
    decode(mod(rownum,5),0,'m5',
      decode(mod(rownum,7),0,'m7',
        decode(mod(rownum,11),0,'m11',
          decode(mod(rownum,13),0,'m13',
            decode(mod(rownum,17),0,'m17',
              'other')))))) col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100000
     )
/


------------



drop table T1 purge;

create table T1 tablespace TS1 as
select
  rownum id,
  case when rownum<=10 then rownum else 99999 end col1,
  case when rownum<=400 then rownum else 99999 end col2,
  case when rownum<=4000 then rownum else 99999 end col3,
  case when rownum<=10000 then rownum else 99999 end col4
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100000
     )
/


---------

drop table T1 purge;

create table T1 tablespace TS1 as
select
  rownum id,
  case when rownum>=1 and rownum<1000 then mod(rownum,10) else 99999 end col1,
  case when rownum>=1 and rownum<99900 then mod(rownum,1000) else rownum end col2,
  mod(rownum,300) col3
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100000
     )
/



---------

drop table T1 purge;

create table T1 tablespace TS1 as
select
  rownum id,
  mod(rownum,254) col1,
  mod(rownum,255) col2,
  mod(rownum,256) col3
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100000
     )
/




exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY');


select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T1';




select endpoint_value as column_value,
       endpoint_number as cumulative_frequency,
       endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
from user_tab_histograms
where table_name = 'T1' and column_name = 'COL4';



select col1,count(*) from T1 group by col1 order by 2 desc;



--------------------

https://www.red-gate.com/simple-talk/databases/oracle-databases/12c-histogram-top-frequency/

drop table T_TopFreq purge;
create table T_TopFreq as
select
  rownum n1
, case when mod(rownum, 100000) = 0 then 90
       when mod(rownum, 10000) = 0 then 180
       when mod(rownum, 1000) = 0 then 84
       when mod(rownum, 100) = 0 then 125
       when mod(rownum,50) = 2 then 7
       when mod(rownum-1,80) = 2 then 22
       when mod(rownum, 10) = 0 then 19
       when mod(rownum-1,10) = 5 then 15
       when mod(rownum-1,5) = 1 then 11
       when trunc((rownum -1/3)) < 5 then 25
       when trunc((rownum -1/5)) < 20 then 33
       else 42
  end n2
from dual
connect by level <= 2e2
/


set serveroutput ON

exec dbms_stats.set_global_prefs ('TRACE', to_char (1+16));
exec dbms_stats.gather_table_stats (user,'T_TOPFREQ',method_opt=> 'for columns n2 size 8');
exec dbms_stats.set_global_prefs('TRACE', null);


select
  sum (cnt) TopNRows
from (select
        n2
       ,count(*) cnt
      from t_topfreq
      group by n2
      order by count(*) desc
     )
where rownum <= 8;

with FREQ as
( select
    n2
   ,count(*) cnt
  from t_topfreq
  group by n2
  order by count(*) desc
)
select sum(cnt) from FREQ where rownum<=8;



select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T_TOPFREQ';


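The two queries above compute how many rows the 8 most frequent values of `n2` cover. As I understand the 12c feature (an assumption here, not stated in these notes), a top-frequency histogram with N buckets is only built when the top N values cover at least a fraction 1 - 1/N of the rows. A quick Python check of that rule with hypothetical value counts:

```python
def qualifies_top_frequency(value_counts, n_buckets):
    """Assumed 12c rule: top-n values must cover >= (1 - 1/n) of all rows."""
    total = sum(value_counts)
    top_n = sum(sorted(value_counts, reverse=True)[:n_buckets])
    return top_n >= (1 - 1.0 / n_buckets) * total

# 200 rows; counts per distinct value (hypothetical, not the T_TopFreq data)
skewed = [120, 30, 20, 10, 8, 5, 4, 2, 1]   # top 8 cover 199/200 rows
uniform = [10] * 20                          # top 8 cover only 80/200 rows
print(qualifies_top_frequency(skewed, 8))    # True
print(qualifies_top_frequency(uniform, 8))   # False
```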
--------------------------------------------------------------

drop table T1 purge;

create table T1 tablespace TS1 as
select
  rownum id,
  mod(rownum,300) col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100e3
     )
/

update T1 set col1=567 where id between 70e3 and 75e3;
update T1 set col1=678 where id between 75e3 and 90e3;
update T1 set col1=789 where id between 90e3 and 100e3;

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY');

-- histogram type
select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T1';


-- how many rows are in the TOP-N values ?
with FREQ as
( select
    col1
   ,count(*) cnt
  from T1
  group by col1
  order by count(*) desc
)
select sum(cnt) from FREQ where rownum<=254
;

-- frequency by column value / bucket
select endpoint_value as column_value,
       endpoint_number as cumulative_frequency,
       endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency,
       ENDPOINT_REPEAT_COUNT
from user_tab_histograms
where table_name = 'T1' and column_name = 'COL1';



--------------------------------------------------------------

--------------------------------------------------------------

drop table T1 purge;

create table T1 tablespace TS1 as
select
  rownum id,
  mod(rownum,2000) col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 1000e3
     )
/


exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 2048');

-- histogram type
select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from user_tab_col_statistics
where table_name='T1';

## histograms/histogram_03.txt

create pluggable database NEREUS admin user PDB$OWNER identified by secret;
alter pluggable database NEREUS open;
alter pluggable database NEREUS save state;

alter session set container=NEREUS;
show pdbs
show con_name

grant sysdba to adm identified by secret;

alias NEREUS='rlwrap sqlplus adm/secret@bakura/NEREUS as sysdba'

create tablespace USERS datafile size 32M autoextend ON next 32M;
alter database default tablespace USERS;

create user HR identified by secret
quota unlimited on USERS;

grant CONNECT,RESOURCE to HR;
grant CREATE VIEW to HR;

wget https://raw.githubusercontent.com/oracle-samples/db-sample-schemas/main/human_resources/hr_cre.sql
wget https://raw.githubusercontent.com/oracle-samples/db-sample-schemas/main/human_resources/hr_popul.sql

connect HR/secret@bakura/NEREUS

spool install.txt
@hr_cre.sql
@hr_popul.sql


alter user HR no authentication;

select /*+ GATHER_PLAN_STATISTICS */
  emp.FIRST_NAME
, emp.LAST_NAME
, dept.DEPARTMENT_NAME
from
  HR.EMPLOYEES emp,
  HR.DEPARTMENTS dept
where
  emp.DEPARTMENT_ID = dept.DEPARTMENT_ID
order by
  FIRST_NAME,
  LAST_NAME
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

exec dbms_stats.delete_table_stats('HR','EMPLOYEES');
exec dbms_stats.delete_table_stats('HR','DEPARTMENTS');

alter system flush shared_pool;

exec dbms_stats.gather_table_stats('HR','EMPLOYEES', method_opt=>'for all columns size SKEWONLY');
exec dbms_stats.gather_table_stats('HR','DEPARTMENTS', method_opt=>'for all columns size SKEWONLY');

exec dbms_stats.gather_table_stats('HR','EMPLOYEES', method_opt=>'for all columns size 254');
exec dbms_stats.gather_table_stats('HR','DEPARTMENTS', method_opt=>'for all columns size 254');



select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from dba_tab_col_statistics
where owner='HR' and table_name='EMPLOYEES' and column_name='DEPARTMENT_ID';

select endpoint_value as column_value,
       endpoint_number as cumulative_frequency,
       endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
from dba_tab_histograms
where owner='HR' and table_name='EMPLOYEES' and column_name='DEPARTMENT_ID';

select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
from dba_tab_col_statistics
where owner='HR' and table_name='DEPARTMENTS' and column_name='DEPARTMENT_ID';

select endpoint_value as column_value,
       endpoint_number as cumulative_frequency,
       endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
from dba_tab_histograms
where owner='HR' and table_name='DEPARTMENTS' and column_name='DEPARTMENT_ID';


break on report skip 1
compute sum of product on report
column product format 999,999,999

with f1 as (
  select
    endpoint_value value,
    endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
  from
    dba_tab_histograms
  where
    owner='HR'
    and table_name = 'EMPLOYEES'
    and column_name = 'DEPARTMENT_ID'
  order by
    endpoint_value
),
f2 as (
  select
    endpoint_value value,
    endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
  from
    dba_tab_histograms
  where
    owner='HR'
    and table_name = 'DEPARTMENTS'
    and column_name = 'DEPARTMENT_ID'
  order by
    endpoint_value
)
select
  f1.value, f1.frequency, f2.frequency, f1.frequency * f2.frequency product
from
  f1, f2
where
  f2.value = f1.value
;

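The final query above multiplies the per-value frequencies of `EMPLOYEES.DEPARTMENT_ID` and `DEPARTMENTS.DEPARTMENT_ID` and reports the total; summing the products over the matching values is the classic frequency-histogram join cardinality estimate. A Python sketch with hypothetical frequencies (not the actual HR numbers):

```python
def join_cardinality(freq_a, freq_b):
    """Sum of freq_a[v] * freq_b[v] over values present in both histograms."""
    return sum(cnt * freq_b[v] for v, cnt in freq_a.items() if v in freq_b)

# Hypothetical per-value frequencies (value -> row count)
emp_freq  = {10: 1, 20: 2, 50: 45, 80: 34}
dept_freq = {10: 1, 20: 1, 50: 1, 80: 1, 90: 1}
print(join_cardinality(emp_freq, dept_freq))  # 1*1 + 2*1 + 45*1 + 34*1 = 82
```

Values present in only one histogram (like department 90 here) contribute nothing, matching the inner join on `f2.value = f1.value`.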
## histograms/histogram_04.txt

drop table T1 purge;

create table T1 tablespace USERS as
select
  rownum id,
  case when rownum<10 then mod(rownum,4) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 20
     )
/



drop table T2 purge;

create table T2 tablespace USERS as
select
  rownum id,
  case when rownum<25 then mod(rownum,10) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100
     )
/

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');

alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS */
  T1.ID id1
, T2.ID id2
, T1.COL1 val
from
  T1,
  T2
where
  T1.COL1=150
  and T1.COL1=T2.COL1
/


select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');

exec dbms_stats.delete_table_stats('SYS','T1');
exec dbms_stats.delete_table_stats('SYS','T2');

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size SKEWONLY');


alter system flush shared_pool;


select /*+ GATHER_PLAN_STATISTICS */
  T1.ID
, T2.ID
, T1.COL1
from
  T1,
  T2
where
  T1.COL1=3
  and T1.COL1=T2.COL1
/


select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


@stats_col SYS T1 % % % %
@stats_col SYS T2 % % % %

@hist_cross_freq SYS T1 COL1 SYS T2 COL2

## histograms/histogram_05.txt

drop table T1 purge;
create table T1 tablespace USERS as
select
  rownum id,
  case when rownum<4e4 then mod(rownum,500) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 5e5
     )
/


drop table T2 purge;
create table T2 tablespace USERS as
select
  rownum id,
  case when rownum<8e5 then mod(rownum,500) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 1e6
     )
/


alter system flush shared_pool;

drop table Q purge;

create table Q as
select /*+ GATHER_PLAN_STATISTICS */
  T1.ID id1
, T2.ID id2
, T1.COL1 val
from
  T1,
  T2
where
  T1.COL1=150
  and T1.COL1=T2.COL1
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');


exec dbms_stats.delete_table_stats('SYS','T1');
exec dbms_stats.delete_table_stats('SYS','T2');

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size SKEWONLY');


@stats_col SYS T1 % % % %
@stats_col SYS T2 % % % %

@hist_cross_freq SYS T1 COL1 SYS T2 COL2

## histograms/histogram_06.txt

https://hourim.wordpress.com/?s=histogram
|
||||
|
||||
https://jonathanlewis.wordpress.com/2013/10/09/12c-histograms-pt-3/
|
||||
|
||||
exec dbms_stats.delete_table_stats('SYS','T1');
|
||||
|
||||
|
||||
exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns size 20 col1');
|
||||
|
||||
exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
|
||||
|
||||
|
||||
|
||||
select
|
||||
endpoint_number,
|
||||
endpoint_value,
|
||||
endpoint_repeat_count
|
||||
from
|
||||
user_tab_histograms
|
||||
where
|
||||
table_name = 'T1'
|
||||
order by
|
||||
endpoint_number
|
||||
;
|
||||
|
||||
|
||||
set pages 50 lines 256
|
||||
|
||||
alter system flush shared_pool;
|
||||
|
||||
drop table Q purge;
|
||||
|
||||
create table Q as
|
||||
select /*+ GATHER_PLAN_STATISTICS */
|
||||
a.COL1 COL1
|
||||
from
|
||||
T1 a,
|
||||
T1 b
|
||||
where
|
||||
a.COL1=b.COL1
|
||||
/
|
||||
|
||||
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));
|
||||
|
||||
|
||||
set pages 50 lines 256
|
||||
|
||||
alter system flush shared_pool;
|
||||
|
||||
drop table Q purge;
|
||||
|
||||
create table Q as
|
||||
select /*+ GATHER_PLAN_STATISTICS */
|
||||
a.COL1 COL1
|
||||
from
|
||||
T1 a,
|
||||
T1 b
|
||||
where
|
||||
a.COL1=33 and
|
||||
a.COL1=b.COL1
|
||||
/
|
||||
|
||||
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));
|
||||
|
||||
|
||||
|
||||
|
||||
set pages 50 lines 256
|
||||
|
||||
alter system flush shared_pool;
|
||||
|
||||
drop table Q purge;
|
||||
|
||||
create table Q as
|
||||
select /*+ GATHER_PLAN_STATISTICS */
|
||||
a.COL1 COL1
|
||||
from
|
||||
T1 a,
|
||||
T1 b
|
||||
where
|
||||
a.COL1=37 and
|
||||
a.COL1=b.COL1
|
||||
/
|
||||
|
||||
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));
|
||||
|
||||
|
||||
|
||||
37 distinct values - 20 popular values = 17 non-popular values
Over 32 rows => 17 non-popular values (uniformly distributed) => ~2 rows / value

x 17

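The arithmetic above can be sketched in Python (an illustration only; the row count 32 is the figure quoted in the note, not derived here):

```python
# With a size-20 top-frequency histogram on a column holding 37 distinct
# values, 20 values are "popular" (histogram endpoints) and the remaining
# rows are spread evenly over the values without an endpoint.
num_distinct = 37        # distinct values in COL1 (from the note above)
num_popular = 20         # endpoints kept by 'for columns size 20 col1'
remaining_rows = 32      # rows quoted above for the non-popular values

non_popular = num_distinct - num_popular         # 17 non-popular values
rows_per_value = remaining_rows / non_popular    # ~1.9, i.e. about 2 rows/value

print(non_popular, round(rows_per_value))        # 17 2
```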
48
histograms/histogram_07.txt
Normal file
48
histograms/histogram_07.txt
Normal file
@@ -0,0 +1,48 @@

exec dbms_stats.delete_table_stats('SYS','T1');


exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns size 20 col1');

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');



set pages 50 lines 256

alter system flush shared_pool;

drop table Q purge;

create table Q as
select /*+ GATHER_PLAN_STATISTICS */
    a.COL1 COL1
from
    T1 a,
    T1 b
where
    a.COL1=9999 and
    a.COL1=b.COL1
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

density = (nr_of_lines/_distinct_values)/100 = frequency_of_column / 100


frequency_of_non_popular_values = (nr_of_lines-sum(endpoint repeat count)) / (number_of_distinct_values - number_of_endpoints)


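The non-popular-frequency formula above, written as a small Python helper (a sketch; the argument names are mine, not from any Oracle API):

```python
def non_popular_frequency(num_rows, endpoint_repeat_counts, num_distinct):
    """Estimated row count for a value that is NOT a histogram endpoint:
    rows not accounted for by the endpoints, spread evenly over the
    distinct values that have no endpoint of their own."""
    num_endpoints = len(endpoint_repeat_counts)
    return (num_rows - sum(endpoint_repeat_counts)) / (num_distinct - num_endpoints)

# hypothetical numbers: 100 rows, 3 endpoints covering 10 rows each, 13 distinct values
print(non_popular_frequency(100, [10, 10, 10], 13))   # 7.0
```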
32 LINES ---- 17 NON POP
?

Test: popular value
      non-popular value
      non-popular value out of range

138
histograms/histogram_08.txt
Normal file
@@ -0,0 +1,138 @@

-- Setup
--------
drop table T1 purge;

create table T1 tablespace USERS as
select
    rownum id,
    case when rownum<10 then mod(rownum,4) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 20
     )
/

drop table T2 purge;

create table T2 tablespace USERS as
select
    rownum id,
    case when rownum<25 then mod(rownum,10) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100
     )
/

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');


set lines 250 pages 999
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS */
    T1.ID id1
  , T2.ID id2
  , T1.COL1 val
from
    T1,
    T2
where
    T1.COL1=T2.COL1
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


@stats_col SYS T1 % % % %
b s Avg Num
Object a e Col Buc
Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
TABLE SYS.T1 COL1 11-FEB-23 09:20:04 Y N 0 20 4 5 0 .200000000000000 NONE 1
TABLE SYS.T1 ID 11-FEB-23 09:20:04 Y N 0 20 3 20 0 .050000000000000 NONE 1

SQL> @stats_col SYS T2 % % % %
b s Avg Num
Object a e Col Buc
Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
TABLE SYS.T2 COL1 11-FEB-23 09:20:04 Y N 0 100 4 11 0 .090909090909091 NONE 1
TABLE SYS.T2 ID 11-FEB-23 09:20:04 Y N 0 100 3 100 0 .010000000000000 NONE 1


-------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | Writes | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | CREATE TABLE STATEMENT | | 1 | | | 7 (100)| 0 |00:00:00.01 | 25 | 2 | | | |
| 1 | LOAD AS SELECT | Q | 1 | | | | 0 |00:00:00.01 | 25 | 2 | 1043K| 1043K| 1043K (0)|
|* 2 | HASH JOIN | | 1 | 182 | 2366 | 6 (0)| 861 |00:00:00.01 | 4 | 0 | 2078K| 2078K| 1219K (0)|
| 3 | TABLE ACCESS FULL | T1 | 1 | 20 | 120 | 3 (0)| 20 |00:00:00.01 | 2 | 0 | | | |
| 4 | TABLE ACCESS FULL | T2 | 1 | 100 | 700 | 3 (0)| 100 |00:00:00.01 | 2 | 0 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------------

-- rows1*rows2/max(distinct1,distinct2) = rows1*rows2*min(density1,density2)

SQL> select 20*100*.090909090909091 from dual;

20*100*.090909090909091
-----------------------
             181.818182

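The no-histogram join cardinality formula in the comment above, checked numerically (a sketch; the row counts and NDVs are taken from the stats listings above):

```python
# Join cardinality without histograms:
# rows1 * rows2 / max(ndv1, ndv2)  ==  rows1 * rows2 * min(density1, density2)
rows_t1, rows_t2 = 20, 100
ndv_t1_col1, ndv_t2_col1 = 5, 11      # NumDistinct for COL1 in the stats listing
density_t1 = 1 / ndv_t1_col1          # .2
density_t2 = 1 / ndv_t2_col1          # .0909...

est = rows_t1 * rows_t2 * min(density_t1, density_t2)
print(round(est))   # 182, matching E-Rows of the HASH JOIN in the plan
```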
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS LEADING(T2 T1) */
    T1.ID id1
  , T2.ID id2
  , T1.COL1 val
from
    T1,
    T2
where
    -- T1.COL1=150 and
    T1.COL1=T2.COL1
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

@stats_col SYS T1 % % % %
b s Avg Num
Object a e Col Buc
Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
TABLE SYS.T1 COL1 11-FEB-23 09:20:04 Y N 0 20 4 5 0 .200000000000000 NONE 1
TABLE SYS.T1 ID 11-FEB-23 09:20:04 Y N 0 20 3 20 0 .050000000000000 NONE 1

SQL> @stats_col SYS T2 % % % %
b s Avg Num
Object a e Col Buc
Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
TABLE SYS.T2 COL1 11-FEB-23 09:20:04 Y N 0 100 4 11 0 .090909090909091 NONE 1
TABLE SYS.T2 ID 11-FEB-23 09:20:04 Y N 0 100 3 100 0 .010000000000000 NONE 1


--------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | Writes | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | CREATE TABLE STATEMENT | | 1 | | | 7 (100)| 0 |00:00:00.01 | 24 | 2 | | | |
| 1 | LOAD AS SELECT | Q | 1 | | | | 0 |00:00:00.01 | 24 | 2 | 1043K| 1043K| 1043K (0)|
|* 2 | HASH JOIN | | 1 | 182 | 2366 | 6 (0)| 861 |00:00:00.01 | 4 | 0 | 2078K| 2078K| 1315K (0)|
| 3 | TABLE ACCESS FULL | T2 | 1 | 100 | 700 | 3 (0)| 100 |00:00:00.01 | 2 | 0 | | | |
| 4 | TABLE ACCESS FULL | T1 | 1 | 20 | 120 | 3 (0)| 20 |00:00:00.01 | 2 | 0 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------------

-- rows1*rows2/max(distinct1,distinct2) = rows1*rows2*min(density1,density2)

SQL> select 100*20*.090909090909091 from dual;

100*20*.090909090909091
-----------------------
             181.818182


83
histograms/histogram_09.txt
Normal file
@@ -0,0 +1,83 @@

-- Setup
--------
drop table T1 purge;

create table T1 tablespace USERS as
select
    rownum id,
    case when rownum<10 then mod(rownum,4) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 20
     )
/

drop table T2 purge;

create table T2 tablespace USERS as
select
    rownum id,
    case when rownum<25 then mod(rownum,10) else 999 end col1
from ( select 1 just_a_column
       from DUAL
       connect by level <= 100
     )
/

exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');


set lines 250 pages 999
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS */
    T1.ID id1
  , T2.ID id2
  , T1.COL1 val
from
    T1,
    T2
where
    T1.COL1=T2.ID
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

@stats_col SYS T1 % % % %
b s Avg Num
Object a e Col Buc
Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
TABLE SYS.T1 COL1 11-FEB-23 09:20:04 Y N 0 20 4 5 0 .200000000000000 NONE 1
TABLE SYS.T1 ID 11-FEB-23 09:20:04 Y N 0 20 3 20 0 .050000000000000 NONE 1

SQL> @stats_col SYS T2 % % % %
b s Avg Num
Object a e Col Buc
Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
TABLE SYS.T2 COL1 11-FEB-23 09:20:04 Y N 0 100 4 11 0 .090909090909091 NONE 1
TABLE SYS.T2 ID 11-FEB-23 09:20:04 Y N 0 100 3 100 0 .010000000000000 NONE 1


--------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | Writes | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | CREATE TABLE STATEMENT | | 1 | | | 7 (100)| 0 |00:00:00.01 | 24 | 1 | | | |
| 1 | LOAD AS SELECT | Q | 1 | | | | 0 |00:00:00.01 | 24 | 1 | 1043K| 1043K| 1043K (0)|
|* 2 | HASH JOIN | | 1 | 20 | 180 | 6 (0)| 7 |00:00:00.01 | 4 | 0 | 2078K| 2078K| 1219K (0)|
| 3 | TABLE ACCESS FULL | T1 | 1 | 20 | 120 | 3 (0)| 20 |00:00:00.01 | 2 | 0 | | | |
| 4 | TABLE ACCESS FULL | T2 | 1 | 100 | 300 | 3 (0)| 100 |00:00:00.01 | 2 | 0 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------------

-- rows1*rows2/max(distinct1,distinct2) = rows1*rows2*min(density1,density2)

SQL> select 20*100*.010000000000000 from dual;

20*100*.010000000000000
-----------------------
                     20

179
histograms/histogram_10.txt
Normal file
@@ -0,0 +1,179 @@

-- Setup
--------
drop table T1 purge;

create table T1(
    id   NUMBER not null,
    col1 NUMBER,
    col2 NUMBER
)
tablespace USERS;

declare
    v_id   NUMBER;
    v_col1 NUMBER;
    v_col2 NUMBER;
begin
    for i IN 1..40 loop
        -- id column
        v_id:=i;
        -- col1 column
        if (i between 1 and 15) then v_col1:=mod(i,3); end if;
        if (i between 16 and 40) then v_col1:=i; end if;
        -- col2 column
        if (i between 1 and 30) then v_col2:=mod(i,6); end if;
        if (i between 31 and 40) then v_col2:=999; end if;
        -- insert values
        insert into T1 values (v_id,v_col1,v_col2);
    end loop;
    commit;
end;
/


drop table T2 purge;

create table T2(
    id   NUMBER not null,
    col1 NUMBER,
    col2 NUMBER
)
tablespace USERS;

declare
    v_id   NUMBER;
    v_col1 NUMBER;
    v_col2 NUMBER;
begin
    for i IN 1..150 loop
        -- id column
        v_id:=i;
        -- col1 column
        if (i between 1 and 49) then v_col1:=mod(i,7); end if;
        if (i between 50 and 100) then v_col1:=i; end if;
        if (i between 101 and 150) then v_col1:=777; end if;
        -- col2 column
        if (i between 1 and 100) then v_col2:=mod(i,10); end if;
        if (i between 101 and 140) then v_col2:=999; end if;
        if (i between 141 and 150) then v_col2:=i; end if;
        -- insert values
        insert into T2 values (v_id,v_col1,v_col2);
    end loop;
    commit;
end;
/


exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');

set lines 250 pages 999
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS */
    T1.ID id1
  , T2.ID id2
  , T1.COL1 val
from
    T1,
    T2
where
    T1.COL1=T2.COL1
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


set lines 250 pages 999
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS */
    T1.ID id1
  , T2.ID id2
  , T1.COL1 val
from
    T1,
    T2
where
    T1.COL2=T2.COL2
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


---------------------------------------------------------
set lines 250 pages 999
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS */
    T1.ID id1
  , T2.ID id2
  , T1.COL1 val
from
    T1,
    T2
where
    T1.COL1=T2.COL1 and
    T1.COL2=T2.COL2
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));


set lines 250 pages 999
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS */
    T1.ID id1
  , T2.ID id2
  , T1.COL1 val
from
    T1,
    T2
where
    T1.COL1=T2.COL1 or
    T1.COL2=T2.COL2
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));

--------------------------------------------------------

set lines 250 pages 999
alter system flush shared_pool;

drop table Q purge;
create table Q as
select /*+ GATHER_PLAN_STATISTICS MONITOR */
    *
from
    T2
where
    COL1>=7
/

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES +NOTE'));

set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000

select dbms_perf.report_sql(sql_id=>'cgud94u0jkhjj',outer_start_time=>sysdate-1, outer_end_time=>sysdate, selected_start_time=>sysdate-1, selected_end_time=>sysdate,type=>'TEXT') from dual;

SELECT report_id,PERIOD_START_TIME,PERIOD_END_TIME,GENERATION_TIME FROM dba_hist_reports WHERE component_name = 'sqlmonitor' AND (period_start_time BETWEEN sysdate-1 and sysdate) AND key1 = 'cgud94u0jkhjj';


set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
SELECT DBMS_AUTO_REPORT.REPORT_REPOSITORY_DETAIL(RID => 145, TYPE => 'text') FROM dual;

109
histograms/hybrid_stats_tab1_insert.sql
Normal file
@@ -0,0 +1,109 @@

drop table T1 purge;

create table T1 (col1 NUMBER) tablespace USERS;

insert into T1 values (8);
insert into T1 values (12);
insert into T1 values (12);
insert into T1 values (13);
insert into T1 values (13);
insert into T1 values (13);
insert into T1 values (15);
insert into T1 values (16);
insert into T1 values (16);
insert into T1 values (17);
insert into T1 values (18);
insert into T1 values (18);
insert into T1 values (19);
insert into T1 values (19);
insert into T1 values (19);
insert into T1 values (20);
insert into T1 values (20);
insert into T1 values (20);
insert into T1 values (20);
insert into T1 values (20);
insert into T1 values (21);
insert into T1 values (22);
insert into T1 values (22);
insert into T1 values (22);
insert into T1 values (23);
insert into T1 values (23);
insert into T1 values (24);
insert into T1 values (24);
insert into T1 values (25);
insert into T1 values (26);
insert into T1 values (26);
insert into T1 values (26);
insert into T1 values (27);
insert into T1 values (27);
insert into T1 values (27);
insert into T1 values (27);
insert into T1 values (27);
insert into T1 values (27);
insert into T1 values (28);
insert into T1 values (28);
insert into T1 values (28);
insert into T1 values (28);
insert into T1 values (28);
insert into T1 values (28);
insert into T1 values (29);
insert into T1 values (29);
insert into T1 values (29);
insert into T1 values (29);
insert into T1 values (29);
insert into T1 values (29);
insert into T1 values (30);
insert into T1 values (30);
insert into T1 values (30);
insert into T1 values (31);
insert into T1 values (31);
insert into T1 values (31);
insert into T1 values (31);
insert into T1 values (31);
insert into T1 values (32);
insert into T1 values (32);
insert into T1 values (32);
insert into T1 values (33);
insert into T1 values (33);
insert into T1 values (33);
insert into T1 values (33);
insert into T1 values (33);
insert into T1 values (33);
insert into T1 values (33);
insert into T1 values (33);
insert into T1 values (34);
insert into T1 values (34);
insert into T1 values (34);
insert into T1 values (35);
insert into T1 values (35);
insert into T1 values (35);
insert into T1 values (35);
insert into T1 values (35);
insert into T1 values (35);
insert into T1 values (35);
insert into T1 values (36);
insert into T1 values (37);
insert into T1 values (38);
insert into T1 values (38);
insert into T1 values (38);
insert into T1 values (38);
insert into T1 values (38);
insert into T1 values (39);
insert into T1 values (39);
insert into T1 values (40);
insert into T1 values (41);
insert into T1 values (42);
insert into T1 values (42);
insert into T1 values (43);
insert into T1 values (43);
insert into T1 values (43);
insert into T1 values (44);
insert into T1 values (45);
insert into T1 values (46);
insert into T1 values (50);
insert into T1 values (59);

commit;

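As a quick cross-check of the skewed data this script loads (a Python sketch; the `(value, repetitions)` pairs are tallied by hand from the INSERT statements above):

```python
from collections import Counter

# (value, repetitions) pairs tallied from the INSERT statements above
runs = [(8,1),(12,2),(13,3),(15,1),(16,2),(17,1),(18,2),(19,3),(20,5),(21,1),
        (22,3),(23,2),(24,2),(25,1),(26,3),(27,6),(28,6),(29,6),(30,3),(31,5),
        (32,3),(33,8),(34,3),(35,7),(36,1),(37,1),(38,5),(39,2),(40,1),(41,1),
        (42,2),(43,3),(44,1),(45,1),(46,1),(50,1),(59,1)]
counts = Counter(dict(runs))

print(sum(counts.values()))       # total rows inserted
print(counts.most_common(3))      # 33, 35 and the 27/28/29 group dominate,
                                  # i.e. the candidates a hybrid histogram
                                  # would record as popular endpoints
```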
109
histograms/hybrid_stats_tab2_insert.sql
Normal file
@@ -0,0 +1,109 @@

drop table T2 purge;

create table T2 (col1 NUMBER) tablespace USERS;

insert into T2 values (8);
insert into T2 values (12);
insert into T2 values (12);
insert into T2 values (22);
insert into T2 values (22);
insert into T2 values (22);
insert into T2 values (15);
insert into T2 values (16);
insert into T2 values (16);
insert into T2 values (17);
insert into T2 values (18);
insert into T2 values (18);
insert into T2 values (19);
insert into T2 values (19);
insert into T2 values (19);
insert into T2 values (20);
insert into T2 values (20);
insert into T2 values (20);
insert into T2 values (20);
insert into T2 values (20);
insert into T2 values (21);
insert into T2 values (22);
insert into T2 values (22);
insert into T2 values (22);
insert into T2 values (23);
insert into T2 values (23);
insert into T2 values (25);
insert into T2 values (25);
insert into T2 values (25);
insert into T2 values (26);
insert into T2 values (26);
insert into T2 values (26);
insert into T2 values (55);
insert into T2 values (55);
insert into T2 values (55);
insert into T2 values (55);
insert into T2 values (55);
insert into T2 values (55);
insert into T2 values (28);
insert into T2 values (28);
insert into T2 values (28);
insert into T2 values (28);
insert into T2 values (28);
insert into T2 values (28);
insert into T2 values (29);
insert into T2 values (29);
insert into T2 values (29);
insert into T2 values (29);
insert into T2 values (29);
insert into T2 values (29);
insert into T2 values (30);
insert into T2 values (30);
insert into T2 values (30);
insert into T2 values (31);
insert into T2 values (31);
insert into T2 values (31);
insert into T2 values (31);
insert into T2 values (31);
insert into T2 values (32);
insert into T2 values (32);
insert into T2 values (32);
insert into T2 values (33);
insert into T2 values (33);
insert into T2 values (33);
insert into T2 values (33);
insert into T2 values (33);
insert into T2 values (33);
insert into T2 values (33);
insert into T2 values (33);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (35);
insert into T2 values (36);
insert into T2 values (37);
insert into T2 values (38);
insert into T2 values (38);
insert into T2 values (38);
insert into T2 values (38);
insert into T2 values (38);
insert into T2 values (39);
insert into T2 values (39);
insert into T2 values (50);
insert into T2 values (51);
insert into T2 values (52);
insert into T2 values (52);
insert into T2 values (53);
insert into T2 values (53);
insert into T2 values (53);
insert into T2 values (55);
insert into T2 values (55);
insert into T2 values (56);
insert into T2 values (50);
insert into T2 values (59);

commit;

124
logminer/logmnr_02.txt
Normal file
@@ -0,0 +1,124 @@

# https://redikx.wordpress.com/2015/07/10/logminer-to-analyze-archive-logs-on-different-database/

alias HUTTPRD='rlwrap sqlplus sys/"Secret00!"@bakura:1521/HUTTPRD as sysdba'
alias ZABRAKPRD='rlwrap sqlplus sys/"Secret00!"@togoria:1521/ZABRAKPRD as sysdba'

alias DURGA='rlwrap sqlplus jedi/"Secret00!"@bakura:1521/DURGA as sysdba'
alias MAUL='rlwrap sqlplus jedi/"Secret00!"@togoria:1521/MAUL as sysdba'

alias WOMBAT='sqlplus wombat/animal@bakura/DURGA'


# on PDB DURGA as the WOMBAT user
alter session set NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss';

drop table DEMO purge;
create table DEMO(d date);

insert into DEMO values (sysdate);
insert into DEMO values (sysdate);
insert into DEMO values (sysdate);
insert into DEMO values (sysdate);
insert into DEMO values (sysdate);
commit;
insert into DEMO values (sysdate);
commit;
delete from DEMO;
commit;

# back up the generated archivelogs
rman target /
run
{
set nocfau;
allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
backup as compressed backupset archivelog all delete input;
}

# store the dictionary in the redo log
begin
dbms_logmnr_d.build(options=>dbms_logmnr_d.store_in_redo_logs);
end;
/

# identify the archivelogs containing the dictionary
select thread#,sequence# from gv$archived_log where DICTIONARY_BEGIN='YES';
select thread#,sequence# from gv$archived_log where DICTIONARY_END='YES';

# back up the archivelog containing the dictionary
rman target /
run
{
set nocfau;
allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
backup as compressed backupset archivelog sequence 12 delete input;
}

# Goal: list all DML against the DEMO table between 2024-06-23 15:00:00 and 2024-06-23 16:00:00

# identify the required archivelogs
select THREAD#,max(SEQUENCE#) from gv$archived_log where FIRST_TIME<=timestamp'2024-06-23 15:00:00' group by THREAD#;
select THREAD#,min(SEQUENCE#) from gv$archived_log where NEXT_TIME>=timestamp'2024-06-23 16:00:00' group by THREAD#;
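The two queries above bracket the sequence range to restore: the last log that starts at or before the window, and the first log that ends at or after it. The same selection in Python, over a hypothetical archived-log catalog (illustration only; the timestamps are invented to reproduce the 3..8 range used below):

```python
from datetime import datetime

# hypothetical catalog rows: (sequence#, first_time, next_time)
logs = [(2, datetime(2024, 6, 23, 13, 0),  datetime(2024, 6, 23, 14, 30)),
        (3, datetime(2024, 6, 23, 14, 30), datetime(2024, 6, 23, 15, 10)),
        (4, datetime(2024, 6, 23, 15, 10), datetime(2024, 6, 23, 15, 40)),
        (8, datetime(2024, 6, 23, 15, 40), datetime(2024, 6, 23, 16, 5)),
        (9, datetime(2024, 6, 23, 16, 5),  datetime(2024, 6, 23, 17, 0))]
start = datetime(2024, 6, 23, 15, 0)
end   = datetime(2024, 6, 23, 16, 0)

# last log started before the window, first log ended after it
first_seq = max(s for s, first, nxt in logs if first <= start)
last_seq  = min(s for s, first, nxt in logs if nxt >= end)
print(first_seq, last_seq)   # 3 8
```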


# all operations below are performed on a different CDB, in CDB$ROOT
# restore the required archivelogs
rman target /
run
{
set nocfau;
allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
set archivelog destination to '/mnt/yavin4/tmp/00000/logminer/arch/';
restore archivelog from sequence 3 until sequence 8;
}

# restore the dictionary archivelog
rman target /
run
{
set nocfau;
allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck';
set archivelog destination to '/mnt/yavin4/tmp/00000/logminer/arch/';
restore archivelog from sequence 12 until sequence 12;
}

# add the logs
execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_12_1172413318.arc', options => dbms_logmnr.new);
execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_3_1172413318.arc', options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_4_1172413318.arc', options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_5_1172413318.arc', options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_6_1172413318.arc', options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_7_1172413318.arc', options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_8_1172413318.arc', options => dbms_logmnr.addfile);

# to list the added logs
set lines 256
col FILENAME for a60
col INFO for a60
select FILENAME,INFO from V$LOGMNR_LOGS;

# start logminer
begin
DBMS_LOGMNR.START_LOGMNR (startTime=>timestamp'2024-06-23 15:00:00'
                         ,endTime=> timestamp'2024-06-23 16:00:00'
                         ,OPTIONS=>DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY
                         );
end;
/

# do the mining
alter session set NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss';

col username for a20
col sql_redo for a70
col table_name for a20
col timestamp for a25

select timestamp,username,table_name,sql_redo from v$logmnr_contents where seg_name='DEMO';
70
materialized_views/mw01.txt
Normal file
@@ -0,0 +1,70 @@
create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret;
alter pluggable database NIHILUS open;
alter pluggable database NIHILUS save state;

orapwd file=orapwSITHPRD password="ad420e57a205c9a7d80d!"

alias NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba'

alter session set container=NIHILUS;

create user DEMO identified by secret;
grant connect, resource to DEMO;
grant create materialized view to DEMO;
grant create view to DEMO;
grant unlimited tablespace to DEMO;

alias DEMO='rlwrap sqlplus DEMO/"secret"@bakura:1521/NIHILUS'

create table DEMO as
select 0 seq,current_timestamp now
from
xmltable('1 to 1000');


-- infinite_update.sql
whenever sqlerror exit failure
begin
loop
update demo set seq=seq+1,now=current_timestamp where rownum=1;
commit;
dbms_session.sleep(1);
end loop;
end;
/


select max(seq),max(now) from DEMO.DEMO;
create materialized view DEMOMV1 as select * from DEMO;
create materialized view DEMOMV2 as select * from DEMO;

create view V as
select 'DEMOMV1' source,seq,now from DEMOMV1
union all
select 'DEMOMV2' source,seq,now from DEMOMV2
union all
select 'DEMO' source,seq,now from DEMO;

set lines 256
col maxseq for 999999999
col maxnow for a50

select source,max(seq) maxseq,max(now) maxnow from V group by source;


exec dbms_refresh.make('DEMO.DEMORGROUP', list=>'DEMOMV1,DEMOMV2', next_date=>null, interval=>'null');

exec dbms_refresh.refresh('DEMO.DEMORGROUP');

-- we can index and gather stats on materialized views
create index IMV1 on DEMOMV1(seq);
create index IMV2 on DEMOMV2(now);

exec dbms_stats.gather_table_stats(user,'DEMOMV1', method_opt=>'for all columns size SKEWONLY');
exec dbms_stats.gather_table_stats(user,'DEMOMV2', method_opt=>'for all columns size AUTO');

alter table DEMO add constraint PK_DEMO primary key (NOW);

create materialized view log on DEMO.DEMO
including new values;
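With the primary key and materialized view log in place, the mviews can switch from complete to fast (incremental) refresh. A sketch, assuming the mview is recreated with the FAST refresh clause (the original DEMOMV1 above was created without one):

```sql
-- Recreate the mview so it refreshes from MLOG$_DEMO instead of a full rescan.
drop materialized view DEMOMV1;
create materialized view DEMOMV1 refresh fast on demand as select * from DEMO;

-- 'F' forces a fast refresh; 'C' would force a complete one.
exec dbms_mview.refresh('DEMOMV1', method=>'F');
```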
127
materialized_views/mw02.txt
Normal file
@@ -0,0 +1,127 @@
alias DEMO='rlwrap sqlplus DEMO/"secret"@bakura:1521/NIHILUS'

drop table T1 purge;
create table T1 (
id number generated always as identity,
n1 number(1),
c1 varchar2(10),
d1 DATE
);

alter table T1 add constraint T1_PK primary key (ID);


-- infinite_update2.sql
whenever sqlerror exit failure
declare
i NUMBER;
begin
i:=0;
loop
i:=i+1;
insert into T1(n1,c1,d1) values(mod(i,3),DBMS_RANDOM.string('a',10),sysdate);
commit;
dbms_session.sleep(1);
end loop;
end;
/


drop materialized view MW0;
drop materialized view MW1;
drop materialized view MW2;

create materialized view MW0 as select * from T1 where n1=0;
create materialized view MW1 as select * from T1 where n1=1;
create materialized view MW2 as select * from T1 where n1=2;


alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
select max(d1) from MW0;
select max(d1) from MW1;
select max(d1) from MW2;

create materialized view log on T1 with primary key including new values;

set lines 256

col log_table for a30
col log_trigger for a30
col primary_key for a3 head PK

select log_table,log_trigger,primary_key from dba_mview_logs where log_owner='DEMO' and MASTER='T1';


-- on the snapshot "server" (master) site

set lines 256

col owner for a15
col name for a25
col master_owner for a15
col master for a25
col master_link for a25
col refresh_method for a15
col type for a10
col status for a7
col snaptime for a20

select
snap.owner
,snap.name
,snap.snapid
,snap.status
,slog.snaptime
,snap.master_owner
,snap.master
,snap.refresh_method
,snap.type
,snap.master_link
from
sys.slog$ slog
join dba_snapshots snap on slog.snapid=snap.snapid
where slog.mowner='DEMO' and slog.master='T1';


col snapname for a30
col snapsite for a30
col snaptime for a30

select
r.name snapname, snapid, nvl(r.snapshot_site, 'not registered') snapsite, snaptime
from
sys.slog$ s, dba_registered_snapshots r
where
s.snapid=r.snapshot_id(+) and mowner='DEMO' and master='T1';


exec dbms_mview.refresh('MW0');
exec dbms_mview.refresh('MW1');
exec dbms_mview.refresh('MW2');

-- point of view of the snapshot "client" (replica)
select last_refresh_date,sysdate from dba_mviews where mview_name='MW0';

-- point of view of the snapshot "server" (master)
select sysdate,last_refresh from dba_snapshots where name='MW0';


select log_table from dba_mview_logs where master='T1';

select count(*) from DEMO.MLOG$_T1;


exec dbms_refresh.make('MWGROUP0', list=>'MW0,MW1', next_date=>null, interval=>'null', parallelism=>2);

exec dbms_refresh.refresh('MWGROUP0');


-- https://www.oracleplsqltr.com/2021/03/14/how-to-unregister-materialized-view-from-source-db/

exec dbms_mview.unregister_mview(mviewowner=>'DEMO',mviewname=>'MW2',mviewsite=>'NIHILUS');
exec dbms_mview.purge_mview_from_log(mview_id=>9);

select segment_name,bytes/1024 Kb from dba_segments where segment_name='MLOG$_T1';
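When a refresh behaves unexpectedly, Oracle can report exactly which refresh capabilities an mview supports and why. A sketch using the standard DBMS_MVIEW.EXPLAIN_MVIEW procedure (it writes into MV_CAPABILITIES_TABLE, which must first be created by the utlxmv.sql script shipped with the database):

```sql
-- Create MV_CAPABILITIES_TABLE in the current schema, then explain MW0.
@?/rdbms/admin/utlxmv.sql
exec dbms_mview.explain_mview('MW0');

col capability_name for a30
col msgtxt for a60
select capability_name, possible, msgtxt
from MV_CAPABILITIES_TABLE
where capability_name like 'REFRESH%';
```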
85
materialized_views/mw03.txt
Normal file
@@ -0,0 +1,85 @@
-- setup master PDB (NIHILUS)
------------------------------

orapwd file=orapwSITHPRD password="ad420e57a205c9a7d80d!"

create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret;
alter pluggable database NIHILUS open;
alter pluggable database NIHILUS save state;

alter session set container=NIHILUS;
create user adm identified by "secret";
grant sysdba to adm;

alias NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba'

create user MASTER identified by secret;
grant connect, resource to MASTER;
grant unlimited tablespace to MASTER;


alias MASTER='rlwrap sqlplus MASTER/"secret"@bakura:1521/NIHILUS'

-- setup replica PDB (RANDOR)
------------------------------

orapwd file=orapwANDOPRD password="oIp757a205c9?jj90yhgf"

create pluggable database RANDOR admin user RANDOR$OWNER identified by secret;
alter pluggable database RANDOR open;
alter pluggable database RANDOR save state;

alter session set container=RANDOR;
create user adm identified by "secret";
grant sysdba to adm;


alias RANDOR='rlwrap sqlplus adm/"secret"@togoria:1521/RANDOR as sysdba'

create user REPLICA identified by secret;
grant connect, resource to REPLICA;
grant create materialized view to REPLICA;
grant create view to REPLICA;
grant create database link to REPLICA;
grant unlimited tablespace to REPLICA;

alias REPLICA='rlwrap sqlplus REPLICA/"secret"@togoria:1521/RANDOR'


-- master site NIHILUS
drop table T1 purge;
create table T1 (
id number generated always as identity,
n1 number(1),
c1 varchar2(10),
d1 DATE
);

alter table T1 add constraint T1_PK primary key (ID);


-- replica site RANDOR
create database link RANDOR_TO_NIHILUS connect to MASTER identified by "secret" using 'bakura:1521/NIHILUS';
select * from DUAL@RANDOR_TO_NIHILUS;


drop materialized view MW0;
drop materialized view MW1;
drop materialized view MW2;

create materialized view MW0 as select * from T1@RANDOR_TO_NIHILUS where n1=0;
create materialized view MW1 as select * from T1@RANDOR_TO_NIHILUS where n1=1;
create materialized view MW2 as select * from T1@RANDOR_TO_NIHILUS where n1=2;


alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
select max(d1) from MW0;
select max(d1) from MW1;
select max(d1) from MW2;
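The cross-database mviews above, created without a refresh clause, can only do complete refreshes over the link. A sketch of converting one to scheduled fast refresh under two assumptions: a primary-key mview log exists on the master, and the 5-minute schedule is an illustrative example:

```sql
-- On the master site (NIHILUS): the log is required for fast refresh over the link.
create materialized view log on T1 with primary key including new values;

-- On the replica site (RANDOR): recreate with an automatic refresh schedule
-- (sysdate + 5/1440 = every 5 minutes; adjust to taste).
drop materialized view MW0;
create materialized view MW0
refresh fast on demand
start with sysdate next sysdate + 5/1440
as select * from T1@RANDOR_TO_NIHILUS where n1=0;
```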
2
partitioning/articles_01.txt
Normal file
@@ -0,0 +1,2 @@
Implementing dynamic partitions AND subpartitions
https://connor-mcdonald.com/2022/04/22/implementing-dynamic-partitions-and-subpartitions/
126
partitioning/range_to_interval_01.txt
Normal file
@@ -0,0 +1,126 @@
-- http://www.oraclefindings.com/2017/07/23/switching-range-interval-partitioning/

drop table DEMO purge;

create table DEMO(
id INTEGER generated always as identity
,day DATE not null
,code VARCHAR2(2) not null
,val NUMBER not null
,PRIMARY KEY(id)
)
partition by range(day)(
partition P_2024_01 values less than (date'2024-02-01')
,partition P_2024_02 values less than (date'2024-03-01')
,partition INFINITY values less than (MAXVALUE)
)
;

create index IDX_VAL on DEMO(val) local;


insert into DEMO (day,code,val) values (date'2024-01-09','UK',1005);
insert into DEMO (day,code,val) values (date'2024-01-10','IT',900);
insert into DEMO (day,code,val) values (date'2024-01-11','IT',400);
insert into DEMO (day,code,val) values (date'2024-01-11','FR',400);
insert into DEMO (day,code,val) values (date'2024-01-12','UK',400);
insert into DEMO (day,code,val) values (date'2024-01-12','IT',500);

insert into DEMO (day,code,val) values (date'2024-02-07','UK',765);
insert into DEMO (day,code,val) values (date'2024-02-09','IT',551);
insert into DEMO (day,code,val) values (date'2024-02-09','IT',90);
insert into DEMO (day,code,val) values (date'2024-02-09','FR',407);
insert into DEMO (day,code,val) values (date'2024-02-09','UK',101);
insert into DEMO (day,code,val) values (date'2024-02-10','IT',505);
insert into DEMO (day,code,val) values (date'2024-02-10','FR',2000);

commit;


exec dbms_stats.gather_table_stats(user,'DEMO');
exec dbms_stats.delete_table_stats(user,'DEMO');

-- IMPORTANT: the table should NOT have a MAXVALUE partition
-- ALTER TABLE ... SET INTERVAL fails with: ORA-14759: SET INTERVAL is not legal on this table. (Doc ID 2926948.1)

select count(*) from DEMO partition (INFINITY);
-- Drop the MAXVALUE partition.
alter table POC.DEMO drop partition INFINITY;

alter table DEMO set interval(NUMTOYMINTERVAL(1, 'MONTH'));

insert into DEMO (day,code,val) values (date'2024-04-01','IT',50);
insert into DEMO (day,code,val) values (date'2024-05-12','FR',60);
insert into DEMO (day,code,val) values (date'2024-05-14','UK',70);
commit;
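After the inserts into April and May, the database should have auto-created interval partitions with system-generated names. A quick verification sketch against the standard USER_TAB_PARTITIONS view (the INTERVAL column distinguishes auto-created partitions from the original range ones):

```sql
-- List partitions; auto-created interval partitions show INTERVAL='YES'
-- and system-generated SYS_P... names.
col partition_name for a20
col high_value for a60
select partition_name, high_value, interval
from user_tab_partitions
where table_name='DEMO'
order by partition_position;
```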
-------------------------------------------------------------

drop table DEMO purge;

create table DEMO(
id INTEGER generated always as identity
,day DATE not null
,code VARCHAR2(2) not null
,val NUMBER not null
,PRIMARY KEY(id)
)
partition by range(day) subpartition by list (code)(
partition P_2024_01 values less than (date'2024-02-01')
(
subpartition P_2024_01_UK values ('UK')
,subpartition P_2024_01_IT values ('IT')
,subpartition P_2024_01_FR values ('FR')
)
,partition P_2024_02 values less than (date'2024-03-01')
(
subpartition P_2024_02_UK values ('UK')
,subpartition P_2024_02_IT values ('IT')
,subpartition P_2024_02_FR values ('FR')
)
,partition INFINITY values less than (MAXVALUE)
(
subpartition INFINITY_UK values ('UK')
,subpartition INFINITY_IT values ('IT')
,subpartition INFINITY_FR values ('FR')
)
)
;

create index IDX_VAL on DEMO(val) local;

alter table POC.DEMO drop partition INFINITY;
alter table DEMO set interval(NUMTOYMINTERVAL(1, 'MONTH'));

alter index POC.SYS_C007367 rebuild;


ALTER TABLE DEMO SPLIT SUBPARTITION SYS_SUBP3241
VALUES ('UK') INTO (
SUBPARTITION SYS_SUBP3241_UK,
SUBPARTITION SYS_SUBP3241_DIFF
)
ONLINE;

ALTER TABLE DEMO SPLIT SUBPARTITION SYS_SUBP3241_DIFF
VALUES ('IT') INTO (
SUBPARTITION SYS_SUBP3241_IT,
SUBPARTITION SYS_SUBP3241_FR
)
ONLINE;

-- rename needed because the previous split used the wrong name for the remainder subpartition
alter table POC.DEMO rename subpartition SYS_SUBP3241_FR to SYS_SUBP3241_DIFF;

ALTER TABLE DEMO SPLIT SUBPARTITION SYS_SUBP3241_DIFF
VALUES ('FR') INTO (
SUBPARTITION SYS_SUBP3241_FR,
SUBPARTITION SYS_SUBP3241_OTHER
)
ONLINE;

select count(*) from DEMO subpartition(SYS_SUBP3241_OTHER);
alter table DEMO drop subpartition SYS_SUBP3241_OTHER;