diff --git a/AI_generated/PostgreSQL_TLS_01.md b/AI_generated/PostgreSQL_TLS_01.md
new file mode 100644
index 0000000..c38f052
--- /dev/null
+++ b/AI_generated/PostgreSQL_TLS_01.md
@@ -0,0 +1,208 @@
+# 📄 Technical Guide: Setting Up TLS for PostgreSQL
+
+This document consolidates the group’s discussion into a practical, production‑ready reference for configuring and using TLS with PostgreSQL, including server setup, client configuration, password management, and example application code.
+
+---
+
+## 1. Overview
+
+PostgreSQL supports encrypted connections using TLS (referred to as SSL in its configuration). Enabling TLS secures client–server communication and can optionally enforce client certificate authentication. This guide provides step‑by‑step instructions for server and client configuration, common pitfalls, and usage examples.
+
+---
+
+## 2. Server-Side Configuration
+
+### Certificates
+
+* Required files:
+  * `server.key` → private key
+  * `server.crt` → server certificate
+  * `root.crt` → CA certificate (recommended)
+* Sources: internal PKI, Let’s Encrypt, or self‑signed CA.
+* Permissions: `server.key` must be `0600`, or root‑owned with restricted group access.
+
+### Placement
+
+* Default paths:
+  * `$PGDATA/server.key`
+  * `$PGDATA/server.crt`
+* Override with `ssl_cert_file`, `ssl_key_file`, `ssl_ca_file` in `postgresql.conf`.
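For a quick test environment, the server certificate and key can be generated with `openssl`. This is a minimal sketch; the hostname and validity period are illustrative, and production systems should use certificates issued by a real CA:

```bash
# Self-signed certificate for testing only; the CN must match the hostname
# that clients will later verify with sslmode=verify-full.
openssl req -new -x509 -days 365 -nodes -text \
  -out server.crt -keyout server.key \
  -subj "/CN=db.example.com"

# PostgreSQL refuses to start if the private key is group- or world-readable.
chmod 600 server.key
```

Copy both files into `$PGDATA/` (or point `ssl_cert_file`/`ssl_key_file` at them) and make sure they are owned by the OS user that runs the server.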
+
+### Configuration
+
+Enable TLS in `postgresql.conf`:
+
+```conf
+ssl = on
+ssl_ciphers = 'HIGH:!aNULL:!MD5'
+ssl_prefer_server_ciphers = on
+ssl_min_protocol_version = 'TLSv1.2'
+ssl_max_protocol_version = 'TLSv1.3'
+```
+
+### Access Control
+
+Configure `pg_hba.conf` (examples use `md5`; prefer `scram-sha-256` on PostgreSQL 10 and later):
+
+* Allow TLS but do not require it:
+
+  ```
+  host all all 0.0.0.0/0 md5
+  ```
+* Require TLS:
+
+  ```
+  hostssl all all 0.0.0.0/0 md5
+  ```
+* Require TLS + client certificate:
+
+  ```
+  hostssl all all 0.0.0.0/0 cert
+  ```
+
+---
+
+## 3. Client-Side Configuration
+
+### Basic TLS
+
+```bash
+psql "host=db.example.com sslmode=require"
+```
+
+### Verify server certificate
+
+```bash
+psql "host=db.example.com sslmode=verify-full sslrootcert=/etc/ssl/myca/root.crt"
+```
+
+* `sslrootcert` is a **client-side path** to the CA certificate.
+
+### Mutual TLS
+
+```bash
+psql "host=db.example.com sslmode=verify-full \
+    sslrootcert=/etc/ssl/myca/root.crt \
+    sslcert=/etc/ssl/myca/client.crt \
+    sslkey=/etc/ssl/myca/client.key"
+```
+
+### Modes comparison
+
+| Mode | Encrypts | Validates CA | Validates Hostname | Typical Use |
+| --- | --- | --- | --- | --- |
+| `require` | Yes | No | No | Basic encryption |
+| `verify-ca` | Yes | Yes | No | Internal/IP-based |
+| `verify-full` | Yes | Yes | Yes | Production |
+
+---
+
+## 4. Password Management
+
+### `.pgpass` file (recommended)
+
+Format:
+
+```
+hostname:port:database:username:password
+```
+
+Example:
+
+```
+db.example.com:5432:mydb:myuser:SuperSecretPassword123
+localhost:5432:*:postgres:localdevpass
+*:5432:*:replicator:replicaPassword
+```
+
+* Location: `~/.pgpass`
+* Permissions: `chmod 600 ~/.pgpass`
+* Supports wildcards (`*`).
+
+### Environment variable
+
+```bash
+PGPASSWORD='secret123' psql -U myuser -h localhost -d mydb
+```
+
+Less secure; use only for quick one-off commands.
+
+---
+
+## 5. 
Testing & Verification
+
+* Check server TLS status:
+
+  ```sql
+  SHOW ssl;
+  ```
+* Inspect negotiated protocol & cipher:
+
+  ```sql
+  SELECT * FROM pg_stat_ssl;
+  ```
+* External test:
+
+  ```bash
+  openssl s_client -connect db.example.com:5432 -starttls postgres
+  ```
+
+---
+
+## 6. Common Pitfalls
+
+| Issue | Cause | Fix |
+| --- | --- | --- |
+| `FATAL: private key file has group or world access` | Wrong permissions | `chmod 600 server.key` |
+| Client rejects certificate | CN/SAN mismatch | Ensure the certificate carries the proper DNS SANs |
+| TLS not enforced | Used `host` instead of `hostssl` | Update `pg_hba.conf` |
+| Backup tools fail | Key readable only by `postgres` | Store certs outside `$PGDATA` if group read access is needed |
+
+---
+
+## 7. Application Example (Python)
+
+Minimal psycopg2 script with TLS `verify-full` and bind variables:
+
+```python
+import psycopg2
+import psycopg2.extras
+
+def main():
+    conn = psycopg2.connect(
+        host="db.example.com",
+        port=5432,
+        dbname="mydb",
+        user="myuser",
+        password="SuperSecretPassword123",
+        sslmode="verify-full",
+        sslrootcert="/etc/ssl/myca/root.crt",
+        sslcert="/etc/ssl/myca/client.crt",  # optional: mutual TLS only
+        sslkey="/etc/ssl/myca/client.key"    # optional: mutual TLS only
+    )
+
+    with conn:  # commits on success, rolls back on exception
+        with conn.cursor(cursor_factory=psycopg2.extras.DictCursor) as cur:
+            cur.execute("SELECT id, name FROM demo WHERE id = %s", (1,))
+            print("SELECT:", cur.fetchone())
+
+            cur.execute("INSERT INTO demo (id, name) VALUES (%s, %s)", (2, "Inserted Name"))
+            cur.execute("UPDATE demo SET name = %s WHERE id = %s", ("Updated Name", 2))
+            cur.execute("DELETE FROM demo WHERE id = %s", (2,))
+
+    conn.close()
+
+if __name__ == "__main__":
+    main()
+```
+
+---
+
+## Appendix / Future Considerations
+
+* Hardened production templates (PKI layout, Ansible roles, CI/CD verification checklist).
+* Alternative drivers: psycopg3, SQLAlchemy, async examples.
+* Integration with secret management (Kubernetes secrets, systemd, Ansible vault). 
+* Directory layout best practices for server vs client PKI. + +--- + +✅ This document now serves as a consolidated technical guide for setting up and using TLS with PostgreSQL, including secure password handling and client application integration. diff --git a/ASPM/aspm_01.md b/ASPM/aspm_01.md new file mode 100644 index 0000000..235c4d8 --- /dev/null +++ b/ASPM/aspm_01.md @@ -0,0 +1,152 @@ +set lines 256 + +column client_name format a35 +column task_name format a30 + +column last_try_date format a20 +column last_good_date format a20 +column next_try_date format a20 + +alter session set nls_timestamp_format = 'yyyy-mm-dd hh24:mi:ss'; + +select + client_name, task_name, status, + to_char(last_try_date,'yyyy-mm-dd hh24:mi:ss') as last_try_date, + to_char(last_good_date,'yyyy-mm-dd hh24:mi:ss') as last_good_date, + to_char(next_try_date,'yyyy-mm-dd hh24:mi:ss') as next_try_date +from dba_autotask_task; + + + +SQL> show parameter optimizer%baselines + +NAME TYPE VALUE +------------------------------------ ----------- ------------------------------ +optimizer_capture_sql_plan_baselines boolean FALSE +optimizer_use_sql_plan_baselines boolean TRUE + + + + +set lines 200 +set pages 1000 +col parameter_name for a35 +col parameter_value for a30 +col last_modified for a30 +col modified_by for a30 + +select * from dba_sql_management_config where parameter_name like 'AUTO_SPM_EVOLVE_TASK%'; + + + +exec dbms_spm.configure('AUTO_SPM_EVOLVE_TASK','ON'); +exec dbms_spm.configure('AUTO_SPM_EVOLVE_TASK','OFF'); + +The list of tunable parameters with DBMS_SPM.CONFIGURE: + +col description FOR a40 word_wrapped +SET pages 1000 + +select parameter_name, parameter_value, description + from dba_advisor_parameters + where task_name = 'SYS_AUTO_SPM_EVOLVE_TASK' + and parameter_value != 'UNUSED'; + + + +set lines 256 + +col DBID noprint +col TASK_ID noprint +col TASK_NAME noprint + +select * + from dba_autotask_schedule_control + where dbid = sys_context('userenv','con_dbid') + and task_name 
= 'Auto SPM Task';
+
+-- last task details
+SET LONG 1000000 PAGESIZE 1000 LONGCHUNKSIZE 256 LINESIZE 256
+
+SELECT DBMS_SPM.report_auto_evolve_task
+FROM dual;
+
+
+CREATE TABLE test1(id NUMBER, descr VARCHAR(50)) TABLESPACE users;
+
+DECLARE
+i NUMBER;
+nbrows NUMBER;
+BEGIN
+  i:=1;
+  nbrows:=50000;
+  LOOP
+    EXIT WHEN i>nbrows;
+    IF (i=1) THEN
+      INSERT INTO test1 VALUES(1,RPAD('A',49,'A'));
+    ELSE
+      INSERT INTO test1 VALUES(nbrows,RPAD('A',49,'A'));
+    END IF;
+    i:=i+1;
+  END LOOP;
+  COMMIT;
+END;
+/
+
+CREATE INDEX test1_idx_id ON test1(id) TABLESPACE users;
+
+
+EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname=>user, tabname=>'test1', estimate_percent=>NULL, method_opt=>'FOR ALL INDEXED COLUMNS SIZE 2');
+
+
+ALTER SYSTEM flush shared_pool;
+
+SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM test1 WHERE id=1;
+
+SELECT sql_id,child_number,plan_hash_value,is_bind_sensitive,is_bind_aware,is_shareable,is_obsolete,sql_plan_baseline
+  FROM v$sql
+  WHERE sql_id='4q7zcj8kp9q2r';
+
+
+EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname=>user, tabname=>'test1', estimate_percent=>NULL, method_opt=>'FOR ALL INDEXED COLUMNS SIZE 1');
+
+
+SELECT
+    plan_hash_value,
+    cpu_time,
+    buffer_gets,
+    disk_reads,
+    direct_writes,
+    rows_processed,
+    fetches,
+    executions,
+    optimizer_cost,
+    TO_CHAR(plan_timestamp,'dd-mon-yyyy hh24:mi:ss') AS plan_timestamp
+  FROM dba_sqlset_statements
+  WHERE sqlset_name='SYS_AUTO_STS'
+    AND sql_id='4q7zcj8kp9q2r'
+  ORDER BY plan_timestamp DESC;
+
+
+select * from SYS.WRI$_ADV_EXECUTIONS where exec_type='SPM EVOLVE' order by exec_start desc;
+
+select * from SYS.WRI$_ADV_EXECUTIONS
+where exec_type='SPM EVOLVE'
+  and exec_start between timestamp'2025-05-25 09:00:00' and timestamp'2025-05-25 19:00:00'
+order by exec_start desc;
+
+
+select
+  sql_id
+ ,plan_hash_value
+ ,LAST_MODIFIED
+from(
+select
+  dbms_sql_translator.sql_id(sql_text) sql_id,
+  (select to_number(regexp_replace(plan_table_output,'^[^0-9]*'))
+   from
table(dbms_xplan.display_sql_plan_baseline(sql_handle,plan_name))
+   where plan_table_output like 'Plan hash value: %') plan_hash_value,
+  bl.*
+from dba_sql_plan_baselines bl
+)
+;
diff --git a/ASPM/asts_01.md b/ASPM/asts_01.md
new file mode 100644
index 0000000..3a16199
--- /dev/null
+++ b/ASPM/asts_01.md
@@ -0,0 +1,217 @@
+## Setup
+
+Check whether Automatic SQL Tuning Sets (ASTS) capture is enabled and get the last execution time of the automatic schedule:
+
+    set lines 200
+    col task_name for a22
+
+    select * from dba_autotask_schedule_control where task_name = 'Auto STS Capture Task';
+
+To enable:
+
+    exec dbms_auto_task_admin.enable(client_name => 'Auto STS Capture Task', operation => NULL, window_name => NULL);
+
+> There is no way to change the interval or the maximum run time.
+
+To disable:
+
+    exec dbms_auto_task_admin.disable(client_name => 'Auto STS Capture Task', operation => NULL, window_name => NULL);
+
+To manually run the job:
+
+    exec dbms_scheduler.run_job('ORA$_ATSK_AUTOSTS');
+
+List the last job executions:
+
+    col ACTUAL_START_DATE for a45
+
+    select ACTUAL_START_DATE,STATUS from dba_scheduler_job_run_details where JOB_NAME='ORA$_ATSK_AUTOSTS'
+    order by ACTUAL_START_DATE desc fetch first 10 rows only;
+
+More statistics on the task job:
+
+    WITH dsjrd AS
+    (
+     SELECT (TO_DATE('1','j')+run_duration-TO_DATE('1','j'))* 86400 duration_sec,
+            (TO_DATE('1','j')+cpu_used-TO_DATE('1','j'))* 86400 cpu_used_sec
+     FROM dba_scheduler_job_run_details
+     WHERE job_name = 'ORA$_ATSK_AUTOSTS'
+    )
+    SELECT MIN(duration_sec) ASTS_Min_Time_Sec,
+           MAX(duration_sec) ASTS_Max_Time_Sec,
+           AVG(duration_sec) ASTS_Average_Time_Sec,
+           AVG(cpu_used_sec) ASTS_Average_CPU_Sec
+    FROM dsjrd;
+
+How many SQL statements are currently in the SYS_AUTO_STS SQL Tuning Set (STS):
+
+    set lines 200
+    col name for a15
+    col description for a30
+    col owner for a10
+
+    select name, owner, description, created, last_modified, statement_count from dba_sqlset where name='SYS_AUTO_STS';
+
+To purge
all statements:
+
+    exec dbms_sqlset.drop_sqlset(sqlset_name => 'SYS_AUTO_STS', sqlset_owner => 'SYS');
+
+How much space it takes in your SYSAUX tablespace:
+
+    col table_name for a30
+    col table_size_mb for 999999.99
+    col total_size_mb for 999999.99
+
+    select
+        table_name,
+        round(sum(size_b) / 1024 / 1024, 3) as table_size_mb,
+        round(max(total_size_b) / 1024 / 1024, 3) as total_size_mb
+    from
+    (
+        select
+            table_name,
+            size_b,
+            sum(size_b) over() as total_size_b
+        from
+        (
+            select
+                segment_name as table_name,
+                bytes as size_b
+            from dba_segments
+            where
+                segment_name not like '%WORKSPA%'
+                and owner = 'SYS'
+                and (segment_name like 'WRI%SQLSET%' or segment_name like 'WRH$_SQLTEXT')
+            union all
+            select
+                t.table_name,
+                bytes as size_b
+            from dba_segments s,
+                (select
+                    table_name,
+                    segment_name
+                 from dba_lobs
+                 where table_name in ('WRI$_SQLSET_PLAN_LINES', 'WRH$_SQLTEXT')
+                   and owner = 'SYS'
+                ) t
+            where s.segment_name = t.segment_name
+        )
+    )
+    group by table_name
+    order by table_size_mb desc;
+
+## Test case
+
+    DROP TABLE test01 purge;
+    CREATE TABLE test01(id NUMBER, descr VARCHAR(50)) TABLESPACE users;
+
+    DECLARE
+    i NUMBER;
+    nbrows NUMBER;
+    BEGIN
+      i:=1;
+      nbrows:=50000;
+      LOOP
+        EXIT WHEN i>nbrows;
+        IF (i=1) THEN
+          INSERT INTO test01 VALUES(1,RPAD('A',49,'A'));
+        ELSE
+          INSERT INTO test01 VALUES(nbrows,RPAD('A',49,'A'));
+        END IF;
+        i:=i+1;
+      END LOOP;
+      COMMIT;
+    END;
+    /
+
+    CREATE INDEX test01_idx_id ON test01(id);
+
+    exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'test01', method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');
+
+No histogram will be calculated:
+
+    col column_name for a20
+
+    select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
+    from user_tab_col_statistics
+    where table_name='TEST01';
+
+    select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1;
+
+            ID DESCR
+    ---------- --------------------------------------------------
+             1
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
+
+The optimizer will choose a full scan:
+
+    SQL_ID  28stunrv2985c, child number 0
+    -------------------------------------
+    select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1
+
+    Plan hash value: 262542483
+
+    -----------------------------------------------------------------------------------------------------------
+    | Id  | Operation         | Name   | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
+    -----------------------------------------------------------------------------------------------------------
+    |   0 | SELECT STATEMENT  |        |      1 |        |       |   136 (100)|      1 |00:00:00.01 |     443 |
+    |*  1 |  TABLE ACCESS FULL| TEST01 |      1 |  25000 |   732K|   136   (0)|      1 |00:00:00.01 |     443 |
+    -----------------------------------------------------------------------------------------------------------
+
+Wait for the next Auto STS Capture Task schedule or run the job manually.
+The SQL_ID will be captured by ASTS:
+
+    col SQLSET_NAME for a30
+    col PARSING_SCHEMA_NAME for a30
+
+    select SQLSET_NAME,PLAN_HASH_VALUE,PARSING_SCHEMA_NAME,BUFFER_GETS from DBA_SQLSET_STATEMENTS where SQL_ID='28stunrv2985c';
+
+    SQLSET_NAME                    PLAN_HASH_VALUE PARSING_SCHEMA_NAME            BUFFER_GETS
+    ------------------------------ --------------- ------------------------------ -----------
+    SYS_AUTO_STS                         262542483 RED                                    453
+
+Gather the stats again:
+
+    exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'test01', method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');
+
+Oracle learned from its mistake and will calculate histograms:
+
+    COLUMN_NAME          NUM_DISTINCT    DENSITY  NUM_NULLS NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
+    -------------------- ------------ ---------- ---------- ----------- ----------- ---------------
+    ID                              2     .00001          0           2       50000 FREQUENCY
+
+Flush the shared pool and re-execute the query:
+
+    alter system flush shared_pool;
+
+    select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1;
+
+As expected, the index has been used:
+
+    SQL_ID  28stunrv2985c,
child number 0
+    -------------------------------------
+    select /*+ GATHER_PLAN_STATISTICS */ * FROM test01 WHERE id=1
+
+    Plan hash value: 4138272685
+
+    ------------------------------------------------------------------------------------------------------------------------------------
+    | Id  | Operation                           | Name          | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |
+    ------------------------------------------------------------------------------------------------------------------------------------
+    |   0 | SELECT STATEMENT                    |               |      1 |        |       |     2 (100)|      1 |00:00:00.01 |       4 |
+    |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST01        |      1 |      1 |     30 |     2   (0)|      1 |00:00:00.01 |       4 |
+    |*  2 |   INDEX RANGE SCAN                  | TEST01_IDX_ID |      1 |      1 |        |     1   (0)|      1 |00:00:00.01 |       3 |
+    ------------------------------------------------------------------------------------------------------------------------------------
+
+Wait for the next Auto STS Capture Task schedule and check whether the SQL_ID now appears with both execution plans.
+> In my tests, running the job manually did not add the second plan to ASTS. 
+ + select SQLSET_NAME,PLAN_HASH_VALUE,PARSING_SCHEMA_NAME,BUFFER_GETS from DBA_SQLSET_STATEMENTS where SQL_ID='28stunrv2985c'; + + SQLSET_NAME PLAN_HASH_VALUE PARSING_SCHEMA_NAME BUFFER_GETS + ------------------------------ --------------- ------------------------------ ----------- + SYS_AUTO_STS 262542483 RED 453 + SYS_AUTO_STS 4138272685 RED 203 + + diff --git a/FDA/ORA-55622.txt b/FDA/ORA-55622.txt new file mode 100644 index 0000000..2c5774c --- /dev/null +++ b/FDA/ORA-55622.txt @@ -0,0 +1,26 @@ +SQL> show user +USER is "USR" + + +SQL> delete from SYS_FBA_DDL_COLMAP_26338; +delete from SYS_FBA_DDL_COLMAP_26338 + * +ERROR at line 1: +ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on +table "USR"."SYS_FBA_DDL_COLMAP_26338" + + +SQL> delete from SYS_FBA_HIST_26338; +delete from SYS_FBA_HIST_26338 + * +ERROR at line 1: +ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on +table "USR"."SYS_FBA_HIST_26338" + + +ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on table “string”.”string” + +Reason for the Error: + An attempt was made to write to or alter or create unique index on a Flashback Archive internal table. +Solution + No action required. Only Oracle is allowed to perform such operations on Flashback Archive internal tables. 
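
The supported way to remove this history is through the flashback archive itself, not DML on its internal tables. A sketch, assuming the ARCHIVE_7_DAY archive and the TAB2 base table from the companion fda scripts:

    -- purge all history older than a given point
    alter flashback archive ARCHIVE_7_DAY purge before timestamp (systimestamp - interval '1' day);

    -- or detach the base table, which drops its archived history
    alter table TAB2 no flashback archive;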
diff --git a/FDA/fda_01.txt b/FDA/fda_01.txt new file mode 100644 index 0000000..6943bba --- /dev/null +++ b/FDA/fda_01.txt @@ -0,0 +1,317 @@ +alias adm_NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba' + + +create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret; +alter pluggable database NIHILUS open; +alter pluggable database NIHILUS save state; + + +alter session set container=NIHILUS; + +create bigfile tablespace LIVE_TS datafile size 32M autoextend on next 32M; +create bigfile tablespace ARCHIVE_TS datafile size 32M autoextend on next 32M; + +create user adm identified by "secret"; +grant sysdba to adm; + + +create user usr identified by "secret"; +grant CONNECT,RESOURCE to usr; +grant alter session to usr; + +alter user usr default tablespace LIVE_TS; + +alter user usr quota unlimited on LIVE_TS; +alter user usr quota unlimited on ARCHIVE_TS; + +alias adm_NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba' +alias usr_NIHILUS='rlwrap sqlplus usr/"secret"@bakura:1521/NIHILUS' + + +create flashback archive default ARCHIVE_7_DAY + tablespace ARCHIVE_TS + quota 1G + retention 7 DAY; + +grant flashback archive on ARCHIVE_7_DAY to usr; +grant flashback archive administer to usr; +grant execute on dbms_flashback_archive to usr; + + +------------------------------------------------------------------------------ +SET LINESIZE 150 + +COLUMN owner_name FORMAT A20 +COLUMN flashback_archive_name FORMAT A22 +COLUMN create_time FORMAT A20 +COLUMN last_purge_time FORMAT A20 + +SELECT owner_name, + flashback_archive_name, + flashback_archive#, + retention_in_days, + TO_CHAR(create_time, 'YYYY-MM-DD HH24:MI:SS') AS create_time, + TO_CHAR(last_purge_time, 'YYYY-MM-DD HH24:MI:SS') AS last_purge_time, + status +FROM dba_flashback_archive +ORDER BY owner_name, flashback_archive_name; +------------------------------------------------------------------------------ + + 
+------------------------------------------------------------------------------ +SET LINESIZE 150 + +COLUMN flashback_archive_name FORMAT A22 +COLUMN tablespace_name FORMAT A20 +COLUMN quota_in_mb FORMAT A11 + +SELECT flashback_archive_name, + flashback_archive#, + tablespace_name, + quota_in_mb +FROM dba_flashback_archive_ts +ORDER BY flashback_archive_name; +------------------------------------------------------------------------------ + + +------------------------------------------------------------------------------ +SET LINESIZE 150 + +COLUMN owner_name FORMAT A20 +COLUMN table_name FORMAT A20 +COLUMN flashback_archive_name FORMAT A22 +COLUMN archive_table_name FORMAT A20 + +SELECT owner_name, + table_name, + flashback_archive_name, + archive_table_name, + status +FROM dba_flashback_archive_tables +ORDER BY owner_name, table_name; +------------------------------------------------------------------------------ + + +-- Example 1 +------------- + +create table TAB1 ( + ID number, + DESCRIPTION varchar2(50), + constraint TAB_1_PK primary key (id) +); + +alter table TAB1 flashback archive ARCHIVE_7_DAY; + +insert into TAB1 values (1, 'one'); +commit; + +update TAB1 set description = 'two' where id = 1; +commit; + +update TAB1 set description = 'three' where id = 1; +commit; + + +------------------------------------------------------------------------------ +SET LINESIZE 200 + +COLUMN versions_startscn FORMAT 99999999999999999 +COLUMN versions_starttime FORMAT A32 +COLUMN versions_endscn FORMAT 99999999999999999 +COLUMN versions_endtime FORMAT A32 +COLUMN versions_xid FORMAT A16 +COLUMN versions_operation FORMAT A1 +COLUMN description FORMAT A11 + +SELECT versions_startscn, + versions_starttime, + versions_endscn, + versions_endtime, + versions_xid, + versions_operation, + description +FROM tab1 + VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP-(1/24) AND SYSTIMESTAMP +WHERE id = 1 +ORDER BY versions_startscn; 
+------------------------------------------------------------------------------
+
+
+-- reuse TAB1 for a new scenario: detach and drop the previous one first
+alter table TAB1 no flashback archive;
+drop table TAB1 purge;
+
+create table TAB1 (d date);
+alter table TAB1 flashback archive ARCHIVE_7_DAY;
+
+insert into TAB1 values (sysdate);
+commit;
+
+-- infinite_update1.sql
+begin
+  loop
+    update TAB1 set d=sysdate;
+    commit;
+    dbms_session.sleep(1);
+  end loop;
+end;
+/
+
+
+alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
+
+SET LINESIZE 200
+
+COLUMN versions_startscn FORMAT 99999999999999999
+COLUMN versions_starttime FORMAT A32
+COLUMN versions_endscn FORMAT 99999999999999999
+COLUMN versions_endtime FORMAT A32
+COLUMN versions_xid FORMAT A16
+COLUMN versions_operation FORMAT A1
+COLUMN description FORMAT A25
+
+SELECT
+    versions_startscn,
+    versions_starttime,
+    versions_endscn,
+    versions_endtime,
+    versions_xid,
+    versions_operation,
+    d
+FROM
+    TAB1
+VERSIONS BETWEEN TIMESTAMP TIMESTAMP'2023-06-17 17:20:10' and TIMESTAMP'2023-06-17 17:20:40'
+ORDER BY versions_startscn;
+
+
+SELECT * from TAB1
+AS OF TIMESTAMP TIMESTAMP'2023-06-17 17:05:10';
+
+
+SELECT * from TAB1
+AS OF TIMESTAMP TIMESTAMP'2023-06-17 17:30:49';
+
+
+
+EXEC DBMS_SYSTEM.set_ev(si=>163, se=>24797, ev=>10046, le=>8, nm=>'');
+
+
+-- Example 2
+-------------
+
+alter table TAB2 no flashback archive;
+drop table TAB2 purge;
+
+
+create table TAB2 (
+  n1 number,
+  c1 varchar2(10),
+  d1 DATE
+);
+
+alter table TAB2 flashback archive ARCHIVE_7_DAY;
+
+insert into TAB2 values(1,'One',TIMESTAMP'2023-01-01 00:00:00');
+commit;
+insert into TAB2 values(2,'Two',TIMESTAMP'2023-01-01 00:00:00');
+commit;
+insert into TAB2 values(3,'Three',TIMESTAMP'2023-01-01 00:00:00');
+commit;
+
+
+
+
+
+
+
+
+alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
+
+SET LINESIZE 200
+COLUMN versions_startscn FORMAT 99999999999999999
+COLUMN versions_starttime FORMAT A32
+COLUMN versions_endscn FORMAT 99999999999999999
+COLUMN versions_endtime FORMAT A32
+COLUMN versions_xid FORMAT A16
+COLUMN versions_operation FORMAT A1
+COLUMN
description FORMAT A25 + +SELECT + versions_startscn, + versions_starttime, + versions_endscn, + versions_endtime, + versions_xid, + versions_operation, + T.* +FROM + TAB2 VERSIONS BETWEEN TIMESTAMP (systimestamp-3/24) and systimestamp T +where + N1=1 +ORDER BY versions_startscn; + +update TAB2 set d1=TIMESTAMP'2023-12-31 23:59:59' where n1=1; +commit; + + +select * from TAB2 as of timestamp TIMESTAMP'2023-06-18 08:47:20' where N1=1; + +select * from TAB2 as of timestamp systimestamp where N1=1; +select * from TAB2 as of scn 4335762 where N1=1; +select * from TAB2 as of scn 4335824 where N1=1; + +-> +alter table TAB2 add C2 varchar2(3); + +update TAB2 set C2='abc' where n1=1; +update TAB2 set C2='***' where n1=1; +commit; +update TAB2 set C2='def' where n1=1; +commit; + + +alter table TAB2 drop column C2; +alter table TAB2 rename column C1 to C3; + +update TAB2 set d1=systimestamp where n1=1; +commit; + +update TAB2 set d1=TIMESTAMP'1973-10-05 10:00:00',C3='birthday' where n1=1; +commit; + +update TAB2 set d1=systimestamp,C3='right now' where n1=1; +commit; + + +4336404 18-JUN-23 03.14.59 +select * from TAB2 as of timestamp TIMESTAMP'2023-06-18 03:15:00' where N1=1; + +select * from TAB2 as of scn 4336403 where N1=1; +select * from TAB2 as of scn 4336404 where N1=1; +select * from TAB2 as of scn 4337054 where N1=1; + +select * from TAB2 as of scn 4282896 where N1=1; +select * from TAB2 as of scn 4283027 where N1=1; + + + + + + + + +-- cleanup +alter table TAB2 no flashback archive; +drop table TAB2 purge; + +alter table TAB1 no flashback archive; +drop table TAB1 purge; + +drop user USR cascade; + +drop flashback archive ARCHIVE_7_DAY; + +drop tablespace LIVE_TS including contents and datafiles; +drop tablespace ARCHIVE_TS including contents and datafiles; + + +-- cleanup +alter pluggable database NIHILUS close instances=ALL; +drop pluggable database NIHILUS including datafiles; diff --git a/FDA/fda_02.txt b/FDA/fda_02.txt new file mode 100644 index 
0000000..1d8875c
--- /dev/null
+++ b/FDA/fda_02.txt
@@ -0,0 +1,123 @@
+create bigfile tablespace ARCHIVE_TS datafile size 32M autoextend on next 32M;
+
+create flashback archive default ARCHIVE_7_DAY
+  tablespace ARCHIVE_TS
+  quota 1G
+  retention 7 DAY;
+
+
+create table TAB2 (
+  n1 number,
+  c1 varchar2(10),
+  d1 DATE
+);
+
+alter table TAB2 flashback archive ARCHIVE_7_DAY;
+
+
+insert into TAB2 values(1,'One',TIMESTAMP'2023-01-01 00:00:00');
+commit;
+insert into TAB2 values(2,'Two',TIMESTAMP'2023-01-01 00:00:00');
+commit;
+insert into TAB2 values(3,'Three',TIMESTAMP'2023-01-01 00:00:00');
+commit;
+
+alter table TAB2 add C2 varchar2(3);
+
+update TAB2 set C2='abc' where n1=1;
+update TAB2 set C2='***' where n1=1;
+commit;
+update TAB2 set C2='def' where n1=1;
+commit;
+
+
+alter table TAB2 drop column C2;
+alter table TAB2 rename column C1 to C3;
+
+update TAB2 set d1=systimestamp where n1=1;
+commit;
+
+update TAB2 set d1=TIMESTAMP'1973-10-05 10:00:00',C3='birthday' where n1=1;
+commit;
+
+update TAB2 set d1=systimestamp,C3='right now' where n1=1;
+commit;
+
+
+
+
+
+Query: select * from TAB2 as of timestamp systimestamp-1/24+21/24/60 where N1=1;
+
+
+
+
+
+SQL> @desc SYS_FBA_DDL_COLMAP_26338
+       Name                            Null?    Type
+       ------------------------------- -------- ----------------------------
+    1  STARTSCN                                 NUMBER
+    2  ENDSCN                                   NUMBER
+    3  XID                                      RAW(8)
+    4  OPERATION                                VARCHAR2(1)
+    5  COLUMN_NAME                              VARCHAR2(255)
+    6  TYPE                                     VARCHAR2(255)
+    7  HISTORICAL_COLUMN_NAME                   VARCHAR2(255)
+
+SQL> @desc SYS_FBA_HIST_26338
+       Name                            Null?
Type + ------------------------------- -------- ---------------------------- + 1 RID VARCHAR2(4000) + 2 STARTSCN NUMBER + 3 ENDSCN NUMBER + 4 XID RAW(8) + 5 OPERATION VARCHAR2(1) + 6 N1 NUMBER + 7 C3 VARCHAR2(10) + 8 D1 DATE + 9 D_4335990_C2 VARCHAR2(3) + + + +set lines 200 +col STARTSCN for 9999999999 +col ENDSCN for 9999999999 +col HISTORICAL_COLUMN_NAME for a30 +col COLUMN_NAME for a30 +col XID noprint +col TYPE for a20 + +select * from SYS_FBA_DDL_COLMAP_26338 order by STARTSCN; + + + STARTSCN ENDSCN O COLUMN_NAME TYPE HISTORICAL_COLUMN_NAME +----------- ----------- - ------------------------------ -------------------- ------------------------------ + 4297455 N1 NUMBER N1 + 4297455 4336109 C3 VARCHAR2(10) C1 + 4297455 D1 DATE D1 + 4335662 4335990 D_4335990_C2 VARCHAR2(3) C2 + 4336109 C3 VARCHAR2(10) C3 + + +col RID noprint +col XID noprint +col OPERATION noprint + +select * from SYS_FBA_HIST_26338 order by STARTSCN; + + STARTSCN ENDSCN XID O N1 C3 D1 D_4 +----------- ----------- ---------------- - ---------- ---------- ------------------- --- + 4336404 4336452 08000200AE020000 U 1 birthday 1973-10-05 10:00:00 + 4298014 4335762 08000700A5020000 U 1 One 2023-12-31 23:59:59 + 4336266 4336404 06000400A2020000 U 1 One 2023-06-18 15:12:51 + 4335762 4335824 09000A00B4020000 U 1 One 2023-12-31 23:59:59 *** + 4335996 4336266 U 1 One 2023-12-31 23:59:59 + 4335824 4335996 02000300AE020000 U 1 One 2023-12-31 23:59:59 def + 4297497 4335996 0300190095020000 I 2 Two 2023-01-01 00:00:00 + 4297630 4335996 0600200092020000 I 3 Three 2023-01-01 00:00:00 + 4297491 4298014 0400180090020000 I 1 One 2023-01-01 00:00:00 + 4336452 4337054 07001200A1020000 U 1 birthday 2023-06-18 15:15:13 + + + + diff --git a/FDA/fda_asof_01.txt b/FDA/fda_asof_01.txt new file mode 100755 index 0000000..cb251c6 --- /dev/null +++ b/FDA/fda_asof_01.txt @@ -0,0 +1,226 @@ + +TKPROF: Release 21.0.0.0.0 - Development on Sun Jun 18 15:33:28 2023 + +Copyright (c) 1982, 2021, Oracle and/or its affiliates. 
All rights reserved. + +Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3396.trc +Sort options: default + +******************************************************************************** +count = number of times OCI procedure was executed +cpu = cpu time in seconds executing +elapsed = elapsed time in seconds executing +disk = number of physical reads of buffers from disk +query = number of buffers gotten for consistent read +current = number of buffers gotten in current mode (usually for update) +rows = number of rows processed by the fetch or execute call +******************************************************************************** + +SQL ID: 2ajc7pwz9jsx3 Plan Hash: 2536448058 + +select max(scn) +from + smon_scn_time + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 3 0.00 0.00 0 0 0 0 +Execute 3 0.00 0.00 0 0 0 0 +Fetch 3 0.00 0.00 0 3 0 3 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 9 0.00 0.00 0 3 0 3 + +Misses in library cache during parse: 1 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 3 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 1 1 1 SORT AGGREGATE (cr=1 pr=0 pw=0 time=20 us starts=1) + 1 1 1 INDEX FULL SCAN (MIN/MAX) SMON_SCN_TIME_SCN_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=6 card=1)(object id 425) + +******************************************************************************** + +SQL ID: 41dzdw7ca24a1 Plan Hash: 1159443182 + +select count(*) +from + "USR".SYS_FBA_DDL_COLMAP_26338 + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.00 0.00 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 1 0.00 0.00 0 6 0 1 +------- ------ 
-------- ---------- ---------- ---------- ---------- ---------- +total 3 0.00 0.00 0 6 0 1 + +Misses in library cache during parse: 1 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 1 1 1 SORT AGGREGATE (cr=6 pr=0 pw=0 time=95 us starts=1) + 5 5 5 TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=6 pr=0 pw=0 time=92 us starts=1 cost=3 size=0 card=3) + +******************************************************************************** + +SQL ID: 15fqvf9xff3hm Plan Hash: 3966719185 + +select HISTORICAL_COLUMN_NAME, COLUMN_NAME +from + "USR".SYS_FBA_DDL_COLMAP_26338 where (STARTSCN<=4336404 or STARTSCN is NULL) + and (ENDSCN > 4336404 or ENDSCN is NULL) order by STARTSCN, ROWID + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.00 0.00 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 4 0.00 0.00 0 6 0 3 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 6 0.00 0.00 0 6 0 3 + +Misses in library cache during parse: 1 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 3 3 3 SORT ORDER BY (cr=6 pr=0 pw=0 time=61 us starts=1 cost=4 size=60 card=3) + 3 3 3 TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=6 pr=0 pw=0 time=42 us starts=1 cost=3 size=60 card=3) + +******************************************************************************** + +SQL ID: 5ty7pv13y930m Plan Hash: 1347681019 + +select count(*) +from + sys.col_group_usage$ where obj# = :1 and cols = :2 and trunc(sysdate) = + trunc(timestamp) and bitand(flags, :3) = :3 and (cols_range is null and + 
length(:4) = 0 or cols_range is not null and cols_range = + dbms_auto_index_internal.merge_cols_str(cols_range, :4)) + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 0 0.00 0.00 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 1 0.00 0.00 0 2 0 1 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 2 0.00 0.00 0 2 0 1 + +Misses in library cache during parse: 0 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) + +Elapsed times include waiting on following events: + Event waited on Times Max. Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + PGA memory operation 67 0.00 0.00 +******************************************************************************** + +SQL ID: g0181my81qz4x Plan Hash: 303836101 + +select * +from + TAB2 as of scn 4336404 where N1=1 + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.01 0.01 0 16 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 2 0.00 0.00 0 93 0 1 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 4 0.01 0.01 0 109 0 1 + +Misses in library cache during parse: 1 +Optimizer mode: ALL_ROWS +Parsing user id: 84 +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 1 1 1 VIEW (cr=110 pr=0 pw=0 time=102 us starts=1 cost=282 size=58 card=2) + 1 1 1 UNION-ALL (cr=110 pr=0 pw=0 time=100 us starts=1) + 1 1 1 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=100 pr=0 pw=0 time=100 us starts=1 cost=274 size=29 card=1) + 1 1 1 TABLE ACCESS FULL SYS_FBA_HIST_26338 PARTITION: 1 1 (cr=100 pr=0 pw=0 time=94 us starts=1 cost=274 size=29 card=1) + 0 0 0 FILTER (cr=10 pr=0 pw=0 time=571 us starts=1) + 1 1 1 NESTED LOOPS OUTER (cr=10 
pr=0 pw=0 time=571 us starts=1 cost=8 size=44 card=1) + 1 1 1 TABLE ACCESS FULL TAB2 (cr=7 pr=0 pw=0 time=528 us starts=1 cost=6 size=16 card=1) + 1 1 1 TABLE ACCESS BY INDEX ROWID BATCHED SYS_FBA_TCRV_26338 (cr=3 pr=0 pw=0 time=24 us starts=1 cost=2 size=28 card=1) + 3 3 3 INDEX RANGE SCAN SYS_FBA_TCRV_IDX1_26338 (cr=1 pr=0 pw=0 time=8 us starts=1 cost=1 size=0 card=1)(object id 26344) + + +Elapsed times include waiting on following events: + Event waited on Times Max. Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + PGA memory operation 4 0.00 0.00 + SQL*Net message to client 2 0.00 0.00 + SQL*Net message from client 2 12.15 12.16 + + + +******************************************************************************** + +OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.01 0.01 0 16 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 2 0.00 0.00 0 93 0 1 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 4 0.01 0.01 0 109 0 1 + +Misses in library cache during parse: 1 + +Elapsed times include waiting on following events: + Event waited on Times Max. Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + SQL*Net message to client 3 0.00 0.00 + SQL*Net message from client 3 47.08 59.24 + PGA memory operation 4 0.00 0.00 + + +OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 5 0.00 0.00 0 0 0 0 +Execute 6 0.00 0.00 0 0 0 0 +Fetch 9 0.00 0.00 0 17 0 8 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 20 0.00 0.00 0 17 0 8 + +Misses in library cache during parse: 3 + +Elapsed times include waiting on following events: + Event waited on Times Max. 
Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + PGA memory operation 67 0.00 0.00 + + 1 user SQL statements in session. + 6 internal SQL statements in session. + 7 SQL statements in session. +******************************************************************************** +Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3396.trc +Trace file compatibility: 12.2.0.0 +Sort options: default + + 1 session in tracefile. + 1 user SQL statements in trace file. + 6 internal SQL statements in trace file. + 7 SQL statements in trace file. + 5 unique SQL statements in trace file. + 218 lines in trace file. + 12 elapsed seconds in trace file. + + diff --git a/FDA/fda_asof_02.txt b/FDA/fda_asof_02.txt new file mode 100755 index 0000000..57e0d17 --- /dev/null +++ b/FDA/fda_asof_02.txt @@ -0,0 +1,324 @@ + +TKPROF: Release 21.0.0.0.0 - Development on Sun Jun 18 15:55:49 2023 + +Copyright (c) 1982, 2021, Oracle and/or its affiliates. All rights reserved. 
+ +Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3510.trc +Sort options: default + +******************************************************************************** +count = number of times OCI procedure was executed +cpu = cpu time in seconds executing +elapsed = elapsed time in seconds executing +disk = number of physical reads of buffers from disk +query = number of buffers gotten for consistent read +current = number of buffers gotten in current mode (usually for update) +rows = number of rows processed by the fetch or execute call +******************************************************************************** + +SQL ID: 41dzdw7ca24a1 Plan Hash: 1159443182 + +select count(*) +from + "USR".SYS_FBA_DDL_COLMAP_26338 + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.00 0.00 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 1 0.00 0.00 0 6 0 1 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 3 0.00 0.00 0 6 0 1 + +Misses in library cache during parse: 0 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 1 1 1 SORT AGGREGATE (cr=6 pr=0 pw=0 time=67 us starts=1) + 5 5 5 TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=6 pr=0 pw=0 time=59 us starts=1 cost=3 size=0 card=3) + +******************************************************************************** + +SQL ID: 2syvqzbxp4k9z Plan Hash: 533170135 + +select u.name, o.name, a.interface_version#, o.obj# +from + association$ a, user$ u, obj$ o where a.obj# = :1 + and a.property = :2 + and a.statstype# = o.obj# and + u.user# = o.owner# + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- 
---------- +Parse 6 0.00 0.00 0 0 0 0 +Execute 6 0.00 0.00 0 0 0 0 +Fetch 6 0.00 0.00 0 12 0 0 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 18 0.00 0.00 0 12 0 0 + +Misses in library cache during parse: 0 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 0 0 0 HASH JOIN (cr=2 pr=0 pw=0 time=39 us starts=1 cost=5 size=62 card=1) + 0 0 0 NESTED LOOPS (cr=2 pr=0 pw=0 time=35 us starts=1 cost=5 size=62 card=1) + 0 0 0 STATISTICS COLLECTOR (cr=2 pr=0 pw=0 time=33 us starts=1) + 0 0 0 HASH JOIN (cr=2 pr=0 pw=0 time=26 us starts=1 cost=4 size=44 card=1) + 0 0 0 NESTED LOOPS (cr=2 pr=0 pw=0 time=26 us starts=1 cost=4 size=44 card=1) + 0 0 0 STATISTICS COLLECTOR (cr=2 pr=0 pw=0 time=26 us starts=1) + 0 0 0 TABLE ACCESS FULL ASSOCIATION$ (cr=2 pr=0 pw=0 time=24 us starts=1 cost=2 size=16 card=1) + 0 0 0 TABLE ACCESS BY INDEX ROWID BATCHED OBJ$ (cr=0 pr=0 pw=0 time=0 us starts=0 cost=2 size=28 card=1) + 0 0 0 INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=0 card=1)(object id 36) + 0 0 0 INDEX FAST FULL SCAN I_OBJ2 (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=28 card=1)(object id 37) + 0 0 0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=18 card=1) + 0 0 0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us starts=0 cost=0 size=0 card=1)(object id 11) + 0 0 0 TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us starts=0 cost=1 size=18 card=1) + +******************************************************************************** + +SQL ID: 2xyb5d6xg9srh Plan Hash: 785096182 + +select a.default_cpu_cost, a.default_io_cost +from + association$ a where a.obj# = :1 + and a.property = :2 + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- 
---------- ---------- +Parse 6 0.00 0.00 0 0 0 0 +Execute 6 0.00 0.00 0 0 0 0 +Fetch 6 0.00 0.00 0 12 0 0 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 18 0.00 0.00 0 12 0 0 + +Misses in library cache during parse: 0 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 0 0 0 TABLE ACCESS FULL ASSOCIATION$ (cr=2 pr=0 pw=0 time=16 us starts=1 cost=2 size=18 card=1) + +******************************************************************************** + +SQL ID: 476v06tzdhkhc Plan Hash: 3966719185 + +select HISTORICAL_COLUMN_NAME, COLUMN_NAME +from + "USR".SYS_FBA_DDL_COLMAP_26338 where (STARTSCN<= + TIMESTAMP_TO_SCN(systimestamp-1/24+21/24/60) or STARTSCN is NULL) and + (ENDSCN > TIMESTAMP_TO_SCN(systimestamp-1/24+21/24/60) or ENDSCN is NULL) + order by STARTSCN, ROWID + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.00 0.00 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 4 0.00 0.00 0 6 0 3 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 6 0.00 0.00 0 6 0 3 + +Misses in library cache during parse: 1 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 3 3 3 SORT ORDER BY (cr=13 pr=0 pw=0 time=1075 us starts=1 cost=4 size=20 card=1) + 3 3 3 TABLE ACCESS FULL SYS_FBA_DDL_COLMAP_26338 (cr=13 pr=0 pw=0 time=1061 us starts=1 cost=3 size=20 card=1) + +******************************************************************************** + +SQL ID: 4jrkd9ymavb8x Plan Hash: 3631124065 + +select max(time_mp) +from + 
smon_scn_time + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 20 0.00 0.00 0 0 0 0 +Execute 20 0.00 0.00 0 0 0 0 +Fetch 20 0.00 0.00 0 20 0 20 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 60 0.00 0.00 0 20 0 20 + +Misses in library cache during parse: 0 +Optimizer mode: ALL_ROWS +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 1 1 1 SORT AGGREGATE (cr=1 pr=0 pw=0 time=19 us starts=1) + 1 1 1 INDEX FULL SCAN (MIN/MAX) SMON_SCN_TIME_TIM_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=7 card=1)(object id 424) + +******************************************************************************** + +SQL ID: 2ajc7pwz9jsx3 Plan Hash: 2536448058 + +select max(scn) +from + smon_scn_time + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 2 0.00 0.00 0 0 0 0 +Execute 2 0.00 0.00 0 0 0 0 +Fetch 2 0.00 0.00 0 2 0 2 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 6 0.00 0.00 0 2 0 2 + +Misses in library cache during parse: 0 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 1 1 1 SORT AGGREGATE (cr=1 pr=0 pw=0 time=12 us starts=1) + 1 1 1 INDEX FULL SCAN (MIN/MAX) SMON_SCN_TIME_SCN_IDX (cr=1 pr=0 pw=0 time=6 us starts=1 cost=1 size=6 card=1)(object id 425) + +******************************************************************************** + +SQL ID: 5ty7pv13y930m Plan Hash: 1347681019 + +select count(*) +from + sys.col_group_usage$ where obj# = :1 and 
cols = :2 and trunc(sysdate) = + trunc(timestamp) and bitand(flags, :3) = :3 and (cols_range is null and + length(:4) = 0 or cols_range is not null and cols_range = + dbms_auto_index_internal.merge_cols_str(cols_range, :4)) + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 0 0.00 0.00 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 1 0.00 0.00 0 2 0 1 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 2 0.00 0.00 0 2 0 1 + +Misses in library cache during parse: 0 +Optimizer mode: CHOOSE +Parsing user id: SYS (recursive depth: 1) + +Elapsed times include waiting on following events: + Event waited on Times Max. Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + PGA memory operation 75 0.00 0.00 +******************************************************************************** + +SQL ID: 36g2pydn13abk Plan Hash: 2739728740 + +select * +from + TAB2 as of timestamp systimestamp-1/24+21/24/60 where N1=1 + + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.01 0.01 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 2 0.00 0.00 0 109 0 1 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 4 0.01 0.01 0 109 0 1 + +Misses in library cache during parse: 1 +Optimizer mode: ALL_ROWS +Parsing user id: 84 +Number of plan statistics captured: 1 + +Rows (1st) Rows (avg) Rows (max) Row Source Operation +---------- ---------- ---------- --------------------------------------------------- + 1 1 1 VIEW (cr=123 pr=0 pw=0 time=1775 us starts=1 cost=282 size=58 card=2) + 1 1 1 UNION-ALL (cr=123 pr=0 pw=0 time=1772 us starts=1) + 1 1 1 FILTER (cr=112 pr=0 pw=0 time=1769 us starts=1) + 1 1 1 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=111 pr=0 pw=0 time=1530 us starts=1 cost=274 size=29 card=1) + 1 1 1 TABLE ACCESS 
FULL SYS_FBA_HIST_26338 PARTITION: 1 1 (cr=111 pr=0 pw=0 time=1526 us starts=1 cost=274 size=29 card=1) + 0 0 0 FILTER (cr=11 pr=0 pw=0 time=410 us starts=1) + 1 1 1 NESTED LOOPS OUTER (cr=10 pr=0 pw=0 time=245 us starts=1 cost=8 size=44 card=1) + 1 1 1 TABLE ACCESS FULL TAB2 (cr=7 pr=0 pw=0 time=215 us starts=1 cost=6 size=16 card=1) + 1 1 1 TABLE ACCESS BY INDEX ROWID BATCHED SYS_FBA_TCRV_26338 (cr=3 pr=0 pw=0 time=20 us starts=1 cost=2 size=28 card=1) + 3 3 3 INDEX RANGE SCAN SYS_FBA_TCRV_IDX1_26338 (cr=1 pr=0 pw=0 time=7 us starts=1 cost=1 size=0 card=1)(object id 26344) + + +Elapsed times include waiting on following events: + Event waited on Times Max. Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + PGA memory operation 3 0.00 0.00 + SQL*Net message to client 2 0.00 0.00 + SQL*Net message from client 2 2.20 2.20 + + + +******************************************************************************** + +OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 1 0.01 0.01 0 0 0 0 +Execute 1 0.00 0.00 0 0 0 0 +Fetch 2 0.00 0.00 0 109 0 1 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 4 0.01 0.01 0 109 0 1 + +Misses in library cache during parse: 1 + +Elapsed times include waiting on following events: + Event waited on Times Max. 
Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + SQL*Net message to client 3 0.00 0.00 + SQL*Net message from client 3 5.67 7.88 + PGA memory operation 3 0.00 0.00 + + +OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS + +call count cpu elapsed disk query current rows +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +Parse 36 0.00 0.00 0 0 0 0 +Execute 37 0.00 0.00 0 0 0 0 +Fetch 40 0.00 0.00 0 60 0 27 +------- ------ -------- ---------- ---------- ---------- ---------- ---------- +total 113 0.01 0.01 0 60 0 27 + +Misses in library cache during parse: 1 + +Elapsed times include waiting on following events: + Event waited on Times Max. Wait Total Waited + ---------------------------------------- Waited ---------- ------------ + PGA memory operation 75 0.00 0.00 + + 1 user SQL statements in session. + 7 internal SQL statements in session. + 8 SQL statements in session. +******************************************************************************** +Trace file: /app/oracle/base/admin/SITHPRD/diag/rdbms/sithprd/SITHPRD/trace/SITHPRD_ora_3510.trc +Trace file compatibility: 12.2.0.0 +Sort options: default + + 1 session in tracefile. + 1 user SQL statements in trace file. + 7 internal SQL statements in trace file. + 8 SQL statements in trace file. + 8 unique SQL statements in trace file. + 509 lines in trace file. + 2 elapsed seconds in trace file. 
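The two TKPROF traces above exercise the same Flashback Data Archive query against TAB2, once `as of scn 4336404` and once `as of timestamp systimestamp-1/24+21/24/60`. Oracle date arithmetic works in fractions of a day: `1/24` is one hour and `21/24/60` is 21 minutes, so the expression points 39 minutes into the past (Oracle then maps that timestamp to an SCN internally, which is why `smon_scn_time` shows up in the recursive SQL). A minimal Python sketch of the day-fraction arithmetic, illustrative only; the sample datetime is taken from the TKPROF header:

```python
from datetime import datetime, timedelta

def as_of_timestamp(now: datetime) -> datetime:
    """Mirror Oracle's DATE arithmetic for systimestamp-1/24+21/24/60:
    subtracting 1/24 of a day moves back one hour, adding 21/24/60 of a
    day moves forward 21 minutes, for a net offset of 39 minutes back."""
    one_day = timedelta(days=1)
    return now - one_day * (1 / 24) + one_day * (21 / (24 * 60))

# TKPROF run time from the header above, used as a stand-in for systimestamp
now = datetime(2023, 6, 18, 15, 55, 49)
print(as_of_timestamp(now))  # 39 minutes earlier
```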
+ + diff --git a/FDA/infinite_update1.sql b/FDA/infinite_update1.sql new file mode 100644 index 0000000..7387ccd --- /dev/null +++ b/FDA/infinite_update1.sql @@ -0,0 +1,9 @@ +begin + loop + update TAB1 set d=sysdate; + commit; + dbms_session.sleep(1); + end loop; +end; +/ + diff --git a/Golden_Gate/.DS_Store b/Golden_Gate/.DS_Store new file mode 100644 index 0000000..2cdbf5e Binary files /dev/null and b/Golden_Gate/.DS_Store differ diff --git a/Golden_Gate/Clean_up_old_Extracts_01.txt b/Golden_Gate/Clean_up_old_Extracts_01.txt new file mode 100644 index 0000000..3c06b78 --- /dev/null +++ b/Golden_Gate/Clean_up_old_Extracts_01.txt @@ -0,0 +1,47 @@ +Clean up old Extracts +--------------------- +https://www.dbasolved.com/2022/04/clean-up-old-extracts/ + +0. Identify captures and log miner sessions +------------------------------------------- +set linesize 150 +col capture_name format a20 +select capture_name from dba_capture; + + +set linesize 130 +col session_name format a20 +col global_db_name format a45 +select SESSION#,CLIENT#,SESSION_NAME,DB_ID,GLOBAL_DB_NAME from system.LOGMNR_SESSION$; + + +1. Drop the extracts +--------------------- +exec DBMS_CAPTURE_ADM.DROP_CAPTURE (''); + +2. 
Drop queue tables from log miner +----------------------------------- + +set linesize 250 +col owner format a30 +col name format a30 +col queue_table format a30 +select owner, name, queue_table from dba_queues where owner = 'OGGADMIN'; + + + +# delete in automatic mode +declare + v_queue_name varchar2(60); +begin +for i in (select queue_table, owner from dba_queues where owner = 'OGGADMIN') +loop + v_queue_name := i.owner||'.'||i.queue_table; + DBMS_AQADM.DROP_QUEUE_TABLE(queue_table => v_queue_name, force => TRUE); +end loop; +end; +/ + +# or delete one by one +exec DBMS_AQADM.DROP_QUEUE_TABLE(queue_table => '.', force => TRUE); +# note that tables with the AQ$_ prefix will be automatically deleted diff --git a/Golden_Gate/distrib_certif_01.md b/Golden_Gate/distrib_certif_01.md new file mode 100644 index 0000000..3ac9ee3 --- /dev/null +++ b/Golden_Gate/distrib_certif_01.md @@ -0,0 +1,234 @@ +### Sources + +- [OGG Documentation](https://docs.oracle.com/en/middleware/goldengate/core/19.1/securing/securing-deployments.html#GUID-472E5C9C-85FC-4B87-BB90-2CE877F41DC0) +- [Markdown Basic Syntax](https://www.markdownguide.org/basic-syntax/) + +### Creating a Self-Signed Root Certificate + +Create an automatic login wallet + + orapki wallet create \ + -wallet /app/oracle/staging_area/wallet_dir/rootCA \ + -pwd "LuxAeterna12;" \ + -auto_login + +Create a self-signed certificate + + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/rootCA \ + -pwd "LuxAeterna12;" \ + -dn "CN=RootCA" \ + -keysize 2048 \ + -self_signed \ + -validity 7300 \ + -sign_alg sha256 + +Check the contents of the wallet + + orapki wallet display \ + -wallet /app/oracle/staging_area/wallet_dir/rootCA \ + -pwd "LuxAeterna12;" + +Export the certificate to a .pem file + + orapki wallet export \ + -wallet /app/oracle/staging_area/wallet_dir/rootCA \ + -pwd "LuxAeterna12;" \ + -dn "CN=RootCA" \ + -cert /app/oracle/staging_area/export/rootCA_Cert.pem + + +### Creating
Server Certificates + +#### For [exegol] server + +Create an automatic login wallet + + orapki wallet create \ + -wallet /app/oracle/staging_area/wallet_dir/exegol \ + -pwd "TabulaRasa32;" \ + -auto_login + +Add a Certificate Signing Request (CSR) to the server’s wallet + + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/exegol \ + -pwd "TabulaRasa32;" \ + -dn "CN=exegol.swgalaxy" \ + -keysize 2048 + +Export the CSR to a .pem file + + orapki wallet export \ + -wallet /app/oracle/staging_area/wallet_dir/exegol \ + -pwd "TabulaRasa32;" \ + -dn "CN=exegol.swgalaxy" \ + -request /app/oracle/staging_area/export/exegol_req.pem + +Using the CSR, create a signed server or client certificate and sign it using the root certificate. +Assign a unique serial number to each certificate. + + orapki cert create \ + -wallet /app/oracle/staging_area/wallet_dir/rootCA \ + -pwd "LuxAeterna12;" \ + -request /app/oracle/staging_area/export/exegol_req.pem \ + -cert /app/oracle/staging_area/export/exegol_Cert.pem \ + -serial_num 20 \ + -validity 375 \ + -sign_alg sha256 + +Add the root certificate into the client’s or server’s wallet as a trusted certificate. 
+ + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/exegol \ + -pwd "TabulaRasa32;" \ + -trusted_cert \ + -cert /app/oracle/staging_area/export/rootCA_Cert.pem + +Add the server or client certificate as a user certificate into the client’s or server’s wallet + + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/exegol \ + -pwd "TabulaRasa32;" \ + -user_cert \ + -cert /app/oracle/staging_area/export/exegol_Cert.pem + +Check the contents of the wallet + + orapki wallet display \ + -wallet /app/oracle/staging_area/wallet_dir/exegol \ + -pwd "TabulaRasa32;" + + +#### For [helska] server + +Create an automatic login wallet + + orapki wallet create \ + -wallet /app/oracle/staging_area/wallet_dir/helska \ + -pwd "SicSemper81;" \ + -auto_login + +Add a Certificate Signing Request (CSR) to the server’s wallet + + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/helska \ + -pwd "SicSemper81;" \ + -dn "CN=helska.swgalaxy" \ + -keysize 2048 + +Export the CSR to a .pem file + + orapki wallet export \ + -wallet /app/oracle/staging_area/wallet_dir/helska \ + -pwd "SicSemper81;" \ + -dn "CN=helska.swgalaxy" \ + -request /app/oracle/staging_area/export/helska_req.pem + +Using the CSR, create a signed server or client certificate and sign it using the root certificate. +Assign a unique serial number to each certificate. + + orapki cert create \ + -wallet /app/oracle/staging_area/wallet_dir/rootCA \ + -pwd "LuxAeterna12;" \ + -request /app/oracle/staging_area/export/helska_req.pem \ + -cert /app/oracle/staging_area/export/helska_Cert.pem \ + -serial_num 21 \ + -validity 375 \ + -sign_alg sha256 + +Add the root certificate into the client’s or server’s wallet as a trusted certificate. 
+ + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/helska \ + -pwd "SicSemper81;" \ + -trusted_cert \ + -cert /app/oracle/staging_area/export/rootCA_Cert.pem + +Add the server or client certificate as a user certificate into the client’s or server’s wallet + + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/helska \ + -pwd "SicSemper81;" \ + -user_cert \ + -cert /app/oracle/staging_area/export/helska_Cert.pem + +Check the contents of the wallet + + orapki wallet display \ + -wallet /app/oracle/staging_area/wallet_dir/helska \ + -pwd "SicSemper81;" + +### Creating a Distribution Server User Certificate + +Create an automatic login wallet + + orapki wallet create \ + -wallet /app/oracle/staging_area/wallet_dir/dist_client \ + -pwd "LapsusLinguae91" \ + -auto_login + +Add a Certificate Signing Request (CSR) to the wallet + + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/dist_client \ + -pwd "LapsusLinguae91" \ + -dn "CN=dist_client" \ + -keysize 2048 + +Export the CSR to a .pem file + + orapki wallet export \ + -wallet /app/oracle/staging_area/wallet_dir/dist_client \ + -pwd "LapsusLinguae91" \ + -dn "CN=dist_client" \ + -request /app/oracle/staging_area/export/dist_client_req.pem + +Using the CSR, create a signed certificate and sign it using the root certificate. +Assign a unique serial number to each certificate. + + orapki cert create \ + -wallet /app/oracle/staging_area/wallet_dir/rootCA \ + -pwd "LuxAeterna12;" \ + -request /app/oracle/staging_area/export/dist_client_req.pem \ + -cert /app/oracle/staging_area/export/dist_client_Cert.pem \ + -serial_num 22 \ + -validity 375 \ + -sign_alg sha256 + +Add the root certificate into the client’s or server’s wallet as a trusted certificate. 
+ + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/dist_client \ + -pwd "LapsusLinguae91" \ + -trusted_cert \ + -cert /app/oracle/staging_area/export/rootCA_Cert.pem + +Add the server or client certificate as a user certificate into the client’s or server’s wallet + + orapki wallet add \ + -wallet /app/oracle/staging_area/wallet_dir/dist_client \ + -pwd "LapsusLinguae91" \ + -user_cert \ + -cert /app/oracle/staging_area/export/dist_client_Cert.pem + +Check the contents of the wallet + + orapki wallet display \ + -wallet /app/oracle/staging_area/wallet_dir/dist_client \ + -pwd "LapsusLinguae91" + + +### Trusted Certificates + +Both the Distribution Server and the Receiver Server need certificates. +- The Distribution Server uses the certificate from the client wallet configured in its outbound section +- The Receiver Server uses the certificate from the wallet configured as its inbound wallet location + +For self-signed certificates, choose one of the following: +- have both certificates signed by the same root certificate +- add the other side’s certificate to the local wallet as a trusted certificate + + + + diff --git a/Golden_Gate/example_01/add_2_tables.md b/Golden_Gate/example_01/add_2_tables.md new file mode 100644 index 0000000..f3139f1 --- /dev/null +++ b/Golden_Gate/example_01/add_2_tables.md @@ -0,0 +1,296 @@ +## Context + +- existing setup: one extract/replicat pair for 3 tables: ORDERS, PRODUCTS and USERS +- add 2 new tables, TRANSACTIONS and TASKS, to this extract/replicat pair + +The aim is to minimize the downtime for the existing extract/replicat pair, so we will proceed in 2 steps: +- create a second, parallel extract/replicat for the 2 new tables +- merge the second extract/replicat into the initial one + +## Extract setup + +Add trandata to tables: + + dblogin useridalias YODA + add trandata GREEN.ORDERS + add trandata GREEN.PRODUCTS + add trandata GREEN.USERS + list tables GREEN.* + + +Define the params file for the extract: + + edit params EXTRAA + + + extract
EXTRAA + useridalias JEDIPRD + sourcecatalog YODA + exttrail ./dirdat/aa + purgeoldextracts + checkpointsecs 1 + ddl include mapped + warnlongtrans 1h, checkinterval 30m + ------------------------------------ + table GREEN.ORDERS; + table GREEN.PRODUCTS; + table GREEN.USERS; + + +Add, register and start the extract: + + dblogin useridalias JEDIPRD + add extract EXTRAA, integrated tranlog, begin now + add exttrail ./dirdat/aa, extract EXTRAA + register extract EXTRAA, database container (YODA) + start extract EXTRAA + info extract EXTRAA detail + + +## Initial load + +Note down the current SCN on the source database. + + SQL> select current_scn from v$database; + + CURRENT_SCN + ----------- + 10138382 + + +On the target DB, create the table structures for ORDERS, PRODUCTS and USERS, then do the initial load: + + SCN=10138382 + impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_01.log remap_schema=GREEN:RED tables=GREEN.ORDERS,GREEN.PRODUCTS,GREEN.USERS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN + +## Replicat setup + +Define the params file for the replicat. +Take care with the `filter(@GETENV ('TRANSACTION','CSN'))` clause: it must be set to the SCN of the initial load.
+ + edit params REPLAA + + replicat REPLAA + useridalias MAUL + dboptions enable_instantiation_filtering + discardfile REPLAA.dsc, purge, megabytes 10 + + map YODA.GREEN.ORDERS, target MAUL.RED.ORDERS, filter(@GETENV ('TRANSACTION','CSN') > 10138382); + map YODA.GREEN.PRODUCTS, target MAUL.RED.PRODUCTS, filter(@GETENV ('TRANSACTION','CSN') > 10138382); + map YODA.GREEN.USERS, target MAUL.RED.USERS, filter(@GETENV ('TRANSACTION','CSN') > 10138382); + + +Add and start the replicat: + + add replicat REPLAA, integrated, exttrail ./dirdat/aa + + dblogin useridalias SITHPRD + register replicat REPLAA database + start replicat REPLAA + info all + + +Wait for the replicat to catch up the lag: + + lag replicat + +Once caught up, you can remove the `filter(@GETENV ('TRANSACTION','CSN'))` clause: + + edit params REPLAA + + replicat REPLAA + useridalias MAUL + dboptions enable_instantiation_filtering + discardfile REPLAA.dsc, purge, megabytes 10 + + map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ; + map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ; + map YODA.GREEN.USERS , target MAUL.RED.USERS ; + + + restart replicat REPLAA + + + +## Add 2 new tables to extract/replicat + +Add trandata to tables: + + dblogin useridalias YODA + add trandata GREEN.TRANSACTIONS + add trandata GREEN.TASKS + list tables GREEN.* + + +Create a second extract, EXTRAB, to manage the new tables. +Define the extract parameters: + + edit params EXTRAB + + extract EXTRAB + useridalias JEDIPRD + sourcecatalog YODA + exttrail ./dirdat/ab + purgeoldextracts + checkpointsecs 1 + ddl include mapped + warnlongtrans 1h, checkinterval 30m + + table GREEN.TRANSACTIONS; + table GREEN.TASKS; + +Add, register and start the extract: + + dblogin useridalias JEDIPRD + add extract EXTRAB, integrated tranlog, begin now + add exttrail ./dirdat/ab, extract EXTRAB + register extract EXTRAB, database container (YODA) + start extract EXTRAB + info extract EXTRAB detail + +## Initial load for new tables + +Note down the current SCN on the source database.
+ + SQL> select current_scn from v$database; + + CURRENT_SCN + ----------- + 10284191 + +On the target DB, create the table structures for TRANSACTIONS and TASKS, then do the initial load: + + SCN=10284191 + impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_02.log remap_schema=GREEN:RED tables=GREEN.TRANSACTIONS,GREEN.TASKS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN + +## New replicat setup + +Define the replicat parameters. +Pay attention to the `filter(@GETENV ('TRANSACTION','CSN'))` clause: it must be set to the SCN of the initial Data Pump load. + + edit params REPLAB + + replicat REPLAB + useridalias MAUL + dboptions enable_instantiation_filtering + discardfile REPLAB.dsc, purge, megabytes 10 + + map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 10284191); + map YODA.GREEN.TASKS, target MAUL.RED.TASKS, filter(@GETENV ('TRANSACTION','CSN') > 10284191); + +Add and start the new replicat: + + add replicat REPLAB, integrated, exttrail ./dirdat/ab + dblogin useridalias SITHPRD + register replicat REPLAB database + start replicat REPLAB + info all + +Check that the new replicat is running and wait for the lag to reach 0. + +## Integrate the 2 new tables into the initial extract/replicat: EXTRAA/REPLAA + +Add the new tables to the initial extract for a **double run**: + + edit params EXTRAA + + extract EXTRAA + useridalias JEDIPRD + sourcecatalog YODA + exttrail ./dirdat/aa + purgeoldextracts + checkpointsecs 1 + ddl include mapped + warnlongtrans 1h, checkinterval 30m + + table GREEN.ORDERS; + table GREEN.PRODUCTS; + table GREEN.USERS; + table GREEN.TRANSACTIONS; + table GREEN.TASKS; + +Restart extract EXTRAA: + + restart extract EXTRAA + +Stop the extracts in this **strict order**: +- **first** extract: EXTRAA +- **second** extract: EXTRAB + +> It is **mandatory** to stop the extracts in this order.
+> **The SCN applied to the first replicat's tables must be less than the SCN applied by the second replicat**, so that the first replicat can restart from the last applied position in the trail file. This way, the first replicat does not need to be repositioned into the past.
+
+    stop EXTRACT EXTRAA
+    stop EXTRACT EXTRAB
+
+Now stop both replicats as well:
+
+    stop replicat REPLAA
+    stop replicat REPLAB
+
+Note down the SCN for each extract and prepare the new params file for the initial replicat.
+
+    info extract EXTRAA detail
+    info extract EXTRAB detail
+
+In my case:
+- EXTRAA: SCN=10358472
+- EXTRAB: SCN=10358544
+
+> The SCN of EXTRAB should be greater than the SCN of EXTRAA
+
+Update the REPLAA replicat parameter file with the latest SCN applied to the new tables (the SCN of EXTRAB):
+
+    edit params REPLAA
+
+    replicat REPLAA
+    useridalias MAUL
+    dboptions enable_instantiation_filtering
+    discardfile REPLAA.dsc, purge, megabytes 10
+
+    map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
+    map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
+    map YODA.GREEN.USERS , target MAUL.RED.USERS ;
+
+    map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 10358544);
+    map YODA.GREEN.TASKS , target MAUL.RED.TASKS, filter(@GETENV ('TRANSACTION','CSN') > 10358544);
+
+Start the first extract/replicat:
+
+    start extract EXTRAA
+    start replicat REPLAA
+
+When the lag is zero you can remove the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clause:
+
+    stop replicat REPLAA
+
+    edit params REPLAA
+
+    replicat REPLAA
+    useridalias MAUL
+    dboptions enable_instantiation_filtering
+    discardfile REPLAA.dsc, purge, megabytes 10
+
+    map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
+    map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
+    map YODA.GREEN.USERS , target MAUL.RED.USERS ;
+
+    map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
+    map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;
+
+Restart the first replicat:
+
+    start replicat REPLAA
+
+Now all tables are integrated into the first extract/replicat.
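+
+Before removing the double run, you can optionally cross-check that source and target are in sync. A minimal sketch to run on the target DB, assuming the `GREEN_AT_YODA` database link used for the initial load is still in place (the `count_lines.sql` script from this example does the same for all five tables):
+
+    select 'TRANSACTIONS (source)='||count(1) as "#rows" from GREEN.TRANSACTIONS@GREEN_AT_YODA union
+    select 'TRANSACTIONS (target)='||count(1) as "#rows" from RED.TRANSACTIONS
+    order by 1
+    /
+
+Standard GGSCI checks also confirm that REPLAA keeps applying changes:
+
+    lag replicat REPLAA
+    stats replicat REPLAA, total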
+ +## Remove second extract/replicat + + dblogin useridalias JEDIPRD + unregister extract EXTRAB database + delete extract EXTRAB + + dblogin useridalias MAUL + unregister replicat REPLAB database + delete replicat REPLAB + diff --git a/Golden_Gate/example_01/count_lines.sql b/Golden_Gate/example_01/count_lines.sql new file mode 100644 index 0000000..469626b --- /dev/null +++ b/Golden_Gate/example_01/count_lines.sql @@ -0,0 +1,12 @@ +select 'ORDERS (target)='||count(1) as "#rows" from RED.ORDERS union +select 'ORDERS (source)='||count(1) as "#rows" from GREEN.ORDERS@GREEN_AT_YODA union +select 'PRODUCTS (target)='||count(1) as "#rows" from RED.PRODUCTS union +select 'PRODUCTS (source)='||count(1) as "#rows" from GREEN.PRODUCTS@GREEN_AT_YODA union +select 'USERS (target)='||count(1) as "#rows" from RED.USERS union +select 'USERS (source)='||count(1) as "#rows" from GREEN.USERS@GREEN_AT_YODA union +select 'TRANSACTIONS (target)='||count(1) as "#rows" from RED.TRANSACTIONS union +select 'TRANSACTIONS (source)='||count(1) as "#rows" from GREEN.TRANSACTIONS@GREEN_AT_YODA union +select 'TASKS (target)='||count(1) as "#rows" from RED.TASKS union +select 'TASKS (source)='||count(1) as "#rows" from GREEN.TASKS@GREEN_AT_YODA +order by 1 asc +/ diff --git a/Golden_Gate/example_01/cr_tables.sql b/Golden_Gate/example_01/cr_tables.sql new file mode 100644 index 0000000..ce71cfd --- /dev/null +++ b/Golden_Gate/example_01/cr_tables.sql @@ -0,0 +1,83 @@ +-- Create sequences for primary key generation +CREATE SEQUENCE seq_products START WITH 1 INCREMENT BY 1; +CREATE SEQUENCE seq_orders START WITH 1 INCREMENT BY 1; +CREATE SEQUENCE seq_users START WITH 1 INCREMENT BY 1; +CREATE SEQUENCE seq_transactions START WITH 1 INCREMENT BY 1; +CREATE SEQUENCE seq_tasks START WITH 1 INCREMENT BY 1; + +-- Create tables with meaningful names and relevant columns +CREATE TABLE products ( + id NUMBER PRIMARY KEY, + name VARCHAR2(100), + category VARCHAR2(20), + quantity INTEGER +); + +CREATE TABLE 
orders (
+    id NUMBER PRIMARY KEY,
+    description VARCHAR2(255),
+    status VARCHAR2(20)
+);
+
+CREATE TABLE users (
+    id NUMBER PRIMARY KEY,
+    created_at DATE DEFAULT SYSDATE,
+    username VARCHAR2(20),
+    age INTEGER,
+    location VARCHAR2(20)
+);
+
+CREATE TABLE transactions (
+    id NUMBER PRIMARY KEY,
+    amount NUMBER(10,2),
+    currency VARCHAR2(20)
+);
+
+CREATE TABLE tasks (
+    id NUMBER PRIMARY KEY,
+    status VARCHAR2(50),
+    priority INTEGER,
+    type VARCHAR2(20),
+    assigned_to VARCHAR2(20)
+);
+
+-- Create triggers to auto-generate primary key values using sequences
+CREATE OR REPLACE TRIGGER trg_products_pk
+BEFORE INSERT ON products
+FOR EACH ROW
+BEGIN
+    SELECT seq_products.NEXTVAL INTO :NEW.id FROM dual;
+END;
+/
+
+CREATE OR REPLACE TRIGGER trg_orders_pk
+BEFORE INSERT ON orders
+FOR EACH ROW
+BEGIN
+    SELECT seq_orders.NEXTVAL INTO :NEW.id FROM dual;
+END;
+/
+
+CREATE OR REPLACE TRIGGER trg_users_pk
+BEFORE INSERT ON users
+FOR EACH ROW
+BEGIN
+    SELECT seq_users.NEXTVAL INTO :NEW.id FROM dual;
+END;
+/
+
+CREATE OR REPLACE TRIGGER trg_transactions_pk
+BEFORE INSERT ON transactions
+FOR EACH ROW
+BEGIN
+    SELECT seq_transactions.NEXTVAL INTO :NEW.id FROM dual;
+END;
+/
+
+CREATE OR REPLACE TRIGGER trg_tasks_pk
+BEFORE INSERT ON tasks
+FOR EACH ROW
+BEGIN
+    SELECT seq_tasks.NEXTVAL INTO :NEW.id FROM dual;
+END;
+/
diff --git a/Golden_Gate/example_01/delete_extr_repl.md b/Golden_Gate/example_01/delete_extr_repl.md
new file mode 100644
index 0000000..9e089ce
--- /dev/null
+++ b/Golden_Gate/example_01/delete_extr_repl.md
@@ -0,0 +1,16 @@
+## Delete an integrated replicat
+
+    dblogin useridalias SITHPRD
+    stop replicat REPLAB
+    unregister replicat REPLAB database
+    delete replicat REPLAB
+    info all
+
+## Delete an integrated extract
+
+    dblogin useridalias JEDIPRD
+    stop extract EXTRAB
+    unregister extract EXTRAB database
+    delete extract EXTRAB
+    info all
+
diff --git a/Golden_Gate/example_01/job_actions.sql b/Golden_Gate/example_01/job_actions.sql
new file mode 100644
index 0000000..8ebd300
--- /dev/null
+++ b/Golden_Gate/example_01/job_actions.sql
@@ -0,0 +1,20 @@
+--Stop the job (Disable)
+BEGIN
+    DBMS_SCHEDULER.disable('JOB_MANAGE_DATA');
+END;
+/
+
+
+--Restart the job
+BEGIN
+    DBMS_SCHEDULER.enable('JOB_MANAGE_DATA');
+END;
+/
+
+
+--Fully Remove the Job
+BEGIN
+    DBMS_SCHEDULER.drop_job('JOB_MANAGE_DATA');
+END;
+/
+
diff --git a/Golden_Gate/example_01/repair_failed_table.md b/Golden_Gate/example_01/repair_failed_table.md
new file mode 100644
index 0000000..7b1ffc2
--- /dev/null
+++ b/Golden_Gate/example_01/repair_failed_table.md
@@ -0,0 +1,195 @@
+## Context
+Replicat is ABENDED because of a data issue.
+The aim is to re-establish the replicat and minimize the downtime.
+
+## Provoke a failure on replicat
+On the target database truncate the RED.TRANSACTIONS table:
+
+    truncate table RED.TRANSACTIONS;
+
+Replicat will be abended because of update/delete operations:
+
+    status replicat REPLAA
+    REPLICAT REPLAA: ABENDED
+
+## Remove the table from the replicat
+
+Comment out the MAP line for the TRANSACTIONS table in the replicat parameters and restart the replicat.
+
+    edit params REPLAA
+
+    replicat REPLAA
+    useridalias MAUL
+    dboptions enable_instantiation_filtering
+    discardfile REPLAA.dsc, purge, megabytes 10
+
+    map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
+    map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
+    map YODA.GREEN.USERS , target MAUL.RED.USERS ;
+
+
+    -- map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
+    map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;
+
+
+    start replicat REPLAA
+
+At this moment the replicat should be **RUNNING**.
+
+## Create a dedicated extract/replicat for the failed table
+
+Create a second extract EXTRAB to manage this table.
+Define extract parameters:
+
+    edit params EXTRAB
+
+    extract EXTRAB
+    useridalias JEDIPRD
+    sourcecatalog YODA
+    exttrail ./dirdat/ab
+    purgeoldextracts
+    checkpointsecs 1
+    ddl include mapped
+    warnlongtrans 1h, checkinterval 30m
+    table GREEN.TRANSACTIONS;
+
+Add, register and start the extract:
+
+    dblogin useridalias JEDIPRD
+    add extract EXTRAB, integrated tranlog, begin now
+    add exttrail ./dirdat/ab, extract EXTRAB
+    register extract EXTRAB, database container (YODA)
+    start extract EXTRAB
+    info extract EXTRAB detail
+
+> Start the **distribution path** (aka **PUMP**) if the replicat is running on a distant site (GoldenGate deployment)
+
+## Initial load
+
+Note down the current SCN on the source database.
+
+    SQL> select current_scn from v$database;
+
+    CURRENT_SCN
+    -----------
+       12234159
+
+On the target DB, create the table structure for TRANSACTIONS and do the initial load:
+
+    SCN=12234159
+    impdp userid=admin/"Secret00!"@togoria/MAUL network_link=GREEN_AT_YODA logfile=MY:import_03.log remap_schema=GREEN:RED tables=GREEN.TRANSACTIONS TABLE_EXISTS_ACTION=TRUNCATE flashback_scn=$SCN
+
+## New replicat setup
+
+Define replicat parameters.
+Make sure the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clause is set to the SCN of the initial Data Pump load.
+
+    edit params REPLAB
+
+    replicat REPLAB
+    useridalias MAUL
+    dboptions enable_instantiation_filtering
+    discardfile REPLAB.dsc, purge, megabytes 10
+
+    map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 12234159);
+
+Add and start the new replicat:
+
+    add replicat REPLAB, integrated, exttrail ./dirdat/ab
+    dblogin useridalias SITHPRD
+    register replicat REPLAB database
+    start replicat REPLAB
+    info all
+
+Check that the new replicat is running and wait for the lag to reach 0.
+
+## Reintegrate the table into the initial extract/replicat
+
+Now the TRANSACTIONS table is replicated by EXTRAB/REPLAB, but not by the initial replication EXTRAA/REPLAA.
+Let's reintegrate TRANSACTIONS into the initial replication EXTRAA/REPLAA.
+Note that TRANSACTIONS was not removed from the EXTRAA definition, so all table changes are still recorded in the EXTRAA trail files.
+
+Stop the extracts in this **strict order**:
+- **first** extract: EXTRAA
+- **second** extract: EXTRAB
+
+> It is **mandatory** to stop the extracts in this order.
+> **The SCN applied to the first replicat's tables must be less than the SCN applied by the second replicat**, so that the first replicat can restart from the last applied position in the trail file. This way, the first replicat does not need to be repositioned into the past.
+
+    stop EXTRACT EXTRAA
+    stop EXTRACT EXTRAB
+
+Now stop both replicats as well:
+
+    stop replicat REPLAA
+    stop replicat REPLAB
+
+Note down the SCN for each extract and prepare the new params file for the initial replicat.
+
+    info extract EXTRAA detail
+    info extract EXTRAB detail
+
+In my case:
+- EXTRAA: SCN=12245651
+- EXTRAB: SCN=12245894
+
+> The SCN of EXTRAB should be greater than the SCN of EXTRAA
+
+Update the REPLAA replicat parameter file with the latest SCN applied to the TRANSACTIONS table (the SCN of EXTRAB):
+
+    edit params REPLAA
+
+    replicat REPLAA
+    useridalias MAUL
+    dboptions enable_instantiation_filtering
+    discardfile REPLAA.dsc, purge, megabytes 10
+
+    map YODA.GREEN.ORDERS, target MAUL.RED.ORDERS ;
+    map YODA.GREEN.PRODUCTS, target MAUL.RED.PRODUCTS ;
+    map YODA.GREEN.USERS, target MAUL.RED.USERS ;
+    map YODA.GREEN.TASKS, target MAUL.RED.TASKS ;
+
+    map YODA.GREEN.TRANSACTIONS, target MAUL.RED.TRANSACTIONS, filter(@GETENV ('TRANSACTION','CSN') > 12245894);
+
+
+Start the first extract/replicat:
+
+    start extract EXTRAA
+    start replicat REPLAA
+
+When the lag is zero you can remove the `filter(@GETENV ('TRANSACTION','CSN') > ...)` clause from REPLAA.
+
+    stop replicat REPLAA
+
+    edit params REPLAA
+
+    replicat REPLAA
+    useridalias MAUL
+    dboptions enable_instantiation_filtering
+    discardfile REPLAA.dsc, purge, megabytes 10
+
+    map YODA.GREEN.ORDERS , target MAUL.RED.ORDERS ;
+    map YODA.GREEN.PRODUCTS , target MAUL.RED.PRODUCTS ;
+    map YODA.GREEN.USERS , target MAUL.RED.USERS ;
+    map YODA.GREEN.TASKS , target MAUL.RED.TASKS ;
+
+    map YODA.GREEN.TRANSACTIONS , target MAUL.RED.TRANSACTIONS ;
+
+Restart the REPLAA replicat:
+
+    start replicat REPLAA
+
+Now all tables are integrated into the first extract/replicat.
+
+## Remove second extract/replicat
+
+    dblogin useridalias JEDIPRD
+    unregister extract EXTRAB database
+    delete extract EXTRAB
+
+    dblogin useridalias MAUL
+    unregister replicat REPLAB database
+    delete replicat REPLAB
+
+Stop and delete the **distribution path** (aka **PUMP**) if the replicat is running on a distant site (GoldenGate deployment).
+
diff --git a/Golden_Gate/example_01/worlkoad_as_job.sql b/Golden_Gate/example_01/worlkoad_as_job.sql
new file mode 100644
index 0000000..d623e46
--- /dev/null
+++ b/Golden_Gate/example_01/worlkoad_as_job.sql
@@ -0,0 +1,91 @@
+-- Step 1: Create the stored procedure
+CREATE OR REPLACE PROCEDURE manage_data IS
+    new_products INTEGER default 3;
+    new_orders INTEGER default 10;
+    new_users INTEGER default 2;
+    new_transactions INTEGER default 20;
+    new_tasks INTEGER default 5;
+BEGIN
+    FOR i IN 1..new_products LOOP
+        INSERT INTO products (id, name, category, quantity)
+        VALUES (seq_products.NEXTVAL,
+                DBMS_RANDOM.STRING('A', 10),
+                DBMS_RANDOM.STRING('A', 20),
+                TRUNC(DBMS_RANDOM.VALUE(1, 100)));
+    END LOOP;
+
+
+    FOR i IN 1..new_orders LOOP
+        INSERT INTO orders (id, description, status)
+        VALUES (seq_orders.NEXTVAL,
+                DBMS_RANDOM.STRING('A', 50),
+                DBMS_RANDOM.STRING('A', 20));
+    END LOOP;
+
+
+    FOR i IN 1..new_users LOOP
+        INSERT INTO users (id, created_at, username, age, location)
+        VALUES (seq_users.NEXTVAL, SYSDATE,
+                DBMS_RANDOM.STRING('A', 15),
TRUNC(DBMS_RANDOM.VALUE(18, 60)), + DBMS_RANDOM.STRING('A', 20)); + END LOOP; + + + FOR i IN 1..new_transactions LOOP + INSERT INTO transactions (id, amount, currency) + VALUES (seq_transactions.NEXTVAL, + ROUND(DBMS_RANDOM.VALUE(1, 10000), 2), + DBMS_RANDOM.STRING('A', 3)); + END LOOP; + + + FOR i IN 1..new_tasks LOOP + INSERT INTO tasks (id, status, priority, type, assigned_to) + VALUES (seq_tasks.NEXTVAL, + DBMS_RANDOM.STRING('A', 20), + TRUNC(DBMS_RANDOM.VALUE(1, 10)), + DBMS_RANDOM.STRING('A', 20), + DBMS_RANDOM.STRING('A', 15)); + END LOOP; + + -- Update 2 random rows in each table + UPDATE products SET quantity = TRUNC(DBMS_RANDOM.VALUE(1, 200)) + WHERE id IN (SELECT id FROM products ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY); + + UPDATE orders SET status = DBMS_RANDOM.STRING('A', 20) + WHERE id IN (SELECT id FROM orders ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY); + + UPDATE users SET age = TRUNC(DBMS_RANDOM.VALUE(18, 75)) + WHERE id IN (SELECT id FROM users ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY); + + UPDATE transactions SET amount = ROUND(DBMS_RANDOM.VALUE(1, 5000), 2) + WHERE id IN (SELECT id FROM transactions ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY); + + UPDATE tasks SET priority = TRUNC(DBMS_RANDOM.VALUE(1, 10)) + WHERE id IN (SELECT id FROM tasks ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 2 ROWS ONLY); + + -- Delete 1 random row from each table + DELETE FROM products WHERE id = (SELECT id FROM products ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY); + DELETE FROM orders WHERE id = (SELECT id FROM orders ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY); + DELETE FROM users WHERE id = (SELECT id FROM users ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY); + DELETE FROM transactions WHERE id = (SELECT id FROM transactions ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY); + DELETE FROM tasks WHERE id = (SELECT id FROM tasks ORDER BY DBMS_RANDOM.VALUE FETCH FIRST 1 ROW ONLY); + + COMMIT; +END; +/ + +-- Step 2: 
Create a scheduled job to run every 10 seconds
+BEGIN
+    DBMS_SCHEDULER.create_job (
+        job_name        => 'JOB_MANAGE_DATA',
+        job_type        => 'PLSQL_BLOCK',
+        job_action      => 'BEGIN manage_data; END;',
+        start_date      => SYSTIMESTAMP,
+        repeat_interval => 'FREQ=SECONDLY; INTERVAL=10',
+        enabled         => TRUE
+    );
+END;
+/
+
diff --git a/Golden_Gate/ogg_01.txt b/Golden_Gate/ogg_01.txt
new file mode 100644
index 0000000..24206b8
--- /dev/null
+++ b/Golden_Gate/ogg_01.txt
@@ -0,0 +1,74 @@
+https://www.dbi-services.com/blog/setting-up-a-sample-replication-with-goldengate/
+
+
+# source: 19c database, schema OTTER, NON-CDB //togoria:1521/ANDOPRD
+# target: 21c database, schema BEAVER, PDB //bakura:1521/WOMBAT
+
+
+-- on source DB
+create user OTTER identified by "K91@9kLorg1j_7OxV";
+grant connect,resource to OTTER;
+alter user OTTER quota unlimited on USERS;
+
+-- on target DB
+create user BEAVER identified by "Versq99#LerB009aX";
+grant connect,resource to BEAVER;
+alter user BEAVER quota unlimited on USERS;
+
+# on BOTH databases
+###################
+
+# check if ARCHIVELOG mode is ON
+archive log list;
+
+# activate integrated OGG replication
+alter system set enable_goldengate_replication=TRUE scope=both sid='*';
+
+# put databases in FORCE LOGGING mode
+alter database force logging;
+
+# add supplemental log
+alter database add supplemental log data;
+
+# create a GoldenGate admin user
+create user OGGADMIN identified by "eXtpam!ZarghOzVe81p@1";
+grant create session to OGGADMIN;
+grant select any dictionary to OGGADMIN;
+exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE ('OGGADMIN');
+grant flashback any table to OGGADMIN;
+
+# test GoldenGate admin user connections
+sqlplus /nolog
+connect OGGADMIN/"eXtpam!ZarghOzVe81p@1"@//togoria:1521/ANDOPRD
+connect OGGADMIN/"eXtpam!ZarghOzVe81p@1"@//bakura:1521/WOMBAT
+
+
+# create tables to replicate on source DB
+create table OTTER.T1(d date);
+
+
+ggsci
+create wallet
+add credentialstore
+alter credentialstore add user
OGGADMIN@//togoria:1521/ANDOPRD password "eXtpam!ZarghOzVe81p@1" alias ANDOPRD
+info credentialstore
+
+dblogin useridalias ANDOPRD
+add trandata OTTER.T1
+
+
+
+
+
+# cleanup
+#########
+# on source DB
+drop user OTTER cascade;
+drop user OGGADMIN cascade;
+# on target DB
+drop user BEAVER cascade;
+drop user OGGADMIN cascade;
+
+
+
+
diff --git a/Golden_Gate/ogg_02.txt b/Golden_Gate/ogg_02.txt
new file mode 100644
index 0000000..dd18ff7
--- /dev/null
+++ b/Golden_Gate/ogg_02.txt
@@ -0,0 +1,128 @@
+alias gg='rlwrap /app/oracle/product/ogg21/ggsci'
+
+create user OGGADMIN identified by "eXtpam!ZarghOzVe81p@1";
+# maybe too much
+grant DBA to OGGADMIN;
+
+add credentialstore
+info credentialstore domain admin
+alter credentialstore add user OGGADMIN@//togoria:1521/ANDOPRD password "eXtpam!ZarghOzVe81p@1" alias ANDOPRD domain admin
+dblogin useridalias ANDOPRD domain admin
+list tables OTTER.*
+# delete trandata OTTER.*
+add trandata OTTER.*
+
+Edit params ./GLOBALS
+#-->
+GGSCHEMA OGGADMIN
+#<--
+
+edit params myextr1
+#-->
+EXTRACT myextr1
+USERID OGGADMIN@//togoria:1521/ANDOPRD, PASSWORD "eXtpam!ZarghOzVe81p@1"
+EXTTRAIL ./dirdat/ex
+CHECKPOINTSECS 1
+TABLE OTTER.*;
+#<--
+
+
+ADD EXTRACT myextr1, TRANLOG, BEGIN now
+REGISTER EXTRACT myextr1, DATABASE
+ADD EXTTRAIL ./dirdat/ex, EXTRACT myextr1
+START EXTRACT myextr1
+info myextr1
+
+edit params mypump1
+#-->
+EXTRACT mypump1
+PASSTHRU
+RMTHOST bakura, MGRPORT 7809
+RMTTRAIL ./dirdat/RT
+CHECKPOINTSECS 1
+TABLE OTTER.*;
+#<--
+
+
+
+ADD EXTRACT mypump1, EXTTRAILSOURCE ./dirdat/ex
+Add RMTTRAIL ./dirdat/rt, EXTRACT mypump1
+START EXTRACT mypump1
+info mypump1
+
+add checkpointtable OGGADMIN.checkpointtable
+
+
+add credentialstore
+info credentialstore domain admin
+alter credentialstore add user OGGADMIN@//bakura:1521/EWOKPRD password "eXtpam!ZarghOzVe81p@1" alias EWOKPRD domain admin
+dblogin useridalias EWOKPRD domain admin
+
+
+add checkpointtable OGGADMIN.checkpointtable
+
+edit params myrepl1
+#-->
+REPLICAT
myrepl1 +USERID OGGADMIN@//bakura:1521/EWOKPRD, PASSWORD "eXtpam!ZarghOzVe81p@1" +DISCARDFILE ./dirdsc/myrepl1.dsc, PURGE +ASSUMETARGETDEFS +MAP OTTER.*, TARGET OTTER.*; +#<-- + +add replicat myrepl1, EXTTRAIL ./dirdat/RT, checkpointtable OGGADMIN.checkpointtable + +start MYREPL1 + +create spfile='/app/oracle/base/admin/EWOKPRD/spfile/spfileEWOKPRD.ora' from pfile='/mnt/yavin4/tmp/_oracle_/tmp/ANDO.txt'; + +# create a static listener to connect as sysdba in NOMOUNT state + +oracle@bakura[EWOKPRD]:/mnt/yavin4/tmp/_oracle_/tmp$ cat listener.ora + +MYLSNR = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCP)(HOST = bakura)(PORT = 1600)) + ) + ) + +SID_LIST_MYLSNR = + (SID_LIST = + (SID_DESC = + (GLOBAL_DBNAME = EWOKPRD_STATIC) + (SID_NAME = EWOKPRD) + (ORACLE_HOME = /app/oracle/product/19) + ) + ) + + +export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/tmp +lsnrctl start MYLSNR +lsnrctl status MYLSNR + + +connect sys/"Secret00!"@//bakura:1600/EWOKPRD_STATIC as sysdba +connect sys/"Secret00!"@//togoria:1521/ANDOPRD as sysdba + + +rman target=sys/"Secret00!"@//togoria:1521/ANDOPRD auxiliary=sys/"Secret00!"@//bakura:1600/EWOKPRD_STATIC +run { + allocate channel pri1 device type DISK; + allocate channel pri2 device type DISK; + allocate channel pri3 device type DISK; + allocate channel pri4 device type DISK; + allocate auxiliary channel aux1 device type DISK; + allocate auxiliary channel aux2 device type DISK; + allocate auxiliary channel aux3 device type DISK; + allocate auxiliary channel aux4 device type DISK; + duplicate target database to 'EWOK' + from active database + using compressed backupset section size 1G; +} + + + + + + diff --git a/Golden_Gate/ogg_03.txt b/Golden_Gate/ogg_03.txt new file mode 100644 index 0000000..c2e58bd --- /dev/null +++ b/Golden_Gate/ogg_03.txt @@ -0,0 +1,147 @@ +-- https://www.dbi-services.com/blog/performing-an-initial-load-with-goldengate-1-file-to-replicat/ +-- 
https://www.dbi-services.com/blog/performing-an-initial-load-with-goldengate-2-expdpimpdp/ + +Source DB: ANDOPRD@togoria +Target DB: EWOKPRD@bakura + +alias gg='rlwrap /app/oracle/product/ogg21/ggsci' + +# install HR schema on source database +@install.sql + +# install HR schema on target database, disable constraints and delete all data +@install.sql + +connect / as sysdba +declare + lv_statement varchar2(2000); +begin + for r in ( select c.CONSTRAINT_NAME, c.TABLE_NAME + from dba_constraints c + , dba_tables t + where c.owner = 'HR' + and t.table_name = c.table_name + and t.owner = 'HR' + and c.constraint_type != 'P' + ) + loop + lv_statement := 'alter table hr.'||r.TABLE_NAME||' disable constraint '||r.CONSTRAINT_NAME; + execute immediate lv_statement; + end loop; + for r in ( select table_name + from dba_tables + where owner = 'HR' + ) + loop + execute immediate 'delete hr.'||r.table_name; + end loop; +end; +/ + +select count(*) from hr.employees; +select count(*) from hr.jobs; + +# create OGGADMIN user on both databases +create user OGGADMIN identified by "Chan8em11fUwant!"; +grant dba to OGGADMIN; + + +# on source machine +add credentialstore +info credentialstore domain admin +alter credentialstore add user OGGADMIN@//togoria:1521/ANDOPRD password "Chan8em11fUwant!" alias ANDOPRD domain admin +info credentialstore domain admin +dblogin useridalias ANDOPRD domain admin + +# on target machine +add credentialstore +info credentialstore domain admin +alter credentialstore add user OGGADMIN@//bakura:1521/EWOKPRD password "Chan8em11fUwant!" 
alias EWOKPRD domain admin
+info credentialstore domain admin
+dblogin useridalias EWOKPRD domain admin
+
+
+# on source machine
+dblogin useridalias ANDOPRD domain admin
+list tables HR.*
+add trandata HR.*
+
+
+# on source, in order to catch transactions during the initial load, we will create an extract for Change Data Capture
+
+edit params extrcdc1
+-------------------------------->
+EXTRACT extrcdc1
+useridalias ANDOPRD domain admin
+EXTTRAIL ./dirdat/gg
+LOGALLSUPCOLS
+UPDATERECORDFORMAT compact
+TABLE HR.*;
+TABLEEXCLUDE HR.EMP_DETAILS_VIEW;
+<--------------------------------
+
+dblogin useridalias ANDOPRD domain admin
+register extract extrcdc1 database
+
+add extract extrcdc1, integrated tranlog, begin now
+add exttrail ./dirdat/gg, extract extrcdc1, megabytes 5
+
+# on source, configure the datapump
+edit params dppump1
+-------------------------------->
+EXTRACT dppump1
+PASSTHRU
+RMTHOST bakura, MGRPORT 7809
+RMTTRAIL ./dirdat/jj
+TABLE HR.*;
+TABLEEXCLUDE HR.EMP_DETAILS_VIEW;
+<--------------------------------
+
+add extract dppump1, exttrailsource ./dirdat/gg
+add rmttrail ./dirdat/jj, extract dppump1, megabytes 5
+
+# on source, start the CDC capture and datapump extracts
+start extract dppump1
+start extract extrcdc1
+info *
+
+# on target, configure replicat for CDC
+
+edit params replcdd
+-------------------------------->
+REPLICAT replcdd
+ASSUMETARGETDEFS
+DISCARDFILE ./dirrpt/replcdd.dsc, purge
+useridalias EWOKPRD domain admin
+MAP HR.*, TARGET HR.*;
+<--------------------------------
+
+dblogin useridalias EWOKPRD domain admin
+add replicat replcdd, integrated, exttrail ./dirdat/jj
+
+# We will NOT START the replicat right now as we want to do the initial load first
+
+# Note down the current scn of the source database
+SQL> select current_scn from v$database;
+
+CURRENT_SCN
+-----------
+    3968490
+
+# on destination, import HR schema
+create public database link ANDOPRD
connect to OGGADMIN identified by "Chan8em11fUwant!" using '//togoria:1521/ANDOPRD'; +select * from DUAL@ANDOPRD; + +impdp userid=OGGADMIN/"Chan8em11fUwant!"@//bakura:1521/EWOKPRD logfile=MY:HR.log network_link=ANDOPRD schemas=HR flashback_scn=3968490 + +start replicat replcdd, aftercsn 3968490 + + + + + + + diff --git a/Golden_Gate/ogg_04.txt b/Golden_Gate/ogg_04.txt new file mode 100644 index 0000000..54344ed --- /dev/null +++ b/Golden_Gate/ogg_04.txt @@ -0,0 +1,416 @@ +# setup source schema +##################### + +create user WOMBAT identified by "NDbGvewNHVj8@#2FFGfz!De"; +grant connect, resource to WOMBAT; +alter user WOMBAT quota unlimited on USERS; + +connect WOMBAT/"NDbGvewNHVj8@#2FFGfz!De"; + +drop table T0 purge; +drop table T1 purge; +drop table T2 purge; +drop table T3 purge; + +create table JOB ( + id NUMBER GENERATED ALWAYS AS IDENTITY, + d DATE not null +); +alter table JOB add constraint JOB_PK_ID primary key (ID); + + +create table T0 ( + id NUMBER GENERATED ALWAYS AS IDENTITY, + d DATE not null, + c VARCHAR2(20), + n NUMBER +) +partition by range (d) + interval (interval '1' MONTH) ( + partition p0 values less than (DATE'2000-01-01') + ) +; + +alter table T0 add constraint T0_PK_ID primary key (ID); + +create table T1 ( + d DATE not null, + c VARCHAR2(10), + n1 NUMBER, + n2 NUMBER +) +partition by range (d) + interval (interval '1' MONTH) ( + partition p0 values less than (DATE'2000-01-01') + ) +; + +create table T2 ( + d DATE not null, + n1 NUMBER, + n2 NUMBER, + n3 NUMBER +) +partition by range (d) + interval (interval '1' MONTH) ( + partition p0 values less than (DATE'2000-01-01') + ) +; + +create table T3 ( + d DATE not null, + n NUMBER, + c1 VARCHAR2(10), + c2 VARCHAR2(10), + c3 VARCHAR2(10) +) +partition by range (d) + interval (interval '1' MONTH) ( + partition p0 values less than (DATE'2000-01-01') + ) +; + + +CREATE OR REPLACE FUNCTION random_date( + p_from IN DATE, + p_to IN DATE +) RETURN DATE +IS +BEGIN + RETURN p_from + 
DBMS_RANDOM.VALUE() * (p_to -p_from); +END random_date; +/ + +CREATE OR REPLACE FUNCTION random_string( + maxsize IN NUMBER +) RETURN VARCHAR2 +IS +BEGIN + RETURN dbms_random.string('x',maxsize); +END random_string; +/ + +CREATE OR REPLACE FUNCTION random_integer( + maxvalue IN NUMBER +) RETURN NUMBER +IS +BEGIN + RETURN trunc(dbms_random.value(1,maxvalue)); +END random_integer; +/ + +# add some data into tables +########################### + +set timing ON + +DECLARE + imax NUMBER default 100000; + i number; +begin + dbms_random.seed (val => 0); + for i in 1 .. imax loop + insert /*+ APPEND */ into T0 (d,c,n) values (random_date(DATE'2000-01-01',SYSDATE),random_string(20),random_integer(999999999)); + end loop; + commit; +end; +/ + +DECLARE + imax NUMBER default 100000; + i number; +begin + dbms_random.seed (val => 0); + for i in 1 .. imax loop + insert /*+ APPEND */ into T1 (d,c,n1,n2) values (random_date(DATE'2000-01-01',SYSDATE),random_string(10),random_integer(999999999),random_integer(999999999)); + end loop; + commit; +end; +/ + +DECLARE + imax NUMBER default 100000; + i number; +begin + dbms_random.seed (val => 0); + for i in 1 .. imax loop + insert /*+ APPEND */ into T2 (d,n1,n2,n3) values (random_date(DATE'2000-01-01',SYSDATE),random_integer(999999999),random_integer(999999999),random_integer(999999999)); + end loop; + commit; +end; +/ + +DECLARE + imax NUMBER default 100000; + i number; +begin + dbms_random.seed (val => 0); + for i in 1 .. 
imax loop
+        insert /*+ APPEND */ into T3 (d,n,c1,c2,c3) values (random_date(DATE'2000-01-01',SYSDATE),random_integer(999999999),random_string(10),random_string(10),random_string(10));
+    end loop;
+    commit;
+end;
+/
+
+
+# run this PL/SQL block to generate living data
+###############################################
+connect WOMBAT/"NDbGvewNHVj8@#2FFGfz!De";
+
+DECLARE
+    i number;
+begin
+    loop
+        sys.dbms_session.sleep(5);
+        dbms_random.seed (val => 0);
+        i:=random_integer(999999999);
+        insert into JOB (d) values (sysdate);
+
+        -- T1.c and T3.c1 are VARCHAR2(10): use random_string(10) to avoid ORA-12899
+        update T0 set c=random_string(20) where n=i;
+        update T1 set c=random_string(10) where n2 between i-1000 and i+1000;
+        update T2 set d=random_date(DATE'2000-01-01',SYSDATE) where n1 between i-1000 and i+1000;
+        update T3 set c1=random_string(10),d=random_date(DATE'2000-01-01',SYSDATE) where n between i-1000 and i+1000;
+
+        insert into T0 (d,c,n) values (random_date(DATE'2000-01-01',SYSDATE),random_string(20),random_integer(999999999));
+        insert into T1 (d,c,n1,n2) values (random_date(DATE'2000-01-01',SYSDATE),random_string(10),random_integer(999999999),random_integer(999999999));
+        insert into T2 (d,n1,n2,n3) values (random_date(DATE'2000-01-01',SYSDATE),random_integer(999999999),random_integer(999999999),random_integer(999999999));
+        insert into T3 (d,c1,c2,c3) values (random_date(DATE'2000-01-01',SYSDATE),random_string(10),random_string(10),random_string(10));
+
+        commit;
+        exit when 1=0;
+    end loop;
+end;
+/
+
+
+## Golden Gate setup
+####################
+
+# on source & destination
+alias gg='rlwrap /app/oracle/product/ogg21/ggsci'
+
+create user OGGADMIN identified by "eXtpam!ZarghOzVe81p@1";
+# maybe too much
+grant DBA to OGGADMIN;
+
+Edit params ./GLOBALS
+#-->
+GGSCHEMA OGGADMIN
+#<--
+
+# on source
+add credentialstore
+info credentialstore domain admin
+alter credentialstore add user OGGADMIN@//togoria:1521/ANDOPRD password "eXtpam!ZarghOzVe81p@1" alias ANDOPRD domain admin
+dblogin useridalias ANDOPRD domain admin
+
+# on
destination
+add credentialstore
+info credentialstore domain admin
+alter credentialstore add user OGGADMIN@//bakura:1521/EWOKPRD password "Chan8em11fUwant!" alias EWOKPRD domain admin
+info credentialstore domain admin
+dblogin useridalias EWOKPRD domain admin
+
+
+# setup replication only for tables T0, T1 and T2
+#################################################
+
+# on source machine
+dblogin useridalias ANDOPRD domain admin
+list tables WOMBAT.*
+add trandata WOMBAT.T0
+add trandata WOMBAT.T1
+add trandata WOMBAT.T2
+
+edit params extr_w1
+-------------------------------->
+EXTRACT extr_w1
+useridalias ANDOPRD domain admin
+EXTTRAIL ./dirdat/w1
+LOGALLSUPCOLS
+UPDATERECORDFORMAT compact
+table WOMBAT.T0;
+table WOMBAT.T1;
+table WOMBAT.T2;
+<--------------------------------
+
+dblogin useridalias ANDOPRD domain admin
+register extract extr_w1 database
+
+add extract extr_w1, integrated tranlog, begin now
+add exttrail ./dirdat/w1, extract extr_w1, megabytes 5
+
+start extr_w1
+info extr_w1
+
+# on source, configure the datapump
+edit params dpump_w1
+-------------------------------->
+EXTRACT dpump_w1
+PASSTHRU
+RMTHOST bakura, MGRPORT 7809
+RMTTRAIL ./dirdat/w1
+table WOMBAT.T0;
+table WOMBAT.T1;
+table WOMBAT.T2;
+<--------------------------------
+
+add extract dpump_w1, exttrailsource ./dirdat/w1
+add rmttrail ./dirdat/w1, extract dpump_w1, megabytes 5
+
+start dpump_w1
+info dpump_w1
+
+# on target, set up the replicat but do not start it yet
+edit params repl_w1
+-------------------------------->
+REPLICAT repl_w1
+ASSUMETARGETDEFS
+DISCARDFILE ./dirrpt/repl_w1.dsc, purge
+useridalias EWOKPRD domain admin
+MAP WOMBAT.T0, TARGET OTTER.T0;
+MAP WOMBAT.T1, TARGET OTTER.T1;
+MAP WOMBAT.T2, TARGET OTTER.T2;
+<--------------------------------
+
+dblogin useridalias EWOKPRD domain admin
+add replicat repl_w1, integrated, exttrail ./dirdat/w1
+
+# perform the initial LOAD
+##########################
+
+# Note down the current scn of the source database
+SQL> select
current_scn from v$database; + +CURRENT_SCN +----------- + 4531616 + +# on destination, import tables +create public database link ANDOPRD connect to OGGADMIN identified by "Chan8em11fUwant!" using '//togoria:1521/ANDOPRD'; +select * from DUAL@ANDOPRD; + +# create the target schema using the same DDL definition as on the source database +create user OTTER identified by "50DbGvewN00K@@)2FFGfzKg"; +grant connect, resource to OTTER; +alter user OTTER quota unlimited on USERS; + +impdp userid=OGGADMIN/"Chan8em11fUwant!"@//bakura:1521/EWOKPRD logfile=MY:WOMBAT_01.log network_link=ANDOPRD tables=WOMBAT.T0,WOMBAT.T1,WOMBAT.T2 flashback_scn=4531616 remap_schema=WOMBAT:OTTER + +start repl_w1, aftercsn 4531616 + +# when the LAG is caught up, restart the replicat +stop repl_w1 +start repl_w1 +info repl_w1 + +# add 2 tables to SYNC +###################### + +# on source, add the 2 tables to the extract & datapump +stop dpump_w1 +stop extr_w1 + +# add the new tables in the extract & datapump parameter files +edit params extr_w1 +--------------------------------> +EXTRACT extr_w1 +useridalias ANDOPRD domain admin +EXTTRAIL ./dirdat/w1 +LOGALLSUPCOLS +UPDATERECORDFORMAT compact +table WOMBAT.T0; +table WOMBAT.T1; +table WOMBAT.T2; +table WOMBAT.JOB; +table WOMBAT.T3; +<-------------------------------- + +# add trandata for the new tables +dblogin useridalias ANDOPRD domain admin +list tables WOMBAT.* +add trandata WOMBAT.JOB +add trandata WOMBAT.T3 + +start extr_w1 +info extr_w1 + +edit params dpump_w1 +--------------------------------> +EXTRACT dpump_w1 +PASSTHRU +RMTHOST bakura, MGRPORT 7809 +RMTTRAIL ./dirdat/w1 +table WOMBAT.T0; +table WOMBAT.T1; +table WOMBAT.T2; +table WOMBAT.JOB; +table WOMBAT.T3; +<-------------------------------- + +start dpump_w1 +info dpump_w1 + +# once extract & datapump are up and running, we will proceed with the initial load of the new tables using expdp/impdp +# Note down the current scn of the source database +SQL> select current_scn from v$database; + +CURRENT_SCN +----------- + 4675686 
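Why the same SCN has to drive both sides of the cutover: the Data Pump import ships a snapshot of the rows as of `flashback_scn`, and the replicat must then apply exactly the transactions committed after that SCN; a lower cutoff duplicates rows, a higher one loses them. A toy illustration of that bookkeeping in plain shell (CSN values hypothetical, not from these notes):

```shell
# toy model of the cutover: transactions carry CSNs 1..10, snapshot taken at CSN 6
SNAP_CSN=6
exported=$(seq 1 "$SNAP_CSN")               # rows shipped by impdp (flashback_scn=SNAP_CSN)
replicated=$(seq "$((SNAP_CSN + 1))" 10)    # transactions the replicat applies (CSN > SNAP_CSN)
printf '%s\n' $exported $replicated         # every CSN covered exactly once: 1..10
```

The union covers every transaction exactly once, which is the invariant the `flashback_scn` / `aftercsn` (or CSN filter) pairing enforces.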
 + +impdp userid=OGGADMIN/"Chan8em11fUwant!"@//bakura:1521/EWOKPRD logfile=MY:WOMBAT_02.log network_link=ANDOPRD tables=WOMBAT.JOB,WOMBAT.T3 flashback_scn=4675686 remap_schema=WOMBAT:OTTER + +# on target, stop the replicat, add the new tables and start FROM THE CORRECT SCN FOR THE NEW TABLES +stop repl_w1 + +edit params repl_w1 +--------------------------------> +REPLICAT repl_w1 +ASSUMETARGETDEFS +DISCARDFILE ./dirrpt/repl_w1.dsc, purge +useridalias EWOKPRD domain admin +MAP WOMBAT.T0, TARGET OTTER.T0; +MAP WOMBAT.T1, TARGET OTTER.T1; +MAP WOMBAT.T2, TARGET OTTER.T2; +MAP WOMBAT.JOB, TARGET OTTER.JOB, filter(@GETENV ('TRANSACTION','CSN') > 4675686); +MAP WOMBAT.T3, TARGET OTTER.T3, filter(@GETENV ('TRANSACTION','CSN') > 4675686); +<-------------------------------- + +start repl_w1 +info repl_w1 + +# when the lag is caught up, remove the SCN clauses from the replicat and restart + +stop repl_w1 + +edit params repl_w1 +--------------------------------> +REPLICAT repl_w1 +ASSUMETARGETDEFS +DISCARDFILE ./dirrpt/repl_w1.dsc, purge +useridalias EWOKPRD domain admin +MAP WOMBAT.T0, TARGET OTTER.T0; +MAP WOMBAT.T1, TARGET OTTER.T1; +MAP WOMBAT.T2, TARGET OTTER.T2; +MAP WOMBAT.JOB, TARGET OTTER.JOB; +MAP WOMBAT.T3, TARGET OTTER.T3; +<-------------------------------- + +start repl_w1 +info repl_w1 + + diff --git a/Golden_Gate/setup.md b/Golden_Gate/setup.md new file mode 100644 index 0000000..00945a9 --- /dev/null +++ b/Golden_Gate/setup.md @@ -0,0 +1,141 @@ +## Articles + +https://www.dbi-services.com/blog/how-to-create-an-oracle-goldengate-extract-in-multitenant/ +http://blog.data-alchemy.org/posts/oracle-goldengate-pluggable/ + +## Topology + +Databases: + - source: CDB: JEDIPRD@wayland, PDB: YODA + - target: CDB: SITHPRD@togoria, PDB: MAUL + +## Database setup for Golden Gate + +In **both** databases, create the Golden Gate admin user in `CDB$ROOT`: + + create user c##oggadmin identified by "Secret00!"; + alter user c##oggadmin quota unlimited on USERS; + grant create session, 
connect,resource,alter system, select any dictionary, flashback any table to c##oggadmin container=all; + exec dbms_goldengate_auth.grant_admin_privilege(grantee => 'c##oggadmin',container=>'all'); + alter user c##oggadmin set container_data=all container=current; + grant alter any table to c##oggadmin container=ALL; + alter system set enable_goldengate_replication=true scope=both; + alter database force logging; + alter database add supplemental log data; + select supplemental_log_data_min, force_logging from v$database; + +> On the **target** database I had to add extra grants: + + grant select any table to c##oggadmin container=ALL; + grant insert any table to c##oggadmin container=ALL; + grant update any table to c##oggadmin container=ALL; + grant delete any table to c##oggadmin container=ALL; + +Create the schemas for the replicated tables on the source and target PDB: + + alter session set container=YODA; + create user GREEN identified by "Secret00!"; + alter user GREEN quota unlimited on USERS; + grant connect,resource to GREEN; + connect GREEN/"Secret00!"@wayland/YODA; + + + alter session set container=MAUL; + create user RED identified by "Secret00!"; + alter user RED quota unlimited on USERS; + grant connect,resource to RED; + connect RED/"Secret00!"@togoria/MAUL; + +## Setup `exegol` Golden Gate deployment + +> My Root CA (added to the host truststore) was not recognized by `adminclient`, resulting in an OGG-12982 error, while `curl` works perfectly. + +Solution: define the `OGG_CLIENT_TLS_CAPATH` environment variable, pointing to my root CA certificate, prior to using `adminclient`: + + export OGG_CLIENT_TLS_CAPATH=/etc/pki/ca-trust/source/anchors/rootCA.pem + +Connect to the deployment with `adminclient`: + + adminclient + connect https://exegol.swgalaxy:2000 deployment ogg_exegol_deploy as OGGADMIN password "Secret00!" + +Optionally store credentials to connect to the deployment: + + add credentials admin user OGGADMIN password "Secret00!" 
 + +Now we can hide the password when connecting to the deployment: + + connect https://exegol.swgalaxy:2000 deployment ogg_exegol_deploy as admin + +Add credentialstore entries for the database connections: + + create wallet + add credentialstore + alter credentialstore add user c##oggadmin@wayland/JEDIPRD password "Secret00!" alias JEDIPRD + alter credentialstore add user c##oggadmin@wayland/YODA password "Secret00!" alias YODA + info credentialstore + +Test database connections: + + dblogin useridalias JEDIPRD + dblogin useridalias YODA + +To delete a user from the credential store: + + alter credentialstore delete user JEDIPRD + +> IMPORTANT: in a **MULTITENANT** database architecture, Golden Gate works at the `CDB$ROOT` level. + +Create the checkpoint table: + + dblogin useridalias JEDIPRD + add checkpointtable YODA.c##oggadmin.checkpt + +Set **global** parameters: + + edit GLOBALS + +Put: + + ggschema c##oggadmin + checkpointtable YODA.c##oggadmin.checkpt + + +## Setup `helska` Golden Gate deployment + + adminclient + connect https://helska.swgalaxy:2000 deployment ogg_helska_deploy as OGGADMIN password "Secret00!" + +Optionally store credentials to connect to the deployment: + + add credentials admin user OGGADMIN password "Secret00!" + +Now we can hide the password when connecting to the deployment: + + connect https://helska.swgalaxy:2000 deployment ogg_helska_deploy as admin + +Add credentialstore entries for the database connections: + + alter credentialstore add user c##oggadmin@togoria/SITHPRD password "Secret00!" alias SITHPRD + alter credentialstore add user c##oggadmin@togoria/MAUL password "Secret00!" 
alias MAUL + info credentialstore + +Test database connections: + + dblogin useridalias SITHPRD + dblogin useridalias MAUL + +Create the checkpoint table: + + dblogin useridalias SITHPRD + add checkpointtable MAUL.c##oggadmin.checkpt + +Set **global** parameters: + + edit GLOBALS + +Put: + + ggschema c##oggadmin + checkpointtable MAUL.c##oggadmin.checkpt + diff --git a/Oracle_26_AI/install_01.md b/Oracle_26_AI/install_01.md new file mode 100644 index 0000000..c4f9b8d --- /dev/null +++ b/Oracle_26_AI/install_01.md @@ -0,0 +1,39 @@ +Packages to install before executing `runInstaller`: + +```bash +dnf install fontconfig.x86_64 compat-openssl11.x86_64 -y +``` + +Script for **standalone** database creation: + +```bash +#!/bin/bash + +DB_NAME=DEFENDER +ORACLE_UNQNAME=DEFENDERPRD +PDB_NAME=SENTINEL +SYS_PWD="Secret00!" +PDB_PWD="Secret00!" + +dbca -silent \ + -createDatabase \ + -templateName General_Purpose.dbc \ + -gdbname ${ORACLE_UNQNAME} \ + -sid ${DB_NAME} \ + -createAsContainerDatabase true \ + -numberOfPDBs 1 \ + -pdbName ${PDB_NAME} \ + -pdbAdminPassword ${PDB_PWD} \ + -sysPassword ${SYS_PWD} \ + -systemPassword ${SYS_PWD} \ + -datafileDestination /data \ + -storageType FS \ + -useOMF true \ + -recoveryAreaDestination /reco \ + -recoveryAreaSize 10240 \ + -characterSet AL32UTF8 \ + -nationalCharacterSet AL16UTF16 \ + -databaseType MULTIPURPOSE \ + -automaticMemoryManagement false \ + -totalMemory 3072 +``` + diff --git a/Oracle_TLS/oracle_tls_01.md b/Oracle_TLS/oracle_tls_01.md new file mode 100644 index 0000000..2afee90 --- /dev/null +++ b/Oracle_TLS/oracle_tls_01.md @@ -0,0 +1,268 @@ +# Setup 1: self-signed certificates and certificate exchange + +## Server side (togoria) + +Create the wallet: + + orapki wallet create \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -auto_login_local + + +Create certificate in wallet: + + orapki wallet add \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd 
"C0mpl1cated#Ph|rase" \ + -dn "CN=togoria.swgalaxy" -keysize 1024 -self_signed -validity 3650 + +Display wallet contents (wallet password is not required): + + orapki wallet display \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" + +Export certificate: + + orapki wallet export \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -dn "CN=togoria.swgalaxy" \ + -cert /app/oracle/staging_area/TLS_poc/exports/togoria.swgalaxy.crt + +## Client side (wayland) + +Create the wallet: + + orapki wallet create \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -auto_login_local + +Create certificate in wallet: + + orapki wallet add \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -dn "CN=wayland.swgalaxy" -keysize 1024 -self_signed -validity 3650 + +Display wallet contents (wallet password is not required): + + orapki wallet display \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" + +Export certificate: + + orapki wallet export \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -dn "CN=wayland.swgalaxy" \ + -cert /app/oracle/staging_area/TLS_poc/exports/wayland.swgalaxy.crt + +## Exchange certificates between server and client + +Load client certificate into server wallet as **trusted** certificate: + + orapki wallet add \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -trusted_cert -cert /app/oracle/staging_area/TLS_poc/exports/wayland.swgalaxy.crt + +Load server certificate into client wallet as **trusted** certificate: + + orapki wallet add \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -trusted_cert -cert /app/oracle/staging_area/TLS_poc/exports/togoria.swgalaxy.crt + +## Server side (togoria) + +> It is not possible to use a custom `TNS_ADMIN` for the listener. 
`sqlnet.ora` and `listener.ora` should be placed under `$(orabasehome)/network/admin` for a **read-only** `ORACLE_HOME`, or under `$ORACLE_HOME/network/admin` for a **read-write** `ORACLE_HOME`. + +File `sqlnet.ora`: + + WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /app/oracle/staging_area/TLS_poc/wallet) + ) + ) + + SQLNET.AUTHENTICATION_SERVICES = (TCPS,NTS,BEQ) + SSL_CLIENT_AUTHENTICATION = FALSE + SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA) + + +File `listener.ora`: + + SSL_CLIENT_AUTHENTICATION = FALSE + + WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /app/oracle/staging_area/TLS_poc/wallet) + ) + ) + + LISTENER_SECURE = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCPS)(HOST = togoria.swgalaxy)(PORT = 24000)) + ) + ) + + +Start the listener: + + lsnrctl start LISTENER_SECURE + +Register the listener in the database: + + alter system set local_listener="(DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCPS)(HOST = togoria.swgalaxy)(PORT = 24000)) + ) + )" + scope=both sid='*'; + + alter system register; + +## Client network configuration + + export TNS_ADMIN=/app/oracle/staging_area/TLS_poc/tnsadmin + +File `$TNS_ADMIN/sqlnet.ora`: + + WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /app/oracle/staging_area/TLS_poc/wallet) + ) + ) + + SQLNET.AUTHENTICATION_SERVICES = (TCPS,NTS) + SSL_CLIENT_AUTHENTICATION = FALSE + SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA) + + +File `$TNS_ADMIN/tnsnames.ora`: + + MAUL_24000= + (DESCRIPTION= + (ADDRESS= + (PROTOCOL=TCPS)(HOST=togoria.swgalaxy)(PORT=24000) + ) + (CONNECT_DATA= + (SERVICE_NAME=MAUL) + ) + ) + + +Check the **TCPS** connection: + + connect vpl/*****@MAUL_24000 + + select SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') from dual; + + +# Setup 2: use certificates signed by a CA Root + +Stop the listener: + + lsnrctl stop 
LISTENER_SECURE + +Remove trusted/user certificates and certificate requests on **server** side. + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -trusted_cert \ + -alias 'CN=togoria.swgalaxy' + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -trusted_cert \ + -alias 'CN=wayland.swgalaxy' + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -user_cert \ + -dn 'CN=togoria.swgalaxy' + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -cert_req \ + -dn 'CN=togoria.swgalaxy' + +Remove trusted/user certificates and certificate requests on **client** side. + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -trusted_cert \ + -alias 'CN=togoria.swgalaxy' + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -trusted_cert \ + -alias 'CN=wayland.swgalaxy' + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -user_cert \ + -dn 'CN=wayland.swgalaxy' + + orapki wallet remove \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -cert_req \ + -dn 'CN=wayland.swgalaxy' + +Check if wallets are empty client/server side. + + orapki wallet display \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" + +We will use certificates signed by the same CA Root for the client and for the server. 
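The `openssl_files` used below (server certificate, server key and `rootCA.pem`) are assumed to already exist; these notes do not show how they were produced. One conventional way to create a root CA and sign a server certificate with it — a sketch only, with hypothetical file names and subjects, run in a throwaway directory:

```shell
# sketch: build a demo root CA and a CA-signed server certificate
cd "$(mktemp -d)"
# 1. create a self-signed root CA (key + certificate)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=demoRootCA" -keyout rootCA.key -out rootCA.pem
# 2. create a server key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=togoria.swgalaxy" -keyout server.key -out server.csr
# 3. sign the CSR with the root CA
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -days 365 -out server.crt
# 4. verify the chain
openssl verify -CAfile rootCA.pem server.crt    # -> server.crt: OK
```

The resulting `server.crt`/`server.key`/`rootCA.pem` triple plays the role of the `togoria.swgalaxy.crt`, `togoria.swgalaxy.key` and `rootCA.pem` files referenced in the commands that follow.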
+ +Create an export file using the server certificate, server private key and CA Root certificate: + + openssl pkcs12 -export \ + -in /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.crt \ + -inkey /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.key \ + -certfile /app/oracle/staging_area/TLS_poc/openssl_files/rootCA.pem \ + -out /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.p12 + +Import into Oracle wallet: + + orapki wallet import_pkcs12 \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "C0mpl1cated#Ph|rase" \ + -pkcs12file /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.p12 + +Server certificate will be imported as **user** certificate and CA Root certificate will be imported as **trusted** certificate. + +Perform the same certificate export-import operation client side: + + openssl pkcs12 -export \ + -in /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.crt \ + -inkey /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.key \ + -certfile /app/oracle/staging_area/TLS_poc/openssl_files/rootCA.pem \ + -out /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.p12 + + orapki wallet import_pkcs12 \ + -wallet "/app/oracle/staging_area/TLS_poc/wallet" \ + -pwd "Dont1Try@toGuessth1s" \ + -pkcs12file /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.p12 + +Start the listener: + + lsnrctl start LISTENER_SECURE diff --git a/PDB_clone/clone_PDB_from_non-CDB_01.txt b/PDB_clone/clone_PDB_from_non-CDB_01.txt new file mode 100644 index 0000000..8fbc27a --- /dev/null +++ b/PDB_clone/clone_PDB_from_non-CDB_01.txt @@ -0,0 +1,40 @@ +# clone non-CDB to PDB using database link +########################################## + + +# Note: source is ARCHIVELOG mode and READ-WRITE state + +# on source (non-CDB) database, create the user to use for the database link +create user CLONE_USER identified by "m007jgert221PnH@A"; +grant create session, create pluggable 
database to CLONE_USER; + +# on target (CDB) database, create the database link +create database link CLONE_NON_CDB + connect to CLONE_USER identified by "m007jgert221PnH@A" + using '//togoria:1521/ANDOPRD'; + +select * from dual@CLONE_NON_CDB; + +# drop the target PDB if it exists +alter pluggable database WOMBAT close immediate instances=ALL; +drop pluggable database WOMBAT including datafiles; + +# clone the PDB through the database link +create pluggable database WOMBAT from NON$CDB@CLONE_NON_CDB parallel 4; + +# the PDB should be in MOUNT state +show pdbs + +# if the version of the TARGET DB > version of the SOURCE DB, the PDB must be upgraded +dbupgrade -l /home/oracle/tmp -c "WOMBAT" + +# convert to PDB before opening +alter session set container=WOMBAT; +@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql + +# after conversion, open the PDB and save its state +alter pluggable database WOMBAT open instances=ALL; +alter pluggable database WOMBAT save state; + + diff --git a/RAC_on_OEL8/OEL8_standalone_taris_install.txt b/RAC_on_OEL8/OEL8_standalone_taris_install.txt new file mode 100644 index 0000000..31a0d8a --- /dev/null +++ b/RAC_on_OEL8/OEL8_standalone_taris_install.txt @@ -0,0 +1,87 @@ +qemu-img create -f raw /vm/ssd0/taris/boot_01.img 4G +qemu-img create -f raw /vm/ssd0/taris/root_01.img 30G +qemu-img create -f raw /vm/ssd0/taris/swap_01.img 20G +qemu-img create -f raw /vm/ssd0/taris/app_01.img 60G + +virt-install \ + --graphics vnc,password=secret,listen=0.0.0.0 \ + --name=taris \ + --vcpus=4 \ + --memory=16384 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/vm/hdd0/_kit_/OracleLinux-R8-U7-x86_64-dvd.iso \ + --disk /vm/ssd0/taris/boot_01.img \ + --disk /vm/ssd0/taris/root_01.img \ + --disk /vm/ssd0/taris/swap_01.img \ + --disk /vm/ssd0/taris/app_01.img \ + --os-variant=ol8.5 + + + +dd if=/dev/zero of=/vm/ssd0/taris/data_01.img bs=1G count=20 +dd if=/dev/zero of=/vm/ssd0/taris/data_02.img bs=1G count=20 +dd if=/dev/zero of=/vm/ssd0/taris/reco_01.img bs=1G count=20 
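A side note on the two image-creation methods used here: `qemu-img create -f raw` produces a sparse file, while `dd` from `/dev/zero` preallocates every block, which is generally the safer choice for ASM candidate disks since it avoids allocation stalls at first write. A quick way to see the difference, with throwaway file names (illustration only, not part of the original notes):

```shell
# compare a sparse image (like qemu-img create -f raw) with a preallocated one
# (like dd from /dev/zero); demo files only, removed at the end
d=$(mktemp -d)
truncate -s 10M "$d/sparse_demo.img"                           # sparse: no blocks allocated yet
dd if=/dev/zero of="$d/full_demo.img" bs=1M count=10 status=none  # preallocated: 10M of zeros written
echo "sparse: $(du -k "$d/sparse_demo.img" | cut -f1) KB on disk"
echo "full:   $(du -k "$d/full_demo.img" | cut -f1) KB on disk"
rm -rf "$d"
```

Both files report the same apparent size with `ls -l`, but `du` shows the sparse one occupying far fewer blocks until data is actually written.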
 +virsh domblklist taris --details + +virsh attach-disk taris --source /vm/ssd0/taris/data_01.img --target vde --persistent +virsh attach-disk taris --source /vm/ssd0/taris/data_02.img --target vdf --persistent +virsh attach-disk taris --source /vm/ssd0/taris/reco_01.img --target vdg --persistent + +# Enable EPEL Repository on Oracle Linux 8 +tee /etc/yum.repos.d/ol8-epel.repo<<'EOF' +[ol8_developer_EPEL] +name=Oracle Linux $releasever EPEL ($basearch) +baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/developer/EPEL/$basearch/ +gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle +gpgcheck=1 +enabled=1 +EOF + + +# rename cluster nodes +# rodia-db01 -> ylesia-db01 +# rodia-db02 -> ylesia-db02 +# rodia-scan -> ylesia-scan + + +# on rodia-db02: stop CRS +# on rodia-db02: deconfigure CRS +# on rodia-db02: uninstall GI +# on rodia-db01: remove rodia-db02 from cluster +# on rodia-db02: change IPs and change hostname to ylesia-db02 +# on rodia-db01: add ylesia-db02 to cluster + +# on rodia-db02 as root +$ORACLE_HOME/bin/crsctl stop crs +$ORACLE_HOME/crs/install/rootcrs.sh -deconfig -force +# on rodia-db02 as grid +$ORACLE_HOME/deinstall/deinstall -local + +# on rodia-db01 as root +$ORACLE_HOME/bin/crsctl delete node -n rodia-db02 +olsnodes +crsctl status res -t + +# change IPs and hostname rodia-db02 -> ylesia-db02 + +# on rodia-db01 as grid, using the graphical interface +$ORACLE_HOME/addnode/addnode.sh +olsnodes +crsctl status res -t + +# repeat the operations to remove rodia-db01, rename rodia-db01 -> ylesia-db01 and add ylesia-db01 to the cluster + +# now change the SCAN & SCAN listener +srvctl config scan + +srvctl stop scan_listener +srvctl stop scan -f + +srvctl status scan +srvctl status scan_listener + + +srvctl modify scan -n ylesia-scan +srvctl config scan + +srvctl start scan +srvctl start scan_listener + + diff --git a/RAC_on_OEL8/ylesia_RAC_OEL8_install.txt b/RAC_on_OEL8/ylesia_RAC_OEL8_install.txt new file mode 100644 index 0000000..1fac7d8 --- /dev/null +++ b/RAC_on_OEL8/ylesia_RAC_OEL8_install.txt @@ -0,0 +1,468 @@ +# DNS config +############ + +# config file swgalaxy.zone + +ylesia-db01 IN A 192.168.0.114 +ylesia-db01-vip IN A 192.168.0.115 +ylesia-db01-priv IN A 192.168.1.114 +ylesia-db01-asm IN A 192.168.2.114 + +ylesia-db02 IN A 
192.168.0.116 +ylesia-db02-vip IN A 192.168.0.117 +ylesia-db02-priv IN A 192.168.1.116 +ylesia-db02-asm IN A 192.168.2.116 + +ylesia-scan IN A 192.168.0.108 +ylesia-scan IN A 192.168.0.109 +ylesia-scan IN A 192.168.0.110 + +rodia-db01 IN A 192.168.0.93 +rodia-db01-vip IN A 192.168.0.95 +rodia-db01-priv IN A 192.168.1.93 +rodia-db01-asm IN A 192.168.2.93 + +rodia-db02 IN A 192.168.0.94 +rodia-db02-vip IN A 192.168.0.96 +rodia-db02-priv IN A 192.168.1.94 +rodia-db02-asm IN A 192.168.2.94 + +rodia-scan IN A 192.168.0.97 +rodia-scan IN A 192.168.0.98 +rodia-scan IN A 192.168.0.99 + +# config file 0.168.192.in-addr.arpa + +114 IN PTR ylesia-db01.swgalaxy. +116 IN PTR ylesia-db02.swgalaxy. +115 IN PTR ylesia-db01-vip.swgalaxy. +117 IN PTR ylesia-db02-vip.swgalaxy. + +108 IN PTR ylesia-scan.swgalaxy. +109 IN PTR ylesia-scan.swgalaxy. +110 IN PTR ylesia-scan.swgalaxy. + +93 IN PTR rodia-db01.swgalaxy. +94 IN PTR rodia-db02.swgalaxy. +95 IN PTR rodia-db01-vip.swgalaxy. +96 IN PTR rodia-db02-vip.swgalaxy. + +97 IN PTR rodia-scan.swgalaxy. +98 IN PTR rodia-scan.swgalaxy. +99 IN PTR rodia-scan.swgalaxy. 
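Keeping the reverse zone aligned with the forward zone by hand is error-prone (a PTR target missing the trailing domain suffix silently resolves relative to the root). One option is to derive the PTR lines from the A records; a sketch with a few sample records inlined (illustration only, not part of the original zone files):

```shell
# derive 0.168.192.in-addr.arpa PTR lines from forward A records on the
# 192.168.0.0/24 public network, so forward and reverse zones cannot drift
cat <<'EOF' | awk '$3 == "A" && $4 ~ /^192\.168\.0\./ {
    split($4, o, ".")
    printf "%s IN PTR %s.swgalaxy.\n", o[4], $1
}'
ylesia-db01 IN A 192.168.0.114
ylesia-db01-vip IN A 192.168.0.115
ylesia-db01-priv IN A 192.168.1.114
EOF
```

Records on the private/ASM subnets (192.168.1.x, 192.168.2.x) are filtered out, matching the zone above, which only reverses the public 192.168.0.x addresses.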
 + + +qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/boot_01.img 4G +qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/root_01.img 30G +qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/swap_01.img 20G +qemu-img create -f raw /vm/hdd0/ylesia-rac/ylesia-db01/app_01.img 60G + + +# get os-variant as Short ID from the OS info database +osinfo-query os | grep -i oracle | sort + +virt-install \ + --graphics vnc,password=secret,listen=0.0.0.0 \ + --name=ylesia-db01 \ + --vcpus=4 \ + --memory=40960 \ + --network bridge=br0 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/mnt/yavin4/kit/Oracle/OEL8/OracleLinux-R8-U7-x86_64-dvd.iso \ + --disk /vm/hdd0/ylesia-rac/ylesia-db01/boot_01.img \ + --disk /vm/hdd0/ylesia-rac/ylesia-db01/root_01.img \ + --disk /vm/hdd0/ylesia-rac/ylesia-db01/swap_01.img \ + --disk /vm/hdd0/ylesia-rac/ylesia-db01/app_01.img \ + --os-variant=ol8.5 + + +# on the host, install packages +dnf install bind-utils +dnf install zip.x86_64 unzip.x86_64 gzip.x86_64 +dnf install pigz.x86_64 +dnf install net-tools.x86_64 +dnf install oracle-database-preinstall-19c.x86_64 +dnf install oracle-database-preinstall-21c.x86_64 +dnf install unixODBC +dnf install wget +dnf install lsof.x86_64 + + +# Enable EPEL Repository on Oracle Linux 8 +tee /etc/yum.repos.d/ol8-epel.repo<<'EOF' +[ol8_developer_EPEL] +name=Oracle Linux $releasever EPEL ($basearch) +baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/developer/EPEL/$basearch/ +gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle +gpgcheck=1 +enabled=1 +EOF + +oracleasm configure -i +# choose grid for user and asmdba for group +oracleasm init + +# if we need to use an older kernel prior to the last kernel update +# https://www.golinuxcloud.com/change-default-kernel-version-rhel-centos-8/ + + +# create ASM disks +oracleasm status +oracleasm scandisks +oracleasm listdisks + +# list block devices +lsblk + +# use the following shell script to create all the new partitions + +--------------------------------------------------------------------------------------- +#!/bin/sh +hdd="/dev/vde /dev/vdf /dev/vdg /dev/vdh /dev/vdi /dev/vdj /dev/vdk /dev/vdl /dev/vdm" +for i in $hdd;do +echo "n +p +1 + + +w +"|fdisk $i;done 
 +--------------------------------------------------------------------------------------- + +# if ASMLib is used +oracleasm createdisk DATA_01 /dev/vde1 +oracleasm createdisk DATA_02 /dev/vdf1 +oracleasm createdisk DATA_03 /dev/vdg1 +oracleasm createdisk DATA_04 /dev/vdh1 +oracleasm createdisk DATA_05 /dev/vdi1 + +oracleasm createdisk RECO_01 /dev/vdj1 +oracleasm createdisk RECO_02 /dev/vdk1 +oracleasm createdisk RECO_03 /dev/vdl1 +oracleasm createdisk RECO_04 /dev/vdm1 + + +# without ASMLib +vi /etc/udev/rules.d/99-oracle-asmdevices.rules +KERNEL=="vde1",NAME="asm_data_01",OWNER="grid",GROUP="asmadmin",MODE="0660" +KERNEL=="vdf1",NAME="asm_data_02",OWNER="grid",GROUP="asmadmin",MODE="0660" +KERNEL=="vdg1",NAME="asm_data_03",OWNER="grid",GROUP="asmadmin",MODE="0660" +KERNEL=="vdh1",NAME="asm_data_04",OWNER="grid",GROUP="asmadmin",MODE="0660" +KERNEL=="vdi1",NAME="asm_data_05",OWNER="grid",GROUP="asmadmin",MODE="0660" + +KERNEL=="vdj1",NAME="asm_reco_01",OWNER="grid",GROUP="asmadmin",MODE="0660" +KERNEL=="vdk1",NAME="asm_reco_02",OWNER="grid",GROUP="asmadmin",MODE="0660" +KERNEL=="vdl1",NAME="asm_reco_03",OWNER="grid",GROUP="asmadmin",MODE="0660" +KERNEL=="vdm1",NAME="asm_reco_04",OWNER="grid",GROUP="asmadmin",MODE="0660" + + +# at this moment, clone the VM +# on Dom0 +virsh dumpxml ylesia-db01 > /tmp/myvm.xml +# modify the XML file: +# replace ylesia-db01 by ylesia-db02 +# remove the <uuid> 
line +# generate new MAC addresses for the network interfaces + +date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/' + +virsh define /tmp/myvm.xml + +# start the cloned ylesia-db02 VM and change its IP addresses and host name +vi /etc/sysconfig/network-scripts/ifcfg-enp1s0 +vi /etc/sysconfig/network-scripts/ifcfg-enp2s0 +vi /etc/sysconfig/network-scripts/ifcfg-enp3s0 + +hostnamectl set-hostname ylesia-db02.swgalaxy + +# mount the CIFS share on both VMs +dnf install cifs-utils.x86_64 + +groupadd smbuser --gid 1502 +useradd smbuser --uid 1502 -g smbuser -G smbuser + +mkdir -p /mnt/yavin4 + +# test the CIFS mount +mount -t cifs //192.168.0.9/share /mnt/yavin4 -o vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,user=vplesnila +umount /mnt/yavin4 + +# create credentials file for automount: /root/.smbcred +# username=vplesnila +# password=***** + +# add in /etc/fstab +# //192.168.0.9/share /mnt/yavin4 cifs vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred 0 0 + +# mount +mount -a + +# oracle user profile +--------------------------------------------------------------------------------------- +# .bash_profile + +# Get the aliases and functions +if [ -f ~/.bashrc ]; then + . ~/.bashrc +fi + +# User specific environment and startup programs +alias listen='lsof -i -P | grep -i "listen"' +alias s='rlwrap sqlplus / as sysdba' +alias r='rlwrap rman target /' + +PS1='\u@\h[$ORACLE_SID]:$PWD\$ ' +umask 022 + +PATH=$PATH:$HOME/.local/bin:$HOME/bin + +export PATH +--------------------------------------------------------------------------------------- + + +# grid user profile +--------------------------------------------------------------------------------------- +# .bash_profile + +# Get the aliases and functions +if [ -f ~/.bashrc ]; then + . 
~/.bashrc +fi + +# User specific environment and startup programs +alias listen='lsof -i -P | grep -i "listen"' +alias asmcmd='rlwrap asmcmd' +alias s='rlwrap sqlplus / as sysasm' +PS1='\u@\h[$ORACLE_SID]:$PWD\$ ' +umask 022 + +GRID_HOME=/app/grid/product/21.3 +ORACLE_SID=+ASM1 +ORACLE_BASE=/app/grid/base +ORACLE_HOME=$GRID_HOME +LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib +PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch + + +export ORACLE_BASE +export ORACLE_HOME +export LD_LIBRARY_PATH +export ORACLE_SID +export PATH +--------------------------------------------------------------------------------------- + +# generate SSH keys on both VMs and add the public keys to .ssh/authorized_keys in order to connect locally and cross-connect without a password +ssh-keygen +cd +cat .ssh/id_rsa.pub >> .ssh/authorized_keys + +# as root on both VMs +mkdir -p /app/grid/product/21.3 +mkdir -p /app/grid/base +mkdir -p /app/grid/oraInventory + +chown -R grid:oinstall /app/grid/product/21.3 +chown -R grid:oinstall /app/grid/base +chown -R grid:oinstall /app/grid/oraInventory + +# on the 1st VM, unzip the grid infrastructure distribution ZIP file +cd /app/grid/product/21.3 +unzip /mnt/yavin4/kit/Oracle/Oracle_Database_21/LINUX.X64_213000_grid_home.zip + + +# from an X11 terminal, proceed with the software installation +/app/grid/product/21.3/gridSetup.sh + +# same command to use after the software installation in order to configure the new Oracle Cluster +/app/grid/product/21.3/gridSetup.sh + +# if grid setup fails with the error PRVG-11250 The Check "RPM Package Manager Database" Was Not Performed +# consider applying the following MOS note: Cluvfy Fail with PRVG-11250 The Check "RPM Package Manager Database" Was Not Performed (Doc ID 2548970.1) +/app/grid/product/21.3/runcluvfy.sh stage -pre crsinst -n ylesia-db01,ylesia-db02 -method root + +# from an X11 terminal, run the ASM configuration assistant in order to create the RECO diskgroup +/app/grid/product/21.3/bin/asmca + +# check 
cluster status +crsctl status res -t + + +# Apply the latest GIRU patch using the out-of-place method +####################################################### + +# as root, create a staging area for patches on the first VM +mkdir -p /app/staging_area +chown -R grid:oinstall /app/staging_area +chmod g+w /app/staging_area + +# as grid user, unzip the GI patch in the staging area on the first VM +su - grid +cd /app/staging_area +unzip /mnt/yavin4/kit/Oracle/Oracle_Database_21/patch/GI_RU_AVR23/p35132566_210000_Linux-x86-64.zip + +# as root, on both VMs, prepare the directory for the new GI +export NEW_GRID_HOME=/app/grid/software/21.10 + +mkdir -p $NEW_GRID_HOME +chown -R grid:oinstall $NEW_GRID_HOME + +# as grid, only on the first VM, unzip the base distribution of the GI +su - grid +export NEW_GRID_HOME=/app/grid/software/21.10 +cd $NEW_GRID_HOME +unzip /mnt/yavin4/kit/Oracle/Oracle_Database_21/LINUX.X64_213000_grid_home.zip + +# very IMPORTANT +# deploy the latest version of OPatch in the new GI home before proceeding with the GI install with RU apply +# as grid user +cd $NEW_GRID_HOME +rm -rf OPatch +ls OPatch +unzip /mnt/yavin4/kit/Oracle/opatch/p6880880_210000_Linux-x86-64.zip + +# at this moment, just simulate an install of the base GI, software only +# do not install, just put the response file aside + +# set up the new GI HOME and install the GIRU +export NEW_GRID_HOME=/app/grid/software/21.10 +export ORACLE_HOME=$NEW_GRID_HOME +$ORACLE_HOME/gridSetup.sh -executePrereqs -silent + +cd $ORACLE_HOME +./gridSetup.sh -ignorePrereq -waitforcompletion -silent \ + -applyRU /app/staging_area/35132566 \ + -responseFile /home/grid/grid.rsp + + +# once the new GI homes are installed and updated to the latest GIRU +# switch CRS to the new GI HOME, on each VM one by one (rolling mode) + +export NEW_GRID_HOME=/app/grid/software/21.10 +export ORACLE_HOME=$NEW_GRID_HOME +export CURRENT_NODE=$(hostname) + +$ORACLE_HOME/gridSetup.sh \ + -silent -switchGridHome \ + oracle.install.option=CRS_SWONLY 
\ + ORACLE_HOME=$ORACLE_HOME \ + oracle.install.crs.config.clusterNodes=$CURRENT_NODE \ + oracle.install.crs.rootconfig.executeRootScript=false + +# check if grid:oinstall is the owner of the GI HOME, otherwise fix it: +chown grid /app/grid/product/21.10 + +# IMPORTANT: do not remove the old GI HOME before switching to the new GI HOME on all nodes + +# update the grid .bash_profile with the new GI home and check CRS +crsctl status res -t + +# display registered ORACLE_HOME's +cat /app/grid/oraInventory/ContentsXML/inventory.xml | grep "HOME NAME" + +# as grid user, on both VMs, remove the OLD ORACLE_HOME +export OLD_GRID_HOME=/app/grid/product +export ORACLE_HOME=$OLD_GRID_HOME +$ORACLE_HOME/deinstall/deinstall -local + +# miscellaneous +############## + +# if some install/deinstall operations for the 19c RDBMS fail the OEL8.7 compatibility check, use: +export CV_ASSUME_DISTID=OL7 + +# the following libs may also be needed +dnf install libstdc++-devel.x86_64 +dnf install libaio-devel.x86_64 +dnf install libcap.x86_64 libcap-devel.x86_64 + + +# potential issue with the Oracle 19 RDBMS binary +# check the permissions (-rwsr-s--x) and owner (oracle:asmadmin) of the 19c oracle binary +ls -l /app/oracle/product/19/bin/oracle +# if they are not correct, fix them as root +chown oracle:asmadmin /app/oracle/product/19/bin/oracle +chmod 6751 /app/oracle/product/19/bin/oracle + + +# if CLSRSC-762: Empty site GUID for the local site name (Doc ID 2878740.1) +# update $GRID_HOME/crs/install/crsgenconfig_params +# put the name of the RAC and generate a new UUID using the linux uuid command + + +# Enabling a Read-Only Oracle Home +$ORACLE_HOME/bin/roohctl -enable + + diff --git a/RMAN/rman_duplicate_from_location_hot_backup_01.txt b/RMAN/rman_duplicate_from_location_hot_backup_01.txt new file mode 100644 index 0000000..de95fd8 --- /dev/null +++ b/RMAN/rman_duplicate_from_location_hot_backup_01.txt @@ -0,0 +1,20 @@ +# set duplicate target database to + + +rman auxiliary / + +run +{ + allocate auxiliary channel aux01 device type 
disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate target database to ANDO backup location '/mnt/yavin4/tmp/_oracle_/orabackup/19_non_CDB/backupset/'; +} + diff --git a/Time_Zone_upgrade/ts_upgrade_01.txt b/Time_Zone_upgrade/ts_upgrade_01.txt new file mode 100644 index 0000000..3ac470d --- /dev/null +++ b/Time_Zone_upgrade/ts_upgrade_01.txt @@ -0,0 +1,143 @@ +# https://oracle-base.com/articles/misc/update-database-time-zone-file#upgrade-time-zone-file-multiteanant + + +# check the current time zone version + +select * from V$TIMEZONE_FILE; +select TZ_VERSION from REGISTRY$DATABASE; + + +-- query dst_check.sql ----------- +COLUMN property_name FORMAT A30 +COLUMN property_value FORMAT A20 + +select + property_name, property_value +from + DATABASE_PROPERTIES +where + property_name like 'DST_%' +order by + property_name; +---------------------------------- + +# latest available version of the timezone file +select DBMS_DST.GET_LATEST_TIMEZONE_VERSION from dual; + + +# prepare for the upgrade (optional) + +DECLARE + l_tz_version PLS_INTEGER; +BEGIN + l_tz_version := DBMS_DST.get_latest_timezone_version; + + DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version); + DBMS_DST.begin_prepare(l_tz_version); +END; +/ + +# execute dst_check.sql +# DST_UPGRADE_STATE should change from NONE to PREPARE + +# clean the technical tables +truncate table SYS.DST$AFFECTED_TABLES; +truncate table SYS.DST$ERROR_TABLE; + + +# find tables and errors affected by the upgrade +exec DBMS_DST.FIND_AFFECTED_TABLES; + +select * from SYS.DST$AFFECTED_TABLES; +select * from
SYS.DST$ERROR_TABLE; + +# perform the necessary checks and finish the prepare step if you want to go ahead with the upgrade +exec DBMS_DST.END_PREPARE; + +# Note: for a CDB, the TZ should be upgraded in each container + +# restart the database in UPGRADE mode + +# BEGIN upgrade +############### +SET SERVEROUTPUT ON +DECLARE + l_tz_version PLS_INTEGER; +BEGIN + SELECT DBMS_DST.get_latest_timezone_version + INTO l_tz_version + FROM dual; + + DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version); + DBMS_DST.begin_upgrade(l_tz_version); +END; +/ + +# restart the database + +# END upgrade +############# +SET SERVEROUTPUT ON +DECLARE + l_failures PLS_INTEGER; +BEGIN + DBMS_DST.upgrade_database(l_failures); + DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures); + DBMS_DST.end_upgrade(l_failures); + DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures); +END; +/ + +# restart the database + + +# the following queries can be used to check the progress of the TZ upgrade table by table +-- CDB +COLUMN owner FORMAT A30 +COLUMN table_name FORMAT A30 + +SELECT con_id, + owner, + table_name, + upgrade_in_progress +FROM cdb_tstz_tables +ORDER BY 1,2,3; + +-- Non-CDB +COLUMN owner FORMAT A30 +COLUMN table_name FORMAT A30 + +SELECT owner, + table_name, + upgrade_in_progress +FROM dba_tstz_tables +ORDER BY 1,2; + +# Note: in 21c, the following parameter is supposed to avoid the database restarts during the TZ upgrade +# in my test it did not work +alter system set timezone_version_upgrade_online=true scope=both sid='*'; + + +-- restart PDB$SEED in UPGRADE mode +alter pluggable database PDB$SEED close immediate instances=ALL; +alter pluggable database PDB$SEED open upgrade instances=ALL; +show pdbs +alter session set container=PDB$SEED; +-- run the BEGIN TZ upgrade procedure + +-- restart PDB$SEED in READ-WRITE mode +alter session set container=CDB$ROOT; +alter pluggable database PDB$SEED close immediate instances=ALL; +alter pluggable database PDB$SEED open read
write instances=ALL; +alter session set container=PDB$SEED; +-- run the END TZ upgrade procedure + +-- restart PDB$SEED in READ-WRITE mode +alter session set container=CDB$ROOT; +alter pluggable database PDB$SEED close immediate instances=ALL; +alter pluggable database PDB$SEED open instances=ALL; + +-- check TZ and close PDB$SEED +alter session set container=CDB$ROOT; +alter pluggable database PDB$SEED close immediate instances=ALL; + diff --git a/artcles.txt b/artcles.txt new file mode 100644 index 0000000..f75e325 --- /dev/null +++ b/artcles.txt @@ -0,0 +1,17 @@ +https://www.databasejournal.com/oracle/hybrid-histograms-in-oracle-12c/ +https://hourim.wordpress.com/2016/01/20/natural-and-adjusted-hybrid-histogram/ +https://chinaraliyev.wordpress.com/2018/11/06/understanding-hybrid-histogram/ + +http://www.br8dba.com/store-db-credentials-in-oracle-wallet/ +https://backendtales.blogspot.com/2023/02/santas-little-index-helper.html +Asymmetric Dataguard with multitenant +https://oracleandme.com/2023/10/31/asymmetric-dataguard-with-multitenant-part-1/ + +https://github.com/GoogleCloudPlatform/community/blob/master/archived/setting-up-postgres-hot-standby.md + +# cursor: pin S wait on X +https://svenweller.wordpress.com/2018/05/23/tackling-cursor-pin-s-wait-on-x-wait-event-issue/ + +# pin cursor in shared pool +https://dbamarco.wordpress.com/2015/10/29/high-parse-time-in-oracle-12c/ + diff --git a/automatic_SPM/automatic_SPM_01.txt b/automatic_SPM/automatic_SPM_01.txt new file mode 100644 index 0000000..3c9e684 --- /dev/null +++ b/automatic_SPM/automatic_SPM_01.txt @@ -0,0 +1,78 @@ +show parameter optimizer_capture_sql_plan_baselines + +-- should be FALSE for automatic SPM + +col parameter_name for a40 +col parameter_value for a20 + +SELECT parameter_name,parameter_value +FROM dba_sql_management_config; + +-- check for parameter_name='AUTO_SPM_EVOLVE_TASK' + +col task_name for a40 + +SELECT
task_name,enabled +FROM dba_autotask_schedule_control +WHERE dbid = sys_context('userenv','con_dbid'); + +-- check for task_name = 'Auto SPM Task' + +------------ +-- to ENABLE +------------ +BEGIN + DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK','ON'); +END; +/ + +-- For non-autonomous systems only, in the relevant PDB, +-- execute the following as SYS to ensure the correct plan source +-- and that ACCEPT_PLANS has its default value, TRUE +BEGIN + DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( + task_name => 'SYS_AUTO_SPM_EVOLVE_TASK', + parameter => 'ALTERNATE_PLAN_SOURCE', + value => 'SQL_TUNING_SET'); +END; +/ +BEGIN + DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( + task_name => 'SYS_AUTO_SPM_EVOLVE_TASK', + parameter => 'ACCEPT_PLANS', + value => 'TRUE'); +END; +/ + +------------- +-- to DISABLE +------------- +BEGIN + DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK','OFF'); +END; +/ + +-- For non-autonomous systems only, +-- execute the following as SYS if you want to return +-- parameters to 'manual' SPM values - for example +BEGIN + DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( + task_name => 'SYS_AUTO_SPM_EVOLVE_TASK', + parameter => 'ALTERNATE_PLAN_BASELINE', + value => 'EXISTING'); +END; +/ +BEGIN + DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( + task_name => 'SYS_AUTO_SPM_EVOLVE_TASK', + parameter => 'ALTERNATE_PLAN_SOURCE', + value => 'AUTO'); +END; +/ + + + + + + + diff --git a/btrfs/btrfs_install_rocky8_01.txt b/btrfs/btrfs_install_rocky8_01.txt new file mode 100644 index 0000000..f5455ac --- /dev/null +++ b/btrfs/btrfs_install_rocky8_01.txt @@ -0,0 +1,22 @@ +# based on: https://www.unixmen.com/install-btrfs-tools-on-ubuntu-linux-to-manage-btrfs-operations/ + +dnf install -y git automake asciidoc.noarch xmlto.x86_64 +dnf --enablerepo=powertools install python3-sphinx +dnf install -y e2fsprogs-devel.x86_64 e2fsprogs-libs.x86_64 e2fsprogs.x86_64 libblkid-devel.x86_64 +dnf install -y libzstd.x86_64 libzstd-devel.x86_64 +dnf install -y systemd-devel.x86_64 +dnf install -y python39.x86_64
python36-devel.x86_64 +dnf install -y lzo.x86_64 lzo-devel.x86_64 + +git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git +cd btrfs-progs/ +automake +./configure +make +# if there is a failure on "Making documentation", add master_doc = 'index' to Documentation/conf.py +make install + +# test +btrfs version + +lsblk diff --git a/clustoring_factor/clustering_factor_01.txt b/clustoring_factor/clustering_factor_01.txt new file mode 100644 index 0000000..3018b29 --- /dev/null +++ b/clustoring_factor/clustering_factor_01.txt @@ -0,0 +1,135 @@ +-- https://easyteam.fr/limpact-du-facteur-dordonnancement-sur-les-performances-clustering-factor/ + +create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret; +alter pluggable database NIHILUS open; +alter pluggable database NIHILUS save state; + + +alter session set container=NIHILUS; + +create tablespace USERS datafile size 32M autoextend ON next 32M; +alter database default tablespace USERS; + +create user adm identified by "secret"; +grant sysdba to adm; + +create user usr identified by "secret"; +grant CONNECT,RESOURCE to usr; +grant alter session to usr; +alter user usr quota unlimited on USERS; + +alias adm_NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba' +alias usr_NIHILUS='rlwrap sqlplus usr/"secret"@ba + + + +create table USR.TABLE_LIST_DISPLAY_PATTERNS ( + LIST_ID number not null, + DISPLAY_PATTERN_ID varchar(1000) not null +); + + +begin + for i in 1..100 loop + insert into USR.TABLE_LIST_DISPLAY_PATTERNS select i, lpad('x',1000,'x') from dba_objects where rownum < 35 order by 1; + end loop; +end; +/ +commit; + + +create index USR.LIST_DISPLAY_PATTERNS_IDX on USR.TABLE_LIST_DISPLAY_PATTERNS(LIST_ID); + + +create table USR.TABLE_LIST_DISPLAY_RAND as + select * from USR.TABLE_LIST_DISPLAY_PATTERNS order by DBMS_RANDOM.RANDOM; + +create index USR.LIST_DISPLAY_RAND_IDX on USR.TABLE_LIST_DISPLAY_RAND(LIST_ID); + +exec
dbms_stats.gather_table_stats('USR','TABLE_LIST_DISPLAY_PATTERNS', method_opt=>'for all columns size AUTO'); +exec dbms_stats.gather_table_stats('USR','TABLE_LIST_DISPLAY_RAND', method_opt=>'for all columns size AUTO'); + + + + +SQL> @tab USR.TABLE_LIST_DISPLAY_PATTERNS +Show tables matching condition "%USR.TABLE_LIST_DISPLAY_PATTERNS%" (if schema is not specified then current user's tables only are shown)... + +OWNER TABLE_NAME TYPE NUM_ROWS BLOCKS EMPTY AVGSPC ROWLEN TAB_LAST_ANALYZED DEGREE COMPRESS +-------------------- ------------------------------ ---- ------------ ------------- --------- ------ ------ ------------------- ---------------------------------------- -------- +USR TABLE_LIST_DISPLAY_PATTERNS TAB 3400 496 0 0 1004 2023-06-25 15:41:27 1 DISABLED + +1 row selected. + +SQL> @ind USR.LIST_DISPLAY_PATTERNS_IDX +Display indexes where table or index name matches %USR.LIST_DISPLAY_PATTERNS_IDX%... + +TABLE_OWNER TABLE_NAME INDEX_NAME POS# COLUMN_NAME DSC +-------------------- ------------------------------ ------------------------------ ---- ------------------------------ ---- +USR TABLE_LIST_DISPLAY_PATTERNS LIST_DISPLAY_PATTERNS_IDX 1 LIST_ID + + +INDEX_OWNER TABLE_NAME INDEX_NAME IDXTYPE UNIQ STATUS PART TEMP H LFBLKS NDK NUM_ROWS CLUF LAST_ANALYZED DEGREE VISIBILIT +-------------------- ------------------------------ ------------------------------ ---------- ---- -------- ---- ---- -- ---------- ------------- ---------- ---------- ------------------- ------ --------- +USR TABLE_LIST_DISPLAY_PATTERNS LIST_DISPLAY_PATTERNS_IDX NORMAL NO VALID NO N 2 7 100 3400 551 2023-06-25 15:41:27 1 VISIBLE + + +-- each LIST_ID is stored in how many distinct blocks? 
+ +alter session set current_schema=USR; + +select + norm.list_id, norm.cnt normalized_blocks, random.cnt randomized_blocks +from + (select list_id, count(distinct(dbms_rowid.ROWID_BLOCK_NUMBER(rowid))) cnt + from TABLE_LIST_DISPLAY_PATTERNS + group by list_id) norm +, + ( select list_id, count(distinct(dbms_rowid.ROWID_BLOCK_NUMBER(rowid))) cnt + from TABLE_LIST_DISPLAY_RAND + group by list_id) random + where norm.list_id = random.list_id +order by list_id; + + + + +set lines 256 pages 999 + +var LID NUMBER; +execute :LID:=20; + +select /*+ GATHER_PLAN_STATISTICS */ + * from USR.TABLE_LIST_DISPLAY_PATTERNS where LIST_ID=:LID; + + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + +-------------------------------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | +-------------------------------------------------------------------------------------------------------------------------------------------------- +| 0 | SELECT STATEMENT | | 1 | | | 7 (100)| 34 |00:00:00.01 | 14 | +| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| TABLE_LIST_DISPLAY_PATTERNS | 1 | 34 | 34136 | 7 (0)| 34 |00:00:00.01 | 14 | +|* 2 | INDEX RANGE SCAN | LIST_DISPLAY_PATTERNS_IDX | 1 | 34 | | 1 (0)| 34 |00:00:00.01 | 5 | +-------------------------------------------------------------------------------------------------------------------------------------------------- + + +select /*+ GATHER_PLAN_STATISTICS */ + * from USR.TABLE_LIST_DISPLAY_RAND where LIST_ID=:LID; + + + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + +---------------------------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts |
E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | +---------------------------------------------------------------------------------------------------------------------------------------------- +| 0 | SELECT STATEMENT | | 1 | | | 35 (100)| 34 |00:00:00.01 | 39 | +| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| TABLE_LIST_DISPLAY_RAND | 1 | 34 | 34136 | 35 (0)| 34 |00:00:00.01 | 39 | +|* 2 | INDEX RANGE SCAN | LIST_DISPLAY_RAND_IDX | 1 | 34 | | 1 (0)| 34 |00:00:00.01 | 5 | +---------------------------------------------------------------------------------------------------------------------------------------------- + + + + + diff --git a/divers/ADB_free_install_01.txt b/divers/ADB_free_install_01.txt new file mode 100644 index 0000000..371ef21 --- /dev/null +++ b/divers/ADB_free_install_01.txt @@ -0,0 +1,117 @@ +-- https://github.com/oracle/adb-free/pkgs/container/adb-free + +dd if=/dev/zero of=/vm/ssd0/ithor/app_02.img bs=1G count=8 +dd if=/dev/zero of=/vm/ssd0/ithor/app_03.img bs=1G count=8 +virsh domblklist ithor --details +virsh attach-disk ithor /vm/ssd0/ithor/app_03.img vde --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk ithor /vm/ssd0/ithor/app_02.img vdf --driver qemu --subdriver raw --targetbus virtio --persistent + +lsblk +pvs +pvcreate /dev/vde1 +pvcreate /dev/vdf1 +vgs +vgextend vgapp /dev/vde1 +vgextend vgapp /dev/vdf1 +lvs +lvextend -l +100%FREE /dev/vgapp/app +xfs_growfs /app +df -hT + +# disable selinux +/etc/selinux/config +SELINUX=disabled + +# install podman +dnf install podman.x86_64 + +# change the storage path for pods +/etc/containers/storage.conf + +# create a volume to use later for DATAPUMP / persistent storage across containers +podman volume create adb_container_volume + +# build pod +podman run -d \ +-p 1521:1522 \ +-p 1522:1522 \ +-p 8443:8443 \ +-p 27017:27017 \ +-e DATABASE_NAME=ITHOR \ +-e WORKLOAD_TYPE=ATP \ +-e WALLET_PASSWORD=Remotecontrol1 \ +-e ADMIN_PASSWORD=Remotecontrol1 \ +--cap-add SYS_ADMIN
\ +--device /dev/fuse \ +--name adb-free \ +--volume adb_container_volume:/u01/data \ +ghcr.io/oracle/adb-free:latest-23ai + +# list pods and logs +podman ps -a +podman logs -f --names adb-free + +# generate systemd unit to manage pod startup +podman generate systemd --restart-policy=always -t 1 adb-free > /etc/systemd/system/adb-free.service +systemctl list-unit-files | grep adb + +systemctl enable adb-free.service +systemctl stop adb-free.service +systemctl start adb-free.service + +# extract certificates from pod +mkdir /app/adb-free +podman cp adb-free:/u01/app/oracle/wallets/tls_wallet /app/adb-free/ + +# setup SQL*Plus connections from a linux machine +# client 23 required +# from umbara +scp -rp ithor:/app/adb-free/tls_wallet adb-free_tls_wallet +chown -R oracle:oinstall adb-free_tls_wallet + +su - oracle +export TNS_ADMIN=/app/oracle/adb-free_tls_wallet +sed -i 's/localhost/ithor.swgalaxy/g' $TNS_ADMIN/tnsnames.ora + +sqcl admin/Remotecontrol1@ithor_low_tls +sqcl admin/Remotecontrol1@ithor_low + +# create another ADMIN user +----------------------------------------------------------------- +-- USER SQL +CREATE USER LIVESQL IDENTIFIED BY Remotecontrol1; + +-- ADD ROLES +GRANT CONNECT TO LIVESQL; +GRANT CONSOLE_DEVELOPER TO LIVESQL; +GRANT GRAPH_DEVELOPER TO LIVESQL; +GRANT RESOURCE TO LIVESQL; +ALTER USER LIVESQL DEFAULT ROLE CONSOLE_DEVELOPER,GRAPH_DEVELOPER; + +-- REST ENABLE +BEGIN + ORDS_ADMIN.ENABLE_SCHEMA( + p_enabled => TRUE, + p_schema => 'LIVESQL', + p_url_mapping_type => 'BASE_PATH', + p_url_mapping_pattern => 'livesql', + p_auto_rest_auth=> TRUE + ); + -- ENABLE DATA SHARING + C##ADP$SERVICE.DBMS_SHARE.ENABLE_SCHEMA( + SCHEMA_NAME => 'LIVESQL', + ENABLED => TRUE + ); + commit; +END; +/ + +-- ENABLE GRAPH +ALTER USER LIVESQL GRANT CONNECT THROUGH GRAPH$PROXY_USER; + +-- QUOTA +ALTER USER LIVESQL QUOTA UNLIMITED ON DATA; +----------------------------------------------------------------- +-- extra +GRANT PDB_DBA TO LIVESQL; + diff --git 
a/divers/FK_indexing_01.txt b/divers/FK_indexing_01.txt new file mode 100644 index 0000000..05ddb52 --- /dev/null +++ b/divers/FK_indexing_01.txt @@ -0,0 +1,105 @@ +drop table SUPPLIER purge; + +create table SUPPLIER( + id INTEGER generated always as identity + ,name varchar2(30) not null + ,primary key(id) +) +; + + +insert /*+ APPEND */ into SUPPLIER(name) +select + dbms_random.string('x',30) +from + xmltable('1 to 100') +; + +commit; + + +drop table PRODUCT purge; +create table PRODUCT( + id integer generated always as identity + ,supplier_id integer + ,product_name varchar2(30) + ,price NUMBER + ,primary key(id) + ,constraint fk_prod_suppl foreign key(supplier_id) references SUPPLIER(id) on delete cascade +) +; + +alter table PRODUCT drop constraint fk_prod_suppl; +alter table PRODUCT add constraint fk_prod_suppl foreign key(supplier_id) references SUPPLIER(id) on delete cascade; + +insert /*+ APPEND */ into PRODUCT(supplier_id,product_name,price) +select + trunc(dbms_random.value(1,90)) + ,dbms_random.string('x',30) + ,dbms_random.value(1,10000) +from + xmltable('1 to 10000000') +; + +commit; + + +-- grant execute on dbms_job to POC; +-- grant create job to POC; + +create or replace procedure delete_supplier(suppl_id integer) as + begin + DBMS_APPLICATION_INFO.set_module(module_name => 'delete_supplier', action_name => 'Delete supplier'); + delete from SUPPLIER where id=suppl_id; + commit; + end; + / + + + create or replace procedure parallel_delete_supplier as + v_jobno number:=0; + begin + for i in 51..100 loop + dbms_job.submit(v_jobno,'delete_supplier('||to_char(i)||');', sysdate); + end loop; + commit; + end; + / + +-- create a huge locking situation ;) +exec parallel_delete_supplier; + + +SQL> @ash/ashtop inst_id,session_id,sql_id,event2,blocking_inst_id,blocking_session,blocking_session_status,P1text,p2,p3 "username='POC'" sysdate-1/24/20 sysdate + + Total Distinct Distinct + Seconds AAS %This INST_ID SESSION_ID SQL_ID EVENT2 BLOCKING_INST_ID 
BLOCKING_SESSION BLOCKING_SE P1TEXT P2 P3 FIRST_SEEN LAST_SEEN Execs Seen Tstamps +--------- ------- ------- ---------- ---------- ------------- ------------------------------------------ ---------------- ---------------- ----------- ------------------------------ ---------- ---------- ------------------- ------------------- ---------- -------- + 15 .1 2% | 1 19 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 20 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 21 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 23 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 25 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 27 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 29 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 30 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 31 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 33 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 35 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 38 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 
2% | 1 158 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 159 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + 15 .1 2% | 1 160 2b4hjy6xfb76s enq: TM - contention [mode=5] 1 450 VALID name|mode 42238 0 2024-02-11 19:09:40 2024-02-11 19:09:54 1 15 + + +-- find the enq mode from the P1 column of gv$session +SQL> select distinct' [mode='||BITAND(p1, POWER(2,14)-1)||']' from gv$session where username='POC' and event like 'enq%'; + +'[MODE='||BITAND(P1,POWER(2,14)-1)||']' +------------------------------------------------ + [mode=5] + + +-- index the FK on the child table +create index IDX_PRODUCT_SUPPL_ID on PRODUCT(supplier_id); + diff --git a/divers/KVM_VM_create_Windows_11.txt b/divers/KVM_VM_create_Windows_11.txt new file mode 100644 index 0000000..005b00b --- /dev/null +++ b/divers/KVM_VM_create_Windows_11.txt @@ -0,0 +1,11 @@ +qemu-img create -f raw /vm/ssd0/utapau/hdd_01.img 200G + +virt-install \ + --graphics vnc,password=secret,listen=0.0.0.0 \ + --name=utapau \ + --vcpus=2 \ + --memory=4096 \ + --network bridge=br0 \ + --cdrom=/vm/hdd0/_kit_/Win10_1809Oct_v2_French_x64.iso \ + --disk=/vm/ssd0/utapau/hdd_01.img \ + --os-variant=win10 diff --git a/divers/KVM_VM_create_linux.txt b/divers/KVM_VM_create_linux.txt new file mode 100644 index 0000000..903ab5e --- /dev/null +++ b/divers/KVM_VM_create_linux.txt @@ -0,0 +1,13 @@ +qemu-img create -f raw /vm/ssd0/topawa/hdd_01.img 200G + +virt-install \ + --graphics vnc,password=secret,listen=0.0.0.0 \ + --name=topawa \ + --vcpus=4 \ + --memory=8192 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/vm/hdd0/_kit_/extix-23.4-64bit-deepin-23-refracta-3050mb-230403.iso \ + --disk=/vm/ssd0/topawa/hdd_01.img \ + --os-variant=ubuntu22.04 + diff --git a/divers/KVM_install_Rocky9_01.txt b/divers/KVM_install_Rocky9_01.txt new file mode 100644 index 0000000..2645543 ---
/dev/null +++ b/divers/KVM_install_Rocky9_01.txt @@ -0,0 +1,95 @@ +-- Network setup +---------------- + +nmcli connection show --active + +nmcli connection modify enp4s0 ipv4.address 192.168.0.4/24 +nmcli connection modify enp4s0 ipv4.method manual ipv6.method ignore +nmcli connection modify enp4s0 ipv4.gateway 192.168.0.1 +nmcli connection modify enp4s0 ipv4.dns 192.168.0.8 +nmcli connection modify enp4s0 ipv4.dns-search swgalaxy + +hostnamectl set-hostname naboo.swgalaxy + +# SELINUX=disabled +/etc/selinux/config + +systemctl stop firewalld +systemctl disable firewalld + +-- KVM install +-------------- + +dnf install -y qemu-kvm libvirt virt-manager virt-install virtio-win.noarch +dnf install -y epel-release +dnf -y install bridge-utils virt-top libguestfs-tools virt-viewer +dnf -y install at wget bind-utils + +systemctl start atd +systemctl enable atd +systemctl status atd + +lsmod | grep kvm + +systemctl start libvirtd +systemctl enable libvirtd + +brctl show +nmcli connection show + +# This section should be scripted and run from the server console, or run as a background command via an at-script +#----> + +export BR_NAME="br0" +export BR_INT="enp4s0" +export SUBNET_IP="192.168.0.4/24" +export GW="192.168.0.1" +export DNS1="192.168.0.8" + +nmcli connection add type bridge autoconnect yes con-name ${BR_NAME} ifname ${BR_NAME} + +nmcli connection modify ${BR_NAME} ipv4.addresses ${SUBNET_IP} ipv4.method manual +nmcli connection modify ${BR_NAME} ipv4.gateway ${GW} +nmcli connection modify ${BR_NAME} ipv4.dns ${DNS1} + +nmcli connection delete ${BR_INT} +nmcli connection add type bridge-slave autoconnect yes con-name ${BR_INT} ifname ${BR_INT} master ${BR_NAME} + +nmcli connection show +nmcli connection up br0 +nmcli connection show br0 + +ip addr show + +systemctl restart libvirtd +# <----- + + +# Install other stuff: Cockpit, bind-utils, cifs-utils etc.
+dnf install cockpit cockpit-machines.noarch -y + +systemctl start cockpit +systemctl enable --now cockpit.socket + +# reboot the system + +dnf install -y lsof bind-utils cifs-utils.x86_64 + +# setup CIFS mounts +groupadd smbuser --gid 1502 +useradd smbuser --uid 1502 -g smbuser -G smbuser + +-- create credentials file for automount: /root/.smbcred +username=vplesnila +password=***** + +mkdir -p /mnt/yavin4 +mkdir -p /mnt/unprotected + +-- add in /etc/fstab +//192.168.0.9/share /mnt/yavin4 cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0 +//192.168.0.9/unprotected /mnt/unprotected cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0 + +systemctl daemon-reload +mount -a + diff --git a/divers/KVM_save_all_domain_XML.txt b/divers/KVM_save_all_domain_XML.txt new file mode 100644 index 0000000..130e419 --- /dev/null +++ b/divers/KVM_save_all_domain_XML.txt @@ -0,0 +1,2 @@ +virsh list --all --name | awk {'print "virsh dumpxml " $1 " > " $1".xml"'} | grep -v "virsh dumpxml > .xml" + diff --git a/divers/OEL9_install_01.txt b/divers/OEL9_install_01.txt new file mode 100644 index 0000000..e4b0a4a --- /dev/null +++ b/divers/OEL9_install_01.txt @@ -0,0 +1,144 @@ +dd if=/dev/zero of=system_01.img bs=1G count=10 +dd if=/dev/zero of=swap_01.img bs=1G count=4 + +# create new domain +virt-install \ + --graphics vnc,password=secret,listen=0.0.0.0 \ + --name=seedmachine \ + --vcpus=4 \ + --memory=8192 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/mnt/yavin4/kit/Linux/OracleLinux-R9-U4-x86_64-boot-uek.iso \ + --disk /vm/ssd0/seedmachine/system_01.img \ + --disk /vm/ssd0/seedmachine/swap_01.img \ + --os-variant=ol9.3 + +dnf install -y lsof bind-utils cifs-utils.x86_64 +dnf -y install at wget bind-utils tar.x86_64 + +systemctl start atd +systemctl enable atd +systemctl status atd + +-- Network setup +---------------- + +nmcli 
connection show --active + +nmcli connection modify enp1s0 ipv4.address 192.168.0.66/24 +nmcli connection modify enp1s0 ipv4.method manual ipv6.method ignore +nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1 +nmcli connection modify enp1s0 ipv4.dns 192.168.0.8 +nmcli connection modify enp1s0 ipv4.dns-search swgalaxy + +nmcli connection modify enp2s0 ipv4.address 192.168.1.66/24 +nmcli connection modify enp2s0 ipv4.method manual ipv6.method ignore + +hostnamectl set-hostname seedmachine.swgalaxy + +# SELINUX=disabled +/etc/selinux/config + +systemctl stop firewalld +systemctl disable firewalld + +dnf install oracle-epel-release-el9.x86_64 oracle-database-preinstall-19c.x86_64 +dnf install -y rlwrap.x86_64 + + +# setup CIFS mounts +groupadd smbuser --gid 1502 +useradd smbuser --uid 1502 -g smbuser -G smbuser + +-- create credentials file for automount: /root/.smbcred +username=vplesnila +password=***** + +mkdir -p /mnt/yavin4 +mkdir -p /mnt/unprotected + +-- add in /etc/fstab +//192.168.0.9/share /mnt/yavin4 cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0 +//192.168.0.9/unprotected /mnt/unprotected cifs vers=3.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred,mfsymlinks,iocharset=utf8 0 0 + +systemctl daemon-reload +mount -a + +# add oracle user in smbuser group +cat /etc/group | grep smbuser + +smbuser:x:1502:smbuser,oracle + +# add /app FS +dd if=/dev/zero of=app_01.img bs=1G count=40 +dd if=/dev/zero of=data_01.img bs=1G count=20 +dd if=/dev/zero of=data_02.img bs=1G count=20 +dd if=/dev/zero of=reco_01.img bs=1G count=20 + +virsh domblklist seedmachine --details +virsh attach-disk seedmachine /vm/ssd0/seedmachine/app_01.img vdc --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk seedmachine /vm/ssd0/seedmachine/data_01.img vdd --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk seedmachine 
/vm/ssd0/seedmachine/data_02.img vde --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk seedmachine /vm/ssd0/seedmachine/reco_01.img vdf --driver qemu --subdriver raw --targetbus virtio --persistent + +fdisk /dev/vdc +fdisk /dev/vdd +fdisk /dev/vde +fdisk /dev/vdf + +pvs +pvcreate /dev/vdc1 +pvcreate /dev/vdd1 +pvcreate /dev/vde1 +pvcreate /dev/vdf1 + +vgs +vgcreate vgapp /dev/vdc1 +vgcreate vgdata /dev/vdd1 /dev/vde1 +vgcreate vgreco /dev/vdf1 + +lvs +lvcreate -n app -l 100%FREE vgapp +lvcreate -n data -l 100%FREE vgdata +lvcreate -n reco -l 100%FREE vgreco + +mkfs.xfs /dev/mapper/vgapp-app +mkfs.xfs /dev/mapper/vgdata-data +mkfs.xfs /dev/mapper/vgreco-reco + +mkdir -p /app /data /reco + +# add in /etc/fstab +/dev/mapper/vgapp-app /app xfs defaults 0 0 +/dev/mapper/vgdata-data /data xfs defaults 0 0 +/dev/mapper/vgreco-reco /reco xfs defaults 0 0 + +systemctl daemon-reload +mount -a + +chown -R oracle:oinstall /app /data /reco + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/divers/PC_boot_menu.txt b/divers/PC_boot_menu.txt new file mode 100644 index 0000000..a33f3f4 --- /dev/null +++ b/divers/PC_boot_menu.txt @@ -0,0 +1,2 @@ +AMD Ryzen - F7 + diff --git a/divers/PDB$SEED_recompile_all.sql b/divers/PDB$SEED_recompile_all.sql new file mode 100644 index 0000000..8f41a31 --- /dev/null +++ b/divers/PDB$SEED_recompile_all.sql @@ -0,0 +1,9 @@ +alter pluggable database PDB$SEED close immediate instances=ALL; +alter pluggable database PDB$SEED open read write instances=ALL; +alter session set container=PDB$SEED; +alter session set "_ORACLE_SCRIPT"=true; +@?/rdbms/admin/utlrp +alter session set "_ORACLE_SCRIPT"=false; +alter session set container=CDB$ROOT; +alter pluggable database PDB$SEED close immediate instances=ALL; +alter pluggable database PDB$SEED open read only instances=ALL; diff --git a/divers/PDB_PITR_scratch_01.txt b/divers/PDB_PITR_scratch_01.txt new file mode 100644 index 0000000..e0a7f5d --- /dev/null +++ 
b/divers/PDB_PITR_scratch_01.txt @@ -0,0 +1,157 @@ +rman target / + +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input; + release channel ch01; + release channel ch02; + release channel ch03; + release channel ch04; + allocate channel ch01 device type disk format '/mnt/yavin4/tech/oracle/work/dataguard_ADNA/backup/ADNAPRD/backupset/%d_%U_%s_%t.controlfile'; + backup current controlfile; + release channel ch01; +} + + +sqlplus 'sys/"Secret00!"'@wayland.swgalaxy:1555/ADNAPRD_DGMGRL as sysdba +sqlplus 'sys/"Secret00!"'@togoria.swgalaxy:1555/ADNADRP_DGMGRL as sysdba + + +configure archivelog deletion policy to applied on all standby; + +rman target='sys/"Secret00!"'@wayland.swgalaxy:1555/ADNAPRD_DGMGRL auxiliary='sys/"Secret00!"'@togoria.swgalaxy:1555/ADNADRP_DGMGRL + +run +{ + allocate channel pri01 device type disk; + allocate channel pri02 device type disk; + allocate channel pri03 device type disk; + allocate channel pri04 device type disk; + allocate channel pri05 device type disk; + allocate channel pri06 device type disk; + allocate channel pri07 device type disk; + allocate channel pri08 device type disk; + allocate channel pri09 device type disk; + allocate channel pri10 device type disk; + + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel 
aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + + duplicate database 'ADNA' for standby + from active database using compressed backupset section size 512M; +} + + + +alter system set dg_broker_config_file1='/app/oracle/base/admin/ADNAPRD/dgmgrl/dr1ADNAPRD.dat' scope=both sid='*'; +alter system set dg_broker_config_file2='/app/oracle/base/admin/ADNAPRD/dgmgrl/dr2ADNAPRD.dat' scope=both sid='*'; +alter system set dg_broker_start=TRUE scope=both sid='*'; + + +alter system set dg_broker_config_file1='/app/oracle/base/admin/ADNADRP/dgmgrl/dr1ADNADRP.dat' scope=both sid='*'; +alter system set dg_broker_config_file2='/app/oracle/base/admin/ADNADRP/dgmgrl/dr2ADNADRP.dat' scope=both sid='*'; +alter system set dg_broker_start=TRUE scope=both sid='*'; + + +rlwrap dgmgrl 'sys/"Secret00!"'@wayland.swgalaxy:1555/ADNAPRD_DGMGRL + +create configuration ADNA as + primary database is ADNAPRD + connect identifier is 'wayland.swgalaxy:1555/ADNAPRD_DGMGRL'; + +add database ADNADRP + as connect identifier is 'togoria.swgalaxy:1555/ADNADRP_DGMGRL' + maintained as physical; + +enable configuration; + +edit database 'adnaprd' set property ArchiveLagTarget=0; +edit database 'adnaprd' set property LogArchiveMaxProcesses=2; +edit database 'adnaprd' set property LogArchiveMinSucceedDest=1; +edit database 'adnaprd' set property StandbyFileManagement='AUTO'; + +edit database 'adnadrp' set property ArchiveLagTarget=0; +edit database 'adnadrp' set property LogArchiveMaxProcesses=2; +edit database 'adnadrp' set property LogArchiveMinSucceedDest=1; +edit database 'adnadrp' set property StandbyFileManagement='AUTO'; + +edit instance 'ADNAPRD' set property 
'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=wayland.swgalaxy)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=ADNAPRD_DGMGRL)(INSTANCE_NAME=ADNAPRD)(SERVER=DEDICATED)))'; +edit instance 'ADNADRP' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=togoria.swgalaxy)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=ADNADRP_DGMGRL)(INSTANCE_NAME=ADNADRP)(SERVER=DEDICATED)))'; + +show configuration +validate database 'adnadrp' +validate database 'adnaprd' + + + + +create table heartbeat (ts TIMESTAMP); +insert into heartbeat values (CURRENT_TIMESTAMP); +commit; + + +CREATE OR REPLACE PROCEDURE update_heartbeat AS +BEGIN + UPDATE heartbeat + SET ts = SYSTIMESTAMP; + COMMIT; +END; +/ + + +BEGIN + DBMS_SCHEDULER.CREATE_JOB ( + job_name => 'HEARTBEAT_JOB', + job_type => 'STORED_PROCEDURE', + job_action => 'UPDATE_HEARTBEAT', + start_date => SYSTIMESTAMP, + repeat_interval => 'FREQ=SECONDLY; INTERVAL=1', + enabled => FALSE + ); +END; +/ + + +BEGIN + DBMS_SCHEDULER.ENABLE('HEARTBEAT_JOB'); +END; +/ + + +BEGIN + DBMS_SCHEDULER.DISABLE('HEARTBEAT_JOB'); +END; +/ + + + +BEGIN + DBMS_SCHEDULER.DROP_JOB('HEARTBEAT_JOB'); +END; +/ + +drop PROCEDURE update_heartbeat; + +drop table heartbeat purge; + + +run{ + set until time "TIMESTAMP'2026-02-21 15:50:00'"; + alter pluggable database RYLS close immediate instances=all; + restore pluggable database RYLS; + recover pluggable database RYLS; + alter pluggable database RYLS open resetlogs instances=all; +} diff --git a/divers/Purines_vs_Omega‑3.md b/divers/Purines_vs_Omega‑3.md new file mode 100644 index 0000000..4f199de --- /dev/null +++ b/divers/Purines_vs_Omega‑3.md @@ -0,0 +1,20 @@ +# Cross-ranking: Purines vs Omega‑3 + +| Food | Purines (mg/100 g) | Purine category | Omega‑3 (g/100 g) | Omega‑3 category | Combined verdict | +|--------------------------|--------------------|-------------------|-------------------|-------------------|----------------| +| Chicken (breast/thigh) | 150–175 | Moderate | ~0.05 | Poor | ⚠️ Little nutritional benefit: moderate purines and almost no omega‑3 | +| Beef (muscle meat) | ~120 | Moderate | ~0.04 | Poor | ⚠️ Likewise, low in omega‑3 | +| Beef liver | ~300 | Very high | ~0.10 | Poor | 🚫 Avoid (very high purines, little omega‑3) | +| Sardine | ~210 | High | ~0.80–0.90 | Medium | ⚖️ Good omega‑3 intake but high purines | +| Anchovy | ~300 | Very high | ~0.90 | Medium | 🚫 Gout risk, despite the omega‑3 | +| Salmon | ~170 | Moderate | ~2.3–2.6 | Rich | ✅ Excellent trade-off (rich in omega‑3, moderate purines) | +| Mackerel | ~145 | Moderate | ~1.4–1.8 | Rich | ✅ Very good trade-off | +| Herring | ~170 | Moderate | ~1.6–2.2 | Rich | ✅ Very good trade-off | +| Trout | ~150 | Moderate | ~1.2–1.3 | Rich | ✅ Good trade-off | +| Tuna (bluefin) | ~150 | Moderate | ~1.6–1.7 | Rich | ✅ Good trade-off | +| Shrimp | ~150 | Moderate | ~0.30 | Medium | ⚖️ Decent but not exceptional | +| Crab / Brown crab | ~150 | Moderate | ~0.45 | Medium | ⚖️ Decent | +| Lobster / Spiny lobster | ~135 | Moderate | ~0.52 | Medium | ⚖️ Decent | +| Mussels | ~150 | Moderate | ~0.59–0.85 | Medium | ⚖️ Decent | +| Razor clams | ~150 | Moderate | ~0.6 | Medium | ⚖️ Decent | +| Scallops | ~150–180 | Moderate | ~0.5–0.6 | Medium | ⚖️ Decent | diff --git a/divers/RAC_19_OEL9_ASMLIB3_setup_01.txt b/divers/RAC_19_OEL9_ASMLIB3_setup_01.txt new file mode 100644 index 0000000..df29051 --- /dev/null +++ b/divers/RAC_19_OEL9_ASMLIB3_setup_01.txt @@ -0,0 +1,256 @@ +# network setup on each node +nmcli connection show --active + +nmcli connection modify enp1s0 ipv4.address 192.168.0.95/24 +nmcli connection modify enp1s0 ipv4.method manual ipv6.method ignore +nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1 +nmcli connection modify enp1s0 ipv4.dns 192.168.0.8 +nmcli connection modify enp1s0 ipv4.dns-search swgalaxy + +nmcli connection modify enp2s0 ipv4.address 192.168.1.95/24 +nmcli connection modify enp2s0 ipv4.method manual ipv6.method ignore + +nmcli
connection modify enp10s0 ipv4.address 192.168.2.95/24 +nmcli connection modify enp10s0 ipv4.method manual ipv6.method ignore + +hostnamectl set-hostname rodia-db03.swgalaxy + +# attach disks in each node +virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_01.img vdd --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_02.img vde --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_03.img vdf --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_04.img vdg --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk ylesia-db03 /vm/ssd0/ylesia-rac/disk_array/asm_05.img vdh --driver qemu --subdriver raw --targetbus virtio --persistent + + +- unzip distrib in grid home +- unzip last GIRU in a temporary location +- apply GIRU in silent mode over the base GI distrib + +# on each node +############## + +mkdir -p /app/oracle +chmod 775 /app/oracle +chown -R oracle:oinstall /app/oracle + +cd /app/oracle/ +mkdir -p admin base grid oraInventory rdbms staging_area +chmod 775 admin base grid oraInventory rdbms staging_area + +chown -R oracle:oinstall admin rdbms staging_area +chown -R grid:oinstall grid oraInventory base + +su - grid +mkdir -p /app/oracle/grid/product/19 + + +# on first node +############### + +# unzip distrib +cd /app/oracle/grid/product/19 +unzip /mnt/yavin4/kit/Oracle/Grid_Infra/19/distrib/LINUX.X64_193000_grid_home.zip + +# update Opatch +rm -rf OPatch +unzip /mnt/yavin4/kit/Oracle/opatch/p6880880_190000_Linux-x86-64.zip + +cd /app/oracle/staging_area/ +unzip /mnt/yavin4/kit/Oracle/Grid_Infra/19/GIRU/GIRU_19.27/p37641958_190000_Linux-x86-64.zip + +# apply the RU on this ORACLE_HOME +# on first node as grid + +export ORACLE_BASE=/app/oracle/base +export ORACLE_HOME=/app/oracle/grid/product/19 +export 
PATH=$ORACLE_HOME/bin:$PATH + +$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/36758186 +$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37642901 +$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37643161 +$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37654975 +$ORACLE_HOME/gridSetup.sh -silent -applyRU /app/oracle/staging_area/37641958/37762426 + +# to satisfy ALL pre-requisites, do this on ALL nodes + +dnf install -y $ORACLE_HOME/cv/rpm/cvuqdisk-1.0.10-1.rpm + +# in /etc/security/limits.conf + +# Oracle +oracle soft stack 10240 +grid soft stack 10240 +grid soft memlock 3145728 +grid hard memlock 3145728 + +# in /etc/sysctl.conf + +# other oracle settings +kernel.panic = 1 + + +# temporary SWAP +dd if=/dev/zero of=/mnt/unprotected/tmp/oracle/swap_20g.img bs=1G count=20 +mkswap /mnt/unprotected/tmp/oracle/swap_20g.img +swapon /mnt/unprotected/tmp/oracle/swap_20g.img +free -h + +############## + +# pre-check as grid +export ORACLE_BASE=/app/oracle/base +export ORACLE_HOME=/app/oracle/grid/product/19 +export PATH=$ORACLE_HOME/bin:$PATH + +$ORACLE_HOME/runcluvfy.sh stage -pre crsinst -n ylesia-db01,ylesia-db02,ylesia-db03 + + +# ASM disks +lsblk --list | egrep "vdd|vde|vdf|vdg|vdh" +ls -ltr /dev/vd[d-h] + +fdisk .....
all disk + + +lsblk --list | egrep "vdd|vde|vdf|vdg|vdh" +ls -ltr /dev/vd[d-h]1 + +# install asmlib on all nodes +dnf install -y oracleasm-support-3.1.0-10.el9.x86_64.rpm +dnf install -y oracleasmlib-3.1.0-6.el9.x86_64.rpm + +systemctl start oracleasm.service + +oracleasm configure -i + +(answers: grid, asmdba and all default) + +echo "kernel.io_uring_disabled = 0" >> /etc/sysctl.conf +sysctl -p + +# create ASM disks on first node +oracleasm createdisk DATA_01 /dev/vdd1 +oracleasm createdisk DATA_02 /dev/vde1 +oracleasm createdisk DATA_03 /dev/vdf1 +oracleasm createdisk RECO_01 /dev/vdg1 +oracleasm createdisk RECO_02 /dev/vdh1 + +oracleasm scandisks +oracleasm listdisks + +# on other nodes, only scan and list ASM disks + +# on first node, grid setup +$ORACLE_HOME/gridSetup.sh + +# RDBMS install +############### + +# unzip distrib +mkdir -p /app/oracle/rdbms/product/19 +cd /app/oracle/rdbms/product/19 +unzip /mnt/yavin4/kit/Oracle/Oracle_Database_19/distrib/LINUX.X64_193000_db_home.zip + + +# update Opatch +rm -rf OPatch +unzip /mnt/yavin4/kit/Oracle/opatch/p6880880_190000_Linux-x86-64.zip + +# apply the RU on this ORACLE_HOME +# on first node as oracle + +export ORACLE_BASE=/app/oracle/base +export ORACLE_HOME=/app/oracle/rdbms/product/19 +export PATH=$ORACLE_HOME/bin:$PATH + +$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/36758186 +$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37642901 +$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37643161 +$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37654975 +$ORACLE_HOME/runInstaller -silent -applyRU /app/oracle/staging_area/37641958/37762426 + +# install from an X session +$ORACLE_HOME/runInstaller + +# on all nodes +chmod -R 775 /app/oracle/base/admin /app/oracle/base/diag + +cat <<EOF >> /etc/oratab +SET19:/app/oracle/rdbms/product/19:N +EOF
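The oratab entry appended above follows the standard `SID:ORACLE_HOME:startup_flag` layout. Entries in that format are easy to sanity-check from the shell; a minimal sketch (it writes to a temp file so it runs anywhere; the SID and home are just the values from these notes, and on a real host you would point it at /etc/oratab):

```shell
# Parse oratab-style lines (SID:ORACLE_HOME:Y|N), skipping comment lines.
ORATAB=$(mktemp)
cat <<'EOF' > "${ORATAB}"
# comment line
SET19:/app/oracle/rdbms/product/19:N
EOF
entries=$(while IFS=: read -r sid home flag; do
  case "${sid}" in ''|'#'*) continue ;; esac
  echo "SID=${sid} HOME=${home} AUTOSTART=${flag}"
done < "${ORATAB}")
echo "${entries}"
# prints: SID=SET19 HOME=/app/oracle/rdbms/product/19 AUTOSTART=N
rm -f "${ORATAB}"
```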
+ + +# using DBCA to create/delete database + +export ORACLE_DB_NAME=AERON +export ORACLE_UNQNAME=AERONPRD +export PDB_NAME=REEK +export NODE1=ylesia-db01 +export NODE2=ylesia-db02 +export NODE3=ylesia-db03 +export SYS_PASSWORD="Secret00!" +export PDB_PASSWORD="Secret00!" + +# create MULTITENANT database +dbca -silent -createDatabase \ + -templateName General_Purpose.dbc \ + -sid ${ORACLE_UNQNAME} \ + -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \ + -characterSet AL32UTF8 \ + -sysPassword ${SYS_PASSWORD} \ + -systemPassword ${SYS_PASSWORD} \ + -createAsContainerDatabase true \ + -numberOfPDBs 1 \ + -pdbName ${PDB_NAME} \ + -pdbAdminPassword ${PDB_PASSWORD} \ + -databaseType MULTIPURPOSE \ + -automaticMemoryManagement false \ + -totalMemory 3072 \ + -redoLogFileSize 128 \ + -emConfiguration NONE \ + -ignorePreReqs \ + -nodelist ${NODE1},${NODE2},${NODE3} \ + -storageType ASM \ + -diskGroupName +DATA \ + -recoveryGroupName +RECO \ + -useOMF true \ + -initparams db_name=${ORACLE_DB_NAME},db_unique_name=${ORACLE_UNQNAME},sga_max_size=3G,sga_target=3G,pga_aggregate_target=512M \ + -enableArchive true \ + -recoveryAreaDestination +RECO \ + -recoveryAreaSize 30720 \ + -asmsnmpPassword ${SYS_PASSWORD} + +# create NON-CDB database +dbca -silent -createDatabase \ + -templateName General_Purpose.dbc \ + -sid ${ORACLE_UNQNAME} \ + -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \ + -characterSet AL32UTF8 \ + -sysPassword ${SYS_PASSWORD} \ + -systemPassword ${SYS_PASSWORD} \ + -createAsContainerDatabase false \ + -databaseType MULTIPURPOSE \ + -automaticMemoryManagement false \ + -totalMemory 3072 \ + -redoLogFileSize 128 \ + -emConfiguration NONE \ + -ignorePreReqs \ + -nodelist ${NODE1},${NODE2},${NODE3} \ + -storageType ASM \ + -diskGroupName +DATA \ + -recoveryGroupName +RECO \ + -useOMF true \ + -initparams db_name=${ORACLE_DB_NAME},db_unique_name=${ORACLE_UNQNAME},sga_max_size=3G,sga_target=3G,pga_aggregate_target=512M \ + -enableArchive true \ + 
-recoveryAreaDestination +RECO \ + -recoveryAreaSize 30720 \ + -asmsnmpPassword ${SYS_PASSWORD} + + +# delete database +dbca -silent -deleteDatabase \ + -sourceDB AERONPRD \ + -sysPassword ${SYS_PASSWORD} \ + -forceArchiveLogDeletion + \ No newline at end of file diff --git a/divers/SuSE_install_01.txt b/divers/SuSE_install_01.txt new file mode 100644 index 0000000..e41db3f --- /dev/null +++ b/divers/SuSE_install_01.txt @@ -0,0 +1,86 @@ +############# +# VM creation +############# + +mkdir /vm/ssd0/aquaris + +qemu-img create -f raw /vm/ssd0/aquaris/root_01.img 64G + +virt-install \ + --graphics vnc,password=secret,listen=0.0.0.0 \ + --name=aquaris \ + --vcpus=4 \ + --memory=4096 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/vm/hdd0/_kit_/openSUSE-Leap-15.5-NET-x86_64-Build491.1-Media.iso \ + --disk /vm/ssd0/aquaris/root_01.img \ + --os-variant=opensuse15.4 + +#################### +# SuSE configuration +#################### + +# network interfaces +/etc/sysconfig/network/ifcfg-eth0 +/etc/sysconfig/network/ifcfg-eth1 + +#DNS +/run/netconfig/resolv.conf +# set NETCONFIG_DNS_POLICY="auto" in /etc/sysconfig/network/config + +# gateway +/etc/sysconfig/network/routes + +# delete unwanted static entries in /etc/hosts + +############## +# VM customize +############## + +qemu-img create -f raw /vm/ssd0/aquaris/app_01.img 60G +dd if=/dev/zero of=/vm/ssd0/aquaris/data_01.img bs=1G count=30 +dd if=/dev/zero of=/vm/ssd0/aquaris/backup_01.img bs=1G count=20 + +virsh domblklist aquaris --details + +virsh attach-disk aquaris /vm/ssd0/aquaris/app_01.img vdb --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk aquaris /vm/ssd0/aquaris/data_01.img vdc --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk aquaris /vm/ssd0/aquaris/backup_01.img vdd --driver qemu --subdriver raw --targetbus virtio --persistent + +btrfs device scan +btrfs filesystem show + +mkfs.btrfs /dev/vdb +mkfs.btrfs /dev/vdc +mkfs.btrfs
/dev/vdd + + +# create mount points +mkdir /app /data /backup + +# add in /etc/fstab +UUID=fe1756c7-a062-40ed-921a-9fb1c12d8d51 /app btrfs defaults 0 0 +UUID=3b147a0d-ca13-46f5-aa75-72f5a2b9fd4c /data btrfs defaults 0 0 +UUID=d769e88b-5ec4-4e0a-93cd-1f2a9deecc8b /backup btrfs defaults 0 0 + +# mount all +mount -a + +btrfs subvolume create /backup/current +mkdir /backup/.snapshots + +btrfs subvolume snapshot /backup/current /backup/.snapshots/01 +btrfs subvolume snapshot /backup/current /backup/.snapshots/02 + +btrfs subvolume list /backup/current + +btrfs subvolume show /backup/.snapshots/01 +btrfs subvolume show /backup/.snapshots/02 + +tree -a /backup + +btrfs subvolume delete /backup/.snapshots/01 +btrfs subvolume delete /backup/.snapshots/02 +btrfs subvolume delete /backup/current + diff --git a/divers/TLS_connection_01.txt b/divers/TLS_connection_01.txt new file mode 100644 index 0000000..02c12c1 --- /dev/null +++ b/divers/TLS_connection_01.txt @@ -0,0 +1,222 @@ +# https://wadhahdaouehi.tn/2023/05/oracle-database-server-client-certificate-tcps-oracle-19c/ + + _____ _ _ + / ____| (_) | | + | (___ ___ _ ____ _____ _ __ ___ _ __| | ___ + \___ \ / _ \ '__\ \ / / _ \ '__| / __| |/ _` |/ _ \ + ____) | __/ | \ V / __/ | \__ \ | (_| | __/ + |_____/ \___|_| \_/ \___|_| |___/_|\__,_|\___| + + +# Create a new auto-login wallet +export WALLET_DIRECTORY=/home/oracle/poc_tls/wallet +export WALLET_PASSWORD="VaeVictis00!" 
+ +orapki wallet create -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -auto_login_local + +# Create a self-signed certificate and load it into the wallet +export CERT_VALIDITY_DAYS=3650 + +orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN=`hostname`" -keysize 2048 -self_signed -validity ${CERT_VALIDITY_DAYS} + +# Check the contents of the wallet +orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} + +Note: The self-signed certificate is both a user and trusted certificate + +# Export the certificate to load it into the client wallet later +export CERT_EXPORT_PATH=/home/oracle/poc_tls/export +orapki wallet export -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN= `hostname` " -cert ${CERT_EXPORT_PATH}/`hostname`-certificate.crt + + + _____ _ _ _ _ _ + / ____| (_) | | (_) | | + | | | |_ ___ _ __ | |_ ___ _ __| | ___ + | | | | |/ _ \ '_ \| __| / __| |/ _` |/ _ \ + | |____| | | __/ | | | |_ \__ \ | (_| | __/ + \_____|_|_|\___|_| |_|\__| |___/_|\__,_|\___| + + +# Create a new auto-login wallet +export WALLET_DIRECTORY=/mnt/yavin4/tmp/00000/wayland/wallet +export WALLET_PASSWORD="AdVictoriam00!" 
+ +orapki wallet create -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -auto_login_local + +# Create a self-signed certificate and load it into the wallet +export CERT_VALIDITY_DAYS=3650 + +orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN=`hostname`" -keysize 2048 -self_signed -validity ${CERT_VALIDITY_DAYS} + +# Check the contents of the wallet +orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} + +Note: The self-signed certificate is both a user and trusted certificate + +# Export the certificate to load it into the client wallet later +export CERT_EXPORT_PATH="/mnt/yavin4/tmp/00000/wayland/cert_expo" +orapki wallet export -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -dn "CN= `hostname` " -cert ${CERT_EXPORT_PATH}/`hostname`-certificate.crt + + + _____ _ _ __ _ _ _ + / ____| | | (_)/ _(_) | | | | + | | ___ _ __| |_ _| |_ _ ___ __ _| |_ ___ _____ _____| |__ __ _ _ __ __ _ ___ + | | / _ \ '__| __| | _| |/ __/ _` | __/ _ \ / _ \ \/ / __| '_ \ / _` | '_ \ / _` |/ _ \ + | |___| __/ | | |_| | | | | (_| (_| | || __/ | __/> < (__| | | | (_| | | | | (_| | __/ + \_____\___|_| \__|_|_| |_|\___\__,_|\__\___| \___/_/\_\___|_| |_|\__,_|_| |_|\__, |\___| + __/ | + |___/ + +Note: Both Server/Client should trust each other + +# Load the client certificate into the server wallet +export WALLET_DIRECTORY=/mnt/yavin4/tmp/00000/bakura/wallet +export WALLET_PASSWORD="VaeVictis00!" +export CERT_EXPORT_FILE="/mnt/yavin4/tmp/00000/wayland/cert_expo/wayland.swgalaxy-certificate.crt" + +orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -trusted_cert -cert ${CERT_EXPORT_FILE} +# Check the contents of the wallet +orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} + + +# Load the server certificate into the client wallet +export WALLET_DIRECTORY=/mnt/yavin4/tmp/00000/wayland/wallet +export WALLET_PASSWORD="AdVictoriam00!" 
+export CERT_EXPORT_FILE="/mnt/yavin4/tmp/00000/bakura/cert_expo/bakura.swgalaxy-certificate.crt" + +orapki wallet add -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} -trusted_cert -cert ${CERT_EXPORT_FILE} +# Check the contents of the wallet +orapki wallet display -wallet ${WALLET_DIRECTORY} -pwd ${WALLET_PASSWORD} + + + _ _ _ _ + | | (_) | | | | + | | _ ___| |_ ___ _ __ ___ _ __ ___ ___| |_ _ _ _ __ + | | | / __| __/ _ \ '_ \ / _ \ '__| / __|/ _ \ __| | | | '_ \ + | |____| \__ \ || __/ | | | __/ | \__ \ __/ |_| |_| | |_) | + |______|_|___/\__\___|_| |_|\___|_| |___/\___|\__|\__,_| .__/ + | | + |_| + +Note: I didn't succeed the LISTENER setup using a custom TNS_ADMIN or using /etc/listener.ora file + +rm -rf /etc/listener.ora +rm -rf /etc/tnsnames.ora + + +# I'm using a read-only ORACLE_HOME +cat $(orabasehome)/network/admin/sqlnet.ora + +WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /mnt/yavin4/tmp/00000/bakura/wallet) + ) + ) + +SQLNET.AUTHENTICATION_SERVICES = (TCPS,BEQ,NTP) +SSL_CLIENT_AUTHENTICATION = FALSE + + +cat $(orabasehome)/network/admin/listener.ora +SSL_CLIENT_AUTHENTICATION = FALSE +WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /mnt/yavin4/tmp/00000/bakura/wallet) + ) + ) + +LISTENER_DEMO = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCP)(HOST = bakura.swgalaxy)(PORT = 1600)) + ) + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCPS)(HOST = bakura.swgalaxy)(PORT = 1700)) + ) + ) + +# start specific listener +lsnrctl start LISTENER_DEMO + +# register the database in the listener; note that TCPS adress was not required +alter system set local_listener='(DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = bakura.swgalaxy)(PORT = 1600)) ) )' scope=both sid='*'; +alter system register; + +Note: I don't explicitly specified TCPS adress but TCPS connections will be OK + + _____ _ _ _ _ + / ____| (_) | | | | + | | | |_ ___ _ __ | |_ ___ ___| |_ _ _ _ __ + | 
| | | |/ _ \ '_ \| __| / __|/ _ \ __| | | | '_ \ + | |____| | | __/ | | | |_ \__ \ __/ |_| |_| | |_) | + \_____|_|_|\___|_| |_|\__| |___/\___|\__|\__,_| .__/ + | | + |_| +Note: On client side, custom TNS_ADMIN worked + +export TNS_ADMIN=/mnt/yavin4/tmp/00000/wayland/tns_admin + +cd $TNS_ADMIN + +cat sqlnet.ora + +WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /mnt/yavin4/tmp/00000/wayland/wallet) + ) + ) + +SQLNET.AUTHENTICATION_SERVICES = (TCPS,BEQ,NTP) +SSL_CLIENT_AUTHENTICATION = FALSE + + +cat tnsnames.ora + +HUTTPRD_tcp = + (DESCRIPTION = + (ADDRESS_LIST = + (ADDRESS = (PROTOCOL = TCP)(HOST = bakura.swgalaxy)(PORT = 1600)) + ) + (CONNECT_DATA = + (SERVER = DEDICATED) + (SERVICE_NAME = HUTTPRD) + ) + ) + +HUTTPRD_tcps = + (DESCRIPTION = + (ADDRESS_LIST = + (ADDRESS = (PROTOCOL = TCPS)(HOST = bakura.swgalaxy)(PORT = 1700)) + ) + (CONNECT_DATA = + (SERVER = DEDICATED) + (SERVICE_NAME = HUTTPRD) + ) + ) + +# JABBA is a PDB inside HUTTPRD +JABBA_tcps = + (DESCRIPTION = + (ADDRESS_LIST = + (ADDRESS = (PROTOCOL = TCPS)(HOST = bakura.swgalaxy)(PORT = 1700)) + ) + (CONNECT_DATA = + (SERVER = DEDICATED) + (SERVICE_NAME = JABBA) + ) + ) + + +# check connections +connect c##globaldba/"secret"@HUTTPRD_tcp +connect c##globaldba/"secret"@HUTTPRD_tcps +connect c##globaldba/"secret"@JABBA_tcps + +# check for connection protocol: tcp/tcps +select SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') from dual; diff --git a/divers/ash_plsql_01.txt b/divers/ash_plsql_01.txt new file mode 100644 index 0000000..56704b1 --- /dev/null +++ b/divers/ash_plsql_01.txt @@ -0,0 +1,93 @@ +connect user1/secret@//bakura.swgalaxy:1521/WOMBAT + +create table tpl1 as select * from dba_extents; +create table tpl2 as (select * from tpl1 union all select * from tpl1); +create table tpl3 as (select * from tpl2 union all select * from tpl2); + + +select /* MYQ1 */ + count(*) +from + tpl1 + join tpl2 on tpl1.bytes=tpl2.bytes + join tpl3 on tpl1.segment_name=tpl3.segment_name +/ + 
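Once MYQ1 has been executed, its sql_id can be pulled from the shared pool and cross-checked against ASH samples. A minimal sketch using the standard v$ views (not part of the original notes; querying v$active_session_history requires the Diagnostics Pack license):

```sql
-- Find the tagged statement in the library cache;
-- the second predicate excludes this probe query itself.
select sql_id, sql_text
from   v$sql
where  sql_text like '%MYQ1%'
and    sql_text not like '%v$sql%';

-- How many 1-second ASH samples caught each statement in the last hour
select sql_id, count(*) as samples
from   v$active_session_history
where  sample_time > sysdate - 1/24
group  by sql_id
order  by samples desc;
```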
+ +-------------------------------------------------------- +-- DDL for Package PACKAGE1 +-------------------------------------------------------- + + CREATE OR REPLACE EDITIONABLE PACKAGE "USER1"."PACKAGE1" AS + +PROCEDURE PROC1; +PROCEDURE PROC2; +PROCEDURE PROC3; + +END PACKAGE1; + +/ + + +-------------------------------------------------------- +-- DDL for Package Body PACKAGE1 +-------------------------------------------------------- + + CREATE OR REPLACE EDITIONABLE PACKAGE BODY "USER1"."PACKAGE1" AS + + PROCEDURE proc1 AS + rr NUMBER; + BEGIN + SELECT /* MYQ2 */ + COUNT(*) + INTO rr + FROM + tpl1 + JOIN tpl2 ON tpl1.bytes = tpl2.bytes + JOIN tpl3 ON tpl1.segment_name = tpl3.segment_name; + + END; + + PROCEDURE proc2 AS + z NUMBER; + BEGIN + SELECT /* MYQ3 */ + COUNT(*) + INTO z + FROM + tpl1 + JOIN tpl2 ON tpl1.bytes = tpl2.bytes + JOIN tpl3 ON tpl1.segment_name = tpl3.segment_name; + + END; + + + PROCEDURE proc3 AS + v NUMBER; + BEGIN + SELECT /* MYQ4 */ + COUNT(*) + INTO v + FROM + tpl1 + JOIN tpl2 ON tpl1.bytes = tpl2.bytes + JOIN tpl3 ON tpl1.segment_name = tpl3.segment_name; + + END; + + +END package1; + +/ + + +SQL> @ash/ashtop sql_id,TOP_LEVEL_SQL_ID,PLSQL_ENTRY_OBJECT_ID,PLSQL_ENTRY_SUBPROGRAM_ID "username='USER1'" sysdate-1/24 sysdate + + Total Distinct Distinct + Seconds AAS %This SQL_ID TOP_LEVEL_SQL PLSQL_ENTRY_OBJECT_ID PLSQL_ENTRY_SUBPROGRAM_ID FIRST_SEEN LAST_SEEN Execs Seen Tstamps +--------- ------- ------- ------------- ------------- --------------------- ------------------------- ------------------- ------------------- ---------- -------- + 105 .0 41% | a0dhc0nj62mk1 8ybf2rvtac57c 33008 3 2023-07-19 20:45:23 2023-07-19 20:47:07 1 105 + 104 .0 41% | a0dhc0nj62mk1 25ju18ztqn751 33008 1 2023-07-19 20:34:23 2023-07-19 20:36:06 1 104 + 42 .0 16% | a0dhc0nj62mk1 cum98j5xfkk62 33008 2 2023-07-19 20:44:37 2023-07-19 20:45:18 1 42 + + \ No newline at end of file diff --git a/divers/certbot_renew_01.txt b/divers/certbot_renew_01.txt new file mode 
100644 index 0000000..61fc388 --- /dev/null +++ b/divers/certbot_renew_01.txt @@ -0,0 +1,8 @@ +certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/memo.dbaoracle.fr -d memo.dbaoracle.fr +certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/support.dbaoracle.fr -d support.dbaoracle.fr +certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/public.dbaoracle.fr -d public.dbaoracle.fr + +certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/sabnzbd.dbaoracle.fr -d sabnzbd.dbaoracle.fr +certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/lidarr.dbaoracle.fr -d lidarr.dbaoracle.fr +certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/sonarr.dbaoracle.fr -d sonarr.dbaoracle.fr +certbot certonly --webroot --webroot-path /app/persistent_docker/nginx/www/radarr.dbaoracle.fr -d radarr.dbaoracle.fr diff --git a/divers/clone_oracle_home_golden_image_01.txt b/divers/clone_oracle_home_golden_image_01.txt new file mode 100644 index 0000000..5d08804 --- /dev/null +++ b/divers/clone_oracle_home_golden_image_01.txt @@ -0,0 +1,88 @@ +-- https://rene-ace.com/how-to-clone-an-oracle-home-in-19c/ +----------------------------------------------------------- + +cd $ORACLE_HOME/rdbms/lib/ +cat config.c | grep define + +----------------------------> +#define SS_DBA_GRP "dba" +#define SS_OPER_GRP "oper" +#define SS_ASM_GRP "" +#define SS_BKP_GRP "backupdba" +#define SS_DGD_GRP "dgdba" +#define SS_KMT_GRP "kmdba" +#define SS_RAC_GRP "racdba" +<---------------------------- + +$ORACLE_HOME/runInstaller -silent -createGoldImage -destinationLocation /app/oracle/staging_area + +cd /app/oracle/staging_area +unzip -v db_home_2023-08-16_02-20-39PM.zip + +mkdir -p /app/oracle/product/19.20 +cd /app/oracle/product/19.20 +unzip /app/oracle/staging_area/db_home_2023-08-16_02-20-39PM.zip + +unset ORACLE_HOME ORACLE_SID ORACLE_RSID ORACLE_UNQNAME ORACLE_BASE + +export 
ORACLE_HOME=/app/oracle/product/19.20 +export ORACLE_HOSTNAME=togoria +export ORA_INVENTORY=/app/oracle/oraInventory +export NODE1_HOSTNAME=togoria +# export NODE2_HOSTNAME=reneace02 +export ORACLE_BASE=/app/oracle/base + + +# current +# required only if OS is OEL8 +export CV_ASSUME_DISTID=OEL7.8 + +${ORACLE_HOME}/runInstaller -ignorePrereq -waitforcompletion -silent \ +-responseFile ${ORACLE_HOME}/install/response/db_install.rsp \ +oracle.install.option=INSTALL_DB_SWONLY \ +ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \ +UNIX_GROUP_NAME=oinstall \ +INVENTORY_LOCATION=${ORA_INVENTORY} \ +ORACLE_HOME=${ORACLE_HOME} \ +ORACLE_BASE=${ORACLE_BASE} \ +oracle.install.db.OSDBA_GROUP=dba \ +oracle.install.db.OSOPER_GROUP=oper \ +oracle.install.db.OSBACKUPDBA_GROUP=backupdba \ +oracle.install.db.OSDGDBA_GROUP=dgdba \ +oracle.install.db.OSKMDBA_GROUP=kmdba \ +oracle.install.db.OSRACDBA_GROUP=racdba + + +# original +${ORACLE_HOME}/runInstaller -ignorePrereq -waitforcompletion -silent \ +-responseFile ${ORACLE_HOME}/install/response/db_install.rsp \ +oracle.install.option=INSTALL_DB_SWONLY \ +ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \ +UNIX_GROUP_NAME=oinstall \ +INVENTORY_LOCATION=${ORA_INVENTORY} \ +SELECTED_LANGUAGES=en \ +ORACLE_HOME=${ORACLE_HOME} \ +ORACLE_BASE=${ORACLE_BASE} \ +oracle.install.db.InstallEdition=EE \ +oracle.install.db.OSDBA_GROUP=dba \ +oracle.install.db.OSOPER_GROUP=dba \ +oracle.install.db.OSBACKUPDBA_GROUP=dba \ +oracle.install.db.OSDGDBA_GROUP=dba \ +oracle.install.db.OSKMDBA_GROUP=dba \ +oracle.install.db.OSRACDBA_GROUP=dba \ +oracle.install.db.CLUSTER_NODES=${NODE1_HOSTNAME},${NODE2_HOSTNAME} \ +oracle.install.db.isRACOneInstall=false \ +oracle.install.db.rac.serverpoolCardinality=0 \ +oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \ +oracle.install.db.ConfigureAsContainerDB=false \ +SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \ +DECLINE_SECURITY_UPDATES=true + + +# check ORACLE homes in inventory +cat /app/oracle/oraInventory/ContentsXML/inventory.xml | grep
"HOME NAME" + + + + + diff --git a/divers/dataguard_21_RAC_01.txt b/divers/dataguard_21_RAC_01.txt new file mode 100644 index 0000000..1948a18 --- /dev/null +++ b/divers/dataguard_21_RAC_01.txt @@ -0,0 +1,115 @@ +rman target / + +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input; + release channel ch01; + release channel ch02; + release channel ch03; + release channel ch04; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/%d_%U_%s_%t.controlfile'; + backup current controlfile; + release channel ch01; +} + +srvctl add database -d HUTTPRD -o /app/oracle/product/19 -p '+DATA/HUTTPRD/spfile.ora' + +~~ create passwordfile on ASM; if the DB is not yet registered on CRS, you will get a WARNING +orapwd FILE='+DATA/HUTTPRD/orapwHUTTPRD' ENTRIES=10 DBUNIQUENAME='HUTTPRD' password="Secret00!" 
+srvctl modify database -d HUTTPRD -pwfile '+DATA/HUTTPRD/orapwHUTTPRD' + +srvctl add instance -d HUTTPRD -i HUTTPRD1 -n ylesia-db01 +srvctl add instance -d HUTTPRD -i HUTTPRD2 -n ylesia-db02 + + +alias HUTTPRD='rlwrap sqlplus sys/"Secret00!"@ylesia-scan/HUTTPRD as sysdba' +alias HUTTDRP='rlwrap sqlplus sys/"Secret00!"@rodia-scan/HUTTDRP as sysdba' + + +run +{ + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + duplicate database 'HUTT' for standby backup location '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/'; +} + + +srvctl add database -d HUTTDRP -o /app/oracle/product/21 -p '+DATA/HUTTDRP/spfile.ora' +srvctl modify database -d HUTTDRP -r physical_standby -n HUTT -s MOUNT + +srvctl add instance -d HUTTDRP -i HUTTDRP1 -n rodia-db01 +srvctl add instance -d HUTTDRP -i HUTTDRP2 -n rodia-db02 + + +# copy passwordfile from primary to standby +ASMCMD [+DATA/HUTTPRD] > pwcopy +DATA/HUTTPRD/PASSWORD/pwdhuttprd.274.1137773649 /tmp +scp /tmp/pwdhuttprd.274.1137773649 rodia-db02:/tmp +ASMCMD [+DATA/HUTTDRP] > pwcopy /tmp/pwdhuttprd.274.1137773649 +DATA/HUTTDRP/orapwhuttdrp + +srvctl modify database -db HUTTDRP -pwfile '+DATA/HUTTDRP/orapwhuttdrp' + + +alter system set dg_broker_config_file1='+DATA/HUTTPRD/dg_broker_01.dat' scope=both sid='*'; +alter system set dg_broker_config_file2='+DATA/HUTTPRD/dg_broker_02.dat' scope=both sid='*'; +alter system set dg_broker_start=TRUE scope=both sid='*'; + +alter system set dg_broker_config_file1='+DATA/HUTTDRP/dg_broker_01.dat' scope=both sid='*'; +alter system set dg_broker_config_file2='+DATA/HUTTDRP/dg_broker_02.dat' scope=both sid='*'; +alter system set dg_broker_start=TRUE scope=both sid='*'; + + +select GROUP#,THREAD#,MEMBERS,STATUS, BYTES/(1024*1024) Mb from v$log; +select GROUP#,THREAD#,STATUS, BYTES/(1024*1024) Mb from v$standby_log; + +set lines 256 
+col MEMBER for a80 +select * from v$logfile; + + +-- create standby redologs +select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; +select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; + +-- clear / drop standby redologs +select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log; +select 'ALTER DATABASE DROP STANDBY LOGFILE GROUP '||GROUP#||';' from v$standby_log; + + +dgmgrl +DGMGRL> connect sys/"Secret00!"@ylesia-scan:1521/HUTTPRD +DGMGRL> create configuration HUTT as primary database is HUTTPRD connect identifier is ylesia-scan:1521/HUTTPRD; +DGMGRL> add database HUTTDRP as connect identifier is rodia-scan:1521/HUTTDRP; + +DGMGRL> enable configuration; +DGMGRL> show configuration; + +DGMGRL> edit database 'huttdrp' set property ArchiveLagTarget=0; +DGMGRL> edit database 'huttdrp' set property LogArchiveMaxProcesses=2; +DGMGRL> edit database 'huttdrp' set property LogArchiveMinSucceedDest=1; +DGMGRL> edit database 'huttdrp' set property StandbyFileManagement='AUTO'; + +DGMGRL> edit database 'huttprd' set property ArchiveLagTarget=0; +DGMGRL> edit database 'huttprd' set property LogArchiveMaxProcesses=2; +DGMGRL> edit database 'huttprd' set property LogArchiveMinSucceedDest=1; +DGMGRL> edit database 'huttprd' set property StandbyFileManagement='AUTO'; + +DGMGRL> show configuration; + + + +RMAN> configure archivelog deletion policy to applied on all standby; + +# if incremental recover from source is required +RMAN> recover database from service 'ylesia-scan/HUTTPRD' using compressed backupset section size 2G; + + + + + diff --git a/divers/dataguard_cascade_routes_01.txt b/divers/dataguard_cascade_routes_01.txt new file mode 100644 index 0000000..8c5e132 --- /dev/null +++ b/divers/dataguard_cascade_routes_01.txt @@ -0,0 +1,162 @@ +Primary: ylesia-scan:1521/HUTTPRD +Dataguard: rodia-scan:1521/HUTTDRP +Cascade 1: kamino:1521/HUTTCA1 +Far sync: 
mandalore:1521/HUTTFAR
+Remote dataguard: taris:1521/HUTTREM
+
+alias HUTTPRD='rlwrap sqlplus sys/"Secret00!"@ylesia-scan:1521/HUTTPRD as sysdba'
+alias HUTTPRD1='rlwrap sqlplus sys/"Secret00!"@ylesia-db01-vip:1521/HUTTPRD as sysdba'
+alias HUTTPRD2='rlwrap sqlplus sys/"Secret00!"@ylesia-db02-vip:1521/HUTTPRD as sysdba'
+alias HUTTDRP='rlwrap sqlplus sys/"Secret00!"@rodia-scan:1521/HUTTDRP as sysdba'
+alias HUTTCA1='rlwrap sqlplus sys/"Secret00!"@kamino:1521/HUTTCA1 as sysdba'
+alias HUTTFAR='rlwrap sqlplus sys/"Secret00!"@mandalore:1521/HUTTFAR as sysdba'
+alias HUTTREM='rlwrap sqlplus sys/"Secret00!"@taris:1521/HUTTREM as sysdba'
+
+
+run
+{
+    allocate auxiliary channel aux01 device type disk;
+    allocate auxiliary channel aux02 device type disk;
+    allocate auxiliary channel aux03 device type disk;
+    allocate auxiliary channel aux04 device type disk;
+    duplicate database 'HUTT' for standby backup location '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/21/backupset/';
+}
+
+
+run
+{
+    allocate channel pri01 device type disk;
+    allocate channel pri02 device type disk;
+    allocate channel pri03 device type disk;
+    allocate channel pri04 device type disk;
+    recover database from service 'ylesia-scan:1521/HUTTPRD' using compressed backupset section size 1G;
+}
+
+alter database create standby controlfile as '/mnt/yavin4/tmp/00000/HUTTPRD1.stdby';
+alter database create far sync instance controlfile as '/mnt/yavin4/tmp/00000/HUTTPRD1.far';
+
+dgmgrl
+DGMGRL> connect sys/"Secret00!"@ylesia-scan:1521/HUTTPRD
+DGMGRL> add database HUTTCA1 as connect identifier is kamino:1521/HUTTCA1;
+DGMGRL> add database HUTTREM as connect identifier is taris:1521/HUTTREM;
+DGMGRL> add far_sync HUTTFAR as connect identifier is mandalore:1521/HUTTFAR;
+
+DGMGRL> show database 'huttprd' redoroutes;
+DGMGRL> show database 'huttdrp' redoroutes;
+
+# routes config ###########################################################################
+
+# without FAR SYNC: the main dataguard relays redo to the cascade
+DGMGRL> edit database huttprd set property redoroutes = '(local:huttdrp)(huttdrp:huttca1)';
+DGMGRL> edit database huttdrp set property redoroutes = '(huttprd:huttca1)(local:huttprd)';
+
+# FAR SYNC built but not activated: the main dataguard relays redo to the cascade and the remote dataguard
+DGMGRL> edit database huttprd set property redoroutes = '(local:huttdrp)(huttdrp:huttca1)';
+DGMGRL> edit database huttdrp set property redoroutes = '(huttprd:huttca1,huttrem)(local:huttprd)';
+
+# FAR SYNC activated: the main dataguard relays redo to the cascade and FAR SYNC relays redo to the remote dataguard
+DGMGRL> edit database huttprd set property redoroutes = '(local:huttdrp,huttfar SYNC)(huttdrp:huttca1 ASYNC)';
+DGMGRL> edit database huttdrp set property redoroutes = '(huttprd:huttca1 ASYNC)(local:huttprd,huttfar SYNC)';
+DGMGRL> edit far_sync huttfar set property redoroutes = '(huttprd:huttrem ASYNC)(huttdrp:huttrem ASYNC)';
+
+# #########################################################################################
+
+
+DGMGRL> edit database huttprd set property StandbyFileManagement='AUTO';
+DGMGRL> edit database huttdrp set property StandbyFileManagement='AUTO';
+DGMGRL> edit database huttca1 set property StandbyFileManagement='AUTO';
+DGMGRL> edit database huttrem set property StandbyFileManagement='AUTO';
+DGMGRL> edit far_sync huttfar set property StandbyFileManagement='AUTO';
+
+# unless the configuration protection mode is set to MaxAvailability, the cascade standby redologs were not used and the broker showed warnings
+# after setting it to MaxAvailability, switching back to MaxPerformance does not affect the situation: the cascade standby still uses
+# standby redologs and the broker status does not display warnings anymore
+
+DGMGRL> edit configuration set protection mode as MaxAvailability;
+DGMGRL> edit configuration set protection mode as MaxPerformance;
+
+
+# not sure this helps with
+# ORA-16853: apply lag has exceeded specified threshold
+# ORA-16855: transport lag has exceeded specified 
threshold
+
+DGMGRL> edit database huttprd set property TransportDisconnectedThreshold=0;
+DGMGRL> edit database huttdrp set property TransportDisconnectedThreshold=0;
+DGMGRL> edit database huttca1 set property TransportDisconnectedThreshold=0;
+
+DGMGRL> edit database huttprd set property ApplyLagThreshold=0;
+DGMGRL> edit database huttdrp set property ApplyLagThreshold=0;
+DGMGRL> edit database huttca1 set property ApplyLagThreshold=0;
+
+# otherwise, to reset:
+
+DGMGRL> edit database huttprd reset property ApplyLagThreshold;
+DGMGRL> edit database huttdrp reset property ApplyLagThreshold;
+DGMGRL> edit database huttca1 reset property ApplyLagThreshold;
+
+DGMGRL> edit database huttprd reset property TransportDisconnectedThreshold;
+DGMGRL> edit database huttdrp reset property TransportDisconnectedThreshold;
+DGMGRL> edit database huttca1 reset property TransportDisconnectedThreshold;
+
+
+DGMGRL> enable database huttca1;
+DGMGRL> edit database huttca1 set state='APPLY-OFF';
+DGMGRL> edit database huttca1 set state='APPLY-ON';
+
+-- create standby redologs
+select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log;
+select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log;
+
+-- clear / drop standby redologs
+select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log;
+select 'ALTER DATABASE DROP STANDBY LOGFILE GROUP '||GROUP#||';' from v$standby_log;
+
+
+alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
+set lines 200
+
+-- on PRIMARY database
+----------------------
+select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log group by THREAD#;
+
+-- on STANDBY database
+----------------------
+select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log
+  where APPLIED='YES' group by THREAD#;
+
+
+set lines 155 pages 9999
+col thread# for 9999990
+col 
sequence# for 999999990
+col grp for 990
+col fnm for a50 head "File Name"
+col "First SCN Number" for 999999999999990
+break on thread#
+
+select
+     a.thread#
+    ,a.sequence#
+    ,a.group# grp
+    , a.bytes/1024/1024 Size_MB
+    ,a.status
+    ,a.archived
+    ,a.first_change# "First SCN Number"
+    ,to_char(FIRST_TIME,'YYYY-MM-DD HH24:MI:SS') "First SCN Time"
+    ,to_char(LAST_TIME,'YYYY-MM-DD HH24:MI:SS') "Last SCN Time"
+from
+    gv$standby_log a order by 1,2,3,4
+ /
+
+
+
+# https://www.dba-scripts.com/articles/dataguard-standby/data-guard-far-sync/
+
+
+edit database huttdrp set property redoroutes = '(huttprd:huttca1)(huttprd:huttrem)(local:huttprd)';
+enable database huttrem;
+
+
+
+
+create pluggable database JABBA admin user admin identified by "Secret00!";
+
diff --git a/divers/dg.txt b/divers/dg.txt
new file mode 100644
index 0000000..ae53331
--- /dev/null
+++ b/divers/dg.txt
@@ -0,0 +1,11 @@
+alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
+set lines 200
+
+-- on PRIMARY database
+----------------------
+select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log group by THREAD#;
+
+-- on STANDBY database
+----------------------
+select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log
+  where APPLIED='YES' group by THREAD#;
diff --git a/divers/disable_IPV6.md b/divers/disable_IPV6.md
new file mode 100644
index 0000000..13825d6
--- /dev/null
+++ b/divers/disable_IPV6.md
@@ -0,0 +1,17 @@
+Create a sysctl config file:
+```bash
+sudo tee /etc/sysctl.d/99-disable-ipv6.conf >/dev/null <<'EOF'
+net.ipv6.conf.all.disable_ipv6 = 1
+net.ipv6.conf.default.disable_ipv6 = 1
+EOF
+```
+
+Apply the settings:
+```bash
+sudo sysctl --system
+```
+
+Verify:
+```bash
+cat /proc/sys/net/ipv6/conf/all/disable_ipv6
+```
diff --git a/divers/dnsmanager_api_example_01.txt b/divers/dnsmanager_api_example_01.txt
new file mode 100644
index 0000000..2a0f2f3
--- /dev/null
+++ 
b/divers/dnsmanager_api_example_01.txt
@@ -0,0 +1,9 @@
+curl -s https://app.dnsmanager.io/api/v1/user/domains \
+  -u "9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy" | jq
+
+curl -s https://app.dnsmanager.io/api/v1/user/domain/151914/records \
+  -u "9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy" | jq
+
+curl -s -X PUT -d content="1.1.1.1" https://app.dnsmanager.io/api/v1/user/domain/151914/record/16572810 \
+-u "9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy" | jq
+
diff --git a/divers/import_certificate_RHEL9.md b/divers/import_certificate_RHEL9.md
new file mode 100644
index 0000000..419e875
--- /dev/null
+++ b/divers/import_certificate_RHEL9.md
@@ -0,0 +1,19 @@
+# How to Import Your Own CA Root on RHEL 9
+
+## Place your CA certificate in the correct directory
+
+```bash
+cp /mnt/unprotected/tmp/oracle/swgalaxy_root_ca.cert.pem /etc/pki/ca-trust/source/anchors/
+```
+
+## Update the system trust store
+
+```bash
+update-ca-trust extract
+```
+
+## Verify that your CA is now trusted
+
+```bash
+openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt /etc/pki/ca-trust/source/anchors/swgalaxy_root_ca.cert.pem
+```
diff --git a/divers/issue_after_swap_lv_destroy_01.txt b/divers/issue_after_swap_lv_destroy_01.txt
new file mode 100644
index 0000000..8beabcc
--- /dev/null
+++ b/divers/issue_after_swap_lv_destroy_01.txt
@@ -0,0 +1,8 @@
+# after destroying a SWAP LV to create a new one, a stale reference remains in /etc/default/grub
+# in GRUB_CMDLINE_LINUX
+
+# delete the stale swap reference from GRUB_CMDLINE_LINUX in /etc/default/grub
+vi /etc/default/grub
+grub2-mkconfig -o /boot/grub2/grub.cfg
+
+# restart the machine
diff --git a/divers/linux_change_machine_id.md b/divers/linux_change_machine_id.md
new file mode 100644
index 0000000..aff4e5a
--- /dev/null
+++ b/divers/linux_change_machine_id.md
@@ -0,0 +1,5 @@
+Commands to generate a new machine ID:
+```bash
+cat /dev/null > /etc/machine-id
+systemd-machine-id-setup
+```
diff 
--git a/divers/linux_cleanup_boot_partition.txt b/divers/linux_cleanup_boot_partition.txt
new file mode 100644
index 0000000..450dd02
--- /dev/null
+++ b/divers/linux_cleanup_boot_partition.txt
@@ -0,0 +1,27 @@
+@ Technical Tip: Clean up /boot in CentOS, RHEL or Rocky Linux 8 and up
+
+1) Check the current kernel being used:
+
+
+sudo uname -sr
+
+
+2) List all kernels installed on the system:
+
+
+sudo rpm -q kernel
+
+
+3) Delete old kernels and keep only <number> kernels:
+
+
+sudo dnf remove --oldinstallonly --setopt installonly_limit=<number> kernel
+
+
+Note: <number> can be set to 1, 2, 3 or other numeric values. Carefully check the running kernel in step 2 and any other kernels used before running this command. Alternatively, use the following command to delete kernels one by one:
+
+
+rpm -e <kernel-name>
+
+Kernel names can be obtained through step 2.
+
diff --git a/divers/linux_create_swap_partition_01.txt b/divers/linux_create_swap_partition_01.txt
new file mode 100644
index 0000000..957b103
--- /dev/null
+++ b/divers/linux_create_swap_partition_01.txt
@@ -0,0 +1,26 @@
+# create swap partition on /dev/vdb
+###################################
+
+# create PV,VG and LV
+lsblk
+fdisk /dev/vdb
+pvs
+pvcreate /dev/vdb1
+vgcreate vgswap /dev/vdb1
+vgs
+lvs
+lvcreate -n swap -l 100%FREE vgswap
+ls /dev/mapper/vgswap-swap
+
+# format LV as swap
+mkswap /dev/mapper/vgswap-swap
+
+# add swap entry in /etc/fstab
+/dev/mapper/vgswap-swap swap swap defaults 0 0
+
+# activate swap
+swapon -va
+
+# check swap
+cat /proc/swaps
+free -h
diff --git a/divers/linux_remove_old_kernel_01.txt b/divers/linux_remove_old_kernel_01.txt
new file mode 100644
index 0000000..d272a6f
--- /dev/null
+++ b/divers/linux_remove_old_kernel_01.txt
@@ -0,0 +1,6 @@
+# remove old kernel from /boot
+# https://community.fortinet.com/t5/FortiSOAR-Knowledge-Base/Technical-Tip-Clean-up-boot-in-CentOS-RHEL-or-Rocky-Linux-8-and/ta-p/257565
+
+uname -sr
+rpm -q kernel
+dnf remove --oldinstallonly --setopt installonly_limit=2 
kernel
\ No newline at end of file
diff --git a/divers/my_root_CA_generate_certificate.md b/divers/my_root_CA_generate_certificate.md
new file mode 100644
index 0000000..7823a41
--- /dev/null
+++ b/divers/my_root_CA_generate_certificate.md
@@ -0,0 +1,96 @@
+# Issue a Server Certificate
+
+> Based on https://medium.com/@sureshchand.rhce/how-to-build-a-root-ca-intermediate-ca-with-openssl-eba1c73d1591
+
+## Create server key
+``` bash
+openssl genpkey -algorithm RSA \
+    -out exegol.swgalaxy.key.pem \
+    -pkeyopt rsa_keygen_bits:2048
+```
+
+## Create CSR with SAN
+
+Define a configuration file for the CSR `exegol.swgalaxy.cnf`:
+```
+[ req ]
+distinguished_name = req_distinguished_name
+req_extensions = req_ext
+prompt = no
+
+[ req_distinguished_name ]
+C = FR
+ST = Yvelines
+L = Le Vesinet
+O = swgalaxy
+OU = swgalaxy servers
+CN = exegol.swgalaxy
+
+[ req_ext ]
+subjectAltName = @alt_names
+
+[ alt_names ]
+DNS.1 = exegol.swgalaxy
+DNS.2 = exegol
+```
+
+Create the CSR:
+
+``` bash
+openssl req -new -key exegol.swgalaxy.key.pem \
+    -out exegol.swgalaxy.csr.pem \
+    -config exegol.swgalaxy.cnf
+```
+
+
+## Sign with Intermediate CA
+
+Update the `server_cert` extension in the **intermediate CA** configuration file `/app/pki/intermediate/openssl.cnf`:
+```
+[ server_cert ]
+# Basic identity
+subjectKeyIdentifier = hash
+authorityKeyIdentifier = keyid,issuer
+
+# Server certificates must NOT be CA certificates
+basicConstraints = critical, CA:FALSE
+
+# Key usage: what the certificate is allowed to do
+keyUsage = critical, digitalSignature, keyEncipherment
+
+# Extended key usage: define this as a TLS server certificate
+extendedKeyUsage = serverAuth
+
+# Allow SANs (modern TLS requires SANs)
+subjectAltName = @alt_names
+
+[ alt_names ]
+DNS.1 = exegol.swgalaxy
+DNS.2 = exegol
+```
+
+Sign the certificate with the **intermediate CA**:
+
+``` bash
+openssl ca -config /app/pki/intermediate/openssl.cnf \
+    -extensions server_cert \
+    -days 3650 -notext -md sha256 \
+    -in 
exegol.swgalaxy.csr.pem \ + -out /app/pki/intermediate/certs/exegol.swgalaxy.cert.pem +``` + +## Verify the chain + +``` bash +openssl verify \ + -CAfile /app/pki/intermediate/certs/ca-chain.cert.pem \ + /app/pki/intermediate/certs/exegol.swgalaxy.cert.pem +``` + +## Verify the certificate + +``` bash +openssl x509 -text -noout \ + -in /app/pki/intermediate/certs/exegol.swgalaxy.cert.pem +``` + diff --git a/divers/oracle_resource_manager_01.txt b/divers/oracle_resource_manager_01.txt new file mode 100644 index 0000000..f20b3ab --- /dev/null +++ b/divers/oracle_resource_manager_01.txt @@ -0,0 +1,2 @@ +# CPU usage limit with resource manager in Oracle +# https://smarttechways.com/2021/05/12/cpu-usage-limit-with-resource-manager-in-oracle/ \ No newline at end of file diff --git a/divers/patch_standby_first_01.txt b/divers/patch_standby_first_01.txt new file mode 100644 index 0000000..d3f209d --- /dev/null +++ b/divers/patch_standby_first_01.txt @@ -0,0 +1,178 @@ +select force_logging from v$database; + +set lines 256 pages 999 + +col MEMBER for a60 +select * from v$logfile; + +-- create standby redologs +select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; +select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; + +-- clear / drop standby redologs +select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log; +select 'ALTER DATABASE DROP STANDBY LOGFILE GROUP '||GROUP#||';' from v$standby_log; + + + + +*.audit_file_dest='/app/oracle/base/admin/ANDODRP/adump' +*.audit_trail='OS' +*.compatible='19.0.0.0' +*.control_files='/data/ANDODRP/control01.ctl' +*.db_block_size=8192 +*.db_create_file_dest='/data' +*.db_create_online_log_dest_1='/data' +*.db_name='ANDO' +*.db_recovery_file_dest_size=10G +*.db_recovery_file_dest='/reco' +*.db_unique_name='ANDODRP' +*.diagnostic_dest='/app/oracle/base/admin/ANDODRP' +*.enable_goldengate_replication=TRUE 
+*.enable_pluggable_database=FALSE +*.instance_name='ANDODRP' +*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' +*.log_archive_format='%t_%s_%r.arc' +*.open_cursors=300 +*.pga_aggregate_target=512M +*.processes=350 +*.remote_login_passwordfile='exclusive' +*.sga_max_size=3G +*.sga_target=3G +*.undo_tablespace='TS_UNDO' + + + +create spfile='/app/oracle/base/admin/ANDODRP/spfile/spfileANDODRP.ora' from pfile='/mnt/yavin4/tmp/_oracle_/tmp/ANDO.txt'; + + +/mnt/yavin4/tmp/_oracle_/tmp/bakura/listener.ora + +STATIC = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCP)(HOST = bakura)(PORT = 1600)) + ) + ) + +SID_LIST_STATIC = + (SID_LIST = + (SID_DESC = + (GLOBAL_DBNAME = ANDODRP_STATIC) + (SID_NAME = ANDODRP) + (ORACLE_HOME = /app/oracle/product/19) + ) + ) + + + + + +export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/tmp/bakura +lsnrctl start STATIC +lsnrctl status STATIC + + + + +/mnt/yavin4/tmp/_oracle_/tmp/togoria/listener.ora + +STATIC = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCP)(HOST = togoria)(PORT = 1600)) + ) + ) + +SID_LIST_STATIC = + (SID_LIST = + (SID_DESC = + (GLOBAL_DBNAME = ANDOPRD_STATIC) + (SID_NAME = ANDOPRD) + (ORACLE_HOME = /app/oracle/product/19) + ) + ) + + + + + +export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/tmp/togoria +lsnrctl start STATIC +lsnrctl status STATIC + + +connect sys/"Secret00!"@//togoria:1600/ANDOPRD_STATIC as sysdba +connect sys/"Secret00!"@//bakura:1600/ANDODRP_STATIC as sysdba + + +rman target=sys/"Secret00!"@//togoria:1600/ANDOPRD_STATIC auxiliary=sys/"Secret00!"@//bakura:1600/ANDODRP_STATIC +run { + allocate channel pri1 device type DISK; + allocate channel pri2 device type DISK; + allocate channel pri3 device type DISK; + allocate channel pri4 device type DISK; + allocate auxiliary channel aux1 device type DISK; + allocate auxiliary channel aux2 device type DISK; + allocate auxiliary channel aux3 device type DISK; + allocate auxiliary channel aux4 device type DISK; + duplicate target 
database + for standby + dorecover + from active database + nofilenamecheck + using compressed backupset section size 1G; +} + + +alter system set dg_broker_config_file1='/app/oracle/base/admin/ANDOPRD/divers/dr1ANDOPRD.dat' scope=both sid='*'; +alter system set dg_broker_config_file2='/app/oracle/base/admin/ANDOPRD/divers/dr2ANDOPRD.dat' scope=both sid='*'; +alter system set dg_broker_start=TRUE scope=both sid='*'; + +alter system set dg_broker_config_file1='/app/oracle/base/admin/ANDODRP/divers/dr1ANDODRP.dat' scope=both sid='*'; +alter system set dg_broker_config_file2='/app/oracle/base/admin/ANDODRP/divers/dr2ANDODRP.dat' scope=both sid='*'; +alter system set dg_broker_start=TRUE scope=both sid='*'; + + +dgmgrl +connect sys/"Secret00!"@//togoria:1600/ANDOPRD_STATIC + +create configuration ANDO as + primary database is ANDOPRD + connect identifier is "//togoria:1600/ANDOPRD_STATIC"; + +add database ANDODRP + as connect identifier is "//bakura:1600/ANDODRP_STATIC" + maintained as physical; + +enable configuration; +show configuration; + +edit database 'andoprd' set property ArchiveLagTarget=0; +edit database 'andoprd' set property LogArchiveMaxProcesses=2; +edit database 'andoprd' set property LogArchiveMinSucceedDest=1; +edit database 'andoprd' set property StandbyFileManagement='AUTO'; + +edit database 'andodrp' set property ArchiveLagTarget=0; +edit database 'andodrp' set property LogArchiveMaxProcesses=2; +edit database 'andodrp' set property LogArchiveMinSucceedDest=1; +edit database 'andodrp' set property StandbyFileManagement='AUTO'; + +edit database 'andoprd' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=togoria)(PORT=1600))(CONNECT_DATA=(SERVICE_NAME=ANDOPRD_STATIC)(SERVER=DEDICATED)))'; +edit database 'andodrp' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=bakura)(PORT=1600))(CONNECT_DATA=(SERVICE_NAME=ANDODRP_STATIC)(SERVER=DEDICATED)))'; + +validate database 'andoprd' +validate 
database 'andodrp'
+
+switchover to 'andodrp'
+switchover to 'andoprd'
+switchover to 'andodrp'
+
+convert database 'andodrp' to snapshot standby;
+convert database 'andodrp' to physical standby;
+
+
+
+
+
+
diff --git a/divers/purines.md b/divers/purines.md
new file mode 100644
index 0000000..b207f80
--- /dev/null
+++ b/divers/purines.md
@@ -0,0 +1,24 @@
+| Fish / Meat / Seafood            | Purines (mg/100 g) |
+|----------------------------------|--------------------|
+| Ground turkey, raw               | ~96                |
+| Cod                              | ~98                |
+| Haddock                          | ~110               |
+| Pollock                          | ~110               |
+| Hake                             | ~110               |
+| Halibut                          | ~120               |
+| Scallops                         | ~135               |
+| Sea bream                        | ~140               |
+| Sea bass                         | ~150               |
+| Ground chicken, raw              | ~158.7             |
+| Salmon                           | ~170               |
+| Trout                            | ~170               |
+| Shrimp                           | ~200               |
+| Pork                             | ~230               |
+| Beef                             | ~250               |
+| Tuna                             | ~290               |
+| Raw sardine                      | 345                |
+| Canned herring                   | 378                |
+| Cooked monkfish liver            | 398.7              |
+| Raw anchovies                    | 411                |
+| Dried sakura shrimp              | 748.9              |
+| Japanese mackerel                | 1,175              |
diff --git a/divers/random_string_bash.txt b/divers/random_string_bash.txt
new file mode 100644
index 0000000..238fb4f
--- /dev/null
+++ b/divers/random_string_bash.txt
@@ -0,0 +1,3 @@
+# generating random string in bash
+echo $RANDOM | md5sum | head -c 20; echo;
+cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20; echo;
diff --git a/divers/rocky9_nmcli_example_01.txt b/divers/rocky9_nmcli_example_01.txt
new file mode 100644
index 0000000..28d71f2
--- /dev/null
+++ b/divers/rocky9_nmcli_example_01.txt
@@ -0,0 +1,19 @@
+# Rocky 9 network interface change IP address and host name example
+###################################################################
+
+nmcli connection show
+nmcli connection show --active
+
+nmcli connection modify enp1s0 ipv4.address 192.168.0.52/24
+nmcli connection modify enp1s0 ipv4.method manual ipv6.method ignore
+nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1
+nmcli connection modify enp1s0 ipv4.dns 192.168.0.8
+nmcli connection modify enp1s0 
ipv4.dns-search swgalaxy + +nmcli connection modify enp2s0 ipv4.address 192.168.1.52/24 ipv4.method manual ipv6.method ignore + +# list host interfaces +hostname -I + +# set host name +hostnamectl hostname ithor.swgalaxy diff --git a/divers/screen_command.md b/divers/screen_command.md new file mode 100644 index 0000000..1896f33 --- /dev/null +++ b/divers/screen_command.md @@ -0,0 +1,8 @@ +## Screen configuration + +Configuration file `~/.screenrc`: + + termcapinfo xterm* ti@:te@ + caption always + caption string "%{= bW}%3n %{y}%t %{-}%= %{m}%H%?%{-} -- %{c}%l%?%{-} -- %D %M %d %{y}%c" + diff --git a/divers/split_string_in_words_01.sql b/divers/split_string_in_words_01.sql new file mode 100644 index 0000000..aac3e2e --- /dev/null +++ b/divers/split_string_in_words_01.sql @@ -0,0 +1,34 @@ +/* + vplesnlia: split input string in words +*/ + + +DECLARE + TYPE v_arr IS + VARRAY(100) OF VARCHAR2(60); + var v_arr; + return_value VARCHAR2(60); +BEGIN + var := v_arr(); + FOR c1 IN ( + SELECT + regexp_substr( + '&&1', '[^ ]+', 1, level + ) AS string_parts + FROM + dual + CONNECT BY + regexp_substr( + '&&1', '[^ ]+', 1, level + ) IS NOT NULL + ) LOOP + var.extend; + var(var.last) := c1.string_parts; + END LOOP; + + FOR i IN var.first..var.last LOOP + return_value := var(i); + dbms_output.put_line(return_value); + END LOOP; + +END; diff --git a/divers/sql_analytic_01.txt b/divers/sql_analytic_01.txt new file mode 100644 index 0000000..51ce942 --- /dev/null +++ b/divers/sql_analytic_01.txt @@ -0,0 +1,236 @@ +https://livesql.oracle.com/apex/livesql/file/tutorial_GNRYA4548AQNXC0S04DXVEV08.html +https://oracle-base.com/articles/misc/rank-dense-rank-first-last-analytic-functions#rank + +drop table CARS purge; +create table CARS ( + id INTEGER GENERATED ALWAYS AS IDENTITY + ,brand VARCHAR2(15) not null + ,model VARCHAR2(10) not null + ,year NUMBER(4) not null + ,color VARCHAR2(10) not null + ,category VARCHAR2(12) not null + ,price NUMBER not null + ,power NUMBER(4) not null + ,fuel 
VARCHAR2(8) not null +) +; + +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','A4','2001','gray','city','5400','150','SP'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','A6','2012','gray','limousine','12000','204','DIESEL'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 4','2020','white','sport','16000','240','SP'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','X6','2018','blue','SUV','15000','280','DIESEL'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Volkswagen','Polo','2014','gray','city','4800','90','DIESEL'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Arkana','2023','green','SUV','35000','220','ELECTRIC'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Porche','Cayenne','2021','black','SUV','41000','280','SP'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Tesla','Model 3','2023','black','city','30500','250','ELECTRIC'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Tesla','Model 3','2023','white','city','30500','250','ELECTRIC'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Tesla','Model 3','2022','black','city','24000','250','ELECTRIC'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','A4','2022','red','city','26000','200','SP'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Audi','Q5','2021','gray','SUV','38000','260','SP'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 3','2022','white','city','46000','240','ELECTRIC'); +Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 
3','2023','white','city','44000','240','ELECTRIC');
+Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('BMW','Serie 3','2021','white','city','42000','240','ELECTRIC');
+Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Clio','2019','black','city','8900','110','SP');
+Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Clio','2020','black','city','9600','110','SP');
+Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Twingo','2019','red','city','7800','90','SP');
+Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Renault','Twingo','2022','green','city','9200','90','SP');
+Insert into POC.CARS (BRAND,MODEL,YEAR,COLOR,CATEGORY,PRICE,POWER,FUEL) values ('Porche','911','2022','gray','sport','61000','310','SP');
+
+commit;
+
+
+-- display cars and total cars count
+select
+    c.*
+    ,count(*) over() as Total_count
+from
+    CARS c
+;
+
+-- display cars and the number of cars by brand
+select
+    c.*
+    ,count(*) over(partition by (brand)) as Brand_count
+from
+    CARS c
+;
+
+
+-- number of cars and sum of prices grouped by color
+select color, count(*), sum(price)
+from CARS
+group by color;
+
+-- integrating the last group by query as an analytic
+-- adding the aggregates "inline" on each row
+select
+    c.*
+    ,count(*) over(partition by (color)) as count_by_color
+    ,sum(price) over(partition by (color)) as SUM_price_by_color
+from
+    CARS c
+;
+
+
+
+-- average price by category
+select CATEGORY, avg(price)
+from CARS
+group by CATEGORY;
+
+-- for each car, the price as a percentage of the average price of its category
+select
+    c.*
+    ,100*c.price/avg(c.price) over (partition by (category)) Price_by_avg_category_PERCENT
+from
+    CARS c
+;
+
+
+-- ORDER BY in an analytic: running computation from the FIRST key up to the CURRENT key
+select b.*,
+       count(*) over (
+         order by brick_id
+       
) running_total, + sum ( weight ) over ( + order by brick_id + ) running_weight +from bricks b; + + + BRICK_ID COLOUR SHAPE WEIGHT RUNNING_TOTAL RUNNING_WEIGHT +---------- ---------- ---------- ---------- ------------- -------------- + 1 blue cube 1 1 1 + 2 blue pyramid 2 2 3 + 3 red cube 1 3 4 + 4 red cube 2 4 6 + 5 red pyramid 3 5 9 + 6 green pyramid 1 6 10 + +6 rows selected. + + +select + c.* + ,sum(c.price) over (order by c.id) +from + cars c; + + + + ID BRAND MODEL YEAR COLOR CATEGORY PRICE POWER FUEL SUM(C.PRICE)OVER(ORDERBYC.ID) +---------- --------------- ---------- ---------- ---------- ------------ ---------- ---------- -------- ----------------------------- + 1 Audi A4 2001 gray city 5400 150 SP 5400 + 2 Audi A6 2012 gray limousine 12000 204 DIESEL 17400 + 3 BMW Serie 4 2020 white sport 16000 240 SP 33400 + 4 BMW X6 2018 blue SUV 15000 280 DIESEL 48400 + 5 Volkswagen Polo 2014 gray city 4800 90 DIESEL 53200 + 6 Renault Arkana 2023 green SUV 35000 220 ELECTRIC 88200 + 7 Porche Cayenne 2021 black SUV 41000 280 SP 129200 + 8 Tesla Model 3 2023 black city 30500 250 ELECTRIC 159700 + 9 Tesla Model 3 2023 white city 30500 250 ELECTRIC 190200 + 10 Tesla Model 3 2022 black city 24000 250 ELECTRIC 214200 + 11 Audi A4 2022 red city 26000 200 SP 240200 + 12 Audi Q5 2021 gray SUV 38000 260 SP 278200 + 13 BMW Serie 3 2022 white city 46000 240 ELECTRIC 324200 + 14 BMW Serie 3 2023 white city 44000 240 ELECTRIC 368200 + 15 BMW Serie 3 2021 white city 42000 240 ELECTRIC 410200 + 16 Renault Clio 2019 black city 8900 110 SP 419100 + 17 Renault Clio 2020 black city 9600 110 SP 428700 + 18 Renault Twingo 2019 red city 7800 90 SP 436500 + 19 Renault Twingo 2022 green city 9200 90 SP 445700 + 20 Porche 911 2022 gray sport 61000 310 SP 506700 + +20 rows selected. 
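Outside the database, the same running count / running sum logic can be sanity-checked with a short awk sketch (toy data: the brick_id/weight pairs from the bricks listing above, already ordered by brick_id):

```shell
# brick_id and weight pairs, ordered by brick_id;
# NR is the running row count, wt accumulates the running weight
printf '%s\n' '1 1' '2 2' '3 1' '4 2' '5 3' '6 1' |
awk '{ wt += $2; print $1, NR, wt }'
# last line: 6 6 10 (RUNNING_TOTAL=6, RUNNING_WEIGHT=10, as in the bricks output)
```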
+ + +-- adding PARTITION by EXPR will "group by EXPR" and reset FIRST key for each group +select + c.* + ,sum(c.price) over (partition by brand order by c.id) +from + cars c; + + + ID BRAND MODEL YEAR COLOR CATEGORY PRICE POWER FUEL SUM(C.PRICE)OVER(PARTITIONBYBRANDORDERBYC.ID) +---------- --------------- ---------- ---------- ---------- ------------ ---------- ---------- -------- --------------------------------------------- + 1 Audi A4 2001 gray city 5400 150 SP 5400 + 2 Audi A6 2012 gray limousine 12000 204 DIESEL 17400 + 11 Audi A4 2022 red city 26000 200 SP 43400 + 12 Audi Q5 2021 gray SUV 38000 260 SP 81400 + 3 BMW Serie 4 2020 white sport 16000 240 SP 16000 + 4 BMW X6 2018 blue SUV 15000 280 DIESEL 31000 + 13 BMW Serie 3 2022 white city 46000 240 ELECTRIC 77000 + 14 BMW Serie 3 2023 white city 44000 240 ELECTRIC 121000 + 15 BMW Serie 3 2021 white city 42000 240 ELECTRIC 163000 + 7 Porche Cayenne 2021 black SUV 41000 280 SP 41000 + 20 Porche 911 2022 gray sport 61000 310 SP 102000 + 6 Renault Arkana 2023 green SUV 35000 220 ELECTRIC 35000 + 16 Renault Clio 2019 black city 8900 110 SP 43900 + 17 Renault Clio 2020 black city 9600 110 SP 53500 + 18 Renault Twingo 2019 red city 7800 90 SP 61300 + 19 Renault Twingo 2022 green city 9200 90 SP 70500 + 8 Tesla Model 3 2023 black city 30500 250 ELECTRIC 30500 + 9 Tesla Model 3 2023 white city 30500 250 ELECTRIC 61000 + 10 Tesla Model 3 2022 black city 24000 250 ELECTRIC 85000 + 5 Volkswagen Polo 2014 gray city 4800 90 DIESEL 4800 + +20 rows selected. 
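The PARTITION BY reset can be mimicked in plain awk by zeroing the accumulator whenever the group key changes; a sketch over a few brand/price pairs taken from the CARS rows above (input must be pre-sorted by the partition key):

```shell
# brand and price, pre-sorted by brand; sum resets when column 1 changes
printf '%s\n' 'Audi 5400' 'Audi 12000' 'BMW 16000' 'BMW 15000' 'Tesla 30500' |
awk '$1 != prev { sum = 0; prev = $1 } { sum += $2; print $1, sum }'
# prints: Audi 5400 / Audi 17400 / BMW 16000 / BMW 31000 / Tesla 30500
```

The per-brand running sums match the output above (Audi 5400 then 17400, BMW 16000 then 31000).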
+
+
+
+-- when the keys of ORDER BY are not distinct, the analytic function over (order by KEY) keeps the same value for all rows sharing the same KEY value
+-- to force the computation to run from the first row up to the current row, add: rows between unbounded preceding and current row
+
+
+
+select b.*,
+       count(*) over (
+         order by weight
+       ) running_total,
+       sum ( weight ) over (
+         order by weight
+       ) running_weight
+from bricks b
+order by weight;
+
+
+  BRICK_ID COLOUR     SHAPE          WEIGHT RUNNING_TOTAL RUNNING_WEIGHT
+---------- ---------- ---------- ---------- ------------- --------------
+         1 blue       cube                1             3              3
+         3 red        cube                1             3              3
+         6 green      pyramid             1             3              3
+         4 red        cube                2             5              7
+         2 blue       pyramid             2             5              7
+         5 red        pyramid             3             6             10
+
+
+select b.*,
+       count(*) over (
+         order by weight
+         rows between unbounded preceding and current row
+       ) running_total,
+       sum ( weight ) over (
+         order by weight
+         rows between unbounded preceding and current row
+       ) running_weight
+from bricks b
+order by weight;
+
+
+
+  BRICK_ID COLOUR     SHAPE          WEIGHT RUNNING_TOTAL RUNNING_WEIGHT
+---------- ---------- ---------- ---------- ------------- --------------
+         1 blue       cube                1             1              1
+         3 red        cube                1             2              2
+         6 green      pyramid             1             3              3
+         4 red        cube                2             4              5
+         2 blue       pyramid             2             5              7
+         5 red        pyramid             3             6             10
+
+6 rows selected.
diff --git a/divers/swingbench_01.md b/divers/swingbench_01.md
new file mode 100644
index 0000000..e51880a
--- /dev/null
+++ b/divers/swingbench_01.md
@@ -0,0 +1,18 @@
+Setup (schema creation).
+This will create the SOE schema with password *secret* in the PDB YODA, connecting as DBA user "admin as sysdba".
+
+    ./oewizard -v -cl -create \
+      -cs wayland/YODA -u soe -p secret \
+      -scale 1 -tc 2 -dba "admin as sysdba" -dbap "Secret00!"
\ + -ts ts_swingbench + +Check: + + ./sbutil -soe -cs wayland/YODA -soe -u soe -p secret -val + +Run benchmark: + + ./charbench -c ../configs/SOE_Server_Side_V2.xml \ + -u soe -p secret -uc 5 -cs wayland/YODA \ + -min 0 -max 10 -intermin 200 -intermax 500 -mt 5000 -mr -v users,tpm,tps,errs,vresp + diff --git a/divers/tanel_update.txt b/divers/tanel_update.txt new file mode 100644 index 0000000..976d1c0 --- /dev/null +++ b/divers/tanel_update.txt @@ -0,0 +1,21 @@ + delete mode 100644 tpt/ash/ash_wait_chains2.sql + create mode 100644 tpt/ash/cashtop.sql + delete mode 100644 tpt/ash/dash_wait_chains2.sql + create mode 100644 tpt/ash/dashtopsum.sql + create mode 100644 tpt/ash/dashtopsum_pga.sql + delete mode 100644 tpt/ash/example_ash_report.html + create mode 100644 tpt/ash/sqlexec_duration_buckets.sql + create mode 100644 tpt/awr/awr_sqlid_binds.sql + create mode 100644 tpt/awr/perfhub.html + create mode 100644 tpt/create_sql_baseline_awr.sql + create mode 100644 tpt/descpartxx.sql + create mode 100644 tpt/descxx11.sql + create mode 100644 tpt/lpstat.sql + create mode 100644 tpt/netstat.sql + create mode 100644 tpt/netstat2.sql + create mode 100644 tpt/npstat.sql + create mode 100644 tpt/oerrh.sql + create mode 100644 tpt/oerrign.sql + create mode 100644 tpt/setup/grant_snapper_privs.sql + create mode 100644 tpt/setup/logon_trigger_ospid.sql + create mode 100644 tpt/tabhisthybrid.sql diff --git a/divers/timescaledb_01.txt b/divers/timescaledb_01.txt new file mode 100644 index 0000000..5940527 --- /dev/null +++ b/divers/timescaledb_01.txt @@ -0,0 +1,231 @@ +CREATE TABLE t ( + id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, + i INTEGER, + c VARCHAR(30), + ts TIMESTAMP +); + +INSERT INTO t (i, c, ts) +SELECT + (random() * 9999 + 1)::int AS i, + md5(random()::text)::varchar(30) AS c, + ( + timestamp '2000-01-01' + + random() * (timestamp '2025-12-31' - timestamp '2000-01-01') + ) AS ts +FROM generate_series(1, 200000000); + + +-- export standard table to CSV +COPY 
t +TO '/mnt/unprotected/tmp/postgres/t.csv' +DELIMITER ',' +CSV HEADER; + +-- import standard table from CSV + +CREATE TABLE t ( + id INTEGER, + i INTEGER, + c TEXT, + ts TIMESTAMPTZ +); + +COPY t +FROM '/mnt/unprotected/tmp/postgres/t.csv' +DELIMITER ',' +CSV HEADER; + +CREATE INDEX IF NOT EXISTS T_TS ON T (TS); + + +------------ +-- Oracle -- +------------ + +CREATE TABLE t ( + id INTEGER, + i INTEGER, + c VARCHAR2(30), + ts TIMESTAMP +); + + + +-- file t.ctl + +LOAD DATA +INFILE 't.csv' +INTO TABLE t +APPEND +FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' +TRAILING NULLCOLS +( + id INTEGER EXTERNAL, + i INTEGER EXTERNAL, + c CHAR(30), + ts TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF" +) + +sqlldr "'/ as sysdba'" \ + control=t.ctl \ + log=t.log \ + bad=t.bad \ + rows=50000 + + +------------------ +-- TimescaleDB -- +------------------ + +Install & config from sources: + https://www.tigerdata.com/docs/self-hosted/latest/install/installation-source + +CREATE TABLE ht ( + id INTEGER, + i INTEGER, + c TEXT, + ts TIMESTAMPTZ +); + +SELECT create_hypertable( + 'ht', -- table name + 'ts', -- time column + chunk_time_interval => INTERVAL '1 month' +); + +SELECT add_retention_policy( + 'ht', + INTERVAL '25 years' +); + +SELECT * FROM timescaledb_information.jobs +WHERE proc_name = 'policy_retention'; + +SELECT alter_job( + job_id => , + schedule_interval => INTERVAL '6 hours' +); + +timescaledb-parallel-copy --connection "postgres://postgres@localhost/db01" --table ht --file '/mnt/unprotected/tmp/postgres/t.csv' \ + --workers 16 --reporting-period 30s -skip-header + + +SELECT show_chunks('t'); + +----------- +-- Bench -- +----------- + +-- q1 +select * from t where ts between timestamp'2015-04-01:09:00:00' and timestamp'2015-04-01:09:00:20'; + +-- q2 +select count(*) from t; + + + + +Classic PostgreSQL + +Table load: 5 min + +q1: 52 sec +q2: 45 sec + + + + +TimescaleDB + +Table load: 5 min + + +db01=# SELECT pg_size_pretty(pg_total_relation_size('public.t')); + 
pg_size_pretty +---------------- + 18 GB +(1 row) + +db01=# SELECT pg_size_pretty(hypertable_size('public.ht')); + pg_size_pretty +---------------- + 19 GB +(1 row) + + +ALTER TABLE ht +SET ( + timescaledb.compress +); + + +SELECT add_compression_policy( + 'ht', + INTERVAL '2 years' +); + +SELECT job_id +FROM timescaledb_information.jobs +WHERE proc_name = 'policy_compression' + AND hypertable_name = 'ht'; + +CALL run_job(1002); + + +SELECT + chunk_schema || '.' || chunk_name AS chunk, + is_compressed, + range_start, + range_end +FROM timescaledb_information.chunks +WHERE hypertable_name = 'ht' +ORDER BY range_start; + + + + +----------------------------------------- + +CREATE MATERIALIZED VIEW ht_hourly_avg +WITH (timescaledb.continuous) AS +SELECT + time_bucket('1 hour', ts) AS bucket, + AVG(i) AS avg_i +FROM ht +GROUP BY bucket; + +SELECT add_continuous_aggregate_policy('ht_hourly_avg', + start_offset => INTERVAL '2 days', + end_offset => INTERVAL '0 hours', + schedule_interval => INTERVAL '5 minutes' +); + +SELECT add_continuous_aggregate_policy('ht_hourly_avg', + start_offset => INTERVAL '7 days', + end_offset => INTERVAL '0 hours', + schedule_interval => INTERVAL '30 minutes' +); + + + +SELECT * +FROM ht_hourly_avg +WHERE bucket >= now() - INTERVAL '7 days' +ORDER BY bucket; + + + +SELECT job_id, proc_name, config +FROM timescaledb_information.jobs; + + +SELECT pid, query, state, backend_type +FROM pg_stat_activity +WHERE query LIKE '%run_job%' + AND query LIKE '%' || || '%'; + + + + + diff --git a/divers/tiny_root_CA_01.md b/divers/tiny_root_CA_01.md new file mode 100644 index 0000000..7e9e717 --- /dev/null +++ b/divers/tiny_root_CA_01.md @@ -0,0 +1,41 @@ +> Based on article https://www.baeldung.com/openssl-self-signed-cert + +## Build a home made root CA + + mkdir -p /app/CA + cd /app/CA + +Create rootCA private key: + + openssl genrsa -des3 -out rootCA.key 4096 + +Create rootCA certificate: + + openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 
7300 -out rootCA.pem
+
+
+## Generate a root-CA-signed certificate for a client
+
+Client private key:
+
+    openssl genrsa -out raxus.swgalaxy.key 2048
+
+Client certificate signing request:
+
+    openssl req -new -key raxus.swgalaxy.key -out raxus.swgalaxy.csr
+
+The root CA creates a signed certificate from the certificate signing request:
+
+    openssl x509 -req -CA rootCA.pem -CAkey rootCA.key -in raxus.swgalaxy.csr -out raxus.swgalaxy.crt -days 365 -CAcreateserial
+
+Optionally create the full chain:
+
+    cat raxus.swgalaxy.crt rootCA.pem > raxus.swgalaxy.fullchain.crt
+
+Optionally create an export to be imported into an Oracle wallet:
+
+    openssl pkcs12 -export \
+      -in raxus.swgalaxy.crt \
+      -inkey raxus.swgalaxy.key \
+      -certfile rootCA.pem \
+      -out raxus.swgalaxy.p12
\ No newline at end of file
diff --git a/divers/use_cp_to_copy_hidden_files_01.md b/divers/use_cp_to_copy_hidden_files_01.md
new file mode 100644
index 0000000..2a33638
--- /dev/null
+++ b/divers/use_cp_to_copy_hidden_files_01.md
@@ -0,0 +1,9 @@
+== Use cp to copy hidden files
+
+    cp -r from/.[^.]* to/
+
+Example:
+
+    cd /root
+    cp -r ./.[^.]* /mnt/unprotected/tmp/reinstall_coruscant/dom0/slash_root/
+
diff --git a/divers/windows_11_auto_login_01 b/divers/windows_11_auto_login_01
new file mode 100644
index 0000000..7c3b88a
--- /dev/null
+++ b/divers/windows_11_auto_login_01
@@ -0,0 +1,8 @@
+# create local admin user
+net user vplesnila secret /add
+net localgroup administrators vplesnila /add
+
+# setup autologin
+REG ADD "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
+REG ADD "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d vplesnila /f
+REG ADD "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d secret /f
diff --git a/divers/windows_11_create_local_admin_01.txt b/divers/windows_11_create_local_admin_01.txt
new file mode 100644
index 0000000..4fee5d6
---
/dev/null +++ b/divers/windows_11_create_local_admin_01.txt @@ -0,0 +1,3 @@ +net user USER-NAME PASSWORD /add +net localgroup administrators USER-ACCOUNT /add + diff --git a/divers/xtts_non-cdb_to_cdb_01.md b/divers/xtts_non-cdb_to_cdb_01.md new file mode 100644 index 0000000..b872023 --- /dev/null +++ b/divers/xtts_non-cdb_to_cdb_01.md @@ -0,0 +1,555 @@ +## Context + +- Source: non-CDB = GREEDO@rodia-scan +- Target: PDB = REEK, CDB=AERONPRD@ylesia-scan + +## Setup + +Create tablespaces and users: + +``` +create tablespace TS1 datafile size 16M autoextend on next 16M; +create tablespace TS2 datafile size 16M autoextend on next 16M; +create tablespace TS3 datafile size 16M autoextend on next 16M; + +alter tablespace TS1 add datafile size 16M autoextend on next 16M; +alter tablespace TS1 add datafile size 16M autoextend on next 16M; +alter tablespace TS2 add datafile size 16M autoextend on next 16M; +alter tablespace TS3 add datafile size 16M autoextend on next 16M; +alter tablespace TS3 add datafile size 16M autoextend on next 16M; +alter tablespace TS3 add datafile size 16M autoextend on next 16M; + +create user U1 identified by secret; +grant connect, resource, create view,create job to U1; +alter user U1 quota unlimited on TS1; +alter user U1 quota unlimited on TS2; +alter user U1 quota unlimited on TS3; + +create user U2 identified by secret; +grant connect, resource, create view,create job to U2; +alter user U2 quota unlimited on TS1; +alter user U2 quota unlimited on TS2; +alter user U2 quota unlimited on TS3; +``` + +For each user, create objects: + + connect U1/secret + -- create objcts + connect U2/secret + -- create objcts + +Create objects script: + +``` +-- TABLE 1 dans TS1 +CREATE TABLE table1_ts1 ( + id NUMBER PRIMARY KEY, + data VARCHAR2(100), + created_at DATE DEFAULT SYSDATE +) TABLESPACE TS1; + +CREATE SEQUENCE table1_seq + START WITH 1 + INCREMENT BY 1 + NOCACHE + NOCYCLE; + +CREATE OR REPLACE TRIGGER trg_table1_id +BEFORE INSERT ON table1_ts1 
+FOR EACH ROW +BEGIN + IF :NEW.id IS NULL THEN + SELECT table1_seq.NEXTVAL INTO :NEW.id FROM dual; + END IF; +END; +/ + +-- TABLE 2 dans TS2 +CREATE TABLE table2_ts2 ( + id NUMBER PRIMARY KEY, + data VARCHAR2(100), + updated_at DATE +) TABLESPACE TS2; + +CREATE SEQUENCE table2_seq + START WITH 1 + INCREMENT BY 1 + NOCACHE + NOCYCLE; + +CREATE OR REPLACE TRIGGER trg_table2_id +BEFORE INSERT ON table2_ts2 +FOR EACH ROW +BEGIN + IF :NEW.id IS NULL THEN + SELECT table2_seq.NEXTVAL INTO :NEW.id FROM dual; + END IF; +END; +/ + +-- TABLE 3 dans TS3 +CREATE TABLE table3_ts3 ( + id NUMBER PRIMARY KEY, + info VARCHAR2(100), + status VARCHAR2(20) +) TABLESPACE TS3; + +CREATE SEQUENCE table3_seq + START WITH 1 + INCREMENT BY 1 + NOCACHE + NOCYCLE; + +CREATE OR REPLACE TRIGGER trg_table3_id +BEFORE INSERT ON table3_ts3 +FOR EACH ROW +BEGIN + IF :NEW.id IS NULL THEN + SELECT table3_seq.NEXTVAL INTO :NEW.id FROM dual; + END IF; +END; +/ + + +CREATE OR REPLACE VIEW combined_view AS +SELECT id, data, created_at, NULL AS updated_at, NULL AS status FROM table1_ts1 +UNION ALL +SELECT id, data, updated_at, NULL AS created_at, NULL AS status FROM table2_ts2 +UNION ALL +SELECT id, info AS data, NULL, NULL, status FROM table3_ts3; + + +CREATE OR REPLACE PACKAGE data_ops AS + PROCEDURE insert_random_data; + PROCEDURE update_random_data; + PROCEDURE delete_random_data; +END data_ops; +/ + +CREATE OR REPLACE PACKAGE BODY data_ops AS + PROCEDURE insert_random_data IS + BEGIN + FOR i IN 1..10 LOOP + INSERT INTO table1_ts1 (data) + VALUES (DBMS_RANDOM.STRING('A', 10)); + END LOOP; + + FOR i IN 1..3 LOOP + INSERT INTO table3_ts3 (info, status) + VALUES (DBMS_RANDOM.STRING('A', 10), 'NEW'); + END LOOP; + END; + + PROCEDURE update_random_data IS + BEGIN + FOR i IN 1..7 LOOP + INSERT INTO table2_ts2 (data) + VALUES (DBMS_RANDOM.STRING('A', 10)); + END LOOP; + FOR rec IN ( + SELECT id FROM ( + SELECT id FROM table2_ts2 ORDER BY DBMS_RANDOM.VALUE + ) WHERE ROWNUM <= 5 + ) LOOP + UPDATE table2_ts2 + 
SET data = DBMS_RANDOM.STRING('A', 10), updated_at = SYSDATE + WHERE id = rec.id; + END LOOP; + END; + + PROCEDURE delete_random_data IS + BEGIN + FOR rec IN ( + SELECT id FROM ( + SELECT id FROM table3_ts3 ORDER BY DBMS_RANDOM.VALUE + ) WHERE ROWNUM <= 2 + ) LOOP + DELETE FROM table3_ts3 WHERE id = rec.id; + END LOOP; + END; +END data_ops; +/ +``` + +Create job to run every 1 minute: + +``` +BEGIN + DBMS_SCHEDULER.CREATE_JOB ( + job_name => 'random_ops_job', + job_type => 'PLSQL_BLOCK', + job_action => ' + BEGIN + data_ops.insert_random_data; + data_ops.update_random_data; + data_ops.delete_random_data; + END;', + start_date => SYSTIMESTAMP, + repeat_interval => 'FREQ=MINUTELY; INTERVAL=1', + enabled => TRUE, + comments => 'Job to insert, update and delete random data every minute.' + ); +END; +/ +``` + +To restart the job: + +``` +--Restart the job +BEGIN + DBMS_SCHEDULER.enable('random_ops_job'); +END; +/ +``` + +Count the lines in tables: + +``` +select + 'u1.table1_ts1:'||count(*) from u1.table1_ts1 +union select + 'u1.table2_ts2:'||count(*) from u1.table2_ts2 +union select + 'u1.table3_ts3:'||count(*) from u1.table3_ts3 +union select + 'u2.table1_ts1:'||count(*) from u2.table1_ts1 +union select + 'u2.table2_ts2:'||count(*) from u2.table2_ts2 +union select + 'u2.table3_ts3:'||count(*) from u2.table3_ts3 +order by 1 asc +/ +``` + +To ensure the automatic opening of PDB, create a service to start automatically in the PDB: + + srvctl add service -s adm_reek -db AERONPRD -preferred AERONPRD1,AERONPRD2,AERONPRD3 -pdb REEK -role PRIMARY + srvctl start service -s adm_reek -db AERONPRD + + +## XTTS + +> Note MOS: V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1) + +### Initial setup + +Identify tablespaces to transport, probably all non "administrative" tablespaces: + +``` +select + listagg(tablespace_name, ',') +within group + (order by tablespace_name) as non_sys_ts +from + dba_tablespaces +where + contents not in 
('UNDO','TEMPORARY') and
+  tablespace_name not in ('SYSTEM','SYSAUX');
+```
+
+For the source and target servers, define the folders to be used for scripts, backupsets, datapump dumps, etc.
+In our case, that will be a shared NFS folder `/mnt/unprotected/tmp/oracle/xtts`.
+
+> The size of the folder should be greater than the size of the full database.
+
+Unzip the xtts scripts:
+
+    cd /mnt/unprotected/tmp/oracle/xtts
+    unzip /mnt/yavin4/kit/Oracle/XTTS/rman_xttconvert_VER4.3.zip
+
+Configure the xtt.properties file:
+
+```
+tablespaces=TS1,TS2,TS3,USERS
+src_scratch_location=/mnt/unprotected/tmp/oracle/xtts/scratch
+dest_datafile_location=+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts/
+dest_scratch_location=/mnt/unprotected/tmp/oracle/xtts/scratch
+asm_home=/app/oracle/grid/product/19
+asm_sid=+ASM1
+destconnstr=sys/"Secret00!"@ylesia-scan/adm_reek
+usermantransport=1
+```
+
+On the target server, create the ASM directory where the datafiles will be restored:
+
+    mkdir +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts
+
+On **both source and target** servers, set the `TMPDIR` environment variable to the path of the xtts scripts:
+
+    export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
+
+### Prepare Phase
+
+This step corresponds to the initial full backup of the source database and its restore on the target system.
+
+Initial backup on the source server:
+
+```
+export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
+cd $TMPDIR
+$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3
+```
+
+Initial restore on the target server:
+
+```
+export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
+cd $TMPDIR
+$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3
+```
+
+> The `debug` argument is optional.
+
+### Roll Forward Phase
+
+As long as necessary, we can do incremental backup/restore operations.
+
+> New datafiles added to the source database are automatically handled by this step.
+
+The commands are exactly the same (with or without debug mode).
+
+
+For backup:
+
+```
+export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
+cd $TMPDIR
+$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
+```
+
+For restore:
+
+```
+export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
+cd $TMPDIR
+$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
+```
+
+> Running successive backup or successive restore operations does not pose a problem.
+
+### Final Incremental Backup
+
+On the **source** database, put the tablespaces in **read-only** mode:
+
+```
+select
+  'alter tablespace '||tablespace_name||' read only;' as COMMAND
+from
+  dba_tablespaces
+where
+  contents not in ('UNDO','TEMPORARY') and
+  tablespace_name not in ('SYSTEM','SYSAUX');
+```
+
+Check:
+
+```
+select distinct status
+from
+  dba_tablespaces
+where
+  contents not in ('UNDO','TEMPORARY') and
+  tablespace_name not in ('SYSTEM','SYSAUX');
+```
+
+Take the final incremental backup:
+
+```
+export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
+cd $TMPDIR
+$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
+```
+
+Restore the final incremental backup:
+
+```
+export TMPDIR=/mnt/unprotected/tmp/oracle/xtts
+cd $TMPDIR
+$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
+```
+
+### Metadata export
+
+Create a DATAPUMP directory on **both** the source and destination databases.
+On the source (non-CDB):
+
+    SQL> create or replace directory XTTS as '/mnt/unprotected/tmp/oracle/xtts';
+
+On the destination (PDB):
+
+    export ORACLE_PDB_SID=REEK
+    SQL> create or replace directory XTTS as '/mnt/unprotected/tmp/oracle/xtts';
+
+Export the metadata:
+
+    expdp userid="'/ as sysdba'" dumpfile=XTTS:metadata.dmp logfile=XTTS:metadata.log FULL=y TRANSPORTABLE=always
+
+### Optionally: on target, put the target datafiles read-only at OS level
+
+Identify the OMF target datafiles:
+
+```
+asmcmd -p
+cd +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts
+ls --permission
+```
+
+For each datafile, set read-only permissions, for example:
+
+    chmod 444 +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts/*
+
+If you get:
+
+    ORA-15304: operation requires ACCESS_CONTROL.ENABLED attribute to be TRUE (DBD ERROR: OCIStmtExecute)
+
+then set the following diskgroup attributes and retry.
+
+```
+column dg_name format a20
+column name format a50
+column VALUE format a30
+
+set lines 120
+
+select
+  dg.name dg_name, attr.name, attr.value
+from
+  v$asm_attribute attr
+  join v$asm_diskgroup dg on attr.group_number=dg.group_number
+where
+  attr.name in ('compatible.rdbms','access_control.enabled')
+order by dg.name, attr.name
+/
+
+
+alter diskgroup DATA set attribute 'compatible.rdbms' = '19.0.0.0.0';
+alter diskgroup RECO set attribute 'compatible.rdbms' = '19.0.0.0.0';
+
+alter diskgroup DATA set attribute 'access_control.enabled' = 'TRUE';
+alter diskgroup RECO set attribute 'access_control.enabled' = 'TRUE';
+```
+
+> Compare the number of datafiles transported with the number of datafiles of the non-Oracle tablespaces.
+> Check whether the transported tablespaces already exist on the target database.
+
+### Metadata import and tablespace plug-in
+
+Create the impdp parfile `impo_metadata.par`:
+
+```
+userid="/ as sysdba"
+dumpfile=XTTS:metadata.dmp
+logfile=XTTS:impo_metadata.log
+transport_datafiles=
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.290.1205059373,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.291.1205059373,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.298.1205060113,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS1.289.1205059373,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS2.293.1205059375,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS2.300.1205060113,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS2.292.1205059375,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.294.1205059381,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.295.1205059381,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.296.1205059381,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.297.1205059381,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/TS3.299.1205060113,
++DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/USERS.302.1205084171
+```
+
+Run the import:
+
+    impdp parfile=impo_metadata.par
+
+
+Bounce the PDB (or the CDB); otherwise we can get errors like:
+
+```
+ORA-01114: IO error writing block to file 33 (block # 1)
+ORA-01110: data file 33:
+'+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/ts1.298.1205060113'
+ORA-27009: cannot write to file opened for read
+```
+
+Put the plugged-in tablespaces in read/write mode:
+
+```
+select
+  'alter tablespace '||tablespace_name||' read write;' as COMMAND
+from
+  dba_tablespaces
+where
+  contents not in ('UNDO','TEMPORARY') and
+  tablespace_name not in ('SYSTEM','SYSAUX');
+```
+
+Remove the aliases in order to use only OMF datafiles:
+
+```
+cd +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts
+rmalias ts1_8.dbf ts2_13.dbf... .... ...
+cd ..
+rm -rf xtts
+```
+
+## Unexpected issues
+
+In the metadata import step I realized I had forgotten to include the USERS tablespace in `xtt.properties`, and impdp failed with the error:
+
+    ORA-39352: Wrong number of TRANSPORT_DATAFILES specified: expected 13, received 12
+
+Since the USERS tablespace was in read-only mode, I copied the datafile manually to the target database.
+
+Identify the file number:
+
+```
+SQL> select FILE_ID from dba_data_files where TABLESPACE_NAME='USERS';
+
+   FILE_ID
+----------
+         7
+```
+
+Back up the datafile on the source:
+
+```
+run{
+  set nocfau;
+  backup datafile 7 format '/mnt/unprotected/tmp/oracle/xtts/%d_%U_%s_%t.bck';
+}
+```
+
+Restore the datafile on the target:
+
+```
+run {
+  restore from platform 'Linux x86 64-bit'
+  foreign datafile 7 format '+DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/xtts//USERS.dbf'
+  from backupset '/mnt/unprotected/tmp/oracle/xtts/GREEDO_0i3t87ss_18_1_1_18_1205084060.bck';
+}
+```
+
+Put the datafile in read-only mode at ASM level:
+
+    chmod 444 +DATA/AERONPRD/389011A6CB11A654E0635000A8C07D80/DATAFILE/USERS.302.1205084171
+
+Run the impdp again.
+
+
+## Troubleshooting
+
+Having the datafiles to plug in read-only at ASM level allows repeating the impdp operation as many times as necessary.
+For example, to completely re-execute the metadata impdp from the initial conditions:
+ - drop the newly plugged-in tablespaces
+ - drop the non-Oracle-maintained users
+ - run the metadata impdp again
+
+```
+drop tablespace TS1 including contents;
+drop tablespace TS2 including contents;
+drop tablespace TS3 including contents;
+drop tablespace USERS including contents;
+
+select 'drop user '||USERNAME||' cascade;' from dba_users where ORACLE_MAINTAINED='N';
+```
+
diff --git a/histograms/histogram_01.txt b/histograms/histogram_01.txt
new file mode 100644
index 0000000..aa9e37a
--- /dev/null
+++ b/histograms/histogram_01.txt
@@ -0,0 +1,86 @@
+# Tracking column histogram modifications by M.Houri
+# https://hourim.wordpress.com/2020/08/06/historical-column-histogram/
+
+
+create table T1 tablespace TS1 as
+select rownum id, decode(mod(rownum,10),0,2,1) c_freq, nvl(blocks,999) c_hb
+from dba_tables ;
+
+update T1 set c_freq=3 where rownum<=10;
+commit;
+
+create index idx_freq on T1(C_FREQ) tablespace TS1;
+create index idx_hb on T1(C_HB) tablespace TS1;
+
+
+select c_freq,count(*) from T1 group by c_freq order by 2 desc;
+
+
+exec dbms_stats.gather_table_stats (user, 'T1', method_opt=>'for all columns size 1');
+
+col column_name for a20
+
+select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
+from user_tab_col_statistics
+where table_name='T1' and column_name='C_FREQ';
+
+
+
+select /*+ GATHER_PLAN_STATISTICS */ * from T1 where C_FREQ=3;
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));
+
+
+select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
+from user_tab_col_statistics
+where table_name='T1' and column_name='C_HB';
+
+select /*+ GATHER_PLAN_STATISTICS */ * from T1 where C_HB=999;
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES'));
+
+
+---------------- FREQ
+
+exec
dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns C_FREQ size AUTO'); + +select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram +from user_tab_col_statistics +where table_name='T1' and column_name='C_FREQ'; + +select endpoint_value as column_value, +endpoint_number as cummulative_frequency, +endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency +from user_tab_histograms +where table_name = 'T1' and column_name = 'C_FREQ'; + +alter system flush shared_pool; + +select /*+ GATHER_PLAN_STATISTICS */ * from T1 where C_FREQ=3; + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + + +--------------- WEIGHT + +exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns C_HB size 254'); + +select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram +from user_tab_col_statistics +where table_name='T1' and column_name='C_HB'; + + +select endpoint_value as column_value, +endpoint_number as cummulative_frequency, +endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency +from user_tab_histograms +where table_name = 'T1' and column_name = 'C_HB'; + + + +create table T1 tablespace TS1 as +select rownum id, decode(mod(rownum,10),0,2,1) c_freq, nvl(blocks,999) c_hb +from dba_extents ; + +update T1 set c_freq=3 where rownum<=10; +commit; + diff --git a/histograms/histogram_02.txt b/histograms/histogram_02.txt new file mode 100644 index 0000000..d837005 --- /dev/null +++ b/histograms/histogram_02.txt @@ -0,0 +1,252 @@ +drop table T1 purge; + +create table T1 tablespace TS1 as +select + rownum id, + decode(mod(rownum,10),0,10,1) col1 +from ( select 1 just_a_column + from DUAL + connect by level <= 100000 + ) +/ + + +--------- + +drop table T1 purge; + +create table T1 tablespace TS1 as +select + rownum id, + decode(mod(rownum,3),0,'m3', + decode(mod(rownum,5),0,'m5', + 
decode(mod(rownum,7),0,'m7', + decode(mod(rownum,11),0,'m11', + decode(mod(rownum,13),0,'m13', + decode(mod(rownum,17),0,'m17', + 'other')))))) col1 +from ( select 1 just_a_column + from DUAL + connect by level <= 100000 + ) +/ + + +------------ + + + +drop table T1 purge; + +create table T1 tablespace TS1 as +select + rownum id, + case when rownum<=10 then rownum else 99999 end col1, + case when rownum<=400 then rownum else 99999 end col2, + case when rownum<=4000 then rownum else 99999 end col3, + case when rownum<=10000 then rownum else 99999 end col4 +from ( select 1 just_a_column + from DUAL + connect by level <= 100000 + ) +/ + + +--------- + +drop table T1 purge; + +create table T1 tablespace TS1 as +select + rownum id, + case when rownum>=1 and rownum<1000 then mod(rownum,10) else 99999 end col1, + case when rownum>=1 and rownum<99900 then mod(rownum,1000) else rownum end col2, + mod(rownum,300) col3 +from ( select 1 just_a_column + from DUAL + connect by level <= 100000 + ) +/ + + + +--------- + +drop table T1 purge; + +create table T1 tablespace TS1 as +select + rownum id, + mod(rownum,254) col1, + mod(rownum,255) col2, + mod(rownum,256) col3 +from ( select 1 just_a_column + from DUAL + connect by level <= 100000 + ) +/ + + + + + +exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY'); + + +select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram +from user_tab_col_statistics +where table_name='T1'; + + + + +select endpoint_value as column_value, +endpoint_number as cummulative_frequency, +endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency +from user_tab_histograms +where table_name = 'T1' and column_name = 'COL4'; + + + +select col1,count(*) from T1 group by col1 order by 2 desc; + + + +-------------------- + +https://www.red-gate.com/simple-talk/databases/oracle-databases/12c-histogram-top-frequency/ + +drop table T_TopFreq purge; +create table T_TopFreq 
as +select + rownum n1 + , case when mod(rownum, 100000) = 0 then 90 + when mod(rownum, 10000) = 0 then 180 + when mod(rownum, 1000) = 0 then 84 + when mod(rownum, 100) = 0 then 125 + when mod(rownum,50) = 2 then 7 + when mod(rownum-1,80) = 2 then 22 + when mod(rownum, 10) = 0 then 19 + when mod(rownum-1,10) = 5 then 15 + when mod(rownum-1,5) = 1 then 11 + when trunc((rownum -1/3)) < 5 then 25 + when trunc((rownum -1/5)) < 20 then 33 + else 42 + end n2 +from dual +connect by level <= 2e2 +/ + + +set serveroutput ON + +exec dbms_stats.set_global_prefs ('TRACE', to_char (1+16)); +exec dbms_stats.gather_table_stats (user,'T_TOPFREQ',method_opt=> 'for columns n2 size 8'); +exec dbms_stats.set_global_prefs('TRACE', null); + + +select + sum (cnt) TopNRows + from (select + n2 + ,count(*) cnt + from t_topfreq + group by n2 + order by count(*) desc + ) + where rownum <= 8; + +with FREQ as +( select + n2 + ,count(*) cnt + from t_topfreq + group by n2 + order by count(*) desc +) +select sum(cnt) from FREQ where rownum<=8; + + + +select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram +from user_tab_col_statistics +where table_name='T_TOPFREQ'; + + + + + + +-------------------------------------------------------------- + +drop table T1 purge; + +create table T1 tablespace TS1 as +select + rownum id, + mod(rownum,300) col1 +from ( select 1 just_a_column + from DUAL + connect by level <= 100e3 + ) +/ + +update T1 set col1=567 where id between 70e3 and 75e3; +update T1 set col1=678 where id between 75e3 and 90e3; +update T1 set col1=789 where id between 90e3 and 100e3; + +exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY'); + +-- type de histogram +select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram +from user_tab_col_statistics +where table_name='T1'; + + +-- how many rows are in the TOP-N values ? 
+with FREQ as +( select + col1 + ,count(*) cnt + from T1 + group by col1 + order by count(*) desc +) +select sum(cnt) from FREQ where rownum<=254 +; + +-- frequency by column value / bucket +select endpoint_value as column_value, +endpoint_number as cummulative_frequency, +endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency, +ENDPOINT_REPEAT_COUNT +from user_tab_histograms +where table_name = 'T1' and column_name = 'COL1'; + + + +-------------------------------------------------------------- + +-------------------------------------------------------------- + +drop table T1 purge; + +create table T1 tablespace TS1 as +select + rownum id, + mod(rownum,2000) col1 +from ( select 1 just_a_column + from DUAL + connect by level <= 1000e3 + ) +/ + + +exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 2048'); + +-- type de histogram +select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram +from user_tab_col_statistics +where table_name='T1'; + + diff --git a/histograms/histogram_03.txt b/histograms/histogram_03.txt new file mode 100644 index 0000000..5ddfaed --- /dev/null +++ b/histograms/histogram_03.txt @@ -0,0 +1,120 @@ +create pluggable database NEREUS admin user PDB$OWNER identified by secret; +alter pluggable database NEREUS open; +alter pluggable database NEREUS save state; + +alter session set container=NEREUS; +show pdbs +show con_name + +grant sysdba to adm identified by secret; + +alias NEREUS='rlwrap sqlplus adm/secret@bakura/NEREUS as sysdba' + +create tablespace USERS datafile size 32M autoextend ON next 32M; +alter database default tablespace USERS; + +create user HR identified by secret + quota unlimited on USERS; + +grant CONNECT,RESOURCE to HR; +grant CREATE VIEW to HR; + +wget https://raw.githubusercontent.com/oracle-samples/db-sample-schemas/main/human_resources/hr_cre.sql +wget 
https://raw.githubusercontent.com/oracle-samples/db-sample-schemas/main/human_resources/hr_popul.sql
+
+connect HR/secret@bakura/NEREUS
+
+spool install.txt
+@hr_cre.sql
+@hr_popul.sql
+
+
+alter user HR no authentication;
+
+select /*+ GATHER_PLAN_STATISTICS */
+ emp.FIRST_NAME
+ , emp.LAST_NAME
+ , dept.DEPARTMENT_NAME
+from
+ HR.EMPLOYEES emp,
+ HR.DEPARTMENTS dept
+where
+ emp.DEPARTMENT_ID = dept.DEPARTMENT_ID
+order by
+ FIRST_NAME,
+ LAST_NAME
+/
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+exec dbms_stats.delete_table_stats('HR','EMPLOYEES');
+exec dbms_stats.delete_table_stats('HR','DEPARTMENTS');
+
+alter system flush shared_pool;
+
+exec dbms_stats.gather_table_stats('HR','EMPLOYEES', method_opt=>'for all columns size SKEWONLY');
+exec dbms_stats.gather_table_stats('HR','DEPARTMENTS', method_opt=>'for all columns size SKEWONLY');
+
+exec dbms_stats.gather_table_stats('HR','EMPLOYEES', method_opt=>'for all columns size 254');
+exec dbms_stats.gather_table_stats('HR','DEPARTMENTS', method_opt=>'for all columns size 254');
+
+
+
+select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
+from dba_tab_col_statistics
+where owner='HR' and table_name='EMPLOYEES' and column_name='DEPARTMENT_ID';
+
+select endpoint_value as column_value,
+endpoint_number as cumulative_frequency,
+endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
+from dba_tab_histograms
+where owner='HR' and table_name='EMPLOYEES' and column_name='DEPARTMENT_ID';
+
+select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram
+from dba_tab_col_statistics
+where owner='HR' and table_name='DEPARTMENTS' and column_name='DEPARTMENT_ID';
+
+select endpoint_value as column_value,
+endpoint_number as cumulative_frequency,
+endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
+from
dba_tab_histograms +where owner='HR' and table_name='DEPARTMENTS' and column_name='DEPARTMENT_ID'; + + +break on report skip 1 +compute sum of product on report +column product format 999,999,999 + +with f1 as ( +select + endpoint_value value, + endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency +from + dba_tab_histograms +where + owner='HR' +and table_name = 'EMPLOYEES' +and column_name = 'DEPARTMENT_ID' +order by + endpoint_value +), +f2 as ( +select + endpoint_value value, + endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency +from + dba_tab_histograms +where + owner='HR' +and table_name = 'DEPARTMENTS' +and column_name = 'DEPARTMENT_ID' +order by + endpoint_value +) +select + f1.value, f1.frequency, f2.frequency, f1.frequency * f2.frequency product +from + f1, f2 +where + f2.value = f1.value +; diff --git a/histograms/histogram_04.txt b/histograms/histogram_04.txt new file mode 100644 index 0000000..94a4562 --- /dev/null +++ b/histograms/histogram_04.txt @@ -0,0 +1,85 @@ +drop table T1 purge; + +create table T1 tablespace USERS as +select + rownum id, + case when rownum<10 then mod(rownum,4) else 999 end col1 +from ( select 1 just_a_column + from DUAL + connect by level <= 20 + ) +/ + + + +drop table T2 purge; + +create table T2 tablespace USERS as +select + rownum id, + case when rownum<25 then mod(rownum,10) else 999 end col1 +from ( select 1 just_a_column + from DUAL + connect by level <= 100 + ) +/ + +exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1'); +exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1'); + +alter system flush shared_pool; + +drop table Q purge; +create table Q as +select /*+ GATHER_PLAN_STATISTICS */ + T1.ID id1 + , T2.ID id2 + , T1.COL1 val +from + T1, + T2 +where + T1.COL1=150 + and T1.COL1=T2.COL1 +/ + + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION 
+COST
+BYTES'));
+
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
+exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');
+
+exec dbms_stats.delete_table_stats('SYS','T1');
+exec dbms_stats.delete_table_stats('SYS','T2');
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY');
+exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size SKEWONLY');
+
+
+alter system flush shared_pool;
+
+
+select /*+ GATHER_PLAN_STATISTICS */
+ T1.ID
+ , T2.ID
+ , T1.COL1
+from
+ T1,
+ T2
+where
+ T1.COL1=3
+ and T1.COL1=T2.COL1
+/
+
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+
+@stats_col SYS T1 % % % %
+@stats_col SYS T2 % % % %
+
+
+@hist_cross_freq SYS T1 COL1 SYS T2 COL1
+
+
diff --git a/histograms/histogram_05.txt b/histograms/histogram_05.txt
new file mode 100644
index 0000000..36aea41
--- /dev/null
+++ b/histograms/histogram_05.txt
@@ -0,0 +1,60 @@
+drop table T1 purge;
+create table T1 tablespace USERS as
+select
+ rownum id,
+ case when rownum<4e4 then mod(rownum,500) else 999 end col1
+from ( select 1 just_a_column
+ from DUAL
+ connect by level <= 5e5
+ )
+/
+
+
+drop table T2 purge;
+create table T2 tablespace USERS as
+select
+ rownum id,
+ case when rownum<8e5 then mod(rownum,500) else 999 end col1
+from ( select 1 just_a_column
+ from DUAL
+ connect by level <= 1e6
+ )
+/
+
+
+alter system flush shared_pool;
+
+drop table Q purge;
+
+create table Q as
+ select /*+ GATHER_PLAN_STATISTICS */
+ T1.ID id1
+ , T2.ID id2
+ , T1.COL1 val
+ from
+ T1,
+ T2
+ where
+ T1.COL1=150
+ and T1.COL1=T2.COL1
+ /
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
+exec dbms_stats.gather_table_stats(user,'T2',
method_opt=>'for all columns size 1');
+
+
+exec dbms_stats.delete_table_stats('SYS','T1');
+exec dbms_stats.delete_table_stats('SYS','T2');
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size SKEWONLY');
+exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size SKEWONLY');
+
+
+@stats_col SYS T1 % % % %
+@stats_col SYS T2 % % % %
+
+@hist_cross_freq SYS T1 COL1 SYS T2 COL1
+
+
diff --git a/histograms/histogram_06.txt b/histograms/histogram_06.txt
new file mode 100644
index 0000000..50e8e7e
--- /dev/null
+++ b/histograms/histogram_06.txt
@@ -0,0 +1,97 @@
+https://hourim.wordpress.com/?s=histogram
+
+https://jonathanlewis.wordpress.com/2013/10/09/12c-histograms-pt-3/
+
+exec dbms_stats.delete_table_stats('SYS','T1');
+
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns size 20 col1');
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
+
+
+
+select
+ endpoint_number,
+ endpoint_value,
+ endpoint_repeat_count
+from
+ user_tab_histograms
+where
+ table_name = 'T1'
+order by
+ endpoint_number
+;
+
+
+set pages 50 lines 256
+
+alter system flush shared_pool;
+
+drop table Q purge;
+
+create table Q as
+ select /*+ GATHER_PLAN_STATISTICS */
+ a.COL1 COL1
+ from
+ T1 a,
+ T1 b
+ where
+ a.COL1=b.COL1
+ /
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+
+set pages 50 lines 256
+
+alter system flush shared_pool;
+
+drop table Q purge;
+
+create table Q as
+ select /*+ GATHER_PLAN_STATISTICS */
+ a.COL1 COL1
+ from
+ T1 a,
+ T1 b
+ where
+ a.COL1=33 and
+ a.COL1=b.COL1
+ /
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+
+
+
+set pages 50 lines 256
+
+alter system flush shared_pool;
+
+drop table Q purge;
+
+create table Q as
+ select /*+ GATHER_PLAN_STATISTICS */
+ a.COL1 COL1
+ from
+ T1 a,
+ T1 b
where
+ a.COL1=37 and
+ a.COL1=b.COL1
+ /
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+
+
+37 distinct values - 20 popular values = 17 non popular values
+32 remaining rows => 17 non popular values (uniformly distributed) => ~2 rows / value
+
+
+
+(~2 rows/value x 17 values ~= 32 rows)
+
+
+
diff --git a/histograms/histogram_07.txt b/histograms/histogram_07.txt
new file mode 100644
index 0000000..129cd3b
--- /dev/null
+++ b/histograms/histogram_07.txt
@@ -0,0 +1,48 @@
+exec dbms_stats.delete_table_stats('SYS','T1');
+
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for columns size 20 col1');
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
+
+
+
+
+set pages 50 lines 256
+
+alter system flush shared_pool;
+
+drop table Q purge;
+
+create table Q as
+ select /*+ GATHER_PLAN_STATISTICS */
+ a.COL1 COL1
+ from
+ T1 a,
+ T1 b
+ where
+ a.COL1=9999 and
+ a.COL1=b.COL1
+ /
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+
+
+density = (nr_of_lines / nr_of_distinct_values) / 100 = frequency_of_column / 100
+
+
+
+frequency_of_non_popular_values = (nr_of_lines-sum(endpoint repeat count)) / (number_of_distinct_values - number_of_endpoints)
+
+
+
+ 32 ROWS ---- 17 NON POP
+ ?
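The non-popular frequency formula in the notes above can be checked outside the database. A minimal Python sketch (the figures 37 distinct values, 20 popular endpoints, and 32 remaining rows are the ones from these notes; the function merely restates the formula):

```python
# Cardinality estimate for a NON-popular value under a hybrid/Top-N histogram:
#   (num_rows - sum(endpoint_repeat_count)) / (NDV - number_of_endpoints)
def nonpopular_cardinality(remaining_rows, ndv, num_endpoints):
    # remaining_rows = num_rows - sum(endpoint_repeat_count)
    return remaining_rows / (ndv - num_endpoints)

# Notes above: 37 distinct values, 20 popular endpoints, 32 rows left over.
est = nonpopular_cardinality(remaining_rows=32, ndv=37, num_endpoints=20)
print(round(est, 2))   # 1.88 -> the optimizer shows ~2 rows per non-popular value
```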
+
+
+ Test: popular value
+ non-popular value
+ non-popular value out of range
+
+
\ No newline at end of file
diff --git a/histograms/histogram_08.txt b/histograms/histogram_08.txt
new file mode 100644
index 0000000..4ae4be1
--- /dev/null
+++ b/histograms/histogram_08.txt
@@ -0,0 +1,138 @@
+-- Setup
+--------
+drop table T1 purge;
+
+create table T1 tablespace USERS as
+select
+ rownum id,
+ case when rownum<10 then mod(rownum,4) else 999 end col1
+from ( select 1 just_a_column
+ from DUAL
+ connect by level <= 20
+ )
+/
+
+drop table T2 purge;
+
+create table T2 tablespace USERS as
+select
+ rownum id,
+ case when rownum<25 then mod(rownum,10) else 999 end col1
+from ( select 1 just_a_column
+ from DUAL
+ connect by level <= 100
+ )
+/
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
+exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');
+
+
+set lines 250 pages 999
+alter system flush shared_pool;
+
+drop table Q purge;
+create table Q as
+select /*+ GATHER_PLAN_STATISTICS */
+ T1.ID id1
+ , T2.ID id2
+ , T1.COL1 val
+from
+ T1,
+ T2
+where
+ T1.COL1=T2.COL1
+/
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+@stats_col SYS T1 % % % %
+ b s Avg Num
+Object a e Col Buc
+Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
+-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
+TABLE SYS.T1 COL1 11-FEB-23 09:20:04 Y N 0 20 4 5 0 .200000000000000 NONE 1
+TABLE SYS.T1 ID 11-FEB-23 09:20:04 Y N 0 20 3 20 0 .050000000000000 NONE 1
+
+SQL> @stats_col SYS T2 % % % %
+ b s Avg Num
+Object a e Col Buc
+Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
+-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
+TABLE SYS.T2 COL1 11-FEB-23 09:20:04 Y N 0 100 4 11 0 .090909090909091 NONE 1
+TABLE SYS.T2 ID 11-FEB-23 09:20:04 Y N 0 100 3 100 0 .010000000000000 NONE 1
+
+
+-------------------------------------------------------------------------------------------------------------------------------------------------
+| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | Writes | OMem | 1Mem | Used-Mem |
+--------------------------------------------------------------------------------------------------------------------------------------------------
+| 0 | CREATE TABLE STATEMENT | | 1 | | | 7 (100)| 0 |00:00:00.01 | 25 | 2 | | | |
+| 1 | LOAD AS SELECT | Q | 1 | | | | 0 |00:00:00.01 | 25 | 2 | 1043K| 1043K| 1043K (0)|
+|* 2 | HASH JOIN | | 1 | 182 | 2366 | 6 (0)| 861 |00:00:00.01 | 4 | 0 | 2078K| 2078K| 1219K (0)|
+| 3 | TABLE ACCESS FULL | T1 | 1 | 20 | 120 | 3 (0)| 20 |00:00:00.01 | 2 | 0 | | | |
+| 4 | TABLE ACCESS FULL | T2 | 1 | 100 | 700 | 3 (0)| 100 |00:00:00.01 | 2 | 0 | | | |
+--------------------------------------------------------------------------------------------------------------------------------------------------
+
+-- rows1*rows2/max(distinct1,distinct2) = rows1*rows2*min(density1,density2)
+
+SQL> select 20*100*.090909090909091 from dual;
+
+20*100*.090909090909091
+-----------------------
+ 181.818182
+
+
+alter system flush shared_pool;
+
+drop table Q purge;
+create table Q as
+select /*+ GATHER_PLAN_STATISTICS LEADING(T2 T1) */
+ T1.ID id1
+ , T2.ID id2
+ , T1.COL1 val
+from
+ T1,
+ T2
+where
+ -- T1.COL1=150 and
+ T1.COL1=T2.COL1
+/
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+@stats_col SYS T1 % % % %
+ b s Avg Num
+Object a e
Col Buc +Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket +-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ---- +TABLE SYS.T1 COL1 11-FEB-23 09:20:04 Y N 0 20 4 5 0 .200000000000000 NONE 1 +TABLE SYS.T1 ID 11-FEB-23 09:20:04 Y N 0 20 3 20 0 .050000000000000 NONE 1 + +SQL> @stats_col SYS T2 % % % % + b s Avg Num +Object a e Col Buc +Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket +-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ---- +TABLE SYS.T2 COL1 11-FEB-23 09:20:04 Y N 0 100 4 11 0 .090909090909091 NONE 1 +TABLE SYS.T2 ID 11-FEB-23 09:20:04 Y N 0 100 3 100 0 .010000000000000 NONE 1 + + +-------------------------------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | Writes | OMem | 1Mem | Used-Mem | +-------------------------------------------------------------------------------------------------------------------------------------------------- +| 0 | CREATE TABLE STATEMENT | | 1 | | | 7 (100)| 0 |00:00:00.01 | 24 | 2 | | | | +| 1 | LOAD AS SELECT | Q | 1 | | | | 0 |00:00:00.01 | 24 | 2 | 1043K| 1043K| 1043K (0)| +|* 2 | HASH JOIN | | 1 | 182 | 2366 | 6 (0)| 861 |00:00:00.01 | 4 | 0 | 2078K| 2078K| 1315K (0)| +| 3 | TABLE ACCESS FULL | T2 | 1 | 100 | 700 | 3 (0)| 100 |00:00:00.01 | 2 | 0 | | | | +| 4 | TABLE ACCESS FULL | T1 | 1 | 20 | 120 | 3 (0)| 20 |00:00:00.01 | 2 | 0 | | | | 
+--------------------------------------------------------------------------------------------------------------------------------------------------
+
+-- rows1*rows2/max(distinct1,distinct2) = rows1*rows2*min(density1,density2)
+
+SQL> select 100*20*.090909090909091 from dual;
+
+100*20*.090909090909091
+-----------------------
+ 181.818182
+
+
diff --git a/histograms/histogram_09.txt b/histograms/histogram_09.txt
new file mode 100644
index 0000000..e80c88e
--- /dev/null
+++ b/histograms/histogram_09.txt
@@ -0,0 +1,83 @@
+-- Setup
+--------
+drop table T1 purge;
+
+create table T1 tablespace USERS as
+select
+ rownum id,
+ case when rownum<10 then mod(rownum,4) else 999 end col1
+from ( select 1 just_a_column
+ from DUAL
+ connect by level <= 20
+ )
+/
+
+drop table T2 purge;
+
+create table T2 tablespace USERS as
+select
+ rownum id,
+ case when rownum<25 then mod(rownum,10) else 999 end col1
+from ( select 1 just_a_column
+ from DUAL
+ connect by level <= 100
+ )
+/
+
+exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1');
+exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1');
+
+
+set lines 250 pages 999
+alter system flush shared_pool;
+
+drop table Q purge;
+create table Q as
+select /*+ GATHER_PLAN_STATISTICS */
+ T1.ID id1
+ , T2.ID id2
+ , T1.COL1 val
+from
+ T1,
+ T2
+where
+ T1.COL1=T2.ID
+/
+
+select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST
+PEEKED_BINDS
+PARALLEL
+PARTITION
+COST
+BYTES'));
+
+@stats_col SYS T1 % % % %
+ b s Avg Num
+Object a e Col Buc
+Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
+-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
+TABLE SYS.T1 COL1 11-FEB-23 09:20:04 Y N 0 20 4 5 0 .200000000000000 NONE 1
+TABLE SYS.T1 ID 11-FEB-23
09:20:04 Y N 0 20 3 20 0 .050000000000000 NONE 1
+
+SQL> @stats_col SYS T2 % % % %
+ b s Avg Num
+Object a e Col Buc
+Type TableName ColumnName LastAnalyzed l r Size (MB) SampleSize Len NumDistinct NumNulls Density Histogram ket
+-------- --------------------------------------------- ------------------------- ------------------ - - --------- ---------------- ---- --------------- -------------- ------------------ --------------- ----
+TABLE SYS.T2 COL1 11-FEB-23 09:20:04 Y N 0 100 4 11 0 .090909090909091 NONE 1
+TABLE SYS.T2 ID 11-FEB-23 09:20:04 Y N 0 100 3 100 0 .010000000000000 NONE 1
+
+
+--------------------------------------------------------------------------------------------------------------------------------------------------
+| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | Writes | OMem | 1Mem | Used-Mem |
+--------------------------------------------------------------------------------------------------------------------------------------------------
+| 0 | CREATE TABLE STATEMENT | | 1 | | | 7 (100)| 0 |00:00:00.01 | 24 | 1 | | | |
+| 1 | LOAD AS SELECT | Q | 1 | | | | 0 |00:00:00.01 | 24 | 1 | 1043K| 1043K| 1043K (0)|
+|* 2 | HASH JOIN | | 1 | 20 | 180 | 6 (0)| 7 |00:00:00.01 | 4 | 0 | 2078K| 2078K| 1219K (0)|
+| 3 | TABLE ACCESS FULL | T1 | 1 | 20 | 120 | 3 (0)| 20 |00:00:00.01 | 2 | 0 | | | |
+| 4 | TABLE ACCESS FULL | T2 | 1 | 100 | 300 | 3 (0)| 100 |00:00:00.01 | 2 | 0 | | | |
+--------------------------------------------------------------------------------------------------------------------------------------------------
+
+-- rows1*rows2/max(distinct1,distinct2) = rows1*rows2*min(density1,density2)
+
+SQL> select 20*100*.010000000000000 from dual;
+
+20*100*.010000000000000
+-----------------------
+ 20
+
diff --git a/histograms/histogram_10.txt b/histograms/histogram_10.txt
new file mode 100644
index 0000000..2c412dc
--- /dev/null
+++ b/histograms/histogram_10.txt
@@ -0,0 +1,179 @@
+-- Setup
+--------
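Before the multi-column joins in this script, the no-histogram join cardinality rule used in histogram_08/09 (rows1*rows2/max(ndv1,ndv2), equivalently rows1*rows2*min(density1,density2)) can be cross-checked with plain arithmetic. A quick sketch, using the row counts and NDVs reported by @stats_col above:

```python
# CBO join cardinality with no histograms on the join columns:
#   rows1 * rows2 / max(ndv1, ndv2)  ==  rows1 * rows2 * min(1/ndv1, 1/ndv2)
def join_cardinality(rows1, rows2, ndv1, ndv2):
    return rows1 * rows2 / max(ndv1, ndv2)

# histogram_08: T1.COL1 (20 rows, NDV=5) joined to T2.COL1 (100 rows, NDV=11)
print(join_cardinality(20, 100, 5, 11))    # 181.81... -> E-Rows 182 in the plan
# histogram_09: T1.COL1 (20 rows, NDV=5) joined to T2.ID (100 rows, NDV=100)
print(join_cardinality(20, 100, 5, 100))   # 20.0 -> E-Rows 20 in the plan
```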
+drop table T1 purge; + +create table T1( + id NUMBER not null, + col1 NUMBER, + col2 NUMBER +) +tablespace USERS; + +declare + v_id NUMBER; + v_col1 NUMBER; + v_col2 NUMBER; +begin + for i IN 1..40 loop + -- id column + v_id:=i; + -- col1 column + if (i between 1 and 15) then v_col1:=mod(i,3); end if; + if (i between 16 and 40) then v_col1:=i; end if; + -- col2 column + if (i between 1 and 30) then v_col2:=mod(i,6); end if; + if (i between 31 and 40) then v_col2:=999; end if; + -- insert values + insert into T1 values (v_id,v_col1,v_col2); + end loop; + commit; +end; +/ + + +drop table T2 purge; + +create table T2( + id NUMBER not null, + col1 NUMBER, + col2 NUMBER +) +tablespace USERS; + +declare + v_id NUMBER; + v_col1 NUMBER; + v_col2 NUMBER; +begin + for i IN 1..150 loop + -- id column + v_id:=i; + -- col1 column + if (i between 1 and 49) then v_col1:=mod(i,7); end if; + if (i between 50 and 100) then v_col1:=i; end if; + if (i between 101 and 150) then v_col1:=777; end if; + -- col2 column + if (i between 1 and 100) then v_col2:=mod(i,10); end if; + if (i between 101 and 140) then v_col2:=999; end if; + if (i between 141 and 150) then v_col2:=i; end if; + -- insert values + insert into T2 values (v_id,v_col1,v_col2); + end loop; + commit; +end; +/ + + +exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1'); +exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1'); + + +set lines 250 pages 999 +alter system flush shared_pool; + +drop table Q purge; +create table Q as +select /*+ GATHER_PLAN_STATISTICS */ + T1.ID id1 + , T2.ID id2 + , T1.COL1 val +from + T1, + T2 +where + T1.COL1=T2.COL1 +/ + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + + +set lines 250 pages 999 +alter system flush shared_pool; + +drop table Q purge; +create table Q as +select /*+ GATHER_PLAN_STATISTICS */ + T1.ID id1 + , T2.ID id2 + , T1.COL1 val +from + 
T1, + T2 +where + T1.COL2=T2.COL2 +/ + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + + +--------------------------------------------------------- +set lines 250 pages 999 +alter system flush shared_pool; + +drop table Q purge; +create table Q as +select /*+ GATHER_PLAN_STATISTICS */ + T1.ID id1 + , T2.ID id2 + , T1.COL1 val +from + T1, + T2 +where + T1.COL1=T2.COL1 and + T1.COL2=T2.COL2 +/ + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + + + + + +set lines 250 pages 999 +alter system flush shared_pool; + +drop table Q purge; +create table Q as +select /*+ GATHER_PLAN_STATISTICS */ + T1.ID id1 + , T2.ID id2 + , T1.COL1 val +from + T1, + T2 +where + T1.COL1=T2.COL1 or + T1.COL2=T2.COL2 +/ + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + +-------------------------------------------------------- + +set lines 250 pages 999 +alter system flush shared_pool; + +drop table Q purge; +create table Q as +select /*+ GATHER_PLAN_STATISTICS MONITOR */ +* +from + T2 +where + COL1>=7 +/ + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES +NOTE')); + +set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000 + +select dbms_perf.report_sql(sql_id=>'cgud94u0jkhjj',outer_start_time=>sysdate-1, outer_end_time=>sysdate, selected_start_time=>sysdate-1, selected_end_time=>sysdate,type=>'TEXT') from dual; + +SELECT report_id,PERIOD_START_TIME,PERIOD_END_TIME,GENERATION_TIME FROM dba_hist_reports WHERE component_name = 'sqlmonitor' AND (period_start_time BETWEEN sysdate-1 and sysdate )AND key1 = 'cgud94u0jkhjj'; + + +set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000 +SELECT DBMS_AUTO_REPORT.REPORT_REPOSITORY_DETAIL(RID => 145, 
TYPE => 'text') FROM dual; + + diff --git a/histograms/hybrid_stats_tab1_insert.sql b/histograms/hybrid_stats_tab1_insert.sql new file mode 100644 index 0000000..83b8635 --- /dev/null +++ b/histograms/hybrid_stats_tab1_insert.sql @@ -0,0 +1,109 @@ +drop table T1 purge; + +create table T1 (col1 NUMBER) tablespace USERS; + +insert into T1 values (8); +insert into T1 values (12); +insert into T1 values (12); +insert into T1 values (13); +insert into T1 values (13); +insert into T1 values (13); +insert into T1 values (15); +insert into T1 values (16); +insert into T1 values (16); +insert into T1 values (17); +insert into T1 values (18); +insert into T1 values (18); +insert into T1 values (19); +insert into T1 values (19); +insert into T1 values (19); +insert into T1 values (20); +insert into T1 values (20); +insert into T1 values (20); +insert into T1 values (20); +insert into T1 values (20); +insert into T1 values (21); +insert into T1 values (22); +insert into T1 values (22); +insert into T1 values (22); +insert into T1 values (23); +insert into T1 values (23); +insert into T1 values (24); +insert into T1 values (24); +insert into T1 values (25); +insert into T1 values (26); +insert into T1 values (26); +insert into T1 values (26); +insert into T1 values (27); +insert into T1 values (27); +insert into T1 values (27); +insert into T1 values (27); +insert into T1 values (27); +insert into T1 values (27); +insert into T1 values (28); +insert into T1 values (28); +insert into T1 values (28); +insert into T1 values (28); +insert into T1 values (28); +insert into T1 values (28); +insert into T1 values (29); +insert into T1 values (29); +insert into T1 values (29); +insert into T1 values (29); +insert into T1 values (29); +insert into T1 values (29); +insert into T1 values (30); +insert into T1 values (30); +insert into T1 values (30); +insert into T1 values (31); +insert into T1 values (31); +insert into T1 values (31); +insert into T1 values (31); +insert into T1 values 
(31); +insert into T1 values (32); +insert into T1 values (32); +insert into T1 values (32); +insert into T1 values (33); +insert into T1 values (33); +insert into T1 values (33); +insert into T1 values (33); +insert into T1 values (33); +insert into T1 values (33); +insert into T1 values (33); +insert into T1 values (33); +insert into T1 values (34); +insert into T1 values (34); +insert into T1 values (34); +insert into T1 values (35); +insert into T1 values (35); +insert into T1 values (35); +insert into T1 values (35); +insert into T1 values (35); +insert into T1 values (35); +insert into T1 values (35); +insert into T1 values (36); +insert into T1 values (37); +insert into T1 values (38); +insert into T1 values (38); +insert into T1 values (38); +insert into T1 values (38); +insert into T1 values (38); +insert into T1 values (39); +insert into T1 values (39); +insert into T1 values (40); +insert into T1 values (41); +insert into T1 values (42); +insert into T1 values (42); +insert into T1 values (43); +insert into T1 values (43); +insert into T1 values (43); +insert into T1 values (44); +insert into T1 values (45); +insert into T1 values (46); +insert into T1 values (50); +insert into T1 values (59); + +commit; + + + diff --git a/histograms/hybrid_stats_tab2_insert.sql b/histograms/hybrid_stats_tab2_insert.sql new file mode 100644 index 0000000..1ae0b34 --- /dev/null +++ b/histograms/hybrid_stats_tab2_insert.sql @@ -0,0 +1,109 @@ +drop table T2 purge; + +create table T2 (col1 NUMBER) tablespace USERS; + +insert into T2 values (8); +insert into T2 values (12); +insert into T2 values (12); +insert into T2 values (22); +insert into T2 values (22); +insert into T2 values (22); +insert into T2 values (15); +insert into T2 values (16); +insert into T2 values (16); +insert into T2 values (17); +insert into T2 values (18); +insert into T2 values (18); +insert into T2 values (19); +insert into T2 values (19); +insert into T2 values (19); +insert into T2 values (20); 
+insert into T2 values (20); +insert into T2 values (20); +insert into T2 values (20); +insert into T2 values (20); +insert into T2 values (21); +insert into T2 values (22); +insert into T2 values (22); +insert into T2 values (22); +insert into T2 values (23); +insert into T2 values (23); +insert into T2 values (25); +insert into T2 values (25); +insert into T2 values (25); +insert into T2 values (26); +insert into T2 values (26); +insert into T2 values (26); +insert into T2 values (55); +insert into T2 values (55); +insert into T2 values (55); +insert into T2 values (55); +insert into T2 values (55); +insert into T2 values (55); +insert into T2 values (28); +insert into T2 values (28); +insert into T2 values (28); +insert into T2 values (28); +insert into T2 values (28); +insert into T2 values (28); +insert into T2 values (29); +insert into T2 values (29); +insert into T2 values (29); +insert into T2 values (29); +insert into T2 values (29); +insert into T2 values (29); +insert into T2 values (30); +insert into T2 values (30); +insert into T2 values (30); +insert into T2 values (31); +insert into T2 values (31); +insert into T2 values (31); +insert into T2 values (31); +insert into T2 values (31); +insert into T2 values (32); +insert into T2 values (32); +insert into T2 values (32); +insert into T2 values (33); +insert into T2 values (33); +insert into T2 values (33); +insert into T2 values (33); +insert into T2 values (33); +insert into T2 values (33); +insert into T2 values (33); +insert into T2 values (33); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (35); +insert into T2 values (36); +insert into T2 values (37); +insert into T2 values (38); +insert into T2 values (38); +insert into T2 values (38); +insert into T2 values 
(38); +insert into T2 values (38); +insert into T2 values (39); +insert into T2 values (39); +insert into T2 values (50); +insert into T2 values (51); +insert into T2 values (52); +insert into T2 values (52); +insert into T2 values (53); +insert into T2 values (53); +insert into T2 values (53); +insert into T2 values (55); +insert into T2 values (55); +insert into T2 values (56); +insert into T2 values (50); +insert into T2 values (59); + +commit; + + + diff --git a/ipconfig b/ipconfig new file mode 100644 index 0000000..e69de29 diff --git a/logminer/logmnr_02.txt b/logminer/logmnr_02.txt new file mode 100644 index 0000000..4d8ec12 --- /dev/null +++ b/logminer/logmnr_02.txt @@ -0,0 +1,124 @@ +# https://redikx.wordpress.com/2015/07/10/logminer-to-analyze-archive-logs-on-different-database/ + +alias HUTTPRD='rlwrap sqlplus sys/"Secret00!"@bakura:1521/HUTTPRD as sysdba' +alias ZABRAKPRD='rlwrap sqlplus sys/"Secret00!"@togoria:1521/ZABRAKPRD as sysdba' + +alias DURGA='rlwrap sqlplus jedi/"Secret00!"@bakura:1521/DURGA as sysdba' +alias MAUL='rlwrap sqlplus jedi/"Secret00!"@togoria:1521/MAUL as sysdba' + +alias WOMBAT='sqlplus wombat/animal@bakura/DURGA' + + +# on PDB DURGA as WOMBAT user +alter session set NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'; + +drop table DEMO purge; +create table DEMO(d date); + +insert into DEMO values (sysdate); +insert into DEMO values (sysdate); +insert into DEMO values (sysdate); +insert into DEMO values (sysdate); +insert into DEMO values (sysdate); +commit; +insert into DEMO values (sysdate); +commit; +delete from DEMO; +commit; + +# backup generated archivelog +rman target / +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + backup as compressed backupset archivelog all delete input; +} + +# store dictionary in redolog +begin + 
dbms_logmnr_d.build(options=>dbms_logmnr_d.store_in_redo_logs); +end; +/ + +# identify archivelog containing the dictionary +select thread#,sequence# from gv$archived_log where DICTIONARY_BEGIN='YES'; +select thread#,sequence# from gv$archived_log where DICTIONARY_END='YES'; + +# backup archivelog containing the dictionary +rman target / +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + backup as compressed backupset archivelog sequence 12 delete input; +} + +# Goal: list all DML against DEMO table between 2024-06-23 15:00:00 and 2024-06-23 16:00:00 + +# identify required archivelog +select THREAD#,max(SEQUENCE#) from gv$archived_log where FIRST_TIME<=timestamp'2024-06-23 15:00:00' group by THREAD#; +select THREAD#,min(SEQUENCE#) from gv$archived_log where NEXT_TIME>=timestamp'2024-06-23 16:00:00' group by THREAD#; + + +# all operation will be realized on a different CDB on the CDB$ROOT +# restore required archivelog +rman target / +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + set archivelog destination to '/mnt/yavin4/tmp/00000/logminer/arch/'; + restore archivelog from sequence 3 until sequence 8; +} + + +# restore dictionary archivelog +rman target / +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/00000/logminer/backup/%d_%U_%s_%t.bck'; + set archivelog destination to '/mnt/yavin4/tmp/00000/logminer/arch/'; + restore archivelog from sequence 12 until sequence 12; +} + + +# add log +execute 
dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_12_1172413318.arc', options => dbms_logmnr.new); +execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_3_1172413318.arc', options => dbms_logmnr.addfile); +execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_4_1172413318.arc', options => dbms_logmnr.addfile); +execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_5_1172413318.arc', options => dbms_logmnr.addfile); +execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_6_1172413318.arc', options => dbms_logmnr.addfile); +execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_7_1172413318.arc', options => dbms_logmnr.addfile); +execute dbms_logmnr.add_logfile(logfilename=>'/mnt/yavin4/tmp/00000/logminer/arch/1_8_1172413318.arc', options => dbms_logmnr.addfile); + +# to list added log + +set lines 256 +col FILENAME for a60 +col INFO for a60 +select FILENAME,INFO from V$LOGMNR_LOGS; + +# start logminer +begin + DBMS_LOGMNR.START_LOGMNR (startTime=>timestamp'2024-06-23 15:00:00' + ,endTime=> timestamp'2024-06-23 16:00:00' + ,OPTIONS=>DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + ); +end; +/ + +# do mining +alter session set NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'; + +col username for a20 +col sql_redo for a70 +col table_name for a20 +col timestamp for a25 + +select timestamp,username,table_name,sql_redo from v$logmnr_contents where seg_name='DEMO'; diff --git a/materialized_views/mw01.txt b/materialized_views/mw01.txt new file mode 100644 index 0000000..d8c81d1 --- /dev/null +++ b/materialized_views/mw01.txt @@ -0,0 +1,70 @@ +create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret; +alter pluggable database NIHILUS open; +alter pluggable database NIHILUS save state; + +orapwd file=orapwSITHPRD password="ad420e57a205c9a7d80d!" 
+ +alias NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba' + +alter session set container=NIHILUS; + +create user DEMO identified by secret; +grant connect, resource to DEMO; +grant create materialized view to DEMO; +grant create view to DEMO; +grant unlimited tablespace to DEMO; + +alias DEMO='rlwrap sqlplus DEMO/"secret"@bakura:1521/NIHILUS' + +create table DEMO as + select 0 seq,current_timestamp now + from + xmltable('1 to 1000'); + + +-- infinite_update.sql +whenever sqlerror exit failure +begin + loop + update demo set seq=seq+1,now=current_timestamp where rownum=1; + commit; + dbms_session.sleep(1); + end loop; +end; +/ + + +select max(seq),max(now) from DEMO.DEMO; +create materialized view DEMOMV1 as select * from DEMO; +create materialized view DEMOMV2 as select * from DEMO; + +create view V as + select 'DEMOMV1' source,seq,now from DEMOMV1 + union all + select 'DEMOMV2' source,seq,now from DEMOMV2 + union all + select 'DEMO' source,seq,now from DEMO; + +set lines 256 +col maxseq for 999999999 +col maxnow for a50 + +select source,max(seq) maxseq,max(now) maxnow from V group by source; + + +exec dbms_refresh.make('DEMO.DEMORGROUP', list=>'DEMOMV1,DEMOMV2', next_date=>null, interval=>'null'); + +exec dbms_refresh.refresh('DEMO.DEMORGROUP'); + +-- we can index and gather stats on materialized views +create index IMV1 on DEMOMV1(seq); +create index IMV2 on DEMOMV2(now); + +exec dbms_stats.gather_table_stats(user,'DEMOMV1', method_opt=>'for all columns size SKEWONLY'); +exec dbms_stats.gather_table_stats(user,'DEMOMV2', method_opt=>'for all columns size AUTO'); + +alter table DEMO add constraint PK_DEMO primary key (NOW); + +create materialized view log on DEMO.DEMO + including new values; + diff --git a/materialized_views/mw02.txt b/materialized_views/mw02.txt new file mode 100644 index 0000000..95adddf --- /dev/null +++ b/materialized_views/mw02.txt @@ -0,0 +1,127 @@ +alias DEMO='rlwrap sqlplus DEMO/"secret"@bakura:1521/NIHILUS' + +drop 
table T1 purge; +create table T1 ( + id number generated always as identity, + n1 number(1), + c1 varchar2(10), + d1 DATE +); + +alter table T1 add constraint T1_PK primary key (ID); + + +-- infinite_update2.sql +whenever sqlerror exit failure +declare + i NUMBER; +begin + i:=0; + loop + i:=i+1; + insert into T1(n1,c1,d1) values(mod(i,3),DBMS_RANDOM.string('a',10),sysdate); + commit; + dbms_session.sleep(1); + end loop; +end; +/ + + +drop materialized view MW0; +drop materialized view MW1; +drop materialized view MW2; + +create materialized view MW0 as select * from T1 where n1=0; +create materialized view MW1 as select * from T1 where n1=1; +create materialized view MW2 as select * from T1 where n1=2; + + +alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'; +select max(d1) from MW0; +select max(d1) from MW1; +select max(d1) from MW2; + +create materialized view log on T1 with primary key including new values; + +set lines 256 + +col log_table for a30 +col log_trigger for a30 +col primary_key for a3 head PK + +select log_table,log_trigger,primary_key from dba_mview_logs where log_owner='DEMO' and MASTER='T1'; + + +-- on snap "server" site + +set lines 256 + +col owner for a15 +col name for a25 +col master_owner for a15 +col master for a25 +col master_link for a25 +col refresh_method for a15 +col type for a10 +col status for a7 +col snaptime for a20 + +select + snap.owner + ,snap.name + ,snap.snapid + ,snap.status + ,slog.snaptime + ,snap.master_owner + ,snap.master + ,snap.refresh_method + ,snap.type + ,snap.master_link +from + sys.slog$ slog +join dba_snapshots snap on slog.snapid=snap.snapid +where slog.mowner='DEMO' and slog.master='T1'; + + +col snapname for a30 +col snapsite for a30 +col snaptime for a30 + +select + r.name snapname, snapid, nvl(r.snapshot_site, 'not registered') snapsite, snaptime +from + sys.slog$ s, dba_registered_snapshots r +where + s.snapid=r.snapshot_id(+) and mowner='DEMO' and master='T1'; + + +exec dbms_mview.refresh('MW0'); 
+exec dbms_mview.refresh('MW1'); +exec dbms_mview.refresh('MW2'); + +-- point of view of snap "client" +select last_refresh_date,sysdate from dba_mviews where mview_name='MW0'; + +-- point of view of snap "server" +select sysdate,last_refresh from dba_snapshots where name='MW0'; + + +select log_table from dba_mview_logs where master='T1'; + +select count(*) from DEMO.MLOG$_T1; + + +exec dbms_refresh.make('MWGROUP0', list=>'MW0,MW1', next_date=>null, interval=>'null',parallelism=>2); + +exec dbms_refresh.refresh('MWGROUP0'); + + +-- https://www.oracleplsqltr.com/2021/03/14/how-to-unregister-materialized-view-from-source-db/ + +exec dbms_mview.unregister_mview(mviewowner=>'DEMO',mviewname=>'MW2',mviewsite=>'NIHILUS'); +exec dbms_mview.purge_mview_from_log(mview_id=>9); + +select segment_name,bytes/1024 Kb from dba_segments where segment_name='MLOG$_T1'; + + + diff --git a/materialized_views/mw03.txt b/materialized_views/mw03.txt new file mode 100644 index 0000000..b52d328 --- /dev/null +++ b/materialized_views/mw03.txt @@ -0,0 +1,85 @@ +-- setup PDB +------------ + +orapwd file=orapwSITHPRD password="ad420e57a205c9a7d80d!" 
+ +create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret; +alter pluggable database NIHILUS open; +alter pluggable database NIHILUS save state; + +alter session set container=NIHILUS; +create user adm identified by "secret"; +grant sysdba to adm; + +alias NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba' + +create user MASTER identified by secret; +grant connect, resource to MASTER; +grant unlimited tablespace to MASTER; + + +alias MASTER='rlwrap sqlplus MASTER/"secret"@bakura:1521/NIHILUS' + +-- setup PDB +------------ + +orapwd file=orapwANDOPRD password="oIp757a205c9?jj90yhgf" + +create pluggable database RANDOR admin user RANDOR$OWNER identified by secret; +alter pluggable database RANDOR open; +alter pluggable database RANDOR save state; + +alter session set container=RANDOR; +create user adm identified by "secret"; +grant sysdba to adm; + + +alias RANDOR='rlwrap sqlplus adm/"secret"@togoria:1521/RANDOR as sysdba' + +create user REPLICA identified by secret; +grant connect, resource to REPLICA; +grant create materialized view to REPLICA; +grant create view to REPLICA; +grant create database link to REPLICA; +grant unlimited tablespace to REPLICA; + +alias REPLICA='rlwrap sqlplus REPLICA/"secret"@togoria:1521/RANDOR' + + +-- master site NIHILUS +drop table T1 purge; +create table T1 ( + id number generated always as identity, + n1 number(1), + c1 varchar2(10), + d1 DATE +); + +alter table T1 add constraint T1_PK primary key (ID); + + +-- replica site RANDOR +create database link RANDOR_TO_NIHILUS connect to MASTER identified by "secret" using 'bakura:1521/NIHILUS'; +select * from DUAL@RANDOR_TO_NIHILUS; + + +drop materialized view MW0; +drop materialized view MW1; +drop materialized view MW2; + +create materialized view MW0 as select * from T1@RANDOR_TO_NIHILUS where n1=0; +create materialized view MW1 as select * from T1@RANDOR_TO_NIHILUS where n1=1; +create materialized view MW2 as select * from T1@RANDOR_TO_NIHILUS 
where n1=2; + + +alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'; +select max(d1) from MW0; +select max(d1) from MW1; +select max(d1) from MW2; + + + + + + + diff --git a/partitioning/articles_01.txt b/partitioning/articles_01.txt new file mode 100644 index 0000000..f5eb898 --- /dev/null +++ b/partitioning/articles_01.txt @@ -0,0 +1,2 @@ +Implementing dynamic partitions AND subpartitions +https://connor-mcdonald.com/2022/04/22/implementing-dynamic-partitions-and-subpartitions/ diff --git a/partitioning/range_to_interval_01.txt b/partitioning/range_to_interval_01.txt new file mode 100644 index 0000000..d6cc3c1 --- /dev/null +++ b/partitioning/range_to_interval_01.txt @@ -0,0 +1,126 @@ +-- http://www.oraclefindings.com/2017/07/23/switching-range-interval-partitioning/ + +drop table DEMO purge; + +create table DEMO( + id INTEGER generated always as identity + ,day DATE not null + ,code VARCHAR2(2) not null + ,val NUMBER not null + ,PRIMARY KEY(id) +) +partition by range(day)( + partition P_2024_01 values less than (date'2024-02-01') + ,partition P_2024_02 values less than (date'2024-03-01') + ,partition INFINITY values less than (MAXVALUE) + ) +; + +create index IDX_VAL on DEMO(val) local; + + +insert into DEMO (day,code,val) values (date'2024-01-09','UK',1005); +insert into DEMO (day,code,val) values (date'2024-01-10','IT',900); +insert into DEMO (day,code,val) values (date'2024-01-11','IT',400); +insert into DEMO (day,code,val) values (date'2024-01-11','FR',400); +insert into DEMO (day,code,val) values (date'2024-01-12','UK',400); +insert into DEMO (day,code,val) values (date'2024-01-12','IT',500); + +insert into DEMO (day,code,val) values (date'2024-02-07','UK',765); +insert into DEMO (day,code,val) values (date'2024-02-09','IT',551); +insert into DEMO (day,code,val) values (date'2024-02-09','IT',90); +insert into DEMO (day,code,val) values (date'2024-02-09','FR',407); +insert into DEMO (day,code,val) values (date'2024-02-09','UK',101); +insert into DEMO 
(day,code,val) values (date'2024-02-10','IT',505); +insert into DEMO (day,code,val) values (date'2024-02-10','FR',2000); + +commit; + + +exec dbms_stats.gather_table_stats(user,'DEMO'); +exec dbms_stats.delete_table_stats(user,'DEMO'); + +-- IMPORTANT: the table should NOT have a MAXVALUE partition +-- ALTER TABLE… SET INTERVAL fails with: ORA-14759: SET INTERVAL is not legal on this table. (Doc ID 2926948.1) + +select count(*) from DEMO partition (INFINITY); +-- Drop the MAXVALUE partition. +alter table POC.DEMO drop partition INFINITY; + +alter table DEMO set interval(NUMTOYMINTERVAL(1, 'MONTH')); + +insert into DEMO (day,code,val) values (date'2024-04-01','IT',50); +insert into DEMO (day,code,val) values (date'2024-05-12','FR',60); +insert into DEMO (day,code,val) values (date'2024-05-14','UK',70); +commit; + + +------------------------------------------------------------- + +drop table DEMO purge; + +create table DEMO( + id INTEGER generated always as identity + ,day DATE not null + ,code VARCHAR2(2) not null + ,val NUMBER not null + ,PRIMARY KEY(id) +) +partition by range(day) subpartition by list (code)( + partition P_2024_01 values less than (date'2024-02-01') + ( + subpartition P_2024_01_UK values ('UK') + ,subpartition P_2024_01_IT values ('IT') + ,subpartition P_2024_01_FR values ('FR') + ) + ,partition P_2024_02 values less than (date'2024-03-01') + ( + subpartition P_2024_02_UK values ('UK') + ,subpartition P_2024_02_IT values ('IT') + ,subpartition P_2024_02_FR values ('FR') + ) + ,partition INFINITY values less than (MAXVALUE) + ( + subpartition INFINITY_UK values ('UK') + ,subpartition INFINITY_IT values ('IT') + ,subpartition INFINITY_FR values ('FR') + ) + ) +; + +create index IDX_VAL on DEMO(val) local; + +alter table POC.DEMO drop partition INFINITY; +alter table DEMO set interval(NUMTOYMINTERVAL(1, 'MONTH')); + +alter index POC.SYS_C007367 rebuild; + + +ALTER TABLE DEMO SPLIT SUBPARTITION SYS_SUBP3241 + VALUES ('UK') INTO ( + SUBPARTITION 
SYS_SUBP3241_UK, + SUBPARTITION SYS_SUBP3241_DIFF + ) + ONLINE; + +ALTER TABLE DEMO SPLIT SUBPARTITION SYS_SUBP3241_DIFF + VALUES ('IT') INTO ( + SUBPARTITION SYS_SUBP3241_IT, + SUBPARTITION SYS_SUBP3241_FR + ) + ONLINE; + +-- because wrong previous subpart name +alter table POC.DEMO rename subpartition SYS_SUBP3241_FR to SYS_SUBP3241_DIFF; + +ALTER TABLE DEMO SPLIT SUBPARTITION SYS_SUBP3241_DIFF + VALUES ('FR') INTO ( + SUBPARTITION SYS_SUBP3241_FR, + SUBPARTITION SYS_SUBP3241_OTHER + ) + ONLINE; + +select count(*) from DEMO subpartition(SYS_SUBP3241_OTHER); +alter table DEMO drop subpartition SYS_SUBP3241_OTHER; + + diff --git a/partitioning/range_to_interval_02 b/partitioning/range_to_interval_02 new file mode 100644 index 0000000..ed1fe24 --- /dev/null +++ b/partitioning/range_to_interval_02 @@ -0,0 +1,93 @@ +drop table DEMO purge; + +create table DEMO( + id INTEGER generated always as identity + ,day DATE not null + ,code VARCHAR2(2) not null + ,val NUMBER not null + ,PRIMARY KEY(id) +) +partition by range(day)( + partition P_2024_01 values less than (date'2024-02-01') + ,partition P_2024_02 values less than (date'2024-03-01') + ,partition P_2024_03 values less than (date'2024-04-01') + ,partition P_2024_04 values less than (date'2024-05-01') + ,partition P_2024_05 values less than (date'2024-06-01') + ,partition P_2024_06 values less than (date'2024-07-01') + ,partition INFINITY values less than (MAXVALUE) + ) +; + +create index IDX_VAL on DEMO(val) local; + +insert /*+ APPEND */ into DEMO (day,code,val) +select + DATE'2024-01-01' + trunc(DBMS_RANDOM.VALUE(1,30*4)) + ,DECODE(trunc(DBMS_RANDOM.VALUE(1,10)), + 1, 'UK' + ,2, 'UK' + ,3, 'UK' + ,4, 'UK' + ,5, 'UK' + ,6, 'IT' + ,7, 'IT' + ,8, 'FR' + ,9, 'FR' + ,10, 'FR' + ) + ,trunc(DBMS_RANDOM.VALUE(1,10000)) +from + xmltable('1 to 4000000') +; + +commit; + + +exec dbms_stats.gather_table_stats(user,'DEMO'); +exec dbms_stats.delete_table_stats(user,'DEMO'); + +exec 
DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'INCREMENTAL', pvalue=>'TRUE'); +exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'GRANULARITY', pvalue=>'PARTITION'); + +exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'INCREMENTAL', pvalue=>'FALSE'); +exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'GRANULARITY', pvalue=>'AUTO'); + + +execute dbms_stats.gather_table_stats(ownname=>'POC',tabname=>'DEMO', partname=>'P_2024_01'); + +execute dbms_stats.delete_table_stats(ownname=>'POC',tabname=>'DEMO', partname=>'P_2024_06'); +execute dbms_stats.gather_table_stats(ownname=>'POC',tabname=>'DEMO', partname=>'P_2024_06'); +execute dbms_stats.gather_table_stats(ownname=>'POC',tabname=>'DEMO', partname=>'P_2024_06',granularity=>'PARTITION'); + + +alter table POC.DEMO drop partition P_2024_05 update global indexes; +alter table POC.DEMO drop partition P_2024_06 update global indexes; +alter table POC.DEMO drop partition INFINITY update global indexes; + +alter table DEMO set interval(NUMTOYMINTERVAL(1, 'MONTH')); + +insert /*+ APPEND */ into DEMO (day,code,val) +select + DATE'2024-07-01' + trunc(DBMS_RANDOM.VALUE(1,30*1)) + ,DECODE(trunc(DBMS_RANDOM.VALUE(1,10)), + 1, 'UK' + ,2, 'UK' + ,3, 'UK' + ,4, 'UK' + ,5, 'UK' + ,6, 'IT' + ,7, 'IT' + ,8, 'FR' + ,9, 'FR' + ,10, 'FR' + ) + ,trunc(DBMS_RANDOM.VALUE(1,10000)) +from + xmltable('1 to 1000000') +; + +commit; + + + + diff --git a/partitioning/range_to_interval_03 b/partitioning/range_to_interval_03 new file mode 100644 index 0000000..a887db5 --- /dev/null +++ b/partitioning/range_to_interval_03 @@ -0,0 +1,131 @@ +drop table DEMO1 purge; + +create table DEMO1( + id INTEGER generated always as identity + ,day DATE not null + ,code VARCHAR2(2) not null + ,val NUMBER not null + ,PRIMARY KEY(id) +) +partition by range(day)( + partition P_2024_01 values less than (date'2024-02-01') + ,partition P_2024_02 values less than (date'2024-03-01') + 
,partition P_2024_03 values less than (date'2024-04-01') + ,partition P_2024_04 values less than (date'2024-05-01') + ) +; + +create index IDX_VAL on DEMO1(val) local; + +insert /*+ APPEND */ into DEMO1 (day,code,val) +select + DATE'2024-01-01' + trunc(DBMS_RANDOM.VALUE(1,30*4)) + ,DECODE(trunc(DBMS_RANDOM.VALUE(1,10)), + 1, 'UK' + ,2, 'UK' + ,3, 'UK' + ,4, 'UK' + ,5, 'UK' + ,6, 'IT' + ,7, 'IT' + ,8, 'FR' + ,9, 'FR' + ,10, 'FR' + ) + ,trunc(DBMS_RANDOM.VALUE(1,10000)) +from + xmltable('1 to 4000000') +; +commit; + +-- create a copy of the table + +drop table DEMO2 purge; + +create table DEMO2( + id INTEGER generated always as identity + ,day DATE not null + ,code VARCHAR2(2) not null + ,val NUMBER not null + ,PRIMARY KEY(id) +) +partition by range(day)( + partition P_2024_01 values less than (date'2024-02-01') + ,partition P_2024_02 values less than (date'2024-03-01') + ,partition P_2024_03 values less than (date'2024-04-01') + ,partition P_2024_04 values less than (date'2024-05-01') + ) +; + +create index IDX_VAL2 on DEMO2(val) local; + +insert /*+ APPEND */ into DEMO2 (day,code,val) select day,code,val from DEMO1; +commit; + + +exec dbms_stats.gather_table_stats(user,'DEMO1'); + +exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO2', pname=>'INCREMENTAL', pvalue=>'TRUE'); +exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO2', pname=>'GRANULARITY', pvalue=>'PARTITION'); +exec dbms_stats.gather_table_stats(user,'DEMO2'); + +-- initial stats on DEMO2 are faster because of incremental aggregation of GLOBAL stats instead of basic calculation +-- repeating gather table stats is much faster on DEMO2 (last_analyzed is not increased) + (test if inserting a couple of lines in every partition changes something) + + +-- convert tables to INTERVAL +alter table DEMO1 set interval(NUMTOYMINTERVAL(1, 'MONTH')); +alter table DEMO2 set interval(NUMTOYMINTERVAL(1, 'MONTH')); + +-- insert a lot of lines in 1-st partition +drop table DEMOAUX purge; +create table 
DEMOAUX as select * from DEMO1 where 1=0; +alter table DEMOAUX drop column ID; + +insert /*+ APPEND */ into DEMOAUX (day,code,val) +select + DATE'2024-01-01' + trunc(DBMS_RANDOM.VALUE(1,30*1)) + ,DECODE(trunc(DBMS_RANDOM.VALUE(1,10)), + 1, 'UK' + ,2, 'UK' + ,3, 'UK' + ,4, 'UK' + ,5, 'UK' + ,6, 'IT' + ,7, 'IT' + ,8, 'FR' + ,9, 'FR' + ,10, 'FR' + ) + ,trunc(DBMS_RANDOM.VALUE(1,10000)) +from + xmltable('1 to 1000000') +; +commit; + + +insert /*+ APPEND */ into DEMO1 (day,code,val) select day,code,val from DEMOAUX; +commit; + +insert /*+ APPEND */ into DEMO2 (day,code,val) select day,code,val from DEMOAUX; +commit; + + +-- stats on DEMO2 will be faster +exec dbms_stats.gather_table_stats(user,'DEMO1'); +exec dbms_stats.gather_table_stats(user,'DEMO2'); + + +=> check why GLOBAL stats on DEMO2 are stale + + + +-- insert 1 line in a new partition +insert into DEMO1 (day,code,val) values (date'2024-05-09','UK',1005); +insert into DEMO2 (day,code,val) values (date'2024-05-09','UK',1005); +commit; + + + + diff --git a/partitioning/range_to_interval_topics_01 b/partitioning/range_to_interval_topics_01 new file mode 100644 index 0000000..b5aa5aa --- /dev/null +++ b/partitioning/range_to_interval_topics_01 @@ -0,0 +1,10 @@ +- classic partitioned table DEMO transformation to INTERVAL partitioning + - insert data for 4 months + - try to modify / delete MAXVALUE partition UPDATING GLOBAL indexes + - transform table to INTERVAL + - insert 2 months of data (if an old named partition exists, it will be used) + - rename automatic partitions + + +STATS +- with default stats parameters diff --git a/postgresql/draft_01.txt b/postgresql/draft_01.txt new file mode 100644 index 0000000..fd4c516 --- /dev/null +++ b/postgresql/draft_01.txt @@ -0,0 +1,60 @@ +in WAL-archive mode + - pg_wal recycles the WAL files (and keeps the size <= max_wal_size) as long as WAL archiving can run; otherwise it grows beyond max_wal_size + - if there is a space problem in the WAL archive filesystem, the archiving process
stops, but it resumes once the space problem is resolved + - the archives can be sent to /dev/null + + + + + +barman + barman list-files aquaris_inst_5501 --target standalone 20240221T194242 + => all the files + WALs required to restore the backup to a consistent state + barman list-files aquaris_inst_5501 --target full 20240221T194242 + => all the files + WALs required to restore the backup to a consistent state + the other WALs streamed since then + barman list-files aquaris_inst_5501 --target wal 20240221T194242 + => only the WALs required to restore the backup to a consistent state + the other WALs streamed since then + +barman list-backup aquaris_inst_5501 + +aquaris_inst_5501 20240221T194242 - Wed Feb 21 19:47:27 2024 - Size: 1.2 GiB - WAL Size: 1.9 GiB + + => the 1.9 GiB of WALs were streamed after the backup + +if we take another backup: + +barman backup aquaris_inst_5501 --wait + +barman list-backup aquaris_inst_5501 +aquaris_inst_5501 20240221T204618 - Wed Feb 21 20:50:56 2024 - Size: 1.2 GiB - WAL Size: 0 B +aquaris_inst_5501 20240221T194242 - Wed Feb 21 19:47:27 2024 - Size: 1.2 GiB - WAL Size: 1.9 GiB + +if we take 2 more backups: + +barman list-backup aquaris_inst_5501 +aquaris_inst_5501 20240221T205658 - Wed Feb 21 20:57:08 2024 - Size: 1.2 GiB - WAL Size: 0 B +aquaris_inst_5501 20240221T205623 - Wed Feb 21 20:56:32 2024 - Size: 1.2 GiB - WAL Size: 48.0 MiB +aquaris_inst_5501 20240221T204618 - Wed Feb 21 20:50:56 2024 - Size: 1.2 GiB - WAL Size: 32.0 MiB +aquaris_inst_5501 20240221T194242 - Wed Feb 21 19:47:27 2024 - Size: 1.2 GiB - WAL Size: 1.9 GiB - OBSOLETE + +=> since we have a RETENTION=3 policy, the first backup became obsolete +=> 2 minutes later it was automatically purged by barman cron + +barman list-backup aquaris_inst_5501 +aquaris_inst_5501 20240221T205658 - Wed Feb 21 20:57:08 2024 - Size: 1.2 GiB - WAL Size: 0 B +aquaris_inst_5501 20240221T205623 - Wed Feb 21 20:56:32 2024 - Size: 1.2 GiB - WAL
Size: 48.0 MiB +aquaris_inst_5501 20240221T204618 - Wed Feb 21 20:50:56 2024 - Size: 1.2 GiB - WAL Size: 32.0 MiB + + +=========== +Barman first streams the WALs into the streaming directory +Then it moves them (+ compresses them) into the wal directory + +To do: + - aquaris => recreate 2 instances: + archivelog + noarchivelog + exegol: + - add /backup FS => OK + - reinstall barman in another directory tree => OK + diff --git a/postgresql/draft_02.txt b/postgresql/draft_02.txt new file mode 100644 index 0000000..e760373 --- /dev/null +++ b/postgresql/draft_02.txt @@ -0,0 +1,91 @@ +barman recover \ + --remote-ssh-command 'ssh sembla' \ + --target-time="2024-02-25 18:07:00" \ + aquaris_5501 20240223T180120 \ + /data/restore + + +/app/postgres/product/16.2/bin/pg_ctl \ + --pgdata=/data/restore \ + -l /tmp/restore.log \ + start + + +/app/postgres/product/16.2/bin/pg_ctl \ + --pgdata=/data/restore \ + -l /tmp/restore.log \ + stop + + +barman list-backups aquaris_5501 +aquaris_5501 20240227T150757 - Tue Feb 27 15:12:54 2024 - Size: 237.0 MiB - WAL Size: 191.7 MiB +aquaris_5501 20240227T145343 - Tue Feb 27 14:58:54 2024 - Size: 239.6 MiB - WAL Size: 242.8 MiB +aquaris_5501 20240227T143931 - Tue Feb 27 14:44:54 2024 - Size: 239.9 MiB - WAL Size: 242.9 MiB + +after the first backup: Tue Feb 27 02:52:33 PM CET 2024 +after the 2nd backup: Tue Feb 27 03:06:47 PM CET 2024 +during the 3rd backup: Tue Feb 27 03:09:10 PM CET 2024 +after the 3rd backup: Tue Feb 27 03:20:17 PM CET 2024 + +inserts stopped: Tue Feb 27 03:23:48 PM CET 2024 + + +1. TARGET = Tue Feb 27 03:06:47 PM CET 2024 + - we use 20240227T145343 + - target time: "2024-02-27 15:06:47" + + +barman recover \ + --remote-ssh-command 'ssh sembla' \ + --target-time="2024-02-27 15:06:47" \ + aquaris_5501 20240227T145343 \ + /data/restore + + +barman recover \ + --remote-ssh-command 'ssh sembla' \ + --target-time="2024-02-27 15:06:47" \ + aquaris_5501 20240227T143931 \ + /data/restore + + +2. 
TARGET = the 1st backup + +barman \ + recover --remote-ssh-command 'ssh sembla' \ + aquaris_5501 20240227T143931 \ + /data/restore + +=> surprise! the recovery went up to the last WAL backed up by BARMAN + + +barman \ + recover --remote-ssh-command 'ssh sembla' \ + aquaris_5501 latest \ + /data/restore + + +jawa=> select min(t),max(t) from timeline; + min | max +----------------------------+---------------------------- + 2024-02-27 14:33:12.373606 | 2024-02-27 15:23:51.508241 +(1 row) + + + +With the "get-wal" clause there was a COMPLETE restore, including the current WAL (partially streamed) + +barman \ + recover --remote-ssh-command 'ssh sembla' \ + --get-wal \ + aquaris_5501 latest \ + /data/restore + + +jawa=> select min(t),max(t) from timeline; + min | max +----------------------------+---------------------------- + 2024-02-27 14:33:12.373606 | 2024-02-27 15:24:19.508331 +(1 row) + + diff --git a/postgresql/pg_install_01.txt b/postgresql/pg_install_01.txt new file mode 100644 index 0000000..4bacd6a --- /dev/null +++ b/postgresql/pg_install_01.txt @@ -0,0 +1,36 @@ +dnf install -y gcc.x86_64 make.x86_64 readline.x86_64 readline-devel.x86_64 zlib-devel.x86_64 zlib.x86_64 openssl-devel.x86_64 + +./configure \ + --prefix=/app/postgres/15.3 \ + --datarootdir=/data \ + --with-ssl=openssl + +make +make install + +useradd postgres -G postgres -g postgres + +chown -R postgres:postgres /app/postgres /data /backup + +pg_ctl -D /data/postgresql/dbf -l logfile start + + +# add to .bash_profile + +export PS1="\u@\h:\w> " +alias listen='lsof -i -P | grep -i "listen"' + +export POSTGRES_HOME=/app/postgres/15.3 +export PGDATA=/data/postgresql/dbf + +export LD_LIBRARY_PATH=$POSTGRES_HOME/lib:$LD_LIBRARY_PATH +export PATH=$POSTGRES_HOME/bin:$PATH + + +# init database and startup +initdb +pg_ctl -D /data/postgresql/dbf -l /home/postgres/postgres.log start + + + diff --git a/postgresql/pg_install_suse_01.txt 
b/postgresql/pg_install_suse_01.txt new file mode 100644 index 0000000..55bf68b --- /dev/null +++ b/postgresql/pg_install_suse_01.txt @@ -0,0 +1,136 @@ +# install packages +zypper install -y gcc +zypper install -y make +zypper install -y automake +zypper install -y readline-devel +zypper install -y zlib-devel +zypper install -y openssl-devel + +# compile from sources +mkdir -p /app/kit/postgresql +wget https://ftp.postgresql.org/pub/source/v15.3/postgresql-15.3.tar.gz + +cd /app/kit/postgresql/postgresql-15.3 + +mkdir -p /app/postgres/15.3 + +./configure \ + --prefix=/app/postgres/15.3 \ + --datarootdir=/data \ + --with-ssl=openssl + +make +make install + +# create user postgres and change ownership of the binaries, data and backup directories +groupadd postgres +useradd postgres -G postgres -g postgres + +# create/update .bash_profile for the postgres user: +------------------------------------------------------------ +alias listen='lsof -i -P | grep -i "listen"' + +export POSTGRES_HOME=/app/postgres/15.3 +export PGDATA=/data/postgresql/dbf + +export LD_LIBRARY_PATH=$POSTGRES_HOME/lib:$LD_LIBRARY_PATH +export PATH=$POSTGRES_HOME/bin:$PATH +------------------------------------------------------------ + +chown -R postgres:postgres /app/postgres /data /backup + +# initialize and start PostgreSQL server +mkdir -p $PGDATA +pg_ctl -D /data/postgresql/dbf -l /home/postgresql.log start + + +# add to .bash_profile + +export PS1="\u@\h:\w> " +alias listen='lsof -i -P | grep -i "listen"' + +export POSTGRES_HOME=/app/postgres/15.3 +export PGDATA=/data/postgresql/dbf + +export LD_LIBRARY_PATH=$POSTGRES_HOME/lib:$LD_LIBRARY_PATH +export PATH=$POSTGRES_HOME/bin:$PATH + + +# init database and startup +initdb +pg_ctl -D /data/postgresql/dbf -l /home/postgres/postgres.log start + +# test local connection +psql +\db + +# define listening interfaces and ports in $PGDATA/postgresql.conf +listen_addresses = '127.0.0.1,192.168.0.101,192.168.1.101' +port = 5432 + +# change postgres user password 
+psql +alter user postgres password 'secret'; + +# activate password authentication on all interfaces, from any host, to any database, using any user +# add lines in $PGDATA/pg_hba.conf + +# TYPE DATABASE USER ADDRESS METHOD +host all all 127.0.0.1/24 md5 +host all all 192.168.0.101/24 md5 +host all all 192.168.1.101/24 md5 + +# test a remote connection: +psql -h aquaris -U postgres + + +# create systemd service +# it was not possible for me to use an environment variable to define the path of the pg_ctl binary + +cat /usr/lib/systemd/system/postgresql.service + +[Unit] +Description=PostgreSQL database server +After=network.target + +[Service] +Type=forking + +User=postgres +Group=postgres + +Environment=PGDATA=/data/postgresql/dbf +Environment=PGLOG=/home/postgres/postgresql.log + +ExecStart=/app/postgres/15.3/bin/pg_ctl -D ${PGDATA} -l ${PGLOG} start +ExecStop=/app/postgres/15.3/bin/pg_ctl stop + +# Give a reasonable amount of time for the server to start up/shut down. +# Ideally, the timeout for starting PostgreSQL server should be handled more +# nicely by pg_ctl in ExecStart, so keep its timeout smaller than this value. +TimeoutSec=300 + +[Install] +WantedBy=multi-user.target + + +# start/stop/status and enable service for automatic startup +systemctl start postgresql +systemctl stop postgresql +systemctl status postgresql +systemctl enable postgresql + + +# to enable WAL archiving, set following parameters in $PGDATA/postgresql.conf +wal_level = replica +archive_mode = on +archive_command = 'test ! -f /backup/postgresql/wal/%f && cp %p /backup/postgresql/wal/%f' +archive_timeout = 3600 # optional, to force a switch every 1 hour + +# https://public.dalibo.com/exports/formation/manuels/modules/i2/i2.handout.html + + + + + + diff --git a/postgresql/pg_online_backup_01.txt b/postgresql/pg_online_backup_01.txt new file mode 100644 index 0000000..3d7b931 --- /dev/null +++ b/postgresql/pg_online_backup_01.txt @@ -0,0 +1,37 @@ +# Online backup script using pg_basebackup +########################################## + +BACKUP_DIR=/backup/postgresql/pgdata +BACKUP_COMPRESSED_DIR=/backup/postgresql/daily + +if [ -z "${BACKUP_DIR}" ] +then + echo "\${BACKUP_DIR} variable is empty" + exit 1 +else + rm -rf ${BACKUP_DIR}/* +fi + +if [ -z "${BACKUP_COMPRESSED_DIR}" ] +then + echo "\${BACKUP_COMPRESSED_DIR} variable is empty" + exit 1 +fi + + +pg_basebackup -h localhost -P -D ${BACKUP_DIR} +pg_verifybackup ${BACKUP_DIR} + +if [ "$?" != "0" ] +then + echo "Verify backup failed" + exit 1 +fi + +NOW=$(date '+%Y-%m-%d__%H_%M_%S') +cd ${BACKUP_DIR} +tar -cvf - * | pigz > ${BACKUP_COMPRESSED_DIR}/${NOW}.tar.gz +########################################## + + +# https://public.dalibo.com/exports/formation/manuels/modules/i2/i2.handout.html \ No newline at end of file diff --git a/postgresql/pitr_example_01.txt b/postgresql/pitr_example_01.txt new file mode 100644 index 0000000..510b45e --- /dev/null +++ b/postgresql/pitr_example_01.txt @@ -0,0 +1,71 @@ +create table players (id int, about text, age int); +insert into players (id, about, age) + values (generate_series(1, 5000), + repeat('A cool player. 
', 2) || 'My number is ' || trunc(random()*1000), + trunc(random()*10 * 2 + 10)); + + +******************************************* +dbaquaris=> select count(*) from players; + 5000 + +dbaquaris=> select current_timestamp; + 2023-07-09 17:13:00.860309+02 +******************************************* + + +insert into players (id, about, age) + values (generate_series(1, 100000), + repeat('A cool player. ', 2) || 'My number is ' || trunc(random()*1000), + trunc(random()*10 * 2 + 10)); + +******************************************* +dbaquaris=> select count(*) from players; + 105000 + +dbaquaris=> select current_timestamp; + 2023-07-09 17:36:08.502146+02 +******************************************* + +insert into players (id, about, age) + values (generate_series(1, 1000000), + repeat('A cool player. ', 2) || 'My number is ' || trunc(random()*1000), + trunc(random()*10 * 2 + 10)); + + +******************************************* +dbaquaris=> select count(*) from players; + 1105000 + +dbaquaris=> select current_timestamp; + 2023-07-09 17:37:32.076851+02 +******************************************* + +# PITR to 2023-07-09 17:36:08 + +- stop PostgreSQL +- take one of the base backups made before the PITR target time and put it in a temporary folder + +mkdir /backup/postgresql/tmp +cd /backup/postgresql/tmp +gunzip -c /backup/postgresql/daily/2023-07-09__16_43_59_emptydb.tar.gz | tar -xvf - + +- add or modify in postgresql.conf + +restore_command = 'cp /backup/postgresql/wal/%f %p' +recovery_target_time = '2023-07-09 17:36:08' +recovery_target_inclusive = true + +- create recovery.signal file +touch recovery.signal + +- start PostgreSQL server with the data in the temporary directory +pg_ctl start -D /backup/postgresql/tmp -l /tmp/reco.log + + +- check the logfile; at the end of the recovery you will be asked to execute the following function in order to open the instance +select pg_wal_replay_resume(); + +- stop PostgreSQL server with the data in the temporary directory +pg_ctl stop -D 
/backup/postgresql/tmp -l /tmp/reco.log + diff --git a/postgresql/postghresql_in_docker_01.txt b/postgresql/postghresql_in_docker_01.txt new file mode 100644 index 0000000..c608033 --- /dev/null +++ b/postgresql/postghresql_in_docker_01.txt @@ -0,0 +1,33 @@ +# pull docker image +docker pull postgres + +# create persistent data directory +mkdir -p /app/persistent_docker/postgresql_17/data + +# start without docker-compose +docker run -d \ + --name postgresql \ + -e POSTGRES_PASSWORD=secret \ + -e PGDATA=/var/lib/postgresql/data/pgdata \ + -v /app/persistent_docker/postgresql_17/data:/var/lib/postgresql/data \ + -p 5432:5432 \ + postgres + +# run psql in interactive mode +docker run -it --rm postgres psql -h kamino -U postgres + +# docker-compose.yaml +services: + postgresql: + image: postgres + restart: always + shm_size: 128mb + container_name: postgresql + environment: + - POSTGRES_PASSWORD=secret + - PGDATA=/var/lib/postgresql/data/pgdata + volumes: + - /app/persistent_docker/postgresql_17/data:/var/lib/postgresql/data + ports: + - 5432:5432 + diff --git a/postgresql/postgres_16_on_Rocky_9_setup_01.txt b/postgresql/postgres_16_on_Rocky_9_setup_01.txt new file mode 100644 index 0000000..5c0a70a --- /dev/null +++ b/postgresql/postgres_16_on_Rocky_9_setup_01.txt @@ -0,0 +1,365 @@ +# create VM with Rocky Linux 9 + +dd if=/dev/zero of=/vm/ssd0/aquaris/boot_01.img bs=1G count=1 +dd if=/dev/zero of=/vm/ssd0/aquaris/root_01.img bs=1G count=8 +dd if=/dev/zero of=/vm/ssd0/aquaris/swap_01.img bs=1G count=2 +dd if=/dev/zero of=/vm/ssd0/aquaris/app_01.img bs=1G count=8 + +virt-install \ + --graphics vnc,password=secret,listen=0.0.0.0 \ + --name=aquaris \ + --vcpus=2 \ + --memory=4096 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/vm/hdd0/_kit_/Rocky-9.3-x86_64-minimal.iso \ + --disk /vm/ssd0/aquaris/boot_01.img \ + --disk /vm/ssd0/aquaris/root_01.img \ + --disk /vm/ssd0/aquaris/swap_01.img \ + --disk /vm/ssd0/aquaris/app_01.img \ + --os-variant=rocky9 + +# VM 
network setup after creation + +nmcli connection show +nmcli connection show --active + +nmcli connection modify enp1s0 ipv4.address 192.168.0.101/24 +nmcli connection modify enp1s0 ipv4.method manual ipv6.method ignore +nmcli connection modify enp1s0 ipv4.gateway 192.168.0.1 +nmcli connection modify enp1s0 ipv4.dns 192.168.0.8 +nmcli connection modify enp1s0 ipv4.dns-search swgalaxy + +nmcli connection modify enp2s0 ipv4.address 192.168.1.101/24 ipv4.method manual ipv6.method ignore + +# list host interfaces +hostname -I + +# set host name +hostnamectl hostname aquaris.swgalaxy + + +# install packages +dnf install -y gcc make automake readline-devel zlib-devel openssl-devel libicu-devel.x86_64 +dnf install -y zip.x86_64 tar.x86_64 libzip.x86_64 unzip.x86_64 bzip2.x86_64 bzip2-devel.x86_64 pigz.x86_64 +dnf install -y wget.x86_64 lsof.x86_64 bind-utils tree.x86_64 python3-devel.x86_64 rsync.x86_64 + +# add data and backup disks + +# on VM get next letter for devices +lsblk + +# on Dom0 create and attach the disk to VM +dd if=/dev/zero of=/vm/ssd0/aquaris/data_01.img bs=1G count=8 +dd if=/dev/zero of=/vm/ssd0/aquaris/backup_01.img bs=1G count=4 + +virsh attach-disk aquaris /vm/ssd0/aquaris/data_01.img vde --driver qemu --subdriver raw --targetbus virtio --persistent +virsh attach-disk aquaris /vm/ssd0/aquaris/backup_01.img vdf --driver qemu --subdriver raw --targetbus virtio --persistent + +# to list the disk of VM +virsh domblklist aquaris --details + +# on VM create partitions, format and mount devices +fdisk /dev/vde +fdisk /dev/vdf + +lsblk +pvs +pvcreate /dev/vde1 +pvcreate /dev/vdf1 +vgs +vgcreate vgdata /dev/vde1 +vgcreate vgbackup /dev/vdf1 +vgs +lvs +lvcreate -n data -l 100%FREE vgdata +lvcreate -n backup -l 100%FREE vgbackup +lvs +mkfs.xfs /dev/mapper/vgdata-data +mkfs.xfs /dev/mapper/vgbackup-backup + +mkdir -p /data /backup + + +echo "/dev/mapper/vgdata-data /data xfs defaults 1 1" >> /etc/fstab +echo "/dev/mapper/vgbackup-backup /backup xfs defaults 1 1" 
>> /etc/fstab + +systemctl daemon-reload + +mount -a +df -hT + +# build PostgreSQL from sources +mkdir -p /app/postgres/product/16.2 +mkdir -p /app/staging_area +cd /app/staging_area +wget https://ftp.postgresql.org/pub/source/v16.2/postgresql-16.2.tar.gz +gunzip -c postgresql-16.2.tar.gz | tar -xvf - + +cd postgresql-16.2 +./configure \ + --prefix=/app/postgres/product/16.2 \ + --with-ssl=openssl +make +make install + +# create user postgres and change the owner of binaries, data and backup directories +groupadd postgres +useradd postgres -G postgres -g postgres + +chown -R postgres:postgres /app /data /backup + +# create/update .bash_profile for postgres user: +export PS1="\u@\h:\w> " +alias listen='lsof -i -P | grep -i "listen"' +alias pgenv='source /app/postgres/admin/scripts/pgenv' + +# create PostgreSQL instance on port 5501 +mkdir -p /app/postgres/admin +mkdir -p /app/postgres/admin/scripts +cd /app/postgres/admin +mkdir -p aquaris_5501/divers +mkdir -p aquaris_5501/log +mkdir -p aquaris_5501/scripts + +# create a script to source PostgreSQL instance variables +cat <<'EOF' > /app/postgres/admin/scripts/pgenv +export PGPORT=$1 +export MYHOST=$(hostname -s) +export PGHOME=/app/postgres/product/16.2 + +export PGDATA=/data/${MYHOST}_${PGPORT} +export PGBACKUP=/backup/${MYHOST}_${PGPORT} +export PGLOG=/app/postgres/admin/${MYHOST}_${PGPORT}/log/${MYHOST}_${PGPORT}.log + +export LD_LIBRARY_PATH=$PGHOME/lib:$LD_LIBRARY_PATH +export PATH=$PGHOME/bin:$PATH +EOF + +# initialize and start PostgreSQL server +pgenv 5501 +initdb -D $PGDATA + +# update PostgreSQL instance configuration file +# $PGDATA/postgresql.conf +listen_addresses = '*' +port = 5501 + +# update $PGDATA/pg_hba.conf in order to allow remote connections using a password +cat <<'EOF' >> $PGDATA/pg_hba.conf +host all all all md5 +EOF + +# start PostgreSQL instance +pg_ctl start -D $PGDATA -l $PGLOG +or +pg_ctl start --pgdata $PGDATA --log $PGLOG + +# stop PostgreSQL instance +pg_ctl stop -m immediate + +# create a database + an 
owner from this database +psql +postgres=# create role jawa login password 'secret'; +postgres=# create database jawa; +postgres=# alter database jawa owner to jawa; +postgres=# \l +postgres=# \du + +# test connection +psql -p 5501 -h aquaris -U jawa + +# create users for barman: barman(superuser) and streaming_barman(replication) +createuser --superuser --replication -P barman +createuser --replication -P streaming_barman + +# update $PGDATA/pg_hba.conf in order to allow replication for streaming_barman user +cat <<'EOF' >> $PGDATA/pg_hba.conf +host replication streaming_barman all md5 +EOF + +# ensure that the following parameters are >= 10 +postgres=# Show max_wal_senders; +postgres=# Show max_replication_slots; +# otherwise update +postgres=# ALTER SYSTEM SET max_wal_senders = 10; +postgres=# ALTER SYSTEM SET max_replication_slots = 10; + + +# Barman can be installed on a remote machine where PostgreSQL binaries are installed +# customize ~/.bashrc on the remote machine + +cat <<'EOF' >> ~/.bashrc +export PS1="\u@\h:\w> " +alias listen='lsof -i -P | grep -i "listen"' + +export POSTGRES_HOME=/app/postgres/product/16.2 +export LD_LIBRARY_PATH=$POSTGRES_HOME/lib:$LD_LIBRARY_PATH + +export PATH=$POSTGRES_HOME/bin:$PATH +EOF + + +# barman install +mkdir /backup/barman +mkdir /app/barman +cd /app/barman +mkdir product conf log run scripts +mkdir conf/barman.d + +mkdir /app/barman/product/barman_3.10.0 + +python -m venv /app/barman/product/barman_3.10.0 +source /app/barman/product/barman_3.10.0/bin/activate + +python -m pip install --upgrade pip + +pip install psycopg2 +pip install barman + +barman -v + +# optionally, activate Barman in .bash_profile +cat <<'EOF' >> ~/.bash_profile +# Activate Barman +source /app/barman/product/barman_3.10.0/bin/activate +EOF + + +# store passwords +cat <<'EOF' >>~/.pgpass +aquaris:5501:*:barman:secret +aquaris:5501:*:streaming_barman:secret +EOF +chmod 0600 ~/.pgpass + +# test connection +psql -h aquaris -p 5501 -U barman -d postgres +psql -h 
aquaris -p 5501 -U streaming_barman -d postgres + +# create barman global configuration file +cat <<'EOF' > /app/barman/conf/barman.conf +[barman] +; System user +barman_user = postgres + +; Directory of configuration files. Place your sections in separate files with .conf extension +; For example place the 'main' server section in /etc/barman.d/main.conf +configuration_files_directory = /app/barman/conf/barman.d + +; Main directory +barman_home = /backup/barman + +; Locks directory - default: %(barman_home)s +;barman_lock_directory = /app/barman/run + +; Log location +log_file = /app/barman/log/barman.log + +; Log level (see https://docs.python.org/3/library/logging.html#levels) +log_level = INFO + +; Default compression level: possible values are None (default), bzip2, gzip, pigz, pygzip or pybzip2 +compression = pigz +EOF + +# for the global configuration file, create a symlink .barman.conf in the home directory +ln -s /app/barman/conf/barman.conf ~/.barman.conf + +# target postgres instance example +cat <<'EOF' > /app/barman/conf/barman.d/aquaris_5501.conf +[aquaris_5501] +description = "PostgreSQL instance on aquaris, port 5501" +conninfo = host=aquaris port=5501 user=barman dbname=postgres +streaming_conninfo = host=aquaris port=5501 user=streaming_barman dbname=postgres +backup_method = postgres +streaming_archiver = on +slot_name = barman +create_slot = auto +retention_policy = REDUNDANCY 4 +EOF + + +# create replication slot +barman receive-wal --create-slot aquaris_5501 + + +# create barman CRON script +cat <<'EOF' > /app/barman/scripts/barman_cron +# Set up environment +source ~/.bash_profile +# Run barman CRON tasks +barman cron +EOF + +chmod +x /app/barman/scripts/barman_cron + +# schedule the CRON script in crontab every 1 minute +crontab -l + +* * * * * /app/barman/scripts/barman_cron > /app/barman/log/barman_cron.log + +# force a WAL switch on the target PostgreSQL instance +barman switch-wal --force --archive aquaris_5501 + +# backup PostgreSQL instance and 
wait for all the required WAL files to be archived +barman backup aquaris_5501 --wait + +# list registered PostgreSQL servers +barman list-servers + +# list backups of one or all servers +barman list-backups all +barman list-backups aquaris_5501 + + +# list files of backup required to restore a minimal consistent image +# one or more WAL will be included +barman list-files aquaris_5501 --target standalone 20240223T165330 + +# list only WAL that can be used in addition with the backup +# that will include necessary WAL to restore a consistent image (as in previous command) + all streamed (and automatically compressed) since the backup +barman list-files aquaris_5501 --target wal 20240223T165330 + +# list all files (base + WAL) available for restore since the backup +barman list-files aquaris_5501 --target full 20240223T165330 + +# show backup information +barman show-backup aquaris_5501 20240223T165330 + +# verify checksums of backup files +barman verify-backup aquaris_5501 20240220T174149 + +*************************************************** +barman \ + recover --remote-ssh-command 'ssh sembla' \ + aquaris_5501 latest \ + /data/restore + + +barman \ + recover --remote-ssh-command 'ssh sembla' \ + --get-wal \ + aquaris_5501 latest \ + /data/restore + + + +/app/postgres/product/16.2/bin/pg_ctl \ + --pgdata=/data/restore \ + -l /tmp/restore.log \ + start + + +/app/postgres/product/16.2/bin/pg_ctl \ + --pgdata=/data/restore \ + -l /tmp/restore.log \ + stop + + + + +barman-wal-restore -U postgres exegol aquaris_5501 --test Bugs Bunny +************************************************** + + diff --git a/postgresql/postgres_17_compile_01.txt b/postgresql/postgres_17_compile_01.txt new file mode 100644 index 0000000..2badae2 --- /dev/null +++ b/postgresql/postgres_17_compile_01.txt @@ -0,0 +1,24 @@ +dnf install bison.x86_64 +dnf install flex.x86_64 + +rpm -qa | grep -i bison +bison-3.7.4-5.el9.x86_64 + +rpm -qa | grep -i flex +flex-2.6.4-9.el9.x86_64 + +dnf install 
perl-FindBin.noarch +rpm -qa | grep -i FindBin +perl-FindBin-1.51-481.el9.noarch + +dnf install perl-lib.x86_64 +rpm -qa | grep -i perl-lib- +perl-lib-0.65-481.el9.x86_64 + +# single command +dnf install -y bison.x86_64 flex.x86_64 perl-lib.x86_64 perl-FindBin.noarch + +./configure \ + --prefix=/app/postgres/rdbms/17.2 \ + --with-ssl=openssl + diff --git a/postgresql/postgres_TLS_01.md b/postgresql/postgres_TLS_01.md new file mode 100644 index 0000000..fdd0492 --- /dev/null +++ b/postgresql/postgres_TLS_01.md @@ -0,0 +1,305 @@ +## Generate PostgreSQL server certificate + +The PostgreSQL server generates a **certificate private key** and a **certificate request** for the `CN = PostgreSQL server` + +```bash +openssl genrsa -out postgres_server.key 4096 +openssl req -new -key postgres_server.key -out postgres_server.csr +``` + +The root CA puts the **certificate request** in a temporary location (e.g. the `generated` directory), generates a **signed certificate** and optionally creates a **certificate full chain**. + +Configuration file used by the root CA: + +``` +[ req ] +default_bits = 4096 +prompt = no +default_md = sha256 +distinguished_name = dn +req_extensions = req_ext + +[ dn ] +CN = PostgreSQL server + +[ req_ext ] +subjectAltName = @alt_names + +[ alt_names ] +DNS.1 = raxus.swgalaxy +DNS.2 = mobus.swgalaxy +DNS.3 = pgsql.swgalaxy +``` + +> **_NOTE:_** The `CN` in the CA configuration file can be different from the `CN` of the CSR. The CA does not replace it unless explicitly configured to do so. 
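Before handing the CSR to the CA, it is worth confirming the subject it actually carries, since that is what normally ends up in the signed certificate. A minimal sketch, reusing the `postgres_server.*` filenames from above; the `-subj` flag (making the request non-interactive) is an assumption about how the CSR was generated:

```bash
# generate the key and a CSR non-interactively (CN as used in this guide)
openssl genrsa -out postgres_server.key 4096
openssl req -new -key postgres_server.key \
  -subj "/CN=PostgreSQL server" \
  -out postgres_server.csr

# display the subject the CA will see in the request
openssl req -in postgres_server.csr -noout -subject
```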
+ +```bash +openssl x509 -req \ + -in generated/postgres_server.csr \ + -CA rootCA.pem -CAkey rootCA.key -CAserial rootCA.srl \ + -out generated/postgres_server.crt \ + -days 3650 \ + -sha256 \ + -extensions req_ext -extfile generated/postgres_server.cnf + +cat generated/postgres_server.crt rootCA.pem > generated/postgres_server.fullchain.crt +``` + +To inspect a certificate in Linux command line: + +```bash +openssl x509 -in rootCA.pem -text -noout +openssl x509 -in generated/postgres_server.crt -text -noout +``` + +## Generate PostgreSQL client certificate(s) + +We will generate 2 certificates for: +- `CN=PostgreSQL client1` +- `CN=PostgreSQL client2` + +```bash +openssl genrsa -out postgres_client1.key 4096 +openssl req -new -key postgres_client1.key -out postgres_client1.csr + +openssl genrsa -out postgres_client2.key 4096 +openssl req -new -key postgres_client2.key -out postgres_client2.csr +``` + +Configuration file used by root CA will be the same for both certificates: + +``` +[ req ] +default_bits = 4096 +prompt = no +default_md = sha256 +distinguished_name = dn +req_extensions = req_ext + +[ dn ] +CN = Generic Client PostgreSQL + +[ req_ext ] +subjectAltName = @alt_names + +[ alt_names ] +DNS.1 = anyhost.anydomain +``` + +```bash +openssl x509 -req \ + -in generated/postgres_client1.csr \ + -CA rootCA.pem -CAkey rootCA.key -CAserial rootCA.srl \ + -out generated/postgres_client1.crt \ + -days 3650 \ + -sha256 \ + -extensions req_ext -extfile generated/postgres_client.cnf + +cat generated/postgres_client1.crt rootCA.pem > generated/postgres_client1.fullchain.crt + +openssl x509 -req \ + -in generated/postgres_client2.csr \ + -CA rootCA.pem -CAkey rootCA.key -CAserial rootCA.srl \ + -out generated/postgres_client2.crt \ + -days 3650 \ + -sha256 \ + -extensions req_ext -extfile generated/postgres_client.cnf + +cat generated/postgres_client2.crt rootCA.pem > generated/postgres_client2.fullchain.crt +``` + + +## On PostgreSQL server, add the root CA 
certificate as a trusted certificate + +Put the root CA certificate (`.crt` or `.pem`) under `/etc/pki/ca-trust/source/anchors`, then update the system trust store. + +```bash +cd /etc/pki/ca-trust/source/anchors +chmod 644 rootCA.pem +update-ca-trust extract +``` + +```bash +openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt /postgres_server.crt +# Inspect the root CA certificate to get the exact CN +grep -R "" /etc/pki/ca-trust/extracted/ +``` + +## Set up SSL on PostgreSQL server + +Place the server certificate, server private key and root CA certificate (optional but recommended) in the PostgreSQL data directory. +Default paths: +- `$PGDATA/server.key` +- `$PGDATA/server.crt` + +Or override with: + +``` +ssl_cert_file = '/path/to/server.crt' +ssl_key_file = '/path/to/server.key' +ssl_ca_file = '/path/to/root.crt' +``` + +Set file permissions: + +```bash +chmod 640 $PGDATA/postgres_server.crt +chmod 600 $PGDATA/postgres_server.key +chmod 640 $PGDATA/rootCA.pem +``` + +> The root CA certificate is required **only for mTLS** mode, when the server must validate the client certificate's authenticity +> Add (concatenate) all intermediate CA certificates (if any) in `ssl_cert_file` + +Enable TLS in `$PGDATA/postgresql.conf`. + +``` +ssl_cert_file = 'postgres_server.crt' +ssl_key_file = 'postgres_server.key' +ssl_ca_file = 'rootCA.pem' + +ssl = on +ssl_ciphers = 'HIGH:!aNULL:!MD5' +ssl_prefer_server_ciphers = on +ssl_min_protocol_version = 'TLSv1.2' +ssl_max_protocol_version = 'TLSv1.3' +``` +PostgreSQL will now listen for both encrypted and unencrypted connections on the **same port**. 
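Because plain connections are still accepted on that port, they can be refused outright. A sketch for `pg_hba.conf` using a `hostnossl` rule (adapt the database, user and address columns to your environment):

```
# reject any connection that did not negotiate TLS
hostnossl all all 0.0.0.0/0 reject

# accept TLS connections with password authentication
hostssl   all all 0.0.0.0/0 md5
```

A connection matching no line at all is also rejected, so having only `hostssl` entries achieves the same effect; the explicit `reject` line just documents the intent.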
+ +## Server side SSL modes + +### Request TLS mode + +Config in `$PGDATA/pg_hba.conf`: + +``` +host all all all md5 +``` + +### Require TLS mode + +Config in `$PGDATA/pg_hba.conf`: +``` +hostssl all all 0.0.0.0/0 md5 +``` + +### Require TLS mode + client certificate + +Config in `$PGDATA/pg_hba.conf`: +``` +hostssl all all 0.0.0.0/0 cert +``` + +## Client side SSL modes + +| Mode | Encrypts | Validates CA | Validates Hostname | Typical Use | +| --- | --- | --- | --- | --- | +| `require` | Yes | No | No | Basic encryption | +| `verify-ca` | Yes | Yes | No | Internal/IP-based | +| `verify-full` | Yes | Yes | Yes | Production | + +Examples: + +```bash +# mode: require +psql "host=raxus.swgalaxy port=5501 user=vplesnila dbname=postgres sslmode=require" + +# mode: verify-ca +psql "host=raxus.swgalaxy port=5501 user=vplesnila dbname=postgres sslmode=verify-ca sslrootcert=rootCA.pem" +``` + +For `verify-full` mode the client needs the client certificate, the client private key and the root CA certificate on the client side. +In our example we will use the previously generated certificates for: +- `CN=PostgreSQL client1` +- `CN=PostgreSQL client2` + +Set file permissions: + +```bash +chmod 640 postgres_client1.crt +chmod 600 postgres_client1.key + +chmod 640 postgres_client2.crt +chmod 600 postgres_client2.key + +chmod 640 rootCA.pem +``` + +In `verify-full` mode we can get rid of the client password by mapping the `CN` in the certificate to a **local PostgreSQL role** (aka **local user**). 
+Create local roles (with no password) in the PostgreSQL instance: +```sql +CREATE ROLE app1 LOGIN; +CREATE ROLE app2 LOGIN; +``` + +Add in `$PGDATA/pg_ident.conf`: + +``` +# MAPNAME SYSTEM-IDENTITY PG-USERNAME +certmap_app1 "PostgreSQL client1" app1 +certmap_app2 "PostgreSQL client2" app2 +``` + +Add in `$PGDATA/pg_hba.conf`: + +``` +hostssl all app1 0.0.0.0/0 cert map=certmap_app1 +hostssl all app2 0.0.0.0/0 cert map=certmap_app2 +``` + +> Reload the PostgreSQL configuration (or restart the instance) after modifying `$PGDATA/pg_ident.conf` and `$PGDATA/pg_hba.conf`. + +Connect in `verify-full` mode using the certificate for authentication: + +```bash +# mode: verify-full +psql "host=raxus.swgalaxy port=5501 user=app1 dbname=postgres sslmode=verify-full sslrootcert=rootCA.pem sslcert=postgres_client1.crt sslkey=postgres_client1.key" + +psql "host=raxus.swgalaxy port=5501 user=app2 dbname=postgres sslmode=verify-full sslrootcert=rootCA.pem sslcert=postgres_client2.crt sslkey=postgres_client2.key" +``` + +As the `SAN` **(Subject Alternative Name)** of the **server** certificate matches: +- `raxus.swgalaxy` +- `mobus.swgalaxy` +- `pgsql.swgalaxy` + +it is possible to connect to all these DNS entries using the same **server** certificate. +Example: + +```bash +psql "host=pgsql.swgalaxy port=5501 user=app1 dbname=postgres sslmode=verify-full sslrootcert=rootCA.pem sslcert=postgres_client1.crt sslkey=postgres_client1.key" +``` + +> ⭐ Server certificates → client checks SAN/CN for hostname +> ⭐ Client certificates → server checks only CN for user identity + +> **IMPORTANT**: we can mix TLS authentication by certificate and password. +PostgreSQL processes pg_hba.conf top‑to‑bottom, and the first matching rule wins. +Once a connection matches a line (based on type, database, user, address, etc.), PostgreSQL stops and applies that rule’s authentication method. 
+ +The following `$PGDATA/pg_hba.conf` allows authentication using certificates for PostgreSQL users app1 and app2, and password authentication for all other users. + +``` +hostssl all app1 0.0.0.0/0 cert map=certmap_app1 +hostssl all app2 0.0.0.0/0 cert map=certmap_app2 +hostssl all all 0.0.0.0/0 md5 +``` + +Check if connections are using TLS: + +```sql +SELECT * FROM pg_stat_ssl; +``` + +Check the connection username: +```sql +select pid as process_id, + usename as username, + datname as database_name, + client_addr as client_address, + application_name, + backend_start, + state, + state_change +from pg_stat_activity; +``` diff --git a/postgresql/usefull_commands_01.txt b/postgresql/usefull_commands_01.txt new file mode 100644 index 0000000..aa4011f --- /dev/null +++ b/postgresql/usefull_commands_01.txt @@ -0,0 +1,32 @@ +# create user +create user wombat password 'secret'; + +# create database +create database dbaquaris; + +# grant ALL privileges on a database to a user +grant all on database dbaquaris to wombat; + + +\c dbaquaris +grant all on schema public to wombat; + +# connect +psql -h aquaris -U wombat -d dbaquaris + +# create schema +create schema bank; + +# WAL archive status +select * from pg_stat_archiver; + +select * from pg_ls_dir ('pg_wal/archive_status') ORDER BY 1; + +# switch current WAL +select pg_switch_wal(); + +# show data directory +show data_directory; + +# list databases with details +\l+ diff --git a/push_all b/push_all new file mode 100755 index 0000000..615d889 --- /dev/null +++ b/push_all @@ -0,0 +1,5 @@ +NOW=$(date -u +"%Y-%m-%d %H:%M:%S" ) +git add . 
+git commit -m "${NOW}" +git push -u origin main + diff --git a/reverse_index/c_reverse_ind3_01.sql b/reverse_index/c_reverse_ind3_01.sql new file mode 100755 index 0000000..d13249f --- /dev/null +++ b/reverse_index/c_reverse_ind3_01.sql @@ -0,0 +1,56 @@ +rem https://jonathanlewis.wordpress.com/2015/06/17/reverse-key-2/ + +rem Script: c_reverse_ind3.sql +rem Author: Jonathan Lewis +rem Dated: Jun 2010 +rem Purpose: +rem + +drop table t1 purge; + +create table t1( + id not null +) +nologging +as +with generator as ( + select --+ materialize + rownum id + from dual + connect by + rownum <= 1e4 -- > comment to avoid wordpress format issue +) +select + 1e7 + rownum id +from + generator v1, + generator v2 +where + rownum <= 1e7 -- > comment to avoid WordPress format issue +; + +begin + dbms_stats.gather_table_stats( + ownname => user, + tabname => 'T1' + ); +end; +/ + +alter table t1 add constraint t1_pk primary key(id) +using index + reverse + nologging +; + +alter system flush shared_pool; +alter system flush buffer_cache; + +alter session set events '10046 trace name context forever, level 8'; + +begin + for i in 20000001..20010000 loop + insert into t1 values(i); + end loop; +end; +/ diff --git a/reverse_index/c_reverse_ind3_02.sql b/reverse_index/c_reverse_ind3_02.sql new file mode 100755 index 0000000..8ce431f --- /dev/null +++ b/reverse_index/c_reverse_ind3_02.sql @@ -0,0 +1,56 @@ +rem https://jonathanlewis.wordpress.com/2015/06/17/reverse-key-2/ + +rem Script: c_reverse_ind3.sql +rem Author: Jonathan Lewis +rem Dated: Jun 2010 +rem Purpose: +rem + +drop table t1 purge; + +create table t1( + id not null +) +nologging +as +with generator as ( + select --+ materialize + rownum id + from dual + connect by + rownum <= 1e4 -- > comment to avoid wordpress format issue +) +select + 1e7 + rownum id +from + generator v1, + generator v2 +where + rownum <= 1e7 -- > comment to avoid WordPress format issue +; + +begin + dbms_stats.gather_table_stats( + ownname => user, + 
tabname => 'T1' + ); +end; +/ + +alter table t1 add constraint t1_pk primary key(id) +using index + -- reverse + nologging +; + +alter system flush shared_pool; +alter system flush buffer_cache; + +alter session set events '10046 trace name context forever, level 8'; + +begin + for i in 20000001..20010000 loop + insert into t1 values(i); + end loop; +end; +/ diff --git a/reverse_index/reverse_index_vs_hash_index_01.txt b/reverse_index/reverse_index_vs_hash_index_01.txt new file mode 100644 index 0000000..91bfadc --- /dev/null +++ b/reverse_index/reverse_index_vs_hash_index_01.txt @@ -0,0 +1,187 @@ +-- regular index +---------------- + +drop table t purge; +create table t (id number, sometext varchar2(50)); +create index i on t(id); +create sequence id_seq; + + +create or replace procedure manyinserts as + begin + DBMS_APPLICATION_INFO.set_module(module_name => 'manyinserts', action_name => 'Do many insert'); + for i in 1..10000 loop + insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?'); + end loop; + commit; + end; + / + + + create or replace procedure manysessions as + v_jobno number:=0; + begin + for i in 1..50 loop + dbms_job.submit(v_jobno,'manyinserts;', sysdate); + end loop; + commit; + end; + / + +exec manysessions; + +SQL> @ash/ashtop event2 "module='manyinserts'" sysdate-1/24/10 sysdate + + Total Distinct Distinct + Seconds AAS %This EVENT2 FIRST_SEEN LAST_SEEN Execs Seen Tstamps +--------- ------- ------- ------------------------------------------ ------------------- ------------------- ---------- -------- + 1015 2.8 84% | buffer busy waits [data block] 2023-04-10 09:39:17 2023-04-10 09:39:46 1015 29 + 121 .3 10% | enq: TX - index contention [mode=4] 2023-04-10 09:39:17 2023-04-10 09:39:41 121 8 + 43 .1 4% | ON CPU 2023-04-10 09:39:17 2023-04-10 09:39:46 39 25 + 14 .0 1% | row cache mutex 2023-04-10 09:39:34 2023-04-10 09:39:35 2 2 + 10 .0 1% | enq: CR - block range reuse ckpt [mode=6] 2023-04-10 09:39:35 2023-04-10 09:39:42 10 3 + 
7 .0 1% | library cache: mutex X 2023-04-10 09:39:17 2023-04-10 09:39:41 7 3 + 2 .0 0% | buffer deadlock 2023-04-10 09:39:23 2023-04-10 09:39:27 2 2 + 1 .0 0% | buffer busy waits [segment header] 2023-04-10 09:39:32 2023-04-10 09:39:32 1 1 + + + +exec dbms_stats.gather_table_stats(user,'T', method_opt=>'for columns ID size AUTO'); + +select /*+ GATHER_PLAN_STATISTICS */ * from T where ID=100; +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + +--------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | +--------------------------------------------------------------------------------------------------------------------------- +| 0 | SELECT STATEMENT | | 1 | | | 4 (100)| 0 |00:00:00.01 | 3 | +| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 1 | 33 | 4 (0)| 0 |00:00:00.01 | 3 | +|* 2 | INDEX RANGE SCAN | I | 1 | 1 | | 3 (0)| 0 |00:00:00.01 | 3 | +--------------------------------------------------------------------------------------------------------------------------- + + +-- reverse index +---------------- + +drop index i; +truncate table t drop storage; +create index i on t(id) reverse; + +create or replace procedure manyinserts as + begin + DBMS_APPLICATION_INFO.set_module(module_name => 'manyinserts_reverseind', action_name => 'Do many insert'); + for i in 1..10000 loop + insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?'); + end loop; + commit; + end; + / + +exec manysessions; + +SQL> @ash/ashtop event2 "module='manyinserts_reverseind'" sysdate-1/24/10 sysdate + + Total Distinct Distinct + Seconds AAS %This EVENT2 FIRST_SEEN LAST_SEEN Execs Seen Tstamps +--------- ------- ------- ------------------------------------------ ------------------- ------------------- ---------- -------- + 830 2.3 86% | buffer busy 
waits [data block] 2023-04-10 09:47:01 2023-04-10 09:47:21 813 21 + 61 .2 6% | row cache mutex 2023-04-10 09:47:16 2023-04-10 09:47:19 2 4 + 49 .1 5% | ON CPU 2023-04-10 09:47:01 2023-04-10 09:47:21 34 21 + 13 .0 1% | enq: CR - block range reuse ckpt [mode=6] 2023-04-10 09:47:14 2023-04-10 09:47:21 13 5 + 3 .0 0% | latch: redo copy 2023-04-10 09:47:18 2023-04-10 09:47:18 1 1 + 2 .0 0% | library cache: mutex X 2023-04-10 09:47:18 2023-04-10 09:47:18 2 1 + 2 .0 0% | reliable message 2023-04-10 09:47:14 2023-04-10 09:47:18 2 2 + 2 .0 0% | undo segment extension 2023-04-10 09:47:20 2023-04-10 09:47:20 2 1 + 1 .0 0% | buffer busy waits [undo header] 2023-04-10 09:47:04 2023-04-10 09:47:04 1 1 + + + +exec dbms_stats.gather_table_stats(user,'T', method_opt=>'for columns ID size AUTO'); + +select /*+ GATHER_PLAN_STATISTICS */ * from T where ID=100; +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + +-------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | +--------------------------------------------------------------------------------------------------------------------------- +| 0 | SELECT STATEMENT | | 1 | | | 4 (100)| 0 |00:00:00.01 | 3 | +| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 1 | 33 | 4 (0)| 0 |00:00:00.01 | 3 | +|* 2 | INDEX RANGE SCAN | I | 1 | 1 | | 3 (0)| 0 |00:00:00.01 | 3 | +--------------------------------------------------------------------------------------------------------------------------- + +select INDEX_TYPE from DBA_INDEXES where owner='SYS' and index_name='I'; + +INDEX_TYPE +--------------------------- +NORMAL/REV + + +-- hash index +------------- + +drop index i; +truncate table t drop storage; +create index i on t(id) global + partition by hash(id) partitions 32; + +create or replace procedure manyinserts 
as + begin + DBMS_APPLICATION_INFO.set_module(module_name => 'manyinserts_hashind', action_name => 'Do many insert'); + for i in 1..10000 loop + insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?'); + end loop; + commit; + end; + / + +exec manysessions; + +SQL> @ash/ashtop event2 "module='manyinserts_hashind'" sysdate-1/24/10 sysdate + + Total Distinct Distinct + Seconds AAS %This EVENT2 FIRST_SEEN LAST_SEEN Execs Seen Tstamps +--------- ------- ------- ------------------------------------------ ------------------- ------------------- ---------- -------- + 776 2.2 80% | buffer busy waits [data block] 2023-04-10 09:50:56 2023-04-10 09:51:17 766 21 + 69 .2 7% | row cache mutex 2023-04-10 09:51:12 2023-04-10 09:51:15 1 4 + 44 .1 5% | ON CPU 2023-04-10 09:50:56 2023-04-10 09:51:17 31 19 + 34 .1 4% | log file switch (checkpoint incomplete) 2023-04-10 09:51:04 2023-04-10 09:51:16 7 3 + 19 .1 2% | log file switch completion 2023-04-10 09:51:13 2023-04-10 09:51:13 3 1 + 13 .0 1% | enq: CR - block range reuse ckpt [mode=6] 2023-04-10 09:51:01 2023-04-10 09:51:16 10 4 + 6 .0 1% | library cache: mutex X 2023-04-10 09:51:07 2023-04-10 09:51:14 6 3 + 3 .0 0% | reliable message 2023-04-10 09:50:57 2023-04-10 09:51:10 3 3 + 2 .0 0% | buffer busy waits [segment header] 2023-04-10 09:51:09 2023-04-10 09:51:09 2 1 + 1 .0 0% | buffer busy waits [undo header] 2023-04-10 09:50:58 2023-04-10 09:50:58 1 1 + 1 .0 0% | latch: cache buffers chains 2023-04-10 09:51:09 2023-04-10 09:51:09 1 1 + +set lines 200 pages 100 +col OBJECT_NAME for a30 +col SUBOBJECT_NAME for a30 + +select object_name,subobject_name,value + from v$segment_statistics where owner='SYS' + and statistic_name='buffer busy waits' + and object_name = 'I'; + +exec dbms_stats.gather_table_stats(user,'T', method_opt=>'for columns ID size AUTO'); + +select /*+ GATHER_PLAN_STATISTICS */ * from T where ID=100; +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST 
+PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + +-------------------------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| Pstart| Pstop | A-Rows | A-Time | Buffers | +-------------------------------------------------------------------------------------------------------------------------------------------- +| 0 | SELECT STATEMENT | | 1 | | | 2 (100)| | | 0 |00:00:00.01 | 2 | +| 1 | PARTITION HASH SINGLE | | 1 | 1 | 33 | 2 (0)| 31 | 31 | 0 |00:00:00.01 | 2 | +| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 1 | 33 | 2 (0)| | | 0 |00:00:00.01 | 2 | +|* 3 | INDEX RANGE SCAN | I | 1 | 1 | | 1 (0)| 31 | 31 | 0 |00:00:00.01 | 2 | +-------------------------------------------------------------------------------------------------------------------------------------------- + + +-- cleanup +---------- +drop table t purge; +drop sequence id_seq; +drop procedure manyinserts; +drop procedure manysessions; + + diff --git a/sql_baselines/draft_01.txt b/sql_baselines/draft_01.txt new file mode 100644 index 0000000..c820d06 --- /dev/null +++ b/sql_baselines/draft_01.txt @@ -0,0 +1,91 @@ +drop table DEMO purge; + +create table DEMO( + c1 INTEGER not null + ,c2 INTEGER not null + ,c3 DATE +); + +create index IDX1 on DEMO(c1); + +insert into DEMO(c1,c2,c3) +select + to_number(COLUMN_VALUE) + ,to_number(COLUMN_VALUE) + ,DATE'2024-01-01' + trunc(DBMS_RANDOM.VALUE(1,30*4)) +from xmltable('1 to 10000'); + +update DEMO set c1=1 where c1<=9000; + +commit; + +exec dbms_stats.delete_table_stats(user,'DEMO'); +exec dbms_stats.gather_table_stats(user,'DEMO'); +exec dbms_stats.gather_table_stats (user, 'DEMO', method_opt=>'for all columns size 1'); +exec dbms_stats.gather_table_stats (user, 'DEMO', method_opt=>'for all columns size auto'); + +col column_name for a20 + +select column_name,num_distinct,density,num_nulls,num_buckets,sample_size,histogram +from 
user_tab_col_statistics +where table_name='DEMO'; + +-- best with FULL scan +alter system flush shared_pool; +var b1 NUMBER; +var b2 NUMBER; +execute :b1:=1; +execute :b2:=128; +select /*+ GATHER_PLAN_STATISTICS */ max(c3) from DEMO where c1=:b1 and c2=:b2; +@xlast + + +@coe_xfr_sql_profile 9g1m1cg9uprrp 2180342005 + +-- disable / drop SQL Profile +exec dbms_sqltune.alter_sql_profile('coe_9g1m1cg9uprrp_2180342005','STATUS','DISABLED'); +exec dbms_sqltune.drop_sql_profile('coe_9g1m1cg9uprrp_2180342005'); + +create index IDX2 on DEMO(c1,c2); + +-- best with INDEX scan +alter system flush shared_pool; +var b1 NUMBER; +var b2 NUMBER; +execute :b1:=9999; +execute :b2:=9999; +select /*+ GATHER_PLAN_STATISTICS */ max(c3) from DEMO where c1=:b1 and c2=:b2; +@xlast + + +-- drop SQL baseline(s) +declare + drop_result pls_integer; +begin + drop_result := DBMS_SPM.DROP_SQL_PLAN_BASELINE( + sql_handle => 'SQL_d6312d092279077a', + plan_name => NULL + ); + dbms_output.put_line(drop_result); +end; +/ + + + +SET LONG 10000 +var report clob; +begin + :report := dbms_spm.evolve_sql_plan_baseline( + sql_handle=>'SQL_7e1afe4c21a1e2af' + ,plan_name=>'SQL_PLAN_7w6ry9hhu3spg179a032f' + ,time_limit=>5 + ,VERIFY=>'YES' + ,COMMIT=>'NO' + ); +end; +/ +print :report + + + + diff --git a/sqlmon/sqlmon_01.txt b/sqlmon/sqlmon_01.txt new file mode 100644 index 0000000..a3919df --- /dev/null +++ b/sqlmon/sqlmon_01.txt @@ -0,0 +1,59 @@ +-- Setup +-------- +drop table T1 purge; + +create table T1 + tablespace USERS + as + select * from dba_extents +; + +drop table T2 purge; + +create table T2 + tablespace USERS + as + select * from dba_extents +; + + +insert into T1 select * from T1; + +insert into T2 select * from T2; +insert into T2 select * from T2; +insert into T2 select * from T2; + +commit; + +create index I1 on T1(OWNER) tablespace USERS; +create index I2 on T2(OWNER) tablespace USERS; + +exec dbms_stats.delete_table_stats(user,'T1'); +exec dbms_stats.delete_table_stats(user,'T2'); + +exec 
dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size 1'); +exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size 1'); + +exec dbms_stats.gather_table_stats(user,'T1', method_opt=>'for all columns size skewonly'); +exec dbms_stats.gather_table_stats(user,'T2', method_opt=>'for all columns size skewonly'); + + +set lines 250 pages 999 +alter system flush shared_pool; + +var MYOWNER varchar2(30); + +execute :MYOWNER:='DBSNMP'; + +select /*+ GATHER_PLAN_STATISTICS MONITOR */ + count(1) +from + T1, + T2 +where + T1.OWNER=:MYOWNER and + T1.BLOCKS=T2.BLOCKS +/ + +select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS +PARALLEL +PARTITION +COST +BYTES')); + diff --git a/statistics/ex_when_cbo_use_global_stats_on_part_table_01.txt b/statistics/ex_when_cbo_use_global_stats_on_part_table_01.txt new file mode 100644 index 0000000..ff3a0d9 --- /dev/null +++ b/statistics/ex_when_cbo_use_global_stats_on_part_table_01.txt @@ -0,0 +1,259 @@ +-- https://franckpachot.medium.com/oracle-global-vs-partition-level-statistics-cbo-usage-1c2aa2aa3f32 + +create table DEMO (day date) partition by range(day) ( + partition P2018 values less than (date '2019-01-01'), + partition P2019 values less than (date '2020-01-01'), + partition P2020 values less than (date '2021-01-01'), + partition P2021 values less than (date '2022-01-01'), + partition INFINIT values less than (MAXVALUE) + ); + +insert into DEMO + select date '2019-01-01'+rownum from xmltable('1 to 100'); +insert into DEMO + select date '2018-01-01'-rownum/24 from xmltable('1 to 5000'); +commit; + +exec dbms_stats.gather_table_stats(user,'DEMO'); + +set lines 256 + +col OWNER for a20 +col TABLE_NAME for a20 +col PARTITION_NAME for a20 +col LAST_ANALYZED for a20 +col GLOBAL_STATS for a3 +col STALE_STATS for a5 + +select OWNER,TABLE_NAME,PARTITION_NAME,NUM_ROWS,LAST_ANALYZED,GLOBAL_STATS,STALE_STATS from dba_tab_statistics where table_name='DEMO'; + +OWNER 
TABLE_NAME           PARTITION_NAME         NUM_ROWS LAST_ANALYZED        GLO STALE
+-------------------- -------------------- ---------- -------------------- --- -----
+POC                  DEMO                                  5100 20-JAN-24            YES NO
+POC                  DEMO                 P2020               0 20-JAN-24            YES NO
+POC                  DEMO                 INFINIT             0 20-JAN-24            YES NO
+POC                  DEMO                 P2018            5000 20-JAN-24            YES NO
+POC                  DEMO                 P2019             100 20-JAN-24            YES NO
+POC                  DEMO                 P2021               0 20-JAN-24            YES NO
+
+
+
+Pstart = Pstop (PARTITION RANGE SINGLE) -> partition stats are used
+--------------
+
+select count(*) from DEMO where day between to_date( '2019-01-08','yyyy-mm-dd' ) and to_date( '2019-02-08','yyyy-mm-dd' ) ;
+
+  COUNT(*)
+----------
+        32
+
+-----------------------------------------------------------------
+| Id  | Operation               | Name | Rows  | Pstart| Pstop |
+-----------------------------------------------------------------
+|   0 | SELECT STATEMENT        |      |       |       |       |
+|   1 |  SORT AGGREGATE         |      |     1 |       |       |
+|   2 |   PARTITION RANGE SINGLE|      |    33 |     2 |     2 |
+|   3 |    TABLE ACCESS FULL    | DEMO |    33 |     2 |     2 |
+-----------------------------------------------------------------
+
+Partition-level statistics are used when only one partition is accessed and that partition is known at parse time.
+You see that when Pstart = Pstop = <partition number>.
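The actual row distribution behind these estimates can be double-checked against the data itself; a minimal sketch, assuming the DEMO table populated above:

```sql
-- Rows per calendar year, to compare with NUM_ROWS in dba_tab_statistics:
-- the 5000 rows inserted backwards from 2018-01-01 all land in 2017 (partition P2018),
-- and the 100 rows inserted forward from 2019-01-01 land in 2019 (partition P2019).
select extract(year from day) as yr, count(*) as cnt
from   DEMO
group  by extract(year from day)
order  by yr;
```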
+
+Pstart <> Pstop (PARTITION RANGE ITERATOR/FULL) -> global stats are used
+---------------
+
+select count(*) from DEMO where day between to_date( '2019-01-08','yyyy-mm-dd' ) and to_date( '2020-02-08','yyyy-mm-dd' ) ;
+
+  COUNT(*)
+----------
+        94
+
+
+select * from dbms_xplan.display_cursor(format=>'basic +rows +outline +peeked_binds +partition');
+
+-------------------------------------------------------------------
+| Id  | Operation                 | Name | Rows  | Pstart| Pstop |
+-------------------------------------------------------------------
+|   0 | SELECT STATEMENT          |      |       |       |       |
+|   1 |  SORT AGGREGATE           |      |     1 |       |       |
+|   2 |   PARTITION RANGE ITERATOR|      |   705 |     2 |     3 |
+|   3 |    TABLE ACCESS FULL      | DEMO |   705 |     2 |     3 |
+-------------------------------------------------------------------
+
+Note that the number of rows is overestimated.
+
+Global table-level statistics are used when more than one partition is accessed, even when the partitions are known at parse time.
+You see that with Pstart <> Pstop.
+
+
+KEY - KEY (PARTITION RANGE ITERATOR/FULL) -> partition stats are used
+----------
+
+var d1 varchar2(10)
+var d2 varchar2(10)
+exec :d1:='2019-01-08';
+exec :d2:='2019-02-08';
+
+select count(*) from DEMO where day between to_date(:d1,'yyyy-mm-dd' ) and to_date(:d2,'yyyy-mm-dd' );
+
+  COUNT(*)
+----------
+        32
+
+
+--------------------------------------------------------------------
+| Id  | Operation                  | Name | Rows  | Pstart| Pstop |
+--------------------------------------------------------------------
+|   0 | SELECT STATEMENT           |      |       |       |       |
+|   1 |  SORT AGGREGATE            |      |     1 |       |       |
+|   2 |   FILTER                   |      |       |       |       |
+|   3 |    PARTITION RANGE ITERATOR|      |    33 |   KEY |   KEY |
+|   4 |     TABLE ACCESS FULL      | DEMO |    33 |   KEY |   KEY |
+--------------------------------------------------------------------
+
+Outline Data
+------------
+
+  /*+
+      BEGIN_OUTLINE_DATA
+      IGNORE_OPTIM_EMBEDDED_HINTS
+      OPTIMIZER_FEATURES_ENABLE('19.1.0')
+      DB_VERSION('19.1.0')
+      ALL_ROWS
+      OUTLINE_LEAF(@"SEL$1")
+      FULL(@"SEL$1" "DEMO"@"SEL$1")
+      END_OUTLINE_DATA
+  */
+
+Peeked Binds (identified by position):
+--------------------------------------
+
+   1 - :D1 (VARCHAR2(30), CSID=873): '2019-01-08'
+   2 - :D2 (VARCHAR2(30), CSID=873): '2019-02-08'
+
+Partition-level statistics are used when only one partition is accessed and that partition is known at parse time, even when the value is only known through bind peeking.
+You see that with KEY/KEY for Pstart/Pstop and with the bind variables listed by the +peeked_binds format option.
+
+
+Without bind peeking -> global
+--------------------
+
+Same bound query as before, but with bind peeking disabled at the session level.
+
+alter session set "_optim_peek_user_binds" = false;
+alter system flush shared_pool;
+
+select count(*) from DEMO where day between to_date(:d1,'yyyy-mm-dd' ) and to_date(:d2,'yyyy-mm-dd' );
+
+  COUNT(*)
+----------
+        32
+
+select * from dbms_xplan.display_cursor(format=>'basic +rows +outline +peeked_binds +partition');
+
+--------------------------------------------------------------------
+| Id  | Operation                  | Name | Rows  | Pstart| Pstop |
+--------------------------------------------------------------------
+|   0 | SELECT STATEMENT           |      |       |       |       |
+|   1 |  SORT AGGREGATE            |      |     1 |       |       |
+|   2 |   FILTER                   |      |       |       |       |
+|   3 |    PARTITION RANGE ITERATOR|      |    13 |   KEY |   KEY |
+|   4 |     TABLE ACCESS FULL      | DEMO |    13 |   KEY |   KEY |
+--------------------------------------------------------------------
+
+Outline Data
+------------
+
+  /*+
+      BEGIN_OUTLINE_DATA
+      IGNORE_OPTIM_EMBEDDED_HINTS
+      OPTIMIZER_FEATURES_ENABLE('19.1.0')
+      DB_VERSION('19.1.0')
+      OPT_PARAM('_optim_peek_user_binds' 'false')
+      ALL_ROWS
+      OUTLINE_LEAF(@"SEL$1")
+      FULL(@"SEL$1" "DEMO"@"SEL$1")
+      END_OUTLINE_DATA
+  */
+
+The number of rows is underestimated: the global stats have been used.
+
+In conclusion, there is only one case where partition-level statistics can be used:
+partition pruning to one single partition, known at parse time from a literal value or a peeked bind.
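As a recap of where the different estimates come from, the global and partition-level row counts sit side by side in the dictionary; a sketch for the DEMO table above (in USER_TAB_STATISTICS the table-level entry has a null PARTITION_NAME):

```sql
-- One line per statistics level: '(GLOBAL)' is the 5100-row table-level
-- entry the CBO falls back to whenever it cannot prune to one single
-- known partition at parse time.
select nvl(partition_name,'(GLOBAL)') as stats_level,
       num_rows,
       last_analyzed
from   user_tab_statistics
where  table_name = 'DEMO'
order  by partition_name nulls first;
```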
+
+--------------------------------------
+Extra (example online split partition)
+--------------------------------------
+
+-- add the next 5000 days starting 1 Jan 2022
+insert into DEMO
+  select date '2022-01-01'+rownum from xmltable('1 to 5000');
+
+commit;
+
+-- create the P2022, P2023 and P2024 partitions using online split
+
+ALTER TABLE DEMO
+  SPLIT PARTITION INFINIT AT (date '2023-01-01')
+  INTO (PARTITION P2022,
+        PARTITION INFINIT)
+  ONLINE;
+
+ALTER TABLE DEMO
+  SPLIT PARTITION INFINIT AT (date '2024-01-01')
+  INTO (PARTITION P2023,
+        PARTITION INFINIT)
+  ONLINE;
+
+ALTER TABLE DEMO
+  SPLIT PARTITION INFINIT AT (date '2025-01-01')
+  INTO (PARTITION P2024,
+        PARTITION INFINIT)
+  ONLINE;
+
+exec dbms_stats.gather_table_stats(user,'DEMO');
+
+col OWNER for a20
+col TABLE_NAME for a20
+col PARTITION_NAME for a20
+col LAST_ANALYZED for a20
+col GLOBAL_STATS for a3
+col STALE_STATS for a5
+
+select OWNER,TABLE_NAME,PARTITION_NAME,NUM_ROWS,LAST_ANALYZED,GLOBAL_STATS,STALE_STATS from dba_tab_statistics where table_name='DEMO';
+
+OWNER                TABLE_NAME           PARTITION_NAME         NUM_ROWS LAST_ANALYZED        GLO STALE
+-------------------- -------------------- -------------------- ---------- -------------------- --- -----
+POC                  DEMO                                           10100 20-JAN-24            YES NO
+POC                  DEMO                 P2020                         0 20-JAN-24            YES NO
+POC                  DEMO                 P2022                       364 20-JAN-24            YES NO
+POC                  DEMO                 INFINIT                    3905 20-JAN-24            YES NO
+POC                  DEMO                 P2018                      5000 20-JAN-24            YES NO
+POC                  DEMO                 P2019                       100 20-JAN-24            YES NO
+POC                  DEMO                 P2021                         0 20-JAN-24            YES NO
+POC                  DEMO                 P2024                       366 20-JAN-24            YES NO
+POC                  DEMO                 P2023                       365 20-JAN-24            YES NO
+
+
+-- check boundaries
+
+select min(day),max(day) from DEMO partition (P2022);
+
+MIN(DAY)  MAX(DAY)
+--------- ---------
+02-JAN-22 31-DEC-22
+
+select min(day),max(day) from DEMO partition (P2023);
+
+MIN(DAY)  MAX(DAY)
+--------- ---------
+01-JAN-23 31-DEC-23
+
+select min(day),max(day) from DEMO partition (P2024);
+
+MIN(DAY)  MAX(DAY)
+--------- ---------
+01-JAN-24 31-DEC-24
+
diff --git a/statistics/example_incremental_stats__01
b/statistics/example_incremental_stats__01 new file mode 100644 index 0000000..ecfe530 --- /dev/null +++ b/statistics/example_incremental_stats__01 @@ -0,0 +1,133 @@ +create table DEMO (day date) partition by range(day) ( + partition P2000 values less than (date '2001-01-01'), + partition P2001 values less than (date '2002-01-01'), + partition P2002 values less than (date '2003-01-01'), + partition P2003 values less than (date '2004-01-01'), + partition P2004 values less than (date '2005-01-01'), + partition P2005 values less than (date '2006-01-01'), + partition INFINITY values less than (MAXVALUE) + ); + +-- by default GRANULARITY=AUTO and INCREMENTAL=FALSE +-- on this table overwrite property GRANULARITY +exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'GRANULARITY', pvalue=>'PARTITION'); + +-- insert some lines in the first 4 partitions +insert into DEMO select date '2000-01-01'+rownum from xmltable('1 to 1460'); +commit; + +-- gather stats on the first 5 partitions +exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEMO',partname=>'P2000'); +exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEMO',partname=>'P2001'); +exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEMO',partname=>'P2002'); +exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEMO',partname=>'P2003'); +exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEMO',partname=>'P2004'); + +-- show table stats +set lines 256 +col OWNER for a20 +col TABLE_NAME for a20 +col PARTITION_NAME for a20 +col LAST_ANALYZED for a20 +col GLOBAL_STATS for a3 +col STALE_STATS for a5 + +select OWNER,TABLE_NAME,PARTITION_NAME,NUM_ROWS,LAST_ANALYZED,GLOBAL_STATS,STALE_STATS from dba_tab_statistics where table_name='DEMO' order by PARTITION_POSITION asc; + +OWNER TABLE_NAME PARTITION_NAME NUM_ROWS LAST_ANALYZED GLO STALE +-------------------- -------------------- -------------------- ---------- -------------------- --- ----- +POC DEMO P2000 365 21-JAN-24 
YES NO
+POC                  DEMO                 P2001                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2002                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2003                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2004                         0 21-JAN-24            YES NO
+POC                  DEMO                 P2005                                                    NO
+POC                  DEMO                 INFINITY                                                 NO
+POC                  DEMO                                                                          NO
+
+
+-- as expected: 365 rows in each of the first 4 partitions, 0 rows in the 5th partition, and no stats on the other partitions
+-- also, no global stats at the table level
+
+-- now set the INCREMENTAL property to TRUE for the table
+
+exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'INCREMENTAL', pvalue=>'TRUE');
+
+-- gather statistics on P2005
+exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEMO',partname=>'P2005');
+
+-- still no global stats, because there are no stats on the last partition yet
+
+OWNER                TABLE_NAME           PARTITION_NAME         NUM_ROWS LAST_ANALYZED        GLO STALE
+-------------------- -------------------- -------------------- ---------- -------------------- --- -----
+POC                  DEMO                 P2000                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2001                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2002                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2003                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2004                         0 21-JAN-24            YES NO
+POC                  DEMO                 P2005                         0 21-JAN-24            YES NO
+POC                  DEMO                 INFINITY                                                 NO
+POC                  DEMO                                                                          NO
+
+-- gather statistics on the last partition
+exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEMO',partname=>'INFINITY');
+
+-- now we can see global stats at the table level
+-- note that GLOBAL_STATS=NO and STALE_STATS=YES for the global (aggregated) table stats
+
+OWNER                TABLE_NAME           PARTITION_NAME         NUM_ROWS LAST_ANALYZED        GLO STALE
+-------------------- -------------------- -------------------- ---------- -------------------- --- -----
+POC                  DEMO                 P2000                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2001                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2002                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2003                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2004                         0 21-JAN-24            YES NO
+POC                  DEMO                 P2005                         0 21-JAN-24            YES NO
+POC                  DEMO                 INFINITY                      0 21-JAN-24            YES NO
+POC                  DEMO                                            1460 21-JAN-24            NO  YES
+
+
+-- now delete the table stats and use gather_table_stats to gather stats on the whole table
+exec
dbms_stats.delete_table_stats(user,'DEMO');
+exec dbms_stats.gather_table_stats(user,'DEMO');
+
+-- the result is identical to gathering stats individually for each partition
+
+OWNER                TABLE_NAME           PARTITION_NAME         NUM_ROWS LAST_ANALYZED        GLO STALE
+-------------------- -------------------- -------------------- ---------- -------------------- --- -----
+POC                  DEMO                 P2000                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2001                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2002                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2003                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2004                         0 21-JAN-24            YES NO
+POC                  DEMO                 P2005                         0 21-JAN-24            YES NO
+POC                  DEMO                 INFINITY                      0 21-JAN-24            YES NO
+POC                  DEMO                                            1460 21-JAN-24            NO  YES
+
+
+-- reset the table prefs to their default values
+exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'GRANULARITY', pvalue=>'AUTO');
+exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'POC',tabname=>'DEMO', pname=>'INCREMENTAL', pvalue=>'FALSE');
+
+-- delete and gather table stats
+exec dbms_stats.delete_table_stats(user,'DEMO');
+exec dbms_stats.gather_table_stats(user,'DEMO');
+
+
+-- note that GLOBAL_STATS=YES and STALE_STATS=NO for the global (aggregated) table stats
+
+OWNER                TABLE_NAME           PARTITION_NAME         NUM_ROWS LAST_ANALYZED        GLO STALE
+-------------------- -------------------- -------------------- ---------- -------------------- --- -----
+POC                  DEMO                 P2000                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2001                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2002                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2003                       365 21-JAN-24            YES NO
+POC                  DEMO                 P2004                         0 21-JAN-24            YES NO
+POC                  DEMO                 P2005                         0 21-JAN-24            YES NO
+POC                  DEMO                 INFINITY                      0 21-JAN-24            YES NO
+POC                  DEMO                                            1460 21-JAN-24            YES NO
+
+-- CONCLUSION: when INCREMENTAL stats are activated:
+   - stats on ALL partitions are required in order to compute the GLOBAL table stats
+   - in DBA_TAB_STATISTICS, GLOBAL_STATS=NO and STALE_STATS=YES for the aggregated table stats
+
diff --git a/statistics/stats_getpref_global.sql b/statistics/stats_getpref_global.sql
new file mode 100755
index 0000000..3b0e0ce
--- /dev/null
+++ 
b/statistics/stats_getpref_global.sql @@ -0,0 +1,36 @@ +SET SERVEROUTPUT ON +DECLARE + PROCEDURE display(p_param IN VARCHAR2) AS + l_result VARCHAR2(50); + BEGIN + l_result := DBMS_STATS.get_prefs (pname => p_param); + DBMS_OUTPUT.put_line(RPAD(p_param, 30, ' ') || ' : ' || l_result); + END; +BEGIN + display('APPROXIMATE_NDV_ALGORITHM'); + display('AUTO_STAT_EXTENSIONS'); + display('AUTO_TASK_STATUS'); + display('AUTO_TASK_MAX_RUN_TIME'); + display('AUTO_TASK_INTERVAL'); + display('CASCADE'); + display('CONCURRENT'); + display('DEGREE'); + display('ESTIMATE_PERCENT'); + display('GLOBAL_TEMP_TABLE_STATS'); + display('GRANULARITY'); + display('INCREMENTAL'); + display('INCREMENTAL_STALENESS'); + display('INCREMENTAL_LEVEL'); + display('METHOD_OPT'); + display('NO_INVALIDATE'); + display('OPTIONS'); + display('PREFERENCE_OVERRIDES_PARAMETER'); + display('PUBLISH'); + display('STALE_PERCENT'); + display('STAT_CATEGORY'); + display('TABLE_CACHED_BLOCKS'); + display('STALE_PERCENT'); + display('WAIT_TIME_TO_UPDATE_STATS'); +END; +/ + diff --git a/statistics/stats_getpref_table.sql b/statistics/stats_getpref_table.sql new file mode 100755 index 0000000..a57a4b4 --- /dev/null +++ b/statistics/stats_getpref_table.sql @@ -0,0 +1,36 @@ +SET SERVEROUTPUT ON +DECLARE + PROCEDURE display(p_param IN VARCHAR2) AS + l_result VARCHAR2(50); + BEGIN + l_result := DBMS_STATS.get_prefs (pname=>p_param,ownname=>'&1',tabname=>'&2'); + DBMS_OUTPUT.put_line(RPAD(p_param, 30, ' ') || ' : ' || l_result); + END; +BEGIN + display('APPROXIMATE_NDV_ALGORITHM'); + display('AUTO_STAT_EXTENSIONS'); + display('AUTO_TASK_STATUS'); + display('AUTO_TASK_MAX_RUN_TIME'); + display('AUTO_TASK_INTERVAL'); + display('CASCADE'); + display('CONCURRENT'); + display('DEGREE'); + display('ESTIMATE_PERCENT'); + display('GLOBAL_TEMP_TABLE_STATS'); + display('GRANULARITY'); + display('INCREMENTAL'); + display('INCREMENTAL_STALENESS'); + display('INCREMENTAL_LEVEL'); + display('METHOD_OPT'); + 
display('NO_INVALIDATE'); + display('OPTIONS'); + display('PREFERENCE_OVERRIDES_PARAMETER'); + display('PUBLISH'); + display('STALE_PERCENT'); + display('STAT_CATEGORY'); + display('TABLE_CACHED_BLOCKS'); + display('STALE_PERCENT'); + display('WAIT_TIME_TO_UPDATE_STATS'); +END; +/ + diff --git a/stats_history/stats_hist_01.txt b/stats_history/stats_hist_01.txt new file mode 100644 index 0000000..fe9561d --- /dev/null +++ b/stats_history/stats_hist_01.txt @@ -0,0 +1,61 @@ +create pluggable database NIHILUS admin user NIHILUS$OWNER identified by secret; +alter pluggable database NIHILUS open; +alter pluggable database NIHILUS save state; + + +alter session set container=NIHILUS; + +create tablespace USERS datafile size 32M autoextend ON next 32M; +alter database default tablespace USERS; + +create user adm identified by "secret"; +grant sysdba to adm; + +create user usr identified by "secret"; +grant CONNECT,RESOURCE to usr; +grant alter session to usr; +alter user usr quota unlimited on USERS; + +alias adm_NIHILUS='rlwrap sqlplus adm/"secret"@bakura:1521/NIHILUS as sysdba' +alias usr_NIHILUS='rlwrap sqlplus usr/"secret"@bakura:1521/NIHILUS' + + + + +create table USR.T1 as select * from dba_extents; +create index USR.IDX_T1_BLOCKS on USR.T1(blocks); +exec dbms_stats.gather_table_stats('USR','T1', method_opt=>'for all columns size AUTO'); + +insert into USR.T1 select * from USR.T1; +commit; +exec dbms_stats.gather_table_stats('USR','T1', method_opt=>'for all columns size AUTO'); + + +insert into USR.T1 select * from USR.T1; +insert into USR.T1 select * from USR.T1; +insert into USR.T1 select * from USR.T1; +commit; +exec dbms_stats.gather_table_stats('USR','T1', method_opt=>'for all columns size AUTO'); + + +insert into USR.T1 select * from USR.T1; +insert into USR.T1 select * from USR.T1; +insert into USR.T1 select * from USR.T1; +insert into USR.T1 select * from USR.T1; +insert into USR.T1 select * from USR.T1; +commit; +exec 
dbms_stats.gather_table_stats('USR','T1', method_opt=>'for all columns size AUTO'); + + +-- @stats_history.sql USR + +-- stats history for table and index +@stats_history.sql USR T1 % % +@stats_history.sql USR IDX_T1_BLOCKS % INDEX + +-- display gather stats operations and details +@stats_opls.sql sysdate-1/24 sysdate BASIC TEXT +@stats_opdet.sql 1083 TYPICAL + + +-- see also Metalink note: How to View Table Statistics History (Doc ID 761554.1) diff --git a/stratis_fs/stratis_01.txt b/stratis_fs/stratis_01.txt new file mode 100644 index 0000000..2dc3dc4 --- /dev/null +++ b/stratis_fs/stratis_01.txt @@ -0,0 +1,63 @@ +-- https://computingforgeeks.com/configure-stratis-storage-on-rocky-almalinux/?expand_article=1 + +dnf install stratisd stratis-cli +systemctl enable --now stratisd +systemctl status stratisd + +lsblk + +wipefs --all /dev/vdd /dev/vde + +stratis pool create pool_data /dev/vdd +stratis pool create pool_backup /dev/vde + +--no-overprovision + +lsblk + +stratis pool list + +stratis fs create --size 19iB pool_data fs_data +stratis fs create pool_backup fs_backup + +stratis fs list + +lsblk --output=UUID /dev/stratis/pool_data/fs_data +lsblk --output=UUID /dev/stratis/pool_backup/fs_backup + +# use the UUID for persistent mount points in /etc/fstab +UUID=0122acad-a17b-4897-a52b-21e2ad75df70 /data xfs defaults,x-systemd.requires=stratisd.service 0 0 +UUID=1630773d-4b4c-490e-84bb-caceecf6d370 /backup xfs defaults,x-systemd.requires=stratisd.service 0 0 + + +stratis blockdev + +stratis fs snapshot pool_data fs_data fs_data-$(date +%Y-%m-%d) +stratis fs list + +mkdir -p /snap/data-2023-07-05 +mount /dev/stratis/pool_data/fs_data-2023-07-05 /snap/data-2023-07-05 + +stratis fs snapshot pool_data fs_data fs_data-$(date +%Y-%m-%d)-bis + +mkdir -p /snap/data-2023-07-05-bis +mount /dev/stratis/pool_data/fs_data-2023-07-05-bis /snap/data-2023-07-05-bis + +stratis fs destroy pool_data fs_data-2023-07-05 +stratis fs destroy pool_data fs_data-2023-07-05-bis + + +# cleanup 
+#########
+umount /data
+umount /backup
+
+stratis fs destroy pool_data fs_data
+stratis fs destroy pool_backup fs_backup
+
+stratis pool destroy pool_data
+stratis pool destroy pool_backup
+
+stratis pool list
+stratis fs list
+
diff --git a/tiddlywiki/001.txt b/tiddlywiki/001.txt
new file mode 100755
index 0000000..541516f
--- /dev/null
+++ b/tiddlywiki/001.txt
@@ -0,0 +1,35 @@
+ALTER SYSTEM SET events 'trace[rdbms.SQL_Optimizer.*][sql:9s5u1k3vshsw4]'
+ALTER SYSTEM SET events 'trace[rdbms.SQL_Optimizer.*][sql:9s5u1k3vshsw4] off'
+
+https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=368303807596291&id=19708342.8&_adf.ctrl-state=1b44p4xesv_237
+
+README
+------
+
+  Name: SQL Developer SQLcl
+  Desc: Oracle SQL Developer Command Line (SQLcl) is a free command line
+        interface for Oracle Database. It allows you to interactively or
+        batch execute SQL and PL/SQL. SQLcl provides in-line editing, statement
+        completion, and command recall for a feature-rich experience, all while
+        also supporting your previously written SQL*Plus scripts.
+  Version: 20.4.1
+  Build: 20.4.1.351.1718
+
+Release Notes
+=============
+
+Ansiconsole as default SQLFormat
+--------------------------------
+From SQLcl 20.2, AnsiConsole Format is on by default. This means that certain
+features will not work as expected until the format is set back to default.
+
+These include the SQL\*Plus features
+  * HEADING
+  * TTITLE
+  * BREAK
+  * COMPUTE
+
+SQL> set sqlformat default
+
+If you have extensive use of SQL\*Plus style reports, you need to unset sqlformat via login.sql or add it to your reports.
diff --git a/tiddlywiki/002.txt b/tiddlywiki/002.txt
new file mode 100755
index 0000000..3e9f554
--- /dev/null
+++ b/tiddlywiki/002.txt
@@ -0,0 +1,8 @@
+# ssh-pageant
+ eval $(/usr/bin/ssh-pageant -r -a "/tmp/.ssh-pageant-$USERNAME")
+ ===================================================================
+# ssh-pageant
+# Cygwin maps /tmp to c:\cygwin\tmp
+# MinGW maps /tmp to %TEMP% (%LocalAppData%\Temp)
+# Both consume $TEMP (user-specific) from the Windows environment though.
+eval $(/usr/local/bin/ssh-pageant -ra $TEMP/.ssh-pageant)
diff --git a/tiddlywiki/01.md b/tiddlywiki/01.md
new file mode 100755
index 0000000..12d71ed
--- /dev/null
+++ b/tiddlywiki/01.md
@@ -0,0 +1,7 @@
+- ala
+  - a `@fef <>`
+  - b `@fef <> <>`
+  - c
+- bala
+- portocala
+TEST
\ No newline at end of file
diff --git a/tiddlywiki/11gR2 dataguard RAC example.txt b/tiddlywiki/11gR2 dataguard RAC example.txt
new file mode 100755
index 0000000..3cc041e
--- /dev/null
+++ b/tiddlywiki/11gR2 dataguard RAC example.txt
@@ -0,0 +1,372 @@
+== CONTEXT
+==========
+
+~~ PRIMARY cluster: vortex-db01,vortex-db02
+~~ PRIMARY database: DB_NAME=GOTAL, DB_UNIQUE_NAME=GOTALPRD
+
+~~ STANDBY cluster: kessel-db01,kessel-db02
+~~ STANDBY database: DB_NAME=GOTAL, DB_UNIQUE_NAME=GOTALDRP
+
+
+== PRIMARY database creation
+============================
+
+-- in 11gR2, if we want DB_NAME <> DB_UNIQUE_NAME, for example DB_NAME=GOTAL and DB_UNIQUE_NAME=GOTALPRD,
+-- we should manually create the DB_NAME directory under the data diskgroup before starting dbca
+
+asmcmd mkdir +DATA/GOTAL
+
+$ORACLE_HOME/bin/dbca \
+-silent \
+-createDatabase \
+-templateName General_Purpose.dbc \
+-gdbName GOTAL \
+-sid GOTALPRD \
+-initParams db_unique_name=GOTALPRD \
+-characterSet AL32UTF8 \
+-sysPassword secret \
+-systemPassword secret \
+-emConfiguration NONE \
+-storageType ASM \
+-diskGroupName DATA
\
+-redoLogFileSize 100 \
+-sampleSchema FALSE \
+-totalMemory 1000 \
+-nodelist vortex-db01,vortex-db02
+
+-- dbca will create 2 directories under the data diskgroup: DB_NAME and DB_UNIQUE_NAME
+-- the DB_NAME directory contains only a link to the physical spfile in the DB_UNIQUE_NAME directory
+-- the DB_NAME directory can be suppressed if we create the spfile directly under the DB_UNIQUE_NAME directory and modify the database spfile parameter value in CRS
+
+SQL> create pfile='/tmp/pfile.txt' from spfile='+DATA/gotal/spfilegotalprd.ora';
+ASMCMD > rm -rf +DATA/GOTAL
+SQL> create spfile='+DATA/GOTALPRD/spfilegotalprd.ora' from pfile='/tmp/pfile.txt';
+
+srvctl modify database -d GOTALPRD -p +DATA/GOTALPRD/spfilegotalprd.ora
+srvctl stop database -d GOTALPRD
+srvctl start database -d GOTALPRD
+srvctl status database -d GOTALPRD -v
+
+
+~~ enable ARCHIVELOG mode on the PRIMARY database
+
+alter system set db_recovery_file_dest_size = 4G scope=both sid='*';
+alter system set db_recovery_file_dest = '+RECO' scope=both sid='*';
+alter system set log_archive_dest_1 = 'location=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*';
+
+srvctl stop database -d GOTALPRD
+
+startup mount exclusive
+alter database archivelog;
+alter database open;
+
+srvctl stop database -d GOTALPRD
+srvctl start database -d GOTALPRD
+
+alter system archive log current;
+
+
+== STANDBY database creation
+============================
+
+- create a pfile from the PRIMARY spfile
+- modify the pfile by replacing the required values such as DB_UNIQUE_NAME, INSTANCE_NAME, remote_listener etc.
+- copy the pfile to a STANDBY host and test a startup nomount
+- copy the password file from PRIMARY to the STANDBY hosts
+
+
+== NETWORK configuration
+========================
+
+-- listener.ora entries on vortex-db01
+# For DATAGUARD...
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = GOTALPRD_DGMGRL)
+      (SID_NAME = GOTALPRD1)
+      (ORACLE_HOME = /app/oracle/product/11.2/db_1)
+    )
+  )
+
+# ...For DATAGUARD
+
+-- listener.ora entries on vortex-db02
+# For DATAGUARD...
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = GOTALPRD_DGMGRL)
+      (SID_NAME = GOTALPRD2)
+      (ORACLE_HOME = /app/oracle/product/11.2/db_1)
+    )
+  )
+# ...For DATAGUARD
+
+
+-- listener.ora entries on kessel-db01
+# For DATAGUARD...
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = GOTALDRP_DGMGRL)
+      (SID_NAME = GOTALDRP1)
+      (ORACLE_HOME = /app/oracle/product/11.2/db_1)
+    )
+  )
+
+# ...For DATAGUARD
+
+-- listener.ora entries on kessel-db02
+# For DATAGUARD...
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = GOTALDRP_DGMGRL)
+      (SID_NAME = GOTALDRP2)
+      (ORACLE_HOME = /app/oracle/product/11.2/db_1)
+    )
+  )
+# ...For DATAGUARD
+
+
+-- the GLOBAL_DBNAME value is the name of the service visible with:
+lsnrctl services LISTENER_DG
+
+-- cross connection tests; we should be able to connect to idle instances too
+sqlplus /nolog
+connect sys/secret@vortex-db01-dba-vip:1541/GOTALPRD_DGMGRL as sysdba
+connect sys/secret@vortex-db02-dba-vip:1541/GOTALPRD_DGMGRL as sysdba
+connect sys/secret@kessel-db01-dba-vip:1541/GOTALDRP_DGMGRL as sysdba
+(for the moment the standby pfile/password file are not deployed on the second node of the standby cluster)
+
+-- aliases to add to tnsnames.ora on all database nodes
+# For DATAGUARD...
+GOTALPRD_DG =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    (ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-db01-dba-vip)(PORT = 1541))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-db02-dba-vip)(PORT = 1541))
+    )
+    (CONNECT_DATA =
+      (SERVER = DEDICATED)
+      (SERVICE_NAME = GOTALPRD_DGMGRL)
+    )
+)
+
+
+GOTALDRP_DG =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    (ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-db01-dba-vip)(PORT = 1541))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-db02-dba-vip)(PORT = 1541))
+    )
+    (CONNECT_DATA =
+      (SERVER = DEDICATED)
+      (SERVICE_NAME = GOTALDRP_DGMGRL)
+    )
+)
+# ...For DATAGUARD
+
+
+-- connection test using the TNS aliases
+-- we should be able to connect to idle instances
+
+sqlplus /nolog
+connect sys/secret@GOTALPRD_DG as sysdba
+connect sys/secret@GOTALDRP_DG as sysdba
+
+-- put the primary database in FORCE LOGGING mode
+SQL> alter database force logging;
+SQL> select force_logging from gv$database;
+
+-- from the spfile of the primary DB we create an spfile for the secondary DB and we start the secondary DB in nomount
+rman target sys/secret@GOTALPRD_DG auxiliary sys/secret@GOTALDRP_DG
+run {
+  allocate channel pri1 device type DISK;
+  allocate channel pri2 device type DISK;
+  allocate auxiliary channel aux1 device type DISK;
+  allocate auxiliary channel aux2 device type DISK;
+  duplicate target database
+    for standby
+    from active database
+    nofilenamecheck;
+}
+
+~~ Dataguard Broker configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-- on primary database
+alter system set dg_broker_start=FALSE scope=both sid='*';
+alter system set dg_broker_config_file1='+DATA/GOTALPRD/dr1GOTALPRD.dat' scope=both sid='*';
+alter system set dg_broker_config_file2='+DATA/GOTALPRD/dr2GOTALPRD.dat' scope=both sid='*';
+alter system set dg_broker_start=TRUE scope=both sid='*';
+
+-- on secondary database
+alter system set dg_broker_start=FALSE scope=both sid='*';
+alter system set dg_broker_config_file1='+DATA/GOTALDRP/dr1GOTALDRP.dat'
scope=both sid='*';
+alter system set dg_broker_config_file2='+DATA/GOTALDRP/dr2GOTALDRP.dat' scope=both sid='*';
+alter system set dg_broker_start=TRUE scope=both sid='*';
+
+
+-- creation of STANDBY REDO LOGS on both databases
+
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+
+
+select GROUP#,THREAD#,STATUS, BYTES from v$standby_log;
+
+col MEMBER for a60
+select * from v$logfile;
+
+
+-- create DGMGRL configuration
+dgmgrl
+DGMGRL> connect sys/secret@GOTALPRD_DG
+DGMGRL> create configuration GOTAL as
+  primary database is GOTALPRD
+  connect identifier is GOTALPRD_DG;
+DGMGRL> add database GOTALDRP
+  as connect identifier is GOTALDRP_DG
+  maintained as physical;
+
+DGMGRL> edit database 'gotaldrp' set property ArchiveLagTarget=0;
+DGMGRL> edit database 'gotaldrp' set property LogArchiveMaxProcesses=2;
+DGMGRL> edit database 'gotaldrp' set property LogArchiveMinSucceedDest=1;
+DGMGRL> edit database 'gotaldrp' set property StandbyFileManagement='AUTO';
+
+DGMGRL> edit database 'gotalprd' set property ArchiveLagTarget=0;
+DGMGRL> edit database 'gotalprd' set property LogArchiveMaxProcesses=2;
+DGMGRL> edit database 'gotalprd' set property LogArchiveMinSucceedDest=1;
+DGMGRL> edit database 'gotalprd' set property StandbyFileManagement='AUTO';
+
+DGMGRL> enable configuration;
+DGMGRL> show configuration;
+
+-- VERY IMPORTANT
+-- set StaticConnectIdentifier for all PRIMARY/DATAGUARD database instances
+-- use the complete DESCRIPTION syntax to uniquely identify the instances of each node
+
+EDIT INSTANCE 'GOTALPRD1' SET PROPERTY 
'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vortex-db01-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=GOTALPRD_DGMGRL)(INSTANCE_NAME=GOTALPRD1)(SERVER=DEDICATED)))';
+EDIT INSTANCE 'GOTALPRD2' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vortex-db02-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=GOTALPRD_DGMGRL)(INSTANCE_NAME=GOTALPRD2)(SERVER=DEDICATED)))';
+EDIT INSTANCE 'GOTALDRP1' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=kessel-db01-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=GOTALDRP_DGMGRL)(INSTANCE_NAME=GOTALDRP1)(SERVER=DEDICATED)))';
+EDIT INSTANCE 'GOTALDRP2' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=kessel-db02-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=GOTALDRP_DGMGRL)(INSTANCE_NAME=GOTALDRP2)(SERVER=DEDICATED)))';
+
+-- move the spfile of the secondary database to ASM
+create pfile='/tmp/pfile.txt' from spfile='/app/oracle/product/11.2/db_1/dbs/spfileGOTALDRP1.ora';
+create spfile ='+DATA/gotaldrp/spfileGOTALDRP.ora' from pfile='/tmp/pfile.txt';
+
+-- on secondary servers (kessel-db01 and kessel-db02)
+init.ora:
+spfile ='+DATA/gotaldrp/spfileGOTALDRP.ora'
+
+-- register standby database in the CRS
+srvctl add database -d GOTALDRP -o /app/oracle/product/11.2/db_1 -c RAC -p '+DATA/gotaldrp/spfileGOTALDRP.ora' -r physical_standby -n GOTAL
+
+-- pay attention to -s ; the default value is OPEN, which means that your DATAGUARD will be OPENED (active DATAGUARD)
+
+srvctl add instance -d GOTALDRP -i GOTALDRP1 -n kessel-db01
+srvctl add instance -d GOTALDRP -i GOTALDRP2 -n kessel-db02
+
+srvctl start database -d GOTALDRP
+srvctl status database -d GOTALDRP -v
+
+
+
+== SWITCHOVER/SWITCHBACK
+========================
+
+~~ Switchover
+~~~~~~~~~~~~~
+DGMGRL> switchover to 'gotaldrp'
+
+~~ Switchback
+~~~~~~~~~~~~~
+DGMGRL> switchover to 'gotalprd'
+
+
+== Other operations
+===================
+
+-- STOP/START 
Media Recovery Process (MRP) on the STANDBY
+DGMGRL> edit database 'gotalprd' set STATE='LOG-APPLY-OFF';
+DGMGRL> edit database 'gotalprd' set STATE='ONLINE';
+
+== DATABASE SERVICES considerations
+==================================
+
+~~ keep in mind that in a RAC environment, database services are declared in the CRS and stored both in the CRS and in the database
+~~ as the CRS is different on the PRIMARY / SECONDARY clusters, we should declare every service twice: on the PRIMARY CRS and on the SECONDARY CRS
+~~ to differentiate the target status of a service according to the database role
+~~ the services should be created with the -l option
+
+~~ in the next example, we will create a GOTAL_WEB_APPLICATION service for the primary database and a GOTAL_ADHOC_REPORTING on the read-only standby
+
+~~ on vortex-db01 (part of primary cluster)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+srvctl add service -d GOTALPRD -s GOTAL_WEB_APPLICATION -r "GOTALPRD1,GOTALPRD2" -P BASIC -l primary
+srvctl start service -d GOTALPRD -s GOTAL_WEB_APPLICATION
+
+srvctl add service -d GOTALPRD -s GOTAL_ADHOC_REPORTING -r "GOTALPRD1,GOTALPRD2" -P BASIC -l physical_standby
+
+~~ the service will be created in the database when the service first starts
+~~ for propagation to the standby, force an archive of the current logfile
+
+srvctl start service -d GOTALPRD -s GOTAL_ADHOC_REPORTING
+srvctl stop service -d GOTALPRD -s GOTAL_ADHOC_REPORTING
+
+SQL> alter system archive log current;
+
+
+~~ on kessel-db01 (part of secondary cluster)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+srvctl add service -d GOTALDRP -s GOTAL_ADHOC_REPORTING -r "GOTALDRP1,GOTALDRP2" -P BASIC -l physical_standby
+srvctl start service -d GOTALDRP -s GOTAL_ADHOC_REPORTING
+
+srvctl add service -d GOTALDRP -s GOTAL_WEB_APPLICATION -r "GOTALDRP1,GOTALDRP2" -P BASIC -l primary
+
+~~ on CLIENT side
+~~~~~~~~~~~~~~~~~
+~~ aliases in tnsnames.ora for transparent switchover/failover
+
+GOTAL_WEB_APPLICATION =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    
(ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-scan)(PORT = 1521))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-scan)(PORT = 1521))
+    )
+    (CONNECT_DATA =
+      (SERVICE_NAME = GOTAL_WEB_APPLICATION)
+    )
+)
+
+GOTAL_ADHOC_REPORTING =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    (ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-scan)(PORT = 1521))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-scan)(PORT = 1521))
+    )
+    (CONNECT_DATA =
+      (SERVICE_NAME = GOTAL_ADHOC_REPORTING)
+    )
+)
+
+
+
+
+
+
+
+
+
diff --git a/tiddlywiki/12.1 dataguard RAC CDB example.txt b/tiddlywiki/12.1 dataguard RAC CDB example.txt
new file mode 100755
index 0000000..d77e38c
--- /dev/null
+++ b/tiddlywiki/12.1 dataguard RAC CDB example.txt
@@ -0,0 +1,282 @@
+~~ creation of CDB database
+
+$ORACLE_HOME/bin/dbca \
+-silent \
+-createDatabase \
+-templateName General_Purpose.dbc \
+-gdbName EWOK \
+-sid EWOKPRD \
+-initParams db_unique_name=EWOKPRD \
+-characterSet AL32UTF8 \
+-sysPassword secret \
+-systemPassword secret \
+-emConfiguration NONE \
+-createAsContainerDatabase TRUE \
+-storageType ASM \
+-diskGroupName DATA \
+-redoLogFileSize 100 \
+-sampleSchema FALSE \
+-totalMemory 2048 \
+-databaseConfType RAC \
+-nodelist vortex-db01,vortex-db02
+
+
+~~ identify the spfile and passwordfile ASM locations and create more readable aliases
+srvctl config database -d EWOKPRD
+
+ASMCMD [+] > cd +DATA/EWOKPRD/
+ASMCMD [+DATA/EWOKPRD] > mkalias +DATA/EWOKPRD/PARAMETERFILE/spfile.333.957718565 spfileewokprd.ora
+ASMCMD [+DATA/EWOKPRD] > mkalias +DATA/EWOKPRD/PASSWORD/pwdewokprd.308.957717627 orapwewokprd
+
+~~ update the spfile and passwordfile locations in the CRS
+srvctl modify database -db EWOKPRD -spfile +DATA/EWOKPRD/spfileewokprd.ora
+srvctl modify database -db EWOKPRD -pwfile +DATA/EWOKPRD/orapwewokprd
+srvctl stop database -d EWOKPRD
+srvctl start database -d EWOKPRD
+srvctl status database -d EWOKPRD -v
+
+
+~~ enable ARCHIVELOG mode and FORCE LOGGING on the PRIMARY database
+
+alter system set 
db_recovery_file_dest_size = 4G scope=both sid='*';
+alter system set db_recovery_file_dest = '+RECO' scope=both sid='*';
+alter system set log_archive_dest_1 = 'location=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*';
+
+srvctl stop database -d EWOKPRD
+
+startup mount exclusive
+alter database archivelog;
+alter database open;
+alter database force logging;
+
+srvctl stop database -d EWOKPRD
+srvctl start database -d EWOKPRD
+
+alter system archive log current;
+
+~~ copy the pfile and passwordfile from the primary cluster to the first node of the standby cluster
+
+SQL> create pfile='/tmp/pfile_EWOK.ora' from spfile;
+asmcmd cp +DATA/EWOKPRD/orapwewokprd /tmp
+cd /tmp
+scp orapwewokprd pfile_EWOK.ora kessel-db01:/tmp
+
+~~ make adjustments in the pfile and put everything in $ORACLE_HOME/dbs
+
+SQL> create spfile from pfile='/tmp/standby.ora';
+cp orapwewokprd $ORACLE_HOME/dbs/orapwEWOKDRP1
+
+SQL> startup nomount
+
+~~ NETWORK configuration - listeners
+~~ in my configuration I have a dedicated listener for DATAGUARD; the following definitions have been added on the primary cluster:
+
+# For DATAGUARD...
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = EWOKPRD_DGMGRL)
+      (SID_NAME = EWOKPRD1)
+      (ORACLE_HOME = /app/oracle/product/12.1/db_1)
+    )
+  )
+
+# ...For DATAGUARD
+
+~~ and on the standby cluster:
+
+# For DATAGUARD... 
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = EWOKDRP_DGMGRL)
+      (SID_NAME = EWOKDRP1)
+      (ORACLE_HOME = /app/oracle/product/12.1/db_1)
+    )
+  )
+# ...For DATAGUARD
+
+
+~~ cross connection tests; we should be able to connect to idle instances too
+sqlplus /nolog
+connect sys/secret@vortex-db01-dba-vip:1541/EWOKPRD_DGMGRL as sysdba
+connect sys/secret@vortex-db02-dba-vip:1541/EWOKPRD_DGMGRL as sysdba
+connect sys/secret@kessel-db01-dba-vip:1541/EWOKDRP_DGMGRL as sysdba
+(for the moment the standby pfile/passwordfile are not deployed on the second node of the standby cluster)
+
+~~ aliases to add in tnsnames.ora on all primary/standby database nodes
+# For DATAGUARD...
+EWOKPRD_DG =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    (ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-db01-dba-vip)(PORT = 1541))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-db02-dba-vip)(PORT = 1541))
+    )
+    (CONNECT_DATA =
+      (SERVER = DEDICATED)
+      (SERVICE_NAME = EWOKPRD_DGMGRL)
+    )
+)
+
+EWOKDRP_DG =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    (ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-db01-dba-vip)(PORT = 1541))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-db02-dba-vip)(PORT = 1541))
+    )
+    (CONNECT_DATA =
+      (SERVER = DEDICATED)
+      (SERVICE_NAME = EWOKDRP_DGMGRL)
+    )
+)
+# ...For DATAGUARD
+
+
+~~ cross connection test using TNS aliases; we should be able to connect to idle instances
+
+sqlplus /nolog
+connect sys/secret@EWOKPRD_DG as sysdba
+connect sys/secret@EWOKDRP_DG as sysdba
+
+
+~~ from the spfile of the primary DB we create an spfile for the secondary DB and we start the secondary DB in nomount
+rman target sys/secret@EWOKPRD_DG auxiliary sys/secret@EWOKDRP_DG
+run {
+  allocate channel pri1 device type DISK;
+  allocate channel pri2 device type DISK;
+  allocate auxiliary channel aux1 device type DISK;
+  allocate auxiliary channel aux2 device type DISK;
+  duplicate target database
+    for standby
+    from active database
+    nofilenamecheck
+    using compressed 
backupset section size 1G;
+}
+
+
+~~ Dataguard Broker configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+~~ on primary database
+alter system set dg_broker_start=FALSE scope=both sid='*';
+alter system set dg_broker_config_file1='+DATA/EWOKPRD/dr1EWOKPRD.dat' scope=both sid='*';
+alter system set dg_broker_config_file2='+DATA/EWOKPRD/dr2EWOKPRD.dat' scope=both sid='*';
+alter system set dg_broker_start=TRUE scope=both sid='*';
+
+~~ on secondary database
+alter system set dg_broker_start=FALSE scope=both sid='*';
+alter system set dg_broker_config_file1='+DATA/EWOKDRP/dr1EWOKDRP.dat' scope=both sid='*';
+alter system set dg_broker_config_file2='+DATA/EWOKDRP/dr2EWOKDRP.dat' scope=both sid='*';
+alter system set dg_broker_start=TRUE scope=both sid='*';
+
+~~ creation of STANDBY REDO LOGS on both databases
+
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+
+
+select GROUP#,THREAD#,STATUS, BYTES from v$standby_log;
+
+col MEMBER for a60
+select * from v$logfile;
+
+
+~~ create DGMGRL configuration
+dgmgrl
+DGMGRL> connect sys/secret@EWOKPRD_DG
+DGMGRL> create configuration EWOK as
+  primary database is EWOKPRD
+  connect identifier is EWOKPRD_DG;
+DGMGRL> add database EWOKDRP
+  as connect identifier is EWOKDRP_DG
+  maintained as physical;
+
+DGMGRL> edit database 'ewokdrp' set property ArchiveLagTarget=0;
+DGMGRL> edit database 'ewokdrp' set property LogArchiveMaxProcesses=2;
+DGMGRL> edit database 'ewokdrp' set property LogArchiveMinSucceedDest=1;
+DGMGRL> edit database 'ewokdrp' set property StandbyFileManagement='AUTO';
+
+DGMGRL> edit database 'ewokprd' set property ArchiveLagTarget=0;
+DGMGRL> edit database 'ewokprd' set property LogArchiveMaxProcesses=2;
+DGMGRL> edit database 
'ewokprd' set property LogArchiveMinSucceedDest=1;
+DGMGRL> edit database 'ewokprd' set property StandbyFileManagement='AUTO';
+
+DGMGRL> enable configuration;
+DGMGRL> show configuration;
+
+~~ VERY IMPORTANT
+~~ set StaticConnectIdentifier for all PRIMARY/DATAGUARD database instances
+~~ use the complete DESCRIPTION syntax to uniquely identify the instances of each node
+
+EDIT INSTANCE 'EWOKPRD1' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vortex-db01-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKPRD_DGMGRL)(INSTANCE_NAME=EWOKPRD1)(SERVER=DEDICATED)))';
+EDIT INSTANCE 'EWOKPRD2' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vortex-db02-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKPRD_DGMGRL)(INSTANCE_NAME=EWOKPRD2)(SERVER=DEDICATED)))';
+EDIT INSTANCE 'EWOKDRP1' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=kessel-db01-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKDRP_DGMGRL)(INSTANCE_NAME=EWOKDRP1)(SERVER=DEDICATED)))';
+EDIT INSTANCE 'EWOKDRP2' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=kessel-db02-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKDRP_DGMGRL)(INSTANCE_NAME=EWOKDRP2)(SERVER=DEDICATED)))';
+
+
+~~ move the spfile from the file system to ASM
+create pfile='/tmp/pfile_EWOKDRP.ora' from spfile;
+create spfile ='+DATA/ewokdrp/spfileEWOKDRP.ora' from pfile='/tmp/pfile_EWOKDRP.ora';
+
+~~ register standby database in the CRS
+srvctl add database -d EWOKDRP -o /app/oracle/product/12.1/db_1 -c RAC -p '+DATA/EWOKDRP/spfileEWOKDRP.ora' -r physical_standby -n EWOK
+
+~~ pay attention to -s ; the default value is OPEN, which means that your DATAGUARD will be OPENED (active DATAGUARD)
+srvctl add instance -d EWOKDRP -i EWOKDRP1 -n kessel-db01
+srvctl add instance -d EWOKDRP -i EWOKDRP2 -n kessel-db02
+
+srvctl start database -d EWOKDRP -o mount
+srvctl status database -d EWOKDRP -v
+
+~~ finally, move 
passwordfile to ASM using pwcopy under asmcmd
+~~ note that if the passwordfile was created under the DB_UNKNOWN ASM directory, using --dbuniquename with pwcopy could be necessary
+asmcmd pwcopy +DATA/EWOKPRD/orapwewokprd /tmp/orapwewokprd
+scp /tmp/orapwewokprd kessel-db01:/tmp/orapwewokprd
+asmcmd pwcopy /tmp/orapwewokprd +DATA/EWOKDRP/orapwewokdrp
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tiddlywiki/ASH - examples.txt b/tiddlywiki/ASH - examples.txt
new file mode 100755
index 0000000..f93944e
--- /dev/null
+++ b/tiddlywiki/ASH - examples.txt
@@ -0,0 +1,43 @@
+col SAMPLE_TIME for a21
+col Mb for 999 999 999
+
+select
+  SAMPLE_TIME
+  ,SQL_ID
+  ,SESSION_ID
+  ,PGA_ALLOCATED/1024/1024 Mb
+  ,TEMP_SPACE_ALLOCATED/1024/1024 Mb
+from
+  DBA_HIST_ACTIVE_SESS_HISTORY
+where
+  SAMPLE_TIME between to_date('2020-05-16 11:00','YYYY-MM-DD HH24:MI') and to_date('2020-05-16 12:00','YYYY-MM-DD HH24:MI')
+order by
+  SAMPLE_TIME asc
+/
+
+
+
+select
+  max(PGA_ALLOCATED/1024/1024) Mb
+  , max(TEMP_SPACE_ALLOCATED/1024/1024) Mb
+from
+  DBA_HIST_ACTIVE_SESS_HISTORY
+where
+  SAMPLE_TIME between sysdate-14 and sysdate
+/
+
+
+select
+  SAMPLE_TIME
+  ,SQL_ID
+  ,SESSION_ID
+  ,PGA_ALLOCATED/1024/1024 Mb
+  ,TEMP_SPACE_ALLOCATED/1024/1024 Mb
+from
+  DBA_HIST_ACTIVE_SESS_HISTORY
+where
+  SAMPLE_TIME between sysdate-14 and sysdate
+  and PGA_ALLOCATED is not null
+order by
+  4 asc
+/
diff --git a/tiddlywiki/ASH_waits_by _wait_class_for_last_2_hours.sql.txt b/tiddlywiki/ASH_waits_by _wait_class_for_last_2_hours.sql.txt
new file mode 100755
index 0000000..6e12292
--- /dev/null
+++ b/tiddlywiki/ASH_waits_by _wait_class_for_last_2_hours.sql.txt
@@ -0,0 +1,44 @@
+set lines 288
+col sample_time for a14
+col CONFIGURATION head "CONFIG" for 99.99
+col ADMINISTRATIVE head "ADMIN" for 99.99
+col OTHER for 99.99
+
+SELECT TO_CHAR(SAMPLE_TIME, 'HH24:MI ') AS SAMPLE_TIME,
+       ROUND(OTHER / 60, 3) AS OTHER,
+       ROUND(CLUST / 60, 3) AS CLUST,
+       ROUND(QUEUEING / 60, 3) AS QUEUEING,
+       
ROUND(NETWORK / 60, 3) AS NETWORK, + ROUND(ADMINISTRATIVE / 60, 3) AS ADMINISTRATIVE, + ROUND(CONFIGURATION / 60, 3) AS CONFIGURATION, + ROUND(COMMIT / 60, 3) AS COMMIT, + ROUND(APPLICATION / 60, 3) AS APPLICATION, + ROUND(CONCURRENCY / 60, 3) AS CONCURRENCY, + ROUND(SIO / 60, 3) AS SYSTEM_IO, + ROUND(UIO / 60, 3) AS USER_IO, + ROUND(SCHEDULER / 60, 3) AS SCHEDULER, + ROUND(CPU / 60, 3) AS CPU, + ROUND(BCPU / 60, 3) AS BACKGROUND_CPU + FROM (SELECT TRUNC(SAMPLE_TIME, 'MI') AS SAMPLE_TIME, + DECODE(SESSION_STATE, + 'ON CPU', + DECODE(SESSION_TYPE, 'BACKGROUND', 'BCPU', 'ON CPU'), + WAIT_CLASS) AS WAIT_CLASS + FROM V$ACTIVE_SESSION_HISTORY + WHERE SAMPLE_TIME > SYSDATE - INTERVAL '2' + HOUR + AND SAMPLE_TIME <= TRUNC(SYSDATE, 'MI')) ASH PIVOT(COUNT(*) + FOR WAIT_CLASS IN('ON CPU' AS CPU,'BCPU' AS BCPU, +'Scheduler' AS SCHEDULER, +'User I/O' AS UIO, +'System I/O' AS SIO, +'Concurrency' AS CONCURRENCY, +'Application' AS APPLICATION, +'Commit' AS COMMIT, +'Configuration' AS CONFIGURATION, +'Administrative' AS ADMINISTRATIVE, +'Network' AS NETWORK, +'Queueing' AS QUEUEING, +'Cluster' AS CLUST, +'Other' AS OTHER)) +/ \ No newline at end of file diff --git a/tiddlywiki/AWR - extract a statistic history.txt b/tiddlywiki/AWR - extract a statistic history.txt new file mode 100755 index 0000000..fb9f26a --- /dev/null +++ b/tiddlywiki/AWR - extract a statistic history.txt @@ -0,0 +1,28 @@ +col STAT_NAME for a20 +col VALUE_DIFF for 9999999999 +col STAT_PER_MIN for 9999,999,999 +set lines 200 pages 1500 long 99999999 +col BEGIN_INTERVAL_TIME for a30 +col END_INTERVAL_TIME for a30 +set pagesize 40 +set pause on + + +select hsys.SNAP_ID, + hsnap.BEGIN_INTERVAL_TIME, + hsnap.END_INTERVAL_TIME, + hsys.STAT_NAME, + hsys.VALUE, + hsys.VALUE - LAG(hsys.VALUE,1,0) OVER (ORDER BY hsys.SNAP_ID) AS "VALUE_DIFF", + round((hsys.VALUE - LAG(hsys.VALUE,1,0) OVER (ORDER BY hsys.SNAP_ID)) / + round(abs(extract(hour from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME))*60 + + extract(minute 
from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME)) +
+              extract(second from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME))/60),1)) "STAT_PER_MIN"
+from dba_hist_sysstat hsys, dba_hist_snapshot hsnap where
+  hsnap.BEGIN_INTERVAL_TIME between to_date('30-11-2019','DD-MM-YYYY') and to_date('01-12-2019','DD-MM-YYYY')
+  and hsys.snap_id = hsnap.snap_id
+  and hsnap.instance_number in (select instance_number from v$instance)
+  and hsnap.instance_number = hsys.instance_number
+  and hsys.STAT_NAME='logons current'
+  order by 1;
+
\ No newline at end of file
diff --git a/tiddlywiki/Anglais - draft.txt b/tiddlywiki/Anglais - draft.txt
new file mode 100755
index 0000000..8bcd2c6
--- /dev/null
+++ b/tiddlywiki/Anglais - draft.txt
@@ -0,0 +1,19 @@
+I had been in town less than a half hour.
+
+scar = cicatrice
+scarce = rare
+
+pick over something = to talk about something in detail
+embroider = broder
+quite = assez // I'm quite sure
+
+these = ceux-ci/celles-ci
+their = leur
+neat and tidy = propre et net
+fit lean = maigre
+elbow = coude
+textbook move = comme dans un manuel, classique
+it could have ended badly = OK
+
+somehow = en quelque sorte
+drawback = inconvénient
diff --git a/tiddlywiki/Apache HTTPD.tid b/tiddlywiki/Apache HTTPD.tid
new file mode 100755
index 0000000..d794926
--- /dev/null
+++ b/tiddlywiki/Apache HTTPD.tid
@@ -0,0 +1,8 @@
+created: 20190622102026604
+creator: vplesnila
+modified: 20190622102030669
+modifier: vplesnila
+tags: Contents
+title: Apache HTTPD
+type: text/vnd.tiddlywiki
+
diff --git a/tiddlywiki/Apache httpd - divers.txt b/tiddlywiki/Apache httpd - divers.txt
new file mode 100755
index 0000000..322a8b3
--- /dev/null
+++ b/tiddlywiki/Apache httpd - divers.txt
@@ -0,0 +1,8 @@
+-- Reverse Proxy
+ProxyPass "/app/" "http://server1551:9027/"
+ProxyPassReverse "/app/" "http://server1551:9027/"
+
+-- Replace content using mod_substitute module
+AddOutputFilterByType SUBSTITUTE text/html
+Substitute 
"s|/AdminLTE/|/dbservices-dev/AdminLTE/|"
+
diff --git a/tiddlywiki/Apache httpd starting with systemd.tid b/tiddlywiki/Apache httpd starting with systemd.tid
new file mode 100755
index 0000000..e11e321
--- /dev/null
+++ b/tiddlywiki/Apache httpd starting with systemd.tid
@@ -0,0 +1,42 @@
+created: 20190617091940764
+creator: vplesnila
+modified: 20200122095305456
+modifier: vplesnila
+tags: Linux [[Apache HTTPD]]
+title: Apache httpd starting with systemd
+type: text/vnd.tiddlywiki
+
+Create the `httpd.service` unit file in `/usr/lib/systemd/system`
+
+
+```
+[Unit]
+Description=Apache Web Server
+After=network.target
+
+[Service]
+Type=forking
+PIDFile=/app/apache_httpd/2.4.39/logs/httpd.pid
+ExecStart=/app/apache_httpd/2.4.39/bin/apachectl start
+ExecStop=/app/apache_httpd/2.4.39/bin/apachectl graceful-stop
+ExecReload=/app/apache_httpd/2.4.39/bin/apachectl graceful
+PrivateTmp=true
+LimitNOFILE=infinity
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+
+```
+systemctl daemon-reload
+systemctl enable httpd
+systemctl stop httpd
+systemctl start httpd
+systemctl status httpd
+```
+
+//Note//: in version 2.4.41, PIDFile in the unit definition does not work.
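+
+//Note 2//: a possible workaround (a sketch only, not tested in these notes - the 2.4.41 path is assumed to follow the layout above): run httpd in the foreground under `Type=simple`, so systemd tracks the main PID itself and no `PIDFile` is needed; `USR1` is httpd's graceful-restart signal:
+
+```
+[Service]
+Type=simple
+ExecStart=/app/apache_httpd/2.4.41/bin/httpd -DFOREGROUND
+ExecReload=/bin/kill -USR1 $MAINPID
+```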
+
diff --git a/tiddlywiki/Bookmarks.md b/tiddlywiki/Bookmarks.md
new file mode 100755
index 0000000..37d1cc0
--- /dev/null
+++ b/tiddlywiki/Bookmarks.md
@@ -0,0 +1,5 @@
+- [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
+- [Transfer a Windows 10 license to a new PC](https://www.windowscentral.com/how-transfer-windows-10-license-new-computer-or-hard-drive)
+- [Comment l'optimiseur d'Oracle calcule le coût](https://marius-nitu.developpez.com/tutoriels/oracle/optimiseur/comment-optimiseur-oracle-calcule-cout/)
+- [How to set up "RedoRoutes" in a Data Guard Broker configuration](https://minimalistic-oracle.blogspot.com/2021/04/how-to-set-up-redoroutes-in-data-guard.html)
+
diff --git a/tiddlywiki/Brico.md b/tiddlywiki/Brico.md
new file mode 100755
index 0000000..5eb18b0
--- /dev/null
+++ b/tiddlywiki/Brico.md
@@ -0,0 +1,35 @@
+https://demolition-debarras.com/
+
+# Scripts
+
+### make-fedora-rpm.py
+
+Fedora-specific script that ties it all together. Run it like:
+
+    ./make-fedora-rpm.py
+
+What it does roughly:
+
+* Extracts all the .zip files in $scriptdir/new-builds/ to a temporary directory. The .zip files should contain all the build input for `make-driver-dir.py`. I prepopulate this with `fetch-latest-builds.py` but other people can use the build input mirror mentioned above.
+* Runs `make-driver-dir.py` on the unzipped output
+* Runs `make-virtio-win-rpm-archive.py` on the make-driver-dir.py output
+* Updates the virtio-win.spec
+* Runs `./make-repo.py`
+
+
+### make-installer.py
+
+This uses a [virtio-win-guest-tools-installer.git](https://github.com/virtio-win/virtio-win-guest-tools-installer) git submodule to build .msi installers
+for all the drivers. 
Invoking this successfully requires quite a few RPMs installed on the host + +* `wix-toolset-binaries`, example: https://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc32/noarch/wix-toolset-binaries-3.11.1-2.fc32.noarch.rpm +* `ovirt-guest-agent-windows`, example: https://resources.ovirt.org/pub/ovirt-4.3-snapshot/rpm/fc30/noarch/ovirt-guest-agent-windows-1.0.16-1.20191009081759.git1048b68.fc30.noarch.rpm +* `wine` from distro repos + diff --git a/tiddlywiki/CRS resources check examples.txt b/tiddlywiki/CRS resources check examples.txt new file mode 100755 index 0000000..acc146b --- /dev/null +++ b/tiddlywiki/CRS resources check examples.txt @@ -0,0 +1,8 @@ +# not started databases +crsctl status resource -w "((TYPE = ora.database.type) AND (LAST_SERVER = $(hostname -s)) AND (STATE != ONLINE))" + +# not started services +crsctl status resource -w "((TYPE = ora.service.type) AND (LAST_SERVER = $(hostname -s)) AND (STATE != ONLINE))" + +# list in tabular mode services not started but having a target=ONLINE +crsctl status resource -t -w "((TYPE = ora.service.type) AND (LAST_SERVER = $(hostname -s)) AND (TARGET = ONLINE) AND (STATE != ONLINE))" diff --git a/tiddlywiki/Captures.txt b/tiddlywiki/Captures.txt new file mode 100755 index 0000000..b48281f --- /dev/null +++ b/tiddlywiki/Captures.txt @@ -0,0 +1,24 @@ +-- LIST CAPTURES + +set lines 180 +col CAPTURE_NAME for a50 + +select CAPTURE_NAME,STATUS from dba_capture; + +-- STOP CAPTURE + +BEGIN +DBMS_CAPTURE_ADM.STOP_CAPTURE( + capture_name => 'OGG$CAP_OEDCLJJ', + force => true); +END; +/ + +-- DROP CAPTURE + +BEGIN + DBMS_CAPTURE_ADM.DROP_CAPTURE( + capture_name => 'OGG$CAP_OEDINJJ', + drop_unused_rule_sets => true); +END; +/ diff --git a/tiddlywiki/Change hostname in Linux.txt b/tiddlywiki/Change hostname in Linux.txt new file mode 100755 index 0000000..79cb497 --- /dev/null +++ b/tiddlywiki/Change hostname in Linux.txt @@ -0,0 +1 @@ +hostnamectl set-hostname host.example.com \ No newline at end of file diff 
--git a/tiddlywiki/Changing the Oracle Grid Infrastructure Home Path.tid b/tiddlywiki/Changing the Oracle Grid Infrastructure Home Path.tid
new file mode 100755
index 0000000..0edb3a6
--- /dev/null
+++ b/tiddlywiki/Changing the Oracle Grid Infrastructure Home Path.tid
@@ -0,0 +1,39 @@
+created: 20200218153929609
+creator: vplesnila
+modified: 20200218161821409
+modifier: vplesnila
+tags: Oracle
+title: Changing the Oracle Grid Infrastructure Home Path
+type: text/plain
+
+~~ Moving ORACLE_HOME
+~~ from /app/grid/product/19.3
+~~ to /app/grid/product/19
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+~~ as root stop the CRS
+crsctl stop crs
+
+~~ as grid, detach the grid ORACLE_HOME
+/app/grid/product/19.3/oui/bin/runInstaller -silent -waitforcompletion\
+-detachHome ORACLE_HOME='/app/grid/product/19.3' -local
+
+~~ as root, move the Grid binaries from the old Grid home location to the new Grid home location
+cp -Rp /app/grid/product/19.3 /app/grid/product/19
+
+~~ as root unlock the destination Grid home
+/app/grid/product/19/crs/install/rootcrs.sh -unlock -dstcrshome /app/grid/product/19
+
+~~ as grid relink the Grid binaries
+~~ set up your environment variables according to the new ORACLE_HOME
+/app/grid/product/19/bin/relink
+
+~~ as root lock the destination Grid home
+/app/grid/product/19/crs/install/rootcrs.sh -lock
+
+~~ as root move the Grid home to the new destination and start the CRS
+/app/grid/product/19/crs/install/rootcrs.sh -move -dstcrshome /app/grid/product/19
+
+~~ as grid, attach the new home in the Oracle Inventory
+/app/grid/product/19/oui/bin/runInstaller -attachhome ORACLE_HOME=/app/grid/product/19 ORACLE_HOME_NAME="OraGI19Home1"
+/app/grid/product/19/OPatch/opatch lsinventory
diff --git a/tiddlywiki/Citrix - ALT+TAB remote.tid b/tiddlywiki/Citrix - ALT+TAB remote.tid
new file mode 100755
index 0000000..9e2608f
--- /dev/null
+++ b/tiddlywiki/Citrix - ALT+TAB remote.tid
@@ -0,0 +1,17 @@
+created: 20191026073454809
+creator: vplesnila
+modified: 20191026073843159
+modifier: vplesnila +tags: Divers +title: Citrix - ALT+TAB remote +type: text/plain + +-- source: https://www.lewan.com/blog/2013/06/14/enable-alttab-application-toggling-in-a-citrix-xenapp-desktop-session + +- Open regedit on the client device to edit the registry +- Navigate to the key: + HKEY_LOCAL_MACHINE \SOFTWARE\Citrix\ICAClient\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Keyboard\ +- Open Key: TransparentKeyPassthrough +- Set the value to: Remote +- Exit the Citrix receiver if it is started and log back into your Citrix desktop. +- When the Citrix desktop session is the Active window, you will be able to toggle between the applications in that session with Alt+Tab. diff --git a/tiddlywiki/Contents.tid b/tiddlywiki/Contents.tid new file mode 100755 index 0000000..b3d19e1 --- /dev/null +++ b/tiddlywiki/Contents.tid @@ -0,0 +1,8 @@ +created: 20190616214114844 +creator: vplesnila +modified: 20190618155452589 +modifier: vplesnila +title: Contents +type: text/vnd.tiddlywiki + +<$list filter={{$:/core/Filters/AllTiddlers!!filter}} template="$:/core/ui/ListItemTemplate"/> \ No newline at end of file diff --git a/tiddlywiki/Create RAC CDB database manually.txt b/tiddlywiki/Create RAC CDB database manually.txt new file mode 100755 index 0000000..b333fa9 --- /dev/null +++ b/tiddlywiki/Create RAC CDB database manually.txt @@ -0,0 +1,132 @@ +~~ Context: DBNAME=HUTT, db_unique_name=HUTTPRD, instances HUTT1/HUTT2 +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +~~ NOTE: the procedure is identical to creating a non CDB database +~~ the ONLY difference is enable_pluggable_database=true parameter in init.ora + +mkdir -p /app/base/admin/HUTTPRD +cd /app/base/admin/HUTTPRD +mkdir scripts divers adump init diag + +~~~~~~~~~~~~ +initHUTT.ora +~~~~~~~~~~~~ +*.enable_pluggable_database=true +*.cluster_database=false +*.db_name=HUTT +*.db_unique_name=HUTTPRD +*.compatible=19.0.0 
+*.control_files=(+DATA/HUTTPRD/control01.ctl,+DATA/HUTTPRD/control02.ctl) +*.db_create_file_dest=+DATA +*.db_create_online_log_dest_1=+DATA +*.db_recovery_file_dest_size=4G +*.db_recovery_file_dest=+RECO +*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' +*.log_archive_format=%t_%s_%r.arc +*.db_block_size=8192 +*.open_cursors=300 +*.diagnostic_dest=/app/base/admin/HUTTPRD +*.sga_max_size=3G +*.sga_target=3G +*.pga_aggregate_target=512M +*.pga_aggregate_limit=2G +*.processes=350 +*.audit_file_dest=/app/base/admin/HUTTPRD/adump +*.audit_trail=db +*.remote_login_passwordfile=exclusive +HUTT1.instance_number=1 +HUTT2.instance_number=2 +HUTT1.thread=1 +HUTT2.thread=2 +HUTT1.undo_tablespace=UNDOTBS1 +HUTT2.undo_tablespace=UNDOTBS2 + + +~~~~ + +startup nomount pfile='/mnt/yavin4/tmp/_oracle_/ad-hoc/initHUTT.ora'; + +create database HUTT +datafile size 700M autoextend on next 64M +extent management local +SYSAUX datafile size 512M autoextend on next 64M +default temporary tablespace TEMP tempfile size 256M autoextend off +undo tablespace UNDOTBS1 datafile size 256M autoextend off +character set AL32UTF8 +national character set AL16UTF16 +logfile group 1 size 64M, + group 2 size 64M +user SYS identified by secret user SYSTEM identified by secret; + +create undo tablespace UNDOTBS2 datafile size 256M autoextend off; +create tablespace USERS datafile size 32M autoextend ON next 32M; +alter database default tablespace USERS; + + +alter database add logfile thread 2 + group 3 size 64M, + group 4 size 64M; + +alter database enable public thread 2; + +~~ create dictionary objects on CDB$ROOT +@?/rdbms/admin/catalog.sql +@?/rdbms/admin/catproc.sql +@?/rdbms/admin/catclust.sql +@?/rdbms/admin/utlrp.sql + +~~ open PDB$SEED in read/write mode and create dictionary objects on PDB$SEED +alter session set "_oracle_script"=true; +alter pluggable database PDB$SEED close immediate; +alter pluggable database PDB$SEED open; +alter session set "_oracle_script"=false; +alter session 
set container=PDB$SEED; +@?/rdbms/admin/catalog.sql +@?/rdbms/admin/catproc.sql +@?/rdbms/admin/catclust.sql +@?/rdbms/admin/utlrp.sql +alter session set "_oracle_script"=true; +alter pluggable database PDB$SEED close immediate; +alter pluggable database PDB$SEED open read only; +alter session set "_oracle_script"=false; + + +~~ add cluster_database=true in init.ora and restart instance on 2 nodes +startup pfile='/mnt/yavin4/tmp/_oracle_/ad-hoc/initHUTT.ora'; + +~~ create spfile on ASM and create $ORACLE_HOME/dbs/initXXXXX.ora on both nodes +create spfile='+DATA/HUTTPRD/spfileHUTT.ora' from pfile='/mnt/yavin4/tmp/_oracle_/ad-hoc/initHUTT.ora'; +echo "spfile='+DATA/HUTTPRD/spfileHUTT.ora'" > $ORACLE_HOME/dbs/initHUTT1.ora +echo "spfile='+DATA/HUTTPRD/spfileHUTT.ora'" > $ORACLE_HOME/dbs/initHUTT2.ora + +~~ register DB in CRS +srvctl add database -d HUTTPRD -o /app/oracle/product/19 -p '+DATA/HUTTPRD/spfileHUTT.ora' +srvctl add instance -d HUTTPRD -i HUTT1 -n vortex-db01 +srvctl add instance -d HUTTPRD -i HUTT2 -n vortex-db02 + +~~ create passwordfile on ASM; if the DB is not yet registered on CRS, you will get a WARNING +orapwd FILE='+DATA/HUTTPRD/orapwHUTT' ENTRIES=10 DBUNIQUENAME='HUTTPRD' password=secret00! 
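+
+~~ (optional check - a suggestion, not part of the original notes) once an instance is up,
+~~ list the entries of the new passwordfile to confirm it is usable
+SQL> select username, sysdba, sysoper from v$pwfile_users;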
+ +~~ check database config in clusterware +srvctl config database -db HUTTPRD + +~~ shutdown instances with SQL*Plus and start database with srvctl +srvctl start database -db HUTTPRD +srvctl status database -db HUTTPRD -v + +~~ optionally, put database in archivelog mode +alter system set cluster_database=false scope=spfile sid='*'; +alter system set db_recovery_file_dest_size=8G scope=both sid='*'; +alter system set db_recovery_file_dest='+RECO' scope=both sid='*'; +alter system set log_archive_dest_1 = 'location=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*'; + +srvctl stop database -db HUTTPRD + +startup mount exclusive +alter database archivelog; +alter database open; + +srvctl stop database -db HUTTPRD +srvctl start database -db HUTTPRD + +alter system archive log current; diff --git a/tiddlywiki/Create RAC non-CDB database manually.txt b/tiddlywiki/Create RAC non-CDB database manually.txt new file mode 100755 index 0000000..b2499c9 --- /dev/null +++ b/tiddlywiki/Create RAC non-CDB database manually.txt @@ -0,0 +1,104 @@ +~~ Context: DBNAME=JABBA, db_unique_name=JABBAPRD, instances JABBA1/JABBA2 +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +mkdir -p /app/base/admin/JABBA +cd /app/base/admin/JABBA +mkdir scripts divers adump init diag + +~~ initJABBA.ora +~~~~~~~~~~~~~~~~ + +*.db_name=JABBA +*.db_unique_name=JABBAPRD +*.compatible=12.1.0.2.0 +*.control_files=(+DATA/JABBAPRD/control01.ctl,+DATA/JABBAPRD/control02.ctl) +*.db_create_file_dest=+DATA +*.db_create_online_log_dest_1=+DATA +*.db_recovery_file_dest_size=4G +*.db_recovery_file_dest=+RECO +*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' +*.log_archive_format=%t_%s_%r.arc +*.db_block_size=8192 +*.open_cursors=300 +*.diagnostic_dest=/app/base/admin/JABBA +*.sga_max_size=3G +*.sga_target=3G +*.pga_aggregate_target=512M +*.pga_aggregate_limit=1G +*.processes=350 +*.audit_file_dest=/app/base/admin/JABBA/adump +*.audit_trail=db +*.remote_login_passwordfile=exclusive 
+JABBAPRD1.instance_number=1 +JABBAPRD2.instance_number=2 +JABBAPRD1.thread=1 +JABBAPRD2.thread=2 +JABBAPRD1.undo_tablespace=UNDOTBS1 +JABBAPRD2.undo_tablespace=UNDOTBS2 +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +startup nomount pfile='/mnt/yavin4/tmp/_oracle_/ad-hoc/initJABBA.ora' + +create database JABBA +datafile size 700M autoextend on next 64M +extent management local +SYSAUX datafile size 512M autoextend on next 64M +default temporary tablespace TEMP tempfile size 256M autoextend off +undo tablespace UNDOTBS1 datafile size 256M autoextend off +character set AL32UTF8 +national character set AL16UTF16 +logfile group 1 size 64M, + group 2 size 64M +user SYS identified by secret user SYSTEM identified by secret; + +create undo tablespace UNDOTBS2 datafile size 256M autoextend off; +create tablespace USERS datafile size 32M autoextend ON next 32M; +alter database default tablespace USERS; + +@?/rdbms/admin/catalog.sql +@?/rdbms/admin/catproc.sql +@?/rdbms/admin/catclust.sql +@?/rdbms/admin/utlrp.sql + + +alter database add logfile thread 2 + group 3 size 64M, + group 4 size 64M; + +alter database enable public thread 2; + + +~~ add cluster_database=true in init.ora and restart instance on 2 nodes +startup pfile='/mnt/yavin4/tmp/_oracle_/ad-hoc/initJABBA.ora' + +~~ create spfile on ASM +create spfile='+DATA/JABBAPRD/spfileJABBA.ora' from pfile='/mnt/yavin4/tmp/_oracle_/ad-hoc/initJABBA.ora'; + +~~ on both nodes, create init.ora under $ORACLE_HOME/dbs +echo "spfile='+DATA/JABBAPRD/spfileJABBA.ora'" > $ORACLE_HOME/dbs/initJABBAPRD1.ora +echo "spfile='+DATA/JABBAPRD/spfileJABBA.ora'" > $ORACLE_HOME/dbs/initJABBAPRD2.ora + +~~ register DB in CRS +srvctl add database -d JABBAPRD -pwfile '+DATA/JABBAPRD/orapwJABBA' -o /app/oracle/product/12.1 -p '+DATA/JABBAPRD/spfileJABBA.ora' + +~~ create passwordfile on ASM; if the DB is not yet registered on CRS, you will get a WARNING +orapwd FILE='+DATA/JABBAPRD/orapwJABBA' ENTRIES=10 
DBUNIQUENAME='JABBAPRD' password=secret
+
+srvctl add instance -d JABBAPRD -i JABBAPRD1 -n vortex-db01
+srvctl add instance -d JABBAPRD -i JABBAPRD2 -n vortex-db02
+
+~~ shutdown both instances with SQL*Plus, then start the DB with srvctl
+srvctl start database -db JABBAPRD
+srvctl status database -db JABBAPRD -v
+
+~~ enable ARCHIVELOG mode
+alter system set cluster_database=false scope=spfile sid='*';
+srvctl stop database -db JABBAPRD
+
+startup mount exclusive
+alter database archivelog;
+alter database open;
+alter system set cluster_database=true scope=spfile sid='*';
+
+~~ shutdown database with SQL*Plus, then start with srvctl
+srvctl start database -db JABBAPRD
diff --git a/tiddlywiki/Cuisson au four.tid b/tiddlywiki/Cuisson au four.tid
new file mode 100755
index 0000000..788d521
--- /dev/null
+++ b/tiddlywiki/Cuisson au four.tid
@@ -0,0 +1,12 @@
+created: 20210131134915788
+creator: vplesnila
+modified: 20210131135040265
+modifier: vplesnila
+tags: Divers
+title: Cuisson au four
+type: text/plain
+
+Low-temperature pork roast
+------------------------------
+Preheat the oven to 100°C
+Insert a probe into its core; when it reaches 68°C, it is done
diff --git a/tiddlywiki/DBCA example.txt b/tiddlywiki/DBCA example.txt
new file mode 100755
index 0000000..248787f
--- /dev/null
+++ b/tiddlywiki/DBCA example.txt
@@ -0,0 +1,27 @@
+# create database
+$ORACLE_HOME/bin/dbca \
+  -silent \
+  -createDatabase \
+  -templateName General_Purpose.dbc \
+  -gdbName KITKATPRD \
+  -sid KITKAT \
+  -initParams db_unique_name=KITKATPRD \
+  -characterSet AL32UTF8 \
+  -sysPassword ***** \
+  -systemPassword ***** \
+  -emConfiguration NONE \
+  -createAsContainerDatabase TRUE \
+  -storageType ASM \
+  -diskGroupName DATA \
+  -redoLogFileSize 200 \
+  -sampleSchema FALSE \
+  -totalMemory 4096 \
+  -databaseConfType RAC \
+  -nodelist dbnode1,dbnode2
+
+# remove database
+$ORACLE_HOME/bin/dbca \
+  -silent -deleteDatabase \
+  -sourceDB KITKATPRD \
+  -sysDBAUserName sys \
+
-sysDBAPassword *****
diff --git a/tiddlywiki/DBMS_FILE_TRANSFER examples.md b/tiddlywiki/DBMS_FILE_TRANSFER examples.md
new file mode 100755
index 0000000..b6ad970
--- /dev/null
+++ b/tiddlywiki/DBMS_FILE_TRANSFER examples.md
@@ -0,0 +1,131 @@
+On the **target** database, create a directory and a user for the database link:
+```sql
+create directory DIR_DEST as '/mnt/yavin4/tmp/_oracle_/dir_dest';
+create user USER_DBLINK identified by *****;
+grant create session to USER_DBLINK;
+grant read,write on directory DIR_DEST to user_dblink;
+```
+
+On the **source** database, create a directory and a database link:
+```sql
+create directory DIR_SOURCE as '/mnt/yavin4/tmp/_oracle_/dir_source';
+create database link REMOTE_DB connect to USER_DBLINK identified by ***** using 'taris/WEDGEPRD';
+select * from dual@REMOTE_DB;
+```
+
+Use `DBMS_FILE_TRANSFER` from the source database to copy a single file from the source directory to the target directory:
+```sql
+BEGIN
+  DBMS_FILE_TRANSFER.put_file(
+    source_directory_object      => 'DIR_SOURCE',
+    source_file_name             => 'Full_GREEDOPRD_01.dmp',
+    destination_directory_object => 'DIR_DEST',
+    destination_file_name        => 'Full_GREEDOPRD_01.dmp',
+    destination_database         => 'REMOTE_DB');
+END;
+/
+```
+
+`DBMS_FILE_TRANSFER` doesn't have a **parallel** option, but we can run parallel transfers using `DBMS_SCHEDULER` jobs:
+```sql
+create or replace procedure FILECOPY1 as
+BEGIN
+  DBMS_FILE_TRANSFER.put_file(
+    source_directory_object      => 'DIR_SOURCE',
+    source_file_name             => 'Full_GREEDOPRD_01.dmp',
+    destination_directory_object => 'DIR_DEST',
+    destination_file_name        => 'Full_GREEDOPRD_01.dmp',
+    destination_database         => 'REMOTE_DB');
+END;
+/
+
+create or replace procedure FILECOPY2 as
+BEGIN
+  DBMS_FILE_TRANSFER.put_file(
+    source_directory_object      => 'DIR_SOURCE',
+    source_file_name             => 'Full_GREEDOPRD_02.dmp',
+    destination_directory_object => 'DIR_DEST',
+    destination_file_name        => 'Full_GREEDOPRD_02.dmp',
+    destination_database         => 'REMOTE_DB');
+END;
+/
+
+create or
replace procedure FILECOPY3 as
+BEGIN
+  DBMS_FILE_TRANSFER.put_file(
+    source_directory_object      => 'DIR_SOURCE',
+    source_file_name             => 'Full_GREEDOPRD_03.dmp',
+    destination_directory_object => 'DIR_DEST',
+    destination_file_name        => 'Full_GREEDOPRD_03.dmp',
+    destination_database         => 'REMOTE_DB');
+END;
+/
+
+create or replace procedure FILECOPY4 as
+BEGIN
+  DBMS_FILE_TRANSFER.put_file(
+    source_directory_object      => 'DIR_SOURCE',
+    source_file_name             => 'Full_GREEDOPRD_04.dmp',
+    destination_directory_object => 'DIR_DEST',
+    destination_file_name        => 'Full_GREEDOPRD_04.dmp',
+    destination_database         => 'REMOTE_DB');
+END;
+/
+
+begin
+  DBMS_SCHEDULER.create_job
+  (
+    job_name   => 'JOB_FILECOPY1',
+    job_type   => 'PLSQL_BLOCK',
+    job_action => 'BEGIN FILECOPY1; END;',
+    start_date => sysdate,
+    enabled    => TRUE,
+    auto_drop  => TRUE,
+    comments   => 'one-time job');
+end;
+/
+
+begin
+  DBMS_SCHEDULER.create_job
+  (
+    job_name   => 'JOB_FILECOPY2',
+    job_type   => 'PLSQL_BLOCK',
+    job_action => 'BEGIN FILECOPY2; END;',
+    start_date => sysdate,
+    enabled    => TRUE,
+    auto_drop  => TRUE,
+    comments   => 'one-time job');
+end;
+/
+
+begin
+  DBMS_SCHEDULER.create_job
+  (
+    job_name   => 'JOB_FILECOPY3',
+    job_type   => 'PLSQL_BLOCK',
+    job_action => 'BEGIN FILECOPY3; END;',
+    start_date => sysdate,
+    enabled    => TRUE,
+    auto_drop  => TRUE,
+    comments   => 'one-time job');
+end;
+/
+
+begin
+  DBMS_SCHEDULER.create_job
+  (
+    job_name   => 'JOB_FILECOPY4',
+    job_type   => 'PLSQL_BLOCK',
+    job_action => 'BEGIN FILECOPY4; END;',
+    start_date => sysdate,
+    enabled    => TRUE,
+    auto_drop  => TRUE,
+    comments   => 'one-time job');
+end;
+/
+
+drop procedure FILECOPY1;
+drop procedure FILECOPY2;
+drop procedure FILECOPY3;
+drop procedure FILECOPY4;
+```
\ No newline at end of file
diff --git a/tiddlywiki/DBMS_METADATA examples.txt b/tiddlywiki/DBMS_METADATA examples.txt
new file mode 100755
index 0000000..f05022a
--- /dev/null
+++ b/tiddlywiki/DBMS_METADATA examples.txt
@@ -0,0 +1,14 @@
+-- beautify the
output +SET LONG 20000 LONGCHUNKSIZE 20000 PAGESIZE 0 LINESIZE 1000 FEEDBACK OFF VERIFY OFF TRIMSPOOL ON + +BEGIN + DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true); + DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true); +END; +/ + +-- for a TRIGGER +SELECT DBMS_METADATA.get_ddl ('TRIGGER', trigger_name, owner) +FROM all_triggers +WHERE owner = '&OWNER' +AND trigger_name = '&TRIGGER_NAME'; diff --git a/tiddlywiki/DBMS_STATS - examples.txt b/tiddlywiki/DBMS_STATS - examples.txt new file mode 100755 index 0000000..207ad35 --- /dev/null +++ b/tiddlywiki/DBMS_STATS - examples.txt @@ -0,0 +1,3 @@ +-- Dictionary and fixed objects (X$) stats +execute dbms_stats.gather_dictionary_stats; +execute dbms_stats.gather_fixed_objects_stats; diff --git a/tiddlywiki/DGMGRL commands.txt b/tiddlywiki/DGMGRL commands.txt new file mode 100755 index 0000000..9802680 --- /dev/null +++ b/tiddlywiki/DGMGRL commands.txt @@ -0,0 +1,6 @@ +-- stop/start MRP on standby +edit database 'DRF1DRPEXA' set state='APPLY-OFF'; +edit database 'DRF1DRPEXA' set state='ONLINE'; +-- display / set APPLY delay +show database 'jabbadrp' delaymins +edit database 'jabbadrp' set property delaymins=30; diff --git a/tiddlywiki/Data Generator & Swing Bench.md b/tiddlywiki/Data Generator & Swing Bench.md new file mode 100755 index 0000000..83cdce8 --- /dev/null +++ b/tiddlywiki/Data Generator & Swing Bench.md @@ -0,0 +1,48 @@ +> Author home page: [dominicgiles.com](http://www.dominicgiles.com) +> +Install JDK +```bash +dnf install java-1.8.0-openjdk.x86_64 +``` + +Create linux user and directories for Data Generator & Swing Bench +```bash +groupadd orabench +useradd orabench -g orabench -G orabench +mkdir -p /app/datagenerator +mkdir -p /app/swingbench +chown -R orabench:orabench /app/datagenerator /app/swingbench +``` + +Download and run Data Generator +```bash +su - orabench +wget http://www.dominicgiles.com/swingbench/datageneratorlatest.zip 
+unzip datageneratorlatest.zip
+rm -rf datageneratorlatest.zip
+mv datagenerator stable
+
+export DISPLAY=:0.0
+/app/datagenerator/stable/bin/datagenerator
+```
+
+Depending on which schemas you install, create the corresponding schemas/tablespaces
+```sql
+create bigfile tablespace SH datafile size 64M autoextend ON next 64M;
+create user SH identified by SH default tablespace SH;
+grant connect,resource to SH;
+
+create bigfile tablespace SOE datafile size 64M autoextend ON next 64M;
+create user SOE identified by SOE default tablespace SOE;
+grant connect,resource to SOE;
+```
+
+Download and run Swing Bench
+```bash
+cd /app/swingbench/
+wget https://github.com/domgiles/swingbench-public/releases/download/production/swingbenchlatest.zip
+unzip swingbenchlatest.zip
+rm -rf swingbenchlatest.zip
+mv swingbench stable
+/app/swingbench/stable/bin/swingbench
+```
diff --git a/tiddlywiki/Dataguard - sync using incremental backup.md b/tiddlywiki/Dataguard - sync using incremental backup.md
new file mode 100755
index 0000000..9ff72d6
--- /dev/null
+++ b/tiddlywiki/Dataguard - sync using incremental backup.md
@@ -0,0 +1,149 @@
+# Dataguard configuration
+
+    DGMGRL> show configuration
+
+    Configuration - asty
+
+    Protection Mode: MaxPerformance
+    Members:
+    astyprd - Primary database
+    astydrp - Physical standby database
+
+
+    DGMGRL> show database 'astydrp'
+
+    Database - astydrp
+
+    Role: PHYSICAL STANDBY
+    Intended State: APPLY-ON
+    Transport Lag: 0 seconds (computed 1 second ago)
+    Apply Lag: 0 seconds (computed 1 second ago)
+    Average Apply Rate: 803.00 KByte/s
+    Real Time Query: OFF
+    Instance(s):
+      ASTYDRP
+
+# Simulate a gap
+
+Stop the standby database.
+
+On the primary, switch the archivelog 3-4 times and delete all archived logs:
+
+    SQL> alter system archive log current;
+    RMAN> delete noprompt force archivelog all;
+
+To complicate the situation, add 2 new datafiles and create a new tablespace on the primary.
+
+    SQL> alter tablespace SYSTEM add datafile size 10M autoextend OFF;
+    SQL> alter tablespace SYSAUX add datafile size 10M autoextend OFF;
+    SQL> create tablespace NAL_HUTTA datafile size 10M autoextend ON next 10M;
+
+Repeat the switch/delete archivelog operation on the primary:
+
+    SQL> alter system archive log current;
+    RMAN> delete noprompt force archivelog all;
+
+Start the standby database in **MOUNT** mode, let it try to resolve the gap, and check the status of the synchronization.
+On the primary:
+
+    alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
+    set lines 200
+
+    select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME)
+    from gv$archived_log group by THREAD#;
+
+On the standby:
+
+    alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
+    set lines 200
+
+    select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME)
+    from gv$archived_log
+    where APPLIED='YES' group by THREAD#;
+
+
+# Synchronize the standby
+
+Cancel **MRP** on the standby:
+
+    DGMGRL> edit database 'astydrp' set STATE='LOG-APPLY-OFF';
+
+Try to recover the standby and note down the required `SCN`:
+
+    SQL> recover standby database;
+
+Normally it should be the same as:
+
+    SQL> select 1+CURRENT_SCN from v$database;
+
+On the primary, identify all datafiles created after this `SCN`; in my example `SCN=5681090`
+
+    SQL> select FILE#,NAME from v$datafile where CREATION_CHANGE# >= 5681090;
+
+Backup the datafiles and generate a new standby controlfile:
+
+    run{
+    set nocfau;
+    allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/%d_%U_%s_%t.bck';
+    allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/%d_%U_%s_%t.bck';
+    allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/%d_%U_%s_%t.bck';
+    allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/%d_%U_%s_%t.bck';
+    backup as compressed backupset datafile 17,18,19;
+
release channel ch01;
+    release channel ch02;
+    release channel ch03;
+    release channel ch04;
+    allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/%d_%U_%s_%t.ctl';
+    backup current controlfile;
+    release channel ch01;
+    }
+
+
+Restart the standby in **NOMOUNT** mode and restore the standby controlfile:
+
+    RMAN> restore standby controlfile from '/mnt/yavin4/tmp/_oracle_/orabackup/temp/ASTY_0l1678fs_21_1_1_21_1113825788.ctl';
+
+Alternatively, you can restore the standby controlfile from the active database:
+
+    RMAN> restore standby controlfile from service ASTYPRD_DGMGRL;
+
+Mount the standby database:
+
+    RMAN> alter database mount;
+
+Restore the new datafiles:
+
+    RMAN> restore datafile 17,18,19;
+
+Catalog the recovery area and the old standby datafiles:
+
+    RMAN> catalog start with '/data/ASTYDRP' noprompt;
+    RMAN> catalog start with '/fra/ASTYDRP' noprompt;
+
+At this point, because of the freshly restored controlfile, Oracle sees the datafiles as datafile copies:
+
+    RMAN> list datafilecopy all;
+
+Switch the database to copy:
+
+    RMAN> switch database to copy;
+
+
+To recover the standby using a *from SCN* backupset, we can proceed from the active database or use a physical backupset:
+
+    rman auxiliary /
+    run {
+    allocate channel pri1 device type DISK;
+    allocate channel pri2 device type DISK;
+    allocate channel pri3 device type DISK;
+    allocate channel pri4 device type DISK;
+    recover database from service ASTYPRD_DGMGRL using compressed backupset section size 8G;
+    }
+
+Clear the standby redo logs:
+
+    SQL> select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log;
+
+Enable **MRP**:
+
+    DGMGRL> edit database 'astydrp' set STATE='ONLINE';
diff --git a/tiddlywiki/Dataguard 21c standalone creation - example.md b/tiddlywiki/Dataguard 21c standalone creation - example.md
new file mode 100755
index 0000000..9903681
--- /dev/null
+++ b/tiddlywiki/Dataguard 21c standalone creation - example.md
@@ -0,0 +1,239 @@
+Network configuration
+---------------------
+
+`/etc/listener.ora` on the primary server:
+
+
+    LISTENER_DG =
+      (ADDRESS_LIST=
+        (ADDRESS=(PROTOCOL=tcp)(HOST=taris.swgalaxy)(PORT=1523))
+      )
+
+    SID_LIST_LISTENER_DG =
+      (SID_LIST =
+        (SID_DESC =
+          (GLOBAL_DBNAME = ASTYPRD_DGMGRL)
+          (SID_NAME = ASTYPRD)
+          (ORACLE_HOME = /app/oracle/product/21)
+        )
+      )
+
+
+`/etc/listener.ora` on the secondary server:
+
+    LISTENER_DG =
+      (ADDRESS_LIST=
+        (ADDRESS=(PROTOCOL=tcp)(HOST=mandalore.swgalaxy)(PORT=1523))
+      )
+
+    SID_LIST_LISTENER_DG =
+      (SID_LIST =
+        (SID_DESC =
+          (GLOBAL_DBNAME = ASTYDRP_DGMGRL)
+          (SID_NAME = ASTYDRP)
+          (ORACLE_HOME = /app/oracle/product/21)
+        )
+      )
+
+
+Start `LISTENER_DG` on both servers:
+
+    lsnrctl start LISTENER_DG
+
+
+`/etc/tnsnames.ora` on both servers:
+
+    ASTYPRD_DGMGRL =
+      (DESCRIPTION =
+        (ADDRESS_LIST =
+          (ADDRESS = (PROTOCOL = TCP)(HOST = taris.swgalaxy)(PORT = 1523))
+        )
+        (CONNECT_DATA =
+          (SERVER = DEDICATED)
+          (SERVICE_NAME = ASTYPRD_DGMGRL)
+        )
+      )
+
+    ASTYDRP_DGMGRL =
+      (DESCRIPTION =
+        (ADDRESS_LIST =
+          (ADDRESS = (PROTOCOL = TCP)(HOST = mandalore.swgalaxy)(PORT = 1523))
+        )
+        (CONNECT_DATA =
+          (SERVER = DEDICATED)
+          (SERVICE_NAME = ASTYDRP_DGMGRL)
+        )
+      )
+
+
+Dataguard initial duplication
+-----------------------------
+
+From the primary init.ora, create an init.ora for the secondary database; test it by starting the secondary database in nomount mode, then create an spfile from it. Start the secondary database in nomount mode. Also copy the password file from the primary to the secondary server.
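
As a sketch of the pfile adaptation step (this helper and its name `primary_to_standby` are mine, not part of the original note): the minimal change between the primary and standby pfiles is the `db_unique_name` and the paths derived from it, while `db_name` must stay identical. Real setups usually need further edits (control_files, recovery destinations), so review the result by hand:

```python
# Sketch (assumption: only db_unique_name and paths containing it differ).
# db_name must remain the same on primary and physical standby.

def primary_to_standby(pfile_text, primary_uname, standby_uname):
    out = []
    for line in pfile_text.splitlines():
        if line.startswith("*.db_name="):
            out.append(line)  # db_name is identical on a physical standby
        else:
            out.append(line.replace(primary_uname, standby_uname))
    return "\n".join(out)

if __name__ == "__main__":
    src = ("*.db_name=ASTY\n"
           "*.db_unique_name=ASTYPRD\n"
           "*.audit_file_dest=/app/base/admin/ASTYPRD/adump")
    print(primary_to_standby(src, "ASTYPRD", "ASTYDRP"))
```

The output is a starting point for the standby init.ora, not a finished file.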
+ +Try cross connections from both primary and secondary servers: + + sqlplus 'sys/"*****"'@ASTYPRD_DGMGRL as sysdba + sqlplus 'sys/"*****"'@ASTYDRP_DGMGRL as sysdba + + +Create standby redolog on primary database using the result of following queries: + + select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; + select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; + +If you plan to use backups on standby database, set required RMAN parameters **prior** to duplicate step: + + configure archivelog deletion policy to applied on all standby; + configure db_unique_name 'ASTYDRP' connect identifier 'ASTYDRP_DGMGRL'; + configure db_unique_name 'ASTYPRD' connect identifier 'ASTYPRD_DGMGRL'; + + +Duplicate primary database *for standby*: + + rman target='sys/"*****"'@ASTYPRD_DGMGRL auxiliary='sys/"*****"'@ASTYDRP_DGMGRL + + run + { + allocate channel pri01 device type disk; + allocate channel pri02 device type disk; + allocate channel pri03 device type disk; + allocate channel pri04 device type disk; + allocate channel pri05 device type disk; + allocate channel pri06 device type disk; + allocate channel pri07 device type disk; + allocate channel pri08 device type disk; + allocate channel pri09 device type disk; + allocate channel pri10 device type disk; + + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + + duplicate database 'ASTY' for standby + from active database using compressed backupset section size 512M; 
+    }
+
+
+It is not mandatory but recommended to activate flashback on both databases (keeping, for example, the default retention target of 1 day):
+
+    alter database flashback ON;
+
+
+Dataguard broker configuration
+------------------------------
+
+On the primary database:
+
+    alter system set dg_broker_config_file1='/app/oracle/base/admin/ASTYPRD/dgmgrl/dr1ASTYPRD.dat' scope=both sid='*';
+    alter system set dg_broker_config_file2='/app/oracle/base/admin/ASTYPRD/dgmgrl/dr2ASTYPRD.dat' scope=both sid='*';
+    alter system set dg_broker_start=TRUE scope=both sid='*';
+
+On the secondary database:
+
+    alter system set dg_broker_config_file1='/app/oracle/base/admin/ASTYDRP/dgmgrl/dr1ASTYDRP.dat' scope=both sid='*';
+    alter system set dg_broker_config_file2='/app/oracle/base/admin/ASTYDRP/dgmgrl/dr2ASTYDRP.dat' scope=both sid='*';
+    alter system set dg_broker_start=TRUE scope=both sid='*';
+
+On the primary or secondary server:
+
+    dgmgrl
+    connect sys/*****@ASTYPRD_DGMGRL
+
+    create configuration ASTY as
+    primary database is ASTYPRD
+    connect identifier is ASTYPRD_DGMGRL;
+
+    add database ASTYDRP
+    as connect identifier is ASTYDRP_DGMGRL
+    maintained as physical;
+
+    enable configuration;
+
+    edit database 'astyprd' set property ArchiveLagTarget=0;
+    edit database 'astyprd' set property LogArchiveMaxProcesses=2;
+    edit database 'astyprd' set property LogArchiveMinSucceedDest=1;
+    edit database 'astyprd' set property StandbyFileManagement='AUTO';
+
+    edit database 'astydrp' set property ArchiveLagTarget=0;
+    edit database 'astydrp' set property LogArchiveMaxProcesses=2;
+    edit database 'astydrp' set property LogArchiveMinSucceedDest=1;
+    edit database 'astydrp' set property StandbyFileManagement='AUTO';
+
+    edit instance 'ASTYPRD' set property 'StaticConnectIdentifier'='ASTYPRD_DGMGRL';
+    edit instance 'ASTYDRP' set property 'StaticConnectIdentifier'='ASTYDRP_DGMGRL';
+
+    edit instance 'ASTYPRD' set property
'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=taris.swgalaxy)(PORT=1523))(CONNECT_DATA=(SERVICE_NAME=ASTYPRD_DGMGRL)(INSTANCE_NAME=ASTYPRD)(SERVER=DEDICATED)))';
+    edit instance 'ASTYDRP' set property 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=mandalore.swgalaxy)(PORT=1523))(CONNECT_DATA=(SERVICE_NAME=ASTYDRP_DGMGRL)(INSTANCE_NAME=ASTYDRP)(SERVER=DEDICATED)))';
+
+
+Wait a couple of minutes (optionally after archiving the current log on the primary database), then:
+
+    show configuration
+    show database 'astyprd'
+    show database 'astydrp'
+
+    validate database 'astyprd'
+    validate database 'astydrp'
+
+
+To disable/enable redo apply on the standby database:
+
+    edit database 'astydrp' set state='APPLY-OFF';
+    edit database 'astydrp' set state='ONLINE';
+
+
+
+Backup primary and standby databases
+------------------------------------
+
+Backup the primary database:
+
+    rman target /
+
+    run
+    {
+    set nocfau;
+    allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYPRD/%d_%U_%s_%t.bck';
+    allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYPRD/%d_%U_%s_%t.bck';
+    allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYPRD/%d_%U_%s_%t.bck';
+    allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYPRD/%d_%U_%s_%t.bck';
+    backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input;
+    release channel ch01;
+    release channel ch02;
+    release channel ch03;
+    release channel ch04;
+    allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYPRD/%d_%U_%s_%t.controlfile';
+    backup current controlfile;
+    release channel ch01;
+    }
+
+
+Backup the standby database:
+
+    rman target='"sys/*****"'
+
+    run
+    {
+    set nocfau;
+    allocate channel ch01 device type disk format
'/mnt/yavin4/tmp/_oracle_/orabackup/ASTYDRP/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYDRP/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYDRP/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYDRP/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input; + release channel ch01; + release channel ch02; + release channel ch03; + release channel ch04; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTYDRP/%d_%U_%s_%t.controlfile'; + backup current controlfile; + release channel ch01; + } + \ No newline at end of file diff --git a/tiddlywiki/Dataguard CDB 12.1 exemple.txt b/tiddlywiki/Dataguard CDB 12.1 exemple.txt new file mode 100755 index 0000000..b7f6b09 --- /dev/null +++ b/tiddlywiki/Dataguard CDB 12.1 exemple.txt @@ -0,0 +1,243 @@ +~~ creation of CDB database + +$ORACLE_HOME/bin/dbca \ +-silent \ +-createDatabase \ +-templateName General_Purpose.dbc \ +-gdbName EWOK \ +-sid EWOKPRD \ +-initParams db_unique_name=EWOKPRD \ +-characterSet AL32UTF8 \ +-sysPassword secret \ +-systemPassword secret \ +-emConfiguration NONE \ +-createAsContainerDatabase TRUE \ +-storageType ASM \ +-diskGroupName DATA \ +-redoLogFileSize 100 \ +-sampleSchema FALSE \ +-totalMemory 2048 \ +-databaseConfType RAC \ +-nodelist vortex-db01,vortex-db02 + + +~~ identify the spfile and passwordfile ASM location and more readable aliases +srvctl config database -d EWOKPRD + +ASMCMD [+] > cd +DATA/EWOKPRD/ +ASMCMD [+DATA/EWOKPRD] > mkalias +DATA/EWOKPRD/PARAMETERFILE/spfile.333.957718565 spfileewokprd.ora +ASMCMD [+DATA/EWOKPRD] > mkalias +DATA/EWOKPRD/PASSWORD/pwdewokprd.308.957717627 orapwewokprd + +~~ update spfile location in the CRS +srvctl modify database -db EWOKPRD -spfile 
+DATA/EWOKPRD/spfileewokprd.ora
+srvctl modify database -db EWOKPRD -pwfile +DATA/EWOKPRD/orapwewokprd
+srvctl stop database -d EWOKPRD
+srvctl start database -d EWOKPRD
+srvctl status database -d EWOKPRD -v
+
+
+~~ enable ARCHIVELOG mode and FORCE LOGGING on the PRIMARY database
+
+alter system set db_recovery_file_dest_size = 4G scope=both sid='*';
+alter system set db_recovery_file_dest = '+RECO' scope=both sid='*';
+alter system set log_archive_dest_1 = 'location=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*';
+
+srvctl stop database -d EWOKPRD
+
+startup mount exclusive
+alter database archivelog;
+alter database open;
+alter database force logging;
+
+srvctl stop database -d EWOKPRD
+srvctl start database -d EWOKPRD
+
+alter system archive log current;
+
+~~ copy pfile and passwordfile from the primary cluster to the first node of the standby cluster
+
+SQL> create pfile='/tmp/pfile_EWOK.ora' from spfile;
+asmcmd cp +DATA/EWOKPRD/orapwewokprd /tmp
+cd /tmp
+scp orapwewokprd pfile_EWOK.ora kessel-db01/tmp
+
+~~ make adjustments in the pfile and put everything in $ORACLE_HOME/dbs
+
+SQL> create spfile from pfile='/tmp/standby.ora';
+cp orapwewokprd $ORACLE_HOME/dbs/orapwEWOKDRP1
+
+SQL> startup nomount
+
+~~ NETWORK configuration - listeners
+~~ in my configuration I have a dedicated listener for DATAGUARD; the following definitions have been added on the primary cluster:
+
+# For DATAGUARD...
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = EWOKPRD_DGMGRL)
+      (SID_NAME = EWOKPRD1)
+      (ORACLE_HOME = /app/oracle/product/12.1/db_1)
+    )
+  )
+
+# ...For DATAGUARD
+
+~~ and on the standby cluster:
+
+# For DATAGUARD...
+SID_LIST_LISTENER_DG =
+  (SID_LIST =
+    (SID_DESC =
+      (GLOBAL_DBNAME = EWOKDRP_DGMGRL)
+      (SID_NAME = EWOKDRP1)
+      (ORACLE_HOME = /app/oracle/product/12.1/db_1)
+    )
+  )
+# ...For DATAGUARD
+
+
+~~ cross connection tests; we should be able to connect to idle instances too
+sqlplus /nolog
+connect sys/secret@vortex-db01-dba-vip:1541/EWOKPRD_DGMGRL as sysdba
+connect sys/secret@vortex-db02-dba-vip:1541/EWOKPRD_DGMGRL as sysdba
+connect sys/secret@kessel-db01-dba-vip:1541/EWOKDRP_DGMGRL as sysdba
+(for the moment the standby pfile/passwordfile are not deployed on the second node of the standby cluster)
+
+~~ aliases to add to tnsnames.ora on all primary/standby database nodes
+# For DATAGUARD...
+EWOKPRD_DG =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    (ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-db01-dba-vip)(PORT = 1541))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = vortex-db02-dba-vip)(PORT = 1541))
+    )
+    (CONNECT_DATA =
+      (SERVER = DEDICATED)
+      (SERVICE_NAME = EWOKPRD_DGMGRL)
+    )
+)
+
+EWOKDRP_DG =
+  (DESCRIPTION =
+    (FAILOVER = YES)
+    (ADDRESS_LIST =
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-db01-dba-vip)(PORT = 1541))
+      (ADDRESS = (PROTOCOL = TCP)(HOST = kessel-db02-dba-vip)(PORT = 1541))
+    )
+    (CONNECT_DATA =
+      (SERVER = DEDICATED)
+      (SERVICE_NAME = EWOKDRP_DGMGRL)
+    )
+)
+# ...For DATAGUARD
+
+
+~~ cross connection test using TNS aliases; we should be able to connect to idle instances
+
+sqlplus /nolog
+connect sys/secret@EWOKPRD_DG as sysdba
+connect sys/secret@EWOKDRP_DG as sysdba
+
+
+~~ from the spfile of the primary DB we create an spfile for the secondary DB and we start the secondary DB in nomount
+rman target sys/secret@EWOKPRD_DG auxiliary sys/secret@EWOKDRP_DG
+run {
+  allocate channel pri1 device type DISK;
+  allocate channel pri2 device type DISK;
+  allocate auxiliary channel aux1 device type DISK;
+  allocate auxiliary channel aux2 device type DISK;
+  duplicate target database
+    for standby
+    from active database
+    nofilenamecheck
+    using compressed
backupset section size 1G;
+}
+
+
+~~ Dataguard Broker configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+~~ on primary database
+alter system set dg_broker_start=FALSE scope=both sid='*';
+alter system set dg_broker_config_file1='+DATA/EWOKPRD/dr1EWOKPRD.dat' scope=both sid='*';
+alter system set dg_broker_config_file2='+DATA/EWOKPRD/dr2EWOKPRD.dat' scope=both sid='*';
+alter system set dg_broker_start=TRUE scope=both sid='*';
+
+~~ on secondary database
+alter system set dg_broker_start=FALSE scope=both sid='*';
+alter system set dg_broker_config_file1='+DATA/EWOKDRP/dr1EWOKDRP.dat' scope=both sid='*';
+alter system set dg_broker_config_file2='+DATA/EWOKDRP/dr2EWOKDRP.dat' scope=both sid='*';
+alter system set dg_broker_start=TRUE scope=both sid='*';
+
+~~ creation of STANDBY REDO LOGS on both databases
+
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 1 size 100M;
+
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+ALTER DATABASE ADD STANDBY LOGFILE thread 2 size 100M;
+
+
+select GROUP#,THREAD#,STATUS, BYTES from v$standby_log;
+
+col MEMBER for a60
+select * from v$logfile;
+
+
+~~ create DGMGRL configuration
+dgmgrl
+DGMGRL> connect sys/secret@EWOKPRD_DG
+DGMGRL> create configuration EWOK as
+  primary database is EWOKPRD
+  connect identifier is EWOKPRD_DG;
+DGMGRL> add database EWOKDRP
+  as connect identifier is EWOKDRP_DG
+  maintained as physical;
+
+DGMGRL> edit database 'ewokdrp' set property ArchiveLagTarget=0;
+DGMGRL> edit database 'ewokdrp' set property LogArchiveMaxProcesses=2;
+DGMGRL> edit database 'ewokdrp' set property LogArchiveMinSucceedDest=1;
+DGMGRL> edit database 'ewokdrp' set property StandbyFileManagement='AUTO';
+DGMGRL> edit database 'ewokdrp' set property TransportDisconnectedThreshold='0';
+
+DGMGRL> edit database 'ewokprd' set property ArchiveLagTarget=0;
+DGMGRL> edit database 'ewokprd' set property LogArchiveMaxProcesses=2; +DGMGRL> edit database 'ewokprd' set property LogArchiveMinSucceedDest=1; +DGMGRL> edit database 'ewokprd' set property StandbyFileManagement='AUTO'; +DGMGRL> edit database 'ewokprd' set property TransportDisconnectedThreshold='0'; + +DGMGRL> enable configuration; +DGMGRL> show configuration; + +~~ VERY IMPORTANT +~~ set StaticConnectIdentifier for all PRIMARY/DATAGUARD database instances +~~ use the complete DESCRIPTION syntax to uniquely identify the instances of each node + +EDIT INSTANCE 'EWOKPRD1' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vortex-db01-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKPRD_DGMGRL)(INSTANCE_NAME=EWOKPRD1)(SERVER=DEDICATED)))'; +EDIT INSTANCE 'EWOKPRD2' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vortex-db02-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKPRD_DGMGRL)(INSTANCE_NAME=EWOKPRD2)(SERVER=DEDICATED)))'; +EDIT INSTANCE 'EWOKDRP1' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=kessel-db01-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKDRP_DGMGRL)(INSTANCE_NAME=EWOKDRP1)(SERVER=DEDICATED)))'; +EDIT INSTANCE 'EWOKDRP2' SET PROPERTY 'StaticConnectIdentifier'='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=kessel-db02-dba-vip)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=EWOKDRP_DGMGRL)(INSTANCE_NAME=EWOKDRP2)(SERVER=DEDICATED)))'; + + +~~ move spfile from file system to ASM +create pfile='/tmp/pfile_EWOKDRP.ora' from spfile; +create spfile='+DATA/ewokdrp/spfileEWOKDRP.ora' from pfile='/tmp/pfile_EWOKDRP.ora'; + +~~ register standby database in the CRS +srvctl add database -d EWOKDRP -o /app/oracle/product/12.1/db_1 -c RAC -p '+DATA/EWOKDRP/spfileEWOKDRP.ora' -r physical_standby -n EWOK + +~~ pay attention to -s; the default value is OPEN, which means that your DATAGUARD will be OPENED (Active Dataguard) +srvctl add instance -d EWOKDRP
-i EWOKDRP1 -n kessel-db01 +srvctl add instance -d EWOKDRP -i EWOKDRP2 -n kessel-db02 + +srvctl start database -d EWOKDRP -o mount +srvctl status database -d EWOKDRP -v + +~~ finally, move passwordfile to ASM using pwcopy under asmcmd +asmcmd pwcopy +DATA/EWOKPRD/orapwewokprd /tmp/orapwewokprd +scp /tmp/orapwewokprd kessel-db01:/tmp/orapwewokprd +asmcmd pwcopy /tmp/orapwewokprd +DATA/EWOKDRP/orapwewokdrp diff --git a/tiddlywiki/Dataguard archivelog apply check.txt b/tiddlywiki/Dataguard archivelog apply check.txt new file mode 100755 index 0000000..ae53331 --- /dev/null +++ b/tiddlywiki/Dataguard archivelog apply check.txt @@ -0,0 +1,11 @@ +alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss'; +set lines 200 + +-- on PRIMARY database +---------------------- +select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log group by THREAD#; + +-- on STANDBY database +---------------------- +select THREAD#, max(SEQUENCE#), max(FIRST_TIME),max(NEXT_TIME),max(COMPLETION_TIME) from gv$archived_log + where APPLIED='YES' group by THREAD#; diff --git a/tiddlywiki/Divers.tid b/tiddlywiki/Divers.tid new file mode 100755 index 0000000..5df090a --- /dev/null +++ b/tiddlywiki/Divers.tid @@ -0,0 +1,9 @@ +color: #ff80ff +created: 20191026073349424 +creator: vplesnila +modified: 20200203165611842 +modifier: vplesnila +tags: Contents +title: Divers +type: text/vnd.tiddlywiki + diff --git a/tiddlywiki/Docker notes.md b/tiddlywiki/Docker notes.md new file mode 100755 index 0000000..1e502f7 --- /dev/null +++ b/tiddlywiki/Docker notes.md @@ -0,0 +1,49 @@ +Install Docker +-------------- + + dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo + dnf install -y docker-ce --nobest + + systemctl enable --now docker + systemctl status docker + + +Docker is installed by default on `/var/lib/docker`. 
To move it to another file system, for example `/app/docker`: + + systemctl stop docker + + cd /var/lib/ + mv docker /app/ + ln -s /app/docker . + + systemctl start docker + systemctl status docker + +Useful commands +---------------- + +Alias to stop and remove all containers: + + alias cclean='docker stop $(docker ps -a -q); docker rm $(docker ps -a -q)' + +Truncate all container logs: + + truncate -s 0 $(docker inspect --format='{{.LogPath}}' $(docker ps -a -q)) + +Save an image: + + docker save sabnzbd/sabnzbd | pigz > /mnt/yavin4/tmp/sabnzbd.tar.gz + +Load an image: + + gunzip -c /mnt/yavin4/tmp/sabnzbd.tar.gz | docker load + +Set auto start for all host containers: + + docker update --restart unless-stopped $(docker ps -q) + +Run a bash shell in a container: + + docker exec -it <container> bash + + diff --git a/tiddlywiki/Draft of 'New Tiddler' by vplesnila.tid b/tiddlywiki/Draft of 'New Tiddler' by vplesnila.tid new file mode 100755 index 0000000..84d21ea --- /dev/null +++ b/tiddlywiki/Draft of 'New Tiddler' by vplesnila.tid @@ -0,0 +1,5 @@ +created: 20200225085307862 +modified: 20200225085307862 +modifier: vplesnila +title: Draft of 'New Tiddler' by vplesnila +type: text/vnd.tiddlywiki \ No newline at end of file diff --git a/tiddlywiki/Draft of 'ssh - ProxyJump' by vplesnila.tid b/tiddlywiki/Draft of 'ssh - ProxyJump' by vplesnila.tid new file mode 100755 index 0000000..43f451e --- /dev/null +++ b/tiddlywiki/Draft of 'ssh - ProxyJump' by vplesnila.tid @@ -0,0 +1,4 @@ +modified: 20220703081209354 +modifier: vplesnila +title: Draft of 'ssh - ProxyJump' by vplesnila +type: text/vnd.tiddlywiki \ No newline at end of file diff --git a/tiddlywiki/Draft.tid b/tiddlywiki/Draft.tid new file mode 100755 index 0000000..185c3d1 --- /dev/null +++ b/tiddlywiki/Draft.tid @@ -0,0 +1,8 @@ +created: 20190628135534599 +creator: vplesnila +modified: 20190628135556880 +modifier: vplesnila +tags: Contents +title: Draft +type: text/vnd.tiddlywiki + diff --git a/tiddlywiki/English.tid
b/tiddlywiki/English.tid new file mode 100755 index 0000000..6b51e4f --- /dev/null +++ b/tiddlywiki/English.tid @@ -0,0 +1,8 @@ +color: #ff8080 +created: 20191107141106371 +creator: vplesnila +modified: 20191107143338459 +modifier: vplesnila +tags: Divers +title: English +type: text/vnd.tiddlywiki \ No newline at end of file diff --git a/tiddlywiki/Enterprise Manager Database Express setup.md b/tiddlywiki/Enterprise Manager Database Express setup.md new file mode 100755 index 0000000..14b504b --- /dev/null +++ b/tiddlywiki/Enterprise Manager Database Express setup.md @@ -0,0 +1,29 @@ +Setup for CDB with all PDB on the same port: + + select dbms_xdb_config.gethttpsport() from dual; + exec dbms_xdb_config.sethttpsport(5500); + exec dbms_xdb_config.sethttpport(5511); + alter system set dispatchers='(PROTOCOL=TCP)(SERVICE=THANASPRD)'; + exec dbms_xdb_config.setglobalportenabled(TRUE); + alter system register; + + +Access URL: https://192.168.0.64:5500/em +Access URL: http://192.168.0.64:5511/em + + +Setup for CDB with each PDB on a different port: + +On CDB$ROOT: + + exec dbms_xdb_config.setglobalportenabled(FALSE); + +On PDB: + + alter session set container=NEREUS; + select dbms_xdb_config.gethttpsport() from dual; + exec dbms_xdb_config.sethttpsport(5555); + alter system register; + + +Access URL: https://192.168.0.64:5555/em diff --git a/tiddlywiki/Environement_variable_of_a_running_process.txt b/tiddlywiki/Environement_variable_of_a_running_process.txt new file mode 100755 index 0000000..c24e306 --- /dev/null +++ b/tiddlywiki/Environement_variable_of_a_running_process.txt @@ -0,0 +1,8 @@ +AIX: +ps eww <pid> + +SOLARIS: +pargs -e <pid> + +LINUX: +cat /proc/<pid>/environ | tr '\0' '\n' diff --git a/tiddlywiki/Failover example.txt b/tiddlywiki/Failover example.txt new file mode 100755 index 0000000..200ffeb --- /dev/null +++ b/tiddlywiki/Failover example.txt @@ -0,0 +1,29 @@ +###Check that the max sequence is ok on both the dataguard and cascade standbys and that both are in sync + +## normally this part is ok. Check
the protection mode and role in the standby cascade database; it should be maximum performance, if not change it to maximum performance. Once the mode is fine: +SELECT OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE FROM V$DATABASE; +NOTE: If protection_mode is other than maximum performance, then alter it as below. +SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE; + + ##Stop recovery on both the dataguard and cascade standbys +recover managed standby database cancel; +####Now act on the cascade standby only: +###Activate the standby +alter database recover managed standby database finish; +alter database activate standby database; +##Check the role and mode of the cascade database, it should be primary now +select name,open_mode,database_role from v$database; +#### open the database and bounce it; recovery should be stopped to avoid the active dataguard flag +alter database open; +Bounce your database and verify the database name, its open mode and its role. +select database_role from v$database; + +###Change the dbname with the nid utility +Step 1: Mount the database with the old db name (standby). +Step 2: Run the nid utility (syntax: nid TARGET=sys/password@CURRENT_DBNAME DBNAME=NEW_DBNAME). +Step 3: Once you run the nid utility, the name is changed to the new db name. +Step 4: Change the db_name in the parameter file using the alter system command, then start the db in nomount to check it is ok. +Step 5: Change the spfile to the new db name. Check the spfile name is correct with the new dbname. +Step 6: Open the database with the resetlogs option. +Step 7: Register the database information with the listener: alter system register. +###Finally, check the connection using sqlplus from a client.
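The nid rename steps above can be condensed into a sketch. The SIDs below are hypothetical placeholders, not values from a real run, and the sketch only echoes the commands so it has no side effects (nid itself must be run against a database mounted under its old name):

```shell
# Hypothetical SIDs for illustration -- substitute your own.
OLD_SID=EWOKDRP   # current db_name (the activated standby)
NEW_SID=EWOKNEW   # desired db_name

# Step 2: run nid against the database mounted with the old name.
echo "nid TARGET=sys@${OLD_SID} DBNAME=${NEW_SID}"

# Step 4: afterwards, update db_name in the parameter file.
echo "alter system set db_name=${NEW_SID} scope=spfile;"
```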
diff --git a/tiddlywiki/Flask tutorials.txt b/tiddlywiki/Flask tutorials.txt new file mode 100755 index 0000000..6040da7 --- /dev/null +++ b/tiddlywiki/Flask tutorials.txt @@ -0,0 +1,4 @@ +https://www.fullstackpython.com/flask.html + +https://testdriven.io/blog/developing-a-single-page-app-with-flask-and-vuejs/#conclusion + diff --git a/tiddlywiki/Generate Rebuild Index commands.txt b/tiddlywiki/Generate Rebuild Index commands.txt new file mode 100755 index 0000000..934391f --- /dev/null +++ b/tiddlywiki/Generate Rebuild Index commands.txt @@ -0,0 +1,9 @@ +set lines 256 pages 0 + +select 'alter index "'|| owner || '"."' || index_name || '" rebuild online compute statistics;' +from + dba_indexes +where + owner='DRIVE' and + table_name in ('FUEL_INTERIM_TRANS_HEADERS', 'FUEL_INTERIM_TRANS_DETAILS', 'FUEL_TRANSACTION_ERRORS'); + \ No newline at end of file diff --git a/tiddlywiki/HAProxy - configuration example with HTTP_HTTPS_SSH_VPN.txt b/tiddlywiki/HAProxy - configuration example with HTTP_HTTPS_SSH_VPN.txt new file mode 100755 index 0000000..f0e5a79 --- /dev/null +++ b/tiddlywiki/HAProxy - configuration example with HTTP_HTTPS_SSH_VPN.txt @@ -0,0 +1,54 @@ +-- host IP address is 192.168.0.8 +-- Apache uses ports 9080/9443 +-- all HTTP requests on 80 will be redirected to 9080, except flower.databasepro.fr which will go to 192.168.0.82:80 +-- incoming HTTPS requests on 443 will be redirected to 9443, except flower.databasepro.fr which will go to 192.168.0.82:443 +-- incoming SSH requests on port 443 will be redirected to port 22 +-- incoming OpenVPN requests on port 443 will be redirected to 192.168.0.9:1194 + + +frontend in_80 + bind 192.168.0.8:80 + default_backend out_80_default + # Define hosts + acl host_flower hdr(host) -i flower.databasepro.fr + # Figure out which one to use + use_backend out_80_flower if host_flower + +backend out_80_default + server sv1 192.168.0.8:9080 maxconn 32 + +backend out_80_flower + server sv1 192.168.0.82:80 maxconn 32 + + +frontend
in_443 + bind 192.168.0.8:443 + mode tcp + option tcplog + tcp-request inspect-delay 5s + tcp-request content accept if HTTP + # Define hosts + acl host_flower hdr(host) -i flower.databasepro.fr + # Figure out which one to use + use_backend out_443_flower if { req_ssl_sni -i flower.databasepro.fr } + use_backend out_443_https if { req.ssl_hello_type 1 } + use_backend out_ssh if { payload(0,7) -m bin 5353482d322e30 } + default_backend openvpn + +backend out_443_flower + server sv1 192.168.0.82:443 + mode tcp + + +backend out_443_https + server sv1 192.168.0.8:9443 + mode tcp + +backend openvpn + mode tcp + server openvpn-server 192.168.0.9:1194 + +backend out_ssh + mode tcp + timeout server 2h + server ssh-local 192.168.0.8:22 diff --git a/tiddlywiki/Installing Oracle Database 11gR2 on Grid Infrastructure 19c.txt b/tiddlywiki/Installing Oracle Database 11gR2 on Grid Infrastructure 19c.txt new file mode 100755 index 0000000..c413ffe --- /dev/null +++ b/tiddlywiki/Installing Oracle Database 11gR2 on Grid Infrastructure 19c.txt @@ -0,0 +1,4 @@ +~~ prior running runInstaller from 11gR2 distribution, execute the following command as oracle user +$GRID_HOME/oui/bin/runInstaller -ignoreSysPrereqs -updateNodeList \ + ORACLE_HOME=/app/grid/product/19/ "CLUSTER_NODES=vortex-db01.swgalaxy,vortex-db02.swgalaxy" \ + CRS=true LOCAL_NODE=vortex-db01.swgalaxy diff --git a/tiddlywiki/KVM - rename VM example.txt b/tiddlywiki/KVM - rename VM example.txt new file mode 100755 index 0000000..388bac9 --- /dev/null +++ b/tiddlywiki/KVM - rename VM example.txt @@ -0,0 +1,8 @@ +# rename atrisia3 to ivera-mongo03 changing also the storage path + +virsh dumpxml atrisia3 > atrisia3.xml +sed -i 's/atrisia3/ivera-mongo03/g' atrisia3.xml +sed -i 's/\/vm\/hdd0\/ivera-mongo03/\/vm\/hdd0\/ivera-mongodb\/ivera-mongo03/g' atrisia3.xml +mv /vm/hdd0/atrisia3 /vm/hdd0/ivera-mongodb/ivera-mongo03 +virsh undefine atrisia3 --remove-all-storage +virsh define --file atrisia3.xml \ No newline at end of file 
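The heart of the KVM rename recipe above is the global substitution on the dumped domain XML; that transformation can be sanity-checked without touching libvirt. The one-line XML below is a made-up stand-in for real `virsh dumpxml` output, kept only to show the substitution:

```shell
# Illustrative stand-in for the XML dumped by `virsh dumpxml atrisia3`.
xml='<domain><name>atrisia3</name><source file="/vm/hdd0/atrisia3/hdd_01.img"/></domain>'

# Same substitution the recipe applies with sed -i on the dumped file:
# every occurrence of the old name is rewritten, including storage paths.
renamed=$(printf '%s\n' "$xml" | sed 's/atrisia3/ivera-mongo03/g')
printf '%s\n' "$renamed"
```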
diff --git a/tiddlywiki/KVM - some example commands to create Oracle RAC.txt b/tiddlywiki/KVM - some example commands to create Oracle RAC.txt new file mode 100755 index 0000000..7ea4a9c --- /dev/null +++ b/tiddlywiki/KVM - some example commands to create Oracle RAC.txt @@ -0,0 +1,132 @@ +qemu-img create -f raw /vm/hdd0/mandalore/hdd_01.img 8G +qemu-img create -f raw /vm/hdd0/mandalore/swap_01.img 16G +qemu-img create -f raw /vm/hdd0/mandalore/app_01.img 60G + + + +virt-install \ + --graphics vnc,listen=0.0.0.0 \ + --name=mandalore \ + --vcpus=4 \ + --memory=32768 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/mnt/yavin4/kit/Oracle/OEL7/V1003434-01.iso \ + --disk /vm/hdd0/mandalore/hdd_01.img \ + --disk /vm/hdd0/mandalore/swap_01.img \ + --disk /vm/hdd0/mandalore/app_01.img \ + --os-variant=ol7.6 + + + +qemu-img create -f raw /vm/hdd0/mandalore/app_02.img 30G +virsh attach-disk mandalore /vm/hdd0/mandalore/app_02.img vdd --driver qemu --subdriver raw --targetbus virtio --persistent + +lvextend -l +100%FREE /dev/vg_app/lv_app +xfs_growfs /app + + + +dd if=/dev/zero of=/vm/hdd0/mandalore/data_01.img bs=1G count=20 +dd if=/dev/zero of=/vm/hdd0/mandalore/data_02.img bs=1G count=20 +dd if=/dev/zero of=/vm/hdd0/mandalore/fra_01.img bs=1G count=20 + +qemu-img create -f raw /vm/hdd0/mandalore/data_01.img 20G + +virsh attach-disk mandalore --source /vm/hdd0/mandalore/data_01.img --target vde --persistent +virsh attach-disk mandalore --source /vm/hdd0/mandalore/data_02.img --target vdf --persistent +virsh attach-disk mandalore --source /vm/hdd0/mandalore/fra_01.img --target vdg --persistent + +vgcreate vg_data /dev/vde /dev/vdf +vgcreate vg_fra /dev/vdg + + +lvcreate -n lv_data -l 100%FREE vg_data +lvcreate -n lv_fra -l 100%FREE vg_fra + +mkfs.xfs /dev/vg_data/lv_data +mkfs.xfs /dev/vg_fra/lv_fra + + +virsh detach-disk --domain mandalore /vm/hdd0/mandalore/data_01.img --persistent --config --live +virsh attach-interface --domain vortex-db01 --type network \ + 
--source br0 \ + --model virtio \ + --config --live + +virsh attach-interface --domain vortex-db01 --type bridge --source br0 --model virtio --config --live + +dd if=/dev/zero of=/vm/ssd0/vortex-rac/disk_array/asm_01.img bs=1G count=20 +dd if=/dev/zero of=/vm/ssd0/vortex-rac/disk_array/asm_02.img bs=1G count=20 +dd if=/dev/zero of=/vm/ssd0/vortex-rac/disk_array/asm_03.img bs=1G count=20 +dd if=/dev/zero of=/vm/ssd0/vortex-rac/disk_array/asm_04.img bs=1G count=20 +dd if=/dev/zero of=/vm/ssd0/vortex-rac/disk_array/asm_05.img bs=1G count=20 + + +virsh domblklist vortex-db01 --details + + +virsh attach-disk vortex-db01 --source /vm/ssd0/vortex-rac/disk_array/asm_01.img --target vde --persistent +virsh attach-disk vortex-db01 --source /vm/ssd0/vortex-rac/disk_array/asm_02.img --target vdf --persistent +virsh attach-disk vortex-db01 --source /vm/ssd0/vortex-rac/disk_array/asm_03.img --target vdg --persistent +virsh attach-disk vortex-db01 --source /vm/ssd0/vortex-rac/disk_array/asm_04.img --target vdh --persistent +virsh attach-disk vortex-db01 --source /vm/ssd0/vortex-rac/disk_array/asm_05.img --target vdi --persistent + +virsh attach-disk vortex-db02 --source /vm/ssd0/vortex-rac/disk_array/asm_01.img --target vde --persistent +virsh attach-disk vortex-db02 --source /vm/ssd0/vortex-rac/disk_array/asm_02.img --target vdf --persistent +virsh attach-disk vortex-db02 --source /vm/ssd0/vortex-rac/disk_array/asm_03.img --target vdg --persistent +virsh attach-disk vortex-db02 --source /vm/ssd0/vortex-rac/disk_array/asm_04.img --target vdh --persistent +virsh attach-disk vortex-db02 --source /vm/ssd0/vortex-rac/disk_array/asm_05.img --target vdi --persistent + + +# need PARTITIONS for ASM disk +fdisk /dev/vdXXXXX + +groupadd -g 54327 asmoper +groupadd -g 54328 asmdba +groupadd -g 54329 asmadmin + +useradd -g oinstall -G asmoper,asmdba,asmadmin -c "Grid Infrastructure Owner" grid +usermod -g oinstall -G asmdba,dba,oper -c "Oracle Software Owner" oracle + + +systemctl stop
firewalld.service +systemctl disable firewalld.service + +yum install -y kmod-oracleasm.x86_64 oracleasm-support +oracleasm configure -i +(choose grid for user and asmdba for group) +oracleasm init + + +oracleasm createdisk DATA_01 /dev/vde1 +oracleasm createdisk DATA_02 /dev/vdf1 +oracleasm createdisk DATA_03 /dev/vdg1 +oracleasm createdisk DATA_04 /dev/vdh1 +oracleasm createdisk DATA_05 /dev/vdi1 + + + +dd if=/dev/zero of=/vm/hdd0/vortex-rac/disk_array/asm_fra_01.img bs=1G count=20 +dd if=/dev/zero of=/vm/hdd0/vortex-rac/disk_array/asm_fra_02.img bs=1G count=20 +dd if=/dev/zero of=/vm/hdd0/vortex-rac/disk_array/asm_fra_03.img bs=1G count=20 +dd if=/dev/zero of=/vm/hdd0/vortex-rac/disk_array/asm_fra_04.img bs=1G count=20 + + +virsh attach-disk vortex-db01 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_01.img --target vdj --persistent +virsh attach-disk vortex-db01 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_02.img --target vdk --persistent +virsh attach-disk vortex-db01 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_03.img --target vdl --persistent +virsh attach-disk vortex-db01 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_04.img --target vdm --persistent + + +virsh attach-disk vortex-db02 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_01.img --target vdj --persistent +virsh attach-disk vortex-db02 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_02.img --target vdk --persistent +virsh attach-disk vortex-db02 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_03.img --target vdl --persistent +virsh attach-disk vortex-db02 --source /vm/hdd0/vortex-rac/disk_array/asm_fra_04.img --target vdm --persistent + + + +oracleasm createdisk RECO_01 /dev/vdj1 +oracleasm createdisk RECO_02 /dev/vdk1 +oracleasm createdisk RECO_03 /dev/vdl1 +oracleasm createdisk RECO_04 /dev/vdm1 diff --git a/tiddlywiki/KVM notes.txt b/tiddlywiki/KVM notes.txt new file mode 100755 index 0000000..ba06810 --- /dev/null +++ b/tiddlywiki/KVM notes.txt @@ -0,0 +1,91 @@ +-- virsh useful commands
+------------------------- +# create new domain +virt-install \ + --graphics vnc,listen=0.0.0.0 \ + --name=mandalore \ + --vcpus=2 \ + --memory=4096 \ + --network bridge=br0 \ + --network bridge=br0 \ + --cdrom=/mnt/yavin4/kit/CentOS/CentOS-8.2.2004-x86_64-minimal.iso \ + --disk /datastore/mandalore/hdd_01.img,size=6 \ + --os-variant=centos8 + +# get OS Variant +osinfo-query os + +# destroy a domain +virsh destroy mandalore + +# delete VM and underlying storage +virsh undefine mandalore --remove-all-storage + + +# adding disk to VM +# on Dom0 create the disk (RAW format in this example) +qemu-img create -f raw /datastore/mandalore/app_01.img 8G +# change the owner of the image and its permissions +chown qemu:qemu /datastore/mandalore/app_01.img +chmod 600 /datastore/mandalore/app_01.img +# on DomU list block devices +lsblk +# or to have the sorted list of block devices +fdisk -l | grep '^Disk /dev/vd[a-z]' +# pick the next available device, ex: vdb +# return to Dom0 and attach the disk +virsh attach-disk mandalore /datastore/mandalore/app_01.img vdb --driver qemu --subdriver raw --targetbus virtio --persistent +# to list the disks of a domain, execute from Dom0: +virsh domblklist seedmachine --details + +# to detach a disk +virsh detach-disk mandalore vdb --persistent + + +# to list the network interfaces of a domain, execute from Dom0: +virsh domiflist mandalore +# add network interface +virsh attach-interface --domain vortex-db01 --type bridge --source br0 --model virtio --persistent +# remove network interface +virsh detach-interface --domain ylesia-db01 --mac 52:54:00:8f:40:3c --type bridge + + +# dump domain XML config file +virsh dumpxml mandalore + +# define domain from XML config file +virsh define --file /mnt/yavin4/tmp/seedmachine.xml + + +# list all defined pools on Dom0 +virsh pool-list --all + +# deleting a pool +virsh pool-destroy atrisia1 +virsh pool-undefine atrisia1 + +# import (define) VM from XML file +virsh define
/mnt/yavin4/data/d.backup_vm/KVM_seed/Centos8_2020-10-25/seedmachine.xml + +# clone VM +virt-clone \ + --original mandalore \ + --name ossus \ + --file /datastore/ossus/hdd_01.img \ + --file /datastore/ossus/app_01.img + +# KVM BUG: error: internal error: unknown feature amd-sev-es +Workaround: +mkdir -p /etc/qemu/firmware +touch /etc/qemu/firmware/50-edk2-ovmf-cc.json + +# Install KVM on CentOS8 +https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-8-headless-server/ + +# Online unicast MAC address generator for network interface +https://www.hellion.org.uk/cgi-bin/randmac.pl + +# Static MAC Generator for KVM +# from http://blog.zencoffee.org/2016/06/static-mac-generator-kvm/ +MAC=$(date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/') +echo $MAC diff --git a/tiddlywiki/LVM - create PV_VG_LV and file system.txt b/tiddlywiki/LVM - create PV_VG_LV and file system.txt new file mode 100755 index 0000000..3564385 --- /dev/null +++ b/tiddlywiki/LVM - create PV_VG_LV and file system.txt @@ -0,0 +1,31 @@ +# display device information +fdisk -l /dev/xvdf + +# create PV and VG +pvcreate /dev/xvdf +vgcreate vg_fra /dev/xvdf + +vgdisplay vg_fra -v + +# create LV using 100% of free space in the VG +lvcreate -n lv_fra -l 100%FREE vg_fra + +# extend LV using 100% of free space in the VG +lvextend -l +100%FREE /dev/vg-test/lv-test + +# create XFS file system on LV +mkfs.xfs /dev/vg_fra/lv_fra + +# mount the file system +mkdir -p /fra +mount /dev/vg_fra/lv_fra /fra + +df -hT /fra + +# fstab entry +/dev/mapper/vg_fra-lv_fra /fra xfs defaults 1 1 + + +umount /fra +mount -a +df -hT diff --git a/tiddlywiki/LVM - extend VG_LV and file system.txt b/tiddlywiki/LVM - extend VG_LV and file system.txt new file mode 100755 index 0000000..630d397 --- /dev/null +++ b/tiddlywiki/LVM - extend VG_LV and file system.txt @@ -0,0 +1,8 @@ +-- create a new PV with the new device +pvcreate /dev/xvdg +-- extend the VG +vgextend vg_app
/dev/xvdg +-- extend the LV +lvextend -l +100%FREE /dev/vg_app/lv_app +-- extend XFS file system +xfs_growfs /app diff --git a/tiddlywiki/LVM example.txt b/tiddlywiki/LVM example.txt new file mode 100755 index 0000000..39c7a97 --- /dev/null +++ b/tiddlywiki/LVM example.txt @@ -0,0 +1,25 @@ +lvdisplay +vgdisplay +pvdisplay + + +pvcreate /dev/xvdd1 +pvcreate /dev/xvde1 + + +vgextend vg_pgdata /dev/xvdd1 /dev/xvde1 + +lvextend -l +100%FREE /dev/vg_pgdata/lv_pgdata + + +-- For EXT4 partitions: +resize2fs /dev/vg_pgdata/lv_pgdata + + +-- For XFS: +xfs_growfs -d /dev/vg_pgdata/lv_pgdata + +-- to avoid WARNING: Not using lvmetad because duplicate PVs were found +-- add in /etc/lvm/lvm.conf +global_filter = [ "a|/dev/xvd*|", "r|/dev/sd*|" ] + diff --git a/tiddlywiki/LVM snapshots.txt b/tiddlywiki/LVM snapshots.txt new file mode 100755 index 0000000..fe25a57 --- /dev/null +++ b/tiddlywiki/LVM snapshots.txt @@ -0,0 +1,67 @@ +-- setup +pvcreate /dev/xvdc1 +pvcreate /dev/xvdd1 + +pvs + PV VG Fmt Attr PSize PFree + /dev/xvdc1 lvm2 --- <100.00g <100.00g + /dev/xvdd1 lvm2 --- <100.00g <100.00g + +vgcreate vg_data /dev/xvdc1 /dev/xvdd1 + +vgs + VG #PV #LV #SN Attr VSize VFree + vg_data 2 0 0 wz--n- 199.99g 199.99g + + +lvcreate -n lv_data -L 99G vg_data + +mkfs.xfs /dev/vg_data/lv_data + +mkdir /mnt/{original,snap} + +mount /dev/vg_data/lv_data /mnt/original +echo "/dev/vg_data/lv_data /mnt/original xfs defaults 0 0" >> /etc/fstab + +-- snapshot creation +lvcreate -L 99G -s /dev/vg_data/lv_data -n lv_snapshot + +-- mount the snapshot LV (on XFS you should use -o nouuid option) +mount -o nouuid /dev/vg_data/lv_snapshot /mnt/snap/ + +-- emptying a file on the snapshot FS +> /mnt/snap/file_90G.raw + +df -h /mnt/snap +Filesystem Size Used Avail Use% Mounted on +/dev/mapper/vg_data-lv_snapshot 99G 33M 99G 1% /mnt/snap + +-- changes on the snapshot FS do not affect the Data% usage on the snapshot LV +lvs + LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert + lv_data vg_data
owi-aos--- 99.00g + lv_snapshot vg_data swi-aos--- 99.00g lv_data 0.00 + +-- change 10Gb of data on source LV +dd if=/dev/zero of=/mnt/original/file_90G.raw bs=1G count=10 + +lvdisplay /dev/vg_data/lv_snapshot | grep "Allocated to snapshot" + Allocated to snapshot 10.14% + +-- revert to a snapshot +umount /mnt/{original,snap} + +lvconvert --merge /dev/vg_data/lv_snapshot + +-- if the COW space is exhausted, the LV snapshot status becomes INACTIVE +-- we cannot revert from an INACTIVE snapshot + +lvdisplay /dev/vg_data/lv_snapshot | grep "LV snapshot status" + LV snapshot status INACTIVE destination for lv_data + +lvconvert --merge /dev/vg_data/lv_snapshot + Unable to merge invalidated snapshot LV "lv_snapshot". + +-- remove a snapshot +lvremove /dev/vg_data/lv_snapshot + diff --git a/tiddlywiki/Les verbes irreguliers.tid b/tiddlywiki/Les verbes irreguliers.tid new file mode 100755 index 0000000..c5175c4 --- /dev/null +++ b/tiddlywiki/Les verbes irreguliers.tid @@ -0,0 +1,186 @@ +created: 20191023142957496 +creator: vplesnila +modified: 20191107143218857 +modifier: vplesnila +tags: English +title: Les verbes irréguliers +type: text/vnd.tiddlywiki + +|!Anglais ( Infinitif )|!Prétérit|!Participe passé|!Français ( Infinitif )| +|abide|abode|abode|souffrir, supporter | +|arise|arose|arisen|survenir| +|awake|awoke|awoken|se réveiller| +|be|was, were|been|être| +|bear|bore|borne / born|porter / supporter| +|beat|beat|beaten|battre| +|become|became|become|devenir| +|beget|begat / begot|begotten|engendrer| +|begin|began|begun|commencer| +|bend|bent|bent|plier / se courber| +|bereave|bereft / bereaved|bereft / bereaved|déposséder / priver| +|bet|bet|bet|parier| +|bid|bid / bade|bid / bidden|offrir| +|bite|bit|bitten|mordre| +|bleed|bled|bled|saigner| +|blow|blew|blown|souffler / gonfler| +|break|broke|broken|casser| +|breed|bred|bred|élever (des animaux)| +|bring|brought|brought|apporter| +|broadcast|broadcast|broadcast|diffuser / émettre| +|build|built|built|construire|
+|burn|burnt / burned|burnt / burned|brûler| +|burst|burst|burst|éclater| +|buy|bought|bought|acheter| +|can|could|could|pouvoir| +|cast|cast|cast|jeter / distribuer (rôles)| +|catch|caught|caught|attraper| +|chide|chid|chidden|gronder| +|choose|chose|chosen|choisir| +|cling|clung|clung|s’accrocher| +|clothe|clad / clothed|clad / clothed|habiller / recouvrir| +|come|came|come|venir| +|cost|cost|cost|coûter| +|creep|crept|crept|ramper| +|cut|cut|cut|couper| +|deal|dealt|dealt|distribuer| +|dig|dug|dug|creuser| +|dive|dived|dived / dove|plonger| +|do|did|done|faire| +|draw|drew|drawn|dessiner / tirer| +|dream|dreamt / dreamed|dreamt / dreamed|rêver| +|drink|drank|drunk|boire| +|drive|drove|driven|conduire| +|dwell|dwelt|dwelt / dwelled|habiter| +|eat|ate|eaten|manger| +|fall|fell|fallen|tomber| +|feed|fed|fed|nourrir| +|feel|felt|felt|se sentir / ressentir| +|fight|fought|fought|se battre| +|find|found|found|trouver| +|flee|fled|fled|s’enfuir| +|fling|flung|flung|lancer| +|fly|flew|flown|voler| +|forbid|forbade|forbidden|interdire| +|forecast|forecast|forecast|prévoir| +|forget|forgot|forgotten / forgot|oublier| +|forgive|forgave|forgiven|pardonner| +|forsake|forsook|forsaken|abandonner| +|foresee|foresaw|foreseen|prévoir / présentir| +|freeze|froze|frozen|geler| +|get|got|gotten / got|obtenir| +|give|gave|given|donner| +|go|went|gone|aller| +|grind|ground|ground|moudre / opprimer| +|grow|grew|grown|grandir / pousser| +|hang|hung|hung|tenir / pendre| +|have|had|had|avoir| +|hear|heard|heard|entendre| +|hide|hid|hidden|cacher| +|hit|hit|hit|taper / appuyer| +|hold|held|held|tenir| +|hurt|hurt|hurt|blesser| +|keep|kept|kept|garder| +|kneel|knelt / kneeled|knelt / kneeled|s’agenouiller| +|know|knew|known|connaître / savoir| +|lay|laid|laid|poser| +|lead|led|led|mener / guider| +|lean|leant / leaned|leant / leaned|s’incliner / se pencher| +|leap|leapt / leaped|leapt / leaped|sauter / bondir| +|learn|learnt|learnt|apprendre| +|leave|left|left|laisser / quitter / partir|
+|lend|lent|lent|prêter| +|let|let|let|permettre / louer / laisser| +|lie|lay|lain|s’allonger| +|light|lit / lighted|lit / lighted|allumer| +|lose|lost|lost|perdre| +|make|made|made|fabriquer| +|mean|meant|meant|signifier| +|meet|met|met|rencontrer| +|mow|mowed|mowed / mown|tondre| +|offset|offset|offset|compenser| +|overcome|overcame|overcome|surmonter| +|partake|partook|partaken|prendre part à| +|pay|paid|paid|payer| +|plead|pled / pleaded|pled / pleaded|supplier / plaider| +|preset|preset|preset|programmer| +|prove|proved|proven / proved|prouver| +|put|put|put|mettre| +|quit|quit|quit|quitter| +|read|read|read|lire| +|relay|relaid|relaid|relayer| +|rend|rent|rent|déchirer| +|rid|rid|rid|débarrasser| +|ride|rode|ridden|monter (vélo, cheval)| +|ring|rang|rung|sonner / téléphoner| +|rise|rose|risen|lever| +|run|ran|run|courir| +|saw|saw / sawed|sawn / sawed|scier| +|say|said|said|dire| +|see|saw|seen|voir| +|seek|sought|sought|chercher| +|sell|sold|sold|vendre| +|send|sent|sent|envoyer| +|set|set|set|fixer| +|shake|shook|shaken|secouer| +|shed|shed|shed|répandre / laisser tomber| +|shine|shone|shone|briller| +|shoe|shod|shod|chausser| +|shoot|shot|shot|tirer / fusiller| +|show|showed|shown|montrer| +|shut|shut|shut|fermer| +|sing|sang|sung|chanter| +|sink|sank / sunk|sunk / sunken|couler| +|sit|sat|sat|s’asseoir| +|slay|slew|slain|tuer| +|sleep|slept|slept|dormir| +|slide|slid|slid|glisser| +|slink|slunk / slinked|slunk / slinked|s’en aller furtivement| +|slit|slit|slit|fendre| +|smell|smelt|smelt|sentir| +|sow|sowed|sown / sowed|semer| +|speak|spoke|spoken|parler| +|speed|sped|sped|aller vite| +|spell|spelt|spelt|épeler / orthographier| +|spend|spent|spent|dépenser / passer du temps| +|spill|spilt / spilled|spilt / spilled|renverser| +|spin|spun|spun|tourner / faire tourner| +|spit|spat / spit|spat / spit|cracher| +|split|split|split|fendre| +|spoil|spoilt|spoilt|gâcher / gâter| +|spread|spread|spread|répandre| +|spring|sprang|sprung|surgir / jaillir / bondir| 
+|stand|stood|stood|être debout| +|steal|stole|stolen|voler / dérober| +|stick|stuck|stuck|coller| +|sting|stung|stung|piquer| +|stink|stank|stunk|puer| +|strew|strewed|strewn / strewed|éparpiller| +|strike|struck|stricken / struck|frapper| +|strive|strove|striven|s’efforcer| +|swear|swore|sworn|jurer| +|sweat|sweat / sweated|sweat / sweated|suer| +|sweep|swept|swept|balayer| +|swell|swelled|swollen / swelled|gonfler / enfler| +|swim|swam|swum|nager| +|swing|swung|swung|se balancer| +|take|took|taken|prendre| +|teach|taught|taught|enseigner| +|tear|tore|torn|déchirer| +|tell|told|told|dire / raconter| +|think|thought|thought|penser| +|thrive|throve / thrived|thriven / thrived|prospérer| +|throw|threw|thrown|jeter| +|thrust|thrust|thrust|enfoncer| +|tread|trod|trodden|piétiner quelque chose| +|typeset|typeset|typeset|composer| +|undergo|underwent|undergone|subir| +|understand|understood|understood|comprendre| +|wake|woke|woken|réveiller| +|wear|wore|worn|porter (avoir sur soi)| +|weep|wept|wept|pleurer| +|wet|wet / wetted|wet / wetted|mouiller| +|win|won|won|gagner| +|wind|wound|wound|enrouler / remonter| +|withdraw|withdrew|withdrawn|se retirer| +|wring|wrung|wrung|tordre| +|write|wrote|written|écrire| \ No newline at end of file diff --git a/tiddlywiki/Linux - listening ports.txt b/tiddlywiki/Linux - listening ports.txt new file mode 100755 index 0000000..d0950c5 --- /dev/null +++ b/tiddlywiki/Linux - listening ports.txt @@ -0,0 +1 @@ +alias listen='lsof -i -P | grep -i "listen"' diff --git a/tiddlywiki/Linux - remove a systemd service.txt b/tiddlywiki/Linux - remove a systemd service.txt new file mode 100755 index 0000000..4151ed3 --- /dev/null +++ b/tiddlywiki/Linux - remove a systemd service.txt @@ -0,0 +1,6 @@ +systemctl stop <service>.service +systemctl disable <service>.service +rm -rf /etc/systemd/system/<service>.service +rm -rf /usr/lib/systemd/system/<service>.service +systemctl daemon-reload +systemctl reset-failed diff --git a/tiddlywiki/Linux 1.tid b/tiddlywiki/Linux 1.tid new file mode
100755 index 0000000..e3431ca --- /dev/null +++ b/tiddlywiki/Linux 1.tid @@ -0,0 +1,9 @@ +color: #424200 +created: 20190622232815693 +creator: vplesnila +modified: 20190622233250612 +modifier: vplesnila +tags: Contents +title: Linux +type: text/vnd.tiddlywiki + diff --git a/tiddlywiki/Linux Mint.md b/tiddlywiki/Linux Mint.md new file mode 100755 index 0000000..37f6183 --- /dev/null +++ b/tiddlywiki/Linux Mint.md @@ -0,0 +1,3 @@ +[Install Touchegg](https://ubuntuhandbook.org/index.php/2021/06/multi-touch-gestures-ubuntu-20-04/) +[How to add a shell script to launcher as shortcut](https://askubuntu.com/questions/141229/how-to-add-a-shell-script-to-launcher-as-shortcut) + diff --git a/tiddlywiki/Linux perf tips.txt b/tiddlywiki/Linux perf tips.txt new file mode 100755 index 0000000..0ccda5c --- /dev/null +++ b/tiddlywiki/Linux perf tips.txt @@ -0,0 +1,4 @@ +iostat -x 2 5 + +# show the environment variables of a Linux process +cat /proc//environ | tr '\0' '\n' diff --git a/tiddlywiki/Linux prompt examples.txt b/tiddlywiki/Linux prompt examples.txt new file mode 100755 index 0000000..a3313d9 --- /dev/null +++ b/tiddlywiki/Linux prompt examples.txt @@ -0,0 +1,5 @@ +export PS1='`whoami`@`hostname | cut -d "." 
-f1`:${PWD}> ' +# PROMPT: pgadmin@mobus/home/pgadmin> + +export PS1='$USER@`hostname`[$ORACLE_SID]:$PWD\$ ' +# PROMPT: oracle@ambria.swgalaxy[C3PXPRD]:/app$ diff --git a/tiddlywiki/Markdown example 01.md b/tiddlywiki/Markdown example 01.md new file mode 100755 index 0000000..6263f95 --- /dev/null +++ b/tiddlywiki/Markdown example 01.md @@ -0,0 +1,22 @@ +Inline code: file `myfile.txt` is a good example :) + +Code example: +```sql +SET SERVEROUTPUT ON +SET FEEDBACK OFF +declare + s1 varchar2(32767); s2 varchar2(32767); + CURSOR c_user_tablespaces is + select tablespace_name + from dba_tablespaces + where contents not in ('UNDO','TEMPORARY') and tablespace_name not in ('SYSTEM','SYSAUX'); + +BEGIN + for r_user_tablespaces in c_user_tablespaces + loop + s1 := s1 || r_user_tablespaces.tablespace_name || ','; + s2 := s2 || r_user_tablespaces.tablespace_name || ''','|| chr(13)||''''; + end loop; +END; +/ +``` \ No newline at end of file diff --git a/tiddlywiki/Markdown test 2.md b/tiddlywiki/Markdown test 2.md new file mode 100755 index 0000000..a1c159f --- /dev/null +++ b/tiddlywiki/Markdown test 2.md @@ -0,0 +1,110 @@ +## My title 2 +#### My subtitle 2 + +Bla, blah!Bla, blah!Bla, blah!Bla, blah!Bla, blah!Bla, blah!Bla, blah! + +#### My subtitle 2bis + + +```ruby +def index + puts "hello world" +end +``` + +Bla bla bla, +bla bla bla +```sql +select * from dual; +``` + +# Markdown syntax guide + +## Headers + +# This is a Heading h1 +## This is a Heading h2 +###### This is a Heading h6 + +## Emphasis + +*This text will be italic* +_This will also be italic_ + +**This text will be bold** +__This will also be bold__ + +_You **can** combine them_ + +## Lists + +### Unordered + +* Item 1 +* Item 2 +* Item 2a +* Item 2b + +### Ordered + +1. Item 1 +1. Item 2 +1. Item 3 + 1. Item 3a + 1. Item 3b + +## Images + +![This is an alt text.](/image/sample.png "This is a sample image.") + +## Links + +You may be using [Markdown Live Preview](https://markdownlivepreview.com/). 
+ +## Blockquotes + +> Markdown is a lightweight markup language with plain-text-formatting syntax, created in 2004 by John Gruber with Aaron Swartz. +> +>> Markdown is often used to format readme files, for writing messages in online discussion forums, and to create rich text using a plain text editor. + +## Tables + +| Left columns | Right columns | +| ------------- |:-------------:| +| left foo | right foo | +| left bar | right bar | +| left baz | right baz | + + +| Tables | Are | Cool | +|----------|:-------------:|------:| +| col 1 is | left-aligned | $1600 | +| col 2 is | centered | $12 | +| col 3 is | right-aligned | $1 | + +| Script | Description | Example | +|----------|-------------|---------| +|... |... |... | +|... |... |... | +|... |... |... | +|... |... |... | +|... |... |... | +|... |... |... | + +## Blocks of code + +``` +let message = 'Hello world'; +alert(message); +``` + +## Inline code + +This web site is using `markedjs/marked`. + + + + - Antebellum - film 2020 - AlloCiné + - Mission trésor - film 2017 - AlloCiné (Noah) + - Color Out of Space - film 2019 - AlloCiné + - Miss Fisher & the Crypt of Tears (2020) - IMDb + - Ghost Killers vs. Bloody Mary - film 2017 - AlloCiné + diff --git a/tiddlywiki/Markdown test.md b/tiddlywiki/Markdown test.md new file mode 100755 index 0000000..e31a8c2 --- /dev/null +++ b/tiddlywiki/Markdown test.md @@ -0,0 +1,67 @@ +# Markdown syntax guide + +## Headers + +# This is a Heading h1 +## This is a Heading h2 +###### This is a Heading h6 + +## Emphasis + +*This text will be italic* +_This will also be italic_ + +**This text will be bold** +__This will also be bold__ + +_You **can** combine them_ + +## Lists + +### Unordered + +* Item 1 +* Item 2 +* Item 2a +* Item 2b + +### Ordered + +1. Item 1 +1. Item 2 +1. Item 3 + 1. Item 3a + 1. 
Item 3b + +## Images + +![This is an alt text.](/image/sample.png "This is a sample image.") + +## Links + +You may be using [Markdown Live Preview](https://markdownlivepreview.com/). + +## Blockquotes + +> Markdown is a lightweight markup language with plain-text-formatting syntax, created in 2004 by John Gruber with Aaron Swartz. +> +>> Markdown is often used to format readme files, for writing messages in online discussion forums, and to create rich text using a plain text editor. + +## Tables + +| Left columns | Right columns | +| ------------- |:-------------:| +| left foo | right foo | +| left bar | right bar | +| left baz | right baz | + +## Blocks of code + +``` +let message = 'Hello world'; +alert(message); +``` + +## Inline code + +This web site is using `markedjs/marked`. diff --git a/tiddlywiki/Migration + upgrade cross-platform using incremental backups.txt b/tiddlywiki/Migration + upgrade cross-platform using incremental backups.txt new file mode 100755 index 0000000..244e894 --- /dev/null +++ b/tiddlywiki/Migration + upgrade cross-platform using incremental backups.txt @@ -0,0 +1,287 @@ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~ Walkthrough for: +~~ V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This procedure applies to an 11.2.0.4 or higher source database. 
+The target database can be at a higher version than the source database (upgrade) + +~~~~~~~~~~~~~~~~~~~~~~~~ +~~ Source database setup +~~~~~~~~~~~~~~~~~~~~~~~~ +initGREEDOPRD.ora: + +db_name=GREEDO +instance_name=GREEDOPRD +db_unique_name=GREEDOPRD +compatible=11.2.0.0 +control_files=(/data/GREEDOPRD/control01.ctl) +db_create_file_dest=/data +db_create_online_log_dest_1=/data +db_recovery_file_dest_size=4G +db_recovery_file_dest=/fra +log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' +log_archive_format=%t_%s_%r.arc +db_block_size=8192 +open_cursors=300 +diagnostic_dest=/app/oracle/base/admin/GREEDOPRD +sga_max_size=3G +sga_target=3G +pga_aggregate_target=512M +processes=350 +audit_file_dest=/app/oracle/base/admin/GREEDOPRD/adump +audit_trail=db +remote_login_passwordfile=exclusive +undo_tablespace=UNDOTBS + + +-- tablespace setup +create tablespace TS1 datafile size 16M autoextend ON next 16M; +create tablespace TS2 datafile size 16M autoextend ON next 16M; +create tablespace TS3 datafile size 16M autoextend ON next 16M; + +alter tablespace TS1 add datafile size 16M autoextend ON next 16M; +alter tablespace TS1 add datafile size 16M autoextend ON next 16M; +alter tablespace TS2 add datafile size 16M autoextend ON next 16M; +alter tablespace TS2 add datafile size 16M autoextend ON next 16M; +alter tablespace TS2 add datafile size 16M autoextend ON next 16M; + +-- schema setup +grant connect, resource, unlimited tablespace to user1 identified by user1; +grant connect, resource, unlimited tablespace to user2 identified by user2; + +grant create view to user1; +grant create view to user2; + +create profile STANDARD_USER limit + SESSIONS_PER_USER 10 + CONNECT_TIME 30; + +create profile VIP_USER limit + SESSIONS_PER_USER 20 + CONNECT_TIME 60; + +alter user user1 profile STANDARD_USER; +alter user user2 profile VIP_USER; + +-- schema contents setup +create table user1.tab1 as select * from dba_extents; +alter table user1.tab1 move tablespace TS1; +insert into 
user1.tab1 select * from user1.tab1; +insert into user1.tab1 select * from user1.tab1; +insert into user1.tab1 select * from user1.tab1; +commit; +insert into user1.tab1 select * from user1.tab1; +insert into user1.tab1 select * from user1.tab1; +commit; + +create table user2.tab2 as select * from user1.tab1; +insert into user2.tab2 select * from user2.tab2; +commit; +insert into user2.tab2 select * from user2.tab2; +commit; + +alter table user1.tab1 move tablespace TS2; + +create index user1.ind1 on user1.tab1(blocks) tablespace TS3; +create index user2.ind2 on user2.tab2(blocks) tablespace TS3; + +alter table user2.tab2 move tablespace TS2; +alter index user2.ind2 rebuild tablespace TS3; + + + +create table user1.message(m varchar2(30), d date) tablespace TS3; +insert into user1.message values('Setup',sysdate); +commit; + + +grant select on v_$session to user1; +grant select on v_$tablespace to user2; + +connect user1/user1 +create view sess as select * from v$session; + + +connect user2/user2 + +create or replace procedure TSLIST +is + cursor c_ts is select * from v$tablespace; +begin + for r_ts in c_ts + loop + dbms_output.put_line( 'Tablespace: ' ||r_ts.name ); + end loop; +end; +/ + + +-- check if the tablespaces are self_contained +SQL> exec sys.dbms_tts.transport_set_check(ts_list => 'TS1,TS2,TS3', incl_constraints => true); +SQL> Select * from transport_set_violations; + +PL/SQL procedure successfully completed. 
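The self-containment property that `dbms_tts.transport_set_check` verifies can be sketched in a few lines of Python (a toy model for illustration only; the `transport_set_violations` function and the segment data below are hypothetical, not an Oracle API): every segment inside the transport set whose object depends on a segment in another tablespace must find that dependency inside the set too, e.g. an index in TS3 built on a table in TS2 is fine only if both TS2 and TS3 are transported.

```python
# Toy illustration of the "self-contained" rule enforced by
# dbms_tts.transport_set_check. All names and data are hypothetical.

def transport_set_violations(segments, transport_set):
    """segments: list of (name, tablespace, dependency_tablespace or None).
    Returns a violation message per in-set segment whose dependency
    lives outside the transport set."""
    ts = set(transport_set)
    violations = []
    for name, tbs, dep_tbs in segments:
        if tbs in ts and dep_tbs is not None and dep_tbs not in ts:
            violations.append(f"{name}: dependency in {dep_tbs} outside transport set")
    return violations

segments = [
    ("USER1.TAB1", "TS2", None),
    ("USER1.IND1", "TS3", "TS2"),   # index on USER1.TAB1
    ("USER2.TAB2", "TS2", None),
]

# Full set TS1,TS2,TS3: self-contained, no violations expected.
print(transport_set_violations(segments, ["TS1", "TS2", "TS3"]))
# TS3 alone: IND1's underlying table lives in TS2, outside the set.
print(transport_set_violations(segments, ["TS3"]))
```

In the walkthrough above, the empty `transport_set_violations` view plays the role of the empty list here: any row returned means the chosen tablespaces cannot be transported as-is.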
+ +-- backup source database +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/GREEDO/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/GREEDO/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/GREEDO/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/GREEDO/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database include current controlfile plus archivelog delete input; +} + + +~~~~~~~~~~~~~~~~~~~~~~~~ +~~ Target database setup +~~~~~~~~~~~~~~~~~~~~~~~~ +initWEDGEPRD.ora: + +db_name=WEDGE +instance_name=WEDGEPRD +db_unique_name=WEDGEPRD +compatible=19.0.0.0.0 +control_files=(/data/WEDGEPRD/control01.ctl) +db_create_file_dest=/data +db_create_online_log_dest_1=/data +db_recovery_file_dest_size=4G +db_recovery_file_dest=/fra +log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' +log_archive_format=%t_%s_%r.arc +db_block_size=8192 +open_cursors=300 +diagnostic_dest=/app/oracle/base/admin/WEDGEPRD +sga_max_size=3G +sga_target=3G +pga_aggregate_target=512M +pga_aggregate_limit=2G +processes=350 +audit_file_dest=/app/oracle/base/admin/WEDGEPRD/adump +audit_trail=db +remote_login_passwordfile=exclusive +undo_tablespace=TBS_UNDO + + +-- backup target database +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/WEDGE/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/WEDGE/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/WEDGE/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/WEDGE/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database include current controlfile plus archivelog delete input; +} + + +~~ + +-- 
download scripts (attached to note, currently: rman_xttconvert_VER4.3.zip) to source machine +-- unzip to a temporary location +-- edit xtt.properties file at least with the mandatory fields: + + tablespaces + platformid + src_scratch_location + dest_scratch_location + dest_datafile_location + (if using 12c) -- usermantransport=1 + + +-- get PLATFORM_ID for SOURCE and DESTINATION databases +SQL> select PLATFORM_ID from V$DATABASE; + +-- once xtt.properties OK on source, copy to dest in $TMPDIR + +-- set the TMPDIR environment variable for BOTH machines: +export TMPDIR=/mnt/yavin4/tmp/_oracle_/tmp/TEMP_SOURCE_XTTCONVERT +export TMPDIR=/mnt/yavin4/tmp/_oracle_/tmp/TEMP_DEST_XTTCONVERT + + +-- Run the backup on the source system +cd $TMPDIR +$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup + + +-- Restore the datafiles on the destination system +cd $TMPDIR +$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore + + +-- Roll Forward Phase +-- as long as necessary, perform backup/restore (incremental!) using the previous commands + +-- in order to trace, we add a new datafile and some data + +insert into user1.message values('Roll Forward Phase',sysdate); +commit; + +alter tablespace TS2 add datafile size 8M autoextend ON next 8M; + + + + +-- Final Incremental Backup Phase +-- If you are running 12c, this step can be replaced by Phase 4 in Note 2005729.1 + +insert into user1.message values('Just before RO tablespaces',sysdate); + +alter tablespace TS1 read only; +alter tablespace TS2 read only; +alter tablespace TS3 read only; + +-- take final incremental backup ignoring errors like: +ORA-20001: TABLESPACE(S) IS READONLY OR, +OFFLINE JUST CONVERT, COPY +ORA-06512: at line 284 + +cd $TMPDIR +$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup + +-- restore final incremental backup on target database +cd $TMPDIR +$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore + + +-- on source +------------ +mkdir -p /mnt/yavin4/tmp/_oracle_/tmp/DATAPUMP_SOURCE_XTTCONVERT +SQL> create directory 
DPUMP_TTS as '/mnt/yavin4/tmp/_oracle_/tmp/DATAPUMP_SOURCE_XTTCONVERT'; + +cd /mnt/yavin4/tmp/_oracle_/tmp/DATAPUMP_SOURCE_XTTCONVERT + +-- export metadata +expdp userid=system/secret directory=DPUMP_TTS LOGFILE=metadata.log FULL=y INCLUDE=USER,ROLE,ROLE_GRANT,PROFILE dumpfile=metadata.dmp CONTENT=METADATA_ONLY + +-- parfile exp.par: +dumpfile=xttdump.dmp +directory=DPUMP_TTS +statistics=NONE +transport_tablespaces=TS1,TS2,TS3 +transport_full_check=y +logfile=tts_export.log + +-- expdp en mode "transportable tablespace" +expdp userid=system/***** parfile=exp.par + +-- copy dumpfiles from source to destination +cp /mnt/yavin4/tmp/_oracle_/tmp/DATAPUMP_SOURCE_XTTCONVERT/xttdump.dmp /mnt/yavin4/tmp/_oracle_/tmp/DATAPUMP_DEST_XTTCONVERT/ + +-- on target +------------ +-- import metadata +impdp userid=system/secret directory=DPUMP_TTS dumpfile=metadata.dmp logfile=import_metadata.log remap_tablespace=TEMP:TMS_TEMP +-- import "transportable tablespace" +impdp userid=system/secret parfile=imp.par + + +~~~~~~~~~~~~~~ +~~ Other links +~~~~~~~~~~~~~~ +-- https://dbavivekdhiman.wordpress.com/2015/05/31/cross-platform-migration-from-aix-oracle-11-2-0-3-to-linux11-2-0-3/ +-- 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1) + + diff --git a/tiddlywiki/MongoDB - enable authentication using SCRAM-SHA-1.txt b/tiddlywiki/MongoDB - enable authentication using SCRAM-SHA-1.txt new file mode 100755 index 0000000..2ab8077 --- /dev/null +++ b/tiddlywiki/MongoDB - enable authentication using SCRAM-SHA-1.txt @@ -0,0 +1,24 @@ +-- create database for user management +use admin +-- create first superuser +> db.createUser({ user: "superhero", pwd: "secret", roles: ["root"]}); +-- to list all users +> show users + +-- add in MongoDB configuration file -> +security: + authorization: 'enabled' +<------------------------------------- + +-- restart MongoDB +systemctl stop mongod +systemctl start mongod + +-- to connect within mongo shell +> 
use admin +> db.auth('superhero', 'secret'); + +-- authentification at mongo shell connection +mongo --host frdrpsrv4483 --username "superhero" --password "secret" --authenticationDatabase "admin" + + diff --git a/tiddlywiki/MongoDB - example replication.txt b/tiddlywiki/MongoDB - example replication.txt new file mode 100755 index 0000000..e9d094e --- /dev/null +++ b/tiddlywiki/MongoDB - example replication.txt @@ -0,0 +1,43 @@ +-- create keyfile for communication between MongoDB instances +openssl rand -base64 756 > /app/mongodb/conf/keyfile.basic +chmod 600 /app/mongodb/conf/keyfile.basic + +-- copy keyfile on ivera-mongo02 and ivera-mongo03 + +-- mongod.conf on ivera-mongo01 +------------------------------- +storage: + dbPath: "/data/mongodb/" + journal: + enabled: true + wiredTiger: + engineConfig: + cacheSizeGB: 1 + +net: + port: 27017 + bindIp: 127.0.0.1,ivera-mongo01,ivera-mongo01-priv + +security: + authorization: 'enabled' + keyFile: /app/mongodb/conf/keyfile.basic + +replication: + replSetName: majrc + oplogSizeMB: 100 + enableMajorityReadConcern: true + + +-- similar config files on ivera-mongo02 and ivera-mongo03 + + +-- on ivera-mongo01 that will be defined as PRIMARY + +cfg = { "_id" : "majrc", "members" : [ { "_id" : 0, "host":"ivera-mongo01-priv:27017", } ] } +rs.initiate(cfg) + +rs.add('ivera-mongo02-priv:27017'); +rs.add('ivera-mongo03-priv:27017'); + +rs.conf(); +rs.status(); diff --git a/tiddlywiki/MongoDB - extrenal ressources.tid b/tiddlywiki/MongoDB - extrenal ressources.tid new file mode 100755 index 0000000..f77bdb7 --- /dev/null +++ b/tiddlywiki/MongoDB - extrenal ressources.tid @@ -0,0 +1,22 @@ +created: 20200207141410929 +creator: vplesnila +modified: 20210110091855530 +modifier: vplesnila +tags: MongoDB +title: MongoDB - extrenal ressources +type: text/vnd.tiddlywiki + +|!comment |!url | +|Guru99|https://www.guru99.com/mongodb-tutorials.html| +|Tutorials Point|https://www.tutorialspoint.com/mongodb/index.htm| 
+||http://andreiarion.github.io/TP7_MongoDB_Replication_exercices| +||https://medium.com/codeops/how-to-setup-a-mongodb-replica-set-918f21da50ed| +|Manual point-in-time recovery|https://www.tothenew.com/blog/mongo-point-in-time-restoration/| +|Shard setup example|https://www.linode.com/docs/guides/build-database-clusters-with-mongodb/| +|Shard setup example|https://www.howtoforge.com/tutorial/deploying-mongodb-sharded-cluster-on-centos-7/#three-sharding-components| +|MongoDB Workbook|http://nicholasjohnson.com/mongo/course/workbook/| +|MongoDB Exam Guide|https://university.mongodb.com/exam/guide| +|Sharding data collections with MongoDB|http://vargas-solar.com/big-data-analytics/hands-on/sharding/| +|Learn MongoDB The Hard Way|http://learnmongodbthehardway.com/| + + diff --git a/tiddlywiki/MongoDB - install on CentOS8.txt b/tiddlywiki/MongoDB - install on CentOS8.txt new file mode 100755 index 0000000..fa59fa5 --- /dev/null +++ b/tiddlywiki/MongoDB - install on CentOS8.txt @@ -0,0 +1,87 @@ +# Linux packages +dnf -y install wget net-snmp-agent-libs + +groupadd mongod +useradd mongod -g mongod -G mongod +mkdir -p /app/mongodb +chown -R mongod:mongod /app/mongodb + +# disable selinux +# in /etc/selinux/config --> +SELINUX=disabled +# <------------------------- + +# Disable Transparent Huge Pages (THP) following https://docs.mongodb.com/manual/tutorial/transparent-huge-pages/ + +su - mongod +cd /app/mongodb +mkdir product conf data log +cd product +wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel80-4.2.3.tgz +gunzip -c mongodb-linux-x86_64-rhel80-4.2.3.tgz | tar -xvf - +rm -rf mongodb-linux-x86_64-rhel80-4.2.3.tgz +ln -s mongodb-linux-x86_64-rhel80-4.2.3 current_version + +# create configuration file +# /app/mongodb/conf/mongod.conf --> +storage: + dbPath: "/app/mongodb/data" + journal: + enabled: true + +net: + port: 27017 + bindIp: 127.0.0.1,192.168.0.127 +# <------------------------------- + +# Test MongoDB server startup (press Ctrl-C to stop) 
+/app/mongodb/product/current_version/bin/mongod --config=/app/mongodb/conf/mongod.conf --logpath=/app/mongodb/log/mongod.log + +# Add to systemd as service +# create /etc/systemd/system/mongod.service --> +[Unit] +Description=MongoDB +After=multi-user.target + +[Service] +Type=simple +# (file size) +LimitFSIZE=infinity +# (cpu time) +LimitCPU=infinity +# (virtual memory size) +LimitAS=infinity +# (locked-in-memory size) +LimitMEMLOCK=infinity +# (open files) +LimitNOFILE=64000 +# (processes/threads) +LimitNPROC=64000 + +User=mongod +Group=mongod + +ExecStart=/app/mongodb/product/current_version/bin/mongod --config /app/mongodb/conf/mongod.conf --logpath=/app/mongodb/log/mongod.log + +[Install] +WantedBy=multi-user.target +# <-------------------------------------------- + +systemctl daemon-reload +systemctl status mongod +systemctl start mongod +systemctl status mongod +systemctl stop mongod +systemctl status mongod +systemctl start mongod +systemctl enable mongod + +# check listening port +lsof -i -P | grep -i "listen" + +# Test mongo shell +/app/mongodb/product/current_version/bin/mongo --host ajara + +# In order to avoid the message [Browserslist: caniuse-lite is outdated. 
Please run: npx browserslist@latest --update-db] when running the mongo shell: +export BROWSERSLIST_IGNORE_OLD_DATA=1 + diff --git a/tiddlywiki/MongoDB - point in time recovery.txt b/tiddlywiki/MongoDB - point in time recovery.txt new file mode 100755 index 0000000..472e18c --- /dev/null +++ b/tiddlywiki/MongoDB - point in time recovery.txt @@ -0,0 +1,63 @@ +~~ getting the min/max timestamp in the oplog can be done on the PRIMARY or on any SECONDARY member of a replica set +rs.slaveOk(); + +~~ display useful oplog information +rs.printReplicationInfo() + +use local +db.oplog.rs.find({}, {ts: 1,}).sort({ts: -1}).limit(1) +db.oplog.rs.find({}, {ts: 1,}).sort({ts: 1}).limit(1) + +~~ example +x=Timestamp(1590072867, 1) +>> Timestamp(1590072867, 1) +new Date(x.t * 1000) +>> ISODate("2020-05-21T14:54:27Z") + +x=Timestamp(1581603867, 1) +>> Timestamp(1581603867, 1) +new Date(x.t * 1000) +>> ISODate("2020-02-13T14:24:27Z") + +~~ note that an ISODate ending in Z is a UTC date +~~ pay attention to the difference between your local time and UTC; for example CEST=UTC+2 + +~~ example: find the min/max timestamp of oplog records for the last hour +var SECS_PER_HOUR = 3600 +var now = Math.floor((new Date().getTime()) / 1000) // seconds since epoch right now +db.oplog.rs.find({ "ts" : { "$lt" : Timestamp(now, 1), "$gt" : Timestamp(now - SECS_PER_HOUR, 1) } }).sort({ts:-1}).limit(1); +db.oplog.rs.find({ "ts" : { "$lt" : Timestamp(now, 1), "$gt" : Timestamp(now - SECS_PER_HOUR, 1) } }).sort({ts:1}).limit(1); + +~~ example: list oplog records between 2 dates +var since = Math.floor(ISODate("2020-05-21T15:43:16Z").getTime() / 1000) +var until = Math.floor(ISODate("2020-05-21T15:43:18Z").getTime() / 1000) +db.oplog.rs.find({ "ts" : { "$lt" : Timestamp(until, 1), "$gt" : Timestamp(since, 1) } }) + +~~ example: get the last oplog record before a date (useful for Point In Time Recovery) +var until = Math.floor(ISODate("2020-05-22T15:08:25Z").getTime() / 1000) +db.oplog.rs.find({ "ts" : { "$lt" : 
Timestamp(until, 1) } }).sort({ts:-1}).limit(1); + +~~ the oplog is a collection, so it can be dumped using the mongodump tool +mongodump -u superhero -p secret --authenticationDatabase admin -d local -c oplog.rs -o oplogdump +~~ the format is BSON; if you want to query it, convert the file to JSON format: +cd oplogdump/local +bsondump oplog.rs.bson > archive.json + +~~ Point In Time Recovery example for PITR=2020-05-22 17:08:25 CEST +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~ create a new empty stand-alone MongoDB instance without authentication or replica set settings in its configuration file +~~ restore the last FULL BACKUP of data before your PITR +~~ convert CEST to UTC: PITR=2020-05-22T15:08:25Z and note it down + +~~ find the corresponding Timestamp in the oplog +var until = Math.floor(ISODate("2020-05-22T15:08:25Z").getTime() / 1000) +db.oplog.rs.find({ "ts" : { "$lt" : Timestamp(until, 1) } }).sort({ts:-1}).limit(1); +~~ in my example I obtained Timestamp(1590160104, 1) + +~~ copy the oplog.rs.bson file locally into an EMPTY folder and rename it oplog.bson +~~ in my example, the folder is: /mnt/yavin4/tmp/_mongodb_/tmp +~~ optionally perform a dry run in order to check your mongorestore command +mongorestore --dryRun --oplogReplay --oplogLimit 1590160104:1 /mnt/yavin4/tmp/_mongodb_/tmp +~~ recover until the corresponding Timestamp +mongorestore --oplogReplay --oplogLimit 1590160104:1 /mnt/yavin4/tmp/_mongodb_/tmp + diff --git a/tiddlywiki/MongoDB - reconfigure replicaset examples.md b/tiddlywiki/MongoDB - reconfigure replicaset examples.md new file mode 100755 index 0000000..3d165ba --- /dev/null +++ b/tiddlywiki/MongoDB - reconfigure replicaset examples.md @@ -0,0 +1,19 @@ +## Change hostname + + cfg = rs.conf() + cfg.members[0].host = "ivera-mongo01.swgalaxy:27017" + cfg.members[1].host = "ivera-mongo02.swgalaxy:27017" + rs.reconfig(cfg) + + +## Change priority + + cfg = rs.conf() + cfg.members[0].priority = 1 + cfg.members[1].priority = 1 + 
rs.reconfig(cfg) + +## Add new member + + rs.add('ivera-mongo03.swgalaxy:27017'); + diff --git a/tiddlywiki/MongoDB - replication setup.txt b/tiddlywiki/MongoDB - replication setup.txt new file mode 100755 index 0000000..df85e80 --- /dev/null +++ b/tiddlywiki/MongoDB - replication setup.txt @@ -0,0 +1,100 @@ +~~ server names: +~~ ajara +~~ atrisia +~~ anaxes + + +~~ enable SCRAM authentification on ALL MongoDB instances +mongo +> use admin +> db.createUser({ user: "superhero", pwd: "secret", roles: ["root"]}); +> db.shutdownServer(); + +~~ add in MongoDB configuration file -> +security: + authorization: 'enabled' +<------------------------------------- + +~~ start MongoDB instance +/app/mongodb/product/current_version/bin/mongod --config=/app/mongodb/conf/mongod.conf --logpath=/app/mongodb/log/mongod.log --fork + +~~ test connection +mongo --username=superhero --password=secret + +~~ for internal communication between instances we will use a basic keyFile method + +~~ generate keyfile +openssl rand -base64 756 > /app/mongodb/conf/keyfile.basic +chmod 600 /app/mongodb/conf/keyfile.basic + +~~ add the keyfile in MongoDB configuration file -> +security: + authorization: 'enabled' + keyFile: /app/mongodb/conf/keyfile.basic +<------------------------------------- + +~~ restart MongoDB instance and test connection again +/app/mongodb/product/current_version/bin/mongod --config=/app/mongodb/conf/mongod.conf --shutdown +/app/mongodb/product/current_version/bin/mongod --config=/app/mongodb/conf/mongod.conf --logpath=/app/mongodb/log/mongod.log --fork + +mongo --username=superhero --password=secret + +~~ repeat theses operations on other 2 MongoDB instances using the SAME keyfile generated for the first instance + +~~ for all MongoDB instances, declare the replication in configuration file + +------------------------------------------> +replication: + replSetName: rs0 +<----------------------------------------- + + +mongo --username=superhero --password=secret + +rsconf = { 
+ _id: "rs0", + members: [ + { + _id: 0, + host: "ajara:27017" + } + ] + } + + +rs.initiate(rsconf); + +rs.add('atrisia:27017'); +rs.add('anaxes:27017'); + +rs.conf(); +rs.status(); + + +~~ check if replication works +~~ on the PRIMARY instance create a database and a collection +rs0:PRIMARY> use db01; +rs0:PRIMARY> db.movies.insertOne({"title" : "Stand by Me"}); + +~~ on SECONDARIES check if the collection has been replicated +~~ note that on a SECONDARY, before running a query, we must enable read access using the following command +rs0:SECONDARY> rs.slaveOk(); + +rs0:SECONDARY> use db01; +rs0:SECONDARY> db.movies.find(); + +~~ finally, drop the test database from the master node +rs0:PRIMARY> db.dropDatabase(); + +~~ to use on a SECONDARY replica to display lag and oplog size +db.getReplicationInfo(); + +~~ to find the master of a replica set, use the following command on any member of the replica set +db.isMaster(); + +~~ get replica set config +config = rs.conf(); + +~~ remove a member from a replica set +rs.remove('anaxes:27017'); +rs.reconfig(config, {force: true}); diff --git a/tiddlywiki/MongoDB - scratchpad.txt b/tiddlywiki/MongoDB - scratchpad.txt new file mode 100755 index 0000000..5906027 --- /dev/null +++ b/tiddlywiki/MongoDB - scratchpad.txt @@ -0,0 +1,35 @@ +> use db01 +-- create database db01 if it does not exist +> db +-- show current database +> show dbs +-- list databases +-- db01 is not listed until it contains at least one document +> db.movies.insertOne({"title" : "Stand by Me"}) + + +-- create index +> db.users.createIndex({"age" : 1, "username" : 1}); + +-- list indexes of a collection +> db.users.getIndexes(); + +-- show explain plan +> db.users.find({"username": "user999999", "age":"19"}).explain("executionStats"); + +# https://www.mysoftkey.com/mongodb/how-to-enable-authentication-and-authorization-using-scram-sha-1-in-mongodb/ + +-- Connection String URI Format 
+https://docs.mongodb.com/manual/reference/connection-string/index.html#connections-dns-seedlist + +-- shutdown MongoDB +> use admin +> db.shutdownServer(); + +-- count(*) of a collection +db.elements.countDocuments({}) +-- truncte a collection +db.elements.remove({}) +-- display the last 5 inserted documents of a collection +db.elements.find().sort({_id:-1}).limit(5); + diff --git a/tiddlywiki/MongoDB - setup SHARD notes.txt b/tiddlywiki/MongoDB - setup SHARD notes.txt new file mode 100755 index 0000000..47d897d --- /dev/null +++ b/tiddlywiki/MongoDB - setup SHARD notes.txt @@ -0,0 +1,188 @@ +~~~~~~~~~~~~~~~~~~~~~ +~~ CONGFIG servers ~~ +~~~~~~~~~~~~~~~~~~~~~ + +-- IMPORTANT: note that the SECURITY are initially disabled + +~~ example mongod.conf for CONFIG server +-----------------------------------------------> +storage: + dbPath: "/data/mongodb/" + journal: + enabled: true + wiredTiger: + engineConfig: + cacheSizeGB: 1 + +net: + port: 27017 + bindIp: 127.0.0.1,ivera-conf01,ivera-conf01-priv + +#security: + #authorization: 'enabled' + #keyFile: /app/mongodb/conf/keyfile.basic + +replication: + replSetName: ivera_conf + oplogSizeMB: 100 + enableMajorityReadConcern: true + +sharding: + clusterRole: configsvr +<----------------------------------------------- + +-- replication setup +cfg = { + _id : "ivera_conf", + members : [ { "_id" : 0, "host":"ivera-conf01-priv:27017"},], + configsvr: true, +} +rs.initiate(cfg) + +rs.add('ivera-conf02-priv:27017'); + +rs.conf(); +rs.status(); + +-- security setup on PRIMARY +use admin +db.createUser({ user: "superhero", pwd: "secret", roles: ["root"]}); + +-- uncomment SECURITY lines from config file on PRIMARY/SECONDARY and restart MongoDB instances + +~~~~~~~~~~~~~~~~~~ +~~ DATA servers ~~ +~~~~~~~~~~~~~~~~~~ + +-- on DATA servers, the security can be implemented before or after replication setup + +~~ example mongod.conf for DATA server +-----------------------------------------------> +storage: + dbPath: "/data/mongodb/" + 
journal: + enabled: true + wiredTiger: + engineConfig: + cacheSizeGB: 1 + +net: + port: 27017 + bindIp: 127.0.0.1,ivera-mongo01,ivera-mongo01-priv + +security: + authorization: 'enabled' + keyFile: /app/mongodb/conf/keyfile.basic + +replication: + replSetName: ivera_data_01_02 + oplogSizeMB: 100 + enableMajorityReadConcern: true + +sharding: + clusterRole: shardsvr +<----------------------------------------------- + +-- replication setup +cfg = { + _id : "ivera_data_01_02", + members : [ { "_id" : 0, "host":"ivera-mongo01-priv:27017"},], +} +rs.initiate(cfg) + +rs.add('ivera-mongo02-priv:27017'); + +rs.conf(); +rs.status(); + + +~~~~~~~~~~~~~~~~~~~~ +~~ ROUTER servers ~~ +~~~~~~~~~~~~~~~~~~~~ + +~~ example mongos.conf +-----------------------------------------------> +net: + port: 27017 + bindIp: 127.0.0.1,ivera-router01,ivera-router01-priv + +sharding: + configDB: "ivera_conf/ivera-conf01:27017,ivera-conf02:27017" + +security: + keyFile: /app/mongodb/conf/keyfile.basic +<----------------------------------------------- + +-- create SYSTEMD service for MongoDB Router +-- create service unit file /etc/systemd/system/mongos.service +-----------------------------------------------> +[Unit] +Description=MongoDB Router +After=multi-user.target + +[Service] +Type=simple +# (file size) +LimitFSIZE=infinity +# (cpu time) +LimitCPU=infinity +# (virtual memory size) +LimitAS=infinity +# (locked-in-memory size) +LimitMEMLOCK=infinity +# (open files) +LimitNOFILE=64000 +# (processes/threads) +LimitNPROC=64000 + +User=mongod +Group=mongod + +ExecStart=/app/mongodb/product/server/current_version/bin/mongos --config=/app/mongodb/conf/mongos.conf --logpath=/app/mongodb/log/mongos.log + +[Install] +WantedBy=multi-user.target +<----------------------------------------------- + +systemctl daemon-reload +systemctl start mongos +systemctl status mongos +systemctl enable mongos + +-- connect to the MongoDB Router in authenticated mode and add shards +mongo --username "superhero" 
--password "******"
+
+sh.addShard( "ivera_data_01_02/ivera-mongo01-priv:27017")
+sh.addShard( "ivera_data_01_02/ivera-mongo02-priv:27017")
+sh.addShard( "ivera_data_03_04/ivera-mongo03-priv:27017")
+sh.addShard( "ivera_data_03_04/ivera-mongo04-priv:27017")
+sh.addShard( "ivera_data_05_06/ivera-mongo05-priv:27017")
+sh.addShard( "ivera_data_05_06/ivera-mongo06-priv:27017")
+
+-- NOTE: a MongoDB router doesn't have any data locally -- except the mongos.conf file
+-- We can create multiple MongoDB routers and use a load balancer to redirect users' calls
+
+~~~~~~~~~~~~~~~~~~
+~~ Test Cluster ~~
+~~~~~~~~~~~~~~~~~~
+
+-- create a database and activate sharding at the database level
+use exampleDB
+sh.enableSharding("exampleDB")
+
+-- check database sharding
+use config
+db.databases.find()
+
+-- create a collection with a hashed index on _id
+db.exampleCollection.ensureIndex( { _id : "hashed" } )
+
+-- shard the collection
+sh.shardCollection( "exampleDB.exampleCollection", { "_id" : "hashed" } )
+
+-- insert documents
+for (var i = 1; i <= 500; i++) db.exampleCollection.insert( { x : i } )
+
+-- display the document distribution across shards
+db.exampleCollection.getShardDistribution()
diff --git a/tiddlywiki/MongoDB.tid b/tiddlywiki/MongoDB.tid
new file mode 100755
index 0000000..1624e6d
--- /dev/null
+++ b/tiddlywiki/MongoDB.tid
@@ -0,0 +1,8 @@
+color: #ff8000
+created: 20200128094934994
+creator: vplesnila
+modified: 20200128095150176
+modifier: vplesnila
+tags: Contents
+title: MongoDB
+type: text/vnd.tiddlywiki
\ No newline at end of file
diff --git a/tiddlywiki/My Oracle Toolbox.md b/tiddlywiki/My Oracle Toolbox.md
new file mode 100755
index 0000000..6b9b0f6
--- /dev/null
+++ b/tiddlywiki/My Oracle Toolbox.md
@@ -0,0 +1,185 @@
+Performance
+===========
+
+Displaying daily top SQL for last 7 days:
+
+    @exadata/mon_topsql.sql
+
+ASH
+---
+
+ASH Report for SQL ID:
+
+    @ash/sqlid_activity.sql
+
+
+AWR
+---
+
+Display Execution plan history from AWR:
+
@awr_xplan + Example: + @awr_xplan h6hYfr4esZrz % 14 "and 1=1" + +Display SQL Text from AWR: + + @awr/awr_sqlid + +SQL Monitor +----------- + +List SQL Monitor reports based on a where clause: + + @sqlmon_lsrep.sql + # @sqlmon_lsrep "x.session_id='303'" + # @sqlmon_lsrep "x.sql_id='g9n768y28mu9m'" + # @sqlmon_lsrep "x.sql_id='g9n768y28mu9m'" 6 asc + +SQL Monitor report detail: + + @sqlmon_detrep + # @sqlmon_detrep 172 + # @sqlmon_detrep 172 html + # @sqlmon_detrep 172 active + +Run DBMS_SQLTUNE.REPORT_SQL_MONITOR (text mode) for session: + + @xp + +Run custom DBMS_SQLTUNE.REPORT_SQL_MONITOR: + + @xprof + # Protect sql_id as in example: @xprof BASIC TEXT sql_id "'a4fqzw4mszwck'" + +Explain plan +------------ + +Display execution plan for last statement for this session from library cache: + + @x.sql + @xb.sql + +Plan for library cache: + + @xi + @xbi + +Plan for AWR: + + @xawr.sql + +Statistics +---------- +Column stats details: + + @stats_col % % % % + +History of the optimizer statistic operations. 
+Optionally filters on the start time in the format DD/MM/YYYY and the target name (which supports wildcards) + + @list_optstat_history.sql + +List STATS operations: + + @stats_opls + # @stats_opls sysdate-14 sysdate BASIC TEXT + # @stats_opls "timestamp'2023-01-12 14:00:00'" "timestamp'2023-02-12 14:00:00'" TYPICAL HTML + +Detail of a STATS operation: + + @stats_opdet + # @stats_opdet 1482 + # @stats_opdet 1482 TYPICAL HTML + + +Trace activation +---------------- + +Display current trace file name: + + @t + +Activate/deactivate 10046 trace: + + @46on + @46off + + +Divers +------ + +Display SQL_ID and PHV for the last SQL: + + @hash + +Display SQL hint: + + @hint + + +Database layout +=============== + +Tablespaces +----------- + + @tbs % + +Redolog +------- + +Redolog informations + + @redolog + +Redolog switch history + + @perf_log_switch_history_count_daily_all.sql + +Oracle Directories + + @dba_directories + +Database links + + @dblinks.sql + +Table informations: + + @dba_table_info + @tab . + @tab_details + +Partition informations: + + @part_info.sql + @tabpart . 
+ + @tab_parts_summary + @tab_parts + +Restore points: + + @restore_points + +Locks +===== + +Blocking locks tree RAC aware: + + @raclock + +Blocking Locks in the databases: + + @locks_blocking.sql + @locks_blocking2.sql + +Undo +==== + +Active undo segments and the sessions that are using them: + + @undo_users.sql + + diff --git a/tiddlywiki/My private cloud - NAS share on Linux.txt b/tiddlywiki/My private cloud - NAS share on Linux.txt new file mode 100755 index 0000000..a6a4d1c --- /dev/null +++ b/tiddlywiki/My private cloud - NAS share on Linux.txt @@ -0,0 +1,9 @@ +yum install -y cifs-utils.x86_64 +mkdir -p /mnt/yavin4 +echo "//192.168.0.9/share /mnt/yavin4 cifs vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred 0 0" >> /etc/fstab +groupadd smbuser +useradd smbuser -G smbuser -g smbuser +echo "username=vpl" > /root/.smbcred +echo "password=*****" >> /root/.smbcred +mount -a +df -h diff --git a/tiddlywiki/Network interface SPEED check.txt b/tiddlywiki/Network interface SPEED check.txt new file mode 100755 index 0000000..20053db --- /dev/null +++ b/tiddlywiki/Network interface SPEED check.txt @@ -0,0 +1,6 @@ +-- for standard interface +ethtool eth0 + +-- for infiniband +ibstatus + diff --git a/tiddlywiki/Oracle - SQL Quarantine - example.md b/tiddlywiki/Oracle - SQL Quarantine - example.md new file mode 100755 index 0000000..a22cff0 --- /dev/null +++ b/tiddlywiki/Oracle - SQL Quarantine - example.md @@ -0,0 +1,60 @@ +> [Original article](https://oracle-base.com/articles/19c/sql-quarantine-19c) + + + +We can manually quarantine a statement based on SQL_ID or SQL_TEXT. +Both methods accept a PLAN_HASH_VALUE parameter, which allows us to quarantine a single execution plan. +If this is not specified, all execution plans for the statement are quarantined. + + + -- Quarantine all execution plans for a SQL_ID. 
+ DECLARE + l_sql_quarantine VARCHAR2(100); + BEGIN + l_sql_quarantine := sys.DBMS_SQLQ.create_quarantine_by_sql_id( + sql_id => 'gs59hr0xtjrf8' + ); + DBMS_OUTPUT.put_line('l_sql_quarantine=' || l_sql_quarantine); + END; + / + + +SQL quarantine display: + + set lines 256 + COLUMN sql_text FORMAT A50 TRUNC + COLUMN plan_hash_value FORMAT 999999999999 + COLUMN name FORMAT A30 + COLUMN enabled FORMAT A3 HEAD "Ena" + COLUMN cpu_time FORMAT A10 + COLUMN io_megabytes FORMAT A10 + COLUMN io_requests FORMAT A10 + COLUMN elapsed_time FORMAT A10 + COLUMN io_logical FORMAT A10 + + select + name, enabled,cpu_time, io_megabytes, io_requests, elapsed_time, io_logical, plan_hash_value, sql_text + from + dba_sql_quarantine; + + +The ALTER_QUARANTINE procedure allows us to alter the thresholds, to make them look more like automatically generated quarantines. +We can use the procedure to alter the following parameters: + +- CPU_TIME +- ELAPSED_TIME +- IO_MEGABYTES +- IO_REQUESTS +- IO_LOGICAL +- ENABLED +- AUTOPURGE + +Example of setting the CPU_TIME threshold for the manually created quarantines: + + BEGIN + DBMS_SQLQ.alter_quarantine( + quarantine_name => 'SQL_QUARANTINE_8zpc9pwdmb8vr', + parameter_name => 'CPU_TIME', + parameter_value => '1'); + END; + / diff --git a/tiddlywiki/Oracle - external links.md b/tiddlywiki/Oracle - external links.md new file mode 100755 index 0000000..d3dd2f3 --- /dev/null +++ b/tiddlywiki/Oracle - external links.md @@ -0,0 +1,5 @@ +- [Oracle 12.2 Cool New Features](https://gotodba.com/2016/09/22/oracle-12-2-cool-new-features/) +- [Upgrade to 19(oracle-base)](https://oracle-base.com/articles/19c/upgrading-to-19c) +- [Restoring a database without having any controlfile backup](https://blog.dbi-services.com/restoring-a-database-without-having-any-controlfile-backup/) +- [What is the Oracle ASH time waited column?](https://blog.orapub.com/20150827/what-is-the-oracle-ash-time-waited-column.html) +- [Data Pump API for PL/SQL 
(DBMS_DATAPUMP)](https://oracle-base.com/articles/misc/data-pump-api) \ No newline at end of file diff --git a/tiddlywiki/Oracle 1.tid b/tiddlywiki/Oracle 1.tid new file mode 100755 index 0000000..b661c61 --- /dev/null +++ b/tiddlywiki/Oracle 1.tid @@ -0,0 +1,9 @@ +color: #800080 +created: 20190622084111439 +creator: vplesnila +modified: 20190622233226722 +modifier: vplesnila +tags: Contents +title: Oracle +type: text/vnd.tiddlywiki + diff --git a/tiddlywiki/Oracle 19c manual CDB creation.md b/tiddlywiki/Oracle 19c manual CDB creation.md new file mode 100755 index 0000000..0fbdde3 --- /dev/null +++ b/tiddlywiki/Oracle 19c manual CDB creation.md @@ -0,0 +1,60 @@ +initASTYPRD.ora: + + db_name=ASTY + instance_name=ASTYPRD + db_unique_name=ASTYPRD + compatible=19.0.0.0.0 + control_files=(/data/ASTYPRD/control01.ctl) + db_create_file_dest=/data + db_create_online_log_dest_1=/data + db_recovery_file_dest_size=4G + db_recovery_file_dest=/fra + log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST' + log_archive_format=%t_%s_%r.arc + db_block_size=8192 + open_cursors=300 + diagnostic_dest=/app/oracle/base/admin/ASTYPRD + sga_max_size=3G + sga_target=3G + pga_aggregate_target=512M + pga_aggregate_limit=2G + processes=350 + audit_file_dest=/app/oracle/base/admin/ASTYPRD/adump + audit_trail=db + remote_login_passwordfile=exclusive + undo_tablespace=TS_UNDO + enable_pluggable_database=TRUE + + +Create database: + + spool createdb.log + + create database ASTY + datafile size 700M autoextend on next 64M + extent management local + SYSAUX datafile size 512M autoextend on next 64M + default temporary tablespace TS_TEMP tempfile size 256M autoextend off + undo tablespace TS_UNDO datafile size 256M autoextend off + character set AL32UTF8 + national character set AL16UTF16 + logfile group 1 size 64M, + group 2 size 64M + user SYS identified by secret user SYSTEM identified by secret + enable pluggable database; + + create tablespace USERS datafile size 32M autoextend ON next 32M; + 
alter database default tablespace USERS; + + spool off + + + +Ensure using Oracle provided perl: + + export PATH=$ORACLE_HOME/perl/bin:$PATH + +Run `catcdb.sql` providing the required informations: + + @?/rdbms/admin/catcdb.sql + diff --git a/tiddlywiki/Oracle RAC - create network and add listener.txt b/tiddlywiki/Oracle RAC - create network and add listener.txt new file mode 100755 index 0000000..a3b6bc5 --- /dev/null +++ b/tiddlywiki/Oracle RAC - create network and add listener.txt @@ -0,0 +1,16 @@ +# list interface usage +oifcfg getif +# list existing networks +srvctl config network + +# vortex-db01-dba-vip: 192.168.3.88 +# vortex-db02-dba-vip: 192.168.3.90 +# as ROOT user +srvctl add network -netnum 2 -subnet 192.168.3.0/255.255.255.0/eth3 -nettype STATIC +srvctl add vip -node vortex-db01 -address vortex-db01-dba-vip/255.255.255.0/eth3 -netnum 2 +srvctl add vip -node vortex-db02 -address vortex-db02-dba-vip/255.255.255.0/eth3 -netnum 2 +srvctl start vip -vip vortex-db01-dba-vip +srvctl start vip -vip vortex-db02-dba-vip +# as GRID user +srvctl add listener -listener LISTENER_DG -netnum 2 -endpoints TCP:1600 +srvctl start listener -listener LISTENER_DG diff --git a/tiddlywiki/Oracle RAC - divers.txt b/tiddlywiki/Oracle RAC - divers.txt new file mode 100755 index 0000000..cf3d638 --- /dev/null +++ b/tiddlywiki/Oracle RAC - divers.txt @@ -0,0 +1,40 @@ +-- verify software integrity +cluvfy comp software -n all -verbose + +-- MGMTDB creation +------------------ +$GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +DATA -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal + + +-- Wallet creation for patching +------------------------------- +cd /app/grid/product/12cR2/grid_1/OPatch/auto/core/bin +./patchingWallet.sh -walletDir /home/grid -create grid:theron-db01:ssh grid:theron-db02:ssh root:theron-db01:ssh 
root:theron-db02:ssh -log /home/grid/wallet.log
+
+
+cd /app/oracle/product/12cR2/db_1/OPatch/auto/core/bin
+./patchingWallet.sh -walletDir /home/oracle -create oracle:theron-db01:ssh oracle:theron-db02:ssh root:theron-db01:ssh root:theron-db02:ssh -log /home/oracle/wallet.log
+
+
+-- Patch apply with opatchauto
+------------------------------
+/app/grid/product/12cR2/grid_1/OPatch/opatchauto apply /home/grid/tmp/26610291 -oh /app/grid/product/12cR2/grid_1 -wallet /home/grid
+
+-- Install the latest version of OPatch
+------------------------------------
+
+As the root user, after uncompressing the latest OPatch download under /mnt/yavin4/tmp/0/01/OPatch
+
+cd /app/grid/product/12cR2/grid_1/
+OPatch/opatchauto version
+rm -rf OPatch/
+cp -R /mnt/yavin4/tmp/0/01/OPatch .
+chown -R grid:oinstall OPatch
+OPatch/opatchauto version
+
+cd /app/oracle/product/12cR2/db_1/
+OPatch/opatchauto version
+rm -rf OPatch/
+cp -R /mnt/yavin4/tmp/0/01/OPatch .
+chown -R oracle:oinstall OPatch
+OPatch/opatchauto version
diff --git a/tiddlywiki/Oracle RAC os users setup.txt b/tiddlywiki/Oracle RAC os users setup.txt
new file mode 100755
index 0000000..ede9495
--- /dev/null
+++ b/tiddlywiki/Oracle RAC os users setup.txt
@@ -0,0 +1,26 @@
+# https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/identifying-an-oracle-software-owner-user-account.html#GUID-0A95F4B1-1045-455D-9897-A23012E4E27F
+
+$ grep "oinstall" /etc/group
+oinstall:x:54321:grid,oracle
+
+$ id oracle
+uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),
+54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54330(racdba)
+
+
+$ id grid
+uid=54331(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),
+54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)
+
+# extract from /etc/group
+
+oinstall:x:54321:
+dba:x:54322:oracle,grid
+oper:x:54323:oracle
+backupdba:x:54324:oracle
+dgdba:x:54325:oracle
+kmdba:x:54326:oracle
+racdba:x:54330:oracle,grid
+asmoper:x:54327:grid
+asmdba:x:54328:grid,oracle +asmadmin:x:54329:grid diff --git a/tiddlywiki/Oracle SSL connection.md b/tiddlywiki/Oracle SSL connection.md new file mode 100755 index 0000000..1f9bc48 --- /dev/null +++ b/tiddlywiki/Oracle SSL connection.md @@ -0,0 +1,211 @@ +## Source + +- https://oracle-base.com/articles/misc/configure-tcpip-with-ssl-and-tls-for-database-connections + + +## Folder creation for configuration files + + + mkdir -p /mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet + mkdir -p /mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet + mkdir -p /mnt/yavin4/tmp/_oracle_/labo_ssl/client/tnsadmin + mkdir -p /mnt/yavin4/tmp/_oracle_/labo_ssl/exchange_zone/ + +## Server wallet and certificate + +Create the wallet: + + orapki wallet create -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet" -pwd "C0mpl1cated#Ph|rase" -auto_login_local + +Create certificate in wallet: + + orapki wallet add -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet" -pwd "C0mpl1cated#Ph|rase" \ + -dn "CN=`hostname`" -keysize 1024 -self_signed -validity 3650 + +Display wallet contents: + + orapki wallet display -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet" -pwd "C0mpl1cated#Ph|rase" + +Export certificate: + + orapki wallet export -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet" -pwd "C0mpl1cated#Ph|rase" \ + -dn "CN=`hostname`" -cert /mnt/yavin4/tmp/_oracle_/labo_ssl/exchange_zone/`hostname`-certificate.crt + + +## Client wallet and certificate + +Create the wallet: + + orapki wallet create -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet" -pwd "1m#the|Client#" -auto_login_local + +Create certificate in wallet: + + orapki wallet add -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet" -pwd "1m#the|Client#" \ + -dn "CN=`hostname`" -keysize 1024 -self_signed -validity 3650 + +Display wallet contents: + + orapki wallet display -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet" -pwd "1m#the|Client#" + +Export certificate: + + orapki wallet export 
-wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet" -pwd "1m#the|Client#" \ + -dn "CN=`hostname`" -cert /mnt/yavin4/tmp/_oracle_/labo_ssl/exchange_zone/`hostname`-certificate.crt + + +## Exchange certificates between server and client + +Load client certificate into server wallet: + + orapki wallet add -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet" -pwd "C0mpl1cated#Ph|rase" \ + -trusted_cert -cert /mnt/yavin4/tmp/_oracle_/labo_ssl/exchange_zone/taris.swgalaxy-certificate.crt + + +Display server wallet contents: + + orapki wallet display -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet" -pwd "C0mpl1cated#Ph|rase" + + +Load server certificate into client wallet: + + orapki wallet add -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet" -pwd "1m#the|Client#" \ + -trusted_cert -cert /mnt/yavin4/tmp/_oracle_/labo_ssl/exchange_zone/mandalore.swgalaxy-certificate.crt + + +Display client wallet contents: + + orapki wallet display -wallet "/mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet" -pwd "1m#the|Client#" + + +## Server network configuration + +> I did not succed to user custom `$TNS_ADMIN` location for server configuration files + +> In this example we will register the database on standard `LISTENER` and on custom `LISTENER_APP` listeners + +Edit `$ORACLE_HOME/network/admin/sqlnet.ora`: + + WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet) + ) + ) + SQLNET.AUTHENTICATION_SERVICES = (TCPS,NTS,BEQ) + SSL_CLIENT_AUTHENTICATION = FALSE + SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA) + + +Edit `$ORACLE_HOME/network/admin/listener.ora`: + + + SSL_CLIENT_AUTHENTICATION = FALSE + WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /mnt/yavin4/tmp/_oracle_/labo_ssl/server/wallet) + ) + ) + LISTENER_APP = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCP)(HOST = mandalore.swgalaxy)(PORT = 
12000)) + (ADDRESS = (PROTOCOL = TCPS)(HOST = mandalore.swgalaxy)(PORT = 24000)) + ) + ) + + +Edit `$ORACLE_HOME/network/admin/tnsnames.ora`: + + LOCAL_LISTENER = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCP)(HOST = mandalore.swgalaxy)(PORT = 1521)) + (ADDRESS = (PROTOCOL = TCP)(HOST = mandalore.swgalaxy)(PORT = 12000)) + ) + ) + + +Set `local_listener` at the database level: + + alter system set local_listener='LOCAL_LISTENER' scope=memory sid='*'; + alter system register; + + +## Client network configuration + + export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/labo_ssl/client/tnsadmin + + +Edit `$TNS_ADMIN/sqlnet.ora`: + + WALLET_LOCATION = + (SOURCE = + (METHOD = FILE) + (METHOD_DATA = + (DIRECTORY = /mnt/yavin4/tmp/_oracle_/labo_ssl/client/wallet) + ) + ) + + SQLNET.AUTHENTICATION_SERVICES = (TCPS,NTS) + SSL_CLIENT_AUTHENTICATION = FALSE + SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA) + + +Edit `$TNS_ADMIN/tnsnames.ora`: + + EWOKPRD_APP_SSL= + (DESCRIPTION= + (ADDRESS= + (PROTOCOL=TCPS)(HOST=mandalore.swgalaxy)(PORT=24000) + ) + (CONNECT_DATA= + (SERVICE_NAME=EWOKPRD) + ) + ) + + EWOKPRD_STANDARD= + (DESCRIPTION= + (ADDRESS= + (PROTOCOL=TCP)(HOST=mandalore.swgalaxy)(PORT=1521) + ) + (CONNECT_DATA= + (SERVICE_NAME=EWOKPRD) + ) + ) + + EWOKPRD_APP= + (DESCRIPTION= + (ADDRESS= + (PROTOCOL=TCP)(HOST=mandalore.swgalaxy)(PORT=12000) + ) + (CONNECT_DATA= + (SERVICE_NAME=EWOKPRD) + ) + ) + + + +Test connections: + + connect system/*****@EWOKPRD_APP_SSL + connect system/*****@EWOKPRD_APP + connect system/*****@EWOKPRD_STANDARD + + +Get the current protocol for your session: + + select SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') from dual; + + +Use the following query do display the current network options for your session: + + select NETWORK_SERVICE_BANNER + from v$session_connect_info + where SID = sys_context('USERENV','SID'); + +- If you get a row with NETWORK_SERVICE_BANNER like '%TCP/IP%', then you use TCP (without 
SSL)
+- If you get a row with NETWORK_SERVICE_BANNER like '%BEQUEATH%', then you use Bequeath (LOCAL=YES)
+- If you get a row where NETWORK_SERVICE_BANNER is null, then you use TCPS
diff --git a/tiddlywiki/Oracle Toolbox Example.tid b/tiddlywiki/Oracle Toolbox Example.tid
new file mode 100755
index 0000000..2d98988
--- /dev/null
+++ b/tiddlywiki/Oracle Toolbox Example.tid
@@ -0,0 +1,7 @@
+created: 20200831160216240
+creator: vplesnila
+modified: 20200831160219460
+modifier: vplesnila
+tags: Oracle
+title: Oracle Toolbox Example
+type: text/vnd.tiddlywiki
\ No newline at end of file
diff --git a/tiddlywiki/Oracle resource manager example.md b/tiddlywiki/Oracle resource manager example.md
new file mode 100755
index 0000000..26d7ffc
--- /dev/null
+++ b/tiddlywiki/Oracle resource manager example.md
@@ -0,0 +1,119 @@
+> [Original article](https://oracle-base.com/articles/8i/resource-manager-8i)
+
+Create application users:
+
+    create user web_user identified by "iN_j8sC#d!kX6b:_";
+    create user batch_user identified by "r~65ktuFYyds+P_X";
+    grant connect,resource to web_user;
+    grant connect,resource to batch_user;
+
+
+Create a pending area:
+
+    BEGIN
+      DBMS_RESOURCE_MANAGER.clear_pending_area;
+      DBMS_RESOURCE_MANAGER.create_pending_area;
+    END;
+    /
+
+
+Create a plan:
+
+    BEGIN
+      DBMS_RESOURCE_MANAGER.create_plan(
+        plan    => 'hybrid_plan',
+        comment => 'Plan for a combination of high and low priority tasks.');
+    END;
+    /
+
+
+Create a web and a batch consumer group:
+
+    BEGIN
+      DBMS_RESOURCE_MANAGER.create_consumer_group(
+        consumer_group => 'WEB_CG',
+        comment        => 'Web based OLTP processing - high priority');
+
+      DBMS_RESOURCE_MANAGER.create_consumer_group(
+        consumer_group => 'BATCH_CG',
+        comment        => 'Batch processing - low priority');
+    END;
+    /
+
+
+Assign the consumer groups to the plan and indicate their relative priority, remembering to add the OTHER_GROUPS plan directive:
+
+
+    BEGIN
+      DBMS_RESOURCE_MANAGER.create_plan_directive (
+        plan => 'hybrid_plan',
+
group_or_subplan => 'web_cg', + comment => 'High Priority', + cpu_p1 => 80, + cpu_p2 => 0, + parallel_degree_limit_p1 => 4); + + DBMS_RESOURCE_MANAGER.create_plan_directive ( + plan => 'hybrid_plan', + group_or_subplan => 'batch_cg', + comment => 'Low Priority', + cpu_p1 => 0, + cpu_p2 => 80, + parallel_degree_limit_p1 => 4); + + DBMS_RESOURCE_MANAGER.create_plan_directive( + plan => 'hybrid_plan', + group_or_subplan => 'OTHER_GROUPS', + comment => 'all other users - level 3', + cpu_p1 => 0, + cpu_p2 => 0, + cpu_p3 => 100); + END; + / + + +Validate and apply the resource plan: + + BEGIN + DBMS_RESOURCE_MANAGER.validate_pending_area; + DBMS_RESOURCE_MANAGER.submit_pending_area; + END; + / + + +Assign our users to individual consumer groups: + + BEGIN + -- Assign users to consumer groups + DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group( + grantee_name => 'web_user', + consumer_group => 'web_cg', + grant_option => FALSE); + + DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group( + grantee_name => 'batch_user', + consumer_group => 'batch_cg', + grant_option => FALSE); + + DBMS_RESOURCE_MANAGER.set_initial_consumer_group('web_user', 'web_cg'); + + DBMS_RESOURCE_MANAGER.set_initial_consumer_group('batch_user', 'batch_cg'); + END; + / + + +Connect users: + + connect web_user/"iN_j8sC#d!kX6b:_" + connect batch_user/"r~65ktuFYyds+P_X" + +Check `resource_consumer_group` column in `v$session`: + +SELECT username, resource_consumer_group + FROM v$session + WHERE username IN ('WEB_USER','BATCH_USER'); + +Note that the value change for a connecte session if `RESOURCE_MANAGER_PLAN` change at instance level: + + alter system set RESOURCE_MANAGER_PLAN = 'hybrid_plan' scope=both sid='*'; + alter system set RESOURCE_MANAGER_PLAN = '' scope=both sid='*'; diff --git a/tiddlywiki/Oracle scripts.tid b/tiddlywiki/Oracle scripts.tid new file mode 100755 index 0000000..5148efa --- /dev/null +++ b/tiddlywiki/Oracle scripts.tid @@ -0,0 +1,9 @@ +color: #0000a0 +created: 
20190622074003397 +creator: vplesnila +modified: 20190715082632837 +modifier: vplesnila +tags: Oracle +title: Oracle scripts +type: text/vnd.tiddlywiki + diff --git a/tiddlywiki/Oracle toolbox.tid b/tiddlywiki/Oracle toolbox.tid new file mode 100755 index 0000000..784396b --- /dev/null +++ b/tiddlywiki/Oracle toolbox.tid @@ -0,0 +1,15 @@ +created: 20200823075307403 +creator: vplesnila +modified: 20200831161547163 +modifier: vplesnila +tags: Oracle +title: Oracle toolbox +type: text/vnd.tiddlywiki + + +|!usage |!description |!notes| +|''@ash/ashtop'' [grouping_cols] [filters] [fromtime] [totime]|Display top ASH time (count of ASH samples) grouped by your specified dimensions|[[exemples|ashtop]]| +|''@ash/ash_wait_chains'' [grouping_cols] [filters] [fromtime] [totime]|Display ASH (based on DBA_HIST) wait chains (multi-session wait signature, a session waiting for another session etc.)|[[exemples|ash_wait_chains]]| +|''@x'' |Display SQL execution plan for the last SQL statement|| + + diff --git a/tiddlywiki/Orcle Resource Manager.txt b/tiddlywiki/Orcle Resource Manager.txt new file mode 100755 index 0000000..bf95534 --- /dev/null +++ b/tiddlywiki/Orcle Resource Manager.txt @@ -0,0 +1,38 @@ +select name from v$rsrc_plan where is_top_plan='TRUE' and cpu_managed='ON'; + +col plan for a30 +col group_or_subplan for a30 + +select plan, group_or_subplan, type, cpu_p1, cpu_p2, cpu_p3, cpu_p4, status +from dba_rsrc_plan_directives order by 1,2,3,4,5,6 desc; + + +SELECT group_or_subplan, + cpu_p1, + mgmt_p1, + mgmt_p2, + mgmt_p3, + mgmt_p4, + mgmt_p5, + mgmt_p6, + mgmt_p7, + mgmt_p8, + max_utilization_limit + FROM dba_rsrc_plan_directives + WHERE plan = (SELECT name + FROM v$rsrc_plan + WHERE is_top_plan = 'TRUE'); + +SELECT TO_CHAR (m.begin_time, 'YYYY-MM-DD HH24:MI:SS') time, + m.consumer_group_name, + m.cpu_consumed_time / 60000 avg_running_sessions, + m.cpu_wait_time / 60000 avg_waiting_sessions, + d.mgmt_p1 + * (SELECT VALUE + FROM v$parameter + WHERE name = 'cpu_count') + 
/ 100 + allocation + FROM v$rsrcmgrmetric_history m, dba_rsrc_plan_directives d, v$rsrc_plan p + WHERE m.consumer_group_name = d.group_or_subplan AND p.name = d.plan +ORDER BY m.begin_time, m.consumer_group_name; diff --git a/tiddlywiki/PDB - divers.txt b/tiddlywiki/PDB - divers.txt new file mode 100755 index 0000000..b439e12 --- /dev/null +++ b/tiddlywiki/PDB - divers.txt @@ -0,0 +1,82 @@ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~ configurable spfile parameters +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +COLUMN name FORMAT A35 +COLUMN value FORMAT A35 + +SELECT name, value +FROM v$system_parameter +WHERE ispdb_modifiable = 'TRUE' +ORDER BY name; + +~~~~~~~~~~~~~~~~~~ +~~ Rename of a PDB +~~~~~~~~~~~~~~~~~~ +~~ rename PDB database from JABBAPRD to ZAX +alter pluggable database JABBAPRD close immediate; +alter pluggable database JABBAPRD open restricted; +alter session set container=JABBAPRD; +alter pluggable database rename global_name to ZAX; +alter pluggable database ZAX close immediate; +alter pluggable database ZAX open; +alter pluggable database ZAX save state; + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~ Switch a CDB in LOCAL_UNDO mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +~~ check if LOCAL_UNDO is enable +COLUMN property_name FORMAT A30 +COLUMN property_value FORMAT A30 + +SELECT property_name, property_value +FROM database_properties +WHERE property_name = 'LOCAL_UNDO_ENABLED'; + + + +~~ disable cluster_database and stop the database +alter system set cluster_database=false scope=spfile sid='*'; +srvctl stop database -db HUTTPRD + +~~ start just one instance in upgrade mode +startup upgrade; +~~ enable LOCAL_UNDO +alter database local undo ON; + +~~ enable cluster_database and start the database +alter system set cluster_database=true scope=spfile sid='*'; + +~~ stop the instance and start database +srvctl start database -db HUTTPRD + +~~~~~~~~~~~~~~~~~~ +~~ Refreshable PDB +~~~~~~~~~~~~~~~~~~ + +~~ context +~~ source ZAX@HUTTPRD(on vortex-scan) +~~ target 
HARRA@BOTHAN(on kessel-scan) + +~~ on source HUTTPRD +CREATE USER c##kaminoan IDENTIFIED BY secret CONTAINER=ALL; +GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##kaminoan CONTAINER=ALL; + +~~ on target BOTHAN +CREATE DATABASE LINK kaminoan_link + CONNECT TO c##kaminoan IDENTIFIED BY secret USING 'vortex-scan/HUTTPRD'; + +select * from dual@kaminoan_link; + +create pluggable database HARRA from ZAX@kaminoan_link parallel 2 refresh mode manual; +alter pluggable database HARRA open read only instances=ALL; + +SELECT status, refresh_mode FROM dba_pdbs WHERE pdb_name = 'HARRA'; + +~~ to perform a refresh +alter pluggable database HARRA close immediate instances=ALL; +alter pluggable database HARRA refresh; +alter pluggable database HARRA open read only instances=ALL; + + + diff --git a/tiddlywiki/PDB clone examples.md b/tiddlywiki/PDB clone examples.md new file mode 100755 index 0000000..2b15142 --- /dev/null +++ b/tiddlywiki/PDB clone examples.md @@ -0,0 +1,101 @@ +Clone PDB from a remote CDB using RMAN "from active database" +------------------------------------------------------------- + +On target CDB, set the source CDB archivelog location: + + alter system set REMOTE_RECOVERY_FILE_DEST='/fra' scope=MEMORY sid='*'; + +Run RMAN duplicate command: + + rman target='sys/"*****"@taris/ASTYPRD' auxiliary='sys/"*****"@mandalore/ELLOPRD' + + run + { + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate pluggable database WEDGEPRD as ANTILLESPRD + from active database using compressed 
backupset section size 400M;
+    }
+
+
+Clone PDB from a remote CDB through a database link
+---------------------------------------------------
+
+On the source CDB, create a user to be used by the database link:
+
+    create user c##adminpdb identified by adminpdb container=ALL;
+    grant create session, create pluggable database to c##adminpdb container=all;
+
+
+On the target CDB, create the database link and clone the remote PDB:
+
+    create database link ASTYPRD connect to c##adminpdb identified by "adminpdb" using 'taris/ASTYPRD';
+    select * from dual@ASTYPRD;
+
+    create pluggable database ANTILLESPRD from WEDGEPRD@ASTYPRD parallel 10;
+    alter pluggable database ANTILLESPRD open;
+
+
+> Note that in both methods we can choose the parallelism degree.
+
+
+Clone PDB from a remote CDB using an RMAN backup
+------------------------------------------------
+
+Because in Oracle 21c it is still not possible to duplicate a pluggable database directly from a backup (aka *duplicate backup location*), we will perform this operation in 2 steps:
+1. duplicate the *root* container plus the source PDB from the backup location into an auxiliary CDB
+2. unplug the PDB from the auxiliary CDB and plug it into the target CDB
+
+
+> A *set until time* clause can be specified in the duplicate command.
+
+Start the AUXCDB CDB instance using a basic spfile, then run the duplicate command:
+
+    rman auxiliary /
+
+    run
+    {
+    allocate auxiliary channel aux01 device type disk;
+    allocate auxiliary channel aux02 device type disk;
+    allocate auxiliary channel aux03 device type disk;
+    allocate auxiliary channel aux04 device type disk;
+    allocate auxiliary channel aux05 device type disk;
+    allocate auxiliary channel aux06 device type disk;
+    allocate auxiliary channel aux07 device type disk;
+    allocate auxiliary channel aux08 device type disk;
+    allocate auxiliary channel aux09 device type disk;
+    allocate auxiliary channel aux10 device type disk;
+    set until time "TIMESTAMP'2021-11-08 15:40:00'";
+    duplicate database to AUXCDB
+      pluggable database WEDGEPRD,root
+      backup location '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY';
+    }
+
+
+Unplug the PDB from the auxiliary CDB:
+
+    alter pluggable database WEDGEPRD close immediate;
+    alter pluggable database WEDGEPRD open read only;
+
+
+    alter session set container=WEDGEPRD;
+    exec DBMS_PDB.DESCRIBE('/mnt/yavin4/tmp/_oracle_/tmp/WEDGE.xml');
+    alter pluggable database WEDGEPRD close immediate;
+
+
+Plug the PDB into the target CDB (with the copy, move or nocopy option):
+
+    create pluggable database ANTILLESPRD using '/mnt/yavin4/tmp/_oracle_/tmp/WEDGE.xml' move;
+    alter pluggable database ANTILLESPRD open;
+    alter pluggable database ANTILLESPRD save state;
+
+
+At this moment we can destroy the auxiliary CDB.
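+
+Before the plug-in step, the manifest can optionally be validated on the target CDB with `DBMS_PDB.CHECK_PLUG_COMPATIBILITY`. This is a sketch reusing the XML path and PDB name from the example above:
+
+    SET SERVEROUTPUT ON
+    DECLARE
+      l_compatible BOOLEAN;
+    BEGIN
+      -- returns TRUE when the described PDB can be plugged into this CDB
+      l_compatible := DBMS_PDB.check_plug_compatibility(
+                        pdb_descr_file => '/mnt/yavin4/tmp/_oracle_/tmp/WEDGE.xml',
+                        pdb_name       => 'ANTILLESPRD');
+      DBMS_OUTPUT.put_line(CASE WHEN l_compatible THEN 'Compatible' ELSE 'Not compatible' END);
+    END;
+    /
+
+If the check reports `Not compatible`, the reasons can be listed from `PDB_PLUG_IN_VIOLATIONS` before attempting the plug-in.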
diff --git a/tiddlywiki/PL_SQL insert lines for testing PITR.txt b/tiddlywiki/PL_SQL insert lines for testing PITR.txt new file mode 100755 index 0000000..4fd0707 --- /dev/null +++ b/tiddlywiki/PL_SQL insert lines for testing PITR.txt @@ -0,0 +1,21 @@ +alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'; + +drop table u0.t0 purge; +create table u0.t0(d date); + +declare + + i integer; + maxi integer default 10000000; + + +begin + for i in 1..maxi loop + begin + insert into u0.t0 values (sysdate); + commit; + sys.dbms_session.sleep(1); + end; + end loop; +end; +/ diff --git a/tiddlywiki/Pending stats - scratchpad - 01.txt b/tiddlywiki/Pending stats - scratchpad - 01.txt new file mode 100755 index 0000000..802ee70 --- /dev/null +++ b/tiddlywiki/Pending stats - scratchpad - 01.txt @@ -0,0 +1,68 @@ +Optimizer Statistics Gathering – pending and history +https://www.dbi-services.com/blog/optimizer-statistics-gathering-pending-and-history/ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +create user XIZOR identified by secret; +grant connect, resource to XIZOR; +grant unlimited tablespace to XIZOR; +grant select any dictionary to XIZOR; + +connect XIZOR/secret + +create table DEMO as select rownum n from dual; + +col analyzed for a30 +col published_prefs for a30 + +select num_rows,cast(last_analyzed as timestamp) analyzed,dbms_stats.get_prefs('PUBLISH',owner,table_name) published_prefs from dba_tab_statistics where owner='XIZOR' and table_name in ('DEMO'); + + +insert into DEMO select rownum n from xmltable('1 to 41'); + + +set pages 999 lines 200 + +select /*+ gather_plan_statistics */ count(*) from DEMO; + +select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last')); + +exec dbms_stats.set_table_prefs('XIZOR','DEMO','PUBLISH','FALSE'); + + +exec dbms_stats.gather_table_stats('XIZOR','DEMO'); + + +select num_rows,cast(last_analyzed as timestamp) 
analyzed,dbms_stats.get_prefs('PUBLISH',owner,table_name) published_prefs from dba_tab_pending_stats where owner='XIZOR' and table_name in ('DEMO');

exec dbms_stats.delete_pending_stats('XIZOR','DEMO');

exec dbms_stats.publish_pending_stats('XIZOR','DEMO',no_invalidate=>false);

exec dbms_stats.set_table_prefs('XIZOR','DEMO','PUBLISH','TRUE');

exec dbms_stats.restore_table_stats('XIZOR','DEMO',sysdate-1,no_invalidate=>false);



select report from table(dbms_stats.diff_table_stats_in_history('XIZOR','DEMO',sysdate-1,sysdate,0));

select
  end_time,end_time-start_time,operation,target,
  regexp_replace(regexp_replace(notes,'" val="','=>'),'(||)',' '),
  status
from
  DBA_OPTSTAT_OPERATIONS where regexp_like(target,'"?'||'XIZOR'||'"?."?'||'DEMO'||'"?') order by end_time desc fetch first 10 rows only
/


select table_name,stats_update_time from dba_tab_stats_history where owner='XIZOR' and table_name='DEMO';

set long 2000000
set pagesize 1000

select * from table(dbms_stats.diff_table_stats_in_history(
  ownname => 'XIZOR',
  tabname => 'DEMO',
  time1 => systimestamp-1,
  time2 => systimestamp,
  pctthreshold => 0));
diff --git a/tiddlywiki/PostgreSQL - pgSentinel.tid b/tiddlywiki/PostgreSQL - pgSentinel.tid
new file mode 100755
index 0000000..cff7390
--- /dev/null
+++ b/tiddlywiki/PostgreSQL - pgSentinel.tid
@@ -0,0 +1,44 @@
created: 20190616221128760
creator: vplesnila
modified: 20190616221559458
modifier: vplesnila
tags: PostgreSQL
title: PostgreSQL - pgSentinel
type: text/vnd.tiddlywiki

! Parameters for pg_stat_statements
```
shared_preload_libraries = 'pg_stat_statements'

pg_stat_statements.max = 10000
pg_stat_statements.track = all
```

! 
Parameters for pg_sentinel
```
shared_preload_libraries = 'pg_stat_statements,pgsentinel'
# Increase the max size of the query strings Postgres records
track_activity_query_size = 2048
# Track statements generated by stored procedures as well
pg_stat_statements.track = all
```

! Create the extensions at the DATABASE level

```
create extension pg_stat_statements;
create extension pgsentinel;
```

! Performance views
* `pg_stat_activity`
* `pg_stat_statements`
* `pg_active_session_history` (history of `pg_stat_activity`)

! Examples

```
select ash_time,top_level_query,query,queryid,wait_event_type,wait_event from pg_active_session_history where query != 'ROLLBACK' order by ash_time desc limit 15;

select ash_time, wait_event, wait_event_type from pg_active_session_history where queryid=3548524963606505593 order by ash_time desc limit 15;
```
diff --git a/tiddlywiki/PostgreSQL.tid b/tiddlywiki/PostgreSQL.tid
new file mode 100755
index 0000000..a01050c
--- /dev/null
+++ b/tiddlywiki/PostgreSQL.tid
@@ -0,0 +1,9 @@
color: #000040
created: 20190622074252852
creator: vplesnila
modified: 20190622233240929
modifier: vplesnila
tags: Contents
title: PostgreSQL
type: text/vnd.tiddlywiki

diff --git a/tiddlywiki/PowerTools Repository on Rocky Linux 8.md b/tiddlywiki/PowerTools Repository on Rocky Linux 8.md
new file mode 100755
index 0000000..9d268e4
--- /dev/null
+++ b/tiddlywiki/PowerTools Repository on Rocky Linux 8.md
@@ -0,0 +1,22 @@
> [Original article](https://www.how2shout.com/linux/how-to-enable-powertools-repository-on-rocky-linux-8/)

Install the DNF plugins package:

    dnf install dnf-plugins-core


Install EPEL:

    dnf install epel-release

Enable the PowerTools repository on Rocky Linux 8:

    dnf config-manager --set-enabled powertools

Update command:

    dnf update

Check the added repository on Rocky Linux:

    dnf repolist
diff --git a/tiddlywiki/Proxy user.txt b/tiddlywiki/Proxy user.txt
new file mode 100755
index 0000000..d41b2f7 --- /dev/null +++ b/tiddlywiki/Proxy user.txt @@ -0,0 +1,5 @@ +create user DEPLOY identified by Alabalaportocala1#; +grant create session to DEPLOY; +alter user DRIVE grant connect through DEPLOY; +connect DEPLOY[DRIVE]/Alabalaportocala1#@dmp01-scan/DRF1PRDEXA +show user; diff --git a/tiddlywiki/Put archivelogs on Recovery Area.txt b/tiddlywiki/Put archivelogs on Recovery Area.txt new file mode 100755 index 0000000..fce6775 --- /dev/null +++ b/tiddlywiki/Put archivelogs on Recovery Area.txt @@ -0,0 +1 @@ +alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*'; \ No newline at end of file diff --git a/tiddlywiki/Python - pip examples.md b/tiddlywiki/Python - pip examples.md new file mode 100755 index 0000000..307bcbc --- /dev/null +++ b/tiddlywiki/Python - pip examples.md @@ -0,0 +1,9 @@ +Download module dependencies: + + pip download Flask -d . + pip download fabric2 -d . + +Offline install a module using pip: + + pip install --no-index --find-links ./ fabric2 + pip install --no-index --find-links ./ Flask diff --git a/tiddlywiki/Python examples.tid b/tiddlywiki/Python examples.tid new file mode 100755 index 0000000..5595681 --- /dev/null +++ b/tiddlywiki/Python examples.tid @@ -0,0 +1,7 @@ +created: 20200310084114845 +creator: vplesnila +modified: 20200310084135099 +modifier: vplesnila +tags: Divers +title: Python examples +type: text/vnd.tiddlywiki \ No newline at end of file diff --git a/tiddlywiki/RAT example.md b/tiddlywiki/RAT example.md new file mode 100755 index 0000000..0f6e23b --- /dev/null +++ b/tiddlywiki/RAT example.md @@ -0,0 +1,192 @@ +[Original article](https://blog.yannickjaquier.com/oracle/database-replay-by-example.html) + + +Setup user workload +------------------- + + create user XIZOR identified by secret; + grant connect, resource to XIZOR; + grant unlimited tablespace to XIZOR; + + connect XIZOR/secret + + DROP TABLE test1 purge; + CREATE TABLE test1(id NUMBER, descr VARCHAR(50)); + 
    -- each execution inserts 5.000.000 lines; execute 4 times in order to generate 20.000.000 lines

    DECLARE
    i NUMBER;
    nbrows NUMBER;
    BEGIN
    SELECT NVL(MAX(id),0) INTO i FROM test1;
    i:=i+1;
    nbrows:=i+5000000;
    LOOP
    EXIT WHEN i>nbrows;
    INSERT INTO test1 VALUES(i, RPAD('A',49,'A'));
    i:=i+1;
    END LOOP;
    COMMIT;
    END;
    /

    select count(*) from test1;


Setup capture environment
--------------------------

    create or replace directory dbcapture AS '/home/oracle/rat';

Filter the capture in order to catch only our user actions:

    exec dbms_workload_capture.add_filter('XIZOR user','USER','XIZOR');


    col type format a10
    col status format a10
    col name format a20
    col attribute format a10
    col value format a30
    SET lines 150

    SELECT type,status,name,attribute,value FROM dba_workload_filters;

Start capture
-------------

    exec dbms_workload_capture.start_capture(name => 'XIZOR capture', dir => 'DBCAPTURE', default_action => 'EXCLUDE');


    SET lines 150
    col name FOR a20
    col directory FOR a20
    col status FOR a20
    col filters_used FOR 999


    SELECT id,name,directory,status,filters_used from DBA_WORKLOAD_CAPTURES;

Run user workload
-----------------

Run as the XIZOR user:

    SET serveroutput ON SIZE 999999
    DECLARE
    i NUMBER;
    random_id NUMBER;
    maxid NUMBER;
    stmt VARCHAR2(100);
    BEGIN
    SELECT NVL(MAX(id),0) INTO maxid FROM test1;
    FOR i IN 1..10 LOOP
    random_id:=ROUND(DBMS_RANDOM.VALUE(1,maxid));
    DBMS_OUTPUT.PUT_LINE('UPDATE test1 SET id=' || random_id || ' WHERE id=' || random_id || ';');
    UPDATE test1 SET id=random_id WHERE id=random_id;
    END LOOP;
    COMMIT;
    END;
    /


Finish capture
--------------

    exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE(timeout => 0, reason => 'Load test over');

Generate the capture report:

    SET lines 150
    SET pagesize 1000
    SET LONG 999999
    SET longchunksize 150

    select DBMS_WORKLOAD_CAPTURE.report(1,'HTML') from dual;


Prepare replay environment
---------------------------

    exec
DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE('DBCAPTURE');
    exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'XIZOR replay', replay_dir => 'DBCAPTURE');


    SET lines 150
    col name FOR a20
    col directory FOR a20
    col status FOR a20

    select id,name,directory,status from dba_workload_replays;


    exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY();

Calibrate the replay:

    wrc replaydir=/home/oracle/rat mode=calibrate

(optional) change structure
---------------------------

Before running the replay we will change the structure on the database in order to generate a real difference between the capture and replay scenarios:

    create index test1_idx_id ON test1(id);
    exec dbms_stats.gather_table_stats('XIZOR','TEST1');


Replay
------

Create a user for the RAT replay:

    create user rat_user identified by secret;


    create role rat_role;
    grant create session to rat_role;
    grant execute on dbms_workload_capture to rat_role;
    grant execute on dbms_workload_replay to rat_role;
    grant create session to rat_role;
    grant create any directory to rat_role;
    grant select_catalog_role to rat_role;
    grant execute on dbms_workload_repository to rat_role;
    grant administer sql tuning set to rat_role;
    grant oem_advisor to rat_role;
    grant create job to rat_role;
    grant become user to rat_role;


    grant rat_role to rat_user;


Start the worker client:

    wrc rat_user/secret replaydir=/home/oracle/rat

Start the replay:

    exec DBMS_WORKLOAD_REPLAY.START_REPLAY();

Check the replay status and wait until the status is *COMPLETED*:

    SET lines 150
    col name FOR a20
    col directory FOR a20
    col status FOR a20

    select id,name,directory,status from dba_workload_replays;


Generate replay report
----------------------

Identify the *ID* of the replay and generate the report:

    SET lines 150
    SET pagesize 1000
    SET LONG 999999
    SET longchunksize 150

    SELECT dbms_workload_replay.report(1,'HTML') FROM dual;
diff --git a/tiddlywiki/RMAN backup -
examples.txt b/tiddlywiki/RMAN backup - examples.txt new file mode 100755 index 0000000..76b83e0 --- /dev/null +++ b/tiddlywiki/RMAN backup - examples.txt @@ -0,0 +1,18 @@ +rman target / + +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input; + release channel ch01; + release channel ch02; + release channel ch03; + release channel ch04; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.controlfile'; + backup current controlfile; + release channel ch01; +} diff --git a/tiddlywiki/RMAN duplicate from active database.txt b/tiddlywiki/RMAN duplicate from active database.txt new file mode 100755 index 0000000..c602838 --- /dev/null +++ b/tiddlywiki/RMAN duplicate from active database.txt @@ -0,0 +1,42 @@ +oracle@bakura[EWOKPRD]:/mnt/yavin4/tmp/_oracle_/tmp$ cat listener.ora + +MYLSNR = + (DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCP)(HOST = bakura)(PORT = 1600)) + ) + ) + +SID_LIST_MYLSNR = + (SID_LIST = + (SID_DESC = + (GLOBAL_DBNAME = EWOKPRD_STATIC) + (SID_NAME = EWOKPRD) + (ORACLE_HOME = /app/oracle/product/19) + ) + ) + + +export TNS_ADMIN=/mnt/yavin4/tmp/_oracle_/tmp +lsnrctl start MYLSNR +lsnrctl status MYLSNR + + +connect sys/"*****"@//bakura:1600/EWOKPRD_STATIC as sysdba +connect sys/"*****"@//togoria:1521/ANDOPRD as sysdba + + +rman target=sys/"*****"@//togoria:1521/ANDOPRD auxiliary=sys/"*****"@//bakura:1600/EWOKPRD_STATIC +run { + allocate channel pri1 device type 
DISK; + allocate channel pri2 device type DISK; + allocate channel pri3 device type DISK; + allocate channel pri4 device type DISK; + allocate auxiliary channel aux1 device type DISK; + allocate auxiliary channel aux2 device type DISK; + allocate auxiliary channel aux3 device type DISK; + allocate auxiliary channel aux4 device type DISK; + duplicate target database to 'EWOK' + from active database + using compressed backupset section size 1G; +} diff --git a/tiddlywiki/RMAN duplicate from location for STANDBY.txt b/tiddlywiki/RMAN duplicate from location for STANDBY.txt new file mode 100755 index 0000000..fe6b41e --- /dev/null +++ b/tiddlywiki/RMAN duplicate from location for STANDBY.txt @@ -0,0 +1,22 @@ +-- on new standby database + +alter system set db_name='' scope=spfile sid='*'; +alter system set db_unique_name='' scope=spfile sid='*'; + +-- duplicate using RMAN +rman auxiliary / + +run +{ + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate database '' for standby backup location ''; +} diff --git a/tiddlywiki/RMAN duplicate from location from a cold backup.txt b/tiddlywiki/RMAN duplicate from location from a cold backup.txt new file mode 100755 index 0000000..38ddc98 --- /dev/null +++ b/tiddlywiki/RMAN duplicate from location from a cold backup.txt @@ -0,0 +1,16 @@ +rman auxiliary / + +run +{ + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel 
aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate target database to QTFRINT1 backup location '/backup/QTFRPRD/' noredo; +} diff --git a/tiddlywiki/RMAN duplicate from location from a hot backup.txt b/tiddlywiki/RMAN duplicate from location from a hot backup.txt new file mode 100755 index 0000000..7b2e329 --- /dev/null +++ b/tiddlywiki/RMAN duplicate from location from a hot backup.txt @@ -0,0 +1,16 @@ +rman auxiliary / + +run +{ + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate target database to QTFRINT1 backup location '/backup/QTFRPRD/'; +} diff --git a/tiddlywiki/RMAN restore UNTIL TIME.txt b/tiddlywiki/RMAN restore UNTIL TIME.txt new file mode 100755 index 0000000..a0b2614 --- /dev/null +++ b/tiddlywiki/RMAN restore UNTIL TIME.txt @@ -0,0 +1,24 @@ +rman target / + +run +{ + set until time "to_date('26-06-2019:09:00:00','DD-MM-YYYY:HH24:MI:SS')"; + shutdown abort; + startup mount; + allocate channel ch01 device type disk; + allocate channel ch02 device type disk; + allocate channel ch03 device type disk; + allocate channel ch04 device type disk; + allocate channel ch05 device type disk; + allocate channel ch06 device type disk; + allocate channel ch07 device type disk; + allocate 
channel ch08 device type disk; + allocate channel ch09 device type disk; + allocate channel ch10 device type disk; + allocate channel ch11 device type disk; + allocate channel ch12 device type disk; + restore database; + recover database; + alter database open resetlogs; +} + diff --git a/tiddlywiki/Recompile Invalid Objects in PDB$SEED.txt b/tiddlywiki/Recompile Invalid Objects in PDB$SEED.txt new file mode 100755 index 0000000..33f0dd7 --- /dev/null +++ b/tiddlywiki/Recompile Invalid Objects in PDB$SEED.txt @@ -0,0 +1,43 @@ +-- from http://www.dbstar.com/dispref.asp?ref_name=recompile_invalid_objs_seed.sql + +conn / as sysdba +alter session set container=PDB$SEED; + +show con_name + +select open_mode from v$database; + +alter session set "_oracle_script"=TRUE; + +alter pluggable database pdb$seed close immediate instances=all; + +select open_mode from v$database; + +alter pluggable database pdb$seed OPEN READ WRITE; + +show con_name; + +select open_mode from v$database; + +@?/rdbms/admin/utlrp.sql + +col COMP_NAME for a35; +col COMP_ID for a10; +set linesize 120; +select comp_id, comp_name, version, status from dba_registry; +select count(*) from dba_objects where status='INVALID'; + +col OBJECT_NAME for a30; + +select object_name, object_type, owner from dba_objects where status='INVALID' order by owner; +select owner, object_type, count(*) from dba_objects where status = 'INVALID' group by owner, object_type; + +alter pluggable database pdb$seed close immediate instances=all; + +alter pluggable database pdb$seed OPEN READ ONLY; + +show con_name; + +select open_mode from v$database; + +alter session set "_oracle_script"=FALSE; diff --git a/tiddlywiki/Redologs _ Standby redologs.txt b/tiddlywiki/Redologs _ Standby redologs.txt new file mode 100755 index 0000000..2263d8e --- /dev/null +++ b/tiddlywiki/Redologs _ Standby redologs.txt @@ -0,0 +1,14 @@ +-- list Redologs / Standby redologs groups and members +select GROUP#,THREAD#,MEMBERS,STATUS, BYTES/(1024*1024) Mb 
from v$log; +select GROUP#,THREAD#,STATUS, BYTES/(1024*1024) Mb from v$standby_log; + +col MEMBER for a60 +select * from v$logfile; + +-- create standby redologs +select 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; +select distinct 'ALTER DATABASE ADD STANDBY LOGFILE THREAD '||thread#||' size '||bytes||';' from v$log; + +-- clear / drop standby redologs +select 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' from v$standby_log; +select 'ALTER DATABASE DROP STANDBY LOGFILE GROUP '||GROUP#||';' from v$standby_log; diff --git a/tiddlywiki/Remove trailing whitespace in cells in Libre Office spreadsheet.txt b/tiddlywiki/Remove trailing whitespace in cells in Libre Office spreadsheet.txt new file mode 100755 index 0000000..355e148 --- /dev/null +++ b/tiddlywiki/Remove trailing whitespace in cells in Libre Office spreadsheet.txt @@ -0,0 +1,6 @@ +Search for: ^\s+|\s+$ +Replace with: empty +Options/Current selection only: ON +Options/Regular expressions: ON + +Click Replace All \ No newline at end of file diff --git a/tiddlywiki/Restore points.txt b/tiddlywiki/Restore points.txt new file mode 100755 index 0000000..eca3a0d --- /dev/null +++ b/tiddlywiki/Restore points.txt @@ -0,0 +1,40 @@ +-- create a Restore Point guarantee +create restore point BEFORE_BILLING_FUEL guarantee flashback database; + +-- drop a Restore Point +drop restore point BEFORE_BILLING_FUEL; + +------------------------------- +-- list Restore Points +-- based on Vishal Gupta script +------------------------------- +set lines 180 pages 100 + +DEFINE BYTES_FORMAT="9,999,999" +DEFINE BYTES_HEADING="MB" +DEFINE BYTES_DIVIDER="1024/1024" + +PROMPT ***************************************************************** +PROMPT * Restore Points +PROMPT ***************************************************************** + +COLUMN time HEADING "Time" FORMAT a18 +COLUMN name HEADING "Name" FORMAT a40 +COLUMN guarantee_flashback_database HEADING "Guar|ant'd" FORMAT a5 +COLUMN 
preserved HEADING "Pre|ser|ved" FORMAT a3
COLUMN restore_point_time HEADING "Restore|Point|Time" FORMAT a18
COLUMN scn HEADING "SCN" FORMAT 999999999999999
COLUMN database_incarnation# HEADING "DB|Inc#" FORMAT 9999
COLUMN storage_size HEADING "Size(&&BYTES_HEADING)" FORMAT &&BYTES_FORMAT

SELECT TO_CHAR(r.time,'DD-MON-YY HH24:MI:SS') time
     , r.name
     , r.guarantee_flashback_database
     , r.preserved
     , r.database_incarnation#
     , r.scn
     , (r.storage_size)/&&BYTES_DIVIDER storage_size
     , TO_CHAR(r.restore_point_time,'DD-MON-YY HH24:MI:SS') restore_point_time
  FROM v$restore_point r
ORDER BY r.time
;
diff --git a/tiddlywiki/Resume astreinte fevrier 2020.txt b/tiddlywiki/Resume astreinte fevrier 2020.txt
new file mode 100755
index 0000000..f82c3e3
--- /dev/null
+++ b/tiddlywiki/Resume astreinte fevrier 2020.txt
@@ -0,0 +1,36 @@
Saturday 01/02 05:30-15:30 (10h)
 - G4: index creation for the FKs
 - 10 JIRAs to deploy
 - check OGG
 - resync OGG UK
 - EBF for Ange (with issues)
 - manual installation of a package for DRIVE France
 - Spain backup for Ashley

Sunday 02/02 (6h)
 - Spain backup, 06:00-08:00
 - check OGG Spain, 11:00-12:00
 - ODS on-call call for a tablespace issue in ODSSPPRDEXA, 14:00-15:00
 - Spain backup: 22:00-24:00

Saturday 29/02 08:00-14:00 (6h)

Total:
On call: from Saturday 01/02 19:00 to Monday 03/02 08:00

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
01/02/2020 05:30 01/02/2020 15:30 10,00 Planifié Opération Métier Production "** Drive REL 5.0 - multiple operations on G4 Drive databases (see details)
** Drive REL 5.0 - multiple operations on G4 Drive databases (see details)
 creating missing index for FK columns
-> deploy JIRAs
-> OGG Resync for UK
-> unplanned EBF for Datastage
-> Drive FR issues - performance + manual reinstall of quotation package
-> backup and put aside a SPAIN backup
Drive REL 5.0 - multiple operations on G4 Drive databases (see details)"
02/02/2020 06:00 02/02/2020 08:00 2,00 Planifié
Opération Métier Production Drive REL 5.0 - backup before DATA MIGRATION & put aside DRS1PRDEXA
02/02/2020 11:00 02/02/2020 12:00 1,00 Planifié Opération Métier Production Drive REL 5.0 - Spain Data Migration - check OGG Sync
02/02/2020 14:00 02/02/2020 15:00 1,00 Non_planifié_hors_batch Incident batch métier Call from ODS on duty - ITPODS01_FULL_ODI215 - increment tablespace size on ODS ITALY
02/02/2020 22:00 03/02/2020 00:00 2,00 Planifié Opération Métier Production Drive REL 5.0 - backup & put aside DRS1PRDEXA
29/02/2020 08:00 29/02/2020 14:00 6,00 Planifié Opération I&P Production DBFSOGG12 resize on production cluster

diff --git a/tiddlywiki/Resume astreinte janvier 2020.txt b/tiddlywiki/Resume astreinte janvier 2020.txt
new file mode 100755
index 0000000..94cf639
--- /dev/null
+++ b/tiddlywiki/Resume astreinte janvier 2020.txt
@@ -0,0 +1,44 @@
Monday 27/01 (3h)
 - 21:00-23:00, backup DRS1PRE for Spain Data Migration
 - 23:00-24:00, INC000001320919 WEBDEALER backup fails

Friday 31/01 (4h)
 - 20:00-21:00 -- Job FRPDBA01_DRS1PRDEXA_HOT005 ended not ok
 - 21:00-24:00 -- issues on patching Spain

Saturday 01/02 05:30-15:30 (10h)
 - G4: index creation for the FKs
 - 10 JIRAs to deploy
 - check OGG
 - resync OGG UK
 - EBF for Ange (with issues)
 - manual installation of a package for DRIVE France
 - Spain backup for Ashley

Sunday 02/02 (6h)
 - Spain backup, 06:00-08:00
 - check OGG Spain, 11:00-12:00
 - ODS on-call call for a tablespace issue in ODSSPPRDEXA, 14:00-15:00
 - Spain backup: 22:00-24:00

Total: 23h
On call: from Monday 27/01 19:00 to Monday 03/02 08:00

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
27/01/2020 21:00 27/01/2020 23:00 2,00 Planifié Opération Métier Production Drive REL 5.0 - backup & put aside DRS1PRDEXA
27/01/2020 23:00 28/01/2020 00:00 1,00 Non_planifié_batch Incident batch technique - Sauvegardes INC000001320919 MySQL WEBDEALER backup fails
31/01/2020 20:00 31/01/2020 21:00 1,00 Non_planifié_batch Incident batch
technique - Sauvegardes Job FRPDBA01_DRS1PRDEXA_HOT005 ended not ok at 31/01/2020 20:10:11
31/01/2020 21:00 01/02/2020 00:00 3,00 Non_planifié_hors_batch Opération Métier Production Technical issue during Spain patching with Jenkins
01/02/2020 05:30 01/02/2020 15:30 10,00 Planifié Opération Métier Production "** Drive REL 5.0 - multiple operations on G4 Drive databases (see details)
** Drive REL 5.0 - multiple operations on G4 Drive databases (see details)
 creating missing index for FK columns
-> deploy JIRAs
-> OGG Resync for UK
-> unplanned EBF for Datastage
-> Drive FR issues - performance + manual reinstall of quotation package
-> backup and put aside a SPAIN backup
Drive REL 5.0 - multiple operations on G4 Drive databases (see details)"
02/02/2020 06:00 02/02/2020 08:00 2,00 Planifié Opération Métier Production Drive REL 5.0 - backup before DATA MIGRATION & put aside DRS1PRDEXA
02/02/2020 11:00 02/02/2020 12:00 1,00 Planifié Opération Métier Production Drive REL 5.0 - Spain Data Migration - check OGG Sync
02/02/2020 14:00 02/02/2020 15:00 1,00 Non_planifié_hors_batch Incident batch métier Call from ODS on duty - ITPODS01_FULL_ODI215 - increment tablespace size on ODS ITALY
02/02/2020 22:00 03/02/2020 00:00 2,00 Planifié Opération Métier Production Drive REL 5.0 - backup & put aside DRS1PRDEXA
diff --git a/tiddlywiki/Resume astreinte juillet 2019.txt b/tiddlywiki/Resume astreinte juillet 2019.txt
new file mode 100755
index 0000000..81e2945
--- /dev/null
+++ b/tiddlywiki/Resume astreinte juillet 2019.txt
@@ -0,0 +1,51 @@
On call: Monday 15-07-2019 => Monday 22-07-2019

16-07-2019 Tuesday
~~~~~~~~~~~~~~~~
22:00 (1h) INC000001174127 / DET problemTbsp:pctUsed on Cluster Database
Total: 1h

17-07-2019 Wednesday
~~~~~~~~~~~~~~~~~~~~
05:00 (2h) INC000001174229 - DET Load:swapUtil on Host dmp01dbadm01.france.intra.corp
19:30 (1h) DEPLOY-90529
20:30 (2h) INC000001174855 RMAN backups fail because of unavailable catalog
23:00 (1h) MEP Motortrade
DEPLOY-90262
Total: 6h


18-07-2019 Thursday
~~~~~~~~~~~~~~~~
03:00 (1h): INC000001174862: Job FRPDBAO_KSPFAPRD_BLACKOUT_OFF ended not ok
19:00 (2h): MobUP for Drive FRANCE
21:00 (1h): EBF DEPLOY-90641
22:00 (1h): INC000001176579 : Alert for LED1PRD database status
23:00 (1h) INC000001176595 / FRPDBA01_RMANPRD_EXP025 backup OEM
Total: 6h


19-07-2019 Friday
~~~~~~~~~~~~~~~~~~~
00:00-02:00: SCOM alerts + INC000001176600 + INC000001176599 + INC000001176598

05:00 (1h) INC000001176487 / FR – SCOM : % Free Space is too low
23:00 (1h) INC000001178141 - FR - CTRLM : Job Cyclique FRPDBA01_XBE1PRDEXA_HOT005 ended not ok
Total: 4h

20-07-2019 Saturday
~~~~~~~~~~~~~~~~~
15:00 (1h) call Citrix On Duty for account locked
16:00 (1h) start DRF1PRDEXA abended extract OEDRF1P
Total: 2h

21-07-2019 Sunday
~~~~~~~~~~~~~~~~~~~
22:00 (1h) INC000001178161 FRPDBA01_NAS1PRDEXA_HOT005 KO
23:00 (1h) INC000001178163 - FR - CTRLM : Job FRPDBA01_XBE1PRDEXA_HOT005 ended not ok at 21/07/2019 23:00:12
Total: 2h

~~~~~~~~~~
TOTAL: 21h
\ No newline at end of file
diff --git a/tiddlywiki/Resume astreinte juin 2019.tid b/tiddlywiki/Resume astreinte juin 2019.tid
new file mode 100755
index 0000000..90e766b
--- /dev/null
+++ b/tiddlywiki/Resume astreinte juin 2019.tid
@@ -0,0 +1,50 @@
created: 20190628135744627
creator: vplesnila
modified: 20190702135815598
modifier: vplesnila
tags: Draft
title: Resumé astreinte juin 2019
type: text/plain

ON CALL
=========

Tuesday 11/06 -> Sunday 16/06

4 WEEKDAY days
2 SUNDAY days


INTERVENTIONS
=============

Friday 14/06
~~~~~~~~~~~~~~
23:00-24:00 (1h): [Drive Release 4.11] task F17'


Saturday 15/06
~~~~~~~~~~~~~~
09:00-18:00 (9h): [Drive Release 4.11] tasks D57, D58, G29, G32
 - D57 - OGG synchro on G4 for CLEVA (FR, IT, SP, UK)
 - D58 - DRP build for CLEVA
 - G29 – DRIVE FR - PRODUCTION backup and UAT refresh
 - G32 - ODS synchro OGG DRIVE UAT France

Sunday 16/06
~~~~~~~~~~~~~~
09:00-16:00 (7h): [Drive Release 4.11]
tasks G53, G56
 - G53 - DRIVE UK - PRODUCTION backup and UAT refresh
 - G56 - 2 ODS synchro OGG DRIVE UAT UK (ODS/ODI)

Saturday 29/06
~~~~~~~~~~~~~~
10:00-13:00: RFCxxxxx Move 12.2 databases

Sunday 30/06
~~~~~~~~~~~~~~
02:00-05:00: RFCxxxxx Move 12.2 databases

=================
TOTAL HOURS: 23h
diff --git a/tiddlywiki/Resume astreinte mars 2020.txt b/tiddlywiki/Resume astreinte mars 2020.txt
new file mode 100755
index 0000000..b776e67
--- /dev/null
+++ b/tiddlywiki/Resume astreinte mars 2020.txt
@@ -0,0 +1,34 @@
Wednesday 04/03
 - 22:00-23:00 (1h): call from ODS on duty to check a session

Thursday 05/03
 - 05:00-06:00 (1h): call from ODS on duty to kill staging sessions
 - 20:00-24:00 (4h): issues on ODS FRANCE - ODIFR.ODS_O_CONTRACT

Friday 06/03
 - 00:00-02:00 (2h): issues on ODS FRANCE - ODIFR.ODS_O_CONTRACT

Saturday 07/03
 - 06:00-08:00 (2h): call from ODS on duty; issue locking the VM_D_INVOICE_LINE table by a Datapump Job
 - 08:00-11:00 (3h): issue with ODS Daytona EXP-IMP jobs ; ZFS appliance issue 100%
 - 20:00-21:00 (1h): FR - CTRLM : Job FRPDBA01_PHX1PRDEXA_HOT005 ended not ok at 07/03/2020 20:00:01 - INC000001371688
 - 21:00-22:00 (1h): RPDBA01_RAT1PRDEXA_HOT005 ended not ok at 07/03/2020 20:52:59 - INC000001371689

Sunday 08/03
 - 12:00-13:00 (1h): FR - CTRLM : Job FRPDBA01_FRSQLPRDXENMOB5_CHK020 ended not ok
 - 13:00-14:00 (1h): call from ODS on duty for killing sessions using VM_F_INVOICE on ODS Spain

Total hours = 1 + 5 + 2 + 7 + 2 = 17h
Total:
On call: from Monday 02/03 19:00 to Monday 09/03 08:00
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PLESNILA Valeriu 04/03/2020 22:00 04/03/2020 23:00 1,00 Non_planifié_batch Incident batch métier Call from ODS on duty for check a locked session
PLESNILA Valeriu 05/03/2020 05:00 05/03/2020 06:00 1,00 Non_planifié_batch Incident batch métier Call from ODS on duty for kill staging sessions
PLESNILA Valeriu 05/03/2020 20:00 06/03/2020 02:00 6,00 Non_planifié_batch Incident batch métier issues on ODS FRANCE -
ODIFR.ODS_O_CONTRACT
PLESNILA Valeriu 07/03/2020 06:00 07/03/2020 08:00 2,00 Non_planifié_batch Incident batch métier call from ODS on duty; issue locking VM_D_INVOICE_LINE table by a Datapump Job
PLESNILA Valeriu 07/03/2020 08:00 07/03/2020 11:00 3,00 Non_planifié_batch Incident batch technique issue with ODS Daytona EXP-IMP jobs ; ZFS appliance issue 100%
PLESNILA Valeriu 07/03/2020 20:00 07/03/2020 21:00 1,00 Non_planifié_batch Incident batch technique - Sauvegardes FR - CTRLM : Job FRPDBA01_PHX1PRDEXA_HOT005 ended not ok at 07/03/2020 20:00:01 - INC000001371688
PLESNILA Valeriu 07/03/2020 21:00 07/03/2020 22:00 1,00 Non_planifié_batch Incident batch technique - Sauvegardes RPDBA01_RAT1PRDEXA_HOT005 ended not ok at 07/03/2020 20:52:59 - INC000001371689
PLESNILA Valeriu 08/03/2020 12:00 08/03/2020 13:00 1,00 Non_planifié_batch Incident batch technique - Base de Données FR - CTRLM : Job FRPDBA01_FRSQLPRDXENMOB5_CHK020 ended not ok
PLESNILA Valeriu 08/03/2020 13:00 08/03/2020 14:00 1,00 Non_planifié_batch Incident batch métier call from ODS on duty for killing sessions using VM_F_INVOICE on ODS Spain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/tiddlywiki/Resume astreinte novembre 2019.txt b/tiddlywiki/Resume astreinte novembre 2019.txt
new file mode 100755
index 0000000..63225c3
--- /dev/null
+++ b/tiddlywiki/Resume astreinte novembre 2019.txt
@@ -0,0 +1,7 @@
leave: 18-22 November
on call: from Monday 4/11 to Monday 11/11 (public holiday)
interventions:
 - Sunday 3/11 9h=08:00-17:00 VMAX Recovery
 - Saturday 16/11 8h=08:00-16:00


diff --git a/tiddlywiki/Resume astreinte octobre 2019.txt b/tiddlywiki/Resume astreinte octobre 2019.txt
new file mode 100755
index 0000000..9b04d12
--- /dev/null
+++ b/tiddlywiki/Resume astreinte octobre 2019.txt
@@ -0,0 +1,12 @@
Leave: 4 days: 3, 4, 7, 8 October
Working days: 22
Days worked: 19
On call: Friday 11/10 -> Saturday 12/10 (1 day)
Interventions:
 1h: 11/10/2019 23:00 12/10/2019 00:00, SQL Server CHECK
step fails in a maintenance plan
+ 1h: 12/10/2019 03:00 12/10/2019 04:00, Wrong alarm from Pilotage -- not many alerts but not DBA perimeter
+ 4h: 27/10/2019 08:00 27/10/2019 12:00, Database MOVE to 12.1 (190115) - Lot 1
+ 5h: 27/10/2019 14:00 27/10/2019 19:00, Database MOVE to 12.1 (190115) - Lot 2
+
+
+
diff --git a/tiddlywiki/Resume astreinte septembre 2019.txt new file mode 100755 index 0000000..4d89815 --- /dev/null +++ b/tiddlywiki/Resume astreinte septembre 2019.txt @@ -0,0 +1,3 @@
+congé: 16-20 septembre
+astreinte dimanche 1 septembre
+intervention dimanche 8 septembre 3h=08h-11h - patch CRS sur DMP01
diff --git a/tiddlywiki/SAMBA (cifs) mount on Linux.txt new file mode 100755 index 0000000..7b037c6 --- /dev/null +++ b/tiddlywiki/SAMBA (cifs) mount on Linux.txt @@ -0,0 +1,23 @@
+-- install cifs support
+yum install cifs-utils.x86_64
+
+-- create group/user owner for mapping ownership into the cifs mount
+-- we chose to keep the uid/gid identical across all machines using the Samba share
+groupadd smbuser --gid 1502
+useradd smbuser --uid 1502 -g smbuser -G smbuser
+
+-- test cifs mount
+mount -t cifs //192.168.0.9/share /mnt/yavin4 -o vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,user=vplesnila
+
+umount /mnt/yavin4
+
+-- create credentials file for automount: /root/.smbcred
+username=vplesnila
+password=*****
+
+-- add in /etc/fstab
+//192.168.0.9/share /mnt/yavin4 cifs vers=2.0,uid=smbuser,gid=smbuser,file_mode=0775,dir_mode=0775,credentials=/root/.smbcred 0 0
+
+-- test
+mount -a
+
diff --git a/tiddlywiki/SQL Baseline from AWR.tid new file mode 100755 index 0000000..84f6255 --- /dev/null +++ b/tiddlywiki/SQL Baseline from AWR.tid @@ -0,0 +1,142 @@
+created: 20190623003755119
+creator: vplesnila
+modified: 20190906141112050
+modifier: vplesnila
+tags: Oracle
+title: SQL Baseline from AWR
+type: text/plain
+
+-- 
Create a SQL baseline from AWR
+---------------------------------------------
+
+
+-- Create a SQL tuning set (SQLSET)
+---------------------
+BEGIN
+  DBMS_SQLTUNE.DROP_SQLSET(
+    sqlset_name => 'MySTS01');
+END;
+/
+BEGIN
+  DBMS_SQLTUNE.CREATE_SQLSET(
+    sqlset_name => 'MySTS01',
+    description => 'SQL Tuning Set for loading plan into SQL Plan Baseline');
+END;
+/
+
+
+-- Into the SQLSET=MySTS01 created above, load the history of SQL_ID=d1khdngkga3nm
+-- between AWR snapshots 50947 and 50951
+DECLARE
+  cur sys_refcursor;
+BEGIN
+  OPEN cur FOR
+    SELECT VALUE(P)
+    FROM TABLE(
+      dbms_sqltune.select_workload_repository(
+        begin_snap=>50947,
+        end_snap=>50951,
+        basic_filter=>'sql_id = ''d1khdngkga3nm''',
+        attribute_list=>'ALL')
+      ) p;
+  DBMS_SQLTUNE.LOAD_SQLSET( sqlset_name=> 'MySTS01', populate_cursor=>cur);
+  CLOSE cur;
+END;
+/
+
+-- Show what the SQLSET=MySTS01 contains
+-----------------------------------------------
+SELECT
+  buffer_gets ,
+  optimizer_cost ,
+  plan_hash_value ,
+  sql_id
+  FROM TABLE(DBMS_SQLTUNE.SELECT_SQLSET(sqlset_name => 'MySTS01')
+  )
+/
+
+
+-- Create a SQL Baseline from plan_hash_value=473902782 of SQLSET=MySTS01
+----------------------------------------------------------------------------------
+DECLARE
+my_plans pls_integer;
+BEGIN
+  my_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
+    sqlset_name => 'MySTS01',
+    basic_filter=>'plan_hash_value = ''473902782'''
+    );
+END;
+/
+
+
+-- List the SQL baselines
+----------------------------
+set lines 180 pages 999
+col created for a20 trunc
+col signature for 9999999999999999999
+
+select signature, sql_handle, plan_name, enabled, accepted, fixed, origin,created
+from dba_sql_plan_baselines;
+
+
+-- Fix (pin) a SQL baseline
+------------------------------
+begin dbms_output.put_line(dbms_spm.ALTER_SQL_PLAN_BASELINE(plan_name=>'SQL_PLAN_8uy4magb69vtuf8f30e4c',attribute_name=>'fixed',attribute_value=>'yes'));
+end;
+/
+
+
+-- Get the PLAN_ID and the OUTLINE of a SQL baseline
+-- From the SIGNATURE=9005682359107037619 of the SQL Baseline, we retrieve the PLAN_ID
+-------------------------------------------------------------------------------------------
+SELECT TO_CHAR(so.signature) signature
+     , so.plan_id
+     , DECODE(ad.origin, 1, 'MANUAL-LOAD',
+                         2, 'AUTO-CAPTURE',
+                         3, 'MANUAL-SQLTUNE',
+                         4, 'AUTO-SQLTUNE',
+                         5, 'STORED-OUTLINE',
+                         'UNKNOWN') origin
+     , DECODE(BITAND(so.flags, 1), 1, 'YES', 'NO') enabled
+     , DECODE(BITAND(so.flags, 2), 2, 'YES', 'NO') accepted
+     , DECODE(BITAND(so.flags, 64), 64, 'NO', 'YES') reproduced
+  FROM sys.sqlobj$ so
+     , sys.sqlobj$auxdata ad
+ WHERE ad.signature = so.signature
+   AND ad.plan_id = so.plan_id
+   AND so.signature = 9005682359107037619;
+
+
+-- To get the OUTLINE of the SQL Baseline, we need to know
+-- the SIGNATURE=9005682359107037619 and the PLAN_ID=263533726
+------------------------------------------------------------------
+select cast(extractvalue(value(x), '/hint') as varchar2(500)) as outline_hints
+  from xmltable('/outline_data/hint'
+       passing (select xmltype(comp_data) xml
+                  from sys.sqlobj$data
+                 where signature = 9005682359107037619
+                   and plan_id = 263533726)) x;
+
+
+
+-- Drop the SQL Baseline with SQL_HANDLE=SQL_7cfa9c643693a9b3 and
+-- PLAN_NAME=SQL_PLAN_7tynwchv97admdbd90e8e
+-------------------------------------------------------------------------
+set serveroutput ON
+DECLARE
+  v_dropped_plans number;
+BEGIN
+  v_dropped_plans := DBMS_SPM.DROP_SQL_PLAN_BASELINE (
+    sql_handle => 'SQL_7cfa9c643693a9b3',
+    plan_name=>'SQL_PLAN_7tynwchv97admdbd90e8e'
+);
+  DBMS_OUTPUT.PUT_LINE('dropped ' || v_dropped_plans || ' plans');
+END;
+/
+
+
+-- Display the execution plan of a SQL baseline
+-- In our example, the SQL Baseline has SQL_HANDLE=SQL_24c0db16ff852641
+------------------------------------------------------------------------------
+set lines 200 pages 0
+select * from table(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE('SQL_24c0db16ff852641'));
\ No 
newline at end of file
diff --git a/tiddlywiki/SQL Baseline from library cache.txt new file mode 100755 index 0000000..7d47020 --- /dev/null +++ b/tiddlywiki/SQL Baseline from library cache.txt @@ -0,0 +1,77 @@
+-- Create a SQL baseline from a plan in the library cache
+-- and inject it into a specific sql_text
+
+-- In the following example, we create a SQL Baseline for the SQL having the text of SQL_ID=81qv4d7vkb571,
+-- from a plan in the library cache: SQL_ID=chhkmc32mdkak, PLAN_HASH_VALUE=2494645258
+
+
+set serveroutput ON
+declare
+  sqltext_without_hint clob;
+  ret pls_integer;
+begin
+  select SQL_FULLTEXT into sqltext_without_hint from V$SQL where sql_id = '81qv4d7vkb571' and CHILD_NUMBER=0;
+  ret :=
+    DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
+      SQL_ID =>'chhkmc32mdkak',
+      PLAN_HASH_VALUE => 2494645258,
+      SQL_TEXT => sqltext_without_hint);
+  dbms_output.put_line(ret || ' SQL plan baseline(s) created');
+end;
+/
+
+
+-- List the SQL baselines; note that the plan_id is not shown here
+-----------------------------------------------------------------
+set lines 180 pages 999
+
+col created for a20 trunc
+col signature for 9999999999999999999
+
+select signature, sql_handle, plan_name, enabled, accepted, fixed, origin,created
+from dba_sql_plan_baselines;
+
+
+-- Get the plan_id
+------------------------
+SELECT TO_CHAR(so.signature) signature
+     , so.plan_id
+     , DECODE(ad.origin, 1, 'MANUAL-LOAD',
+                         2, 'AUTO-CAPTURE',
+                         3, 'MANUAL-SQLTUNE',
+                         4, 'AUTO-SQLTUNE',
+                         5, 'STORED-OUTLINE',
+                         'UNKNOWN') origin
+     , DECODE(BITAND(so.flags, 1), 1, 'YES', 'NO') enabled
+     , DECODE(BITAND(so.flags, 2), 2, 'YES', 'NO') accepted
+     , DECODE(BITAND(so.flags, 64), 64, 'NO', 'YES') reproduced
+  FROM sys.sqlobj$ so
+     , sys.sqlobj$auxdata ad
+ WHERE ad.signature = so.signature
+   AND ad.plan_id = so.plan_id
+   AND so.signature = 9005682359107037619;
+
+
+-- Get the OUTLINE of the plan_id
+-----------------------------------
+select cast(extractvalue(value(x), '/hint') as varchar2(500)) as outline_hints
+  from xmltable('/outline_data/hint'
+       passing (select xmltype(comp_data) xml
+                  from sys.sqlobj$data
+                 where signature = 9005682359107037619
+                   and plan_id = 263533726)) x;
+
+
+-- Drop a SQL baseline
+-------------------------
+set serveroutput ON
+DECLARE
+  v_dropped_plans number;
+BEGIN
+  v_dropped_plans := DBMS_SPM.DROP_SQL_PLAN_BASELINE (
+    sql_handle => 'SQL_7cfa9c643693a9b3',
+    plan_name=>'SQL_PLAN_7tynwchv97admdbd90e8e'
+);
+  DBMS_OUTPUT.PUT_LINE('dropped ' || v_dropped_plans || ' plans');
+END;
+/
\ No newline at end of file
diff --git a/tiddlywiki/SQL Profile from AWR.txt new file mode 100755 index 0000000..d7cf4a6 --- /dev/null +++ b/tiddlywiki/SQL Profile from AWR.txt @@ -0,0 +1,141 @@
+-- SQL Profile -- loading from an execution plan in AWR
+-- based on Kerry Osborne's blog ( http://kerryosborne.oracle-guy.com/2009/04/oracle-sql-profiles/ )
+-- and an example by Christian Antognini ( http://antognini.ch/top/ )
+
+DROP TABLE t1;
+
+CREATE TABLE t1 (id, col1, col2, pad)
+AS
+SELECT rownum, CASE WHEN rownum>5000 THEN 5000 ELSE rownum END, rownum, lpad('*',100,'*')
+FROM dual
+CONNECT BY level <= 10000;
+
+CREATE INDEX t1_col1 ON t1 (col1);
+
+-- Gather statistics with automatic histograms
+BEGIN
+  dbms_stats.gather_table_stats(
+    ownname=>user,
+    tabname=>'T1',
+    cascade=>TRUE,
+    estimate_percent=>100,
+    method_opt=>'for all columns size skewonly',
+    no_invalidate=>FALSE);
+END;
+/
+
+-- As long as cursor_sharing is not FORCE, different execution plans are
+-- generated depending on the value of the literal
+-- This can be checked from a 2nd SYSDBA session with the following queries:
+-- SQL> select sid,serial#,SQL_ID,SQL_CHILD_NUMBER,PREV_SQL_ID,PREV_CHILD_NUMBER from v$session where username=&username;
+-- SQL> select * from 
table(dbms_xplan.display_cursor(&SQL_ID,&SQL_CHILD_NUMBER,'typical'));
+-- probably &PREV_SQL_ID and &PREV_CHILD_NUMBER since the queries are fast
+
+
+-------------------------------------
+-- In a 2nd session AS SYSDBA =>
+-- flush the Library Cache
+alter system flush shared_pool;
+-------------------------------------
+-- Application session =>
+-- FULL SCAN
+select * from t1 where col1=5000;
+
+-- In a 2nd session AS SYSDBA =>
+-- 1st AWR snapshot
+execute dbms_workload_repository.CREATE_SNAPSHOT;
+
+
+-- Application session =>
+-- INDEX ACCESS
+select * from t1 where col1=128;
+
+
+-- In a 2nd session AS SYSDBA =>
+-- 2nd AWR snapshot
+execute dbms_workload_repository.CREATE_SNAPSHOT;
+
+------------------------------------------------------
+-- SESSION 2 as SYSDBA
+-- retrieve the SQL_ID and the plan hash value of the SQL doing the FULL scan
+
+set pages 999
+set line 200
+col SQL_TEXT for a70 wrap
+
+select dbid,sql_id,sql_text from DBA_HIST_SQLTEXT where SQL_TEXT like '%select * from t1 where col1=%';
+
+select * from table(dbms_xplan.display_awr('b9tum9b80gsjx')); -- FULL
+
+-- We will create a SQL Profile that uses this last execution plan (FULL)
+-- input values:
+-- sql_id = 'b9tum9b80gsjx' / plan_hash_value=3617692013 / category => 'SQLPROF_CAT_01' / name => 'SQLPROF_SQLPROF_02' / force_match => true
+
+/* Randolf Geist */
+-- create sql profile from awr
+-- sql_id plan_hash_value category force_matching
+declare
+  ar_profile_hints sys.sqlprof_attr;
+  cl_sql_text clob;
+begin
+  select
+    extractvalue(value(d), '/hint') as outline_hints
+  bulk collect
+  into
+    ar_profile_hints
+  from
+    xmltable('/*/outline_data/hint'
+      passing (
+        select
+          xmltype(other_xml) as xmlval
+        from
+          dba_hist_sql_plan
+        where
+          sql_id = 'b9tum9b80gsjx'
+          and plan_hash_value = 3617692013
+          and other_xml is not null
+      )
+    ) d;
+
+  select
+    sql_text
+  into
+    cl_sql_text
+  from
+    dba_hist_sqltext
+  where
+    sql_id = 'b9tum9b80gsjx';
+
+  dbms_sqltune.import_sql_profile(
+    sql_text => cl_sql_text
+    , profile => ar_profile_hints
+    , category => 'SQLPROF_CAT_01'
+    , name => 'SQLPROF_SQLPROF_02'
+    -- use force_match => true
+    -- to use CURSOR_SHARING=SIMILAR
+    -- behaviour, i.e. match even with
+    -- differing literals
+    , force_match => true
+  );
+end;
+/
+
+-- Check that the SQL Profile has been created
+-- normally it is reported in the Note section of the plan output
+
+set line 128
+col sql_text for a40 wrap
+select name,category,sql_text,status,force_matching from DBA_SQL_PROFILES;
+
+------------------------------------------------------
+-- Back in the first (application) session
+-- change the current SQL Tune category
+-- check that the execution plan is indeed the one from the SQL Profile
+
+alter session set sqltune_category='SQLPROF_CAT_01';
+
+explain plan for select * from t1 where col1=5000;
+select * from table(dbms_xplan.display);
+
+explain plan for select * from t1 where col1=1200;
+select * from table(dbms_xplan.display);
diff --git a/tiddlywiki/SQL Profile from library cache.txt new file mode 100755 index 0000000..97aa079 --- /dev/null +++ b/tiddlywiki/SQL Profile from library cache.txt @@ -0,0 +1,127 @@
+-- SQL Profile -- loading from an execution plan in the Library Cache
+-- based on Kerry Osborne's blog ( http://kerryosborne.oracle-guy.com/2009/04/oracle-sql-profiles/ )
+-- and an example by Christian Antognini ( http://antognini.ch/top/ )
+
+DROP TABLE t1;
+
+CREATE TABLE t1 (id, col1, col2, pad)
+AS
+SELECT rownum, CASE WHEN rownum>5000 THEN 5000 ELSE rownum END, rownum, lpad('*',100,'*')
+FROM dual
+CONNECT BY level <= 10000;
+
+CREATE INDEX t1_col1 ON t1 (col1);
+
+-- Gather statistics with automatic histograms
+BEGIN
+  dbms_stats.gather_table_stats(
+    ownname=>user,
+    tabname=>'T1',
+    cascade=>TRUE,
+    estimate_percent=>100,
+    method_opt=>'for all columns size skewonly',
+    no_invalidate=>FALSE);
+END;
+/
+
+-- As long as cursor_sharing is not FORCE, different execution plans are
+-- generated depending on the value of the literal
+-- This can be checked from a 2nd SYSDBA session with the following queries:
+-- SQL> select sid,serial#,SQL_ID,SQL_CHILD_NUMBER,PREV_SQL_ID,PREV_CHILD_NUMBER from v$session where username=&username;
+-- SQL> select * from table(dbms_xplan.display_cursor(&SQL_ID,&SQL_CHILD_NUMBER,'typical'));
+-- probably &PREV_SQL_ID and &PREV_CHILD_NUMBER since the queries are fast
+
+
+set pages 999
+set line 200
+
+-- FULL SCAN
+select * from t1 where col1=5000;
+-- INDEX ACCESS
+select * from t1 where col1=1100;
+
+------------------------------------------------------
+-- SESSION 2 as SYSDBA
+-- from a 2nd sysdba session, retrieve the SQL_ID and CHILD_NUMBER of the 2 previous queries
+-- and show the 2 DIFFERENT execution plans
+
+set pages 999
+set line 200
+col SQL_TEXT for a70 wrap
+
+select SQL_ID,CHILD_NUMBER,SQL_TEXT from V$SQL where SQL_TEXT like '%select * from t1 where col1=%';
+
+select * from table(dbms_xplan.display_cursor('66bfb6r237g69',0,'typical')); -- INDEX
+select * from table(dbms_xplan.display_cursor('b9tum9b80gsjx',0,'typical')); -- FULL
+
+-- We will create a SQL Profile that uses the execution plan with the INDEX access
+-- input values:
+-- sql_id = '66bfb6r237g69' / child_number = 0 / category => 'SQLPROF_CAT_01' / name => 'SQLPROF_SQLPROF_01' / force_match => true
+
+declare
+  ar_profile_hints sys.sqlprof_attr;
+  cl_sql_text clob;
+begin
+  select
+    extractvalue(value(d), '/hint') as outline_hints
+  bulk collect
+  into
+    ar_profile_hints
+  from
+    xmltable('/*/outline_data/hint'
+      passing (
+        select
+          xmltype(other_xml) as xmlval
+        from
+          v$sql_plan
+        where
+          sql_id = '66bfb6r237g69'
+          and child_number = 0
+          and other_xml is not null
+      )
+    ) d;
+
+  select
+    sql_fulltext
+  into
+    cl_sql_text
+  from
+    v$sql
+  where
+    sql_id = '66bfb6r237g69'
+    and child_number = 0;
+
+  dbms_sqltune.import_sql_profile(
+    sql_text => cl_sql_text
+    , profile => ar_profile_hints
+    , category => 'SQLPROF_CAT_01'
+    , name => 'SQLPROF_SQLPROF_01'
+    -- use force_match => true
+    -- to use CURSOR_SHARING=SIMILAR
+    -- behaviour, i.e. match even with
+    -- differing literals
+    , force_match => true
+  );
+
+end;
+/
+
+-- Check that the SQL Profile has been created
+-- normally it is reported in the Note section of the plan output
+
+set line 128
+col sql_text for a40 wrap
+select name,category,sql_text,status,force_matching from DBA_SQL_PROFILES;
+
+------------------------------------------------------
+-- Back in the first (application) session
+-- change the current SQL Tune category
+-- check that the execution plan is indeed the one from the SQL Profile
+
+alter session set sqltune_category='SQLPROF_CAT_01';
+
+explain plan for select * from t1 where col1=5000;
+select * from table(dbms_xplan.display);
+
+explain plan for select * from t1 where col1=1200;
+select * from table(dbms_xplan.display);
diff --git a/tiddlywiki/SQL monitor.txt new file mode 100755 index 0000000..461bbe9 --- /dev/null +++ b/tiddlywiki/SQL monitor.txt @@ -0,0 +1 @@
+-- https://jonathanlewis.wordpress.com/2018/04/06/sql-monitor/
diff --git a/tiddlywiki/SRVCTL commands.txt new file mode 100755 index 0000000..d7a2f53 --- /dev/null +++ b/tiddlywiki/SRVCTL commands.txt @@ -0,0 +1,14 @@
+-- add database, instance & services
+srvctl add database -db YODAEXA -o /u01/app/oracle/product/12.1.0.2/dbhome_1_opt -p '+OTHER1/YODAEXA/spfileYODA'
+
+srvctl add instance -db YODAEXA -i YODA3 -n dmt01dbadm03
+srvctl add instance -db YODAEXA -i YODA4 -n dmt01dbadm04
+
+srvctl add service -db YODAEXA -service YODA_TWO_NODES -preferred "YODA3,YODA4" -role primary
+srvctl add service -db YODAEXA -service YODA_ONE_NODE -preferred YODA3 -available YODA4 -role primary
+
+-- check/enable ADVM proxy
+srvctl status asm -proxy
+srvctl enable asm -proxy
+srvctl start asm -proxy 
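The per-service `srvctl add service` invocations above follow a fixed shape, so they can be generated and reviewed before being run. A minimal POSIX-shell sketch, reusing the database and service names from this note (the helper only prints the commands, it does not execute them):

```shell
#!/bin/sh
# Print (not execute) an "srvctl add service" command from its parameters.
# $1=db name, $2=service name, $3=preferred instances, $4=available instances (optional)
build_add_service() {
    if [ -n "${4:-}" ]; then
        printf 'srvctl add service -db %s -service %s -preferred "%s" -available "%s" -role primary\n' \
            "$1" "$2" "$3" "$4"
    else
        printf 'srvctl add service -db %s -service %s -preferred "%s" -role primary\n' \
            "$1" "$2" "$3"
    fi
}

# Names taken from the note above
build_add_service YODAEXA YODA_TWO_NODES "YODA3,YODA4"
build_add_service YODAEXA YODA_ONE_NODE "YODA3" "YODA4"
```

Piping the output through a review step (or into `sh` once validated) keeps the actual cluster change auditable.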
+ diff --git a/tiddlywiki/See listening port.txt b/tiddlywiki/See listening port.txt new file mode 100755 index 0000000..34cb8c4 --- /dev/null +++ b/tiddlywiki/See listening port.txt @@ -0,0 +1 @@ +alias listen='lsof -i -P | grep -i "listen"' \ No newline at end of file diff --git a/tiddlywiki/Style.txt b/tiddlywiki/Style.txt new file mode 100755 index 0000000..f6f3516 --- /dev/null +++ b/tiddlywiki/Style.txt @@ -0,0 +1,4 @@ +pre, code { + word-wrap: normal; + white-space: pre; +} \ No newline at end of file diff --git a/tiddlywiki/Sublime Text licence key.txt b/tiddlywiki/Sublime Text licence key.txt new file mode 100755 index 0000000..6b706e6 --- /dev/null +++ b/tiddlywiki/Sublime Text licence key.txt @@ -0,0 +1,13 @@ +----- BEGIN LICENSE ----- +Valeriu PLESNILA +Single User License +EA7E-1240553-450272 +4B66D57B 9FCE0922 A939ABFF EA41C323 +C486F5F2 4E2D4A62 339EEB9C 9782B756 +885C0AB5 E899AE90 78696886 9B7D4533 +9D67B85A 5F49105F 3536CE07 B1D0A4BD +F42D0D2B B4B8F4BA EC1B2660 28CCD7C8 +18501B31 43228730 41B9EF22 3CEC68C9 +E5A9BEA7 C2403FD9 A758C991 04593724 +BFE4209D C15B2F56 7649AA35 D5C52CEA +------ END LICENSE ------ \ No newline at end of file diff --git a/tiddlywiki/Sublime text - cursor horizontal in MacosX.txt b/tiddlywiki/Sublime text - cursor horizontal in MacosX.txt new file mode 100755 index 0000000..c37f1c2 --- /dev/null +++ b/tiddlywiki/Sublime text - cursor horizontal in MacosX.txt @@ -0,0 +1 @@ +Option + Command + o \ No newline at end of file diff --git a/tiddlywiki/Summary.tid b/tiddlywiki/Summary.tid new file mode 100755 index 0000000..5e5b31a --- /dev/null +++ b/tiddlywiki/Summary.tid @@ -0,0 +1,11 @@ +created: 20190622072346350 +creator: vplesnila +modified: 20190622073207717 +modifier: vplesnila +tags: +title: Summary +type: text/vnd.tiddlywiki + +
+<> +
\ No newline at end of file
diff --git a/tiddlywiki/TEMPORARY tablespaces.tid new file mode 100755 index 0000000..3d27816 --- /dev/null +++ b/tiddlywiki/TEMPORARY tablespaces.tid @@ -0,0 +1,50 @@
+created: 20190816092600823
+creator: vplesnila
+modified: 20190816123958688
+modifier: vplesnila
+tags: Oracle
+title: TEMPORARY tablespaces
+type: text/plain
+
+-- temporary tablespace size
+select tablespace_name, sum(bytes)/1024/1024 MB
+from dba_temp_files
+group by tablespace_name;
+
+-- temporary tablespace allocation/usage report
+select tablespace_name, sum(bytes_used) /1024/1024 mb_used, sum(bytes_cached) /1024/1024 mb_allocated
+  from gv$temp_extent_pool
+  group by tablespace_name;
+
+select tablespace_name,
+       file_id,
+       extents_cached extents_allocated,
+       extents_used,
+       bytes_cached/1024/1024 mb_allocated,
+       bytes_used/1024/1024 mb_used
+from gv$temp_extent_pool;
+
+
+--Sort Space Usage by Session--
+set lines 200
+col SID_SERIAL for a10
+col USERNAME for a15
+col OSUSER for a10
+col SPID for a10
+col MODULE for a25
+col PROGRAM for a25
+col TABLESPACE for a15
+SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
+       S.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
+       COUNT(*) sort_ops
+FROM gv$sort_usage T, gv$session S, dba_tablespaces TBS, gv$process P
+WHERE T.session_addr = S.saddr
+AND S.paddr = P.addr
+AND T.tablespace = TBS.tablespace_name
+GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
+         S.program, TBS.block_size, T.tablespace
+ORDER BY sid_serial;
+
+
+-- temp space used by a query
+select temp_space from gv$sql_plan where sql_id = '&sql_id';
\ No newline at end of file
diff --git a/tiddlywiki/Tanel_Poder_toolbox.txt new file mode 100755 index 0000000..98561bb --- /dev/null +++ b/tiddlywiki/Tanel_Poder_toolbox.txt @@ -0,0 +1,59 @@
+-- 
https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/V-ACTIVE_SESSION_HISTORY.html
+-- ashtop examples
+-- interesting cols: SQL_OPNAME, time_model_name, top_level_call_name
+@ashtop time_model_name,event2,wait_class 1=1 sysdate-1/24/6 sysdate
+@dashtop top_level_call_name,event2,wait_class 1=1 "timestamp'2020-07-12 15:38:27'" "timestamp'2020-07-12 16:59:36'"
+
+-- show event description
+@sed "log switch"
+
+-- show parameter details
+@p sga
+@pd sga
+-- Display valid parameter values
+@pvalid
+
+-- show latch statistics
+@l redo
+
+-- sql_id details
+@sqlid 34mt4skacwwwd %
+
+-- show SESSION WAIT details for a SID
+@s 122
+
+-- SQL workarea memory and TEMP usage details
+@wrk
+@wrka
+@wrksum
+
+-- display SQL execution plan line level activity breakdown from ASH
+@ash/asqlmon 0sh0fn7r21020 % sysdate-1/24 sysdate
+
+-- ash wait chains
+@ash_wait_chains
+@dash_wait_chains
+
+-- Display background processes
+@bg
+
+-- Display redo log layout
+@log
+
+-- Display wait events description
+@sed
+
+-- Display user session and process information
+@usid
+
+-- Display a histogram of the number of waits from MEMORY
+@evh
+-- Display a histogram of the number of waits from ASH/AWR
+@ash/event_hist.sql
+@ash/devent_hist.sql
+
+-- Display lock type info
+@lt
+
+-- Search SQL_ID in library cache
+@sqlt
diff --git a/tiddlywiki/TiddlyWiki - upgrade on Node.js.txt new file mode 100755 index 0000000..d061279 --- /dev/null +++ b/tiddlywiki/TiddlyWiki - upgrade on Node.js.txt @@ -0,0 +1,2 @@
+npm update -g tiddlywiki
+-- then, restart node.js
\ No newline at end of file
diff --git a/tiddlywiki/To check.txt new file mode 100755 index 0000000..faf5ec2 --- /dev/null +++ b/tiddlywiki/To check.txt @@ -0,0 +1 @@
+https://blog.pythian.com/how-to-accurately-measure-data-guard-lag-events/
\ No newline at end of file
diff --git a/tiddlywiki/Trace activation.txt new 
file mode 100755 index 0000000..44f1953 --- /dev/null +++ b/tiddlywiki/Trace activation.txt @@ -0,0 +1,97 @@
+---------------
+- Trace 10046 -
+---------------
+
+-- For current session
+----------------------
+ALTER SESSION SET tracefile_identifier=Wookie;
+
+exec DBMS_SESSION.session_trace_enable(waits => TRUE, binds=> TRUE);
+exec DBMS_SESSION.session_trace_disable();
+
+-- For another session
+----------------------
+exec DBMS_MONITOR.session_trace_enable (session_id =>21, serial_num=>1143, waits => TRUE, binds=> TRUE);
+
+-- with DBMS_MONITOR we can also trace by service_name, module_name or action_name
+
+-- Trace with an "AFTER LOGON" TRIGGER
+---------------------------------------------
+
+create or replace trigger TRACE_LOGIN_TRIGGER
+after logon on database
+begin
+  if user = 'APRESS' then
+    begin
+      execute immediate 'alter session set tracefile_identifier=APRESS_LOGON';
+      dbms_session.session_trace_enable (waits => TRUE, binds => TRUE);
+    end;
+  end if;
+end;
+/
+
+-- From 11g, we can activate the trace for a specific SQL_ID at session/instance level
+-- http://oraclue.com/2009/03/24/oracle-event-sql_trace-in-11g/
+-- http://tech.e2sn.com/oracle/troubleshooting/oradebug-doc
+
+-- example for tracing one SQL_ID at the system level
+alter system set events 'sql_trace [sql:5vy5qjd3fsn5c] wait=true, bind=true, plan_stat=all_executions, level = 12';
+alter system set events 'sql_trace [sql:5vy5qjd3fsn5c] off';
+
+
+-- classic 10046 at the session level
+alter session set tracefile_identifier='10046';
+alter session set timed_statistics = true;
+alter session set statistics_level=all;
+alter session set max_dump_file_size = unlimited;
+
+alter session set events 'sql_trace level 12';
+
+-- Execute the queries or operations to be traced here --
+alter session set events 'sql_trace off';
+
+
+-- same as previous but limited to only 2 SQL_ID
+alter session set tracefile_identifier='10046';
+alter session set timed_statistics = true;
+alter session set statistics_level=all;
+alter session set max_dump_file_size = unlimited;
+
+alter session set events 'sql_trace [sql:g3yc1js3g2689|7ujay4u33g337] level 12';
+
+-- Execute the queries or operations to be traced here --
+alter session set events 'sql_trace off';
+
+
+---------------
+- Trace 10053 -
+---------------
+
+-- For current session
+alter session set events '10053 trace name context forever, level 1';
+
+
+-- From 11g, we can dump the trace of a SQL_ID from the library cache
+
+begin
+  dbms_sqldiag.dump_trace(
+    p_sql_id=>'3wv7pga0wqxkb',
+    p_child_number=>0,
+    p_component=>'Compiler',
+    p_file_id=>'FIND_SPD_ID_TRACE');
+end;
+/
+
+-- trace for a specific SQL_ID
+-- http://laurentleturgez.wordpress.com/2011/11/29/trace-cbo-computation-for-a-specific-sql_id/
+
+alter session set max_dump_file_size = unlimited;
+ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:5vy5qjd3fsn5c]';
+ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';
+
+
+-- from an external session
+oradebug setospid 28027
+oradebug unlimit
+oradebug event trace[RDBMS.SQL_Optimizer.*][sql:5vy5qjd3fsn5c]
+oradebug tracefile_name
diff --git a/tiddlywiki/Transparent Data Encryption (TDE) setup.txt new file mode 100755 index 0000000..377288e --- /dev/null +++ b/tiddlywiki/Transparent Data Encryption (TDE) setup.txt @@ -0,0 +1,233 @@
+--
+
+create pluggable database VORAS admin user VORAS_ADM identified by VORAS_ADM;
+
+alter pluggable database VORAS open instances=ALL;
+alter pluggable database VORAS save state instances=ALL;
+
+
+Step 1: Configure the Wallet Root
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+~~ create directory for storing TDE WALLET
+~~ in a RAC configuration, this directory can be on a shared file system or manually copied from install node to all other nodes
+
+mkdir -p /app/base/admin/HUTTPRD/wallet/tde
+
+CDB$ROOT> alter system set WALLET_ROOT="/app/base/admin/HUTTPRD/wallet" scope=spfile sid='*';
+
+srvctl stop 
database -d HUTTPRD +srvctl start database -d HUTTPRD + +alter system set TDE_CONFIGURATION="KEYSTORE_CONFIGURATION=FILE" scope=both sid='*'; + +Step 2: Create the password protected key store +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +CDB$ROOT> administer key management create keystore '/app/base/admin/HUTTPRD/wallet/tde' identified by secret; + +~~ previous command will create the file ewallet.p12 +ls -l /app/base/admin/HUTTPRD/wallet/tde/ewallet.p12 + + + +set lines 300 +column WRL_PARAMETER format a40 +select WRL_TYPE, WRL_PARAMETER, STATUS, CON_ID,INST_ID from gv$encryption_wallet; + +WRL_TYPE WRL_PARAMETER STATUS CON_ID INST_ID +-------------------- ---------------------------------------- ------------------------------ ---------- ---------- +FILE /app/base/admin/HUTTPRD/wallet/tde/ NOT_AVAILABLE 1 2 +FILE NOT_AVAILABLE 2 2 +FILE NOT_AVAILABLE 3 2 +FILE /app/base/admin/HUTTPRD/wallet/tde/ CLOSED 1 1 +FILE CLOSED 2 1 +FILE CLOSED 3 1 + +~~ we have status NOT_AVAILABLE for 2nd instance because the wallet has not been yet copied to 2nd + +Step 3: Open the key store CDB & PDB +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +administer key management set keystore open force keystore identified by secret container = all; + + +set lines 300 +column WRL_PARAMETER format a40 +select WRL_TYPE, WRL_PARAMETER, STATUS, CON_ID,INST_ID from gv$encryption_wallet; + +WRL_TYPE WRL_PARAMETER STATUS CON_ID INST_ID +-------------------- ---------------------------------------- ------------------------------ ---------- ---------- +FILE /app/base/admin/HUTTPRD/wallet/tde/ OPEN_NO_MASTER_KEY 1 1 +FILE OPEN_NO_MASTER_KEY 2 1 +FILE OPEN_NO_MASTER_KEY 3 1 +FILE /app/base/admin/HUTTPRD/wallet/tde/ NOT_AVAILABLE 1 2 +FILE NOT_AVAILABLE 2 2 +FILE NOT_AVAILABLE 3 2 + + +Step 4.1: Create the master key for the container database +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + +administer key management set key identified by secret with backup; + +set lines 300 +column 
WRL_PARAMETER format a40 +column NAME format a10 + +select + a.INST_ID, b.NAME, a.STATUS, a.WRL_TYPE, a.WRL_PARAMETER +from + gv$encryption_wallet a, gv$pdbs b +where + a.con_id = b.con_id (+) +order by + a.INST_ID, b.NAME, a.STATUS +; + + INST_ID NAME STATUS WRL_TYPE WRL_PARAMETER +---------- ---------- ------------------------------ -------------------- ---------------------------------------- + 1 PDB$SEED OPEN FILE + 1 PDB$SEED OPEN FILE + 1 VORAS OPEN_NO_MASTER_KEY FILE + 1 VORAS OPEN_NO_MASTER_KEY FILE + 1 OPEN FILE /app/base/admin/HUTTPRD/wallet/tde/ + 2 PDB$SEED NOT_AVAILABLE FILE + 2 PDB$SEED NOT_AVAILABLE FILE + 2 VORAS NOT_AVAILABLE FILE + 2 VORAS NOT_AVAILABLE FILE + 2 NOT_AVAILABLE FILE /app/base/admin/HUTTPRD/wallet/tde/ + + +Step 4.2: Create the master key for PDB (unified mode) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +alter session set container=VORAS; +administer key management set key identified by secret with backup; + +set lines 300 +column WRL_PARAMETER format a40 +column NAME format a10 + +select + a.INST_ID, b.NAME, a.STATUS, a.WRL_TYPE, a.WRL_PARAMETER +from + gv$encryption_wallet a, gv$pdbs b +where + a.con_id = b.con_id (+) +order by + a.INST_ID, b.NAME, a.STATUS +; + + INST_ID NAME STATUS WRL_TYPE WRL_PARAMETER +---------- ---------- ------------------------------ -------------------- ---------------------------------------- + 1 PDB$SEED OPEN FILE + 1 PDB$SEED OPEN FILE + 1 VORAS OPEN FILE + 1 VORAS OPEN FILE + 1 OPEN FILE /app/base/admin/HUTTPRD/wallet/tde/ + 2 PDB$SEED NOT_AVAILABLE FILE + 2 PDB$SEED NOT_AVAILABLE FILE + 2 VORAS NOT_AVAILABLE FILE + 2 VORAS NOT_AVAILABLE FILE + 2 NOT_AVAILABLE FILE /app/base/admin/HUTTPRD/wallet/tde/ + + + +Step 5: Create an autologin keystorefor the CDB +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +administer key management create auto_login keystore from keystore '/app/base/admin/HUTTPRD/wallet/tde' identified by secret; + +ls -ltr /app/base/admin/HUTTPRD/wallet/tde +total 24 
+-rw-------. 1 oracle asmadmin 2555 Apr 12 17:52 ewallet_2020041215523610.p12 +-rw-------. 1 oracle asmadmin 3995 Apr 12 22:26 ewallet_2020041220264943.p12 +-rw-------. 1 oracle asmadmin 5467 Apr 12 22:26 ewallet.p12 +-rw-------. 1 oracle asmadmin 5512 Apr 12 22:35 cwallet.sso + +-- cwallet.sso has been created + +-- copy security files to ALL RAC nodes +cd /app/base/admin/HUTTPRD/wallet/tde +scp -rp * vortex-db02:/app/base/admin/HUTTPRD/wallet/tde + + +set lines 300 +column WRL_PARAMETER format a40 +column NAME format a10 + +select + a.INST_ID, b.NAME, a.STATUS, a.WRL_TYPE, a.WRL_PARAMETER +from + gv$encryption_wallet a, gv$pdbs b +where + a.con_id = b.con_id (+) +order by + a.INST_ID, b.NAME, a.STATUS +; + + INST_ID NAME STATUS WRL_TYPE WRL_PARAMETER +---------- ---------- ------------------------------ -------------------- ---------------------------------------- + 1 PDB$SEED OPEN FILE + 1 PDB$SEED OPEN FILE + 1 VORAS OPEN FILE + 1 VORAS OPEN FILE + 1 OPEN FILE /app/base/admin/HUTTPRD/wallet/tde/ + 2 PDB$SEED OPEN FILE + 2 PDB$SEED OPEN FILE + 2 VORAS OPEN FILE + 2 VORAS OPEN FILE + 2 OPEN FILE /app/base/admin/H + + + +Step 6.1: Encrypt tablespaces online (CDB) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +CDB$ROOT> select TABLESPACE_NAME, STATUS, ENCRYPTED from DBA_TABLESPACES; + + +TABLESPACE_NAME STATUS ENC +------------------------------ --------- --- +SYSTEM ONLINE NO +SYSAUX ONLINE NO +UNDOTBS1 ONLINE NO +TEMP ONLINE NO +UNDOTBS2 ONLINE NO +USERS ONLINE NO + + +CDB$ROOT> alter tablespace USERS encryption online encrypt; +CDB$ROOT> alter tablespace SYSTEM encryption online encrypt; + + +CDB$ROOT> select TABLESPACE_NAME, STATUS, ENCRYPTED from DBA_TABLESPACES; + +TABLESPACE_NAME STATUS ENC +------------------------------ --------- --- +SYSTEM ONLINE YES +SYSAUX ONLINE NO +UNDOTBS1 ONLINE NO +TEMP ONLINE NO +UNDOTBS2 ONLINE NO +USERS ONLINE YES + + + + +RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON; + + +Manually OPEN/CLOSE keystore 
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +SYSKM@HUTTPRD1:CDB$ROOT> administer key management set keystore open force keystore identified by secret container = all; + +keystore altered. + +SYSKM@HUTTPRD1:CDB$ROOT> administer key management set keystore close identified by secret; + +keystore altered. + diff --git a/tiddlywiki/Using DATAPUMP through PL_SQL examples.md b/tiddlywiki/Using DATAPUMP through PL_SQL examples.md new file mode 100755 index 0000000..b618814 --- /dev/null +++ b/tiddlywiki/Using DATAPUMP through PL_SQL examples.md @@ -0,0 +1,190 @@ +>Note based on [Data Pump API for PL/SQL (DBMS_DATAPUMP)](https://oracle-base.com/articles/misc/data-pump-api#schema-import) +> +# Import +## Schema import +`impdp` command line +```bash +impdp userid=superhero/***** \ + dumpfile=MYDUMP:EMUSER.dmp logfile=MYDUMP:impo1.log \ + remap_schema=EMUSER:APP_USER \ + remap_tablespace=EM_DATA:APP_TS +``` + +`PL/SQL` block +```sql +declare + l_dp_handle number; +begin + -- Open a schema import job. + l_dp_handle := dbms_datapump.open( + operation => 'IMPORT', + job_mode => 'SCHEMA', + remote_link => NULL, + job_name => 'EMUSER_IMPORT', + version => 'LATEST'); + + -- Specify the schema to be imported. + dbms_datapump.metadata_filter( + handle => l_dp_handle, + name => 'SCHEMA_EXPR', + value => '= ''EMUSER'''); + + -- Specify the dump file name and directory object name. + dbms_datapump.add_file( + handle => l_dp_handle, + filename => 'EMUSER.dmp', + directory => 'MYDUMP'); + + -- Specify the log file name and directory object name. 
+  dbms_datapump.add_file(
+    handle => l_dp_handle,
+    filename => 'impo1.log',
+    directory => 'MYDUMP',
+    filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
+
+  -- Perform a REMAP_SCHEMA
+  dbms_datapump.metadata_remap(
+    handle => l_dp_handle,
+    name => 'REMAP_SCHEMA',
+    old_value => 'EMUSER',
+    value => 'APP_USER');
+
+  -- Perform a REMAP_TABLESPACE
+  dbms_datapump.metadata_remap(
+    handle => l_dp_handle,
+    name => 'REMAP_TABLESPACE',
+    old_value => 'EM_DATA',
+    value => 'APP_TS');
+
+  dbms_datapump.start_job(l_dp_handle);
+
+  dbms_datapump.detach(l_dp_handle);
+end;
+/
+```
+
+## Multiple schema import
+`impdp` command line
+```bash
+impdp userid=superhero/***** \
+  dumpfile=MYDUMP:GREEDOPRD_%U.dmp logfile=MYDUMP:impo.log \
+  schemas=APP_USER,REPO_USER \
+  remap_schema=APP_USER:ALPHA,REPO_USER:OMEGA \
+  remap_tablespace=APP_TS:TS_ALPHA,REPO_TS:TS_OMEGA
+```
+
+`PL/SQL` block
+```sql
+declare
+  l_dp_handle number;
+begin
+  -- Open a schema import job.
+  l_dp_handle := dbms_datapump.open(
+    operation => 'IMPORT',
+    job_mode => 'SCHEMA',
+    remote_link => NULL,
+    job_name => 'MULTIPLE_SCHEMAS_IMPORT',
+    version => 'LATEST');
+
+  -- Specify the schemas to be imported.
+  dbms_datapump.metadata_filter(
+    handle => l_dp_handle,
+    name => 'SCHEMA_EXPR',
+    value => 'in (''APP_USER'',''REPO_USER'')');
+
+  -- Specify the dump file name and directory object name.
+  dbms_datapump.add_file(
+    handle => l_dp_handle,
+    filename => 'GREEDOPRD_%U.dmp',
+    directory => 'MYDUMP');
+
+  -- Specify the log file name and directory object name.
+ dbms_datapump.add_file( + handle => l_dp_handle, + filename => 'impo.log', + directory => 'MYDUMP', + filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE); + + -- Perform a REMAP_SCHEMA + dbms_datapump.metadata_remap( + handle => l_dp_handle, + name => 'REMAP_SCHEMA', + old_value => 'APP_USER', + value => 'ALPHA'); + + dbms_datapump.metadata_remap( + handle => l_dp_handle, + name => 'REMAP_SCHEMA', + old_value => 'REPO_USER', + value => 'OMEGA'); + + + -- Perform a REMAP_TABLESPACE + dbms_datapump.metadata_remap( + handle => l_dp_handle, + name => 'REMAP_TABLESPACE', + old_value => 'APP_TS', + value => 'TS_ALPHA'); + + dbms_datapump.metadata_remap( + handle => l_dp_handle, + name => 'REMAP_TABLESPACE', + old_value => 'REPO_TS', + value => 'TS_OMEGA'); + + dbms_datapump.set_parallel(l_dp_handle,2); + + dbms_datapump.start_job(l_dp_handle); + + dbms_datapump.detach(l_dp_handle); +end; +/ +``` + +# Export +## FULL database export +`expdp` command line +```bash +expdp userid=superhero/***** \ + dumpfile=MYDUMP:GREEDOPRD_%U.dmp logfile=MYDUMP:GREEDOPRD.log \ + full=Y \ + flashback_time=systimestamp \ + parallel=2 +``` + +`PL/SQL` block +```sql +declare + l_dp_handle number; +begin + -- Open a full export job. + l_dp_handle := dbms_datapump.open( + operation => 'EXPORT', + job_mode => 'FULL', + remote_link => NULL, + job_name => 'GREEDOPRD_FULL_EXP', + version => 'LATEST'); + + -- Specify the dump file name and directory object name. + dbms_datapump.add_file( + handle => l_dp_handle, + filename => 'GREEDOPRD_%U.dmp', + directory => 'MYDUMP'); + + -- Specify the log file name and directory object name. 
+  dbms_datapump.add_file(
+    handle => l_dp_handle,
+    filename => 'GREEDOPRD.log',
+    directory => 'MYDUMP',
+    filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
+
+  dbms_datapump.set_parameter(l_dp_handle,'CLIENT_COMMAND','Full Consistent Data Pump Export with PARALLEL 2');
+  dbms_datapump.set_parameter(l_dp_handle,'FLASHBACK_TIME','SYSTIMESTAMP');
+  dbms_datapump.set_parallel(l_dp_handle,2);
+
+  dbms_datapump.start_job(l_dp_handle);
+
+  dbms_datapump.detach(l_dp_handle);
+end;
+/
+```
\ No newline at end of file
diff --git a/tiddlywiki/Using dnsmanager.io API's.txt b/tiddlywiki/Using dnsmanager.io API's.txt
new file mode 100755
index 0000000..ba2d316
--- /dev/null
+++ b/tiddlywiki/Using dnsmanager.io API's.txt
@@ -0,0 +1,22 @@
+~~ Get your ID/key from profile/API/Keys
+
+ID = 9422ac9d-2c62-4967-ae12-c1d15bbbe200
+Key = I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy
+
+~~ To get a domain/record id, navigate into the domain/record through the app.dnsmanager.io user interface and note down the IDs from the links
+
+~~ List records of a domain
+curl -u 9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy https://app.dnsmanager.io/api/v1/user/domain/139613/records
+
+~~ List a record
+curl -u 9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy https://app.dnsmanager.io/api/v1/user/domain/139613/record/6674671
+
+~~ Update a record (full mode)
+curl -u 9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy -H 'Content-Type: application/json' -X PUT \
+-d '{"id":6674671,"type":"A","name":"power","content":"90.127.90.90","ttl":300,"prio":0}' \
+https://app.dnsmanager.io/api/v1/user/domain/139613/record/6674671
+
+~~ Update a record (partial mode)
+curl -u 9422ac9d-2c62-4967-ae12-c1d15bbbe200:I9HV2Jqp1gFqMuic3zPRYW5guSQEvoyy -H 'Content-Type: application/json' -X PUT \
+-d '{"id":6674671,"content":"90.127.90.119"}' \
+https://app.dnsmanager.io/api/v1/user/domain/139613/record/6674671
\ No newline at end of file
diff --git a/tiddlywiki/Verbe - 
present.tid b/tiddlywiki/Verbe - present.tid
new file mode 100755
index 0000000..fefcd4c
--- /dev/null
+++ b/tiddlywiki/Verbe - present.tid
@@ -0,0 +1,22 @@
+created: 20191107134326719
+creator: vplesnila
+modified: 20191108122108918
+modifier: vplesnila
+tags: English
+title: Verbe - présent
+type: text/vnd.tiddlywiki
+
+|!SIMPLE PRESENT|a regular action or event|
+|!PRESENT CONTINUOUS (or progressive)|an action or event in progress|
+|!PRESENT PERFECT|an action or event that started some time ago or has just finished (or a past action seen in relation to the present)|
+|!PRESENT PERFECT CONTINUOUS (or progressive)|an ongoing action or event that started long ago (used to emphasize the duration of the action)|
+
+Example for the verb ''to walk''
+
+||!SIMPLE PRESENT|!PRESENT CONTINUOUS|!PRESENT PERFECT|!PRESENT PERFECT CONTINUOUS|
+|I|walk|am walking|have walked|have been walking|
+|You|walk|are walking|have walked|have been walking|
+|He / She|walks|is walking|has walked|has been walking|
+|We|walk|are walking|have walked|have been walking|
+|You|walk|are walking|have walked|have been walking|
+|They|walk|are walking|have walked|have been walking|
diff --git a/tiddlywiki/Wake on LAN.md b/tiddlywiki/Wake on LAN.md
new file mode 100755
index 0000000..b0afdbc
--- /dev/null
+++ b/tiddlywiki/Wake on LAN.md
@@ -0,0 +1,11 @@
+Install tools:
+
+    dnf install -y ethtool.x86_64 net-tools.x86_64 ipmitool.x86_64
+
+Using `ether-wake`:
+
+    ether-wake <MAC_ADDRESS>
+
+Using `ipmitool`:
+
+    ipmitool -H <host> -U <user> -P <password> chassis power on
\ No newline at end of file
diff --git a/tiddlywiki/X11 forwarding.md b/tiddlywiki/X11 forwarding.md
new file mode 100755
index 0000000..9c82389
--- /dev/null
+++ b/tiddlywiki/X11 forwarding.md
@@ -0,0 +1,6 @@
+> PowerTools Repository should be enabled
+
+On the **target** host:
+
+    dnf install xauth
+    dnf install xorg-x11-apps.x86_64
diff --git a/tiddlywiki/XEN - CentOS.tid b/tiddlywiki/XEN - CentOS.tid
new file mode 100755
index 0000000..c6baf32
--- 
/dev/null +++ b/tiddlywiki/XEN - CentOS.tid @@ -0,0 +1,22 @@ +created: 20190616214605466 +creator: vplesnila +modified: 20190623001459265 +modifier: vplesnila +tags: Linux CentOS XEN +title: XEN - CentOS +type: text/vnd.tiddlywiki + +! Installation +* https://xen.crc.id.au/support/guides/install/ +* http://www.itzgeek.com/how-tos/mini-howtos/create-a-network-bridge-on-centos-7-rhel-7.html + +! Repositionner GRUB après une mise à jour + + +``` +grub2-mkconfig -o /boot/grub2/grub.cfg + +awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg +grub2-set-default **** +grub2-editenv list +``` diff --git a/tiddlywiki/XEN - Debian.tid b/tiddlywiki/XEN - Debian.tid new file mode 100755 index 0000000..0c35429 --- /dev/null +++ b/tiddlywiki/XEN - Debian.tid @@ -0,0 +1,39 @@ +created: 20200215131428341 +creator: vplesnila +modified: 20200217213843042 +modifier: vplesnila +tags: XEN Linux +title: XEN - Debian +type: text/plain + +~~ Install and configure network bridge +apt-get install bridge-utils pigz cifs-utils + +~~ config file /etc/network/interfaces +~~ pay attention on ident (3 spaces) +----> +# The primary network interface +allow-hotplug enp4s0 +iface enp4s0 inet manual +# Bridge interface +auto xenbr0 +iface xenbr0 inet static + bridge_ports enp4s0 + address 192.168.0.5 + netmask 255.255.255.0 + gateway 192.168.0.1 +<-------------------------------------------- + +~~ restart server and check bridge status +brctl show + +~~ install XEN hypervisor & tools +apt-get install xen-system +apt-get install xen-tools + +~~ restart (the system will start using XEN kernel) and check XEN status +xl info + +~~ update packages +apt-get update +apt-get upgrade \ No newline at end of file diff --git a/tiddlywiki/XEN.tid b/tiddlywiki/XEN.tid new file mode 100755 index 0000000..1f1e734 --- /dev/null +++ b/tiddlywiki/XEN.tid @@ -0,0 +1,8 @@ +color: #80ffff +created: 20200203165630769 +creator: vplesnila +modified: 20200203165628929 +modifier: vplesnila +title: XEN +type: 
text/vnd.tiddlywiki + diff --git a/tiddlywiki/acme_tiny.py - Let's Encrypt - Free SSL_TLS Certificates.tid b/tiddlywiki/acme_tiny.py - Let's Encrypt - Free SSL_TLS Certificates.tid new file mode 100755 index 0000000..5125a52 --- /dev/null +++ b/tiddlywiki/acme_tiny.py - Let's Encrypt - Free SSL_TLS Certificates.tid @@ -0,0 +1,74 @@ +created: 20190618154531946 +creator: vplesnila +modified: 20190622101908943 +modifier: vplesnila +tags: [[Apache HTTPD]] +title: acme_tiny.py - Let's Encrypt - Free SSL/TLS Certificates +type: text/vnd.tiddlywiki + +!! Create a Let's Encrypt account private key + +``` +openssl genpkey -algorithm rsa -pkeyopt rsa_keygen_bits:4096 -out /data/wwwroot/cassandra.itemdb.com/private/letsencrypt.key +``` + + +Create a DOMAIN private key + +``` +openssl genrsa 4096 > /data/wwwroot/cassandra.itemdb.com/private/domain.key +``` + + +!! Create a certificate signing request (CSR) for your domain + +``` +openssl req -new -sha256 -key domain.key -subj "/CN=cassandra.itemdb.com" > /data/wwwroot/cassandra.itemdb.com/private/domain.csr +``` + + +!! Create directory for website host challenge files + +``` +mkdir -p /data/wwwroot/cassandra.itemdb.com/public/.well-known/acme-challenge +``` + + +!! Get (or renew) a signed certificate + +``` +/root/shell/acme_tiny.py \ + --account-key /data/wwwroot/cassandra.itemdb.com/private/letsencrypt.key \ + --csr /data/wwwroot/cassandra.itemdb.com/private/domain.csr \ + --acme-dir /data/wwwroot/cassandra.itemdb.com/public/.well-known/acme-challenge > /data/wwwroot/cassandra.itemdb.com/private/signed_chain.crt +``` + +!! 
Apache configuration
+
+
+```
+<VirtualHost *:80>
+    ServerName cassandra.itemdb.com
+    Redirect permanent / https://cassandra.itemdb.com
+    DocumentRoot "/data/wwwroot/cassandra.itemdb.com/public/"
+    <Directory "/data/wwwroot/cassandra.itemdb.com/public/">
+        Options Indexes FollowSymLinks
+        AllowOverride All
+        Require all granted
+    </Directory>
+</VirtualHost>
+
+<VirtualHost *:443>
+    ServerName cassandra.itemdb.com
+    SSLEngine on
+    SSLCertificateFile "/data/wwwroot/cassandra.itemdb.com/private/signed_chain.crt"
+    SSLCertificateKeyFile "/data/wwwroot/cassandra.itemdb.com/private/domain.key"
+    DocumentRoot "/data/wwwroot/cassandra.itemdb.com/public/"
+    <Directory "/data/wwwroot/cassandra.itemdb.com/public/">
+        DirectoryIndex index.php index.htm index.html
+        Options Indexes FollowSymLinks
+        AllowOverride All
+        Require all granted
+    </Directory>
+</VirtualHost>
+```
diff --git a/tiddlywiki/anglais - verbes.tid b/tiddlywiki/anglais - verbes.tid
new file mode 100755
index 0000000..964dd6d
--- /dev/null
+++ b/tiddlywiki/anglais - verbes.tid
@@ -0,0 +1,39 @@
+created: 20191023125102629
+creator: vplesnila
+modified: 20191107150134891
+modifier: vplesnila
+tags: English
+title: anglais - verbes
+type: text/vnd.tiddlywiki
+
+!! The simple past (//past simple//)
+For regular verbs, add ''d/ed'' to the end of the verb:
+
+* I work''ed'' in Italy last year.
+* Yesterday, we walk''ed'' together in Central Park.
+* They dance''d'' all night.
+* She studi''ed'' Pythagoras' theorem.
+
+For irregular verbs, see the [[list|anglais - les verbes irréguliers]].
+
+For the auxiliary verbs:
+
+|!be |!have |
+|I ''was'' |I ''had'' |
+|you ''were'' |you ''had'' |
+|he/she ''was'' |he/she ''had'' |
+|we ''were''|we ''had'' |
+|you ''were''|you ''had'' |
+|they ''were''|they ''had'' |
+
+The negative is formed with ''did not'' or the contracted form ''didn't'' ''+ the infinitive'' of the verb:
+
+* I ''did not'' play Bridge.
+* She ''didn't'' see errors on the script output.
+* They ''didn't'' encounter any issues.
+
+The interrogative form: ''did'' + ''subject'' + ''the infinitive of the verb'':
+
+* What did you say?
+* Did they study French?
+* Did we dance last night? 
diff --git a/tiddlywiki/ashtop.txt b/tiddlywiki/ashtop.txt new file mode 100755 index 0000000..d91bff9 --- /dev/null +++ b/tiddlywiki/ashtop.txt @@ -0,0 +1 @@ +@ash/ashtop event2,wait_class 1=1 sysdate-1/24/6 sysdate diff --git a/tiddlywiki/autoupgrade notes - 01.txt b/tiddlywiki/autoupgrade notes - 01.txt new file mode 100755 index 0000000..33bd798 --- /dev/null +++ b/tiddlywiki/autoupgrade notes - 01.txt @@ -0,0 +1,137 @@ +export PATH=/app/oracle/product/21/jdk/bin:$PATH +export JAVA_HOME=/app/oracle/product/21/jdk + +java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -version + + + + + +create spfile='/app/oracle/base/admin/WEDGEPRD/spfile/spfileWEDGEPRD.ora' from pfile='/app/oracle/base/admin/WEDGEPRD/pfile/initWEDGEPRD.ora'; + +startup nomount; + +rman auxiliary / + +run +{ + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate target database to WEDGE backup location '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/Standalone/11.2.0.4/WEDGE/'; +} + + +@$ORACLE_HOME/rdbms/admin/catbundle psu apply +@$ORACLE_HOME/rdbms/admin/utlrp + +java -jar /mnt/yavin4/tmp/autoupgrade.jar -version +java -jar /mnt/yavin4/tmp/autoupgrade.jar -config /home/oracle/myconfig.cfg -clear_recovery_data +java -jar /mnt/yavin4/tmp/autoupgrade.jar -config myconfig.cfg -mode analyze +java -jar /mnt/yavin4/tmp/autoupgrade.jar -config myconfig.cfg -mode fixups +java -jar /mnt/yavin4/tmp/autoupgrade.jar -config myconfig.cfg -mode deploy + + 
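
The analyze → fixups → deploy sequence above lends itself to a small wrapper that stops at the first failing stage instead of running the three commands by hand. A minimal sketch, reusing the jar and config paths from the notes above; the `run_stage` helper is illustrative, not part of AutoUpgrade itself:

```shell
#!/bin/bash
# Hedged sketch: drive AutoUpgrade stages in order and stop on the first failure.
# AUTOUPGRADE_JAR and AUTOUPGRADE_CFG default to the paths used in the notes above.
AUTOUPGRADE_JAR=${AUTOUPGRADE_JAR:-/mnt/yavin4/tmp/autoupgrade.jar}
AUTOUPGRADE_CFG=${AUTOUPGRADE_CFG:-myconfig.cfg}

run_stage() {
  # Run one autoupgrade mode; the exit code propagates so the caller can abort.
  local mode=$1
  echo "== autoupgrade -mode ${mode} =="
  java -jar "${AUTOUPGRADE_JAR}" -config "${AUTOUPGRADE_CFG}" -mode "${mode}"
}

# Only drive the real tool when it is actually present on this host.
if [ -f "${AUTOUPGRADE_JAR}" ] && command -v java >/dev/null 2>&1; then
  for mode in analyze fixups deploy; do
    run_stage "${mode}" || { echo "stage ${mode} failed" >&2; exit 1; }
  done
fi
```

Running `deploy` only after `analyze` and `fixups` succeed mirrors the manual sequence above; a failed stage leaves the database untouched by the later ones.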
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +global.autoupg_log_dir=/home/oracle + +upg1.sid=WEDGEPRD # ORACLE_SID of the source DB/CDB +upg1.source_home=/app/oracle/product/11.2 # Path of the source ORACLE_HOME +upg1.target_home=/app/oracle/product/19 # Path of the target ORACLE_HOME +upg1.start_time=NOW # Optional. [NOW | +XhYm (X hours, Y minutes after launch) | dd/mm/yyyy hh:mm:ss] +upg1.upgrade_node=taris.swgalaxy # Optional. To find out the name of your node, run the hostname utility. Default is 'localhost' +upg1.run_utlrp=yes # Optional. Whether or not to run utlrp after upgrade +upg1.timezone_upg=yes # Optional. Whether or not to run the timezone upgrade +upg1.target_version=19 # Oracle version of the target ORACLE_HOME. Only required when the target Oracle database version is 12.2 + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +create spfile='/app/oracle/base/admin/ASTYPRD/spfile/spfileASTYPRD.ora' from pfile='/app/oracle/base/admin/ASTYPRD/pfile/initASTYPRD.ora'; + +startup nomount; + +rman auxiliary / + +run +{ + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate target database to ASTY backup location '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/Standalone/19.11/ASTY/'; +} + + + +cd $ORACLE_HOME/OPatch +./datapatch + +@$ORACLE_HOME/rdbms/admin/utlrp + + +global.autoupg_log_dir=/home/oracle + +# +# Database 
number 3 - Noncdb to PDB upgrade +# +upg3.sid=WEDGEPRD +upg3.source_home=/app/oracle/product/11.2 +upg3.target_cdb=ASTYPRD +upg3.target_home=/app/oracle/product/19 +upg3.target_pdb_name=PDBWEDGEPRD +upg3.start_time=NOW # Optional. 10 Minutes from now +upg3.upgrade_node=localhost # Optional. To find out the name of your node, run the hostname utility. Default is 'localhost' +upg3.run_utlrp=yes # Optional. Whether or not to run utlrp after upgrade +upg3.timezone_upg=yes # Optional. Whether or not to run the timezone upgrade + + +rman target / + +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/ASTYPRD/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/ASTYPRD/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/ASTYPRD/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/ASTYPRD/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input; + release channel ch01; + release channel ch02; + release channel ch03; + release channel ch04; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/ASTYPRD/%d_%U_%s_%t.controlfile'; + backup current controlfile; + release channel ch01; +} + + +run +{ + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/WEDGEPRD/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/WEDGEPRD/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/WEDGEPRD/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format 
'/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/WEDGEPRD/%d_%U_%s_%t.bck';
+    backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input;
+    release channel ch01;
+    release channel ch02;
+    release channel ch03;
+    release channel ch04;
+    allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/temp/upgrade1/WEDGEPRD/%d_%U_%s_%t.controlfile';
+    backup current controlfile;
+    release channel ch01;
+}
diff --git a/tiddlywiki/awr_sql_id_exec_history.sql.txt b/tiddlywiki/awr_sql_id_exec_history.sql.txt
new file mode 100755
index 0000000..b99dbcd
--- /dev/null
+++ b/tiddlywiki/awr_sql_id_exec_history.sql.txt
@@ -0,0 +1,53 @@
+-- vplesnila
+------------
+
+
+set lines 180 pages 100
+
+alter session set nls_date_format='dd/mm hh24:mi:ss';
+
+COLUMN iname HEADING "Instance" FORMAT A8
+COLUMN snap_id HEADING "Snap|Id" FORMAT 99999
+COLUMN endsnaptime HEADING "End|snapshot|time" FORMAT A11
+COLUMN plan_hash_value HEADING "Plan|hash|value"
+COLUMN executions_delta HEADING "#Ex" FORMAT 999
+COLUMN buffer_gets_delta HEADING "Buffer|gets"
+COLUMN bufferperexec HEADING "Buffer|gets|/exec" FORMAT 999999999
+COLUMN optimizer_cost HEADING "Optimizer|cost"
+COLUMN rows_processed_delta HEADING "#Rows"
+COLUMN sql_profile HEADING "SQL|Prof" FORMAT A4
+COLUMN elapsed_time_delta HEADING "Elapsed|time|/exec|(sec)" FORMAT 99999
+COLUMN cpu_time_delta HEADING "CPU|time|/exec|(sec)" FORMAT 9999
+COLUMN iowait_delta HEADING "IO|time|/exec|(sec)" FORMAT 99999
+COLUMN px_servers_execs_delta HEADING "Px" FORMAT 999
+COLUMN disk_reads_delta HEADING "Disk|reads"
+COLUMN io_offload_elig_bytes_delta HEADING "Elig|Mb" FORMAT 999999
+COLUMN io_interconnect_bytes_delta HEADING "Inter|Mb" FORMAT 999999
+
+select
+    i.instance_name iname,
+    sqlstat.snap_id,
+    to_char(end_interval_time,'dd/mm hh24:mi') endsnaptime,
+    plan_hash_value,
+    optimizer_cost,
+    executions_delta,
+    buffer_gets_delta,
+    
buffer_gets_delta/executions_delta bufferperexec, + decode (sql_profile,null, '',substr(sql_profile,1,4)) sql_profile, + px_servers_execs_delta, + disk_reads_delta, + round(elapsed_time_delta/1000000) elapsed_time_delta, + round(cpu_time_delta/1000000) cpu_time_delta, + round(iowait_delta/1000000) iowait_delta, + io_offload_elig_bytes_delta/1024/1024 io_offload_elig_bytes_delta, + io_interconnect_bytes_delta/1024/1024 io_interconnect_bytes_delta, + rows_processed_delta +from + dba_hist_sqlstat sqlstat + join dba_hist_snapshot snap on (sqlstat.snap_id=snap.snap_id) and (sqlstat.INSTANCE_NUMBER=snap.INSTANCE_NUMBER) + join gv$instance i on sqlstat.instance_number=i.instance_number +where executions_delta>0 and + sql_id='&&sql_id' +order by + sqlstat.snap_id +/ diff --git a/tiddlywiki/bash - code tips.txt b/tiddlywiki/bash - code tips.txt new file mode 100755 index 0000000..cc4ab7c --- /dev/null +++ b/tiddlywiki/bash - code tips.txt @@ -0,0 +1,66 @@ +# extract database names from oratab +cat /etc/oratab|grep -v "^#"|grep -v "N$"|cut -f1 -d: -s + +# running SID +ps -ef | grep smon | grep -v grep | grep -v sed | awk '{print $NF}' | sed -n 's/ora_smon_\(.*\)/\1/p' + +# check if substring exists in string +string='My long string' +if [[ $string == *"My long"* ]]; then + echo "It's there!" +fi + + +# To save both stdout and stderr to a variable: +# Note that this interleaves stdout and stderr into the same variable. 
+MYVARIABLE="$(path/myExecutable-bin 2>&1)"
+
+# To save just stderr to a variable:
+MYVARIABLE="$(path/myExecutable-bin 2>&1 > /dev/null)"
+
+# XARGS usage
+cat /etc/oratab| grep 12.1.0.2 | grep -v "^#"|grep -v "N$"|cut -f1 -d: -s | xargs -I{} echo myshell.sh -d {}
+
+# test if a variable is in a set
+if [[ "$WORD" =~ ^(cat|dog|horse)$ ]]; then
+    echo "$WORD is in the list"
+else
+    echo "$WORD is not in the list"
+fi
+
+# Remove substring from string
+echo "Hello world" | sed "s/world//g"
+
+# Replace text in file in place
+sed -i 's/myoldstring/thenewstring/g' move_34.sh
+
+# Example with xargs and awk
+cat out.txt | grep -v SUCCESSFULLY | awk -F ":" '{ print $1}' | sort -u | xargs -I{} echo "/dbfs_tools/TOOLS/admin/sh/db_pre_change_oh.sh -d {} -a fixall"
+
+# Find the line number of the first occurrence of a string in a file
+awk '/YODAEXA_DGMGRL/{ print NR; exit }' /tmp/listener.ora
+
+# How to replace an entire line in a text file by line number
+# where N should be replaced by your target line number.
+sed -i 'Ns/.*/replacement-line/' file.txt + +# Uppercasing Text with sed +sed 's/[a-z]/\U&/g' + +# Lowercasing Text with sed +sed 's/[A-Z]/\L&/g' + +# Search a text in a file and print the line# +grep -n "Database Status:" /tmp/vpl.txt | awk -F ":" '{print $1}' + +# echo without NEWLINE +echo -n $x + +# ltrim using sed +cat /tmp/vpl.txt | grep 'Intended State' | awk -F ":" '{print $2}' | sed 's/^ *//g' + +# tkprof multiple traces +ls -1 TRACES*.trc | while read FILE ; do tkprof $FILE $FILE.out; done + +# get errors from a datapump log +cat impo_01.log | grep ORA- | awk -F ":" '{ print $1}' | sort -u diff --git a/tiddlywiki/bash - use encrypted passwords in shell.txt b/tiddlywiki/bash - use encrypted passwords in shell.txt new file mode 100755 index 0000000..6fe996a --- /dev/null +++ b/tiddlywiki/bash - use encrypted passwords in shell.txt @@ -0,0 +1,7 @@ +export SECRET="*****" + +export PASS_CLEAR="*****" +echo ${PASS_CLEAR} | openssl enc -aes-256-cbc -md sha512 -a -pbkdf2 -iter 100000 -salt -pass pass:${SECRET} + +export PASS_ENCRYPTED="*****" +echo ${PASS_ENCRYPTED} | openssl enc -aes-256-cbc -md sha512 -a -d -pbkdf2 -iter 100000 -salt -pass pass:${SECRET} diff --git a/tiddlywiki/certbot - Let's Encrypt - Free SSL_TLS Certificates.tid b/tiddlywiki/certbot - Let's Encrypt - Free SSL_TLS Certificates.tid new file mode 100755 index 0000000..2703932 --- /dev/null +++ b/tiddlywiki/certbot - Let's Encrypt - Free SSL_TLS Certificates.tid @@ -0,0 +1,79 @@ +created: 20190620085907644 +creator: vplesnila +modified: 20220101134215781 +modifier: vplesnila +tags: [[Apache HTTPD]] +title: certbot - Let's Encrypt - Free SSL/TLS Certificates +type: text/vnd.tiddlywiki + +!! certbot installation + +``` +pip3 install certbot +``` + + +!! Virtual host Apache configuration + +``` + + ServerName notes.databasepro.fr + DocumentRoot "/data/wwwroot/notes.databasepro.fr/public/" + + Options Indexes FollowSymLinks + AllowOverride All + Require all granted + + +``` + + +!! 
Generate a signed certificate and a private key from Let's Encrypt
+```
+certbot certonly --webroot --webroot-path /data/wwwroot/notes.databasepro.fr/public -d notes.databasepro.fr
+```
+Generated files:
+
+* Certificate: `/etc/letsencrypt/live/notes.databasepro.fr/fullchain.pem`
+* Key: `/etc/letsencrypt/live/notes.databasepro.fr/privkey.pem`
+
+
+!! Add HTTPS config to Virtual host Apache configuration
+
+```
+<VirtualHost *:443>
+    ServerName notes.databasepro.fr
+    DocumentRoot "/data/wwwroot/notes.databasepro.fr/public/"
+    <Directory "/data/wwwroot/notes.databasepro.fr/public/">
+        Options Indexes FollowSymLinks
+        AllowOverride All
+        Require all granted
+    </Directory>
+    AllowEncodedSlashes on
+    SSLEngine on
+    SSLCertificateFile "/etc/letsencrypt/live/notes.databasepro.fr/fullchain.pem"
+    SSLCertificateKeyFile "/etc/letsencrypt/live/notes.databasepro.fr/privkey.pem"
+</VirtualHost>
+```
+
+!! Restart apache
+
+```
+systemctl restart httpd
+```
+
+!! Renew all certificates
+
+```
+certbot renew
+```
+
+!! Remove a certificate
+
+```
+certbot delete --cert-name code.databasepro.fr
+```
+
+
+
+
diff --git a/tiddlywiki/cra_01.txt b/tiddlywiki/cra_01.txt
new file mode 100755
index 0000000..dda2a9c
--- /dev/null
+++ b/tiddlywiki/cra_01.txt
@@ -0,0 +1,362 @@
+#!/bin/bash
+
+## Usage: dbaas_restorepoint.sh <options>, where:
+##   -d|--database <db_unique_name>
+##   -o|--operation <operation>
+##   -r|--restorepoint <restore_point_name>
+
+
+
+# Static parameters
+#------------------
+typeset ORATAB='/etc/oratab'
+typeset ROOTDIR='/dbfs_tools/TOOLS/dbaas'
+typeset ENVIRONMENT='dev'
+
+# Global variables
+#-----------------
+typeset THIS_SHELL
+typeset DB_UNIQUE_NAME
+typeset OHOME
+typeset OVERSION
+typeset GHOME
+typeset EXEC_ID
+typeset SQLPLUS
+typeset DB_SRVCTL
+typeset GRID_SRVCTL
+typeset -i LAST_RETURN_CODE
+typeset -i AUX_INTEGER
+typeset AUX_STRING
+typeset DB_CONNECT_STRING
+typeset STEP_MESSAGE
+typeset STEP
+typeset OPERATION
+typeset RESTOREPOINT
+
+
+# Procedures
+#-----------
+usage() {
+    [ "$*" ] && echo "$0: $*"
+    sed -n '/^##/,/^$/s/^## \{0,1\}//p' "$0"
+    exit 2
+} 2>/dev/null
+
+
+get_oracle_home (){
+    # Get the last 
3 letters of database unique name parameter
+    AUX_STRING=$(echo ${DB_UNIQUE_NAME} | rev | cut -c 1-3 | rev)
+    if [[ "${AUX_STRING}" != "EXA" ]] ; then
+        echo " ERROR: Invalid database name: ${DB_UNIQUE_NAME} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    fi
+    # Get GRID_HOME
+    GHOME=$(cat ${ORATAB} | grep -v '^#' | grep "+ASM" | cut -d":" -f2 | sed 's/ //g')
+    GRID_SRVCTL=${GHOME}/bin/srvctl
+    ${GRID_SRVCTL} config database -v | grep ${DB_UNIQUE_NAME} | awk '{ print $1 }'> ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+    typeset -i DBCOUNT=$(cat ${ROOTDIR}/workdir/${EXEC_ID}.tmp | wc -l)
+    if [[ ( ${DBCOUNT} -gt 1 ) || ( ${DBCOUNT} -eq 0 ) ]] ; then
+        echo " ERROR: ${DBCOUNT} databases found for the database name: ${DB_UNIQUE_NAME} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    else
+        OHOME=$(${GRID_SRVCTL} config database -v | grep ${DB_UNIQUE_NAME} | awk '{ print $2 }')
+        export ORACLE_HOME=${OHOME}
+        OVERSION=$(${GRID_SRVCTL} config database -v | grep ${DB_UNIQUE_NAME} | awk '{ print $3 }')
+        SQLPLUS="${OHOME}/bin/sqlplus -s /nolog"
+        DB_SRVCTL=${OHOME}/bin/srvctl
+        echo " INFO: database name=${DB_UNIQUE_NAME}, oracle home=${OHOME}, version=${OVERSION} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    fi
+}
+
+ckeck_database_connectivity (){
+    ${SQLPLUS} <<EOF! > /dev/null
+set pages 0
+set head off
+set feed off
+connect ${DB_CONNECT_STRING} as sysdba
+spool ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+alter session set NLS_DATE_FORMAT='DD-MM-YYYY HH24:MI:SS';
+select sysdate from dual;
+spool off
+exit
+EOF!
+    # Remove the empty lines from the file
+    sed -i '/^[[:space:]]*$/d' ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+}
+
+
+get_exec_id (){
+    EXEC_ID="$(date +"%Y%m%d%H%M%S")$$"
+}
+
+count_oracle_error_in_spool (){
+    typeset SPOOL=${1}
+    typeset -i ORA_COUNT=$(cat ${SPOOL} | grep ORA- |wc -l)
+    typeset -i SP2_COUNT=$(cat ${SPOOL} | grep SP2- |wc -l)
+    typeset -i ERR_COUNT=$(echo $((${ORA_COUNT} + ${SP2_COUNT})))
+    return ${ERR_COUNT}
+}
+
+
+create_restore_point (){
+    ${SQLPLUS} <<EOF! > /dev/null
+set pages 0
+set head off
+set feed off
+connect ${DB_CONNECT_STRING} as sysdba
+spool ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+create restore point R${EXEC_ID} guarantee flashback database;
+spool off
+exit
+EOF!
+    # Remove the empty lines from the file
+    sed -i '/^[[:space:]]*$/d' ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+    STEP_MESSAGE="create restorepoint R${EXEC_ID}"
+    count_oracle_error_in_spool "${ROOTDIR}/workdir/${EXEC_ID}.tmp"
+    LAST_RETURN_CODE=$?
+    if [[ ( ${LAST_RETURN_CODE} != 0 ) ]] ; then
+        echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    else
+        AUX_STRING=$(cat "${ROOTDIR}/workdir/${EXEC_ID}.tmp")
+        echo " INFO: ${STEP_MESSAGE} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    fi
+}
+
+delete_restore_point (){
+    ${SQLPLUS} <<EOF! > /dev/null
+set pages 0
+set head off
+set feed off
+connect ${DB_CONNECT_STRING} as sysdba
+spool ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+drop restore point ${RESTOREPOINT};
+spool off
+exit
+EOF!
+    # Remove the empty lines from the file
+    sed -i '/^[[:space:]]*$/d' ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+    STEP_MESSAGE="drop restorepoint ${RESTOREPOINT}"
+    count_oracle_error_in_spool "${ROOTDIR}/workdir/${EXEC_ID}.tmp"
+    LAST_RETURN_CODE=$?
+    if [[ ( ${LAST_RETURN_CODE} != 0 ) ]] ; then
+        echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        cat ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+        exit 1
+    else
+        AUX_STRING=$(cat "${ROOTDIR}/workdir/${EXEC_ID}.tmp")
+        echo " INFO: ${STEP_MESSAGE} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    fi
+}
+
+
+list_restore_points (){
+    ${SQLPLUS} <<EOF! > /dev/null
+connect ${DB_CONNECT_STRING} as sysdba
+
+set lines 180 pages 100
+set verify off
+
+DEFINE BYTES_FORMAT="9,999,999"
+DEFINE BYTES_HEADING="MB"
+DEFINE BYTES_DIVIDER="1024/1024"
+
+spool ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+
+COLUMN time HEADING "Time" FORMAT a18
+COLUMN name HEADING "Name" FORMAT a30
+COLUMN guarantee_flashback_database HEADING "Guar|antee" FORMAT a5
+COLUMN preserved HEADING "Pre|ser|ved" FORMAT a3
+COLUMN restore_point_time HEADING "Restore|Point|Time" FORMAT a18
+COLUMN scn HEADING "SCN" FORMAT 999999999999999
+COLUMN database_incarnation# HEADING "DB|Inc#" FORMAT 9999
+COLUMN storage_size HEADING "Size(&&BYTES_HEADING)" FORMAT &&BYTES_FORMAT
+
+SELECT TO_CHAR(r.time,'DD-MON-YY HH24:MI:SS') time
+     , r.name
+     , r.guarantee_flashback_database
+     , r.preserved
+     , r.database_incarnation#
+     , r.scn
+     , (r.storage_size)/&&BYTES_DIVIDER storage_size
+     , TO_CHAR(r.restore_point_time,'DD-MON-YY HH24:MI:SS') restore_point_time
+  FROM v\$restore_point r
+ORDER BY r.time;
+spool off
+exit
+EOF!
+    STEP_MESSAGE="list restore points"
+    count_oracle_error_in_spool "${ROOTDIR}/workdir/${EXEC_ID}.tmp"
+    LAST_RETURN_CODE=$?
+    if [[ ${LAST_RETURN_CODE} != 0 ]] ; then
+        echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    else
+        sed -i 's/no rows selected/ INFO: no restorepoint found [OK]/g' ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+        cat "${ROOTDIR}/workdir/${EXEC_ID}.tmp"
+    fi
+}
+
+flashback_to_restore_point (){
+    STEP_MESSAGE="stop database"
+    # Get the first instance of the database
+    typeset FIRST_DB_INSTANCE=$(${DB_SRVCTL} status database -d ${DB_UNIQUE_NAME} | head -n 1 | awk -F " " '{print $2}')
+    /dbfs_tools/TOOLS/admin/sh/db_blackout.sh ${DB_UNIQUE_NAME} ON
+    ${DB_SRVCTL} stop database -d ${DB_UNIQUE_NAME}
+    LAST_RETURN_CODE=$?
+    if [[ ${LAST_RETURN_CODE} != 0 ]] ; then
+        echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    else
+        echo " INFO: ${STEP_MESSAGE} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    fi
+
+    STEP_MESSAGE="start in mount mode ${FIRST_DB_INSTANCE} instance of ${DB_UNIQUE_NAME} database"
+    ${DB_SRVCTL} start instance -i ${FIRST_DB_INSTANCE} -d ${DB_UNIQUE_NAME} -o "mount"
+    LAST_RETURN_CODE=$?
+    if [[ ${LAST_RETURN_CODE} != 0 ]] ; then
+        echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    else
+        echo " INFO: ${STEP_MESSAGE} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    fi
+    sleep 30
+    ${SQLPLUS} <<EOF! > /dev/null
+set pages 0
+set head off
+set feed off
+connect ${DB_CONNECT_STRING} as sysdba
+spool ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+flashback database to restore point ${RESTOREPOINT};
+alter database open resetlogs;
+spool off
+exit
+EOF!
+    cat ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+    # Remove the empty lines from the file
+    sed -i '/^[[:space:]]*$/d' ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+    STEP_MESSAGE="flashback to restorepoint ${RESTOREPOINT}"
+    count_oracle_error_in_spool "${ROOTDIR}/workdir/${EXEC_ID}.tmp"
+    LAST_RETURN_CODE=$?
+    if [[ ${LAST_RETURN_CODE} != 0 ]] ; then
+        echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        cat ${ROOTDIR}/workdir/${EXEC_ID}.tmp
+        exit 1
+    else
+        AUX_STRING=$(cat "${ROOTDIR}/workdir/${EXEC_ID}.tmp")
+        echo " INFO: ${STEP_MESSAGE} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    fi
+
+    STEP_MESSAGE="restart database"
+    ${DB_SRVCTL} stop database -d ${DB_UNIQUE_NAME}
+    ${DB_SRVCTL} start database -d ${DB_UNIQUE_NAME}
+    LAST_RETURN_CODE=$?
+    ${DB_SRVCTL} start service -d ${DB_UNIQUE_NAME}
+    /dbfs_tools/TOOLS/admin/sh/db_blackout.sh ${DB_UNIQUE_NAME} OFF
+    if [[ ${LAST_RETURN_CODE} != 0 ]] ; then
+        echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    else
+        echo " INFO: ${STEP_MESSAGE} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    fi
+}
+
+
+# --------
+# M a i n
+#---------
+
+# Get the execution id of the shell
+#----------------------------------
+get_exec_id
+THIS_SHELL=$(basename "$0")
+
+# Parse the input parameters
+while [ "$1" != "" ]; do
+    case $1 in
+        -d | --database )     shift
+                              # Uppercase the database name
+                              AUX_STRING=$1
+                              DB_UNIQUE_NAME=${AUX_STRING^^}
+                              ;;
+        -r | --restorepoint ) shift
+                              RESTOREPOINT=$1
+                              ;;
+        -o | --operation )    shift
+                              OPERATION=$1
+                              ;;
+        -h | --help )         usage
+                              exit
+                              ;;
+        * )                   usage
+                              exit 1
+    esac
+    shift
+done
+
+# Check input parameters for consistency
+#---------------------------------------
+if [ -z "${DB_UNIQUE_NAME}" ]; then usage; exit 1; fi
+
+
+if ! 
[[ "${OPERATION}" == "list" || "${OPERATION}" == "create" || "${OPERATION}" == "delete" || "${OPERATION}" == "flashback" ]] ; then
+    usage
+    exit 1
+fi
+
+if [[ "${OPERATION}" == "delete" || "${OPERATION}" == "flashback" ]] ; then
+    if [ -z "${RESTOREPOINT}" ]; then
+        usage
+        exit 1
+    fi
+fi
+
+if [[ "${ENVIRONMENT}" == "prod" ]] ; then
+    DB_CONNECT_STRING="sys/plusdacces@dmp01-scan/${DB_UNIQUE_NAME}"
+else
+    DB_CONNECT_STRING="sys/plusdacces@dmt01-scan/${DB_UNIQUE_NAME}"
+fi
+
+
+# Get the ORACLE_HOME of the database
+#------------------------------------
+get_oracle_home
+
+# Try to connect sys as sysdba and get the SYSDATE
+ckeck_database_connectivity
+
+STEP_MESSAGE="call sysdate() in ${DB_UNIQUE_NAME} to check database connectivity"
+count_oracle_error_in_spool "${ROOTDIR}/workdir/${EXEC_ID}.tmp"
+LAST_RETURN_CODE=$?
+
+if [[ ${LAST_RETURN_CODE} != 0 ]] ; then
+    echo " ERROR: ${STEP_MESSAGE} [FAILED]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+    exit 1
+else
+    AUX_STRING=$(cat "${ROOTDIR}/workdir/${EXEC_ID}.tmp")
+    echo " INFO: ${STEP_MESSAGE}: ${AUX_STRING} [OK]" | tee -a ${ROOTDIR}/log/${THIS_SHELL}_${EXEC_ID}.log
+fi
+
+
+if [[ "${OPERATION}" == "create" ]] ; then
+    create_restore_point
+    exit $?
+fi
+
+if [[ "${OPERATION}" == "list" ]] ; then
+    list_restore_points
+    exit $?
+fi
+
+if [[ "${OPERATION}" == "flashback" ]] ; then
+    flashback_to_restore_point
+    exit $?
+fi
+
+if [[ "${OPERATION}" == "delete" ]] ; then
+    delete_restore_point
+    exit $?
+fi
diff --git a/tiddlywiki/cra_02.txt b/tiddlywiki/cra_02.txt
new file mode 100755
index 0000000..d752f45
--- /dev/null
+++ b/tiddlywiki/cra_02.txt
@@ -0,0 +1,307 @@
+#!/bin/bash
+
+## Usage: db_change_oh.sh <options>, where:
+##     -d|--database    <db_unique_name>
+##     -o|--oraclehome  <new_oracle_home>
+
+# Global variables
+#-----------------
+typeset ABS_PATH_THIS_SHELL
+typeset ABS_PATH_EXECUTION_LOG_FILE
+typeset EXEC_ID
+typeset ROOTDIR="/dbfs_tools/TOOLS/admin/sh"
+typeset UPLEVEL_SCRIPT_PATH
+typeset NEW_ORACLE_HOME
+typeset THIS_HOST
+typeset DBNODE1
+typeset DBNODE2
+typeset DBINST1
+typeset DBINST2
+typeset EXECUTION_TYPE
+typeset EXECUTION_HOST
+typeset EXECUTION_INSTANCE
+typeset EXECUTION_HOST_BIS
+typeset EXECUTION_INSTANCE_BIS
+typeset THIS_SHELL
+typeset DB_UNIQUE_NAME
+typeset OHOME
+typeset OVERSION
+typeset GHOME
+typeset SQLPLUS
+typeset DB_SRVCTL
+typeset GRID_SRVCTL
+typeset -i LAST_RETURN_CODE
+typeset AUX_STRING
+typeset DATABASE_ROLE
+
+# Procedures
+#-----------
+usage() {
+    [ "$*" ] && echo "$0: $*"
+    sed -n '/^##/,/^$/s/^## \{0,1\}//p' "$0"
+    exit 2
+} 2>/dev/null
+
+get_database_info (){
+    # Get the last 3 letters of the database unique name parameter
+    AUX_STRING=$(echo ${DB_UNIQUE_NAME} | rev | cut -c 1-3 | rev)
+    if [[ "${AUX_STRING}" != "EXA" ]] ; then
+        echo " ERROR: Invalid database name: ${DB_UNIQUE_NAME} [FAILED]" | tee -a ${UPLEVEL_SCRIPT_PATH}/logs/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    fi
+    # Get GRID_HOME
+    GHOME=$(cat /etc/oratab | grep -v '^#' | grep "+ASM" | cut -d":" -f2 | sed 's/ //g')
+    GRID_SRVCTL=${GHOME}/bin/srvctl
+    ${GRID_SRVCTL} config database -v | grep ${DB_UNIQUE_NAME} | awk '{ print $1 }'> ${UPLEVEL_SCRIPT_PATH}/workdir/${EXEC_ID}.tmp
+    typeset -i DBCOUNT=$(cat ${UPLEVEL_SCRIPT_PATH}/workdir/${EXEC_ID}.tmp | wc -l)
+    if [[ ${DBCOUNT} -ne 1 ]] ; then
+        echo " ERROR: ${DBCOUNT} databases found for the database name: ${DB_UNIQUE_NAME} [FAILED]" | tee -a ${UPLEVEL_SCRIPT_PATH}/logs/${THIS_SHELL}_${EXEC_ID}.log
+        exit 1
+    else
+        
OHOME=$(${GRID_SRVCTL} config database -v | grep ${DB_UNIQUE_NAME} | awk '{ print $2 }') + export ORACLE_HOME=${OHOME} + OVERSION=$(${GRID_SRVCTL} config database -v | grep ${DB_UNIQUE_NAME} | awk '{ print $3 }') + SQLPLUS="${OHOME}/bin/sqlplus -s / as sysdba" + DB_SRVCTL=${OHOME}/bin/srvctl + echo "DB_UNIQUE_NAME=${DB_UNIQUE_NAME}, ORACLE_HOME=${OHOME}, VERSION=${OVERSION}" + fi + + # Identify database instances + THIS_HOST=$(hostname -s) + THIS_HOST_NUMBER=$(echo ${THIS_HOST} | rev | cut -c 1-1 | rev) + THIS_HOST_BASE=${THIS_HOST:0:10} + DB_UNIQUE_NAME_BASE=${DB_UNIQUE_NAME:0:${#DB_UNIQUE_NAME}-3} + + STR1=$(${DB_SRVCTL} config database -db ${DB_UNIQUE_NAME} | grep "Configured nodes") + STR2=${STR1#"Configured nodes: "} + DBNODE1=$(echo ${STR2} | awk -F"," '{ print $1}') + DBNODE2=$(echo ${STR2} | awk -F"," '{ print $2}') + + STR1=$(${DB_SRVCTL} config database -db ${DB_UNIQUE_NAME} | grep "Database instances") + STR2=${STR1#"Database instances: "} + DBINST1=$(echo ${STR2} | awk -F"," '{ print $1}') + DBINST2=$(echo ${STR2} | awk -F"," '{ print $2}') + + echo "Database instances: ${DBINST1}@${DBNODE1}, ${DBINST2}@${DBNODE2}" +} + +continue_if_ok () { + RETURN_CODE=$1 + if [[ ( ${RETURN_CODE} != 0 ) ]] ; then + echo "ERROR, please check the logfile: ${ABS_PATH_EXECUTION_LOG_FILE}" + exit 1 + fi +} + +restart_database (){ + ${DB_SRVCTL} stop database -db ${DB_UNIQUE_NAME} > ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1 + continue_if_ok $? + + ${DB_SRVCTL} start database -db ${DB_UNIQUE_NAME} > ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1 + continue_if_ok $? 
+}
+
+copy_init_and_passwordfile (){
+    cp -p ${OHOME}/dbs/init${EXECUTION_INSTANCE}.ora ${NEW_ORACLE_HOME}/dbs/init${EXECUTION_INSTANCE}.ora >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+    cp -p ${OHOME}/dbs/orapw${EXECUTION_INSTANCE} ${NEW_ORACLE_HOME}/dbs/orapw${EXECUTION_INSTANCE} >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+    scp -p ${OHOME}/dbs/init${EXECUTION_INSTANCE}.ora ${EXECUTION_HOST_BIS}:${NEW_ORACLE_HOME}/dbs/init${EXECUTION_INSTANCE_BIS}.ora >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+    scp -p ${OHOME}/dbs/orapw${EXECUTION_INSTANCE} ${EXECUTION_HOST_BIS}:${NEW_ORACLE_HOME}/dbs/orapw${EXECUTION_INSTANCE_BIS} >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+}
+
+disable_cluster (){
+    export ORACLE_HOME=${OHOME}
+    export ORACLE_SID=${EXECUTION_INSTANCE}
+    ${ORACLE_HOME}/bin/sqlplus / as sysdba <<EOF! >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+whenever oserror exit failure
+whenever sqlerror exit sql.sqlcode
+alter system set cluster_database=false scope=spfile sid='*';
+EOF!
+    continue_if_ok $?
+}
+
+stop_db (){
+    ${DB_SRVCTL} stop database -db ${DB_UNIQUE_NAME} >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+    continue_if_ok $?
+}
+
+update_oratab (){
+    cp /etc/oratab ${UPLEVEL_SCRIPT_PATH}/logs/oratab_${DB_UNIQUE_NAME}_${EXECUTION_HOST}_${EXEC_ID}.bak
+    scp ${EXECUTION_HOST_BIS}:/etc/oratab ${UPLEVEL_SCRIPT_PATH}/logs/oratab_${DB_UNIQUE_NAME}_${EXECUTION_HOST_BIS}_${EXEC_ID}.bak >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+
+    sed '/'${DB_UNIQUE_NAME}':/d' /etc/oratab > ${UPLEVEL_SCRIPT_PATH}/workdir/oratab_${DB_UNIQUE_NAME}_${EXEC_ID}.tmp
+    echo "${DB_UNIQUE_NAME}:${NEW_ORACLE_HOME}:N" >> ${UPLEVEL_SCRIPT_PATH}/workdir/oratab_${DB_UNIQUE_NAME}_${EXEC_ID}.tmp
+    cp ${UPLEVEL_SCRIPT_PATH}/workdir/oratab_${DB_UNIQUE_NAME}_${EXEC_ID}.tmp /etc/oratab
+
+    sed '/'${DB_UNIQUE_NAME}':/d' ${UPLEVEL_SCRIPT_PATH}/logs/oratab_${DB_UNIQUE_NAME}_${EXECUTION_HOST_BIS}_${EXEC_ID}.bak > ${UPLEVEL_SCRIPT_PATH}/workdir/oratab_${DB_UNIQUE_NAME}_${EXEC_ID}.tmp
+    echo "${DB_UNIQUE_NAME}:${NEW_ORACLE_HOME}:N" >> ${UPLEVEL_SCRIPT_PATH}/workdir/oratab_${DB_UNIQUE_NAME}_${EXEC_ID}.tmp
+    scp ${UPLEVEL_SCRIPT_PATH}/workdir/oratab_${DB_UNIQUE_NAME}_${EXEC_ID}.tmp ${EXECUTION_HOST_BIS}:/etc/oratab >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+}
+
+execute_datapatch (){
+    # Switch to the new ORACLE_HOME
+    export ORACLE_HOME=${NEW_ORACLE_HOME}
+
+    echo " Start instance ${EXECUTION_INSTANCE}@${EXECUTION_HOST} in UPGRADE mode"
+    ${ORACLE_HOME}/bin/sqlplus / as sysdba <<EOF! >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+whenever oserror exit failure
+whenever sqlerror exit sql.sqlcode
+startup upgrade;
+EOF!
+    continue_if_ok $?
+
+    # Asked by Michael :)
+    ${ORACLE_HOME}/bin/sqlplus / as sysdba <<EOF! >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+whenever oserror exit failure
+whenever sqlerror exit sql.sqlcode
+begin
+    FOR dummy IN (SELECT index_name from dba_indexes where owner='SYS' and index_name='[FK_RES]')
+    LOOP
+        execute immediate 'drop index SYS."[FK_RES]"';
+    END LOOP;
+end;
+/
+EOF!
+    continue_if_ok $?
+
+    echo " Executing DATAPATCH"
+    ${ORACLE_HOME}/OPatch/datapatch >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+    continue_if_ok $?
+}
+
+enable_cluster (){
+    echo " Set cluster_database=true and shutdown ${DB_UNIQUE_NAME} database"
+    ${ORACLE_HOME}/bin/sqlplus / as sysdba <<EOF! >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+whenever oserror exit failure
+whenever sqlerror exit sql.sqlcode
+alter system set cluster_database=true scope=spfile sid='*';
+shutdown immediate;
+EOF!
+    continue_if_ok $?
+}
+
+update_CRS (){
+    ${DB_SRVCTL} modify database -db ${DB_UNIQUE_NAME} -oraclehome ${NEW_ORACLE_HOME} >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+    continue_if_ok $?
+}
+
+start_db (){
+    ${DB_SRVCTL} start database -db ${DB_UNIQUE_NAME} >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+    continue_if_ok $?
+
+    ${DB_SRVCTL} status database -db ${DB_UNIQUE_NAME} -v >> ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log 2>&1
+}
+
+get_database_role (){
+    export ORACLE_HOME=${OHOME}
+    export ORACLE_SID=${EXECUTION_INSTANCE}
+    DATABASE_ROLE=$(${ORACLE_HOME}/bin/sqlplus -s / as sysdba <<EOF!
+set pages 0 head off feed off
+select database_role from v\$database;
+EOF!
+)
+}
+
+local_execution (){
+    get_database_role
+    echo " ${DB_UNIQUE_NAME} database role is ${DATABASE_ROLE}"
+
+    if [ "${DATABASE_ROLE}" == "PRIMARY" ]; then
+        echo " Set cluster_database=false for ${DB_UNIQUE_NAME} database"
+        disable_cluster
+    fi
+
+    echo " Stop ${DB_UNIQUE_NAME} database"
+    stop_db
+
+    echo " Copy init.ora and password file to the new ORACLE_HOME"
+    copy_init_and_passwordfile
+
+    echo " Update ${DB_UNIQUE_NAME} in oratab with the new ORACLE_HOME"
+    update_oratab
+
+    if [ "${DATABASE_ROLE}" == "PRIMARY" ]; then
+        execute_datapatch
+        enable_cluster
+    fi
+
+    echo " Update ${DB_UNIQUE_NAME} in CRS with the new ORACLE_HOME"
+    update_CRS
+
+    if [ "${DATABASE_ROLE}" == "PRIMARY" ]; then
+        echo " Start ${DB_UNIQUE_NAME} database"
+        start_db
+    else
+        echo " Restart ${DB_UNIQUE_NAME} database"
+        stop_db
+        start_db
+    fi
+
+}
+
+# --------
+# M a i n
+#---------
+
+THIS_SHELL=$(basename "$0")
+EXEC_ID="$(date +"%Y%m%d%H%M%S")$$"
+
+# Parse the input parameters
+while [ "$1" != "" ]; do
+    case $1 in
+        -d | --database )   shift
+                            # Uppercase the database name
+                            AUX_STRING=$1
+                            DB_UNIQUE_NAME=${AUX_STRING^^}
+                            ;;
+        -o | --oraclehome ) shift
+                            NEW_ORACLE_HOME=$1
+                            ;;
+        -h | --help )       usage
+                            exit
+                            ;;
+        * )                 usage
+                            exit 1
+    esac
+    shift
+done
+
+# Check input parameters for consistency
+#---------------------------------------
+if [ -z "${DB_UNIQUE_NAME}" ]; then usage; exit 1; fi
+if [ -z "${NEW_ORACLE_HOME}" ]; then usage; exit 1; fi
+
+ABS_PATH_THIS_SHELL=$(readlink -f ${THIS_SHELL})
+UPLEVEL_SCRIPT_PATH=$(dirname "${ROOTDIR}")
+ABS_PATH_EXECUTION_LOG_FILE=$(readlink -f ${UPLEVEL_SCRIPT_PATH}/logs/${DB_UNIQUE_NAME}_${EXEC_ID}.log)
+
+get_database_info
+echo "I like to move it, move it!"
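+
+# A defensive check that could be added at this point (a sketch, not part of the
+# original flow): refuse to continue when NEW_ORACLE_HOME does not look like a
+# usable ORACLE_HOME, i.e. when it contains no executable sqlplus binary. This
+# avoids stopping the database only to fail later in execute_datapatch.
+if [ ! -x "${NEW_ORACLE_HOME}/bin/sqlplus" ]; then
+    echo " ERROR: ${NEW_ORACLE_HOME} does not contain an executable bin/sqlplus [FAILED]"
+    exit 1
+fi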
+ +if [ "${THIS_HOST}" == "${DBNODE1}" ]; then + EXECUTION_HOST=${THIS_HOST} + EXECUTION_INSTANCE=${DBINST1} + EXECUTION_HOST_BIS=${DBNODE2} + EXECUTION_INSTANCE_BIS=${DBINST2} + echo "BEGIN ORACLE HOME MOVE for ${DB_UNIQUE_NAME} database" + local_execution + echo "END ORACLE HOME MOVE for ${DB_UNIQUE_NAME} database" + RETURN_CODE=0 +elif [ "${THIS_HOST}" == "${DBNODE2}" ]; then + EXECUTION_HOST=${THIS_HOST} + EXECUTION_INSTANCE=${DBINST2} + EXECUTION_HOST_BIS=${DBNODE1} + EXECUTION_INSTANCE_BIS=${DBINST1} + echo "BEGIN DATAPATCH for ${DB_UNIQUE_NAME} database" + local_execution + echo "END DATAPATCH for ${DB_UNIQUE_NAME} database" + RETURN_CODE=0 +else + echo "ERROR: please execute this shell on ${DBNODE1} or ${DBNODE2}" + RETURN_CODE=1 +fi + +exit ${RETURN_CODE} diff --git a/tiddlywiki/database_links.sql.txt b/tiddlywiki/database_links.sql.txt new file mode 100755 index 0000000..8df7d8d --- /dev/null +++ b/tiddlywiki/database_links.sql.txt @@ -0,0 +1,8 @@ +set line 180 pages 999 + +col OWNER for a30 +col DB_LINK for a30 +col HOST for a30 +col USERNAME for a30 + +select OWNER,DB_LINK,HOST,USERNAME from dba_db_links; diff --git a/tiddlywiki/dataguard.tid b/tiddlywiki/dataguard.tid new file mode 100755 index 0000000..68c411e --- /dev/null +++ b/tiddlywiki/dataguard.tid @@ -0,0 +1,8 @@ +created: 20190622073543646 +creator: vplesnila +modified: 20190622073550179 +modifier: vplesnila +tags: +title: dataguard +type: text/vnd.tiddlywiki + diff --git a/tiddlywiki/datapump_jobs.sql.txt b/tiddlywiki/datapump_jobs.sql.txt new file mode 100755 index 0000000..e8bcbb5 --- /dev/null +++ b/tiddlywiki/datapump_jobs.sql.txt @@ -0,0 +1,26 @@ +-- ----------------------------------------------------------------------------------- +-- File Name : https://oracle-base.com/dba/10g/datapump_jobs.sql +-- Author : Tim Hall +-- Description : Displays information about all Data Pump jobs. +-- Requirements : Access to the DBA views. 
+-- Call Syntax : @datapump_jobs +-- Last Modified: 28/01/2019 +-- ----------------------------------------------------------------------------------- +SET LINESIZE 150 + +COLUMN owner_name FORMAT A20 +COLUMN job_name FORMAT A30 +COLUMN operation FORMAT A10 +COLUMN job_mode FORMAT A10 +COLUMN state FORMAT A12 + +SELECT owner_name, + job_name, + TRIM(operation) AS operation, + TRIM(job_mode) AS job_mode, + state, + degree, + attached_sessions, + datapump_sessions +FROM dba_datapump_jobs +ORDER BY 1, 2; \ No newline at end of file diff --git a/tiddlywiki/dba_registry.sql.txt b/tiddlywiki/dba_registry.sql.txt new file mode 100755 index 0000000..5584a78 --- /dev/null +++ b/tiddlywiki/dba_registry.sql.txt @@ -0,0 +1,7 @@ +set lines 180 pages 999 + +column COMP_NAME for a40 trunc head 'Component name' +column VERSION for a20 head 'Version' +column STATUS for a10 head 'Status' + +select COMP_NAME,VERSION,STATUS from dba_registry; diff --git a/tiddlywiki/dbms_xplan - examples.txt b/tiddlywiki/dbms_xplan - examples.txt new file mode 100755 index 0000000..bdb2e20 --- /dev/null +++ b/tiddlywiki/dbms_xplan - examples.txt @@ -0,0 +1,8 @@ +-- for a cursor in library cache +set pages 999 lines 200 +select * from table(dbms_xplan.display_cursor('&&SQL_ID',null,null)); + +-- from AWR +set pages 999 lines 200 +select * from table(dbms_xplan.display_awr('&&SQL_ID',null,null,'ADVANCED')); + diff --git a/tiddlywiki/dcli - example.txt b/tiddlywiki/dcli - example.txt new file mode 100755 index 0000000..c0caf7f --- /dev/null +++ b/tiddlywiki/dcli - example.txt @@ -0,0 +1,9 @@ +cat /root/dbs_group +dmp01dbadm01 +dmp01dbadm02 +dmp01dbadm03 +dmp01dbadm04 +dmp01dbadm05 +dmp01dbadm06 + +dcli -l root -g /root/dbs_group "df -h /u01" diff --git a/tiddlywiki/docker-compose notes.md b/tiddlywiki/docker-compose notes.md new file mode 100755 index 0000000..7083931 --- /dev/null +++ b/tiddlywiki/docker-compose notes.md @@ -0,0 +1,25 @@ +Install example (current version was `1.28.5`): + + curl -L 
"https://github.com/docker/compose/releases/download/1.28.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/sbin/docker-compose
+    chmod +x /usr/local/sbin/docker-compose
+    docker-compose --version
+
+
+Usage examples from [web.leikir.io](https://web.leikir.io/docker-compose-un-outil-desormais-indispensable/):
+```bash
+docker-compose up            # starts the services described in docker-compose.yml and keeps the shell attached.
+docker-compose up -d         # does the same but returns control once the services are started.
+docker-compose up --build    # rebuilds the services before starting them.
+
+docker-compose down          # stops the services.
+
+docker-compose restart       # restarts all the services.
+docker-compose restart nginx # restarts a single service (here nginx).
+
+docker-compose exec rails bash                 # opens a bash console inside the rails container.
+docker-compose exec rails bin/rails db:migrate # runs rails db:migrate inside the rails container.
+
+docker-compose logs          # prints all the service logs since the last start and returns.
+docker-compose logs -f       # prints the logs and keeps following them.
+docker-compose logs -f rails # does the same for the rails container only.
+```
diff --git a/tiddlywiki/flashback_query.sql.txt b/tiddlywiki/flashback_query.sql.txt
new file mode 100755
index 0000000..08187a7
--- /dev/null
+++ b/tiddlywiki/flashback_query.sql.txt
@@ -0,0 +1,5 @@
+-- FLASHBACK QUERY works even when FLASHBACK is not enabled at the database level,
+-- because it relies on the UNDO tablespace
+
+SELECT count(*) FROM yoda.t0 AS OF TIMESTAMP systimestamp - INTERVAL '5' MINUTE;
+
diff --git a/tiddlywiki/fra_info.sql.txt b/tiddlywiki/fra_info.sql.txt
new file mode 100755
index 0000000..3ee2b25
--- /dev/null
+++ b/tiddlywiki/fra_info.sql.txt
@@ -0,0 +1,8 @@
+set lines 180
+col limite_go for 999999999
+col utilise for 999999999
+
+select space_limit/1024/1024/1024 limite_go, space_used/1024/1024/1024 utilise
+  from v$recovery_file_dest;
+
+select * from v$recovery_area_usage;
diff --git a/tiddlywiki/git examples.md b/tiddlywiki/git examples.md
new file mode 100755
index 0000000..726cb06
--- /dev/null
+++ b/tiddlywiki/git examples.md
@@ -0,0 +1,66 @@
+Clone an existing repository to the local file system:
+
+    git clone https://code.databasepro.eu/support/oracle.git
+    cd oracle/
+
+Example modifying an existing file (append first, then stage and commit):
+
+    echo "2nd line" >> timhall/mon_nouveau_fichier.txt
+    git add timhall/mon_nouveau_fichier.txt
+    git commit -m "Add a 2nd line to timhall/mon_nouveau_fichier.txt"
+
+    echo "3rd line" >> timhall/mon_nouveau_fichier.txt
+    git add timhall/mon_nouveau_fichier.txt
+    git commit -m "Add a 3rd line to timhall/mon_nouveau_fichier.txt"
+
+    git fetch
+    git push origin master
+
+Removing a directory:
+
+    git rm -r timhall
+    git commit . 
-m "Remove timhall directory"
+
+    git fetch
+    git push origin master
+
+Add a directory:
+
+    git add timhall
+    git commit -m "Add Tim Hall directory"
+
+    git fetch
+    git push origin master
+
+Save credentials locally:
+
+    git config --global credential.helper store
+    git pull
+    cat ~/.git-credentials
+
+
+New project
+-----------
+
+Create a new project using the Gitlab web interface.
+From the command line:
+
+    cd programming
+    git switch -c main
+    touch README.md
+    git add README.md
+    git commit -m "add README"
+    git push -u origin main
+
+
+Add directories:
+
+    mkdir python bash
+    echo "empty" > python/Readme.md
+    echo "empty" > bash/Readme.md
+    git add python/*
+    git add bash/*
+    git commit -m "Add python & bash"
+    git fetch
+    git push
diff --git a/tiddlywiki/gitlab with Docker (second edition).md b/tiddlywiki/gitlab with Docker (second edition).md
new file mode 100755
index 0000000..7578760
--- /dev/null
+++ b/tiddlywiki/gitlab with Docker (second edition).md
@@ -0,0 +1,71 @@
+In this example gitlab will be accessible through the public URL: **http://code.databasepro.fr**
+
+As prerequisites:
+- A valid SSL certificate for the subdomain `code.databasepro.fr` was generated (using **LetsEncrypt** `certbot`)
+- a reverse-proxy was defined.
+
+Example of an *nginx* reverse-proxy configuration:
+
+    server {
+        listen 80;
+        server_name code.databasepro.fr;
+        access_log /wwwlogs/code.databasepro.fr.access.log combined;
+        error_log /wwwlogs/code.databasepro.fr.error.log info;
+        location / {
+            root /www/code.databasepro.fr;
+            index index.html index.htm;
+            autoindex on;
+        }
+        rewrite ^ https://code.databasepro.fr$request_uri? 
permanent;
+    }
+    server {
+        listen 443 ssl http2;
+        ssl_certificate /etc/letsencrypt/live/code.databasepro.fr/fullchain.pem;
+        ssl_certificate_key /etc/letsencrypt/live/code.databasepro.fr/privkey.pem;
+        ssl_stapling on;
+        server_name code.databasepro.fr;
+        access_log /wwwlogs/code.databasepro.fr.access.log combined;
+        error_log /wwwlogs/code.databasepro.fr.error.log info;
+        location / {
+            proxy_pass https://192.168.0.91:7004/;
+        }
+    }
+
+
+Create persistent directories:
+
+    mkdir /app/persistent_docker/gitlab
+    cd /app/persistent_docker/gitlab
+    mkdir config data logs
+
+
+Pull the *Community Edition* of gitlab:
+
+    docker pull gitlab/gitlab-ce
+
+
+Create a `docker-compose.yaml` file in `/app/persistent_docker/gitlab`:
+
+    services:
+      gitlab:
+        image: 'gitlab/gitlab-ce:latest'
+        restart: always
+        hostname: 'code.databasepro.fr'
+        environment:
+          GITLAB_OMNIBUS_CONFIG: |
+            external_url 'https://code.databasepro.fr'
+            # add any other gitlab.rb configuration here, each on its own line
+        ports:
+          - 7004:443
+        volumes:
+          - /app/persistent_docker/gitlab/config:/etc/gitlab
+          - /app/persistent_docker/gitlab/logs:/var/log/gitlab
+          - /app/persistent_docker/gitlab/data:/var/opt/gitlab
+
+
+Start the container:
+
+    docker-compose up -d
+
+
+The initial `root` password can be found in `/app/persistent_docker/gitlab/config/initial_root_password`
diff --git a/tiddlywiki/gitlab with Docker.md b/tiddlywiki/gitlab with Docker.md
new file mode 100755
index 0000000..c88cea0
--- /dev/null
+++ b/tiddlywiki/gitlab with Docker.md
@@ -0,0 +1,156 @@
+Get Docker image
+----------------
+```
+docker pull gitlab/gitlab-ee
+```
+
+Prepare persistent directories
+------------------------------
+```
+mkdir /app/appsdocker/gitlab
+cd /app/appsdocker/gitlab
+mkdir config data logs
+```
+
+Run the container
+-----------------
+Let's run Gitlab as `gitlab.databasepro.eu` in HTTP mode:
+```
+export GITLAB_HOME=/app/appsdocker/gitlab
+docker run --detach \
+  --hostname gitlab.databasepro.eu \
+  --publish 7001:80 
\
+  --name gitlab \
+  --restart always \
+  --volume $GITLAB_HOME/config:/etc/gitlab \
+  --volume $GITLAB_HOME/logs:/var/log/gitlab \
+  --volume $GITLAB_HOME/data:/var/opt/gitlab \
+  gitlab/gitlab-ee:latest
+```
+
+Supposing that `ossus` is the Docker host name, and that the router NAT maps external port `80` to internal `ossus:7001`, the reverse proxy will have:
+```
+<VirtualHost *:80>
+    ServerName gitlab.databasepro.eu
+
+    ServerAdmin admin@gitlab.databasepro.eu
+    DocumentRoot /usr/local/apache2/wwwroot/gitlab
+
+    <Directory "/usr/local/apache2/wwwroot/gitlab">
+        Order allow,deny
+        AllowOverride All
+        Allow from all
+        Require all granted
+    </Directory>
+
+    ErrorLog logs/gitlab-error.log
+    CustomLog logs/gitlab-access.log combined
+
+    ProxyPass / http://ossus:7001/
+    ProxyPassReverse / http://ossus:7001/
+</VirtualHost>
+```
+
+Run Gitlab in HTTPS
+-------------------
+
+Configure `external_url "https://gitlab.databasepro.eu"` in `/app/appsdocker/gitlab/config/gitlab.rb`:
+```
+external_url 'https://gitlab.databasepro.eu'
+```
+
+> Using an externally created letsencrypt certificate caused a reboot loop of the container after a host restart.
+The solution was to also set:
+```
+letsencrypt['enable'] = false
+```
+>
+
+Stop, remove and restart the container:
+```
+export GITLAB_HOME=/app/appsdocker/gitlab
+docker run --detach \
+  --hostname gitlab.databasepro.eu \
+  --publish 7004:443 \
+  --name gitlab \
+  --restart always \
+  --volume $GITLAB_HOME/config:/etc/gitlab \
+  --volume $GITLAB_HOME/logs:/var/log/gitlab \
+  --volume $GITLAB_HOME/data:/var/opt/gitlab \
+  gitlab/gitlab-ee:latest
+```
+Map in NAT the external port `443` to the internal `ossus` HTTPD port and update `gitlab.conf`:
+```
+<VirtualHost *:80>
+    ServerName gitlab.databasepro.eu
+
+    ServerAdmin admin@gitlab.databasepro.eu
+    DocumentRoot /usr/local/apache2/wwwroot/gitlab
+
+    <Directory "/usr/local/apache2/wwwroot/gitlab">
+        Order allow,deny
+        AllowOverride All
+        Allow from all
+        Require all granted
+    </Directory>
+
+    ErrorLog logs/gitlab-error.log
+    CustomLog logs/gitlab-access.log combined
+
+    ProxyPass / http://ossus:7001/
+    ProxyPassReverse / http://ossus:7001/
+</VirtualHost>
+
+<VirtualHost *:443>
+    ServerName gitlab.databasepro.eu
+
+    ServerAdmin admin@gitlab.databasepro.eu
+    DocumentRoot /usr/local/apache2/wwwroot/gitlab
+
+    <Directory "/usr/local/apache2/wwwroot/gitlab">
+        Order allow,deny
+        AllowOverride All
+        Allow from all
+        Require all granted
+    </Directory>
+
+    SSLEngine On
+    SSLProxyEngine On
+
+    # Disable SSL proxy peer checks
+    SSLProxyCheckPeerCN Off
+    SSLProxyCheckPeerName Off
+    SSLProxyVerify none
+
+    ErrorLog logs/gitlab-error.log
+    CustomLog logs/gitlab-access.log combined
+
+    SSLCertificateFile "/etc/letsencrypt/live/gitlab.databasepro.eu/fullchain.pem"
+    SSLCertificateKeyFile "/etc/letsencrypt/live/gitlab.databasepro.eu/privkey.pem"
+
+    ProxyPass / https://ossus:7004/
+    ProxyPassReverse / https://ossus:7004/
+</VirtualHost>
+```
+Optionally using docker-compose
+-------------------------------
+`docker-compose.yaml` file:
+```
+gitlab:
+  image: 'gitlab/gitlab-ee:latest'
+  restart: always
+  hostname: 'code.databasepro.eu'
+  environment:
+    GITLAB_OMNIBUS_CONFIG: |
+      external_url 'https://code.databasepro.eu'
+      # Add any other gitlab.rb configuration here, each on its own line
+  ports:
+    - 7004:443
+  volumes:
+    - 
/app/appsdocker/gitlab/config:/etc/gitlab
+    - /app/appsdocker/gitlab/logs:/var/log/gitlab
+    - /app/appsdocker/gitlab/data:/var/opt/gitlab
+```
+
+
+
\ No newline at end of file
diff --git a/tiddlywiki/hidden_undocumented_parameters.sql.txt b/tiddlywiki/hidden_undocumented_parameters.sql.txt
new file mode 100755
index 0000000..ed4856a
--- /dev/null
+++ b/tiddlywiki/hidden_undocumented_parameters.sql.txt
@@ -0,0 +1,14 @@
+set lines 180 pages 100
+
+col p_name for a50 heading 'Parameter'
+col p_value for a50 heading 'Value'
+
+select
+    n.ksppinm p_name,
+    c.ksppstvl p_value
+from
+    sys.x$ksppi n, sys.x$ksppcv c
+where
+    n.indx=c.indx
+    and lower(n.ksppinm) like lower('\_%') escape '\'
+;
diff --git a/tiddlywiki/htop install on Cent0S 8.txt b/tiddlywiki/htop install on Cent0S 8.txt
new file mode 100755
index 0000000..c5f3c83
--- /dev/null
+++ b/tiddlywiki/htop install on Cent0S 8.txt
@@ -0,0 +1,2 @@
+dnf install epel-release
+dnf install htop.x86_64
\ No newline at end of file
diff --git a/tiddlywiki/httpd Apache with Docker.md b/tiddlywiki/httpd Apache with Docker.md
new file mode 100755
index 0000000..353bdf2
--- /dev/null
+++ b/tiddlywiki/httpd Apache with Docker.md
@@ -0,0 +1,235 @@
+Based on [this article](https://www.middlewareinventory.com/blog/docker-reverse-proxy-example/)
+
+Download HTTPD docker image
+---------------------------
+Download the latest httpd image from [Docker Hub](https://hub.docker.com)
+```
+docker pull httpd
+```
+
+To list installed images
+```
+docker images
+```
+
+Customize the image
+-------------------
+Create the directory structure for the Apache HTTPD docker application
+```
+mkdir -p /app/appsdocker/httpd
+cd /app/appsdocker/httpd
+mkdir vhosts wwwroot logs
+```
+
+In order to browse the image and get the `httpd.conf` file, create an auto-remove container in interactive mode and map the local `/app/appsdocker/httpd/` directory to the container `/usr/local/apache2/htdocs/` directory
+```
+docker run -it --rm -v 
/app/appsdocker/httpd/:/usr/local/apache2/htdocs/ httpd:latest bash
+```
+In the interactive shell, copy the `httpd.conf` file to `/usr/local/apache2/htdocs` -- this one maps to the local `/app/appsdocker/httpd/` directory
+```
+root@937797441b4b:/usr/local/apache2# cp /usr/local/apache2/conf/httpd.conf /usr/local/apache2/htdocs/
+```
+Update `httpd.conf`
+```
+Listen 80
+Listen 443
+
+IncludeOptional conf/vhosts/*.conf
+
+LoadModule ssl_module modules/mod_ssl.so
+LoadModule proxy_module modules/mod_proxy.so
+LoadModule xml2enc_module modules/mod_xml2enc.so
+LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
+LoadModule proxy_html_module modules/mod_proxy_html.so
+LoadModule proxy_http_module modules/mod_proxy_http.so
+LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
+```
+
+Create a `Dockerfile` under `/app/appsdocker/httpd`
+```
+# The base image used to create this image
+FROM httpd:latest
+
+# Just my name, who wrote this file
+MAINTAINER Valeriu PLESNILA
+
+# Copy the file named httpd.conf from the present working directory to /usr/local/apache2/conf inside the container.
+# I have taken the standard httpd.conf file, enabled the necessary modules and added support for an additional directory
+COPY httpd.conf /usr/local/apache2/conf/httpd.conf
+
+# This is the additional directory where we are going to keep our VirtualHost configuration files
+# You can use the image to create any number of different virtual hosts
+RUN mkdir -p /usr/local/apache2/conf/vhosts/
+RUN mkdir -p /usr/local/apache2/wwwroot/
+
+# Tell docker to expose these ports
+EXPOSE 80
+EXPOSE 443
+
+# The base command, used to start the container.
+# Remember, a container is a process. As long as the base process (started by the base cmd) is alive, the container will be ALIVE.
+CMD ["httpd", "-D", "FOREGROUND"]
+```
+
+A simple site
+--------------
+Create a simple VirtualHost configuration file `/app/appsdocker/httpd/vhosts/gitlab.conf` for the site `gitlab.databasepro.eu`
+```
+<VirtualHost *:80>
+    ServerName gitlab.databasepro.eu
+    ServerAdmin admin@gitlab.databasepro.eu
+    DocumentRoot /usr/local/apache2/wwwroot/gitlab
+
+    <Directory "/usr/local/apache2/wwwroot/gitlab">
+        Order allow,deny
+        AllowOverride All
+        Allow from all
+        Require all granted
+    </Directory>
+
+    ErrorLog logs/gitlab-error.log
+    CustomLog logs/gitlab-access.log combined
+</VirtualHost>
+```
+
+Create a default homepage
+```
+mkdir /app/appsdocker/httpd/wwwroot/gitlab
+echo "Hello, you are on gitlab.databasepro.eu" > /app/appsdocker/httpd/wwwroot/gitlab/index.html
+```
+
+Build the image
+```
+cd /app/appsdocker/httpd
+docker build -t my_httpd_image .
+```
+
+Create and run the container:
+* mapping container port `80` to local port `8080`
+* mapping container port `443` to local port `8443`
+* mounting container `/usr/local/apache2/conf/vhosts` to local `/app/appsdocker/httpd/vhosts`
+* mounting container `/usr/local/apache2/wwwroot` to local `/app/appsdocker/httpd/wwwroot`
+* mounting container `/usr/local/apache2/logs` to local `/app/appsdocker/httpd/logs`
+```
+docker container run \
+    --publish 8080:80 \
+    --publish 8443:443 \
+    -d --name my_httpd_server \
+    -v /app/appsdocker/httpd/vhosts:/usr/local/apache2/conf/vhosts \
+    -v /app/appsdocker/httpd/wwwroot:/usr/local/apache2/wwwroot \
+    -v /app/appsdocker/httpd/logs:/usr/local/apache2/logs \
+    my_httpd_image
+```
+
+> In my example I used NAT port mapping from my Livebox as:
+* external port 80 mapped to internal myvm:8080
+* external port 443 mapped to internal myvm:8443
+>
+
+Add SSL
+-------
+We will use the `certbot` client from [Let's encrypt](https://letsencrypt.org)
+```
+dnf install -y certbot.noarch
+certbot certonly --webroot --webroot-path /app/appsdocker/httpd/wwwroot/gitlab -d gitlab.databasepro.eu
+```
+
+The certificate and chain will be saved in `/etc/letsencrypt/`
+
+Destroy the container and 
built image in order to recreate them with SSL support.
+```
+-- list all containers
+docker ps -a
+
+-- stop a container
+docker stop <container_id>
+
+-- start a container
+docker start <container_id>
+
+-- restart a container
+docker restart <container_id>
+
+-- remove a container
+docker rm <container_id>
+
+-- show logs for a container
+docker logs <container_id>
+
+-- list images
+docker images
+
+-- delete an image
+docker rmi <image_id>
+```
+
+Update the VirtualHost configuration file `/app/appsdocker/httpd/vhosts/gitlab.conf` for the site `gitlab.databasepro.eu`
+```
+<VirtualHost *:80>
+    ServerName gitlab.databasepro.eu
+
+    ServerAdmin admin@gitlab.databasepro.eu
+    DocumentRoot /usr/local/apache2/wwwroot/gitlab
+
+    <Directory "/usr/local/apache2/wwwroot/gitlab">
+        Order allow,deny
+        AllowOverride All
+        Allow from all
+        Require all granted
+    </Directory>
+
+    ErrorLog logs/gitlab-error.log
+    CustomLog logs/gitlab-access.log combined
+</VirtualHost>
+
+<VirtualHost *:443>
+    ServerName gitlab.databasepro.eu
+
+    ServerAdmin admin@gitlab.databasepro.eu
+    DocumentRoot /usr/local/apache2/wwwroot/gitlab
+
+    <Directory "/usr/local/apache2/wwwroot/gitlab">
+        Order allow,deny
+        AllowOverride All
+        Allow from all
+        Require all granted
+    </Directory>
+
+    SSLEngine On
+
+    ErrorLog logs/gitlab-error.log
+    CustomLog logs/gitlab-access.log combined
+
+    SSLCertificateFile "/etc/letsencrypt/live/gitlab.databasepro.eu/fullchain.pem"
+    SSLCertificateKeyFile "/etc/letsencrypt/live/gitlab.databasepro.eu/privkey.pem"
+</VirtualHost>
+```
+
+Recreate the container, this time also mounting `/etc/letsencrypt`
+```
+docker container run \
+    --publish 8080:80 \
+    --publish 8443:443 \
+    -d --name my_httpd_server \
+    -v /etc/letsencrypt:/etc/letsencrypt \
+    -v /app/appsdocker/httpd/vhosts:/usr/local/apache2/conf/vhosts \
+    -v /app/appsdocker/httpd/wwwroot:/usr/local/apache2/wwwroot \
+    -v /app/appsdocker/httpd/logs:/usr/local/apache2/logs \
+    my_httpd_image
+```
+
+Optionally using docker-compose
+-------------------------------
+`docker-compose.yaml` file:
+```
+my_httpd_server:
+    image: my_httpd_image
+    restart: always
+    ports:
+      - 8080:80
+      - 8443:443
+    volumes:
+      - /etc/letsencrypt:/etc/letsencrypt
+      - /app/appsdocker/httpd/vhosts:/usr/local/apache2/conf/vhosts
+      - 
/app/appsdocker/httpd/wwwroot:/usr/local/apache2/wwwroot
+      - /app/appsdocker/httpd/logs:/usr/local/apache2/logs
+```
diff --git a/tiddlywiki/jira_deploy_oracle.py.txt b/tiddlywiki/jira_deploy_oracle.py.txt
new file mode 100755
index 0000000..87c1ab6
--- /dev/null
+++ b/tiddlywiki/jira_deploy_oracle.py.txt
@@ -0,0 +1,583 @@
+#!/u01/app/python/current_version/bin/python3
+
+import requests
+import json
+import os
+import shutil
+import subprocess
+import zipfile
+import logging
+import argparse
+import string
+import random
+import smtplib
+from email.mime.multipart import MIMEMultipart
+from email.mime.base import MIMEBase
+from email.mime.text import MIMEText
+from email.utils import COMMASPACE, formatdate
+from email import encoders
+from requests.packages.urllib3.exceptions import InsecureRequestWarning
+
+# FUNCTIONS
+###########
+def set_constants():
+    url_base = 'https://reman.arval.com/rest/api/2/'
+    auth = ('dbjira', 'marcguillard')
+    return (url_base, auth)
+
+
+def parse_command_line_args():
+    parser = argparse.ArgumentParser(description='Tool for deploying JIRA Oracle scripts')
+    parser.add_argument('-j', '--jira', help='JIRA task (ex: JIRA-12345)', required=True)
+    parser.add_argument('-i', '--ignorestatus', action='store_true', default=False, help='Ignore JIRA status (force execution for JIRA task in BUILD/CLOSED status)', required=False)
+    args = parser.parse_args()
+    jira = args.jira.upper()
+    ignorestatus = args.ignorestatus
+    return (jira, ignorestatus)
+
+
+def start_logging(logfile):
+    logger = logging.getLogger(__name__)
+    logger.setLevel(logging.INFO)
+
+    # create a file handler
+    handler = logging.FileHandler(logfile)
+    handler.setLevel(logging.INFO)
+
+    # create a logging format
+    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    handler.setFormatter(formatter)
+
+    # add the handler to the logger
+    logger.addHandler(handler)
+    return logger
+
+
+def get_task_json(url_base, auth, jira_task):
+    
url_end='issue/'+jira_task
+    r = requests.get(url_base+url_end, auth=auth, verify=False)
+    jira_json = r.json()
+    return jira_json
+
+
+def dos2unix(filename):
+    content = ''
+    outsize = 0
+    with open(filename, 'rb') as infile:
+        content = infile.read()
+    with open(filename, 'wb') as output:
+        for line in content.splitlines():
+            outsize += len(line) + 1
+            output.write(line + b'\n')
+    return
+
+def generate_password():
+    password = 'D#0'+''.join(random.choices(string.ascii_uppercase + string.digits, k=9))
+    return password
+
+def create_proxy_user(main_sql_file):
+    # read line by line (iterating over a whole-file string would yield single characters)
+    file_contents = open(main_sql_file, "r").readlines()
+    for line in file_contents:
+        if 'DEFINE password_' in line:
+            # the username is the text between 'DEFINE password_' and '='
+            database_username = line.split('DEFINE password_')[1].split('=')[0]
+            print (database_username)
+    return
+
+def generate_local_deploy_package(url_base, auth, jira_task):
+    url_end='issue/'+jira_task
+    r = requests.get(url_base+url_end, auth=auth, verify=False)
+    jira_json = r.json()
+    jira_nfs_scripts_path=jira_json['fields']['customfield_12084']
+    if 'dsi_mer' in jira_nfs_scripts_path:
+        jira_local_scripts_path=jira_nfs_scripts_path.replace('file://Frshares0105.france.intra.corp/dsi_mer','/mnt/jira/MER')
+    elif 'dsi_rtpp' in jira_nfs_scripts_path:
+        jira_local_scripts_path=jira_nfs_scripts_path.replace('file://Frshares0105.france.intra.corp/dsi_rtpp','/mnt/jira/RTPP')
+    else:
+        jira_local_scripts_path=jira_nfs_scripts_path.replace('file://Frshares0105.france.intra.corp/dsi_rtp','/mnt/jira/RTP')
+
+    # Create local jira package
+    jira_target_database=jira_json['fields']['customfield_12277']['child']['value'].strip()
+    jira_local_package_dir = os.path.dirname(os.path.abspath(__file__)) + '/packages_jira/' + jira_task + '/' + jira_target_database
+    os.makedirs(jira_local_package_dir, mode=0o755, exist_ok=True)
+
+    # Copy files from NFS into the local directory
+    # and generate a new "run all" sql script to manage connection strings using a proxy user
+    main_sql_file = jira_local_package_dir + '/' + 
jira_target_database +'.sql'
+    src_files = os.listdir(jira_local_scripts_path)
+    for file_name in src_files:
+        full_file_name = os.path.join(jira_local_scripts_path, file_name)
+        if (os.path.isfile(full_file_name)):
+            if (jira_target_database + '.sql') in file_name:
+                shutil.copy(full_file_name, main_sql_file)
+                os.chmod(main_sql_file, 0o755)
+                dos2unix(main_sql_file)
+            else:
+                shutil.copy(full_file_name, jira_local_package_dir)
+                os.chmod(jira_local_package_dir + '/' + file_name, 0o755)
+                dos2unix(jira_local_package_dir + '/' + file_name)
+
+    contents_of_create_deploy_user_sql=[]
+    contents_of_drop_deploy_user_sql=[]
+    dba_username = 'system'
+    dba_password = 'plusdacces'
+    deploy_username = 'DEPLOY'
+    deploy_password = generate_password()
+    contents_of_create_deploy_user_sql.append('connect ' + dba_username + '/' + dba_password + '@&INSTANCE')
+    contents_of_create_deploy_user_sql.append('create user ' + deploy_username + ' identified externally;')
+    contents_of_create_deploy_user_sql.append('alter user ' + deploy_username + ' identified by ' + deploy_password + ';')
+    contents_of_create_deploy_user_sql.append('grant create session to ' + deploy_username + ';')
+
+    contents_of_main_sql_file = open(main_sql_file,'r', encoding='utf-8', errors='replace').readlines()
+    for line in contents_of_main_sql_file:
+        line = line.rstrip()
+        if 'DEFINE password_' in line:
+            database_username = line.split('DEFINE password_')[1]
+            database_username = database_username.split('=')[0]
+            contents_of_create_deploy_user_sql.append('alter user ' + database_username + ' grant connect through ' + deploy_username + ';')
+
+    contents_of_create_deploy_user_sql.append('disconnect')
+
+    contents_of_drop_deploy_user_sql.append('connect ' + dba_username + '/' + dba_password + '@&INSTANCE')
+    contents_of_drop_deploy_user_sql.append('drop user ' + deploy_username + ';')
+    contents_of_drop_deploy_user_sql.append('disconnect')
+
+    contents_of_new_sql_file = []
+
+    for line in 
contents_of_main_sql_file: + if 'DEFINE INSTANCE' in line.upper(): + contents_of_new_sql_file.append(line + '\n\n') + contents_of_new_sql_file.append('-- create deploy user \n') + # Add create deploy user section + + for line_deploy_user in contents_of_create_deploy_user_sql: + contents_of_new_sql_file.append(line_deploy_user + '\n') + + elif 'DEFINE PASSWORD' in line.upper(): + # skip the line + pass + + elif line.startswith('SET logPath'): + # for the PRODUCTION only + # skip the line + pass + + elif line.startswith('spool %logPath%'): + # for the PRODUCTION only + # change the spool file name + line_stripped =line.rstrip() + words = line_stripped.split('\\') + new_spool_name = jira_local_package_dir + '/' + words[-1] + contents_of_new_sql_file.append('spool '+ new_spool_name + '\n') + pass + + elif 'exit' in line: + # skip the line + pass + + elif line.upper().startswith('CONNECT'): + database_user = line.split('/')[0].split(' ')[1] + contents_of_new_sql_file.append('connect ' + deploy_username + '[' + database_user +']'+ '/' + deploy_password + '@&INSTANCE' + '\n') + else: + contents_of_new_sql_file.append(line) + + contents_of_new_sql_file.append('\n-- drop deploy user \n') + for line_deploy_user in contents_of_drop_deploy_user_sql: + contents_of_new_sql_file.append(line_deploy_user + '\n') + + contents_of_new_sql_file.append('exit') + contents_of_new_sql_file.append('\n\n') + + f = open(main_sql_file, 'w') + contents_of_new_sql_file="".join(contents_of_new_sql_file) + f.write(contents_of_new_sql_file) + f.close() + + return (jira_local_package_dir, main_sql_file) + +def execute_sql_file(directory, sql_file, nls_lang_command): + shellcommand='export TNS_ADMIN=/u01/app/oracle/admin/JIRA; ' + nls_lang_command + ';' + ' cd ' + directory + '; '+ 'sqlplus /nolog @'+sql_file + try: + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + except subprocess.CalledProcessError as err: + pass + return + +def 
zip_logfiles(directory, zipname):
+    fantasy_zip = zipfile.ZipFile(directory + '/' + zipname + '.zip', 'w')
+    for folder, subfolders, files in os.walk(directory):
+        for file in files:
+            if file.endswith('.log'):
+                fantasy_zip.write(os.path.join(folder, file), os.path.relpath(os.path.join(folder, file), directory), compress_type = zipfile.ZIP_DEFLATED)
+    fantasy_zip.close()
+    return
+
+def upload_zip_logfiles_to_jira(url_base, auth, jira_task, file):
+    url_end='issue/'+ jira_task + '/attachments'
+    headers = {"X-Atlassian-Token": "nocheck"}
+    files = {'file': open(file, 'rb')}
+
+    r = requests.post(url_base+url_end, auth=auth, verify=False, headers=headers, files=files)
+    return
+
+def jira_transition(url_base, auth, jira_task, transition_target):
+    url_end = 'issue/' + jira_task + '/transitions?expand=transitions.fields'
+    headers = {
+        'Content-Type': 'application/json',
+    }
+
+    if transition_target == 'BUILD':
+        # OPEN to BUILD transition
+        data = json.dumps({
+            "update": {
+                "comment": [
+                    {
+                        "add": {
+                            "body": "Task started, work in progress."
+                        }
+                    }
+                ]
+            },
+            "transition": {
+                "id": "41"
+            }
+        })
+    elif transition_target == 'CLOSE':
+        # BUILD to CLOSED transition
+        data = json.dumps({
+            "update": {
+                "comment": [
+                    {
+                        "add": {
+                            "body": "Task done."
+ } + } + ] + }, + "transition": { + "id": "31" + }, + "fields": { + "resolution": { + "name": "Fixed" + } + } + }) + + # Make the JIRA transition + r = requests.post(url_base+url_end, data=data, auth=auth, verify=False ,headers=headers) + return + +def count_errors_in_logfile(logfile, error_pattern): + logfile_contents = open(logfile, 'r', encoding='utf-8', errors='replace').readlines() + error_count = 0 + for line in logfile_contents: + if line.startswith (error_pattern): + error_count = error_count + 1 + return error_count + + +def count_errors_in_directory(directory, error_pattern): + # Return all errors as a dictionary + all_errors={} + for folder, subfolders, files in os.walk(directory): + for file in files: + if file.endswith('.log'): + error_count = count_errors_in_logfile(os.path.join(folder, file), error_pattern) + all_errors[file] = error_count + return all_errors + + +def count_warnings_in_logfile(logfile, warning_pattern_start): + logfile_contents = open(logfile, 'r', encoding='utf-8', errors='replace').readlines() + warning_count = 0 + for line in logfile_contents: + if line.startswith (warning_pattern_start): + warning_count = warning_count + 1 + return warning_count + + +def count_warnings_in_directory(directory, warning_pattern_start): + # Return all warnings as a dictionary + all_warnings={} + for folder, subfolders, files in os.walk(directory): + for file in files: + if file.endswith('.log'): + warnings_count = count_warnings_in_logfile(os.path.join(folder, file), warning_pattern_start) + all_warnings[file] = warnings_count + return all_warnings + + +def comment_jira(url_base, auth, jira_task, comment): + url_end='issue/'+ jira_task + '/comment' + headers = { + 'Content-Type': 'application/json', + } + data = json.dumps({ + 'body':comment + }) + r = requests.post(url_base+url_end, data=data, auth=auth, verify=False ,headers=headers) + return + + +def send_mail(send_from, send_to, subject, text, files=[], server="localhost"): + assert 
type(send_to)==list
+    assert type(files)==list
+    msg = MIMEMultipart()
+    msg['From'] = send_from
+    msg['To'] = COMMASPACE.join(send_to)
+    msg['Date'] = formatdate(localtime=True)
+    msg['Subject'] = subject
+    msg.attach(MIMEText(text))
+    for f in files:
+        part = MIMEBase('application', "octet-stream")
+        part.set_payload(open(f, "rb").read())
+        encoders.encode_base64(part)
+        part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(f))
+        msg.attach(part)
+
+    smtp = smtplib.SMTP(server)
+    smtp.sendmail(send_from, send_to, msg.as_string())
+    smtp.close()
+    return
+
+
+def get_watchers(url_base, auth, jira_task):
+    # Get the JIRA watchers email list
+    watchers_list = []
+    url_end='issue/'+ jira_task + '/watchers'
+    r = requests.get(url_base+url_end, auth=auth, verify=False)
+    for watcher in r.json()['watchers']:
+        watchers_list.append(watcher['emailAddress'])
+
+    return watchers_list
+
+
+def create_connect_test_script(jira_local_package_dir, main_sql_file):
+    contents_of_connect_test_script = []
+    main_sql_file_contents = open(main_sql_file, 'r', encoding='utf-8', errors='replace').readlines()
+    line_number = -1
+    for line in main_sql_file_contents:
+        line_number += 1
+        line_stripped = line.rstrip()
+        if line_stripped.startswith("@"):
+            contents_of_connect_test_script.append("select * from dual;\n")
+        elif line_stripped.startswith("spool"):
+            # suppress intermediate spool commands
+            pass
+        elif line_stripped.startswith("disconnect"):
+            prev_line_stripped = main_sql_file_contents[line_number - 1].rstrip()
+            if prev_line_stripped.startswith("alter user"):
+                # add spool after disconnect
+                contents_of_connect_test_script.append(line)
+                contents_of_connect_test_script.append("\nspool connect_test.txt\n")
+            elif prev_line_stripped.startswith("drop user"):
+                # add a line to extract the NLS_LANG value
+                contents_of_connect_test_script.append("select 'export NLS_LANG=.'|| value SQLPLUS_NLS from nls_database_parameters where 
parameter='NLS_CHARACTERSET';\n") + # add spool off and echo before drop deploy user + contents_of_connect_test_script.append("spool off\n") + contents_of_connect_test_script.append(line) + + else: + contents_of_connect_test_script.append(line) + + connect_test_script = jira_local_package_dir + "/connect_test.sql" + f = open(connect_test_script, 'w') + contents_of_connect_test_script="".join(contents_of_connect_test_script) + f.write(contents_of_connect_test_script) + f.close() + return connect_test_script + +def get_nls_lang_command(jira_local_package_dir): + connect_test_script_logfile = jira_local_package_dir + "/connect_test.txt" + contents_of_connect_test_script = open(connect_test_script_logfile, 'r', encoding='utf-8', errors='replace').readlines() + for line in contents_of_connect_test_script: + line_stripped = line.rstrip() + if line_stripped.startswith("export NLS_LANG"): + nls_lang_command = line_stripped + return nls_lang_command + +# MAIN +########### + +(jira_task, ignorestatus) = parse_command_line_args() + +script_path = os.path.dirname(os.path.abspath(__file__)) +script_name = os.path.basename(__file__) + +requests.packages.urllib3.disable_warnings(InsecureRequestWarning) + +(url_base, auth) = set_constants() + +global logger +global jira_json + +jira_json = get_task_json(url_base, auth, jira_task) + +jira_description = jira_json['fields']['description'] +jira_statusid = jira_json['fields']['status']['id'] +jira_statusname = jira_json['fields']['status']['name'] +jira_target_database = jira_json['fields']['customfield_12277']['child']['value'] +jira_nfs_scripts_path = jira_json['fields']['customfield_12084'] +jira_comment = jira_json['fields']['comment']['comments'] +jira_reporter = jira_json['fields']['reporter']['emailAddress'] + +if jira_statusname == 'Build' or jira_statusname == 'Closed' or jira_statusname == 'Frozen': + if ignorestatus == False: + print (jira_task + ' is in ' + jira_statusname.upper() + ' status') + print ('Use 
-i|--ignorestatus to force the execution')
+        exit (0)
+
+logger = start_logging(script_path+'/logs/jira_deploy.log')
+
+logger.info (jira_task + ' : ' + 'STARTED')
+logger.info (jira_task + ' : status is ' + jira_statusname)
+logger.info (jira_task + ' : target database is ' + jira_target_database)
+logger.info (jira_task + ' : NFS scripts path is ' + jira_nfs_scripts_path)
+for user_comment in jira_comment:
+    logger.info (jira_task + ' : user comment is '+user_comment['body'])
+
+jira_transition(url_base, auth, jira_task, 'BUILD')
+comment_jira(url_base, auth, jira_task, 'Execution in progress.')
+logger.info (jira_task + ' : ' + 'status transition to BUILD')
+
+# Download JIRA packages and generate the main sql file
+(jira_local_package_dir, main_sql_file) = generate_local_deploy_package(url_base, auth, jira_task)
+logger.info (jira_task + ' : ' + 'local package copied under ' + jira_local_package_dir)
+logger.info (jira_task + ' : ' + 'main SQL file generated in ' + main_sql_file)
+
+# Before executing the main sql file, we check the database connectivity of the deploy user through the different schemas
+connect_test_script = create_connect_test_script(jira_local_package_dir, main_sql_file)
+connect_test_script_logfile = connect_test_script.replace('.sql', '.txt')
+logger.info (jira_task + ' : ' + 'connection test SQL file generated in ' + connect_test_script)
+
+
+logger.info (jira_task + ' : ' + 'execution of connection test SQL file started')
+execute_sql_file(jira_local_package_dir, connect_test_script, 'export NLS_LANG=.UTF8')
+logger.info (jira_task + ' : ' + 'execution of connection test SQL file finished')
+
+# check for ORA- and SP2-0640: Not connected in logfile
+ora_errors = count_errors_in_logfile(connect_test_script_logfile, 'ORA-')
+sp2_not_connected_errors = count_errors_in_logfile(connect_test_script_logfile, 'SP2-0640: Not connected')
+if (ora_errors + sp2_not_connected_errors) > 0:
+    logger.error (jira_task + ' : ' + str(ora_errors + 
sp2_not_connected_errors) + ' error(s) in ' + connect_test_script_logfile)
+    print ("Database connection issue, please check " + connect_test_script_logfile)
+    exit (1)
+
+# Execution of the main sql file
+nls_lang_command = get_nls_lang_command(jira_local_package_dir)
+logger.info (jira_task + ' : ' + 'execution of main SQL file started')
+execute_sql_file(jira_local_package_dir, main_sql_file, nls_lang_command)
+logger.info (jira_task + ' : ' + 'execution of main SQL file finished')
+
+zip_logfiles(jira_local_package_dir, jira_task)
+logger.info (jira_task + ' : ' + 'zip logfile(s)')
+upload_zip_logfiles_to_jira(url_base, auth, jira_task, jira_local_package_dir+'/'+jira_task+'.zip')
+logger.info (jira_task + ' : ' + 'upload zipped logfile(s) to JIRA')
+
+# check for ORA- in logfiles
+ora_errors = count_errors_in_directory (jira_local_package_dir, 'ORA-')
+count_all_errors = 0
+for logfile, error_count in ora_errors.items():
+    count_all_errors = count_all_errors + error_count
+    if error_count > 0:
+        logger.info (jira_task + ' : ' + logfile + ' has ' + str(error_count) + ' ORA- error message(s)')
+    else:
+        logger.info (jira_task + ' : ' + logfile + ' has no error messages')
+
+# check for SP2-0640: Not connected in logfiles
+sp2_not_connected_errors = count_errors_in_directory (jira_local_package_dir, 'SP2-0640: Not connected')
+count_all_sp2_not_connected_errors = 0
+for logfile, sp2_count in sp2_not_connected_errors.items():
+    count_all_sp2_not_connected_errors = count_all_sp2_not_connected_errors + sp2_count
+    if sp2_count > 0:
+        logger.info (jira_task + ' : ' + logfile + ' has ' + str(sp2_count) + ' SP2-0640: Not connected error message(s)')
+    else:
+        logger.info (jira_task + ' : ' + logfile + ' has no SP2-0640 messages')
+
+# check for SP2-0042: unknown command in logfiles
+sp2_unknown_command_errors = count_errors_in_directory (jira_local_package_dir, 'SP2-0042: unknown command')
+count_all_sp2_unknown_command_errors = 0
+for logfile, sp2_0042_count in 
sp2_unknown_command_errors.items():
+    count_all_sp2_unknown_command_errors = count_all_sp2_unknown_command_errors + sp2_0042_count
+    if sp2_0042_count > 0:
+        logger.info (jira_task + ' : ' + logfile + ' has ' + str(sp2_0042_count) + ' SP2-0042: unknown command error message(s)')
+    else:
+        logger.info (jira_task + ' : ' + logfile + ' has no SP2-0042 messages')
+
+# check for SP2-0734: unknown command beginning in logfiles
+sp2_unknown_command_2_errors = count_errors_in_directory (jira_local_package_dir, 'SP2-0734: unknown command beginning')
+count_all_sp2_unknown_command_2_errors = 0
+for logfile, sp2_0734_count in sp2_unknown_command_2_errors.items():
+    count_all_sp2_unknown_command_2_errors = count_all_sp2_unknown_command_2_errors + sp2_0734_count
+    if sp2_0734_count > 0:
+        logger.info (jira_task + ' : ' + logfile + ' has ' + str(sp2_0734_count) + ' SP2-0734: unknown command beginning error message(s)')
+    else:
+        logger.info (jira_task + ' : ' + logfile + ' has no SP2-0734 messages')
+
+
+# check for Warning: ... created with compilation errors. 
in logfiles
+ora_warnings = count_warnings_in_directory (jira_local_package_dir, 'Warning:')
+count_all_warnings = 0
+for logfile, warnings_count in ora_warnings.items():
+    count_all_warnings = count_all_warnings + warnings_count
+    if warnings_count > 0:
+        logger.info (jira_task + ' : ' + logfile + ' has ' + str(warnings_count) + ' warning message(s)')
+    else:
+        logger.info (jira_task + ' : ' + logfile + ' has no warning messages')
+
+email_text = 'Hello,\n\n'
+error_summary = ''
+sp2_not_connected_summary = ''
+warning_summary = ''
+if (count_all_errors + count_all_sp2_not_connected_errors + count_all_sp2_unknown_command_errors + count_all_sp2_unknown_command_2_errors + count_all_warnings) == 0:
+    comment_jira(url_base, auth, jira_task, 'Done without errors.')
+    jira_transition(url_base, auth, jira_task, 'CLOSE')
+    logger.info (jira_task + ' : ' + 'status transition to CLOSE')
+    email_subject = jira_target_database + ' : ' + jira_task + ' has been executed without errors.'
+    email_text = email_text + 'Jira task ' + jira_task + ' has been executed without errors.\n'
+    email_text = email_text + 'Log files enclosed.' 
+else:
+    for element in ora_errors:
+        if ora_errors[element] > 0:
+            error_summary = error_summary + '\nLogfile ' + element + ' has ' + str(ora_errors[element]) + ' error(s)'
+    for element in sp2_not_connected_errors:
+        if sp2_not_connected_errors[element] > 0:
+            sp2_not_connected_summary = sp2_not_connected_summary + '\nLogfile ' + element + ' has ' + str(sp2_not_connected_errors[element]) + ' error(s)'
+    for element in sp2_unknown_command_errors:
+        if sp2_unknown_command_errors[element] > 0:
+            error_summary = error_summary + '\nLogfile ' + element + ' has ' + str(sp2_unknown_command_errors[element]) + ' error(s)'
+    for element in sp2_unknown_command_2_errors:
+        if sp2_unknown_command_2_errors[element] > 0:
+            error_summary = error_summary + '\nLogfile ' + element + ' has ' + str(sp2_unknown_command_2_errors[element]) + ' error(s)'
+
+    for element in ora_warnings:
+        if ora_warnings[element] > 0:
+            warning_summary = warning_summary + '\nLogfile ' + element + ' has ' + str(ora_warnings[element]) + ' warning(s)'
+
+    comment_jira(url_base, auth, jira_task, 'Done with ' + str(count_all_errors + count_all_sp2_not_connected_errors + count_all_sp2_unknown_command_errors + count_all_sp2_unknown_command_2_errors) + ' error(s) and ' + str(count_all_warnings) + ' warning(s), please check the enclosed logfile(s):\n')
+    if error_summary != '':
+        comment_jira(url_base, auth, jira_task, 'Errors summary: \n' + error_summary)
+    if sp2_not_connected_summary != '':
+        comment_jira(url_base, auth, jira_task, 'SP2-0640: Not connected summary: \n' + sp2_not_connected_summary)
+    if warning_summary != '':
+        comment_jira(url_base, auth, jira_task, 'Warnings summary: \n' + warning_summary)
+    email_subject = jira_target_database + ' : ' + jira_task + ' executed with ' + str(count_all_errors + count_all_sp2_not_connected_errors + count_all_sp2_unknown_command_errors + count_all_sp2_unknown_command_2_errors + count_all_warnings) + ' error(s)/warning(s).' 
+    email_text = email_text + jira_task + ' has been executed with ' + str(count_all_errors + count_all_sp2_not_connected_errors + count_all_sp2_unknown_command_errors + count_all_sp2_unknown_command_2_errors + count_all_warnings) + ' errors/warnings:\n\n' + '\n\n'.join(error_summary.split('\n')) + '\n\n' + '\n\n'.join(sp2_not_connected_summary.split('\n')) + '\n\n' + '\n\n'.join(warning_summary.split('\n')) + '\n\nPlease check the logfiles and close the JIRA.'
+
+# Send an email to oracle, release, the JIRA reporter and all JIRA watchers
+send_to = ['dbm@arval.fr','release@arval.fr']
+send_to.append(jira_reporter)
+watchers_list = get_watchers(url_base, auth, jira_task)
+send_to.extend(watchers_list)
+files=[]
+files.append(jira_local_package_dir+'/'+jira_task+'.zip')
+send_mail(send_from = 'no-reply@arval.com', send_to = send_to,
+    subject = email_subject, text = email_text, files=files, server="localhost")
+logger.info (jira_task + ' : ' + 'FINISHED')
+
diff --git a/tiddlywiki/kill_schema_sessions.sql.txt b/tiddlywiki/kill_schema_sessions.sql.txt
new file mode 100755
index 0000000..836becb
--- /dev/null
+++ b/tiddlywiki/kill_schema_sessions.sql.txt
@@ -0,0 +1,7 @@
+select
+    'alter system disconnect session '''|| SID ||','|| SERIAL# ||',@'||inst_id||''' immediate;'
+from
+    gv$session
+where
+    USERNAME in ('GREECE_WCR','GREECE')
+/
diff --git a/tiddlywiki/lftp.txt b/tiddlywiki/lftp.txt
new file mode 100755
index 0000000..33e1495
--- /dev/null
+++ b/tiddlywiki/lftp.txt
@@ -0,0 +1,4 @@
+lftp perso-ftp.orange.fr
+lftp perso-ftp.orange.fr:~> set ftp:ssl-auth TLS
+lftp perso-ftp.orange.fr:~> user plesnila.valeriu@orange.fr *****
+lftp plesnila.valeriu@orange.fr@perso-ftp.orange.fr:~> ls
\ No newline at end of file
diff --git a/tiddlywiki/library cache pin - find blocking session.txt b/tiddlywiki/library cache pin - find blocking session.txt
new file mode 100755
index 0000000..46b9015
--- /dev/null
+++ b/tiddlywiki/library cache pin - find blocking session.txt
@@ -0,0 +1,36 @@
+-- 
note down SADDR and P1RAW from the following query +select s.sid, s.saddr, sw.p1raw + from gv$session_wait sw, gv$session s + where sw.sid = s.sid and sw.event='library cache pin'; + +-- use previous SADDR and P1RAW and note down KGLLKUSE +select b.KGLLKUSE from dba_kgllock w , dba_kgllock b + where w.KGLLKHDL = b.KGLLKHDL + and w.KGLLKREQ > 0 and b.KGLLKMOD > 0 + and w.KGLLKTYPE = b.KGLLKTYPE + and w.KGLLKUSE = '000000025C47A5F0' -- SADDR + and w.KGLLKHDL = '000000008AEE0AB8' -- P1RAW +; + +-- use previous KGLLKUSE to find the SID +select sid from gv$session s + where saddr in ('000000025DEC3278'); + +-- use previous SID to find session detail +select + i.instance_name instance_name + , s.sid sid + , s.serial# serial_id + , s.status session_status + , s.username oracle_username + , s.osuser os_username + , p.spid os_pid + , s.terminal session_terminal + , s.machine session_machine + , s.program session_program +from + gv$session s + inner join gv$process p on (s.paddr = p.addr and s.inst_id = p.inst_id) + inner join gv$instance i on (p.inst_id = i.inst_id) +where + s.sid in (516); diff --git a/tiddlywiki/linux - devices SSD_HDD.md b/tiddlywiki/linux - devices SSD_HDD.md new file mode 100755 index 0000000..61a43ef --- /dev/null +++ b/tiddlywiki/linux - devices SSD_HDD.md @@ -0,0 +1,5 @@ +Linux automatically detects SSD, and since kernel version 2.6.29, you may verify sda with: + + cat /sys/block/sda/queue/rotational + +You should get 1 for hard disks and 0 for a SSD. 
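The same check can be scripted across all block devices at once. Below is a minimal Python sketch of reading `/sys/block/<dev>/queue/rotational`; the `drive_types` helper name and its base-path parameter are my own additions (the parameter exists only so the function can be pointed at a test directory):

```python
from pathlib import Path

def drive_types(sys_block="/sys/block"):
    """Classify each block device as 'HDD' or 'SSD' from its rotational flag."""
    result = {}
    for dev in Path(sys_block).iterdir():
        flag = dev / "queue" / "rotational"
        if flag.is_file():
            # "1" means rotational (hard disk), "0" means non-rotational (SSD)
            result[dev.name] = "HDD" if flag.read_text().strip() == "1" else "SSD"
    return result
```

On a real system `drive_types()` returns a mapping such as `{'sda': 'SSD', 'sdb': 'HDD'}`, depending on the hardware.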
\ No newline at end of file diff --git a/tiddlywiki/lock_tree_RAC.sql.txt b/tiddlywiki/lock_tree_RAC.sql.txt new file mode 100755 index 0000000..c9cbd36 --- /dev/null +++ b/tiddlywiki/lock_tree_RAC.sql.txt @@ -0,0 +1,85 @@ +-- script of Lionel Magallon +-- https://easyteam.fr/dealing-with-lock-issues-in-oracle-rac-environnement/ + + +set pages 120 +set lines 240 + +col RUNNING_SESSION for a20 +col INST_ID for 99 Head I# +col SID for 99999 +col SERIAL# for 99999 +col MACHINE for a24 trunc +col OSUSER for a10 +col USERNAME for a20 +col SQL_ID for a18 +col EVENT for a30 trunc +col WAIT_CLASS for a15 trunc + + + +WITH +-- global lock view +gl AS ( +select +inst_id || '-' || sid instsid, id1, id2, +ctime, lmode, block, request +from +gv$lock +), +-- joins the global lock view on itself to identify locks +l AS ( +SELECT +l1.instsid holding_session, +l2.instsid waiting_session +FROM +gl l1, +gl l2 +WHERE +l1.block > 0 +AND l2.request > 0 +AND l1.id1=l2.id1 +AND l1.id2=l2.id2 +), +-- result view (tree of locked sessions) +rs AS ( +SELECT +lpad(' ',3*(level-1),' ') || waiting_session running_session +FROM ( +-- first insert as in utllockt +(SELECT +'-' holding_session, holding_session waiting_session +FROM +l +MINUS +SELECT +'-', waiting_session +FROM +l +) +UNION ALL +-- second insert as in utllockt +SELECT +holding_session, waiting_session +FROM +l +) +CONNECT BY PRIOR +waiting_session = holding_session +START WITH +holding_session = '-' +), +-- useful session informations +s AS ( +SELECT +inst_id, sid, serial#, machine, osuser, username, +nvl(sql_id, '-') sql_id, event, wait_class +FROM gv$session +) +-- final tree +SELECT +* +FROM +rs +JOIN +s ON ltrim(rs.running_session)=s.inst_id || '-' || s.sid; diff --git a/tiddlywiki/locked_objects.sql.txt b/tiddlywiki/locked_objects.sql.txt new file mode 100755 index 0000000..ea166cd --- /dev/null +++ b/tiddlywiki/locked_objects.sql.txt @@ -0,0 +1,66 @@ +-- view all currently locked objects: + +SELECT username U_NAME, owner 
OBJ_OWNER, +object_name, object_type, s.osuser, +DECODE(l.block, + 0, 'Not Blocking', + 1, 'Blocking', + 2, 'Global') STATUS, + DECODE(v.locked_mode, + 0, 'None', + 1, 'Null', + 2, 'Row-S (SS)', + 3, 'Row-X (SX)', + 4, 'Share', + 5, 'S/Row-X (SSX)', + 6, 'Exclusive', TO_CHAR(lmode) + ) MODE_HELD +FROM gv$locked_object v, dba_objects d, +gv$lock l, gv$session s +WHERE v.object_id = d.object_id +AND (v.object_id = l.id1) +AND v.session_id = s.sid +ORDER BY username, session_id; + + +-- list current locks + +SELECT session_id,lock_type, +mode_held, +mode_requested, +blocking_others, +lock_id1 +FROM dba_lock l +WHERE lock_type +NOT IN ('Media Recovery', 'Redo Thread'); + + +-- list objects that have been +-- locked for 60 seconds or more: + +SELECT SUBSTR(TO_CHAR(w.session_id),1,5) WSID, p1.spid WPID, +SUBSTR(s1.username,1,12) "WAITING User", +SUBSTR(s1.osuser,1,8) "OS User", +SUBSTR(s1.program,1,20) "WAITING Program", +s1.client_info "WAITING Client", +SUBSTR(TO_CHAR(h.session_id),1,5) HSID, p2.spid HPID, +SUBSTR(s2.username,1,12) "HOLDING User", +SUBSTR(s2.osuser,1,8) "OS User", +SUBSTR(s2.program,1,20) "HOLDING Program", +s2.client_info "HOLDING Client", +o.object_name "HOLDING Object" +FROM gv$process p1, gv$process p2, gv$session s1, +gv$session s2, dba_locks w, dba_locks h, dba_objects o +WHERE w.last_convert > 60 +AND h.mode_held != 'None' +AND h.mode_held != 'Null' +AND w.mode_requested != 'None' +AND s1.row_wait_obj# = o.object_id +AND w.lock_type(+) = h.lock_type +AND w.lock_id1(+) = h.lock_id1 +AND w.lock_id2 (+) = h.lock_id2 +AND w.session_id = s1.sid (+) +AND h.session_id = s2.sid (+) +AND s1.paddr = p1.addr (+) +AND s2.paddr = p2.addr (+) +ORDER BY w.last_convert DESC; \ No newline at end of file diff --git a/tiddlywiki/mailx - example with orange.fr.txt b/tiddlywiki/mailx - example with orange.fr.txt new file mode 100755 index 0000000..d1ec90d --- /dev/null +++ b/tiddlywiki/mailx - example with orange.fr.txt @@ -0,0 +1,10 @@ + echo "This is the message 
body and contains the message" | mailx -v \ + -r "root@ssh.databasepro.fr" \ + -s "This is the subject" \ + -S smtp="smtp.orange.fr:587" \ + -S smtp-auth=login \ + -S smtp-auth-user="plesnila.valeriu@orange.fr" \ + -S smtp-auth-password="*****" \ + -S ssl-verify=ignore \ + vplesnila@gmail.com + \ No newline at end of file diff --git a/tiddlywiki/mailx examples.tid b/tiddlywiki/mailx examples.tid new file mode 100755 index 0000000..3bc86ad --- /dev/null +++ b/tiddlywiki/mailx examples.tid @@ -0,0 +1,9 @@ +created: 20190924090609692 +creator: vplesnila +modified: 20190924090702041 +modifier: vplesnila +tags: Linux +title: mailx examples +type: text/plain + +echo "Logs enclosed"| mailx -s 'Some logs' -a full_rmjvm.log -a mddins.log valeriu.plesnila@externe.arval.com diff --git a/tiddlywiki/mazzolino_tiddlywiki with docker.md b/tiddlywiki/mazzolino_tiddlywiki with docker.md new file mode 100755 index 0000000..6f1dd70 --- /dev/null +++ b/tiddlywiki/mazzolino_tiddlywiki with docker.md @@ -0,0 +1,41 @@ +# Get basic docker image + + docker pull mazzolino/tiddlywiki + + +# Customize the image upgrading tiddlywiki + +Create persistent directory: + + mkdir -p /app/persistent_docker/tiddlywiki + +Create `Dockerfile`: + + FROM mazzolino/tiddlywiki:latest + MAINTAINER Valeriu PLESNILA + RUN npm update -g tiddlywiki + + +Build new image `my_tiddlywiki`: + + docker build -t my_tiddlywiki . 
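Note that the `MAINTAINER` instruction is deprecated in current Docker releases in favor of `LABEL maintainer=...`. The build context can also be assembled and sanity-checked before building; a minimal sketch of the steps above (the `build_ctx` directory name is illustrative):

```shell
# Assemble the Dockerfile from the steps above into its own build context.
mkdir -p build_ctx
cat > build_ctx/Dockerfile <<'EOF'
FROM mazzolino/tiddlywiki:latest
LABEL maintainer="Valeriu PLESNILA"
RUN npm update -g tiddlywiki
EOF
# Quick check: base image and upgrade step are both present.
grep -c 'tiddlywiki' build_ctx/Dockerfile
# docker build -t my_tiddlywiki build_ctx   # run where the Docker daemon is available
```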
+ + +Create `docker-compose.yaml` file: + + services: + wiki: + image: my_tiddlywiki + restart: always + environment: + - USERNAME=***** + - PASSWORD=***** + ports: + - 8080:8080 + volumes: + - /app/persistent_docker/tiddlywiki:/var/lib/tiddlywiki + + +# Create and run the container: + + docker-compose up -d diff --git a/tiddlywiki/nginx with Docker.md b/tiddlywiki/nginx with Docker.md new file mode 100755 index 0000000..ab3abd8 --- /dev/null +++ b/tiddlywiki/nginx with Docker.md @@ -0,0 +1,70 @@ +List available docker images: + + docker search nginx + +Download official image: + + docker pull nginx + +Create persistent directory: + + mkdir -p /app/persistent_docker/nginx + cd /app/persistent_docker/nginx + mkdir www conf logs + +Create `/app/persistent_docker/nginx/conf/nginx.conf`: + + events { + + } + error_log /wwwlogs/error.log info; + http { + + server { + listen 80; + server_name localhost; + access_log /wwwlogs/access.log combined; + location / { + root /www/demo; + index index.html index.htm; + } + } + } + + +Create the root directory for the default site: + + mkdir /app/persistent_docker/nginx/www/demo + echo "Hello world" > /app/persistent_docker/nginx/www/demo/index.html + +Start the container: + + docker run \ + -p 80:80 -p 443:443 \ + --name nginx \ + -v /etc/letsencrypt:/etc/letsencrypt \ + -v /app/persistent_docker/nginx/www:/www \ + -v /app/persistent_docker/nginx/conf:/etc/nginx \ + -v /app/persistent_docker/nginx/logs:/wwwlogs \ + -d nginx + + +In order to use docker-compose, create `docker-compose.yml`: + + services: + nginx: + image: nginx + restart: always + volumes: + - /etc/letsencrypt:/etc/letsencrypt + - /app/persistent_docker/nginx/www:/www + - /app/persistent_docker/nginx/conf:/etc/nginx + - /app/persistent_docker/nginx/logs:/wwwlogs + ports: + - 80:80 + - 443:443 + + +Start the container and set the autostart: + + docker-compose up -d + docker update --restart unless-stopped nginx diff --git a/tiddlywiki/nid
example.txt new file mode 100755 index 0000000..4f7c0b1 --- /dev/null +++ b/tiddlywiki/nid example.txt @@ -0,0 +1 @@ +https://doyensys.com/blogs/how-to-change-the-db-name-using-nid-utility/ diff --git a/tiddlywiki/non-CDB upgrade and convert to PDB - example.md b/tiddlywiki/non-CDB upgrade and convert to PDB - example.md new file mode 100755 index 0000000..5d5567c --- /dev/null +++ b/tiddlywiki/non-CDB upgrade and convert to PDB - example.md @@ -0,0 +1,126 @@ +> This example applies only if target database is in 19c version or less; direct upgrade to 21c is possible only if source database version is 12.2 or superior + +In this example we will: + * restore `11.2.0.4` WEDGEPRD database (`db_name=WEDGE, db_unique_name=WEDGEPRD`) from a backup of another database (ZABRAK) + * upgrade WEDGEPRD to `19` + * plug WEDGEPRD as container into `19 CDB` ASTYPRD database (`db_name=ASTY, db_unique_name=ASTYPRD`) + + +Restore WEDGEPRD from ZABRAK backup with `noopen` in order to perform the database upgrade to `19`: + + rman auxiliary / + run + { + allocate auxiliary channel aux01 device type disk; + allocate auxiliary channel aux02 device type disk; + allocate auxiliary channel aux03 device type disk; + allocate auxiliary channel aux04 device type disk; + allocate auxiliary channel aux05 device type disk; + allocate auxiliary channel aux06 device type disk; + allocate auxiliary channel aux07 device type disk; + allocate auxiliary channel aux08 device type disk; + allocate auxiliary channel aux09 device type disk; + allocate auxiliary channel aux10 device type disk; + duplicate target database to WEDGE noopen backup location '/mnt/yavin4/tmp/_oracle_/orabackup/_keep_/RAC/11.2.0.4/ZABRAK'; + } + + +Startup database in `upgrade` mode: + + alter database open resetlogs upgrade; + +Change `ORACLE_HOME` to `19` and make the upgrade: + + $ORACLE_HOME/bin/dbupgrade + + +Check if CDB is in *local undo* mode: + + column property_name format a30 + column property_value format a30 + + 
select property_name, property_value + from database_properties + where property_name = 'LOCAL_UNDO_ENABLED'; + + +To change in *local undo* mode: + + startup upgrade; + alter database local undo on; + +Backup CDB: + + run + { + set nocfau; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + allocate channel ch02 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + allocate channel ch03 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + allocate channel ch04 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.bck'; + backup as compressed backupset incremental level 0 database section size 2G include current controlfile plus archivelog delete input; + release channel ch01; + release channel ch02; + release channel ch03; + release channel ch04; + allocate channel ch01 device type disk format '/mnt/yavin4/tmp/_oracle_/orabackup/ASTY/%d_%U_%s_%t.controlfile'; + backup current controlfile; + release channel ch01; + } + + +Restart source WEDGEPRD in `read only` mode, generate xml file and stop the database: + + startup open read only; + exec DBMS_PDB.DESCRIBE('/mnt/yavin4/tmp/_oracle_/tmp/WEDGE.xml'); + + +Check database compatibility with the CDB: + + exec DBMS_PDB.CLEAR_PLUGIN_VIOLATIONS; + + set serveroutput on + DECLARE + compatible CONSTANT VARCHAR2(3) := + CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY( pdb_descr_file => '/mnt/yavin4/tmp/_oracle_/tmp/WEDGE.xml', pdb_name => 'WEDGEPRD') WHEN TRUE THEN 'YES' ELSE 'NO' + END; + + BEGIN + DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible? 
==> ' || compatible); + END; + / + + + + col name for a10 trunc + col type for a10 trunc + col cause for a10 trunc + col time for a15 trunc + col status for a10 trunc + col action for a50 trunc + col message for a70 trunc + set lines 200 + + select name, cause, type, status,action,message,time from pdb_plug_in_violations; + + +Plug WEDGEPRD into ASTYPRD CDB (in place plugin, not a database copy): + + create pluggable database WEDGEPRD using '/mnt/yavin4/tmp/_oracle_/tmp/WEDGE.xml' nocopy tempfile reuse; + + -- Monitoring parallel execution servers using a different session + SELECT qcsid, qcserial#, sid, serial# + FROM v$px_session + ORDER BY 1,2,3; + + alter session set container=WEDGEPRD; + @?/rdbms/admin/noncdb_to_pdb.sql + + +Restart pluggable database and save the state: + + alter pluggable database WEDGEPRD close immediate; + alter pluggable database WEDGEPRD open; + alter pluggable database WEDGEPRD save state; + +> The new pluggable database will use the controlfile and redologs of the CDB, so old controlfile + redolog can be deleted. 
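After `noncdb_to_pdb.sql` completes, it is worth re-running the violation query from above against the freshly opened PDB. A small sketch that saves the post-plug verification queries to a reusable script (the file name and the `RESOLVED` status filter are assumptions; adapt them to your CDB):

```shell
# Write the post-plug verification queries to a helper script.
cat > check_pdb.sql <<'EOF'
-- open mode of the new pluggable database
select name, open_mode from v$pdbs where name = 'WEDGEPRD';
-- plug-in violations that are still outstanding
select name, cause, status, message
from pdb_plug_in_violations
where status <> 'RESOLVED';
EOF
grep -c 'select' check_pdb.sql
# sqlplus / as sysdba @check_pdb.sql   # run against the ASTYPRD CDB
```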
\ No newline at end of file diff --git a/tiddlywiki/obj_stats_history.sql.txt b/tiddlywiki/obj_stats_history.sql.txt new file mode 100755 index 0000000..370e26d --- /dev/null +++ b/tiddlywiki/obj_stats_history.sql.txt @@ -0,0 +1,19 @@ +SET LINE 180 +SET PAGES 50 + +COL OWNER FOR A20 +COL OBJECT_NAME FOR A20 + +define OWNER=AINSREQ1 +define OBJECT_NAME=INS_MKT_SUMMARY_J + +SELECT + OBJ.OWNER,OBJ.OBJECT_NAME,ANALYZETIME,HS.ROWCNT,HS.BLKCNT,HS.SAMPLESIZE + FROM + DBA_OBJECTS OBJ, WRI$_OPTSTAT_TAB_HISTORY HS + WHERE + OBJ.OWNER='&OWNER' AND OBJ.OBJECT_NAME='&OBJECT_NAME' AND OBJ.OBJECT_ID=HS.OBJ# + ORDER BY + ANALYZETIME ASC; + +select num_rows from dba_tables where owner='&OWNER' AND table_name='&OBJECT_NAME'; \ No newline at end of file diff --git a/tiddlywiki/ogg_libs.py.txt b/tiddlywiki/ogg_libs.py.txt new file mode 100755 index 0000000..718dfdb --- /dev/null +++ b/tiddlywiki/ogg_libs.py.txt @@ -0,0 +1,1574 @@ +#!/u01/app/python/current_version/bin/python3 + +import os +import subprocess +import logging +import argparse +import datetime +import socket +import shlex +import cx_Oracle +from colorama import init +from colorama import Fore, Back +import json +import time +import stat +import shutil +import zipfile +from pathlib import Path +import re + + + +# CONSTANTS +OGG_SERVICE_DIR = "/u01/app/oracle/admin/OGG_Service" +SUCCESS = 0 +ERROR = 1 +TAB = "\t" +NEWLINE = "\n" +OGG_STATUS_SLEEP_TIME_WHEN_CHECK = 5 +OGG_STATUS_MAX_ITERATION_CHECK = 12 +OGG_LAG_MAX_ITERATION_CHECK = 24 +TARGET_DATABASE_CREATE_INDEX_IGNORE_ERROR_MESSAGES = ["ORA-02260","ORA-00955","ORA-01408"] +EXPDP_IGNORE_ERROR_MESSAGES = ["ORA-31693","ORA-02354","ORA-01466"] +IMPDP_IGNORE_ERROR_MESSAGES = [] + + + +# GENERIC FUNCTIONS + +def start_logging(logfile): + logger = logging.getLogger(__name__) + logger.setLevel(logging.INFO) + + # create a file handler + handler = logging.FileHandler(logfile) + handler.setLevel(logging.INFO) + + # create a logging format + formatter = 
logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s") + handler.setFormatter(formatter) + + # add the handlers to the logger + logger.addHandler(handler) + return logger + + +def get_hostname(): + global hostname + hostname = socket.gethostname() + return + +def upload_file(source_file, target_host, target_file): + shellcommand = "ssh oracle@" + target_host + " mkdir -p $(dirname " + target_file + ") ; "+ "scp -p " + source_file + " oracle@" + target_host + ":" + target_file + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + return + +def download_file(source_file, source_host, target_file): + shellcommand = "scp -p " + source_host + ":" + source_file + " " + target_file + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + return + +def upload_directory(source_dir, target_host, target_dir): + shellcommand = "scp -rp " + source_dir + " oracle@" + target_host + ":" + target_dir + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + return + + +# Functions for pretty color printing +def init_pretty_color_printing(): + init(autoreset=True) + return + +def Red_On_Black(string): + return (Fore.RED + Back.BLACK + string) + +def Cyan_On_Black(string): + return (Fore.CYAN + Back.BLACK + string) + +def Yellow_On_Black(string): + return (Fore.YELLOW + Back.BLACK + string) + +def Magenta_On_Black(string): + return (Fore.MAGENTA + Back.BLACK + string) + +def Green_On_Black(string): + return (Fore.GREEN + Back.BLACK + string) + +def White_On_Black(string): + return (Fore.WHITE + Back.BLACK + string) + +def intersection(lst1, lst2): + lst3 = [value for value in lst1 if value in lst2] + return lst3 + +def union(lst1, lst2): + final_list = list(set(lst1) | set(lst2)) + return final_list + +def union_keep_order(lst1, lst2): + lst3 = lst1 + lst2 + # Create an empty list to store unique elements + uniqueList = [] + # Iterate 
over the original list and for each element + # add it to uniqueList, if its not already there. + for element in lst3: + if element not in uniqueList: + uniqueList.append(element) + + # Return the list of unique elements + return uniqueList + + +def removeDuplicates(listofElements): + # Create an empty list to store unique elements + uniqueList = [] + # Iterate over the original list and for each element + # add it to uniqueList, if its not already there. + for elem in listofElements: + if elem not in uniqueList: + uniqueList.append(elem) + # Return the list of unique elements + return uniqueList + +def concatenate_files(input_files, output_file): + with open(output_file, 'w') as outfile: + for fname in input_files: + with open(fname) as infile: + outfile.write(infile.read()) + return + +def merge_dict(dict1, dict2): + dict3 = dict(dict1, **dict2) + return dict3 + +def remove_empty_lines(filename): + with open(filename) as filehandle: + lines = filehandle.readlines() + + with open(filename, 'w') as filehandle: + lines = filter(lambda x: x.strip(), lines) + filehandle.writelines(lines) + return + +# CLASS OGG_Sync +class OGG_Sync: + def __init__(self, extract, replicat): # Constructor of the class + self.runid = datetime.datetime.now().strftime('%Y-%m-%d_%H_%M_%S_%f') + self.extract = extract + self.replicat = replicat + # Identify environement type (dev|prod) + envinfo_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/env.info" + try: + contents_of_envinfo = open(envinfo_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file env.info not found.")) + exit (ERROR) + + self.specific_params={} + for line in contents_of_envinfo: + line = line.rstrip() + line = line.lstrip() + if line !="": + param, value = line.split("=", 1) + param = param.rstrip() + param = param.lstrip() + value = value.rstrip() + value = value.lstrip() + self.specific_params[param] = value 
+ + + self.extract_addnative_option_array = [] + # Load configuration + configuration_file = OGG_SERVICE_DIR + "/etc/" + self.specific_params["type_environement"] + ".conf" + contents_configuration_file = open(configuration_file, "r", encoding = "utf-8", errors = "replace").readlines() + self.config_params={} + for line in contents_configuration_file: + line = line.rstrip() + line = line.lstrip() + if line !="": + param, value = line.split("=", 1) + param = param.rstrip() + param = param.lstrip() + value = value.rstrip() + value = value.lstrip() + self.config_params[param] = value + return + + def get_class_attributes_as_json(self): # Display class attributes + json_class_attributes = vars(self) + return json.dumps(json_class_attributes, indent = 2) + + def get_config_params_as_json(self): # Display class attributes + return json.dumps(self.config_params, indent = 2) + + def get_type_env(self): + return + + def build_extract_prm(self): + extract_header_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.extract + "_header.prm" + extract_tables_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.extract + "_tables.prm" + if os.path.isfile(extract_header_prm_filename) and os.path.isfile(extract_tables_prm_filename): + extract_prm_filename = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.extract + ".prm" + # concatenate header + table prm sections + contents_of_prm = [] + contents_of_prm.append("-- This file has been is generated from the following files on " + hostname + " :\n") + contents_of_prm.append("-- " + extract_header_prm_filename + " \n") + contents_of_prm.append("-- " + extract_tables_prm_filename) + contents_of_prm.append(NEWLINE) + contents_of_extract_header_prm = open(extract_header_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + contents_of_extract_tables_prm = open(extract_tables_prm_filename, "r", encoding = "utf-8", 
errors = "replace").readlines() + + for line in contents_of_extract_header_prm: + auxline = line.rstrip() + auxline = auxline.replace(" ", "") + if auxline != "": + contents_of_prm.append(line) + + contents_of_prm.append(NEWLINE) + + for line in contents_of_extract_tables_prm: + auxline = line.rstrip() + auxline = auxline.replace(" ", "") + if auxline != "": + contents_of_prm.append(line) + + f = open(extract_prm_filename, 'w') + contents_of_prm="".join(contents_of_prm) + f.write(contents_of_prm) + f.close() + # store extract_prm_filename in class for later usage + self.extract_prm_filename = extract_prm_filename + else: + print ("OGG Sync EXTRACT configuration files does not exist: ") + print (" " + extract_header_prm_filename) + print (" " + extract_tables_prm_filename) + exit (ERROR) + return + + def build_replicat_prm(self): + replicat_header_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_header.prm" + replicat_tables_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_tables.prm" + if os.path.isfile(replicat_header_prm_filename) and os.path.isfile(replicat_tables_prm_filename): + # create temporary sync directory + try: + os.mkdir(OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat) + # store the value in class for later usage + except (FileExistsError) as e: + pass + + replicat_prm_filename = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.replicat + ".prm" + + # concatename header + tables prm sections + contents_of_prm = [] + contents_of_prm.append("-- This file has been is generated from the following files on " + hostname + " :\n") + contents_of_prm.append("-- " + replicat_header_prm_filename + " \n") + contents_of_prm.append("-- " + replicat_tables_prm_filename) + contents_of_prm.append(NEWLINE) + contents_of_replicat_header_prm = open(replicat_header_prm_filename, "r", encoding = "utf-8", errors = 
"replace").readlines() + contents_of_replicat_tables_prm = open(replicat_tables_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + + for line in contents_of_replicat_header_prm: + auxline = line.rstrip() + auxline = auxline.replace(" ", "") + if auxline != "": + contents_of_prm.append(line) + + contents_of_prm.append(NEWLINE) + + for line in contents_of_replicat_tables_prm: + auxline = line.rstrip().lstrip() + auxline = auxline.replace(" ", "") + auxline = re.sub(' +', ' ',auxline) + if auxline != "": + contents_of_prm.append(line) + + f = open(replicat_prm_filename, 'w') + contents_of_prm="".join(contents_of_prm) + f.write(contents_of_prm) + f.close() + # store replicat_prm_filename in class for later usage + self.replicat_prm_filename = replicat_prm_filename + else: + print ("OGG Sync REPLICAT configuration files does not exist: ") + print (" " + replicat_header_prm_filename) + print (" " + replicat_tables_prm_filename) + exit (ERROR) + return + + def upload_extract_prm(self): + upload_file(OGG_SERVICE_DIR + "/temp/" + self.extract + "_" + self.replicat + "/" + self.extract + ".prm" , self.config_params["ogg_host"], self.config_params["ogg12_path"] + "/dirprm/" + self.extract + ".prm") + return + + def upload_replicat_prm(self): + upload_file(OGG_SERVICE_DIR + "/temp/" + self.extract + "_" + self.replicat + "/" + self.replicat + ".prm" , self.config_params["ogg_host"], self.config_params["ogg12_path"] + "/dirprm/" + self.replicat + ".prm") + return + + def upload_prm_files(self): + self.upload_extract_prm() + self.upload_replicat_prm() + return + + def upload_temp_file_to_ogg_host(self, file_name): + upload_file(OGG_SERVICE_DIR + "/temp/" + self.extract + "_" + self.replicat + "/" + file_name, self.config_params["ogg_host"], self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + file_name) + return + + def sync_full_old(self): + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " " + 
self.config_params["ogg_full_refresh_shell"] + " -e " + self.extract + " -r " + self.replicat + " -v 12" + process = subprocess.Popen(shlex.split(shellcommand), stdout=subprocess.PIPE) + while True: + output = process.stdout.readline() + output = output.decode('utf-8') + if output == '' and process.poll() is not None: + break + if output: + print(Cyan_On_Black(output.strip())) + rc = process.poll() + return rc + + def parse_prm_headers(self): + # Parsing EXTRACT header + extract_header_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.extract + "_header.prm" + try: + contents_of_extract_header_prm = open(extract_header_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.extract + "_header.prm not found.")) + exit (ERROR) + + for line in contents_of_extract_header_prm: + # supress spaces in begin/end line + auxline = line.rstrip() + auxline = auxline.lstrip() + # skip comments + if not auxline.startswith("--"): + if auxline.upper().startswith("USERID"): + # GGSCI login on SOURCE database + self.ggsci_db_source_login = auxline.replace("USERID ", "", 1) + # Source DB informations + (str1,str2) = auxline.split(",") + (str3,str4) = str1.split("@") + self.database_source = str4 + self.database_source_instances = self.get_array_active_db_nodes(self.database_source) + self.database_source_version = self.get_database_version(self.database_source) + # Parsing REPLICAT header + replicat_header_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_header.prm" + try: + contents_of_replicat_header_prm = open(replicat_header_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.replicat + "_header.prm not found.")) + exit (ERROR) + + remap_tablespace_list = [] + for line in 
contents_of_replicat_header_prm: + # supress spaces in begin/end line + auxline = line.rstrip() + auxline = auxline.lstrip() + # skip comments + if not auxline.startswith("--"): + if auxline.upper().startswith("USERID"): + # GGSCI login on SOURCE database + self.ggsci_db_target_login = auxline.replace("USERID ", "", 1) + # Source DB informations + (str1,str2) = auxline.upper().split(",") + (str3,str4) = str1.split("@") + (str5,str6) = str4.split("_OGG") + self.database_target = str5 + "EXA" + if auxline.upper().startswith("DDLSUBST"): + # Tablespace mapping + auxline = auxline.replace("'", "") + # Remove double spaces + auxline = re.sub(' +', ' ',auxline) + words_of_auxline = auxline.split(" ") + index_of_word_with = words_of_auxline.index("with") + remap_tablespace_clause = words_of_auxline[index_of_word_with -1] + ":" + words_of_auxline[index_of_word_with +1] + remap_tablespace_list.append(remap_tablespace_clause) + + self.remap_tablespace = ",".join(remap_tablespace_list) + self.database_target_instances = self.get_array_active_db_nodes(self.database_target) + self.database_target_version = self.get_database_version(self.database_target) + return + + def fast_parse_prm_headers(self): + # For WWW presentation, where the standard parsing procedure is not fast enough + # Parsing EXTRACT header + extract_header_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.extract + "_header.prm" + try: + contents_of_extract_header_prm = open(extract_header_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.extract + "_header.prm not found.")) + exit (ERROR) + + for line in contents_of_extract_header_prm: + # supress spaces in begin/end line + auxline = line.rstrip() + auxline = auxline.lstrip() + # skip comments + if not auxline.startswith("--"): + if auxline.upper().startswith("USERID"): + # GGSCI login on SOURCE database + 
self.ggsci_db_source_login = auxline.replace("USERID ", "", 1) + # Source DB informations + (str1,str2) = auxline.split(",") + (str3,str4) = str1.split("@") + self.database_source = str4 + # Parsing REPLICAT header + replicat_header_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_header.prm" + try: + contents_of_replicat_header_prm = open(replicat_header_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.replicat + "_header.prm not found.")) + exit (ERROR) + + remap_tablespace_list = [] + for line in contents_of_replicat_header_prm: + # supress spaces in begin/end line + auxline = line.rstrip() + auxline = auxline.lstrip() + # skip comments + if not auxline.startswith("--"): + if auxline.upper().startswith("USERID"): + # GGSCI login on SOURCE database + self.ggsci_db_target_login = auxline.replace("USERID ", "", 1) + # Source DB informations + (str1,str2) = auxline.upper().split(",") + (str3,str4) = str1.split("@") + (str5,str6) = str4.split("_OGG") + self.database_target = str5 + "EXA" + + return + + + def parse_prm_delta(self): + # EXTRACT DELTA + extract_delta_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.extract + "_delta.prm" + try: + contents_of_extract_delta_prm = open(extract_delta_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.extract + "_delta.prm not found.")) + exit (ERROR) + + if len(contents_of_extract_delta_prm) == 0: + print (Red_On_Black("ERROR: file " + self.extract + "_delta.prm is empty.")) + exit (ERROR) + + tab_owner_array=[] + tab_name_array=[] + # Identify tables + delta_addnative_option_array = [] + for line in contents_of_extract_delta_prm: + auxline = line.rstrip() + auxline = auxline.lstrip() + # skip comments + if not 
auxline.startswith("--"): + # eliminate double spaces and spaces + auxline = re.sub(' +', ' ',auxline) + auxline = auxline.replace(" ","") + auxline = auxline.replace(";", "") + auxline = auxline.upper() + if auxline.upper().startswith("TABLE"): + auxline = auxline.replace("TABLE", "", 1) + auxline2 = auxline.split(",")[0] + (tab_owner,tab_name) = auxline2.split(".") + tab_owner_array.append(tab_owner) + tab_name_array.append(tab_name) + if ",_ADDNATIVE" in auxline: + delta_addnative_option_array.append(tab_name) + initial_addnative_option_array = self.extract_addnative_option_array + self.extract_addnative_option_array = union_keep_order(initial_addnative_option_array, delta_addnative_option_array) + # Check if we have only one owner + self.new_table_owner_on_source_db = tab_owner_array[0] + self.array_new_tables_on_source_db = tab_name_array + for tab_owner in tab_owner_array: + if tab_owner != self.new_table_owner_on_source_db: + print (Red_On_Black("ERROR: more than one table owner in " + self.extract + "_delta.prm.")) + exit (ERROR) + + # REPLICAT DELTA + replicat_delta_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_delta.prm" + + try: + contents_of_replicat_delta_prm = open(replicat_delta_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.replicat + "_delta.prm not found.")) + exit (ERROR) + + if len(contents_of_replicat_delta_prm) == 0: + print (Red_On_Black("ERROR: file " + self.replicat + "_delta.prm is empty.")) + exit (ERROR) + + self.new_map_dictionary = {} + table_array = [] + for line in contents_of_replicat_delta_prm: + # suppress spaces in begin/end line + auxline = line.rstrip() + auxline = auxline.lstrip() + # skip comments + if not auxline.startswith("--"): + if auxline.upper().startswith("MAP") and "TARGET" in auxline.upper() and "J$" not in auxline.upper(): + # Classic MAP section + #
Force uppercase and elilminate ";" + auxline = auxline.upper() + auxline = auxline.replace(";", "") + (str1, str2) = auxline.split("TARGET ") + str2 = str2.lstrip().rstrip() + table_array.append(str2) + if auxline.upper().startswith("MAP") and "TARGET" in auxline.upper() and "J$" in auxline.upper(): + # J$ section + auxline = auxline.upper() + str1 = auxline.split(",")[1] + str2 = str1.split(".")[1] + self.new_map_dictionary[str2] = auxline + + array_new_owner_tables_on_target_db = table_array + + # Check if we have only one owner + self.array_new_tables_on_target_db = [] + self.new_table_owner_on_target_db = array_new_owner_tables_on_target_db[0].split(".")[0] + for line in array_new_owner_tables_on_target_db: + (owner, table_name) = line.split(".") + self.array_new_tables_on_target_db.append(table_name) + if owner != self.new_table_owner_on_target_db: + print ("More than one table owners in REPLICAT delta parameter file.") + exit (ERROR) + return + + + + def check_pk_uk_on_source_db_delta_tables(self): + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + self.database_source, mode=cx_Oracle.SYSDBA) + cursor = db.cursor() + # Check if source tables exist + for table_name in self.array_new_tables_on_source_db: + sql = "select count(1) from DBA_TABLES where OWNER='" + self.current_table_owner_on_source_db + "' and TABLE_NAME='" + table_name +"'" + cursor.execute(sql) + for row in cursor: + count=row[0] + if count == 0: + print (NEWLINE + "ERROR: Table " + self.current_table_owner_on_source_db + "." 
+ table_name + " does not exist in database " + self.database_source) + exit (ERROR) + # Check if the tables have PK or UK + for table_name in self.array_new_tables_on_source_db: + sql = "select count(1) from DBA_CONSTRAINTS where OWNER='" + self.current_table_owner_on_source_db + "' and TABLE_NAME='" + table_name +"' and CONSTRAINT_TYPE in ('P','U')" + cursor.execute(sql) + for row in cursor: + count=row[0] + if count == 0: + print (NEWLINE +"ERROR: Table " + self.current_table_owner_on_source_db + "." + table_name + " in database " + self.database_source + " does not have any PRIMARY KEY or UNIQUE INDEX") + exit (ERROR) + + cursor.close() + except cx_Oracle.DatabaseError as err: + print (err) + exit (ERROR) + return + + + def add_trandata_on_delta_tables(self): + # Generate INFO TRANDATA obey file + # compose file content + contents_of_file = [] + contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + for tab_name in self.array_new_tables_on_source_db: + contents_of_file.append("INFO TRANDATA " + self.current_table_owner_on_source_db + "."
+ tab_name) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write contect to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "delta_info_trandata.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the INFO TRANDATA obey file + self.upload_temp_file_to_ogg_host("delta_info_trandata.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "delta_info_trandata.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + + # Create an array for the tables with disabled TRANDATA + array_new_tables_to_enable_trandata = [] + for line in cmd_output.splitlines(): + if "disabled" in line: + # trandata is disabled + table_name = line.split("for table ")[1].split(".")[1] + array_new_tables_to_enable_trandata.append(table_name) + + # Generate ADD TRANDATA obey file + # compose file content + contents_of_file = [] + contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + for tab_name in array_new_tables_to_enable_trandata: + contents_of_file.append("ADD TRANDATA " + self.current_table_owner_on_source_db + "." 
+ tab_name) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "delta_add_trandata.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the ADD TRANDATA obey file + self.upload_temp_file_to_ogg_host("delta_add_trandata.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "delta_add_trandata.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + return + + def create_remote_dirs(self): + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " mkdir -p " + self.config_params["remote_directory"] +"/temp/" + self.extract + "_" + self.replicat + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + return + + + def get_array_active_db_nodes(self, database): + array_active_db_nodes = [] + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + database, mode=cx_Oracle.SYSDBA) + cursor = db.cursor() + sql = "select INSTANCE_NAME, HOST_NAME from GV$INSTANCE where STATUS='OPEN'" + cursor.execute(sql) + for row in cursor: + instance_name, host_name=row + short_host_name = host_name.replace(".france.intra.corp", "") + array_active_db_nodes.append(instance_name) + array_active_db_nodes.append(short_host_name) + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + print (Red_On_Black("UNHANDLED ERROR: " + str(err))) + exit (ERROR) + return
array_active_db_nodes + + def get_database_version(self, database): + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + database, mode=cx_Oracle.SYSDBA) + cursor = db.cursor() + sql = "select VERSION from V$INSTANCE" + cursor.execute(sql) + for row in cursor: + version = row[0] + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + print (Red_On_Black("UNHANDLED ERROR: " + str(err))) + exit (ERROR) + return version + + def drop_new_tables_exists_on_target_db(self): + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + self.database_target, mode=cx_Oracle.SYSDBA) + except cx_Oracle.DatabaseError as err: + print (Red_On_Black("UNHANDLED ERROR: " + str(err))) + exit (ERROR) + + cursor = db.cursor() + + # Drop table on target database + for table_name in self.array_new_tables_on_target_db: + try: + sql = "drop table " + self.current_table_owner_on_target_db + "."
+ table_name +" purge" + cursor.execute(sql) + except cx_Oracle.DatabaseError as err: + ora_xxxxx = str(err).split(":")[0] + if ora_xxxxx == "ORA-00942": + # ORA-00942: table or view does not exist + pass + else: + print (Red_On_Black("UNHANDLED ERROR: " + str(err))) + exit (ERROR) + cursor.close() + return + + + def get_running_ogg_host(self): + ogg12_env = self.config_params["ogg12_env"] + ogg12_path = self.config_params["ogg12_path"] + + # Get the ogg host list, remove spaces, and tokenize it by commas ',' + ogg_host_list = self.config_params["ogg_host_list"] + ogg_host_list = ogg_host_list.replace(' ','').split(',') + + # Check if, and which, node is running the ogg manager + for ogg_host in ogg_host_list: + shellcommand="ssh oracle@" + ogg_host + " \'" + ogg12_env +"; echo info manager | " + ogg12_path + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + for line in cmd_output.splitlines(): + if ("MANAGER IS RUNNING" in line.upper()): + self.config_params["ogg_host"] = ogg_host + return ogg_host + + return + + + def set_ogg_host(self, ogg_host_dev, ogg_host_prod): + # Replace the ogg_host with the one received by parameter + if self.specific_params["type_environement"].upper() == "DEV": + self.config_params["ogg_host"] = ogg_host_dev + elif self.specific_params["type_environement"].upper() == "PROD": + self.config_params["ogg_host"] = ogg_host_prod + return + + + def get_extract_status(self): + # Generate INFO EXTRACT obey file + # compose file content + contents_of_file = [] + # contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + contents_of_file.append("info " + self.extract ) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "info_extract.obey" + f = open(file_name, 'w') +
f.write(contents_of_file) + f.close() + # Upload and execute the INFO EXTRACT obey file + self.upload_temp_file_to_ogg_host("info_extract.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "info_extract.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + for line in cmd_output.splitlines(): + if ("EXTRACT" in line.upper()) and (self.extract.upper() in line.upper()): + extract_status = line.split("Status ")[1] + return extract_status + + def stop_extract(self): + # Generate STOP EXTRACT obey file + # compose file content + contents_of_file = [] + # contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + contents_of_file.append("stop " + self.extract ) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "stop_extract.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the STOP EXTRACT obey file + self.upload_temp_file_to_ogg_host("stop_extract.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "stop_extract.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + # Check the status until it is STOPPED + status = self.get_extract_status() + iteration = 0 +
while (status != "STOPPED" and iteration <= OGG_STATUS_MAX_ITERATION_CHECK): + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + status = self.get_extract_status() + iteration += 1 + if iteration == OGG_STATUS_MAX_ITERATION_CHECK: + print(Red_On_Black("FAILED")) + exit (ERROR) + return + + def start_extract(self): + # Generate START EXTRACT obey file + # compose file content + contents_of_file = [] + # contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + contents_of_file.append("start " + self.extract ) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "start_extract.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the START EXTRACT obey file + self.upload_temp_file_to_ogg_host("start_extract.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "start_extract.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + # Wait a little while + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + # Check the status until it is RUNNING + status = self.get_extract_status() + iteration = 0 + while (status != "RUNNING" and iteration <= OGG_STATUS_MAX_ITERATION_CHECK): + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + status = self.get_extract_status() + iteration += 1 + if iteration == OGG_STATUS_MAX_ITERATION_CHECK: + print(Red_On_Black("FAILED")) + exit (ERROR) + return + + + def get_replicat_status(self): + replicat_status = "MISSING" + # Generate INFO REPLICAT obey file + # compose file content + contents_of_file = [] + #
contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + contents_of_file.append("info " + self.replicat ) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "info_replicat.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the INFO REPLICAT obey file + self.upload_temp_file_to_ogg_host("info_replicat.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "info_replicat.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + for line in cmd_output.splitlines(): + if ("REPLICAT" in line.upper()) and (self.replicat.upper() in line.upper()): + replicat_status = line.split("Status ")[1] + return replicat_status + + def update_sync_status(self): + self.extract_status = self.get_extract_status() + self.replicat_status = self.get_replicat_status() + return + + + def stop_replicat(self): + # Generate STOP REPLICAT obey file + # compose file content + contents_of_file = [] + # contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + contents_of_file.append("stop " + self.replicat ) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "stop_replicat.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the obey file + self.upload_temp_file_to_ogg_host("stop_replicat.obey") + remote_obey_filename =
self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "stop_replicat.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + # Check the status until it is STOPPED + status = self.get_replicat_status() + iteration = 0 + while (status != "STOPPED" and iteration <= OGG_STATUS_MAX_ITERATION_CHECK): + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + status = self.get_replicat_status() + iteration += 1 + if iteration == OGG_STATUS_MAX_ITERATION_CHECK: + print(Red_On_Black("FAILED")) + exit (ERROR) + return + + def start_replicat(self): + # Generate START REPLICAT obey file + # compose file content + contents_of_file = [] + # contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + contents_of_file.append("start " + self.replicat ) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "start_replicat.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the obey file + self.upload_temp_file_to_ogg_host("start_replicat.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "start_replicat.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + cmd_output=cmd.stdout.decode('utf-8') + # Check the status until it is RUNNING + status =
self.get_replicat_status() + iteration = 0 + while (status != "RUNNING" and iteration <= OGG_STATUS_MAX_ITERATION_CHECK): + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + status = self.get_replicat_status() + iteration += 1 + if iteration == OGG_STATUS_MAX_ITERATION_CHECK: + print(Red_On_Black("FAILED")) + exit (ERROR) + return + + + def replicat_has_lag(self): + # Generate LAG REPLICAT obey file + # compose file content + contents_of_file = [] + # contents_of_file.append("DBLOGIN USERID " + self.ggsci_db_source_login) + contents_of_file.append("lag " + self.replicat ) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write content to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "lag_replicat.obey" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + # Upload and execute the LAG REPLICAT obey file + self.upload_temp_file_to_ogg_host("lag_replicat.obey") + remote_obey_filename = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "lag_replicat.obey" + shellcommand="ssh oracle@" + self.config_params["ogg_host"] + " \'" + self.config_params["ogg12_env"] +"; cat "+ remote_obey_filename+" | " + self.config_params["ogg12_path"] + "/ggsci" +"\'" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + has_lag = True + cmd_output=cmd.stdout.decode('utf-8') + for line in cmd_output.splitlines(): + if "no more records to process" in line: + has_lag = False + break + return has_lag + + def export_delta_tables(self, scn): + # Create OGGSYNC directory on source database and pick one of the active hosts for the database + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + self.database_source, mode=cx_Oracle.SYSDBA) + cursor = db.cursor() + sql = "create or replace directory OGGSYNC as '" +
self.config_params["datapump_directory_path"]+ "'" + cursor.execute(sql) + + sql = "select HOST_NAME from v$instance" + cursor.execute(sql) + for row in cursor: + db_host_name=row[0] + db_host_name = db_host_name.split(".")[0] + + cursor.close() + except cx_Oracle.DatabaseError as err: + print (err) + exit (ERROR) + # Generate EXPDP parfile + # compose file content + contents_of_file = [] + contents_of_file.append("dumpfile = OGGSYNC:" + self.extract + "_"+ self.replicat + "_delta.dmp") + contents_of_file.append("logfile = OGGSYNC:export_" + self.extract + "_"+ self.replicat + "_delta.log") + contents_of_file.append("reuse_dumpfiles=Y") + contents_of_file.append("exclude=GRANT,CONSTRAINT,INDEX,TRIGGER") + + if scn != None: + contents_of_file.append("flashback_scn = " + scn) + for table_name in self.array_new_tables_on_source_db: + contents_of_file.append("tables = " + self.current_table_owner_on_source_db + "." + table_name) + + if self.database_source_version.startswith("12") and self.database_target_version.startswith("11"): + contents_of_file.append("version=11.2") + + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write contents to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "export_delta.par" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + + expdp_parfile = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "export_delta.par" + # Generate EXPDP shell + # compose file contents + + contents_of_file = [] + contents_of_file.append(". 
oraenv < /dev/null") + contents_of_file.append(self.database_source) + contents_of_file.append("EOF!") + contents_of_file.append("export ORACLE_SID=" + self.database_source_instances[0]) + contents_of_file.append("expdp userid=\"'/ as sysdba'\" parfile=" + expdp_parfile) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write contents to file + file = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "export_delta.sh" + f = open(file, 'w') + f.write(contents_of_file) + f.close() + + expdp_shell = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "export_delta.sh" + # Perform chmod +x for the EXPDP shell + st = os.stat(file) + os.chmod(file, st.st_mode | stat.S_IEXEC) + + # Upload and execute the EXPDP shell and parfile + self.upload_temp_file_to_ogg_host("export_delta.par") + self.upload_temp_file_to_ogg_host("export_delta.sh") + # EXPDP command will be launched from one of the source database hosts + shellcommand="ssh oracle@" + self.database_source_instances[1] + " " + expdp_shell + process = subprocess.Popen(shlex.split(shellcommand), stdout=subprocess.PIPE) + while True: + output = process.stdout.readline() + output = output.decode('utf-8') + if output == '' and process.poll() is not None: + break + if output: + print(Cyan_On_Black(output.strip())) + rc = process.poll() + + # Check for errors + download_file(self.config_params["datapump_directory_path"] + "/export_" + self.extract + "_" + self.replicat + "_delta.log", self.config_params["ogg_host"], OGG_SERVICE_DIR + "/temp/.") + logfile = OGG_SERVICE_DIR + "/temp/export_" + self.extract + "_" + self.replicat + "_delta.log" + contents_of_logfile = open(logfile, "r", encoding = "utf-8", errors = "replace").readlines() + blocking_ora_error = 0 + for line in contents_of_logfile: + if line.startswith("ORA-"): + # Line starts with an ORA-xxxx + ora_error = line.split(":")[0] + if ora_error not in
EXPDP_IGNORE_ERROR_MESSAGES: + # The ORA-xxxx message is not in the whitelist + blocking_ora_error += 1 + if blocking_ora_error > 0: + print(Red_On_Black("FAILED")) + print(White_On_Black("Please check the " + str(blocking_ora_error) + " error message(s) (ORA-xxxxx) in " + logfile)) + exit (ERROR) + return + + def import_delta_tables(self): + # Create OGGSYNC directory on target database and pick one of the active hosts for the database + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + self.database_target, mode=cx_Oracle.SYSDBA) + cursor = db.cursor() + sql = "create or replace directory OGGSYNC as '" + self.config_params["datapump_directory_path"]+ "'" + cursor.execute(sql) + + sql = "select HOST_NAME from v$instance" + cursor.execute(sql) + for row in cursor: + db_host_name=row[0] + db_host_name = db_host_name.split(".")[0] + + cursor.close() + except cx_Oracle.DatabaseError as err: + print (err) + exit (ERROR) + # Generate IMPDP parfile + # compose file content + contents_of_file = [] + contents_of_file.append("dumpfile = OGGSYNC:" + self.extract + "_"+ self.replicat + "_delta.dmp") + contents_of_file.append("logfile = OGGSYNC:import_" + self.extract + "_"+ self.replicat + "_delta.log") + + for table_name in self.array_new_tables_on_source_db: + contents_of_file.append("tables = " + self.current_table_owner_on_source_db + "." 
+ table_name) + + contents_of_file.append("remap_schema = " + self.current_table_owner_on_source_db + ":" + self.current_table_owner_on_target_db) + + if self.remap_tablespace: + contents_of_file.append("remap_tablespace=" + self.remap_tablespace) + contents_of_file.append(NEWLINE) + + contents_of_file = NEWLINE.join(contents_of_file) + # write contents to file + file_name = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "import_delta.par" + f = open(file_name, 'w') + f.write(contents_of_file) + f.close() + + impdp_parfile = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "import_delta.par" + # Generate IMPDP shell + # compose file content + + contents_of_file = [] + contents_of_file.append(". oraenv < /dev/null") + contents_of_file.append(self.database_target) + contents_of_file.append("EOF!") + contents_of_file.append("export ORACLE_SID=" + self.database_target_instances[0]) + contents_of_file.append("impdp userid=\"'/ as sysdba'\" parfile=" + impdp_parfile) + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + # write contents to file + file = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + "import_delta.sh" + f = open(file, 'w') + f.write(contents_of_file) + f.close() + + impdp_shell = self.config_params["remote_directory"] + "/temp/" + self.extract + "_" + self.replicat + "/" + "import_delta.sh" + # Perform chmod +x for the IMPDP shell + st = os.stat(file) + os.chmod(file, st.st_mode | stat.S_IEXEC) + + # Upload and execute the IMPDP shell and parfile + self.upload_temp_file_to_ogg_host("import_delta.par") + self.upload_temp_file_to_ogg_host("import_delta.sh") + # IMPDP command will be launched from one of the target database hosts + shellcommand="ssh oracle@" + self.database_target_instances[1] + " " + impdp_shell + process = subprocess.Popen(shlex.split(shellcommand), stdout=subprocess.PIPE) + while True: + output =
process.stdout.readline() + output = output.decode('utf-8') + if output == '' and process.poll() is not None: + break + if output: + print(Cyan_On_Black(output.strip())) + rc = process.poll() + + # Check for errors + download_file(self.config_params["datapump_directory_path"] + "/import_" + self.extract + "_" + self.replicat + "_delta.log", self.config_params["ogg_host"], OGG_SERVICE_DIR + "/temp/.") + logfile = OGG_SERVICE_DIR + "/temp/import_" + self.extract + "_" + self.replicat + "_delta.log" + contents_of_logfile = open(logfile, "r", encoding = "utf-8", errors = "replace").readlines() + blocking_ora_error = 0 + for line in contents_of_logfile: + if line.startswith("ORA-"): + # Line starts with an ORA-xxxx + ora_error = line.split(":")[0] + if ora_error not in EXPDP_IGNORE_ERROR_MESSAGES: + # The ORA-xxxx message is not in the whitelist + blocking_ora_error += 1 + if blocking_ora_error > 0: + print(Red_On_Black("FAILED")) + print(White_On_Black("Please check the " + str(blocking_ora_error) + " error message(s) (ORA-xxxxx) in " + logfile)) + exit (ERROR) + return + + + def generate_ddl_source_delta_tables_pk_uk(self): + shutil.copy(OGG_SERVICE_DIR + "/sql/" + self.config_params["sql_script_extract_ddl_pk_uk"], OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/") + # Generate extract DDL script + contents_of_file = [] + contents_of_file.append("connect " + self.config_params["sysdba_user"] + "/" + self.config_params["sysdba_password"] + "@" + self.config_params["scan"] + "/" + self.database_source + " as sysdba") + contents_of_file.append("spool " + self.database_target + "_create_pk_uk.sql") + for line in self.array_new_tables_on_source_db: + contents_of_file.append("@" + self.config_params["sql_script_extract_ddl_pk_uk"] + " " + self.current_table_owner_on_source_db + " " + line) + + contents_of_file.append("spool off") + contents_of_file.append("exit") + contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + + # 
write contents to file + file = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.database_source + "_extract_ddl_indexes.sql" + f = open(file, 'w') + f.write(contents_of_file) + f.close() + + # Execute extract DDL script with SQL*Plus + shellcommand = "source ~/.bash_profile; cd "+ OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/"+ "; " + self.config_params["local_sqlplus"] + " /nolog @" + self.database_source + "_extract_ddl_indexes.sql" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + + # Add connection information and spool for the previous generated file + file = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.database_target + "_create_pk_uk.sql" + contents_of_file = open(file, "r", encoding = "utf-8", errors = "replace").readlines() + contents_of_file.insert(0, "spool " + self.database_target + "_create_pk_uk.log") + contents_of_file.insert(1, NEWLINE + "connect " + self.config_params["sysdba_user"] + "/" + self.config_params["sysdba_password"] + "@" + self.config_params["scan"] + "/" + self.database_target + " as sysdba") + contents_of_file.append("spool off" + NEWLINE) + contents_of_file.append("exit" + NEWLINE) + # Change OWNER as in target database + contents_of_file_bis = [] + for line in contents_of_file: + #line = line.replace('\"', "") + line = line.replace(self.current_table_owner_on_source_db + '"' + ".", self.current_table_owner_on_target_db + '"' + ".") + contents_of_file_bis.append(line) + contents_of_file_bis = "".join(contents_of_file_bis) + f = open(file, 'w') + f.write(contents_of_file_bis) + f.close() + return + + def create_delta_tables_pk_uk_on_target_database(self): + # Execute extract DDL script with SQL*Plus + shellcommand = "source ~/.bash_profile; cd "+ OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/"+ "; " + self.config_params["local_sqlplus"] + " /nolog @" + self.database_target + 
"_create_pk_uk.sql" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + # Check for anormal ORA- in logfile + logfile_create_index = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.database_target + "_create_pk_uk.log" + contents_of_logfile_create_index = open(logfile_create_index, "r", encoding = "utf-8", errors = "replace").readlines() + blocking_ora_error = 0 + for line in contents_of_logfile_create_index: + if line.startswith("ORA-"): + # Line starts with an ORA-xxxx + ora_error = line.split(":")[0] + if ora_error not in TARGET_DATABASE_CREATE_INDEX_IGNORE_ERROR_MESSAGES: + # The ORA-xxxx message is not in the whitelist + blocking_ora_error += 1 + if blocking_ora_error > 0: + print(Red_On_Black("FAILED")) + print(White_On_Black("Please check the " + str(blocking_ora_error) + " error message(s) (ORA-xxxxx) in " + logfile_create_index)) + exit (ERROR) + return + + + def create_temporary_sync_directory(self): + # create temporary sync directory + try: + os.mkdir(OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat) + except (FileExistsError) as e: + pass + return + + def generate_ddl_target_delta_tables_old_index(self): + shutil.copy(OGG_SERVICE_DIR + "/sql/" + self.config_params["sql_script_extract_ddl_index"], OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/") + # Generate extract DDL script + contents_of_file = [] + contents_of_file.append("connect " + self.config_params["sysdba_user"] + "/" + self.config_params["sysdba_password"] + "@" + self.config_params["scan"] + "/" + self.database_target + " as sysdba") + contents_of_file.append("spool " + self.database_target + "_create_old_indexes.sql") + for line in self.array_new_tables_on_target_db: + contents_of_file.append("@" + self.config_params["sql_script_extract_ddl_index"] + " " + self.current_table_owner_on_target_db + " " + line) + + contents_of_file.append("spool off") + contents_of_file.append("exit") 
+ contents_of_file.append(NEWLINE) + contents_of_file = NEWLINE.join(contents_of_file) + + # write contents to file + file = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.database_target + "_extract_ddl_old_indexes.sql" + f = open(file, 'w') + f.write(contents_of_file) + f.close() + + # Execute extract DDL script with SQL*Plus + shellcommand = "source ~/.bash_profile; cd "+ OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/"+ "; " + self.config_params["local_sqlplus"] + " /nolog @" + self.database_target + "_extract_ddl_old_indexes.sql" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + + # Add connection information and spool for the previous generated file + file = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.database_target + "_create_old_indexes.sql" + contents_of_file = open(file, "r", encoding = "utf-8", errors = "replace").readlines() + contents_of_file.insert(0, "spool " + self.database_target + "_create_old_indexes.log") + contents_of_file.insert(1, NEWLINE + "connect " + self.config_params["sysdba_user"] + "/" + self.config_params["sysdba_password"] + "@" + self.config_params["scan"] + "/" + self.database_target + " as sysdba") + contents_of_file.append("spool off" + NEWLINE) + contents_of_file.append("exit" + NEWLINE) + contents_of_file = "".join(contents_of_file) + f = open(file, 'w') + f.write(contents_of_file) + f.close() + # Archive SQL file + archive_folder = OGG_SERVICE_DIR + "/archive" + shutil.copy(file, archive_folder + "/" + self.extract + "_"+ self.replicat + "_" + self.database_target + "_create_old_indexes_" + self.runid + ".sql") + return + + def create_delta_tables_old_indexes_on_target_database(self): + # Execute extract DDL script with SQL*Plus + shellcommand = "source ~/.bash_profile; cd "+ OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/"+ "; " + self.config_params["local_sqlplus"] + " /nolog 
@" + self.database_target + "_create_old_indexes.sql" + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + # Check for anormal ORA- in logfile + logfile_create_index = OGG_SERVICE_DIR + "/temp/" + self.extract + "_"+ self.replicat + "/" + self.database_target + "_create_old_indexes.log" + contents_of_logfile_create_index = open(logfile_create_index, "r", encoding = "utf-8", errors = "replace").readlines() + blocking_ora_error = 0 + for line in contents_of_logfile_create_index: + if line.startswith("ORA-"): + # Line starts with an ORA-xxxx + ora_error = line.split(":")[0] + if ora_error not in TARGET_DATABASE_CREATE_INDEX_IGNORE_ERROR_MESSAGES: + # The ORA-xxxx message is not in the whitelist + blocking_ora_error += 1 + if blocking_ora_error > 0: + print(Red_On_Black("FAILED")) + print(White_On_Black("Please check the " + str(blocking_ora_error) + " error message(s) (ORA-xxxxx) in " + logfile_create_index)) + return + + + def add_new_tables_to_extract_prm(self): + contents_of_file = [] + for table_name in self.array_final_tables_on_extract: + addnative_string = "" + for table_with_add_native_option in self.extract_addnative_option_array: + if table_name == table_with_add_native_option: + addnative_string = ",_ADDNATIVE" + contents_of_file.append("TABLE " + self.current_table_owner_on_source_db + "." 
+ table_name + addnative_string + ";") + + contents_of_file = NEWLINE.join(contents_of_file) + # Save content as prm extract tables + file = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.extract + "_tables.prm" + f = open(file, 'w') + f.write(contents_of_file) + f.close() + return + + def add_new_tables_to_replicat_prm_with_csn_filter(self, filter_transaction_csn): + contents_of_file = [] + # Classic MAP section + for table_name in self.array_final_tables_on_replicat: + if (table_name not in self.array_new_tables_on_target_db) or (filter_transaction_csn == None): + contents_of_file.append("MAP " + self.current_table_owner_on_source_db + "." + table_name + ", TARGET " + self.current_table_owner_on_target_db + "." + table_name + ";") + else: + self.replicat_intermediate_filter_string = "FILTER ( @GETENV ('TRANSACTION', 'CSN') > " + filter_transaction_csn + ')' + contents_of_file.append("MAP " + self.current_table_owner_on_source_db + "." + table_name + ", TARGET " + self.current_table_owner_on_target_db + "." 
+ table_name + ", " + self.replicat_intermediate_filter_string + ";") + + # J$ section + for key in self.current_map_dictionary.keys(): + contents_of_file.append(self.current_map_dictionary[key]) + + contents_of_file = NEWLINE.join(contents_of_file) + # Save content as prm replicat tables + file = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_tables.prm" + f = open(file, 'w') + f.write(contents_of_file) + f.close() + return + + def switch_logfile_and_chekcpoint(self): + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + self.database_source, mode=cx_Oracle.SYSDBA) + cursor = db.cursor() + sql = "alter system switch logfile" + cursor.execute(sql) + sql = "alter system checkpoint" + cursor.execute(sql) + cursor.close() + except cx_Oracle.DatabaseError as err: + print (err) + exit (ERROR) + return + + def define_scn_for_export(self): + try: + db = cx_Oracle.connect(self.config_params["sysdba_user"], self.config_params["sysdba_password"], self.config_params["scan"] + "/" + self.database_source, mode=cx_Oracle.SYSDBA) + cursor = db.cursor() + sql = """select min(scn) scn from( + select min(current_scn) scn from gv$database + union + select min(t.start_scn) scn from gv$transaction t, gv$session s + where s.saddr = t.ses_addr + and s.inst_id = t.inst_id + )""" + cursor.execute(sql) + for row in cursor: + self.scn_of_export=str(row[0]) + cursor.close() + except cx_Oracle.DatabaseError as err: + print (err) + exit (ERROR) + return + + + def remove_csn_filter_from_replicat_prm(self): + contents_of_file = [] + for table_name in self.array_final_tables_on_replicat: + contents_of_file.append("MAP " + self.current_table_owner_on_source_db + "." + table_name + ", TARGET " + self.current_table_owner_on_target_db + "." 
+ table_name + ";") + + for key in self.final_map_dictionary.keys(): + contents_of_file.append(self.final_map_dictionary[key]) + + contents_of_file = NEWLINE.join(contents_of_file) + + file = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_tables.prm" + f = open(file, 'w') + f.write(contents_of_file) + f.close() + return + + def backup_and_empty_delta_prm(self): + for str1 in [self.extract, self.replicat]: + file = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + str1 + "_delta.prm" + file_old = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/." + str1 + "_delta.old" + shutil.copyfile(file, file_old) + # Empry file + f = open(file, 'w') + f.write("") + f.close() + return + + + def archive_prm(self): + prm_folder = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + archive_prm_folder = OGG_SERVICE_DIR + "/archive" + zipname = self.extract + "_"+ self.replicat + "_" + self.runid +'.zip' + fantasy_zip = zipfile.ZipFile(archive_prm_folder + '/'+ zipname, 'w') + for folder, subfolders, files in os.walk(prm_folder): + for file in files: + if file.endswith('.prm'): + fantasy_zip.write(os.path.join(folder, file), os.path.relpath(os.path.join(folder,file), prm_folder), compress_type = zipfile.ZIP_DEFLATED) + fantasy_zip.close() + return zipname + + def parse_prm_tables(self): + # EXTRACT current + ################# + extract_current_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.extract + "_tables.prm" + try: + contents_of_extract_current_prm = open(extract_current_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.extract + "_tables.prm not found.")) + exit (ERROR) + + tab_owner_array=[] + tab_name_array=[] + # Identify tables + for line in contents_of_extract_current_prm: + # supress spaces in begin/end line + auxline = 
line.rstrip() + auxline = auxline.lstrip() + # skip comments + if not auxline.startswith("--"): + # eliminate double spaces and spaces + auxline = re.sub(' +', ' ',auxline) + auxline = auxline.replace(" ","") + auxline = auxline.replace(";", "") + auxline = auxline.upper() + if auxline.upper().startswith("TABLE"): + auxline = auxline.replace("TABLE", "", 1) + auxline2 = auxline.split(",")[0] + (tab_owner,tab_name) = auxline2.split(".") + tab_owner_array.append(tab_owner) + tab_name_array.append(tab_name) + if ",_ADDNATIVE" in auxline: + self.extract_addnative_option_array.append(tab_name) + + # Check if we have only one owner + self.current_table_owner_on_source_db = tab_owner_array[0] + self.array_current_tables_on_source_db = tab_name_array + for tab_owner in tab_owner_array: + if tab_owner != self.current_table_owner_on_source_db: + print ("More than one table owners in EXTRACT current parameter file.") + exit (ERROR) + + # REPLICAT current + ################## + replicat_current_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_tables.prm" + try: + contents_of_replicat_current_prm = open(replicat_current_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + except FileNotFoundError as err: + print (Red_On_Black("ERROR: file " + self.replicat + "_tables.prm not found.")) + exit (ERROR) + + self.current_map_dictionary = {} + table_array = [] + for line in contents_of_replicat_current_prm: + # supress spaces in begin/end line + auxline = line.rstrip() + auxline = auxline.lstrip() + # Remove double spaces + auxline = re.sub(' +', ' ',auxline) + # skip comments + if not auxline.startswith("--"): + if auxline.upper().startswith("MAP") and "TARGET" in auxline.upper() and "J$" not in auxline.upper(): + # Classic MAP section + # Force uppercase and elilminate ";" + auxline = auxline.upper() + auxline = auxline.replace(";", "") + (str1, str2) = auxline.split("TARGET ") + str2 = 
str2.lstrip().rstrip() + table_array.append(str2) + if auxline.upper().startswith("MAP") and "TARGET" in auxline.upper() and "J$" in auxline.upper(): + # J$ section + auxline = auxline.upper() + str1 = auxline.split(",")[1] + str2 = str1.split(".")[1] + self.current_map_dictionary[str2] = auxline + + + array_current_owner_tables_on_target_db = table_array + + # Check if we have only one owner + self.array_current_tables_on_target_db = [] + self.current_table_owner_on_target_db = array_current_owner_tables_on_target_db[0].split(".")[0] + for line in array_current_owner_tables_on_target_db: + (owner, table_name) = line.split(".") + # If a FILTER condition with CSN exists, we remove it + if ("CSN" in table_name) and ("TRANSACTION" in table_name): + pass + else: + self.array_current_tables_on_target_db.append(table_name) + if owner != self.current_table_owner_on_target_db: + print ("More than one table owners in REPLICAT current parameter file.") + exit (ERROR) + + return + + def clean_local_prm_files(self): + replicat_tables_prm_filename = OGG_SERVICE_DIR + "/sync.d/" + self.extract + "_"+ self.replicat + "/" + self.replicat + "_tables.prm" + contents_of_replicat_tables_prm = open(replicat_tables_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + contents_of_replicat_tables_prm = removeDuplicates(contents_of_replicat_tables_prm) + + # Remove duplicate lines + f = open(replicat_tables_prm_filename, 'w') + contents_of_replicat_tables_prm="".join(contents_of_replicat_tables_prm) + f.write(contents_of_replicat_tables_prm) + f.close() + # Remove empty lines + remove_empty_lines(replicat_tables_prm_filename) + return + + + def generate_finals_tables_arrays(self): + self.array_duplicate_tables_on_extract = intersection(self.array_current_tables_on_source_db, self.array_new_tables_on_source_db) + self.array_final_tables_on_extract = union_keep_order(self.array_current_tables_on_source_db, self.array_new_tables_on_source_db) + 
self.array_duplicate_tables_on_replicat = intersection(self.array_current_tables_on_target_db, self.array_new_tables_on_target_db) + self.array_final_tables_on_replicat = union_keep_order(self.array_current_tables_on_target_db, self.array_new_tables_on_target_db) + self.final_map_dictionary = merge_dict(self.current_map_dictionary, self.new_map_dictionary) + self.tables_in_delta_exists_in_current = False + if (len(self.array_duplicate_tables_on_extract) > 0) or (len(self.array_duplicate_tables_on_replicat) > 0): + self.tables_in_delta_exists_in_current = False + return diff --git a/tiddlywiki/ogg_sync.py.txt b/tiddlywiki/ogg_sync.py.txt new file mode 100755 index 0000000..452ddca --- /dev/null +++ b/tiddlywiki/ogg_sync.py.txt @@ -0,0 +1,300 @@ +#!/u01/app/python/current_version/bin/python3 + +from libs.ogg_libs import * + +def parse_command_line_args(): + parser = argparse.ArgumentParser(description = "Oracle Golden Gate Syncronisation Tool") + parser.add_argument("-e", "--extract", help = "EXTRACT name (ex: oedri1p)", required = True) + parser.add_argument("-r", "--replicat", help = "REPLICAT name (ex: ordro1p)", required = True) + parser.add_argument("-s", "--sync", choices=['full','incremental','stop','start'],help = "ACTION (full or incremental sync, stop or start)", required = True) + parser.add_argument('-f','--force', action='store_true', default=False ,help='May the force be with you', required=False) + parser.add_argument('-d','--debug', action='store_true', default=False ,help='Used ONLY for internal debug purpose', required=False) + + args = parser.parse_args() + extract = args.extract.lower() + replicat = args.replicat.lower() + debug = args.debug + + try: + sync = args.sync.lower() + except AttributeError: sync = "none" + + return (extract, replicat, sync, debug) + + +# To reset the default terminal color after each usage of colorama module +init_pretty_color_printing() +(extract, replicat, sync, debug) = parse_command_line_args() +get_hostname() 
+script_path = os.path.dirname(os.path.abspath(__file__)) +script_name = os.path.basename(__file__) +logger = start_logging(script_path+"/log/ogg_sync.log") +# Log BEGIN of the execution +logger.info("BEGIN ogg_sync.py -e " + extract + " -r " + replicat + " -s " + sync) +ogg_sync = OGG_Sync(extract, replicat) +ogg_sync.create_remote_dirs() +if debug: + # DEBUG ONLY + ogg_sync.create_temporary_sync_directory() + ogg_sync.parse_prm_headers() + ogg_sync.parse_prm_tables() + ogg_sync.build_extract_prm() + ogg_sync.build_replicat_prm() + ogg_sync.parse_prm_delta(); + print(ogg_sync.get_class_attributes_as_json()) + print(ogg_sync.get_config_params_as_json()) + exit() +if sync == "full": + # FULL sync + print(White_On_Black("Start OGG Synchronisation ") + Yellow_On_Black(extract + "=>" + replicat) + White_On_Black(" in mode ") + Yellow_On_Black(sync.upper())) + print(White_On_Black("Generate " + ogg_sync.extract + ".prm and " + ogg_sync.replicat + ".prm files and upload them on OGG host " + ogg_sync.config_params["ogg_host"] + "... "), end='') + ogg_sync.create_temporary_sync_directory() + ogg_sync.parse_prm_headers() + ogg_sync.parse_prm_tables() + ogg_sync.build_extract_prm() + ogg_sync.build_replicat_prm() + ogg_sync.upload_prm_files() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Remote run on " + ogg_sync.config_params["ogg_host"] + ": /dbfs_tools/TOOLS/admin/sh/ogg_refresh_zfs.sh -e " + ogg_sync.extract + " -r " + ogg_sync.replicat + " -v 12")) + ogg_sync.sync_full_old() + + print(White_On_Black("Wait a moment..."), end='') + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK*2) + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Check if OGG Synchronisation is running... 
"), end='') + ogg_sync.update_sync_status() + if (ogg_sync.extract_status != "RUNNING" or ogg_sync.replicat_status != "RUNNING"): + print(Red_On_Black("FAILED")) + print(White_On_Black(TAB + "- extract " + ogg_sync.extract + " is " + ogg_sync.extract_status)) + print(White_On_Black(TAB + "- replicat " + ogg_sync.replicat + " is " + ogg_sync.replicat_status)) + exit (ERROR) + print(Green_On_Black("SUCCES")) + # Log for debug purpose + logger.info(ogg_sync.get_class_attributes_as_json()) + logger.info(ogg_sync.get_config_params_as_json()) +elif sync == "incremental": + # INCREMENTAL sync + print(White_On_Black("Start OGG Synchronisation ") + Yellow_On_Black(extract + "=>" + replicat) + White_On_Black(" in mode ") + Yellow_On_Black(sync.upper())) + print(White_On_Black("Parse extract/replicat prm files... "), end='') + ogg_sync.create_temporary_sync_directory() + ogg_sync.parse_prm_headers() + ogg_sync.parse_prm_tables() + ogg_sync.parse_prm_delta() + ogg_sync.generate_finals_tables_arrays() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Check if OGG Synchronisation is running... "), end='') + ogg_sync.update_sync_status() + if (ogg_sync.extract_status != "RUNNING" or ogg_sync.replicat_status != "RUNNING"): + print(Red_On_Black("FAILED")) + print(White_On_Black(TAB + "- extract " + ogg_sync.extract + " is " + ogg_sync.extract_status)) + print(White_On_Black(TAB + "- replicat " + ogg_sync.replicat + " is " + ogg_sync.replicat_status)) + print(White_On_Black("Both extract/replicat should be RUNNING before adding new tables.")) + exit (ERROR) + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Catch the lag of replication... 
"), end='') + iteration = 0 + while (ogg_sync.replicat_has_lag() and iteration <= OGG_LAG_MAX_ITERATION_CHECK): + print(White_On_Black("."), end='') + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + iteration += 1 + + if iteration == OGG_STATUS_MAX_ITERATION_CHECK: + print(Red_On_Black("FAILED")) + print(White_On_Black("Too much lag to catch, please retry later")) + exit (ERROR) + + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Found new table(s) to add from " + ogg_sync.database_source +" database: ") + Yellow_On_Black(",".join(ogg_sync.array_new_tables_on_target_db))) + + print(White_On_Black("Check for Primary Key or Unique Index on source table(s)... "), end='') + ogg_sync.check_pk_uk_on_source_db_delta_tables() + print(Green_On_Black("SUCCES")) + print(White_On_Black("Extract Primary Key and Unique Index DDL from source table(s)... "), end='') + ogg_sync.generate_ddl_source_delta_tables_pk_uk() + print(Green_On_Black("SUCCES")) + + # Stop EXTRACT + print(White_On_Black("Stop extract " + ogg_sync.extract + "... "), end='') + ogg_sync.stop_extract() + print(Green_On_Black("SUCCES")) + # Wait REPLICAT for catching the LAG + + print(White_On_Black("Wait replicat to catch the lag... "), end='') + iteration = 0 + while (ogg_sync.replicat_has_lag() and iteration <= OGG_LAG_MAX_ITERATION_CHECK): + print(White_On_Black("."), end='') + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + iteration += 1 + + if iteration == OGG_STATUS_MAX_ITERATION_CHECK: + print(Red_On_Black("FAILED")) + exit (ERROR) + + # Stop REPLICAT + print(Green_On_Black("SUCCES")) + print(White_On_Black("Stop replicat " + ogg_sync.replicat + "... "), end='') + ogg_sync.stop_replicat() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Add Trandata on source table(s)... "), end='') + ogg_sync.add_trandata_on_delta_tables() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Extract indexes from target delta tables if exist... 
"), end='') + ogg_sync.generate_ddl_target_delta_tables_old_index() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("On target database " + ogg_sync.database_target + ", drop tables " + ",".join(ogg_sync.array_new_tables_on_target_db) + " (if exist)... "), end='') + ogg_sync.drop_new_tables_exists_on_target_db() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Switch logile and ckeckpoint on source " + ogg_sync.database_source + " database... "), end='') + ogg_sync.switch_logfile_and_chekcpoint() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Define SCN for EXPDP on source " + ogg_sync.database_source + " database... "), end='') + ogg_sync.define_scn_for_export() + print(Green_On_Black("SUCCES")) + print(White_On_Black("SCN=" + ogg_sync.scn_of_export + " will be used for exporting new table(s)")) + + # Backup current prm files + print(White_On_Black("Backup current prm files... "), end='') + zipname = ogg_sync.archive_prm() + print(Green_On_Black("SUCCES")) + print(White_On_Black("Current prm files backed up in " + zipname)) + + # Add new tables in the extract prm file + print(White_On_Black("Add new tables in "+ ogg_sync.extract + "_tables.prm file... "), end='') + ogg_sync.add_new_tables_to_extract_prm() + print(Green_On_Black("SUCCES")) + + # Add new tables in the replicat prm file + print(White_On_Black("Add new tables in "+ ogg_sync.replicat + "_tables.prm file... "), end='') + ogg_sync.add_new_tables_to_replicat_prm_with_csn_filter(ogg_sync.scn_of_export) + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Generate " + ogg_sync.extract + ".prm and " + ogg_sync.replicat + ".prm files and upload them on OGG host " + ogg_sync.config_params["ogg_host"] + "... "), end='') + ogg_sync.build_extract_prm() + ogg_sync.build_replicat_prm() + ogg_sync.upload_prm_files() + print(Green_On_Black("SUCCES")) + + # Start EXTRACT + print(White_On_Black("Start extract " + ogg_sync.extract + "... 
"), end='') + ogg_sync.start_extract() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Start export of new table(s) as of SCN=" + ogg_sync.scn_of_export + " on remote host " + ogg_sync.database_source_instances[1]) +":") + ogg_sync.export_delta_tables(ogg_sync.scn_of_export) + + print(White_On_Black("Import export of new table(s) as of SCN=" + ogg_sync.scn_of_export + " on remote host " + ogg_sync.database_source_instances[1]) +":") + ogg_sync.import_delta_tables() + + # Create indexes on target tables + print(White_On_Black("Create indexes on target tables... "), end='') + ogg_sync.create_delta_tables_pk_uk_on_target_database() + ogg_sync.create_delta_tables_old_indexes_on_target_database() + print(Green_On_Black("SUCCES")) + + # Start REPLICAT with + print(White_On_Black("Start replicat " + ogg_sync.replicat + " using: " + 'FILTER ( @GETENV ("TRANSACTION", "CSN") > ' + ogg_sync.scn_of_export) + "... ", end='') + ogg_sync.start_replicat() + print(Green_On_Black("SUCCES")) + + # Wait REPLICAT for catching the LAG + print(White_On_Black("Wait replicat to catch the lag... "), end='') + while ogg_sync.replicat_has_lag(): + print(White_On_Black("."), end='') + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + + # Stop REPLICAT + print(Green_On_Black("SUCCES")) + print(White_On_Black("Stop replicat " + ogg_sync.replicat + "... "), end='') + ogg_sync.stop_replicat() + print(Green_On_Black("SUCCES")) + + # Remove replcat filters + print(White_On_Black("Remove filter from " + ogg_sync.replicat + ".prm files and upload them on OGG host " + ogg_sync.config_params["ogg_host"] + "... "), end='') + ogg_sync.remove_csn_filter_from_replicat_prm() + ogg_sync.build_replicat_prm() + ogg_sync.upload_replicat_prm() + print(Green_On_Black("SUCCES")) + + # Start REPLICAT + print(White_On_Black("Start replicat " + ogg_sync.replicat + "... 
"), end='') + ogg_sync.start_replicat() + print(Green_On_Black("SUCCES")) + + # Wait REPLICAT for catching the LAG + print(White_On_Black("Wait replicat to catch the lag... "), end='') + while ogg_sync.replicat_has_lag(): + print(White_On_Black("."), end='') + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK) + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Backup *delta.prm files in .*delta.old..."), end='') + ogg_sync.backup_and_empty_delta_prm() + print(Green_On_Black("SUCCES")) + + # Log for debug purpose + logger.info(ogg_sync.get_class_attributes_as_json()) + logger.info(ogg_sync.get_config_params_as_json()) +elif sync == "start": + print(White_On_Black("Parse extract/replicat prm headers... "), end='') + ogg_sync.parse_prm_headers() + print(Green_On_Black("SUCCES")) + + # Start EXRACT and REPLICAT + print(White_On_Black("Start extract "+ ogg_sync.extract + " and replicat " + ogg_sync.replicat + "... "), end='') + ogg_sync.start_extract() + ogg_sync.start_replicat() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Wait a moment..."), end='') + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK*2) + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Check if OGG Synchronisation is running... "), end='') + ogg_sync.update_sync_status() + if (ogg_sync.extract_status != "RUNNING" or ogg_sync.replicat_status != "RUNNING"): + print(Red_On_Black("FAILED")) + print(White_On_Black(TAB + "- extract " + ogg_sync.extract + " is " + ogg_sync.extract_status)) + print(White_On_Black(TAB + "- replicat " + ogg_sync.replicat + " is " + ogg_sync.replicat_status)) + exit (ERROR) + print(Green_On_Black("SUCCES")) + # Log for debug purpose + logger.info(ogg_sync.get_class_attributes_as_json()) + logger.info(ogg_sync.get_config_params_as_json()) +elif sync == "stop": + print(White_On_Black("Parse extract/replicat prm headers... 
"), end='') + ogg_sync.parse_prm_headers() + print(Green_On_Black("SUCCES")) + + # Stop EXRACT and REPLICAT + print(White_On_Black("Stop extract "+ ogg_sync.extract + " and replicat " + ogg_sync.replicat + "... "), end='') + ogg_sync.stop_replicat() + ogg_sync.stop_extract() + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Wait a moment..."), end='') + time.sleep(OGG_STATUS_SLEEP_TIME_WHEN_CHECK*2) + print(Green_On_Black("SUCCES")) + + print(White_On_Black("Check if OGG Synchronisation is stopped... "), end='') + ogg_sync.update_sync_status() + if (ogg_sync.extract_status == "RUNNING" or ogg_sync.replicat_status == "RUNNING"): + print(Red_On_Black("FAILED")) + print(White_On_Black(TAB + "- extract " + ogg_sync.extract + " is " + ogg_sync.extract_status)) + print(White_On_Black(TAB + "- replicat " + ogg_sync.replicat + " is " + ogg_sync.replicat_status)) + exit (ERROR) + print(Green_On_Black("SUCCES")) + # Log for debug purpose + logger.info(ogg_sync.get_class_attributes_as_json()) + logger.info(ogg_sync.get_config_params_as_json()) +# Log END of the execution +logger.info("END ogg_sync.py -e " + extract + " -r " + replicat + " -s " + sync) diff --git a/tiddlywiki/passwords.sql.txt b/tiddlywiki/passwords.sql.txt new file mode 100755 index 0000000..2ae5077 --- /dev/null +++ b/tiddlywiki/passwords.sql.txt @@ -0,0 +1,30 @@ +set pages 0 lines 256 feedback off + +select + 'alter user ' ||name|| ' identified by values '''|| password||''';' +from + user$ +where + password is not null and + name not in + ('SYS', + 'SYSTEM', + 'OUTLN', + 'GLOBAL_AQ_USER_ROLE', + 'DIP', + 'DBSNMP', + 'WMSYS', + 'XDB', + 'ANONYMOUS', + 'ORACLE_OCM', + 'APPQOSSYS', + 'XS$NULL', + 'SYSBACKUP', + 'AUDSYS', + 'SYSDG', + 'SYSKM', + 'GSMADMIN_INTERNAL', + 'GSMUSER', + 'GSMCATUSER' + ) +/ diff --git a/tiddlywiki/postgresql with Docker.txt b/tiddlywiki/postgresql with Docker.txt new file mode 100755 index 0000000..3028230 --- /dev/null +++ b/tiddlywiki/postgresql with Docker.txt @@ -0,0 
+1,34 @@ +mkdir -p /app/persistent_docker/postgres/data +cd /app/persistent_docker/postgres + +cat docker-compose.yaml +# Use postgres/example user/password credentials +version: '3.1' + +services: + + db: + image: postgres + restart: always + environment: + POSTGRES_PASSWORD: secret + PGDATA: /var/lib/postgresql/data/pgdata + volumes: + - /app/persistent_docker/postgres/data:/var/lib/postgresql/data + ports: + - 5432:5432 + + adminer: + image: adminer + restart: always + ports: + - 8080:8080 + +docker-compose up -d +docker update --restart unless-stopped $(docker ps -q) + +# adminer URL: http://192.168.0.91:8080/ +# connect from remote machine +psql -h socorro -U postgres -d postgres + + diff --git a/tiddlywiki/rc.local as system service (system V).txt b/tiddlywiki/rc.local as system service (system V).txt new file mode 100755 index 0000000..bc87738 --- /dev/null +++ b/tiddlywiki/rc.local as system service (system V).txt @@ -0,0 +1,44 @@ +~~ create (if it doses not exists) rc.local file as +--------------------------------------------------------------------> +#!/bin/sh -e +# +# rc.local +# +# This script is executed at the end of each multiuser runlevel. +# Make sure that the script will "exit 0" on success or any other +# value on error. +# +# In order to enable or disable this script just change the execution +# bits. +# +# By default this script does nothing. 
+ +exit 0 +<-------------------------------------------------------------------- + +chmod +x /etc/rc.local + +~~ create the service +~~ create file /etc/systemd/system/rc-local.service as + +----------------------------------> +[Unit] +Description=/etc/rc.local +ConditionPathExists=/etc/rc.local +After=multi-user.target + +[Service] +Type=forking +ExecStart=/etc/rc.local start +TimeoutSec=0 +StandardOutput=tty +RemainAfterExit=yes + +[Install] +WantedBy=multi-user.target +<--------------------------------- +systemctl daemon-reload +systemctl status rc-local +systemctl start rc-local +systemctl stop rc-local +systemctl enable rc-local diff --git a/tiddlywiki/redo_log_switch_frequency_map.sql.txt b/tiddlywiki/redo_log_switch_frequency_map.sql.txt new file mode 100755 index 0000000..9323cff --- /dev/null +++ b/tiddlywiki/redo_log_switch_frequency_map.sql.txt @@ -0,0 +1,57 @@ +set pages 999 lines 400 +col h0 format 999 +col h1 format 999 +col h2 format 999 +col h3 format 999 +col h4 format 999 +col h5 format 999 +col h6 format 999 +col h7 format 999 +col h8 format 999 +col h9 format 999 +col h10 format 999 +col h11 format 999 +col h12 format 999 +col h13 format 999 +col h14 format 999 +col h15 format 999 +col h16 format 999 +col h17 format 999 +col h18 format 999 +col h19 format 999 +col h20 format 999 +col h21 format 999 +col h22 format 999 +col h23 format 999 +SELECT TRUNC (first_time) "Date", inst_id, TO_CHAR (first_time, 'Dy') "Day", + COUNT (1) "Total", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '00', 1, 0)) "h0", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '01', 1, 0)) "h1", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '02', 1, 0)) "h2", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '03', 1, 0)) "h3", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '04', 1, 0)) "h4", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '05', 1, 0)) "h5", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '06', 1, 0)) "h6", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '07', 1, 0)) "h7", + SUM 
(DECODE (TO_CHAR (first_time, 'hh24'), '08', 1, 0)) "h8", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '09', 1, 0)) "h9", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '10', 1, 0)) "h10", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '11', 1, 0)) "h11", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '12', 1, 0)) "h12", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '13', 1, 0)) "h13", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '14', 1, 0)) "h14", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '15', 1, 0)) "h15", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '16', 1, 0)) "h16", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '17', 1, 0)) "h17", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '18', 1, 0)) "h18", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '19', 1, 0)) "h19", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '20', 1, 0)) "h20", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '21', 1, 0)) "h21", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '22', 1, 0)) "h22", + SUM (DECODE (TO_CHAR (first_time, 'hh24'), '23', 1, 0)) "h23", + ROUND (COUNT (1) / 24, 2) "Avg" +FROM gv$log_history +WHERE thread# = inst_id +AND first_time > sysdate -7 +GROUP BY TRUNC (first_time), inst_id, TO_CHAR (first_time, 'Dy') +ORDER BY 1,2; diff --git a/tiddlywiki/sabnzbd with Docker.md b/tiddlywiki/sabnzbd with Docker.md new file mode 100755 index 0000000..4a78ae0 --- /dev/null +++ b/tiddlywiki/sabnzbd with Docker.md @@ -0,0 +1,135 @@ +Context +------- + +| | | +| ---------------------------- | --------------------------------------------- | +| public URL | `https://sabnzbd.databasepro.eu` | +| Apache reverse-proxy host | `umbara` | +| sabnzbd server | `ossus:7100` | + +Apache (reverse proxy) config +----------------------------- +``` + + ServerName sabnzbd.databasepro.eu + ServerAdmin admin@sabnzbd.databasepro.eu + + Redirect permanent / https://sabnzbd.databasepro.eu + + DocumentRoot /usr/local/apache2/wwwroot/sabnzbd + + Order allow,deny + AllowOverride All + Allow from all + Require all granted + + 
+ ErrorLog logs/sabnzbd-error.log + CustomLog logs/sabnzbd-access.log combined + + + + ServerName sabnzbd.databasepro.eu + + ServerAdmin admin@sabnzbd.databasepro.eu + DocumentRoot /usr/local/apache2/wwwroot/sabnzbd + + + Order allow,deny + AllowOverride All + Allow from all + Require all granted + + + SSLEngine On + SSLProxyEngine On + + # Disable SSLProxyCheck + SSLProxyCheckPeerCN Off + SSLProxyCheckPeerName Off + SSLProxyVerify none + + ErrorLog logs/sabnzbd-error.log + CustomLog logs/sabnzbd-access.log combined + + SSLCertificateFile "/etc/letsencrypt/live/sabnzbd.databasepro.eu/fullchain.pem" + SSLCertificateKeyFile "/etc/letsencrypt/live/sabnzbd.databasepro.eu/privkey.pem" + + ProxyPass / http://ossus:7100/ + ProxyPassReverse / http://ossus:7100/ + +``` + +sabnzbd server setup +------------------- + +Because in my case, some sabnzbd directories will be on a CIFS share, I will run sabnzbd container as `smbuser` + +Get Docker image +---------------- +``` +docker pull sabnzbd/sabnzbd +``` + +Prepare persistent directory +---------------------------- +``` +mkdir /app/appsdocker/sabnzbd +chown -R smbuser:smbuser /app/appsdocker/sabnzbd +``` + +> I had resolution name issue from docker container using a bind9 docker container on the same host. 
The workaround was to use an external nameserver (Google for example) by mapping container file `/etc/resolv.conf` to the local persistent file `/app/appsdocker/sabnzbd/resolv.conf`: + + nameserver 8.8.8.8 + +Run the container +----------------- +Note that the continer will be run with `smbuser` local user `uid/gid` and in addition to standard image directories `/datadir` and `/media` I included my CIFS share `/mnt/yavin4/download/sabnzbd` +``` +docker run -d --name sabnzbd \ + -e SABNZBD_UID=1000 \ + -e SABNZBD_GID=1000 \ + -v /app/appsdocker/sabnzbd/resolv.conf:/etc/resolv.conf \ + -v /app/appsdocker/sabnzbd:/datadir \ + -v /app/appsdocker/sabnzbd:/media \ + -v /mnt/yavin4/download/sabnzbd:/mnt/yavin4/download/sabnzbd \ + -p 7100:8080 sabnzbd/sabnzbd +``` + +>Acceding to `https://sabnzbd.databasepro.eu` at this moment will geberate the error: `Access denied - Hostname verification failed: https://sabnzbd.org/hostname-check` + +To fix this, add `ossus` to the whitelist in `/app/appsdocker/sabnzbd/config.ini` +``` +host_whitelist = ossus +``` + +and restart the container: +``` +docker restart sabnzbd +``` +> + +Optionally using docker-compose +------------------------------- +`docker-compose.yaml` file: +``` +sabnzbd: + image: "sabnzbd/sabnzbd" + container_name: "sabnzbd" + volumes: + - /app/appsdocker/sabnzbd/resolv.conf:/etc/resolv.conf + - /app/appsdocker/sabnzbd:/datadir + - /app/appsdocker/sabnzbd:/media + - /mnt/yavin4/download/sabnzbd:/mnt/yavin4/download/sabnzbd + environment: + - SABNZBD_UID=1502 + - SABNZBD_GID=1502 + ports: + - "7005:8080" + restart: always +``` + +Setup sabnzbd +------------- +Run the configuration wizard from `https://sabnzbd.databasepro.eu` and once finished go back to the start page +`https://sabnzbd.databasepro.eu` in order to customize security username/password, download directories, skin etc. 
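Since the compose file above relies on `restart: always` alone, a container-level healthcheck is a natural companion so Docker can report whether the web UI actually answers. A minimal sketch of what could be added under the `sabnzbd:` service — this is an assumption, not part of the original setup, and it presumes `curl` is available inside the `sabnzbd/sabnzbd` image; the keys themselves are standard docker-compose `healthcheck` options:
```
  # hypothetical addition under the sabnzbd: service in docker-compose.yaml
  # assumes curl exists in the image; 8080 is the container-side UI port
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8080/"]
    interval: 60s
    timeout: 10s
    retries: 3
```
With this in place, `docker ps` shows the container as healthy or unhealthy rather than merely up.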
diff --git a/tiddlywiki/select_sync.tpl.txt b/tiddlywiki/select_sync.tpl.txt new file mode 100755 index 0000000..b926d7f --- /dev/null +++ b/tiddlywiki/select_sync.tpl.txt @@ -0,0 +1,115 @@ + + + + + Select OGG syncronisation + + + + + + + + + + + + + + + + + + + {% include 'top_bar.tpl' %} + + + + + + + + + + + + +
+
+
+
+
+ + + + + + + + + + + + + + + + + + + {% for extract, replicat, type_environement, title, database_source, database_target, extract_status, replicat_status in www_visible_oggsync_array %} + + + {% if type_environement=="dev" %} + + {% endif %} + {% if type_environement=="prod" %} + + {% endif %} + + + {% if extract_status=="RUNNING" %} + + + {% if replicat_status=="RUNNING" %} + + + + + + {% endfor %} +
Source DBExtractReplicatTarget DB
{{ loop.index }}devprod{{title}}{{database_source}} + {% elif extract_status=="ABENDED" %} + + {% elif extract_status=="STOPPED" %} + + {% elif extract_status=="UNKNOW" %} + + {% elif extract_status=="MISSING" %} + + {% endif %} + {{extract}} + {% elif replicat_status=="ABENDED" %} + + {% elif replicat_status=="STOPPED" %} + + {% elif replicat_status=="UNKNOW" %} + + {% elif replicat_status=="MISSING" %} + + {% endif %} + + {{replicat}}{{database_target}}Manage
+
+ EXTRACT/REPLICAT status updated on: {{status_time}} + RUNNINGSTOPPEDABENDEDUNKNOWMISSING +
+
+
+
+
diff --git a/tiddlywiki/ssh - ProxyJump.md b/tiddlywiki/ssh - ProxyJump.md
new file mode 100755
index 0000000..71bfb12
--- /dev/null
+++ b/tiddlywiki/ssh - ProxyJump.md
@@ -0,0 +1,5 @@
+Examples without using ssh-agent.
+In all cases (with or without ssh-agent) the public key of your **initial user** should be declared in the `authorized_keys` of the **final host**.
+
+    ssh -i /home/vplesnila/data/sshkeys/id_rsa -J root@192.168.0.8 root@coruscant
+    scp -i /home/vplesnila/data/sshkeys/id_rsa -J root@192.168.0.8 scripts/wireguard_stop root@coruscant:/tmp
\ No newline at end of file
diff --git a/tiddlywiki/temp_usage.sql.txt b/tiddlywiki/temp_usage.sql.txt
new file mode 100755
index 0000000..a1a33e7
--- /dev/null
+++ b/tiddlywiki/temp_usage.sql.txt
@@ -0,0 +1,12 @@
+SELECT a.tablespace_name TABLESPACE,
+       d.TEMP_TOTAL_MB,
+       SUM (a.used_blocks * d.block_size) / 1024 / 1024 TEMP_USED_MB,
+       d.TEMP_TOTAL_MB - SUM (a.used_blocks * d.block_size) / 1024 / 1024 TEMP_FREE_MB
+FROM v$sort_segment a, (
+SELECT b.name,
+       c.block_size,
+       SUM (c.bytes) / 1024 / 1024 TEMP_TOTAL_MB
+FROM v$tablespace b,
+     v$tempfile c
+WHERE b.ts# = c.ts# group by b.name, c.block_size ) d
+where a.tablespace_name = d.name group by a.tablespace_name, d.TEMP_TOTAL_MB;
\ No newline at end of file
diff --git a/tiddlywiki/tiddlywiki with docker.md b/tiddlywiki/tiddlywiki with docker.md
new file mode 100755
index 0000000..f934271
--- /dev/null
+++ b/tiddlywiki/tiddlywiki with docker.md
@@ -0,0 +1,83 @@
+Pull the docker image:
+
+    docker pull elasticdog/tiddlywiki
+
+
+Create a persistent directory:
+
+    mkdir -p /app/appsdocker/tiddlywiki
+    cd /app/appsdocker/tiddlywiki
+
+
+Check the version of the application:
+
+    docker run -it --rm elasticdog/tiddlywiki --version
+
+
+Create a `tiddlywiki-docker` shell script:
+
+    #!/usr/bin/env bash
+
+    docker run --interactive --tty --rm \
+        --publish 192.168.0.103:8080:8080 \
+        --mount "type=bind,source=${PWD},target=/tiddlywiki" \
+        --user "$(id -u):$(id -g)" \
+        elasticdog/tiddlywiki \
+        "$@"
+
+Use the script to create a new wiki:
+
+    ./tiddlywiki-docker mywiki --init server
+
+
+Create a `tiddlywiki-server` shell script:
+
+    #!/usr/bin/env bash
+
+    readonly WIKIFOLDER=$1
+
+    docker run --detach --rm \
+        --name tiddlywiki \
+        --publish 192.168.0.103:7006:8080 \
+        --mount "type=bind,source=${PWD},target=/tiddlywiki" \
+        --user "$(id -u):$(id -g)" \
+        elasticdog/tiddlywiki \
+        "$WIKIFOLDER" \
+        --listen host=0.0.0.0
+
+where `192.168.0.103` is the listening host interface and `7006` is the listening TCP port.
+
+
+Run the tiddlywiki server:
+
+    ./tiddlywiki-server mywiki
+
+
+Test the application:
+
+    http://192.168.0.103:7006/
+
+
+Optionally, create a `docker-compose.yaml` file in order to start the container with docker-compose and customize the command-line startup options. In this case, create a private wiki requiring authentication:
+
+
+    services:
+      tiddlywiki:
+        image: "elasticdog/tiddlywiki"
+        container_name: "tiddlywiki"
+        volumes:
+          - /app/persistent_docker/tiddlywiki/mywiki:/tiddlywiki
+        ports:
+          - "7006:8080"
+        restart: always
+        command: "/tiddlywiki --listen host=0.0.0.0 username=***** password=*****"
+
+
+Start the container:
+
+    docker-compose up -d
+
+> It is very easy to clone a tiddlywiki: just copy the content of the tiddlywiki root folder from one wiki to another.
+
+> Installed plugins will also be copied because they are simply tiddlers.
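The cloning tip above boils down to a recursive copy. A tiny sketch with scratch paths (the `/tmp/tw_demo` layout is invented for the demo; a real wiki folder also holds `tiddlywiki.info` and more):

```shell
# Build a tiny stand-in for a wiki folder (demo paths only).
rm -rf /tmp/tw_demo
mkdir -p /tmp/tw_demo/mywiki/tiddlers
printf 'title: Hello\n\nHello world\n' > /tmp/tw_demo/mywiki/tiddlers/Hello.tid

# Cloning the wiki is a recursive copy of its root folder;
# installed plugins travel with it, since they are ordinary tiddlers.
cp -a /tmp/tw_demo/mywiki /tmp/tw_demo/mywiki_clone

ls /tmp/tw_demo/mywiki_clone/tiddlers
```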
+
diff --git a/tiddlywiki/travaux.txt b/tiddlywiki/travaux.txt
new file mode 100755
index 0000000..22a29f2
--- /dev/null
+++ b/tiddlywiki/travaux.txt
@@ -0,0 +1,2 @@
+Andu: 00407225211313
+from Nestor Bogdan
diff --git a/tiddlywiki/ts.sql.txt b/tiddlywiki/ts.sql.txt
new file mode 100755
index 0000000..04c3c08
--- /dev/null
+++ b/tiddlywiki/ts.sql.txt
@@ -0,0 +1,19 @@
+set pages 999
+set lines 200
+col tablespace for a30
+select tablespace,used_mb,free_mb,total_mb,max_mb,pct_used,pct_used_max
+from (
+select
+  total.ts tablespace,
+  total.bytes - nvl(free.bytes,0) used_mb,
+  free.bytes free_mb,
+  total.bytes total_mb,
+  total.MAXBYTES max_mb,
+  100-ROUND( (nvl(free.bytes,0))/(total.bytes) * 100, 2) pct_used,
+  100-ROUND( ( nvl(total.MAXBYTES,0) - (nvl(total.bytes,0)-nvl(free.bytes,0)) )/(total.MAXBYTES) * 100, 2) pct_used_max
+from
+  (select tablespace_name ts, round(sum(bytes)/1024/1024,2) bytes, round(sum(decode(MAXBYTES,0,BYTES,MAXBYTES))/1024/1024,2) MAXBYTES
+   from dba_data_files group by tablespace_name) total,
+  (select tablespace_name ts, round(sum(bytes)/1024/1024,2) bytes from dba_free_space group by tablespace_name) free
+where total.ts=free.ts(+) )
+order by 7 desc;
diff --git a/tiddlywiki/wireguard with Docker.md b/tiddlywiki/wireguard with Docker.md
new file mode 100755
index 0000000..a3e0285
--- /dev/null
+++ b/tiddlywiki/wireguard with Docker.md
@@ -0,0 +1,133 @@
+Be sure the system is up to date:
+
+    dnf update
+
+Install the kernel headers:
+
+    dnf install -y kernel-headers.x86_64 kernel-devel
+
+Check:
+
+    ls -l /usr/src/kernels/$(uname -r)
+
+Install the wireguard kernel module:
+
+    dnf install -y epel-release elrepo-release
+    dnf install -y kmod-wireguard wireguard-tools
+
+Create a persistent directory for the container:
+
+    mkdir -p /app/persistent_docker/wireguard
+    cd /app/persistent_docker/wireguard
+
+Run the container:
+
+    docker run -d \
+      --name=wireguard \
+      --cap-add=NET_ADMIN \
+      --cap-add=SYS_MODULE \
+      -e TZ=Europe/London \
+      -e SERVERURL=ssh.databasepro.eu `#optional` \
+      -e SERVERPORT=51820 `#optional` \
+      -e PEERS=1 `#optional` \
+      -e PEERDNS=auto `#optional` \
+      -e INTERNAL_SUBNET=10.10.10.0 `#optional` \
+      -e ALLOWEDIPS=0.0.0.0/0 `#optional` \
+      -p 7006:51820/udp \
+      -v /app/persistent_docker/wireguard:/config \
+      -v /lib/modules:/lib/modules \
+      --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
+      --restart unless-stopped \
+      ghcr.io/linuxserver/wireguard
+
+
+The first run will create the configuration under `/app/persistent_docker/wireguard`.
+
+The log files contain the QR code to use on the client side.
+
+A ready-to-use client configuration is generated under `/app/persistent_docker/wireguard/peer1`.
+
+Example configuration, server side:
+
+    [Interface]
+    Address = 10.10.10.1
+    ListenPort = 51820
+    PrivateKey = *******************************
+    PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
+    PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
+
+    [Peer]
+    # peer1
+    PublicKey = *******************************
+    AllowedIPs = 10.10.10.2/32
+
+
+Example configuration, client side:
+
+    [Interface]
+    PrivateKey = *******************************
+    ListenPort = 51820
+    Address = 10.10.10.2/32
+    DNS = 10.10.10.1
+
+    [Peer]
+    PublicKey = *******************************
+    AllowedIPs = 0.0.0.0/0
+    Endpoint = ssh.databasepro.eu:41820
+
+
+In the previous example, the NAT rule has been defined in the router: `ssh.databasepro.eu:41820->(by UDP)->[docker host]:7006`.
+
+> I guess it was something special on my system (maybe because the host is also a name server) but to make it work I modified `coredns/Corefile` as:
+
+    . {
+        forward . 1.1.1.1 192.168.0.8
+    }
+
+
+If you want to use `docker-compose` to start the container, `docker-compose.yaml`:
+
+    version: "2.1"
+    services:
+      wireguard:
+        image: ghcr.io/linuxserver/wireguard
+        container_name: wireguard
+        cap_add:
+          - NET_ADMIN
+          - SYS_MODULE
+        environment:
+          - TZ=Europe/London
+          - SERVERURL=ssh.databasepro.eu #optional
+          - SERVERPORT=51820 #optional
+          - PEERS=1 #optional
+          - PEERDNS=auto #optional
+          - INTERNAL_SUBNET=10.10.10.0 #optional
+          - ALLOWEDIPS=0.0.0.0/0 #optional
+        volumes:
+          - /app/persistent_docker/wireguard:/config
+          - /lib/modules:/lib/modules
+        ports:
+          - 7006:51820/udp
+        sysctls:
+          - net.ipv4.conf.all.src_valid_mark=1
+        restart: unless-stopped
+
+Start the container with `docker-compose`:
+
+    docker-compose up -d
+
+Troubleshooting
+---------------
+
+Once, after a kernel upgrade, wireguard stopped working with the error message:
+
+    iptables v1.6.1: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
+    Perhaps iptables or your kernel needs to be upgraded.
+
+The issue was solved by reloading the `iptable_nat` module:
+
+    modprobe iptable_nat
+
+To load this module automatically at system reboot, create the file `/etc/modules-load.d/iptable_nat.conf` with the following contents:
+
+    iptable_nat
diff --git a/tiddlywiki/www_ogg_service.py.txt b/tiddlywiki/www_ogg_service.py.txt
new file mode 100755
index 0000000..a8aa2a2
--- /dev/null
+++ b/tiddlywiki/www_ogg_service.py.txt
@@ -0,0 +1,1597 @@
+#!/u01/app/python/current_version/bin/python3
+
+
+from libs.ogg_libs import *
+
+import os
+import cx_Oracle
+import ldap
+import subprocess
+import cherrypy
+from jinja2 import Environment, FileSystemLoader
+from cherrypy.lib import auth_basic
+import requests
+import json
+import pytz
+import dateutil.parser
+import datetime
+from email.utils import COMMASPACE, formatdate
+from requests.packages.urllib3.exceptions import InsecureRequestWarning
+import time
+from email.message import EmailMessage
+import smtplib
+
+
+DEBUG_MODE = False
+TEMPLATE_DIR = OGG_SERVICE_DIR + "/www"
+LDAP = "frru3dc4635.france.intra.corp"
+OEMDB_EZ_STRING = "FRPIVSQL2418/OEMPRD"
+ORA_USERNAME = "system"
+ORA_PASSWORD = "plusdacces"
+CONTROLMDB_EZ_STRING = "dmp01-scan/EMPRDEXA"
+DRIVE_DATABASES = {"FR":"dmp01-scan/DRF1PRDEXA", "IT":"dmp01-scan/DRI1PRDEXA", "UK":"dmp01-scan/DRU1PRDEXA", "SP":"dmp01-scan/DRS1PRDEXA"}
+DRIVE_DATABASES_DESCRIPTION = {"FR":"Drive FRANCE database", "IT":"Drive ITALY database", "UK":"Drive UK database", "SP":"Drive SPAIN database"}
+ACTIVE_SESS_HIST_HOURS = 4
+LOCK_HIST_HOURS = 4
+DATGUARD_ACCEPTED_LAG_SECONDS = 0
+DATGUARD_ACCEPTED_COLLECT_DELAI_MINUTES = 15
+
+
+env = Environment(loader=FileSystemLoader(TEMPLATE_DIR), trim_blocks=True)
+requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
+get_hostname()
+logger = start_logging(OGG_SERVICE_DIR+"/log/www_actions.log")
+
+def ad_validate(username, domain, password):
+    # Delegate the password check to Active Directory
+    RC = False
+    l =
ldap.initialize("ldap://" + LDAP) + try: + l.protocol_version = ldap.VERSION3 + l.set_option(ldap.OPT_REFERRALS, 0) + l.simple_bind_s(username + domain, password) + RC = True + logger.info("User " + username + " logged in") + except ldap.LDAPError as e: + # Reject if wrong password + RC = False + # Accept otherwise + return RC + +def load_user_dict(): + with open(OGG_SERVICE_DIR+"/etc/www_users.conf", 'r') as f: + user_dict = json.load(f) + return user_dict + + +def get_www_visible_oggsync(): + all_sync_array = [] + directory = OGG_SERVICE_DIR + "/sync.d/" + for name in sorted(os.listdir(directory)): + if os.path.isdir(os.path.join(directory, name)): + try: + (extract, replicat) = name.split("_") + + with open(OGG_SERVICE_DIR+"/etc/status.info") as f: + arr_status = json.load(f) + + for status in arr_status: + if arr_status[extract] == None: + arr_status[extract] = "UNKNOW" + if arr_status[replicat] == None: + arr_status[replicat] = "UNKNOW" + + ogg_sync = OGG_Sync(extract, replicat) + ogg_sync.create_temporary_sync_directory() + ogg_sync.fast_parse_prm_headers() + env_info_filename = os.path.join(directory, name, "env.info") + env_info_contents = open(env_info_filename, "r", encoding = "utf-8", errors = "replace").readlines() + specific_params = {} + for line in env_info_contents: + line = line.rstrip().lstrip() + if line !="": + param, value = line.split("=", 1) + param = param.rstrip().lstrip() + value = value.rstrip().lstrip() + specific_params[param] = value + + if specific_params["www_visible"] == "yes": + sync = (extract, replicat, specific_params["type_environement"], specific_params["www_title"], ogg_sync.database_source, ogg_sync.database_target, arr_status[extract], arr_status[replicat]) + all_sync_array.append(sync) + + except: + pass + return all_sync_array + +def get_target_instances(target_database): + db_name = target_database[:-3] + db_name_like_pattern = "PRD%" + db_name + "%" + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, 
OEMDB_EZ_STRING) + cursor = db.cursor() + sql = "select TARGET_NAME from sysman.MGMT$TARGET where target_name like '" + db_name_like_pattern + "'" + cursor.execute(sql) + target_instances = [] + for row in cursor: + target_instances.append(row[0]) + + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + raise + return target_instances + + +# OUR CONTROLLER +class Root(object): + user_dict = load_user_dict() + menu = {} + # CherryPy published function + + @cherrypy.expose() + def login_page(self): + templateVars = {} + template = env.get_template('login_page.tpl') + templateVars = { + "myvar1" : "myvar1" + } + output = template.render(templateVars) + return output + + @cherrypy.expose() + def select_sync(self): + self.restrict_url_to_groups(["SUPERUSER","OGGOPER"]) + + www_visible_oggsync_array = get_www_visible_oggsync() + + status_time = time.ctime(os.path.getmtime(OGG_SERVICE_DIR+"/etc/status.info")) + templateVars = {} + template = env.get_template('select_sync.tpl') + templateVars = { + "www_visible_oggsync_array" : www_visible_oggsync_array, + "status_time" : status_time + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + @cherrypy.expose() + def oemtargetlist(self, *args, **kwargs): + self.restrict_url_to_logged() + # GET values + try: + hostlike = kwargs["hostlike"] + except KeyError as e: + hostlike="" + + try: + instancelike = kwargs["instancelike"] + except KeyError as e: + instancelike="" + + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + sql = """ + WITH data_tab AS ( + select T.target_name, P1.property_value as DBAPPLICATION + from sysman.mgmt$target T, + sysman.mgmt$target_properties P1 + where P1.TARGET_GUID(+) = T.TARGET_GUID + and P1.property_name(+) = 'orcl_gtp_cost_center' + """ + if hostlike !="": + sql = sql + "and (T.target_name like '%" + hostlike + "%') " + + if instancelike !="": + sql = sql + "and (T.target_name like '%" + 
instancelike + "%') " + + sql = sql + """ + and (T.target_type IN ('oracle_database')) + ) + SELECT + regexp_substr (target_name, '[^_]+', 1, 1) DBENV, + regexp_substr (target_name, '[^_]+', 1, 2) DBHOST, + regexp_substr (target_name, '[^_]+', 1, 3) DBINST, + DBAPPLICATION + FROM data_tab + order by 2, 3, 4, 1 + """ + cursor.execute(sql) + items = list(cursor.fetchall()) + + if hostlike !="": + templateVars = {} + template = env.get_template('oemtargetlist_hostlike.tpl') + templateVars = { + "selected_host": hostlike, + "items" : items + } + templateVars.update(Root.menu) + + if instancelike !="": + templateVars = {} + template = env.get_template('oemtargetlist_instancelike.tpl') + templateVars = { + "instancelike": instancelike, + "items" : items + } + templateVars.update(Root.menu) + + output = template.render(templateVars) + cursor.close() + + except cx_Oracle.DatabaseError as err: + output = str(err) + + return output + + @cherrypy.expose() + def oemhosts(self, *args, **kwargs): + self.restrict_url_to_logged() + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + sql = """ + WITH data_tab AS ( + select T.target_name, P1.property_value as DBAPPLICATION + from sysman.mgmt$target T, + sysman.mgmt$target_properties P1 + where P1.TARGET_GUID(+) = T.TARGET_GUID + and P1.property_name(+) = 'orcl_gtp_cost_center' + and (T.target_type IN ('oracle_database')) + ) + SELECT distinct + regexp_substr (target_name, '[^_]+', 1, 2) DBHOST + FROM data_tab + order by 1 asc + """ + cursor.execute(sql) + items = list(cursor.fetchall()) + cursor.close() + except cx_Oracle.DatabaseError as err: + output = str(err) + return output + + exaprodhosts=[] + exapnonrodhosts=[] + otherhosts=[] + + for item in items: + dbhost = str(item[0]) + + if dbhost.startswith("dmp"): + exaprodhosts.append(dbhost) + elif dbhost.startswith("dmt"): + exapnonrodhosts.append(dbhost) + else: + otherhosts.append(dbhost) + + templateVars = {} + template = 
env.get_template('oemhosts.tpl') + templateVars = { + "exaprodhosts" : exaprodhosts, + "exapnonrodhosts" : exapnonrodhosts, + "otherhosts" : otherhosts, + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + @cherrypy.expose() + def drivesessions(self, *args, **kwargs): + country = kwargs["country"] + accepted_group = "HELPDESK_" + country + self.restrict_url_to_groups(["SUPERUSER", accepted_group]) + try: + db = cx_Oracle.connect(dsn=DRIVE_DATABASES[country]) + cursor = db.cursor() + html_newline = "%20%0A" + sql = """ + SELECT DISTINCT + S1.SID||'-'||S1.SERIAL#||'-@'||S1.INST_ID, + S1.USERNAME ||' - '||''||a.full_name||'', + S1.MODULE, + S2.USERNAME ||' - '||''||b.full_name||'', + S2.MODULE, + S1.SQL_ID, + trunc(L1.ctime/60) + FROM GV$LOCK L1, + GV$SESSION S1, + GV$LOCK L2, + GV$SESSION S2, + drive.drive_users a, + drive.drive_users b + WHERE S1.SID=L1.SID AND S2.SID=L2.SID + AND S1.INST_ID=L1.INST_ID AND S2.INST_ID=L2.INST_ID + AND L1.BLOCK > 0 AND L2.REQUEST > 0 + AND L1.ID1 = L2.ID1 AND L1.ID2 = L2.ID2 + AND S1.username=a.user_name(+) + AND S2.username=b.user_name(+) + AND trunc(L1.ctime/60)>0 + AND S1.username not in ('SYS','SYSTEM','OGG_USER','DRIVE','WSHANDLER') + """ % ("Drive " + country, html_newline + html_newline, "Drive " + country, html_newline, html_newline, html_newline) + cursor.execute(sql) + items = list(cursor.fetchall()) + cursor.close() + db.close() + title = DRIVE_DATABASES_DESCRIPTION[country] + templateVars = {} + template = env.get_template('drivesessions.tpl') + templateVars = { + "title" : title, + "items" : items + } + templateVars.update(Root.menu) + output = template.render(templateVars) + except Exception as err: + output = str(DRIVE_DATABASES_DESCRIPTION[country]) + pass + + return output + + @cherrypy.expose() + def drivekillsessions(self, *args, **kwargs): + country = kwargs["country"] + oracle_sessions_string = kwargs["sessions"] + accepted_group = "HELPDESK_" + country + 
self.restrict_url_to_groups(["SUPERUSER", accepted_group]) + oracle_sessions = oracle_sessions_string.split(",") + try: + db = cx_Oracle.connect(dsn=DRIVE_DATABASES[country]) + cursor = db.cursor() + for oracle_session in oracle_sessions: + (oracle_sid, oracle_serial, oracle_instance) = oracle_session.split("-") + # before KILL session, grab informations about SQL_ID, user email etc. + sql = "select username, sql_id, event, seconds_in_wait from gv$session where sid=" + oracle_sid + " and serial#=" + oracle_serial + cursor.execute(sql) + for row in cursor: + username, sql_id, event, seconds_in_wait = row + + sql = "select full_name, email_address from drive.drive_users where user_name='" + username + "'" + cursor.execute(sql) + for row in cursor: + full_name, email_address = row + + sql = "alter system disconnect session '" + oracle_sid + "," + oracle_serial + "," + oracle_instance + "' immediate" + logger.info("HELPDESK " + cherrypy.session.get("ad_uid") + " killed on " + DRIVE_DATABASES[country] + " the user " + username + " having SID=" + str(oracle_sid) + ", SERIAL#=" + str(oracle_serial) + ", SQL_ID=" + str(sql_id) + ", EVENT=" + event + ", SECONDS_IN_WAIT=" + str(seconds_in_wait)) + cursor.execute(sql) + + cursor.close() + db.close() + # Send email + email_body = """ + Hello %s,\n + Your DRIVE application session has been killed because it was blocking other sessions.\n + For more details, you can contact me by email: %s \n + ----- \n + Kind Regards,\n + %s + """ % (full_name, cherrypy.session.get("ad_uid") + "@arval.com", cherrypy.session.get("ad_uid") + "@arval.com") + msg = EmailMessage() + msg.set_content(email_body) + msg['Subject'] = "DRIVE " + country + " -- session killed" + if email_address == "" or email_address==None: + email_address = str(username) + "@arval.com" + print (email_address) + msg['From'] = cherrypy.session.get("ad_uid") + "@arval.com" + msg['To'] = email_address + # Send the message via our own SMTP server. 
+ s = smtplib.SMTP('localhost') + s.send_message(msg) + s.quit() + + except cx_Oracle.DatabaseError as err: + print (err) + + # Display information windows + templateVars = {} + template = env.get_template('box_wait.tpl') + box_type = "box-primary" + box_title = "Please wait..." + redirect_url = "../drivesessions?country=" + country + redirect_timeout = "3000" + box_message1 = "Starting session KILL..." + box_message2 = "This page will be automaticly redirected after JOB submission." + templateVars = { + "box_type" : box_type, + "box_title" : box_title, + "redirect_url" : redirect_url, + "redirect_timeout" : redirect_timeout, + "box_message1" : box_message1, + "box_message2" : box_message2 + } + templateVars.update(Root.menu) + output = template.render(templateVars) + + return output + + @cherrypy.expose() + def show_logfile(self, *args, **kwargs): + self.restrict_url_to_groups(["SUPERUSER","OGGOPER"]) + + if cherrypy.session.get('ad_uid') != None: + logfile = kwargs["logfile"] + logfile_contents = open(OGG_SERVICE_DIR + "/log/www_executions/" + logfile, "r", encoding = "utf-8", errors = "replace").readlines() + templateVars = {} + template = env.get_template('view_log.tpl') + templateVars = { + "logfile" : logfile, + "logfile_contents" : "".join(logfile_contents) + } + templateVars.update(Root.menu) + output = template.render(templateVars) + else: + raise cherrypy.HTTPRedirect("/login_page") + return output + + def build_menu(self): + # Disable all by default + Root.menu["show_ogg_menu"] = "NO" + Root.menu["show_drive_it_sessions_menu"] = "NO" + Root.menu["show_drive_fr_sessions_menu"] = "NO" + Root.menu["show_drive_sp_sessions_menu"] = "NO" + Root.menu["show_drive_uk_sessions_menu"] = "NO" + # Enable by GROUP + if self.user_is_in_groups(["SUPERUSER","OGGOPER"]): + Root.menu["show_ogg_menu"] = "YES" + if self.user_is_in_groups(["SUPERUSER","HELPDESK_IT"]): + Root.menu["show_drive_it_sessions_menu"] = "YES" + if self.user_is_in_groups(["SUPERUSER","HELPDESK_FR"]): 
+ Root.menu["show_drive_fr_sessions_menu"] = "YES" + if self.user_is_in_groups(["SUPERUSER","HELPDESK_SP"]): + Root.menu["show_drive_sp_sessions_menu"] = "YES" + if self.user_is_in_groups(["SUPERUSER","HELPDESK_UK"]): + Root.menu["show_drive_uk_sessions_menu"] = "YES" + + return + + @cherrypy.expose() + def logout(self): + cherrypy.session.pop('ad_uid', None) + raise cherrypy.HTTPRedirect("/login_page") + return + + @cherrypy.expose() + def login(self, *args, **kwargs): + ad_uid = kwargs["ad_uid"] + ad_domain = kwargs["ad_domain"] + ad_password = kwargs["ad_password"] + if not ad_validate(ad_uid, ad_domain, ad_password): + raise cherrypy.HTTPRedirect("/login_page") + else: + cherrypy.session["ad_uid"] = ad_uid.lower() + Root.user_dict = load_user_dict() + self.build_menu() + raise cherrypy.HTTPRedirect("/index") + return + + @cherrypy.expose() + def index(self): + self.restrict_url_to_logged() + + templateVars = {} + template = env.get_template('start_page.tpl') + templateVars = {} + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + def restrict_url_to_logged(self): + # Check if user is logged in + if cherrypy.session.get("ad_uid") == None: + # Not logged in, redirect to login page + raise cherrypy.HTTPRedirect("/login_page") + return + + def user_is_in_groups(self, groups): + is_in_groups = False + try: + user_groups = Root.user_dict[cherrypy.session.get("ad_uid")] + except KeyError as e: + user_groups = "GUEST" + for group in groups: + if group in user_groups: + is_in_groups = True + break + return is_in_groups + + + def restrict_url_to_groups(self, authorized_groups): + self.restrict_url_to_logged() + authorized = False + + if not self.user_is_in_groups(authorized_groups): + raise cherrypy.HTTPRedirect("/unathorized") + return + + @cherrypy.expose() + def dash_start(self, *args, **kwargs): + # self.restrict_url_to_logged() + + # Set to production cluster by default + try: + cluster = kwargs["cluster"] + except: + cluster 
= "dmp" + + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + + # Database status counters and graph + sql = """ + with exadata_db as ( + SELECT + mgmt$target.host_name + , mgmt$target.target_name + , mgmt$target.target_type + FROM sysman.mgmt$target + where mgmt$target.host_name like '%s' + and mgmt$target.target_type = 'rac_database' + ) + select count(*), A.AVAILABILITY_STATUS + FROM sysman.mgmt$availability_current A, sysman.MGMT$TARGET B , exadata_db C + WHERE B.target_name= C.target_name + AND A.TARGET_GUID=B.TARGET_GUID + group by AVAILABILITY_STATUS + """ % (cluster + "%") + + cursor.execute(sql) + + # All that sequence for guarantee series color :) + series_dict={} + status_dict={} + for status_name in ["Target Up", "Target Down", "Blackout", "Metric Error","Pending/Unknown"]: + series_dict[status_name] = "{name: '" + status_name + "', data: [0]}," + status_dict[status_name] = 0 + + for row in cursor: + (target_count, availability_status) = row + for status_name in ["Target Up", "Target Down", "Blackout", "Metric Error","Pending/Unknown"]: + if availability_status == status_name: + series_dict[status_name] = "{ name: '" + availability_status + "', data: [" + str(target_count) + "]}," + status_dict[status_name] = target_count + + series = "" + for status_name in ["Target Up", "Target Down", "Blackout", "Metric Error","Pending/Unknown"]: + series = series + series_dict[status_name] + + + + # Dataguard status counters + sql = """ + SELECT count(*) + FROM sysman.mgmt$metric_current + WHERE metric_column='dg_status' + AND key_value !=target_name + """ + cursor.execute(sql) + + for row in cursor: + dataguard_total_number = row[0] + + sql = """ + with + drp_errors as ( + SELECT DISTINCT entity_name, + value, + ROUND((sysdate-collection_time)*1440) Min_delta_collect, + CASE + WHEN ROUND((sysdate-collection_time)*1440)>=%s + THEN 'collect_error' + ELSE + CASE + WHEN value>%s + THEN 'unsync' + END + END drp_status + 
FROM sysman.GC_METRIC_VALUES_LATEST + WHERE metric_group_name ='dataguard_sperf_112' + AND metric_column_name IN ('dg_lag','dg_pdl') + AND (value >%s + OR collection_time <=sysdate-%s/1440) + ORDER BY value DESC, + Min_delta_collect DESC + ) + select count(1), drp_status from drp_errors group by drp_status order by drp_status asc + """ % (DATGUARD_ACCEPTED_COLLECT_DELAI_MINUTES, DATGUARD_ACCEPTED_LAG_SECONDS, DATGUARD_ACCEPTED_LAG_SECONDS, DATGUARD_ACCEPTED_COLLECT_DELAI_MINUTES) + + cursor.execute(sql) + dataguard_status_dict = {"sync":0, "unsync":0, "collect_error":0} + for row in cursor: + (count, dataguard_status) = row + dataguard_status_dict[dataguard_status] = count + + dataguard_status_dict['sync'] = dataguard_total_number + + cursor.close() + db.close() + + except cx_Oracle.DatabaseError as err: + raise + + # Build target status graph + templateVars = {} + template = env.get_template('target_status_column.graph') + + templateVars = { + "chart_name": "target_status_column_graph", + "series": series, + } + + target_status_column_graph = template.render(templateVars) + + # Build target status table + templateVars = {} + template = env.get_template('target_status.table') + + templateVars = { + "target_up": status_dict["Target Up"], + "target_down": status_dict["Target Down"], + "blackout": status_dict["Blackout"], + "metric_error": status_dict["Metric Error"], + "pending_unknow": status_dict["Pending/Unknown"], + } + + target_status_table = template.render(templateVars) + + # Build dataguard status table + template = env.get_template('dataguard_status.table') + templateVars = { + "collect_error": dataguard_status_dict['collect_error'], + "unsync": dataguard_status_dict['unsync'], + "sync": dataguard_status_dict['sync'], + } + + dataguard_status_table = template.render(templateVars) + + + # Drive boxes + drive_box_content_dict={"FR":"", "UK":"", "SP":"", "IT":""} + for country_key in drive_box_content_dict: + drive_db_name = 
DRIVE_DATABASES[country_key].split("/")[1] + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + + # User session count metric + sql = """ + select value from sysman.mgmt$metric_current where metric_name ='ME$GLOBAL_USER_SESSION_COUNT' + and target_name='%s' + """ % (drive_db_name) + cursor.execute(sql) + for row in cursor: + user_sessions = row[0] + + + # Active session count metric + sql = """ + select value from sysman.mgmt$metric_current where metric_name ='ME$GLOBAL_ACTIVE_SESSION_COUNT' + and target_name='%s' + """ % (drive_db_name) + cursor.execute(sql) + for row in cursor: + active_sessions = row[0] + + + sql = """ + select value from sysman.mgmt$metric_current where metric_name ='ME$LOCK_COUNT' + and target_name='%s' + """ % (drive_db_name) + cursor.execute(sql) + for row in cursor: + locked_sessions = row[0] + + templateVars = {} + template = env.get_template('drive_mini.table') + templateVars = { + "user_sessions": user_sessions, + "active_sessions": active_sessions, + "locked_sessions": locked_sessions, + } + drive_box_content_dict[country_key] = template.render(templateVars) + except: + raise + + + # Build dashboard + templateVars = {} + template = env.get_template('dash_start.tpl') + if cluster == "dmp": + title = "EXADATA PRODUCTION" + elif cluster == "dmt": + title = "EXADATA NON-PRODUCTION" + + templateVars = { + "title": title, + "cluster": cluster, + "target_status_table": target_status_table, + "dataguard_status_table": dataguard_status_table, + "target_status_column_graph": target_status_column_graph, + "drive_fr": drive_box_content_dict["FR"], + "drive_uk": drive_box_content_dict["UK"], + "drive_sp": drive_box_content_dict["SP"], + "drive_it": drive_box_content_dict["IT"], + } + templateVars.update(Root.menu) + output = template.render(templateVars) + + return output + + @cherrypy.expose() + def dash_start2(self, *args, **kwargs): + self.restrict_url_to_logged() + try: + db = 
cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + + # Database status counters + sql = """ + with exadata_db as ( + SELECT + mgmt$target.host_name + , mgmt$target.target_name + , mgmt$target.target_type + FROM sysman.mgmt$target + where mgmt$target.host_name like 'dmp01%' + and mgmt$target.target_type = 'rac_database' + ) + select count(*), A.AVAILABILITY_STATUS + FROM sysman.mgmt$availability_current A, sysman.MGMT$TARGET B , exadata_db C + WHERE B.target_name= C.target_name + AND A.TARGET_GUID=B.TARGET_GUID + group by AVAILABILITY_STATUS + """ + cursor.execute(sql) + + status_dict={} + + for status_name in ["Target Up", "Target Down", "Blackout", "Metric Error","Pending/Unknown"]: + status_dict[status_name] = 0 + + for row in cursor: + (target_count, availability_status) = row + status_dict[availability_status] = target_count + + sql = """ + select metric_column, key_value, value from sysman.mgmt$metric_current + where + target_name='+ASM_cluster-dmp' and + target_type='osm_cluster' and + metric_name='DiskGroup_Usage' and key_value!='DBFS' + and metric_column in ('total_mb','usable_file_mb') + """ + cursor.execute(sql) + + usable = {} + total = {} + used = {} + + for row in cursor: + (metric_column, key_value, value) = row + if metric_column == "usable_file_mb": + usable[key_value] = round (float(value)/1024/1024,2) + else: + total[key_value] = round (float(value)/1024/1024,2) + + for key in total.keys(): + used[key] = round(total[key] - usable[key], 2) + + dg_data_series = "[%s,%s]" % (used["DATA"], usable["DATA"]) + dg_data_labels = "['Used','Free']" + + dg_reco_series = "[%s,%s]" % (used["RECO"], usable["RECO"]) + dg_reco_labels = "['Used','Free']" + + + cursor.close() + db.close() + + except cx_Oracle.DatabaseError as err: + raise + + + # Build target status table + templateVars = {} + template = env.get_template('target_status.table') + + templateVars = { + "target_up": status_dict["Target Up"], + "target_down": 
status_dict["Target Down"], + "blackout": status_dict["Blackout"], + "metric_issue": status_dict["Metric Error"] + status_dict["Pending/Unknown"], + } + target_status_table = template.render(templateVars) + + + # Build Storage graph + templateVars = {} + template = env.get_template('asm_dg.graph') + templateVars = { + "chart_name": "ASM_DG_DATA", + "chart_title": "", + "series": dg_data_series, + "labels": dg_data_labels, + } + asm_data_graph = template.render(templateVars) + + templateVars = {} + template = env.get_template('asm_dg.graph') + templateVars = { + "chart_name": "ASM_DG_RECO", + "chart_title": "", + "series": dg_reco_series, + "labels": dg_reco_labels, + } + asm_reco_graph = template.render(templateVars) + + # Drive boxes + drive_box_content_dict={"FR":"", "UK":"", "SP":"", "IT":""} + for country_key in drive_box_content_dict: + drive_db_name = DRIVE_DATABASES[country_key].split("/")[1] + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + + # User session count metric + sql = """ + select value from sysman.mgmt$metric_current where metric_name ='ME$GLOBAL_USER_SESSION_COUNT' + and target_name='%s' + """ % (drive_db_name) + cursor.execute(sql) + for row in cursor: + user_sessions = row[0] + + + # Active session count metric + sql = """ + select value from sysman.mgmt$metric_current where metric_name ='ME$GLOBAL_ACTIVE_SESSION_COUNT' + and target_name='%s' + """ % (drive_db_name) + cursor.execute(sql) + for row in cursor: + active_sessions = row[0] + + + sql = """ + select value from sysman.mgmt$metric_current where metric_name ='ME$LOCK_COUNT' + and target_name='%s' + """ % (drive_db_name) + cursor.execute(sql) + for row in cursor: + locked_sessions = row[0] + + + templateVars = {} + template = env.get_template('drive_mini.table') + templateVars = { + "user_sessions": user_sessions, + "active_sessions": active_sessions, + "locked_sessions": locked_sessions, + "workload": 
self.db_workload_mini_graph2(drive_db_name), + } + drive_box_content_dict[country_key] = template.render(templateVars) + except: + raise + + # Build dashboard + templateVars = {} + template = env.get_template('dash_start2.tpl') + + templateVars = { + "title": "EXADATA PRODUCTION", + "target_status_table": target_status_table, + "asm_data_graph": asm_data_graph, + "asm_reco_graph": asm_reco_graph, + "drive_fr": drive_box_content_dict["FR"], + "drive_uk": drive_box_content_dict["UK"], + "drive_sp": drive_box_content_dict["SP"], + "drive_it": drive_box_content_dict["IT"], + } + templateVars.update(Root.menu) + output = template.render(templateVars) + + return output + + + @cherrypy.expose() + def dash_target_status_detail(self, *args, **kwargs): + self.restrict_url_to_logged() + # Set to production cluster by default + try: + cluster = kwargs["cluster"] + except: + cluster = "dmp" + + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + sql1 = """ + with exadata_db as ( + SELECT + mgmt$target.host_name + , mgmt$target.target_name + , mgmt$target.target_type + FROM sysman.mgmt$target + where mgmt$target.host_name like + """ + sql2 = "'" + cluster + '%' + "'" + sql3 = """ + and mgmt$target.target_type = 'rac_database' + ) + select C.target_name, E.easy_name, A.AVAILABILITY_STATUS + FROM sysman.mgmt$availability_current A, sysman.MGMT$TARGET B , exadata_db C, system.t_target_easy_name E WHERE B.target_name= C.target_name + AND B.target_name= E.target_name (+) + AND A.TARGET_GUID=B.TARGET_GUID + order by E.display_priority ASC + """ + sql = sql1 + sql2 + sql3 + cursor.execute(sql) + items = list(cursor.fetchall()) + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + raise + + + + templateVars = {} + template = env.get_template('target_status_details.table') + + if cluster == "dmp": + title = "Exadata production database status" + elif cluster == "dmt": + title = "Exadata non-production database status" + + 
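`dash_target_status_detail` above splices the user-supplied `cluster` value straight into the SQL text (`sql2 = "'" + cluster + '%' + "'"`). A minimal hardening sketch, assuming a cx_Oracle-style named bind; the helper names and the simplified query are illustrative, not the app's actual code:

```python
ALLOWED_CLUSTERS = {"dmp", "dmt"}  # the two clusters the dashboard knows about

def validate_cluster(cluster):
    # Fall back to the production cluster for unknown or hostile input
    return cluster if cluster in ALLOWED_CLUSTERS else "dmp"

def build_status_sql():
    # The LIKE pattern becomes a bind variable instead of string splicing
    return (
        "SELECT b.target_name, a.availability_status "
        "FROM sysman.mgmt$availability_current a, sysman.mgmt$target b "
        "WHERE b.host_name LIKE :host_pattern "
        "AND a.target_guid = b.target_guid"
    )

# Usage with cx_Oracle (not executed here):
#   cursor.execute(build_status_sql(),
#                  host_pattern=validate_cluster(cluster) + "%")
```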
templateVars = { + "title": title, + "query_result": items, + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + + @cherrypy.expose() + def dash_dataguard_status_detail(self, *args, **kwargs): + # self.restrict_url_to_logged() + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + sql = """ + with + drp_errors as ( + SELECT DISTINCT entity_name, + value, + ROUND((sysdate-collection_time)*1440) Min_delta_collect, + CASE + WHEN ROUND((sysdate-collection_time)*1440)>=%s + THEN 'Collect error' + ELSE + CASE + WHEN value>%s + THEN 'Unsync' + ELSE 'Sync' + END + END drp_status + FROM sysman.GC_METRIC_VALUES_LATEST + WHERE entity_name like '%s' and metric_group_name ='dataguard_sperf_112' + AND metric_column_name IN ('dg_lag','dg_pdl') + AND (value >%s + OR collection_time <=sysdate-%s/1440) + ORDER BY value DESC, + Min_delta_collect DESC + ), + drp_list as ( + SELECT target_name, + key_value + FROM sysman.mgmt$metric_current + WHERE + metric_column='dg_status' + AND key_value !=target_name + ) + select n.easy_name, e.*, l.target_name from drp_errors e, drp_list l, system.t_target_easy_name n + where e.entity_name=l.key_value + and l.target_name = n.target_name (+) + """ % (DATGUARD_ACCEPTED_COLLECT_DELAI_MINUTES, DATGUARD_ACCEPTED_LAG_SECONDS, "%EXA",DATGUARD_ACCEPTED_LAG_SECONDS, DATGUARD_ACCEPTED_COLLECT_DELAI_MINUTES) + + cursor.execute(sql) + items = list(cursor.fetchall()) + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + raise + + + templateVars = {} + template = env.get_template('dataguard_status_details.table') + + + templateVars = { + "title": "Unsynchronised/suspicious Dataguards", + "query_result": items, + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + + def graph_simple_custom_metric(self, database, metric_name, hours_in_past, graph_title, line_color, line_description): + try: + db = 
cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, OEMDB_EZ_STRING) + cursor = db.cursor() + sql = """ + SELECT collection_timestamp, value + FROM sysman.mgmt$metric_details + WHERE metric_name ='%s' + AND target_name ='%s' + AND collection_timestamp >=sysdate-%s/24 + ORDER BY collection_timestamp ASC + """ % (metric_name, database, hours_in_past) + + cursor.execute(sql) + items = list(cursor.fetchall()) + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + raise + + series_values=[] + xaxis_values=[] + for item in items: + series_values.append(str(item[1])) + xaxis_values.append('"' + item[0].strftime("%H:%M") + '"') + + series = 'series: [{name: "' + line_description + '", data: [' + ",".join(series_values) + '] }],' + xaxis = 'xaxis: { categories: [' + ",".join(xaxis_values) + '] } };' + + + short_metric_name = metric_name.split("ME$")[1] + templateVars = {} + template = env.get_template('simple_custom_metric.graph') + templateVars = { + "chart_name": "%s_%s" % (database, short_metric_name), + "chart_title": graph_title, + "line_color": line_color, + "series": series, + "xaxis": xaxis, + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + def graph_ash_wait_class(self, database): + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, "dmp01-scan/" + database) + cursor = db.cursor() + sql = """ + select round(count(*)/60) MINUTES, DECODE(WAIT_CLASS,NULL,'CPU',WAIT_CLASS) + from gv$active_session_history + where SAMPLE_TIME between sysdate-1/24 and sysdate + group by WAIT_CLASS having count(*)>=300 FETCH FIRST 5 ROWS ONLY + """ + cursor.execute(sql) + series_values=[] + labels_values=[] + + for row in cursor: + (secondes, wait_class) = row + series_values.append(str(secondes)) + labels_values.append("'" + wait_class + "'") + + series = "[" + ",".join(series_values) + "]" + labels = '[' + ",".join(labels_values) + "]" + + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + raise + + 
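`graph_simple_custom_metric` above builds the chart-library `series`/`xaxis` JavaScript fragments by hand-quoting strings. A standalone sketch of the same transformation using `json.dumps` for the quoting (function name and sample rows are illustrative):

```python
import datetime
import json

def build_chart_fragments(rows, line_description):
    # rows are (collection_timestamp, value) tuples as fetched from the cursor
    series_values = [str(value) for _, value in rows]
    xaxis_values = [ts.strftime("%H:%M") for ts, _ in rows]
    series = 'series: [{name: %s, data: [%s] }],' % (
        json.dumps(line_description), ",".join(series_values))
    xaxis = 'xaxis: { categories: %s } };' % json.dumps(xaxis_values)
    return series, xaxis

rows = [(datetime.datetime(2024, 1, 1, 10, 0), 12),
        (datetime.datetime(2024, 1, 1, 10, 5), 15)]
series, xaxis = build_chart_fragments(rows, "Active sessions")
```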
templateVars = {} + template = env.get_template('ash_wait_class.graph') + templateVars = { + "chart_name": "graph_ash_wait_class", + "chart_title": "Wait events profile for last hour", + "series": series, + "labels": labels, + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + @cherrypy.expose() + def dashboard_db(self, *args, **kwargs): + self.restrict_url_to_logged() + db = kwargs["db"] + try: + (instance1, instance2) = get_target_instances(db) + except ValueError as err: + # Cannot identify database instances from OEM + templateVars = {} + template = env.get_template('box_message_with_back.tpl') + box_type = "box-primary" + box_title = db + box_message = "Instances of %s database are not defined in the OEM repository" % db + templateVars = { + "box_type" : box_type, + "box_title" : box_title, + "box_message" : box_message + } + output = template.render(templateVars) + return output + + graph_active_sessions = self.graph_simple_custom_metric(db, "ME$GLOBAL_ACTIVE_SESSION_COUNT", ACTIVE_SESS_HIST_HOURS, "Active session history for the last %s hours" % ACTIVE_SESS_HIST_HOURS, "#f3f3f3", "Active sessions") + if db in ["DRF1PRDEXA", "DRU1PRDEXA", "DRS1PRDEXA", "DRI1PRDEXA"]: + graph_locks = self.graph_simple_custom_metric(db, "ME$LOCK_COUNT", LOCK_HIST_HOURS, "Locked session for the last %s hours" % LOCK_HIST_HOURS, "#f3f3f3", "Locked sessions") + else: + graph_locks = "" + + graph_ash_wait_class = self.graph_ash_wait_class(db) + templateVars = {} + template = env.get_template('dashboard_db.tpl') + templateVars = { + "title" : db + " dashboard", + "graph_active_sessions" : graph_active_sessions, + "graph_locks" : graph_locks, + "graph_ash_wait_class" : graph_ash_wait_class, + } + + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + @cherrypy.expose() + def controlm_dash(self, *args, **kwargs): + self.restrict_url_to_logged() + # Identify the table name based on current hour + now = 
datetime.datetime.now() + # Use timedelta so day/month/year boundaries are handled correctly + yesterday = (now - datetime.timedelta(days=1)).strftime("%y%m%d") + today = now.strftime("%y%m%d") + current_hour = now.strftime("%H") + if int(current_hour) < 11: + table_name = "EMUSER.A%s001_AJOB" % yesterday + else: + table_name = "EMUSER.A%s001_AJOB" % today + + # Assume that ratio is 50 by default + try: + ratio = kwargs["ratio"] + except KeyError: + ratio = 50 + + # Query the table and store result in items variable + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, CONTROLMDB_EZ_STRING) + cursor = db.cursor() + sql = """ + SELECT application , + group_name , + job_name , + memname , + description , + to_char(to_date(start_time, 'YYYYMMDDHH24MISS'),'DD/MM HH24:MI'), + SUBSTR(avg_runtime,1,2) + ||'h ' + ||SUBSTR(avg_runtime,3,2) + ||'m ' + || SUBSTR(avg_runtime,5,2) + ||'s', + TO_CHAR(TRUNC((( + (SELECT sysdate FROM dual + ) - to_date(start_time, 'YYYYMMDDHH24MISS' ))*24*3600)/3600), '900') + || 'h' + || TO_CHAR(TRUNC((( + (SELECT sysdate FROM dual + ) - to_date(start_time, 'YYYYMMDDHH24MISS' ))*24*3600)/60) - TRUNC(((sysdate - to_date(start_time, 'YYYYMMDDHH24MISS'))*24*3600)/3600)*60, '00') + || 'm' + || TO_CHAR(TRUNC((( + (SELECT sysdate + FROM dual + ) - to_date(start_time, 'YYYYMMDDHH24MISS' ))*24*3600) - (TRUNC(((sysdate - to_date(start_time, 'YYYYMMDDHH24MISS'))*24*3600)/60)*60)), '00') + || 's' + FROM %s + WHERE status='Executing' + AND (trim(TO_CHAR(TRUNC((( + (SELECT sysdate FROM dual + ) - to_date(start_time, 'YYYYMMDDHH24MISS' ))*24*3600)/3600), '900')) + || trim(TO_CHAR(TRUNC((( + (SELECT sysdate FROM dual + ) - to_date(start_time, 'YYYYMMDDHH24MISS' ))*24*3600)/60) - TRUNC(((sysdate - to_date(start_time, 'YYYYMMDDHH24MISS'))*24*3600)/3600)*60, '00')) + || 
trim(TO_CHAR(TRUNC((( + (SELECT sysdate FROM dual + ) - to_date(start_time, 'YYYYMMDDHH24MISS' ))*24*3600) - (TRUNC(((sysdate - to_date(start_time, 'YYYYMMDDHH24MISS'))*24*3600)/60)*60)), '00') )) > avg_runtime * (1+%s/100) + ORDER BY start_time ASC + """ % (table_name, ratio) + cursor.execute(sql) + items = list(cursor.fetchall()) + cursor.close() + db.close() + except cx_Oracle.DatabaseError as err: + raise + + templateVars = {} + template = env.get_template('controlm_jobs.table') + templateVars = { + "title" : " running jobs", + "items" : items, + "ratio" : ratio, + } + templateVars.update(Root.menu) + output = template.render(templateVars) + + # controlm_jobs.table + return output + + + def db_workload_mini_graph(self, database): + try: + db = cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, "dmp01-scan/" + database) + cursor = db.cursor() + sql = """ + select VALUE from v$parameter where NAME in ('cpu_count','parallel_threads_per_cpu') + """ + cursor.execute(sql) + + # We assume that the compute capacity on each instance is cpu_count + parallel_threads_per_cpu + compute_threads = 2 # for both instances + for row in cursor: + compute_threads = compute_threads * int(row[0]) + + sql = """ + select count(*) from gv$session where status='ACTIVE' and type != 'BACKGROUND' + """ + cursor.execute(sql) + + for row in cursor: + active_sessions = row[0] + + cursor.close() + db.close() + + except cx_Oracle.DatabaseError as err: + raise + + in_use_percent = int(100*active_sessions/compute_threads) + if in_use_percent >= 100: + in_use_percent = 100 + + free_percent = 100-in_use_percent + series="[%s,%s]" % (in_use_percent, free_percent) + + templateVars = {} + template = env.get_template('drive_workload_mini.graph') + templateVars = { + "chart_name": "workload_" + database, + "chart_title": str(compute_threads), + "series": series, + "labels": "labels", } + output = template.render(templateVars) + + return output + + def db_workload_mini_graph2(self, database): + try: + db = 
cx_Oracle.connect(ORA_USERNAME, ORA_PASSWORD, "dmp01-scan/" + database) + cursor = db.cursor() + sql = """ + select VALUE from v$parameter where NAME in ('cpu_count','parallel_threads_per_cpu') + """ + cursor.execute(sql) + + # We assume that the compute capacity on each instance is cpu_count * parallel_threads_per_cpu + compute_threads = 2 # for both instances + for row in cursor: + compute_threads = compute_threads * int(row[0]) + + sql = """ + select count(*) from gv$session where status='ACTIVE' and type != 'BACKGROUND' + """ + cursor.execute(sql) + + for row in cursor: + active_sessions = row[0] + + cursor.close() + db.close() + + except cx_Oracle.DatabaseError as err: + raise + + in_use_percent = int(100*active_sessions/compute_threads) + if in_use_percent >= 100: + in_use_percent = 100 + + series = "[%s]" % in_use_percent + + templateVars = {} + template = env.get_template('gauge_workload.graph') + templateVars = { + "chart_name": "workload_" + database, + "chart_title": str(compute_threads), + "series": series, + "labels": "labels", } + output = template.render(templateVars) + + return output + + + @cherrypy.expose() + def unathorized(self, *args, **kwargs): + templateVars = {} + template = env.get_template('box_message_with_back.tpl') + box_type = "box-primary" + box_title = "Unauthorized" + box_message = "You are not authorized to use this feature." 
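Both workload helpers above estimate instance capacity as two RAC instances times `cpu_count` times `parallel_threads_per_cpu`, then express active sessions as a percentage capped at 100. The arithmetic on its own (helper name is illustrative):

```python
def workload_percent(cpu_count, threads_per_cpu, active_sessions, instances=2):
    # Capacity = instances x cpu_count x parallel_threads_per_cpu
    compute_threads = instances * cpu_count * threads_per_cpu
    in_use_percent = int(100 * active_sessions / compute_threads)
    # The dashboard caps the gauge at 100%
    return min(in_use_percent, 100)
```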
+ templateVars = { + "box_type" : box_type, + "box_title" : box_title, + "box_message" : box_message + } + output = template.render(templateVars) + return output + + + @cherrypy.expose() + def show_sync_and_actions(self, *args, **kwargs): + self.restrict_url_to_groups(["SUPERUSER","OGGOPER"]) + + cherrypy.session["sync_action_has_been_executed"] = "NO" + templateVars = {} + template = env.get_template('show_sync_and_actions.tpl') + try: + (extract, replicat) = kwargs["sync"].split("_") + except KeyError as e: + raise cherrypy.HTTPRedirect("/select_sync") + + cherrypy.session["extract"] = extract + cherrypy.session["replicat"] = replicat + ogg_sync = OGG_Sync(extract, replicat) + ogg_sync.parse_prm_headers() + ogg_sync.parse_prm_tables() + ogg_sync.build_extract_prm() + ogg_sync.build_replicat_prm() + ogg_sync.update_sync_status() + extract_file_contents = open(ogg_sync.extract_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + extract_file_contents = "".join(extract_file_contents).replace("ogguser01$", "*****") + replicat_file_contents = open(ogg_sync.replicat_prm_filename, "r", encoding = "utf-8", errors = "replace").readlines() + replicat_file_contents = "".join(replicat_file_contents).replace("ogguser01$", "*****") + + list_of_logfiles = [] + for root, dirs, files in os.walk(OGG_SERVICE_DIR + "/log/www_executions/"): + for filename in files: + if (extract in filename) and (replicat in filename): + list_of_logfiles.append(filename) + + list_of_logfiles.sort(key = str.lower, reverse = True) + + html_list_of_logfiles = "
    " + for logfile in list_of_logfiles: + html_list_of_logfiles = html_list_of_logfiles + '
  • ' + logfile + '
  • ' + html_list_of_logfiles = html_list_of_logfiles + "
" + + if DEBUG_MODE: + debug_info = "nothing" + # debug_info = ogg_sync.get_class_attributes_as_json() + else: + debug_info = "" + templateVars = { + "extract_replicat" : extract + "_" + replicat, + "extract_file_contents" : extract_file_contents, + "replicat_file_contents" : replicat_file_contents, + "extract" : ogg_sync.extract, + "replicat" : ogg_sync.replicat, + "extract_status" : ogg_sync.extract_status, + "replicat_status" : ogg_sync.replicat_status, + "list_of_logfiles" : html_list_of_logfiles, + "debug_info": debug_info + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + @cherrypy.expose() + def execute_sync_action(self, *args, **kwargs): + self.restrict_url_to_groups(["SUPERUSER","OGGOPER"]) + + if cherrypy.session["sync_action_has_been_executed"] != "YES": + action = kwargs["action"] + extract = cherrypy.session.get("extract") + replicat = cherrypy.session.get("replicat") + if action == "stop" or action == "start" or action == "full": + cherrypy.session["action"] = action + templateVars = {} + template = env.get_template('box_confirm.tpl') + box_type = "box-primary" + box_title = "Please confirm your action" + if action != "full": + box_message = "Are you sure you want to " + action.upper() + " the extract " + extract + " and the replicat " + replicat + " ? " + else: + box_message = "Are you sure you want to start a " + action.upper() + " refresh of the OGG sync " + extract + " => " + replicat + " ? 
" + templateVars = { + "box_type" : box_type, + "box_title" : box_title, + "box_message" : box_message + } + templateVars.update(Root.menu) + output = template.render(templateVars) + if action == "incremental": + extract_delta_file = kwargs["extract_delta_file"] + replicat_delta_file = kwargs["replicat_delta_file"] + + if extract_delta_file.file == None or replicat_delta_file.file == None: + templateVars = {} + template = env.get_template('box_message_with_back.tpl') + box_type = "box-primary" + box_title = "No file selected" + box_message = "Please pick both Extract/Replicat delta files before to continue." + templateVars = { + "box_type" : box_type, + "box_title" : box_title, + "box_message" : box_message + } + templateVars.update(Root.menu) + output = template.render(templateVars) + return output + + # Create extract_delta.prm + f = open(OGG_SERVICE_DIR + "/sync.d/" + extract + "_" + replicat + "/" + extract + "_delta.prm", 'wb') + while True: + data = extract_delta_file.file.read(8192) + if not data: + break + f.write(data) + f.close() + # Create replicat_delta.prm + f = open(OGG_SERVICE_DIR + "/sync.d/" + extract + "_" + replicat + "/" + replicat + "_delta.prm", 'wb') + while True: + data = replicat_delta_file.file.read(8192) + if not data: + break + f.write(data) + + cherrypy.session["action"] = action + templateVars = {} + template = env.get_template('box_confirm.tpl') + box_type = "box-primary" + box_title = "Please confirm your action" + box_message = "Are you sure you want to start a " + action.upper() + " refresh of the OGG sync " + extract + " => " + replicat + " ? 
" + templateVars = { + "box_type" : box_type, + "box_title" : box_title, + "box_message" : box_message + } + templateVars.update(Root.menu) + output = template.render(templateVars) + else: + output = "Already executed" + return output + + @cherrypy.expose() + def start_ogg_sync_in_background(self, *args, **kwargs): + self.restrict_url_to_groups(["SUPERUSER","OGGOPER"]) + + if cherrypy.session["sync_action_has_been_executed"] != "YES": + cherrypy.session["sync_action_has_been_executed"] = "YES" + action = cherrypy.session.get("action") + extract = cherrypy.session.get("extract") + replicat = cherrypy.session.get("replicat") + runid = datetime.datetime.now().strftime('%Y-%m-%d_%H_%M_%S_%f') + logfile = OGG_SERVICE_DIR + "/log/www_executions/" + extract + "_" + replicat + "_" + runid + ".log" + + try: + # Start GGSCI command in background + shellcommand = 'echo "source /home/oracle/.bash_profile; /u01/app/oracle/admin/OGG_Service/ogg_sync.py -e ' + extract + ' -r ' + replicat + ' -s ' + action + '>' + logfile +'" | at now' + logger.info("User " + cherrypy.session.get("ad_uid") + " run: " + shellcommand) + cherrypy.log(shellcommand) + cmd = subprocess.run( + shellcommand, + check=True, + shell=True, + stdout=subprocess.PIPE, + ) + except: + pass + + # Display information windows + templateVars = {} + template = env.get_template('box_wait.tpl') + box_type = "box-primary" + box_title = "Please wait..." + redirect_url = "../select_sync" + redirect_timeout = "5000" + box_message1 = "Starting JOB: " + action.upper() + " the extract " + extract + " and the replicat " + replicat + "" + box_message2 = "This page will be automaticly redirected after JOB submission." 
+ templateVars = { + "box_type" : box_type, + "box_title" : box_title, + "redirect_url" : redirect_url, + "redirect_timeout" : redirect_timeout, + "box_message1" : box_message1, + "box_message2" : box_message2 + } + templateVars.update(Root.menu) + output = template.render(templateVars) + else: + output = "Already executed" + return output + + + @cherrypy.expose + def other_url(self, *args, **kwargs): + ''' + The name of the method is mapped to the URL. This url is /other_url + Try calling this with /other_url/some/path + Try calling this with /other_url?foo=Foo&bar=Bar + Try calling this with POST data. + ''' + return '''\ + Any sub-paths on the URL are available as args: {0} + Query params _and_ POST data is available via kwargs: {1} + Headers and the HTTP method and everything else is available via + the thread-local cherrypy objects ``cherrypy.request`` and + ``cherrypy.response``. + You can get and set session values and cookies as though they are + dictionaries: + cherrypy.session['key'] = 'val' + cherrypy.request.cookie['key'] = 'val' + cherrypy.session.get('key', 'defaultval') + cherrypy.request.cookie.get('key', 'defaultval') + '''.format(args, kwargs) + + @cherrypy.expose + def test_url(self, *args, **kwargs): + ''' + The name of the method is mapped to the URL. This url is /other_url + Try calling this with /other_url/some/path + Try calling this with /other_url?foo=Foo&bar=Bar + Try calling this with POST data. + ''' + + text1 = '''\ + Any sub-paths on the URL are available as args: {0} + Query params _and_ POST data is available via kwargs: {1} + Headers and the HTTP method and everything else is available via + the thread-local cherrypy objects ``cherrypy.request`` and + ``cherrypy.response``. 
+ You can get and set session values and cookies as though they are + dictionaries: + cherrypy.session['key'] = 'val' + cherrypy.request.cookie['key'] = 'val' + cherrypy.session.get('key', 'defaultval') + cherrypy.request.cookie.get('key', 'defaultval') + '''.format(args, kwargs) + + text2 = Root.user_dict["b11251"] + + return text2 + +if __name__ == '__main__': + Root_conf = { + '/': { + 'tools.staticdir.root': OGG_SERVICE_DIR + '/www' + }, + '/AdminLTE': { + 'tools.staticdir.on': True, + 'tools.staticdir.dir': 'AdminLTE' + }, + 'global':{ + 'server.socket_host' : "FRPIVSQL2418", + 'server.socket_port' : 9026, + 'server.thread_pool' : 4, + 'tools.sessions.on' : True, + 'tools.sessions.timeout': 60, + 'tools.encode.encoding' : "Utf-8" + } + } + # RUN + cherrypy.quickstart(Root(), '/', config=Root_conf) diff --git a/tmp/2_nested_loop_01.txt b/tmp/2_nested_loop_01.txt new file mode 100644 index 0000000..263ad5e --- /dev/null +++ b/tmp/2_nested_loop_01.txt @@ -0,0 +1,96 @@ +SELECT /*+ GATHER_PLAN_STATISTICS QB_NAME(main) */ employees.* +FROM HR.employees, + ( SELECT /*+ QB_NAME(iv1) */ + trunc(hire_date, 'YYYY'), MAX(employee_id) employee_id + FROM HR.employees + GROUP BY trunc(hire_date, 'YYYY')) x +WHERE employees.employee_id=x.employee_id +/ + +EMPLOYEE_ID FIRST_NAME LAST_NAME EMAIL PHONE_NUMBER HIRE_DATE JOB_ID SALARY MANAGER_ID DEPARTMENT_ID +----------- -------------------- ------------------------- ------------------------- -------------------- ------------------- ---------- ---------- ---------- ------------- + 206 William Gietz WGIETZ 515.123.8181 2002-06-07 00:00:00 AC_ACCOUNT 8300 205 110 + 102 Lex De Haan LDEHAAN 515.123.4569 2001-01-13 00:00:00 AD_VP 17000 100 90 + 197 Kevin Feeney KFEENEY 650.507.9822 2006-05-23 00:00:00 SH_CLERK 3000 124 50 + 201 Michael Hartstein MHARTSTE 515.123.5555 2004-02-17 00:00:00 MK_MAN 13000 100 20 + 199 Douglas Grant DGRANT 650.507.9844 2008-01-13 00:00:00 SH_CLERK 2600 124 50 + 200 Jennifer Whalen JWHALEN 515.123.4444 
2003-09-17 00:00:00 AD_ASST 4400 101 10 + 198 Donald OConnell DOCONNEL 650.507.9833 2007-06-21 00:00:00 SH_CLERK 2600 124 50 + 202 Pat Fay PFAY 603.123.6666 2005-08-17 00:00:00 MK_REP 6000 201 20 + +8 rows selected. + +-------------------------------------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | +-------------------------------------------------------------------------------------------------------------------------------------------------------- +| 0 | SELECT STATEMENT | | 1 | | | 4 (100)| 8 |00:00:00.01 | 19 | | | | +| 1 | NESTED LOOPS | | 1 | 107 | 15622 | 4 (25)| 8 |00:00:00.01 | 19 | | | | +| 2 | NESTED LOOPS | | 1 | 107 | 15622 | 4 (25)| 8 |00:00:00.01 | 11 | | | | +| 3 | VIEW | | 1 | 107 | 1391 | 4 (25)| 8 |00:00:00.01 | 7 | | | | +| 4 | HASH GROUP BY | | 1 | 107 | 2354 | 4 (25)| 8 |00:00:00.01 | 7 | 1116K| 1116K| 894K (0)| +| 5 | TABLE ACCESS FULL | EMPLOYEES | 1 | 107 | 2354 | 3 (0)| 107 |00:00:00.01 | 7 | | | | +|* 6 | INDEX UNIQUE SCAN | EMP_EMP_ID_PK | 8 | 1 | | 0 (0)| 8 |00:00:00.01 | 4 | | | | +| 7 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 8 | 1 | 133 | 0 (0)| 8 |00:00:00.01 | 8 | | | | +-------------------------------------------------------------------------------------------------------------------------------------------------------- + +Query Block Name / Object Alias (identified by operation id): +------------------------------------------------------------- + + 1 - MAIN + 3 - IV1 / X@MAIN + 4 - IV1 + 5 - IV1 / EMPLOYEES@IV1 + 6 - MAIN / EMPLOYEES@MAIN + 7 - MAIN / EMPLOYEES@MAIN + +Predicate Information (identified by operation id): +--------------------------------------------------- + + 6 - access("EMPLOYEES"."EMPLOYEE_ID"="X"."EMPLOYEE_ID") + + + +SELECT /*+ GATHER_PLAN_STATISTICS QB_NAME(iv1) */ + trunc(hire_date, 'YYYY'), MAX(employee_id) 
employee_id + FROM HR.employees + GROUP BY trunc(hire_date, 'YYYY') +/ + +TRUNC(HIRE_DATE,'YY EMPLOYEE_ID +------------------- ----------- +2002-01-01 00:00:00 206 +2001-01-01 00:00:00 102 +2006-01-01 00:00:00 197 +2004-01-01 00:00:00 201 +2008-01-01 00:00:00 199 +2003-01-01 00:00:00 200 +2007-01-01 00:00:00 198 +2005-01-01 00:00:00 202 + +8 rows selected. + +------------------------------------------------------------------------------------------------------------------------------------------ +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | +------------------------------------------------------------------------------------------------------------------------------------------ +| 0 | SELECT STATEMENT | | 1 | | | 4 (100)| 8 |00:00:00.01 | 7 | | | | +| 1 | HASH GROUP BY | | 1 | 107 | 2354 | 4 (25)| 8 |00:00:00.01 | 7 | 1116K| 1116K| 892K (0)| +| 2 | TABLE ACCESS FULL| EMPLOYEES | 1 | 107 | 2354 | 3 (0)| 107 |00:00:00.01 | 7 | | | | +------------------------------------------------------------------------------------------------------------------------------------------ + + + +-------------------------------------------------------------------------------------------------------------------------------------------------------- +| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | +-------------------------------------------------------------------------------------------------------------------------------------------------------- +| 0 | SELECT STATEMENT | | 1 | | | 4 (100)| 8 |00:00:00.01 | 19 | | | | +| 1 | NESTED LOOPS | | 1 | 107 | 15622 | 4 (25)| 8 |00:00:00.01 | 19 | | | | +| 2 | NESTED LOOPS | | 1 | 107 | 15622 | 4 (25)| 8 |00:00:00.01 | 11 | | | | +| 3 | VIEW | | 1 | 107 | 1391 | 4 (25)| 8 |00:00:00.01 | 7 | | | | +|* 6 | INDEX UNIQUE SCAN | EMP_EMP_ID_PK | 8 | 1 | | 0 (0)| 8 |00:00:00.01 | 4 | | | | +| 7 | TABLE ACCESS BY 
INDEX ROWID| EMPLOYEES | 8 | 1 | 133 | 0 (0)| 8 |00:00:00.01 | 8 | | | | +-------------------------------------------------------------------------------------------------------------------------------------------------------- + + + + + diff --git a/tmp/openssl_orapki_01.txt b/tmp/openssl_orapki_01.txt new file mode 100644 index 0000000..88d96a5 --- /dev/null +++ b/tmp/openssl_orapki_01.txt @@ -0,0 +1,51 @@ +# How to Create a New Wallet from an Existing Private Key and Certificates using OpenSSL and orapki (Doc ID 2769138.1) +openssl pkcs12 -export \ + -in /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.crt \ + -inkey /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.key \ + -certfile /app/oracle/staging_area/TLS_poc/openssl_files/rootCA.pem \ + -out /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.p12 + +# create an empty wallet +orapki wallet create -wallet /app/oracle/staging_area/TLS_poc/wallet -pwd "Secret00!" -auto_login_local + +# we can import directly both user / trusted certificate from .p12 file +orapki wallet import_pkcs12 -wallet /app/oracle/staging_area/TLS_poc/wallet -pwd "Secret00!" \ + -pkcs12file /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.p12 + +# or we can add separately trusted certificate and user certificate +orapki wallet add -wallet /app/oracle/staging_area/TLS_poc/wallet -pwd "Secret00!" \ + -trusted_cert -cert /app/oracle/staging_area/TLS_poc/openssl_files/rootCA.pem + +-> THIS fails +orapki wallet add -wallet /app/oracle/staging_area/TLS_poc/wallet -pwd "Secret00!" 
\ + -user_cert -cert /app/oracle/staging_area/TLS_poc/openssl_files/togoria.swgalaxy.crt + +orapki wallet import_private_key -wallet /oracle/wallet/location -pwd oracle_wallet_password -pvtkeyfile /tmp/encrypted.key -pvtkeypwd long_key_encryption_password -cert /etc/pki/tls/private/servername.crt + +# How to Remove Trusted Certificate From Oracle Wallet (Doc ID 2257925.1) +orapki wallet remove -trusted_cert_all -wallet /app/oracle/staging_area/TLS_poc/wallet -pwd "Secret00!" + +# display wallet contents +orapki wallet display -wallet /app/oracle/staging_area/TLS_poc/wallet -pwd "Secret00!" + + +export TNS_ADMIN=/app/oracle/staging_area/TLS_poc/tnsadmin + + +# client side +orapki wallet add -wallet /app/oracle/staging_area/TLS_poc/wallet -pwd "Secret00!" \ + -user_cert -cert /app/oracle/staging_area/TLS_poc/openssl_files/wayland.swgalaxy.fullchain.crt + + + +# listener registration +alter system set local_listener="(DESCRIPTION_LIST = + (DESCRIPTION = + (ADDRESS = (PROTOCOL = TCPS)(HOST = togoria.swgalaxy)(PORT = 24000)) + (ADDRESS = (PROTOCOL = TCP)(HOST = togoria.swgalaxy)(PORT = 1521)) + ) +)" +scope=both sid='*'; + +alter system register; + diff --git a/tmp/postgres_01.txt b/tmp/postgres_01.txt new file mode 100644 index 0000000..040bfe9 --- /dev/null +++ b/tmp/postgres_01.txt @@ -0,0 +1,30 @@ +export PATH=$PATH:/app/postgres/rdbms/17.2/bin +export PGDATA=/app/postgres/data/db01 +export PGLOG=/app/postgres/log/db01.log +export PGPORT=5501 + +alias restart='pg_ctl stop; pg_ctl start --pgdata $PGDATA --log $PGLOG' + + +initdb -D $PGDATA + + +# in $PGDATA/postgresql.conf +listen_addresses='*' +port=5501 + +pg_ctl start --pgdata $PGDATA --log $PGLOG + + +create role jabba login password 'secret'; +create database jabba; +alter database jabba owner to jabba; + + +https://smallstep.com/hello-mtls/doc/combined/postgresql/psql + +SELECT backend_start,ssl,version,datname as "Database name", usename as "User name",client_addr, application_name, backend_type +FROM 
pg_stat_ssl
+JOIN pg_stat_activity
+ON pg_stat_ssl.pid = pg_stat_activity.pid
+ORDER BY backend_start DESC,ssl;
\ No newline at end of file
diff --git a/tmp/vacances.txt b/tmp/vacances.txt
new file mode 100644
index 0000000..b72b9b4
--- /dev/null
+++ b/tmp/vacances.txt
@@ -0,0 +1,2 @@
+Club Lookéa Torrequebrada - Choix Flex / Malaga
+Club Marmara Sandy Beach / Corfou
diff --git a/wallet/wallet_01.txt b/wallet/wallet_01.txt
new file mode 100644
index 0000000..0755359
--- /dev/null
+++ b/wallet/wallet_01.txt
@@ -0,0 +1,106 @@
+-- http://www.br8dba.com/store-db-credentials-in-oracle-wallet/
+-- https://franckpachot.medium.com/19c-ezconnect-and-wallet-easy-connect-and-external-password-file-8e326bb8c9f5
+
+# create 2 users in a PDB
+alter session set container=NIHILUS;
+show con_name
+show pdbs
+
+grant create session to WOMBAT identified by "CuteAnimal99#";
+grant create session to OTTER identified by "CuteAnimal@88";
+
+
+# create directory for tnsnames.ora
+export TNS_ADMIN=/home/oracle/tmp/tns
+mkdir -p ${TNS_ADMIN}
+
+# add TNS aliases in $TNS_ADMIN/tnsnames.ora
+WOMBAT_NIHILUS=(DESCRIPTION=
+    (CONNECT_DATA=
+      (SERVICE_NAME=NIHILUS)
+    )
+    (ADDRESS=
+      (PROTOCOL=tcp)
+      (HOST=bakura)
+      (PORT=1521)
+    )
+  )
+
+OTTER_NIHILUS=(DESCRIPTION=
+    (CONNECT_DATA=
+      (SERVICE_NAME=NIHILUS)
+    )
+    (ADDRESS=
+      (PROTOCOL=tcp)
+      (HOST=bakura)
+      (PORT=1521)
+    )
+  )
+
+
+# test connections using the TNS aliases
+sqlplus /nolog
+connect WOMBAT/"CuteAnimal99#"@WOMBAT_NIHILUS
+connect OTTER/"CuteAnimal@88"@OTTER_NIHILUS
+
+
+
+# add the following lines in $TNS_ADMIN/sqlnet.ora
+WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/home/oracle/tmp/wdir)))
+SQLNET.WALLET_OVERRIDE=TRUE
+
+
+# create wallet
+export MY_WALLET_DIR=/home/oracle/tmp/wdir
+mkdir -p ${MY_WALLET_DIR}
+orapki wallet create -wallet ${MY_WALLET_DIR} -auto_login
+
+# files generated in ${MY_WALLET_DIR}
+# check for 600 permissions on all these files, otherwise the connection using the wallet will not work
+ewallet.p12.lck
+ewallet.p12
+cwallet.sso.lck
+cwallet.sso
+
+
+# add credentials
+mkstore -wrl ${MY_WALLET_DIR} -createCredential WOMBAT_NIHILUS WOMBAT "CuteAnimal99#"
+mkstore -wrl ${MY_WALLET_DIR} -createCredential OTTER_NIHILUS OTTER "CuteAnimal@88"
+
+# list wallet credentials
+mkstore -wrl ${MY_WALLET_DIR} -listCredential
+
+# update or delete an entry
+mkstore -wrl ${MY_WALLET_DIR} -modifyCredential WOMBAT_NIHILUS WOMBAT "CuteAnimal99#"
+mkstore -wrl ${MY_WALLET_DIR} -deleteCredential OTTER_NIHILUS
+
+# test connection using the wallet
+sqlplus /@WOMBAT_NIHILUS
+show user
+
+sqlplus /@OTTER_NIHILUS
+show user
+
+# NOTES
+# if we want to store the passwords of multiple users in the same wallet, we must use multiple TNS aliases because the TNS alias is a unique key in the wallet
+# if the wallet has been created with the -auto_login option, the wallet password is required to add/modify/delete/list wallet credentials
+# but it is not required to establish connections
+
+
+# using ezConnect
+#################
+
+# basically, when using ezConnect we have the same "TNS alias", ex: //bakura:1521/NIHILUS
+# how to add multiple credentials?
using a dummy ezConnect parameter introduced in 19c:
+# https://franckpachot.medium.com/19c-easy-connect-e0c3b77968d7
+
+# in our case we will add a dummy parameter "ConnectAs" in order to create distinct ezConnect strings depending on the username
+mkstore -wrl ${MY_WALLET_DIR} -createCredential //bakura:1521/NIHILUS?ConnectAs=WOMBAT WOMBAT "CuteAnimal99#"
+mkstore -wrl ${MY_WALLET_DIR} -createCredential //bakura:1521/NIHILUS?ConnectAs=OTTER OTTER "CuteAnimal@88"
+
+sqlplus /@//bakura:1521/NIHILUS?ConnectAs=WOMBAT
+show user
+
+sqlplus /@//bakura:1521/NIHILUS?ConnectAs=OTTER
+show user
+
diff --git a/winpush.cmd b/winpush.cmd
new file mode 100644
index 0000000..6020d73
--- /dev/null
+++ b/winpush.cmd
@@ -0,0 +1,14 @@
+@echo off
+:: Get the current date and time in the desired format
+for /f "tokens=2 delims==" %%a in ('wmic os get localdatetime /value') do set datetime=%%a
+set year=%datetime:~0,4%
+set month=%datetime:~4,2%
+set day=%datetime:~6,2%
+set hour=%datetime:~8,2%
+set minute=%datetime:~10,2%
+set second=%datetime:~12,2%
+set commitname=%year%-%month%-%day%:%hour%:%minute%:%second%
+
+git add .
+git commit -m %commitname% +git push -u origin main diff --git a/zabbix/agent2_rpm_download_and_list_contents.txt b/zabbix/agent2_rpm_download_and_list_contents.txt new file mode 100644 index 0000000..5dbd333 --- /dev/null +++ b/zabbix/agent2_rpm_download_and_list_contents.txt @@ -0,0 +1,33 @@ +dnf download zabbix-agent2.x86_64 + +ls -l zabbix-agent2-6.4.8-release2.el8.x86_64.rpm +-rw-r--r-- 1 root root 5893140 Nov 19 16:48 zabbix-agent2-6.4.8-release2.el8.x86_64.rpm + +rpm -ql zabbix-agent2.x86_64 +/etc/logrotate.d/zabbix-agent2 +/etc/zabbix/zabbix_agent2.conf +/etc/zabbix/zabbix_agent2.d +/etc/zabbix/zabbix_agent2.d/plugins.d/ceph.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/docker.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/memcached.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/modbus.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/mqtt.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/mysql.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/oracle.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/redis.conf +/etc/zabbix/zabbix_agent2.d/plugins.d/smart.conf +/usr/lib/.build-id +/usr/lib/.build-id/20 +/usr/lib/.build-id/20/ddcdda34e7f63ee72136b2b411f97e984b9712 +/usr/lib/systemd/system/zabbix-agent2.service +/usr/lib/tmpfiles.d/zabbix_agent2.conf +/usr/sbin/zabbix_agent2 +/usr/share/doc/zabbix-agent2 +/usr/share/doc/zabbix-agent2/AUTHORS +/usr/share/doc/zabbix-agent2/COPYING +/usr/share/doc/zabbix-agent2/ChangeLog +/usr/share/doc/zabbix-agent2/NEWS +/usr/share/doc/zabbix-agent2/README +/usr/share/man/man8/zabbix_agent2.8.gz +/var/log/zabbix +/var/run/zabbix \ No newline at end of file diff --git a/zabbix/draft.txt b/zabbix/draft.txt new file mode 100644 index 0000000..1ac6d49 --- /dev/null +++ b/zabbix/draft.txt @@ -0,0 +1,103 @@ +docker run -d \ + --name postgres \ + -e POSTGRES_PASSWORD=secret \ + -e PGDATA=/var/lib/postgresql/data/pgdata \ + -v /app/persistent_docker/postgres/data:/var/lib/postgresql/data \ + postgres + +docker run -it --rm --network some-network postgres psql -h 
postgres -U postgres + +select pid as process_id, + usename as username, + datname as database_name, + client_addr as client_address, + application_name, + backend_start, + state, + state_change +from pg_stat_activity; + + +docker update --restart unless-stopped $(docker ps -q) + +create database zabbix; +CREATE ROLE zabbix LOGIN PASSWORD 'secret'; +ALTER DATABASE zabbix OWNER TO zabbix; + + +docker run --name zabbix-server -p 10051:10051 -e DB_SERVER_HOST="socorro" -e DB_SERVER_PORT="5500" -e POSTGRES_USER="zabbix" -e POSTGRES_PASSWORD="secret" --init -d zabbix/zabbix-server-pgsql:latest + +docker exec -ti zabbix-server /bin/bash +docker exec -ti zabbix-agent /bin/bash + +docker run --name zabbix-web-service -p 10053:10053 -e ZBX_ALLOWEDIP="socorro" --cap-add=SYS_ADMIN -d zabbix/zabbix-web-service:latest + +docker run --name zabbix-web-nginx-pgsql -p 8080:8080 -p 8443:8443 -e DB_SERVER_HOST="socorro" -e DB_SERVER_PORT="5500" -e POSTGRES_USER="zabbix" -e POSTGRES_PASSWORD="secret" -e ZBX_SERVER_HOST="socorro" -d zabbix/zabbix-web-nginx-pgsql:latest + +Default username/password is Admin/zabbix. It will pop up a wizard window which will guide you through the final configuration of the server. 
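The DB_SERVER_HOST / DB_SERVER_PORT / POSTGRES_* variables passed to the containers above map directly onto libpq connection parameters. A minimal Python sketch (hypothetical helper, values taken from these notes) that turns such an environment block into a connection string for ad-hoc psql/psycopg tests:

```python
# Sketch: map the container's DB_* / POSTGRES_* variables onto a libpq
# connection string. Host "socorro" and port 5500 are the values used in
# these notes; adjust for your environment.
def libpq_dsn(env):
    mapping = {
        "DB_SERVER_HOST": "host",
        "DB_SERVER_PORT": "port",
        "POSTGRES_USER": "user",
        "POSTGRES_DB": "dbname",
    }
    # keep insertion order of the env dict; ignore unrelated variables
    return " ".join(f"{mapping[k]}={v}" for k, v in env.items() if k in mapping)

print(libpq_dsn({
    "DB_SERVER_HOST": "socorro",
    "DB_SERVER_PORT": "5500",
    "POSTGRES_USER": "zabbix",
    "POSTGRES_DB": "zabbix",
}))
```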
+ +docker run --name zabbix-agent -p 10050:10050 -e ZBX_HOSTNAME="socorro" -e ZBX_SERVER_HOST="socorro" --init -d zabbix/zabbix-agent:latest + + +alias cclean='docker stop $(docker ps -a -q); docker rm $(docker ps -a -q)' +alias listen='lsof -w -i -P | grep -i "listen"' + +---------------------------------- +docker-compose.yaml + + +version: '3.1' + +services: + + db: + image: postgres:15.4 + restart: always + environment: + POSTGRES_PASSWORD: secret + PGDATA: /var/lib/postgresql/data/pgdata + volumes: + - /app/persistent_docker/postgres15/data:/var/lib/postgresql/data + network_mode: "host" + + +docker run --name zabbix-server -p 10051:10051 -e DB_SERVER_HOST="socorro" -e DB_SERVER_PORT="5500" -e POSTGRES_USER="zabbix" -e POSTGRES_PASSWORD="secret" --init -d zabbix/zabbix-server-pgsql:latest + +docker run --name zabbix-server --network host -e DB_SERVER_HOST="socorro" -e DB_SERVER_PORT="5500" -e POSTGRES_USER="zabbix" -e POSTGRES_PASSWORD="secret" --init -d zabbix/zabbix-server-pgsql:latest + + +docker run --name zabbix-agent -p 10050:10050 -e ZBX_HOSTNAME="socorro" -e ZBX_SERVER_HOST="socorro" --init -d zabbix/zabbix-agent:latest +docker run --name zabbix-agent --network host -e ZBX_PASSIVESERVERS="0.0.0.0/0" -e ZBX_HOSTNAME="socorro" -e ZBX_SERVER_HOST="socorro" --init -d zabbix/zabbix-agent:latest + + +docker run --name some-zabbix-agent -e ZBX_HOSTNAME="some-hostname" -e ZBX_SERVER_HOST="some-zabbix-server" --init -d zabbix/zabbix-agent2:tag +docker run --name zabbix-agent -p 10050:10050 -e ZBX_HOSTNAME="socorro" -e ZBX_SERVER_HOST="socorro" --init -d zabbix/zabbix-agent2:latest +docker run --name zabbix-agent --network host -e ZBX_PASSIVESERVERS="0.0.0.0/0" -e ZBX_HOSTNAME="socorro" -e ZBX_SERVER_HOST="socorro" --init -d zabbix/zabbix-agent2:latest + + +docker run --name zabbix-web-nginx-pgsql -p 8080:8080 -p 8443:8443 -e DB_SERVER_HOST="socorro" -e DB_SERVER_PORT="5500" -e POSTGRES_USER="zabbix" -e POSTGRES_PASSWORD="secret" -e ZBX_SERVER_HOST="socorro" -d 
zabbix/zabbix-web-nginx-pgsql:latest + +docker run --name zabbix-web-nginx-pgsql --network host -e DB_SERVER_HOST="socorro" -e DB_SERVER_PORT="5500" -e POSTGRES_USER="zabbix" -e POSTGRES_PASSWORD="secret" -e ZBX_SERVER_HOST="socorro" -d zabbix/zabbix-web-nginx-pgsql:latest + + + +CREATE USER zbx_monitor WITH PASSWORD 'secret' INHERIT; +GRANT pg_monitor TO zbx_monitor; + + + + + +Plugins.Oracle.Sessions.ANDOPRD.Uri=tcp://togoria:1521 +Plugins.Oracle.Sessions.ANDOPRD.User=zabbix_mon +Plugins.Oracle.Sessions.ANDOPRD.Password=secret +Plugins.Oracle.Sessions.ANDOPRD.Service=ANDOPRD + + +https://www.zabbix.com/download?zabbix=6.4&os_distribution=rocky_linux&os_version=8&components=agent_2&db=&ws= + + + + + + diff --git a/zabbix/install_01.txt b/zabbix/install_01.txt new file mode 100644 index 0000000..4b0053e --- /dev/null +++ b/zabbix/install_01.txt @@ -0,0 +1,332 @@ +# in my case PostgreSQL run in docker with network_mode: "host" and the server port is 5500 + +docker ps -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +103200fc07a3 postgres:15.4 "docker-entrypoint.s…" 7 days ago Up About an hour postgres15-db-1 + +# use interactive shell on docker container in order to run psql +docker exec -it postgres15-db-1 bash + +psql -p 5500 -U postgres + +# list users +\du + +# list databases +\l + +# cleanup old ZABBIX database install +drop database zabbix; +drop role zabbix; + +# create database & user +create database zabbix; +create role zabbix login password 'secret'; +alter database zabbix owner TO zabbix; + +# test connection +psql -p 5500 -U zabbix -d zabbix + +# list tables +\dt + + +# get official docker files +mkdir -p /app/persistent_docker/zabbix +cd /app/persistent_docker/zabbix +git clone https://github.com/zabbix/zabbix-docker.git + +# config files +############## + +# use docker-compose_v3_alpine_pgsql_latest.yaml docker compose file to create our custom compose file +cp docker-compose_v3_alpine_pgsql_latest.yaml zabbix.yaml + +zabbix.yaml +----------- 
+version: '3.5' +services: + zabbix-server: + image: zabbix/zabbix-server-pgsql:alpine-6.4-latest + ports: + - "10051:10051" + volumes: + - /etc/localtime:/etc/localtime:ro + - /etc/timezone:/etc/timezone:ro + - ./zbx_env/usr/lib/zabbix/alertscripts:/usr/lib/zabbix/alertscripts:ro + - ./zbx_env/usr/lib/zabbix/externalscripts:/usr/lib/zabbix/externalscripts:ro + - ./zbx_env/var/lib/zabbix/dbscripts:/var/lib/zabbix/dbscripts:ro + - ./zbx_env/var/lib/zabbix/export:/var/lib/zabbix/export:rw + - ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:ro + - ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro + - ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro + - ./zbx_env/var/lib/zabbix/mibs:/var/lib/zabbix/mibs:ro + - ./zbx_env/var/lib/zabbix/snmptraps:/var/lib/zabbix/snmptraps:ro +# - ./env_vars/.ZBX_DB_CA_FILE:/run/secrets/root-ca.pem:ro +# - ./env_vars/.ZBX_DB_CERT_FILE:/run/secrets/client-cert.pem:ro +# - ./env_vars/.ZBX_DB_KEY_FILE:/run/secrets/client-key.pem:ro + ulimits: + nproc: 65535 + nofile: + soft: 20000 + hard: 40000 + deploy: + resources: + limits: + cpus: '0.70' + memory: 1G + reservations: + cpus: '0.5' + memory: 512M + env_file: + - ./env_vars/.env_db_pgsql + - ./env_vars/.env_srv + secrets: + - POSTGRES_USER + - POSTGRES_PASSWORD + networks: + zbx_net_backend: + aliases: + - zabbix-server + - zabbix-server-pgsql + - zabbix-server-alpine-pgsql + - zabbix-server-pgsql-alpine + zbx_net_frontend: +# devices: +# - "/dev/ttyUSB0:/dev/ttyUSB0" + stop_grace_period: 30s + sysctls: + - net.ipv4.ip_local_port_range=1024 64999 + - net.ipv4.conf.all.accept_redirects=0 + - net.ipv4.conf.all.secure_redirects=0 + - net.ipv4.conf.all.send_redirects=0 + labels: + com.zabbix.description: "Zabbix server with PostgreSQL database support" + com.zabbix.company: "Zabbix LLC" + com.zabbix.component: "zabbix-server" + com.zabbix.dbtype: "pgsql" + com.zabbix.os: "alpine" + + + zabbix-web-nginx-pgsql: + image: zabbix/zabbix-web-nginx-pgsql:alpine-6.4-latest + 
ports: + - "80:8080" + - "443:8443" + volumes: + - /etc/localtime:/etc/localtime:ro + - /etc/timezone:/etc/timezone:ro + - ./zbx_env/etc/ssl/nginx:/etc/ssl/nginx:ro + - ./zbx_env/usr/share/zabbix/modules/:/usr/share/zabbix/modules/:ro +# - ./env_vars/.ZBX_DB_CA_FILE:/run/secrets/root-ca.pem:ro +# - ./env_vars/.ZBX_DB_CERT_FILE:/run/secrets/client-cert.pem:ro +# - ./env_vars/.ZBX_DB_KEY_FILE:/run/secrets/client-key.pem:ro + deploy: + resources: + limits: + cpus: '0.70' + memory: 512M + reservations: + cpus: '0.5' + memory: 256M + env_file: + - ./env_vars/.env_db_pgsql + - ./env_vars/.env_web + secrets: + - POSTGRES_USER + - POSTGRES_PASSWORD + depends_on: + - zabbix-server + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8080/ping"] + interval: 10s + timeout: 5s + retries: 3 + start_period: 30s + networks: + zbx_net_backend: + aliases: + - zabbix-web-nginx-pgsql + - zabbix-web-nginx-alpine-pgsql + - zabbix-web-nginx-pgsql-alpine + zbx_net_frontend: + stop_grace_period: 10s + sysctls: + - net.core.somaxconn=65535 + labels: + com.zabbix.description: "Zabbix frontend on Nginx web-server with PostgreSQL database support" + com.zabbix.company: "Zabbix LLC" + com.zabbix.component: "zabbix-frontend" + com.zabbix.webserver: "nginx" + com.zabbix.dbtype: "pgsql" + com.zabbix.os: "alpine" + +networks: + zbx_net_frontend: + driver: bridge + driver_opts: + com.docker.network.enable_ipv6: "false" + ipam: + driver: default + config: + - subnet: 172.16.238.0/24 + zbx_net_backend: + driver: bridge + driver_opts: + com.docker.network.enable_ipv6: "false" + internal: true + ipam: + driver: default + config: + - subnet: 172.16.239.0/24 + +volumes: + snmptraps: + +secrets: + POSTGRES_USER: + file: ./env_vars/.POSTGRES_USER + POSTGRES_PASSWORD: + file: ./env_vars/.POSTGRES_PASSWORD + + +./env_vars/.env_db_pgsql +------------------------ +DB_SERVER_HOST=socorro.swgalaxy +DB_SERVER_PORT=5500 +POSTGRES_USER=zabbix +POSTGRES_PASSWORD=secret +POSTGRES_DB=zabbix + + + +# start 
docker containers, check status and logs
+docker compose -f zabbix.yaml up -d
+docker ps -a
+docker logs zabbix-docker-zabbix-server-1
+docker logs zabbix-docker-zabbix-web-nginx-pgsql-1
+
+# download zabbix agent: zabbix_agent-6.4.8-linux-3.0-amd64-static.tar.gz
+# uncompress archive to /app/zabbix_agent2
+
+cd /app/zabbix_agent2
+gunzip -c zabbix_agent-6.4.8-linux-3.0-amd64-static.tar.gz | tar -xvf -
+
+# update zabbix_agentd.conf file:
+Server=172.16.238.0/24 <- frontend network defined in docker compose file
+ServerActive=192.168.0.91 <- IP of the docker host
+AllowRoot=1 <- if you want to allow the agent to run under the root account
+
+
+
+# in my case I prefer to run the agent as a non-root user
+groupadd zabbixag
+useradd zabbixag -g zabbixag -G zabbixag
+
+# switch to the agent user and start:
+su - zabbixag
+/app/zabbix_agent2/sbin/zabbix_agentd -c /app/zabbix_agent2/conf/zabbix_agentd.conf
+
+# check agent process and log
+ps -edf | grep -i agent
+tail -f /tmp/zabbix_agentd.log
+
+# interesting: when I deployed the agent on a remote host, I had to put in the agent configuration file:
+Server=192.168.0.91 <- IP of the docker host
+
+# Setup a notification test when a specific file exists
+#######################################################
+# https://aaronsaray.com/2020/zabbix-test-notification/
+- select a host
+- create a new ITEM:
+  - Name: (my) check if file /tmp/test exists
+  - Type: Zabbix agent
+  - Key: vfs.file.exists[/tmp/test]
+  - Update interval: 1m
+- create a new TRIGGER:
+  - Name: (my) raise error if file /tmp/test exists
+  - Severity: Disaster
+  - Expression: last(/bakura.swgalaxy/vfs.file.exists[/tmp/test])=1
+
+
+# Setup notifications to Opsgenie using webhook
+###############################################
+# https://www.zabbix.com/integrations/opsgenie
+
+From Opsgenie we need:
+  - Opsgenie API URL: https://api.eu.opsgenie.com/v2/alerts
+  - Your Opsgenie API KEY (token): 58798dad-fd7f-4f97-a4cc-85a45174fb29
+  - Your Opsgenie Web URL: 
https://swgalaxy.app.opsgenie.com
+
+In Zabbix:
+
+  1. Set up an Opsgenie media type
+
+  - define the global macro {$ZABBIX.URL}, example: {$ZABBIX.URL}=http://192.168.0.91
+    (Menu: Administration/Macros)
+  - Create a copy of the Opsgenie media type (export in yaml, change media type name, import from yaml)
+    (Menu: Alerts/Media type)
+  - In your new Opsgenie media type, configure:
+    - opsgenie_api
+    - opsgenie_token
+    - opsgenie_web
+  - Enable the media type and test:
+    - alert_message: MEDIA TYPE TEST
+    - event_id: 12345
+    - event_source: 0
+    - event_update_status: 0
+    - event_value: 1
+
+  2. Associate the media type with the user profile
+
+  - Menu: User settings/Profile/Media
+  - Click on Add
+  - Send to: (not used but mandatory to fill in)
+  - customize (default values seem to be fine)
+  - Don't forget to click on the Update button
+
+  3. Enable triggering alerts to administrators via all media
+
+  - Menu: Alerts / Actions / Trigger Actions
+  - Enable the action: Report problems to Zabbix administrators
+    (the value of Operation should be: Send message to user groups: Zabbix administrators via all media)
+
+# using zabbix_sender to send custom values
+###########################################
+
+
+On the host bakura.swgalaxy I will create a new item:
+
+Name: (my) item from zabbix_sender
+Type: Zabbix trapper
+Key: my_key_custom_integer
+Type of information: Numeric (unsigned)
+
+
+From the host bakura.swgalaxy:
+/app/oracle/zabbix_agent/bin/zabbix_sender -c /app/oracle/zabbix_agent/conf/zabbix_agentd.conf -s "bakura.swgalaxy" -k my_key_custom_integer -o 39
+
+
+# Zabbix proxy
+##############
+
+Docker Compose file example:
+
+zabbix_proxy.yaml
+-----------------
+services:
+  exegol-zabbix-proxy:
+    image: zabbix/zabbix-proxy-sqlite3:latest
+    restart: always
+    environment:
+      ZBX_HOSTNAME: exegol.swgalaxy
+      ZBX_PROXYMODE: 0
+      ZBX_SERVER_HOST: socorro.swgalaxy
+
+
+To declare the proxy in the Web Interface: Administration / Proxies / Create proxy
+
+
+
+
+
diff --git 
a/zabbix/poc.1/draft_02.txt b/zabbix/poc.1/draft_02.txt
new file mode 100644
index 0000000..18f017f
--- /dev/null
+++ b/zabbix/poc.1/draft_02.txt
@@ -0,0 +1,11 @@
+Discover:
+- discover DB instances (15m)
+- discover listeners (15m)
+- discover tablespaces (15m)
+
+
+Items:
+- instance status (30s)
+- listener status (30s)
+- tablespace free/max (1m)
+
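The discovery rules sketched above could be fed by a custom script returning Zabbix low-level discovery JSON. A minimal Python sketch, under the assumptions that the item prototypes use a {#INSTANCE} macro (a name chosen here for illustration) and that a bare JSON array is acceptable (supported since Zabbix 4.2; older versions need a {"data": [...]} wrapper):

```python
import json

# Sketch: the payload a "discover DB instances" script could emit.
# Each dict becomes one discovered entity; {#INSTANCE} is a hypothetical
# LLD macro consumed by item prototypes such as "instance status".
def lld_instances(names):
    return json.dumps([{"{#INSTANCE}": n} for n in names])

print(lld_instances(["ANDOPRD", "NIHILUS"]))
```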