
Resuming Standby Databases after a RESETLOGS at the Primary (without RECOVER DATABASE using Online Redo Logs!)

Hemant K Chitale - Sun, 2021-09-26 10:30

Suppose you lose the Online Redo Logs of the Primary [after it has been shut down with SHUTDOWN NORMAL/IMMEDIATE] and you have to OPEN RESETLOGS: how do you handle the Standbys?

The scenario demonstrated below works only if you did not have to issue a RESTORE DATABASE and RECOVER DATABASE at the Primary. In that case you would have to FLASHBACK all the Standbys to a time/SCN earlier than the recovery time, or rebuild the Standbys.

In my demonstration below, the RECOVER DATABASE was only from ArchiveLogs that had already been shipped to (or applied at) the Standby, up to Sequence#505.

You must verify that the last active Online Redo Log at the Primary has been archived and shipped (not necessarily applied) to the Standby[s] before you release your Primary for storage/server maintenance. If they were not shipped automatically, you can manually copy them to the Standbys and register them there with the ALTER DATABASE REGISTER LOGFILE command.
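One way to confirm that the highest archived Sequence# at the Primary has actually reached each Standby is to compare V$ARCHIVED_LOG at both sites. This is only a sketch; the DEST_ID value depends on your log_archive_dest_n configuration:

```sql
-- On the Primary: highest sequence archived to the local destination
SELECT MAX(SEQUENCE#) AS last_archived
FROM   V$ARCHIVED_LOG
WHERE  DEST_ID = 1 AND THREAD# = 1;

-- On each Standby: highest sequence received ...
SELECT MAX(SEQUENCE#) AS last_received
FROM   V$ARCHIVED_LOG
WHERE  THREAD# = 1;

-- ... and highest sequence actually applied
SELECT MAX(SEQUENCE#) AS last_applied
FROM   V$ARCHIVED_LOG
WHERE  THREAD# = 1 AND APPLIED = 'YES';
```

If last_received at every Standby matches last_archived at the Primary (here, 505), it is safe to release the Primary for maintenance even if last_applied lags behind.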

Let's assume that the Primary had been shut down for storage/server maintenance (but the Standbys were still running). The Standbys are *not* shut down in my scenario.

Then, during the storage/server maintenance, the disk/filesystem holding the Online Redo Logs was lost/corrupted. This necessitated an OPEN RESETLOGS.

The key thing is that an OPEN RESETLOGS creates a new Incarnation of the database. What happens at the Standbys subsequently?

Let's assume that the Primary created a Controlfile Autobackup and then did SHUTDOWN IMMEDIATE before the storage/server maintenance.
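A Controlfile Autobackup like the one shown below is produced automatically at the end of an RMAN backup once autobackup is enabled. A minimal sketch of enabling it and forcing one out:

```sql
-- Run in RMAN: ensure controlfile/spfile autobackups are taken
CONFIGURE CONTROLFILE AUTOBACKUP ON;

-- Any backup, even of the current controlfile alone, will then
-- also produce an autobackup piece like the one listed below
BACKUP CURRENT CONTROLFILE;
```

Having a recent autobackup is what makes the later RESTORE CONTROLFILE FROM AUTOBACKUP step possible without a recovery catalog.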



Starting Control File and SPFILE Autobackup at 26-SEP-21
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-SEP-21

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 26 21:34:25 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL>
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>


Entries from the alert log show that Sequence#505 was the last Online Redo Log that was active, and that it was also archived at shutdown (as is done in recent versions):

2021-09-26T21:33:34.741801+08:00
Thread 1 advanced to log sequence 505 (LGWR switch), current SCN: 15237281
Current log# 1 seq# 505 mem# 0: /opt/oracle/oradata/ORCLCDB/redo01.log
2021-09-26T21:33:35.533318+08:00
NET (PID:13253): Archived Log entry 1295 added for T-1.S-504 ID 0xa7521ccd LAD:1
2021-09-26T21:34:40.355271+08:00
Shutting down ORACLE instance (immediate) (OS id: 13897)
...
...
ALTER DATABASE CLOSE NORMAL
Stopping Emon pool
2021-09-26T21:34:54.826891+08:00
alter pluggable database all close immediate
Completed: alter pluggable database all close immediate
alter pluggable database all close immediate
Completed: alter pluggable database all close immediate
2021-09-26T21:34:55.811566+08:00

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
Stopping Emon pool
Closing sequence subsystem (1373431005).
2021-09-26T21:34:56.082543+08:00
Stopping change tracking
2021-09-26T21:35:00.702496+08:00
TT04 (PID:4521): Shutdown in progress, stop CF update
2021-09-26T21:35:01.439450+08:00
Thread 1 advanced to log sequence 506 (thread close)
2021-09-26T21:35:01.537622+08:00
LGWR (PID:4387): Waiting for ORLs to be archived
2021-09-26T21:35:01.694980+08:00
ARC0 (PID:4457): Archived Log entry 1298 added for T-1.S-505 ID 0xa7521ccd LAD:1
2021-09-26T21:35:01.709616+08:00
LGWR (PID:4387): ORLs successfully archived
2021-09-26T21:35:02.711736+08:00
Shutting down archive processes
2021-09-26T21:35:02.712053+08:00
TT00 (PID:4453): Gap Manager exiting
2021-09-26T21:35:03.712018+08:00
Archiving is disabled
2021-09-26T21:35:03.712297+08:00
ARC3 (PID:4464): ARCH shutting down




However, it so happens that the Online Redo Logs are "lost" during server/storage maintenance. Maybe even the Controlfiles are "lost". I decide to RESTORE CONTROLFILE and then OPEN RESETLOGS.
In the commands below, I deliberately use the SQL*Plus command line so that I can see the ArchiveLog Sequence#s that are applied before I OPEN RESETLOGS (the RMAN command line would hide these details), and I know that I can CANCEL and stop after Sequence#505.

oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 26 21:39:59 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 1207955544 bytes
Fixed Size 9134168 bytes
Variable Size 872415232 bytes
Database Buffers 318767104 bytes
Redo Buffers 7639040 bytes
SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-00205: error in identifying control file, check alert log for more info


SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Sep 26 21:40:34 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (not mounted)

RMAN> restore controlfile from autobackup;

Starting restore at 26-SEP-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=245 device type=DISK

recovery area destination: /opt/oracle/FRA/ORCLCDB
database name (or database unique name) used for search: ORCLCDB
channel ORA_DISK_1: AUTOBACKUP /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/opt/oracle/oradata/ORCLCDB/control01.ctl
output file name=/opt/oracle/oradata/ORCLCDB/control02.ctl
Finished restore at 26-SEP-21

RMAN>
RMAN> alter database mount;

released channel: ORA_DISK_1
Statement processed

RMAN> alter database open;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 09/26/2021 21:41:17
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

RMAN>
RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 09/26/2021 21:41:35
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/opt/oracle/oradata/ORCLCDB/system01.dbf'

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 26 21:54:27 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> recover database using backup controlfile until cancel;
ORA-00279: change 15237057 generated at 09/26/2021 21:28:26 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/ORCLCDB/1_503_1036108814.dbf
ORA-00280: change 15237057 for thread 1 is in sequence #503


Specify log: {RET=suggested | filename | AUTO | CANCEL}

ORA-00279: change 15237253 generated at 09/26/2021 21:33:26 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/ORCLCDB/1_504_1036108814.dbf
ORA-00280: change 15237253 for thread 1 is in sequence #504
ORA-00278: log file '/opt/oracle/archivelog/ORCLCDB/1_503_1036108814.dbf' no longer needed for this recovery


Specify log: {RET=suggested | filename | AUTO | CANCEL}

ORA-00279: change 15237281 generated at 09/26/2021 21:33:33 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/ORCLCDB/1_505_1036108814.dbf
ORA-00280: change 15237281 for thread 1 is in sequence #505
ORA-00278: log file '/opt/oracle/archivelog/ORCLCDB/1_504_1036108814.dbf' no longer needed for this recovery


Specify log: {RET=suggested | filename | AUTO | CANCEL}

ORA-00279: change 15238147 generated at 09/26/2021 21:34:58 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/ORCLCDB/1_506_1036108814.dbf
ORA-00280: change 15238147 for thread 1 is in sequence #506
ORA-00278: log file '/opt/oracle/archivelog/ORCLCDB/1_505_1036108814.dbf' no longer needed for this recovery


Specify log: {RET=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL> alter database open resetlogs;

Database altered.

SQL> set pages60
SQL> set linesize132
SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select * from v$database_incarnation;

INCARNATION# RESETLOGS_CHANGE# RESETLOGS_TIME PRIOR_RESETLOGS_CHANGE# PRIOR_RESETLOGS_TI STATUS RESETLOGS_ID PRIOR_INCARNATION#
------------ ----------------- ------------------ ----------------------- ------------------ ------- ------------ ------------------
FLASHBACK_DATABASE_ALLOWED CON_ID
-------------------------- ----------
1 1 17-APR-19 00:55:59 0 PARENT 1005785759 0
NO 0

2 1920977 04-MAY-19 23:21:26 1 17-APR-19 00:55:59 PARENT 1007421686 1
NO 0

3 4797184 27-MAR-20 00:00:14 1920977 04-MAY-19 23:21:26 PARENT 1036108814 2
NO 0

4 15238148 26-SEP-21 22:00:48 4797184 27-MAR-20 00:00:14 CURRENT 1084312848 3
NO 0


SQL>


So, my Primary Database has now switched to INCARNATION#4 with a RESETLOGS as of 26-Sep-21 22:00:48 and RESETLOGS_ID 1084312848.
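The same incarnation history can later be cross-checked at each Standby once it has resynchronized; its CURRENT row should show the Primary's new RESETLOGS_ID. A sketch:

```sql
-- Run at a Standby: the CURRENT incarnation should carry
-- the RESETLOGS_ID created by the Primary's OPEN RESETLOGS
SELECT INCARNATION#, RESETLOGS_ID, RESETLOGS_CHANGE#, STATUS
FROM   V$DATABASE_INCARNATION
ORDER  BY INCARNATION#;
```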

Let me check my Standby alert logs.
First, STDBYDB, which shows that Sequence#505 was applied (i.e. the "Recovery of Online Redo Log ... Seq 505 ..." message completed, followed by the next message "Media Recovery Waiting for T-1.S-506"):


2021-09-26T21:33:28.002521+08:00
rfs (PID:4667): Standby controlfile consistent with primary
2021-09-26T21:33:28.339263+08:00
rfs (PID:4667): Selected LNO:4 for T-1.S-504 dbid 2778483057 branch 1036108814
2021-09-26T21:33:28.667001+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 504 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log
2021-09-26T21:33:33.555294+08:00
ARC3 (PID:3201): Archived Log entry 108 added for T-1.S-504 ID 0xa7521ccd LAD:1
2021-09-26T21:33:33.827814+08:00
PR00 (PID:3847): Media Recovery Waiting for T-1.S-505
2021-09-26T21:33:33.991340+08:00
rfs (PID:4667): Standby controlfile consistent with primary
2021-09-26T21:33:34.547950+08:00
rfs (PID:4667): Selected LNO:4 for T-1.S-505 dbid 2778483057 branch 1036108814
2021-09-26T21:33:35.283450+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 505 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log
2021-09-26T21:35:00.057442+08:00
ARC1 (PID:3197): Archived Log entry 109 added for T-1.S-505 ID 0xa7521ccd LAD:1
2021-09-26T21:35:00.289539+08:00
PR00 (PID:3847): Media Recovery Waiting for T-1.S-506
2021-09-26T21:35:02.732262+08:00
rfs (PID:4502): Possible network disconnect with primary database


At this point, 21:35:02, it shows a "network disconnect with the primary database", which is when the Primary had finished archiving Sequence#505 and was continuing the shutdown.
Subsequently, when the Primary goes through the OPEN RESETLOGS, the Standby shows the following.
[Note the "rfs (PID:20837): A new recovery destination branch has been registered", "rfs (PID:20837): Standby in the future of new recovery destination branch(resetlogs_id) 1084312848" and "rfs (PID:20837): New Archival REDO Branch(resetlogs_id): 1084312848 Prior: 1036108814" messages, which match the RESETLOGS_ID information from the Primary's v$database_incarnation listed above]:

2021-09-26T22:01:32.924024+08:00
rfs (PID:20634): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is Foreground (PID:14755)
2021-09-26T22:01:44.591773+08:00
rfs (PID:20820): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is SYNC (PID:14565)
rfs (PID:20820): New archival redo branch: 1084312848 current: 1036108814
rfs (PID:20820): Primary database is in MAXIMUM AVAILABILITY mode
rfs (PID:20820): Changing standby controlfile to RESYNCHRONIZATION level
rfs (PID:20820): Standby controlfile consistent with primary
rfs (PID:20820): No SRLs available for T-1
Clearing online log 5 of thread 1 sequence number 0
2021-09-26T22:01:49.532864+08:00
rfs (PID:20820): Selected LNO:5 for T-1.S-3 dbid 2778483057 branch 1084312848
2021-09-26T22:01:49.535634+08:00
Clearing online log 4 of thread 1 sequence number 0
2021-09-26T22:01:56.788073+08:00
rfs (PID:20837): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is FAL (PID:14751)
2021-09-26T22:01:59.406342+08:00
rfs (PID:20837): Selected LNO:4 for T-1.S-2 dbid 2778483057 branch 1084312848
2021-09-26T22:02:00.964530+08:00
rfs (PID:20837): A new recovery destination branch has been registered
rfs (PID:20837): Standby in the future of new recovery destination branch(resetlogs_id) 1084312848
rfs (PID:20837): Incomplete Recovery SCN:0x0000000000e88403
rfs (PID:20837): Resetlogs SCN:0x0000000000e88404
rfs (PID:20837): SBPS:0x00000000004c4f04
rfs (PID:20837): New Archival REDO Branch(resetlogs_id): 1084312848 Prior: 1036108814
rfs (PID:20837): Archival Activation ID: 0xaa3191f1 Current: 0xa7521ccd
rfs (PID:20837): Effect of primary database OPEN RESETLOGS
rfs (PID:20837): Managed Standby Recovery process is active
2021-09-26T22:02:00.965240+08:00
Incarnation entry added for Branch(resetlogs_id): 1084312848 (STDBYDB)
2021-09-26T22:02:02.317070+08:00
Setting recovery target incarnation to 4
2021-09-26T22:02:02.421791+08:00
PR00 (PID:3847): MRP0: Incarnation has changed! Retry recovery...
2021-09-26T22:02:02.422333+08:00
Errors in file /opt/oracle/diag/rdbms/stdbydb/STDBYDB/trace/STDBYDB_pr00_3847.trc:
ORA-19906: recovery target incarnation changed during recovery
PR00 (PID:3847): Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Stopping change tracking
2021-09-26T22:02:02.735890+08:00
Errors in file /opt/oracle/diag/rdbms/stdbydb/STDBYDB/trace/STDBYDB_pr00_3847.trc:
ORA-19906: recovery target incarnation changed during recovery
2021-09-26T22:02:03.908863+08:00
Started logmerger process
2021-09-26T22:02:04.088714+08:00
PR00 (PID:20863): Managed Standby Recovery starting Real Time Apply
2021-09-26T22:02:05.658988+08:00
ARC1 (PID:3197): Archived Log entry 110 added for T-1.S-2 ID 0xaa3191f1 LAD:1
2021-09-26T22:02:06.477510+08:00
max_pdb is 5
2021-09-26T22:02:07.696019+08:00
Parallel Media Recovery started with 4 slaves
2021-09-26T22:02:08.510123+08:00
Stopping change tracking
2021-09-26T22:02:10.459429+08:00
PR00 (PID:20863): Media Recovery Waiting for T-1.S-1
PR00 (PID:20863): Fetching gap from T-1.S-1 to T-1.S-1
2021-09-26T22:02:10.779495+08:00
rfs (PID:20986): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is FAL (PID:14753)
2021-09-26T22:02:12.270336+08:00
rfs (PID:20986): Opened log for T-1.S-1 dbid 2778483057 branch 1084312848
2021-09-26T22:02:12.385769+08:00
rfs (PID:20986): Archived Log entry 111 added for B-1084312848.T-1.S-1 ID 0xaa3191f1 LAD:2
2021-09-26T22:02:13.251841+08:00
PR00 (PID:20863): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_1_1084312848.dbf
2021-09-26T22:02:14.796992+08:00
PR00 (PID:20863): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_2_1084312848.dbf
2021-09-26T22:02:22.220965+08:00
PR00 (PID:20863): Media Recovery Waiting for T-1.S-3 (in transit)
2021-09-26T22:02:23.679612+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 3 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo02.dbf
2021-09-26T22:02:33.322073+08:00
ARC2 (PID:3199): Archived Log entry 112 added for T-1.S-3 ID 0xaa3191f1 LAD:1
2021-09-26T22:02:34.236382+08:00
PR00 (PID:20863): Media Recovery Waiting for T-1.S-4
2021-09-26T22:02:34.559830+08:00
rfs (PID:20820): Changing standby controlfile to MAXIMUM AVAILABILITY level
2021-09-26T22:02:35.020783+08:00
rfs (PID:20820): Selected LNO:4 for T-1.S-4 dbid 2778483057 branch 1084312848
2021-09-26T22:02:35.329328+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 4 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log
2021-09-26T22:10:24.128439+08:00
PR00 (PID:20863): Media Recovery Waiting for T-1.S-5
2021-09-26T22:10:24.194710+08:00
ARC0 (PID:3193): Archived Log entry 113 added for T-1.S-4 ID 0xaa3191f1 LAD:1
2021-09-26T22:10:25.894496+08:00
rfs (PID:20820): Standby controlfile consistent with primary
2021-09-26T22:10:26.415829+08:00
rfs (PID:20820): Selected LNO:4 for T-1.S-5 dbid 2778483057 branch 1084312848
2021-09-26T22:10:32.885773+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 5 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log
2021-09-26T22:10:33.484509+08:00
ARC2 (PID:3199): Archived Log entry 114 added for T-1.S-5 ID 0xaa3191f1 LAD:1
2021-09-26T22:10:33.653161+08:00
PR00 (PID:20863): Media Recovery Waiting for T-1.S-6 (in transit)
2021-09-26T22:10:33.666695+08:00
rfs (PID:20820): Standby controlfile consistent with primary
2021-09-26T22:10:33.983459+08:00
rfs (PID:20820): Selected LNO:4 for T-1.S-6 dbid 2778483057 branch 1084312848
2021-09-26T22:10:34.715704+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 6 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log


The STDBYDB instance is already at Sequence#6 of the new Incarnation because the Primary database has been released for use and is already receiving new transactions, generating redo and ArchiveLogs, and shipping them to all the Standbys.
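Apart from reading the alert log, the apply progress at a Standby can be checked from the dynamic performance views. A sketch:

```sql
-- Run at the Standby: current state of the recovery (MRP) process,
-- including the sequence it is working on
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#
FROM   V$MANAGED_STANDBY
WHERE  PROCESS LIKE 'MRP%';

-- Transport and apply lag as tracked by Data Guard
SELECT NAME, VALUE
FROM   V$DATAGUARD_STATS
WHERE  NAME IN ('transport lag', 'apply lag');
```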

The other current Standby, STDBNEW, also shows that it is now running with the new Incarnation and Sequence#6:


2021-09-26T22:10:33.848217+08:00
PR00 (PID:20945): Media Recovery Waiting for T-1.S-6 (in transit)
2021-09-26T22:10:34.910903+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 6 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBNEW/stdbredo02.dbf


What about the Standby Database STDB2 that I had set to lag by 1 hour?
I present the alert log entries that show it is aware of (has Received and Archived, and in some cases Applied) older Archives going back to before the Primary was shut down:


2021-09-26T21:24:03.036631+08:00
ARC0 (PID:3517): Archive log for T-1.S-499 available in 59 minute(s)
2021-09-26T21:24:15.615120+08:00
rfs (PID:4480): Selected LNO:4 for T-1.S-501 dbid 2778483057 branch 1036108814
2021-09-26T21:24:16.085817+08:00
ARC1 (PID:3523): Archived Log entry 246 added for T-1.S-500 ID 0xa7521ccd LAD:1
2021-09-26T21:24:16.085881+08:00
ARC1 (PID:3523): Archive log for T-1.S-500 available in 60 minute(s)
2021-09-26T21:27:03.617096+08:00
db_recovery_file_dest_size of 10240 MB is 10.63% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
2021-09-26T21:28:21.459249+08:00
rfs (PID:4480): Selected LNO:5 for T-1.S-502 dbid 2778483057 branch 1036108814
2021-09-26T21:28:25.263818+08:00
ARC2 (PID:3525): Archived Log entry 247 added for T-1.S-501 ID 0xa7521ccd LAD:1
2021-09-26T21:28:25.264142+08:00
ARC2 (PID:3525): Archive log for T-1.S-501 available in 60 minute(s)
2021-09-26T21:28:29.572537+08:00
rfs (PID:4480): Selected LNO:4 for T-1.S-503 dbid 2778483057 branch 1036108814
2021-09-26T21:28:29.580802+08:00
ARC3 (PID:3527): Archived Log entry 248 added for T-1.S-502 ID 0xa7521ccd LAD:1
2021-09-26T21:28:29.580964+08:00
ARC3 (PID:3527): Archive log for T-1.S-502 available in 59 minute(s)
2021-09-26T21:33:30.720113+08:00
rfs (PID:4480): Selected LNO:5 for T-1.S-504 dbid 2778483057 branch 1036108814
2021-09-26T21:33:30.737222+08:00
ARC0 (PID:3517): Archived Log entry 249 added for T-1.S-503 ID 0xa7521ccd LAD:1
2021-09-26T21:33:30.737298+08:00
ARC0 (PID:3517): Archive log for T-1.S-503 available in 60 minute(s)
2021-09-26T21:33:35.794308+08:00
rfs (PID:4480): Selected LNO:4 for T-1.S-505 dbid 2778483057 branch 1036108814
2021-09-26T21:33:35.825498+08:00
ARC1 (PID:3523): Archived Log entry 250 added for T-1.S-504 ID 0xa7521ccd LAD:1
2021-09-26T21:33:35.825563+08:00
ARC1 (PID:3523): Archive log for T-1.S-504 available in 60 minute(s)
2021-09-26T21:35:02.021214+08:00
rfs (PID:4493): Selected LNO:4 for T-1.S-505 dbid 2778483057 branch 1036108814
2021-09-26T21:35:02.225079+08:00
ARC2 (PID:3525): Archived Log entry 251 added for T-1.S-505 ID 0xa7521ccd LAD:1
2021-09-26T21:35:02.226252+08:00
ARC2 (PID:3525): Archive log for T-1.S-505 available in 59 minute(s)
2021-09-26T21:35:02.733024+08:00
rfs (PID:4488): Possible network disconnect with primary database
2021-09-26T22:01:13.848495+08:00
rfs (PID:20585): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is ASYNC (PID:20568)
rfs (PID:20585): New archival redo branch: 1084312848 current: 1036108814
rfs (PID:20585): Primary database is in MAXIMUM PERFORMANCE mode
2021-09-26T22:01:13.995667+08:00
rfs (PID:20587): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is ARCH (PID:14747)
rfs (PID:20587): New archival redo branch: 1084312848 current: 1036108814
2021-09-26T22:01:15.034084+08:00
rfs (PID:20585): No SRLs available for T-1
2021-09-26T22:01:15.317462+08:00
rfs (PID:20587): No SRLs available for T-1
Clearing online log 5 of thread 1 sequence number 0
2021-09-26T22:01:22.282624+08:00
rfs (PID:20587): Selected LNO:5 for T-1.S-1 dbid 2778483057 branch 1084312848
2021-09-26T22:01:22.283691+08:00
Clearing online log 4 of thread 1 sequence number 0
2021-09-26T22:01:22.380789+08:00
rfs (PID:20603): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is Foreground (PID:14755)
2021-09-26T22:01:24.283184+08:00
rfs (PID:20587): A new recovery destination branch has been registered
rfs (PID:20587): Standby in the future of new recovery destination branch(resetlogs_id) 1084312848
rfs (PID:20587): Incomplete Recovery SCN:0x0000000000e6e7e1
rfs (PID:20587): Resetlogs SCN:0x0000000000e88404
rfs (PID:20587): SBPS:0x00000000004c4f04
rfs (PID:20587): New Archival REDO Branch(resetlogs_id): 1084312848 Prior: 1036108814
rfs (PID:20587): Archival Activation ID: 0xaa3191f1 Current: 0xa7521ccd
rfs (PID:20587): Effect of primary database OPEN RESETLOGS
rfs (PID:20587): Managed Standby Recovery process is active
2021-09-26T22:01:24.283999+08:00
Incarnation entry added for Branch(resetlogs_id): 1084312848 (STDB2)
2021-09-26T22:01:25.510452+08:00
Setting recovery target incarnation to 4
2021-09-26T22:01:27.818244+08:00
PR00 (PID:3913): MRP0: Incarnation has changed! Retry recovery...
2021-09-26T22:01:28.056186+08:00
Errors in file /opt/oracle/diag/rdbms/stdb2/STDB2/trace/STDB2_pr00_3913.trc:
ORA-19906: recovery target incarnation changed during recovery
Recovery interrupted!
2021-09-26T22:01:29.960402+08:00
Stopping change tracking
2021-09-26T22:01:29.962316+08:00
Errors in file /opt/oracle/diag/rdbms/stdb2/STDB2/trace/STDB2_pr00_3913.trc:
ORA-19906: recovery target incarnation changed during recovery
2021-09-26T22:01:30.588055+08:00
rfs (PID:20585): Selected LNO:4 for T-1.S-2 dbid 2778483057 branch 1084312848
2021-09-26T22:01:30.942431+08:00
Started logmerger process
2021-09-26T22:01:32.562040+08:00
PR00 (PID:20631): Managed Standby Recovery not using Real Time Apply
2021-09-26T22:01:34.537732+08:00
ARC2 (PID:3525): Archived Log entry 252 added for T-1.S-1 ID 0xaa3191f1 LAD:1
2021-09-26T22:01:34.537903+08:00
ARC2 (PID:3525): Archive log for T-1.S-1 available in 59 minute(s)
2021-09-26T22:01:34.915506+08:00
max_pdb is 5
2021-09-26T22:01:36.394057+08:00
Parallel Media Recovery started with 4 slaves
2021-09-26T22:01:38.128953+08:00
Media Recovery start incarnation depth : 1, target inc# : 4, irscn : 15238147
Stopping change tracking
2021-09-26T22:01:39.202866+08:00
PR00 (PID:20631): Media Recovery Waiting for B-1036108814.T-1.S-495
2021-09-26T22:01:55.182487+08:00
rfs (PID:20587): Opened log for T-1.S-495 dbid 2778483057 branch 1036108814
2021-09-26T22:01:57.676763+08:00
rfs (PID:20585): Selected LNO:5 for T-1.S-3 dbid 2778483057 branch 1084312848
2021-09-26T22:01:58.495693+08:00
ARC3 (PID:3527): Archived Log entry 253 added for T-1.S-2 ID 0xaa3191f1 LAD:1
2021-09-26T22:01:58.495890+08:00
ARC3 (PID:3527): Archive log for T-1.S-2 available in 59 minute(s)
2021-09-26T22:01:59.879131+08:00
rfs (PID:20587): Archived Log entry 254 added for B-1036108814.T-1.S-495 ID 0xa7521ccd LAD:3
2021-09-26T22:02:00.978032+08:00
PR00 (PID:20631): Media Recovery Delayed for 11 minute(s) T-1.S-495
2021-09-26T22:02:06.096910+08:00
rfs (PID:20950): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is FAL (PID:14753)
2021-09-26T22:02:06.097083+08:00
rfs (PID:20948): krsr_rfs_atc: Identified database type as 'PHYSICAL STANDBY': Client is FAL (PID:14751)
2021-09-26T22:02:07.293676+08:00
rfs (PID:20587): Opened log for T-1.S-497 dbid 2778483057 branch 1036108814
2021-09-26T22:02:07.649778+08:00
rfs (PID:20950): Opened log for T-1.S-498 dbid 2778483057 branch 1036108814
2021-09-26T22:02:07.997282+08:00
rfs (PID:20948): Opened log for T-1.S-496 dbid 2778483057 branch 1036108814
2021-09-26T22:02:08.520189+08:00
rfs (PID:20587): Archived Log entry 255 added for B-1036108814.T-1.S-497 ID 0xa7521ccd LAD:3
2021-09-26T22:02:09.302061+08:00
rfs (PID:20950): Archived Log entry 256 added for B-1036108814.T-1.S-498 ID 0xa7521ccd LAD:3
2021-09-26T22:02:09.707710+08:00
rfs (PID:20948): Archived Log entry 257 added for B-1036108814.T-1.S-496 ID 0xa7521ccd LAD:3
2021-09-26T22:02:10.761235+08:00
rfs (PID:20587): Opened log for T-1.S-499 dbid 2778483057 branch 1036108814
2021-09-26T22:02:11.253039+08:00
rfs (PID:20587): Archived Log entry 258 added for B-1036108814.T-1.S-499 ID 0xa7521ccd LAD:3
2021-09-26T22:02:12.823920+08:00
rfs (PID:20948): Opened log for T-1.S-500 dbid 2778483057 branch 1036108814
2021-09-26T22:02:13.951243+08:00
rfs (PID:20948): Archived Log entry 259 added for B-1036108814.T-1.S-500 ID 0xa7521ccd LAD:3
2021-09-26T22:02:15.024761+08:00
rfs (PID:20587): Opened log for T-1.S-501 dbid 2778483057 branch 1036108814
2021-09-26T22:02:15.534976+08:00
rfs (PID:20950): Opened log for T-1.S-502 dbid 2778483057 branch 1036108814
2021-09-26T22:02:21.798580+08:00
rfs (PID:20950): Archived Log entry 260 added for B-1036108814.T-1.S-502 ID 0xa7521ccd LAD:3
2021-09-26T22:02:23.673255+08:00
rfs (PID:20587): Archived Log entry 261 added for B-1036108814.T-1.S-501 ID 0xa7521ccd LAD:3
2021-09-26T22:02:37.301066+08:00
rfs (PID:20585): Selected LNO:4 for T-1.S-4 dbid 2778483057 branch 1084312848
2021-09-26T22:02:37.452512+08:00
ARC0 (PID:3517): Archived Log entry 262 added for T-1.S-3 ID 0xaa3191f1 LAD:1
2021-09-26T22:02:37.452615+08:00
ARC0 (PID:3517): Archive log for T-1.S-3 available in 59 minute(s)
2021-09-26T22:10:31.699175+08:00
rfs (PID:20585): Selected LNO:5 for T-1.S-5 dbid 2778483057 branch 1084312848
2021-09-26T22:10:31.774120+08:00
ARC1 (PID:3523): Archived Log entry 263 added for T-1.S-4 ID 0xaa3191f1 LAD:1
2021-09-26T22:10:31.774245+08:00
ARC1 (PID:3523): Archive log for T-1.S-4 available in 59 minute(s)
2021-09-26T22:10:35.934765+08:00
rfs (PID:20585): Selected LNO:4 for T-1.S-6 dbid 2778483057 branch 1084312848
2021-09-26T22:10:35.938540+08:00
ARC2 (PID:3525): Archived Log entry 264 added for T-1.S-5 ID 0xaa3191f1 LAD:1
2021-09-26T22:10:35.938646+08:00
ARC2 (PID:3525): Archive log for T-1.S-5 available in 60 minute(s)
2021-09-26T22:12:59.695487+08:00
PR00 (PID:20631): Media Recovery Log /opt/oracle/archivelog/STDB2/1_495_1036108814.dbf
PR00 (PID:20631): Media Recovery Delayed for 19 minute(s) T-1.S-496
2021-09-26T22:21:12.441616+08:00
PR00 (PID:20631): Media Recovery Log /opt/oracle/archivelog/STDB2/1_496_1036108814.dbf
2021-09-26T22:21:15.172035+08:00
PR00 (PID:20631): Media Recovery Log /opt/oracle/archivelog/STDB2/1_497_1036108814.dbf
2021-09-26T22:21:16.215487+08:00
PR00 (PID:20631): Media Recovery Delayed for 20 minute(s) T-1.S-498
2021-09-26T22:22:10.156957+08:00
PR00 (PID:20631): Media Recovery Log /opt/oracle/archivelog/STDB2/1_498_1036108814.dbf
PR00 (PID:20631): Media Recovery Delayed for 21 minute(s) T-1.S-499
2021-09-26T22:23:15.521168+08:00
PR00 (PID:20631): Media Recovery Log /opt/oracle/archivelog/STDB2/1_499_1036108814.dbf
PR00 (PID:20631): Media Recovery Delayed for 22 minute(s) T-1.S-500
2021-09-26T22:24:17.854072+08:00
PR00 (PID:20631): Media Recovery Log /opt/oracle/archivelog/STDB2/1_500_1036108814.dbf
PR00 (PID:20631): Media Recovery Delayed for 26 minute(s) T-1.S-501
2021-09-26T22:28:31.972725+08:00
PR00 (PID:20631): Media Recovery Log /opt/oracle/archivelog/STDB2/1_502_1036108814.dbf
PR00 (PID:20631): Media Recovery Waiting for B-1036108814.T-1.S-503
2021-09-26T22:35:19.026115+08:00
rfs (PID:20585): Selected LNO:5 for T-1.S-7 dbid 2778483057 branch 1084312848
2021-09-26T22:35:19.052219+08:00
ARC3 (PID:3527): Archived Log entry 265 added for T-1.S-6 ID 0xaa3191f1 LAD:1
2021-09-26T22:35:19.052325+08:00
ARC3 (PID:3527): Archive log for T-1.S-6 available in 59 minute(s)



This has recognised the ResetLogs at 22:01 and has received the new Log Sequences (up to Sequence#7), but it still hasn't applied Sequence#498 from the old Incarnation because that is not yet 1 hour old [Sequence#497 has been applied to this Standby as at the time I view the alert log]. All the new Incarnation Log Sequence#s 1 to 6 and onwards will be applied when each is 1 hour old.

Since STDB2 seems to have stopped applying ArchiveLogs after Sequence#502, I stopped and restarted delayed recovery and re-registered the ArchiveLogs for Sequence#503 to #505 (which had actually already been Received and Archived between 21:33 and 21:36 earlier).
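Before re-registering, you can check which old-branch ArchiveLogs the standby controlfile still knows about, and whether they were applied. A sketch using the old branch's RESETLOGS_ID from this demonstration:

```sql
-- Run at the Standby: registered ArchiveLogs of the prior incarnation
-- (branch 1036108814) around the gap, with their apply status
SELECT SEQUENCE#, RESETLOGS_ID, APPLIED, NAME
FROM   V$ARCHIVED_LOG
WHERE  RESETLOGS_ID = 1036108814
AND    SEQUENCE# BETWEEN 495 AND 505
ORDER  BY SEQUENCE#;
```

Sequences missing from this listing are candidates for ALTER DATABASE REGISTER LOGFILE, as done below.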


oracle19c>sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 26 23:07:38 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database recover managed standby database using archived logfile disconnect from session;

Database altered.

SQL> qui
SP2-0042: unknown command "qui" - rest of line ignored.
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 26 23:11:34 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> alter database register logfile '/opt/oracle/archivelog/STDB2/1_503_1036108814.dbf';

Database altered.

SQL> alter database register logfile '/opt/oracle/archivelog/STDB2/1_504_1036108814.dbf';

Database altered.

SQL> alter database register logfile '/opt/oracle/archivelog/STDB2/1_505_1036108814.dbf';

Database altered.

SQL>


2021-09-26T23:07:57.810826+08:00
Managed Standby Recovery Canceled (STDB2)
Completed: alter database recover managed standby database cancel
2021-09-26T23:08:13.863811+08:00
alter database recover managed standby database using archived logfile disconnect from session
2021-09-26T23:08:13.865416+08:00
Attempt to start background Managed Standby Recovery process (STDB2)
Starting background process MRP0
2021-09-26T23:08:13.877996+08:00
MRP0 started with pid=36, OS id=14015
2021-09-26T23:08:13.879036+08:00
Background Managed Standby Recovery process started (STDB2)
2021-09-26T23:08:18.889440+08:00
Started logmerger process
2021-09-26T23:08:18.917894+08:00
PR00 (PID:14043): Managed Standby Recovery not using Real Time Apply
max_pdb is 5
2021-09-26T23:08:19.572566+08:00
Parallel Media Recovery started with 4 slaves
2021-09-26T23:08:19.748754+08:00
Media Recovery start incarnation depth : 1, target inc# : 4, irscn : 15238147
Stopping change tracking
2021-09-26T23:08:19.885274+08:00
Completed: alter database recover managed standby database using archived logfile disconnect from session
2021-09-26T23:08:19.969571+08:00
PR00 (PID:14043): Media Recovery Waiting for B-1036108814.T-1.S-503
2021-09-26T23:12:02.832089+08:00
alter database register logfile '/opt/oracle/archivelog/STDB2/1_503_1036108814.dbf'
2021-09-26T23:12:02.832185+08:00
There are 1 logfiles specified.
ALTER DATABASE REGISTER [PHYSICAL] LOGFILE
2021-09-26T23:12:04.988846+08:00
.... (PID:15189): Resynchronizing from T-1-S-10 to T-1.S-503
Completed: alter database register logfile '/opt/oracle/archivelog/STDB2/1_503_1036108814.dbf'
2021-09-26T23:12:13.842691+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_503_1036108814.dbf
2021-09-26T23:12:15.812907+08:00
PR00 (PID:14043): Media Recovery Waiting for B-1036108814.T-1.S-504
2021-09-26T23:12:22.647534+08:00
alter database register logfile '/opt/oracle/archivelog/STDB2/1_504_1036108814.dbf'
2021-09-26T23:12:22.647609+08:00
There are 1 logfiles specified.
ALTER DATABASE REGISTER [PHYSICAL] LOGFILE
2021-09-26T23:12:24.805303+08:00
.... (PID:15189): Resynchronizing from T-1-S-503 to T-1.S-504
Completed: alter database register logfile '/opt/oracle/archivelog/STDB2/1_504_1036108814.dbf'
2021-09-26T23:12:30.299323+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_504_1036108814.dbf
PR00 (PID:14043): Media Recovery Waiting for B-1036108814.T-1.S-505
2021-09-26T23:12:36.167593+08:00
alter database register logfile '/opt/oracle/archivelog/STDB2/1_505_1036108814.dbf'
2021-09-26T23:12:36.167671+08:00
There are 1 logfiles specified.
ALTER DATABASE REGISTER [PHYSICAL] LOGFILE
2021-09-26T23:12:38.271665+08:00
.... (PID:15189): Resynchronizing from T-1-S-504 to T-1.S-505
Completed: alter database register logfile '/opt/oracle/archivelog/STDB2/1_505_1036108814.dbf'
2021-09-26T23:12:41.883099+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_505_1036108814.dbf
2021-09-26T23:12:43.883970+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_1_1084312848.dbf
2021-09-26T23:12:44.047740+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_2_1084312848.dbf
2021-09-26T23:12:44.473162+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_3_1084312848.dbf
2021-09-26T23:12:44.701811+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_4_1084312848.dbf
2021-09-26T23:12:52.961033+08:00
PR00 (PID:14043): Media Recovery Log /opt/oracle/archivelog/STDB2/1_5_1084312848.dbf
PR00 (PID:14043): Media Recovery Delayed for 59 minute(s) T-1.S-6


This is a bug I've noticed in the past. If a Standby has Received and Archived a logfile but hasn't applied it when it should have (Sequence#503 to #505 should have been applied between 22:33 and 22:36, as they were received between 21:33 and 21:36 -- you can see the earlier set of alert log entries that show they were Received and Archived by this database instance), you might shutdown, startup and resume Recovery, or simply stop and restart Recovery.  If that doesn't work, re-register the ArchiveLogs with the ALTER DATABASE REGISTER LOGFILE command and re-attempt Recovery.

Now, all 3 of my Standby databases are in sync with the Primary (i.e. STDBYDB and STDBNEW are current, while STDB2 is lagging by 1 hour, as designed).

Thus, it is possible to do a RESETLOGS on a Primary as long as you ensure that:
1.  The Primary had Archived the last Online Redo Log before Shutdown
2.  The Shutdown was Normal or Immediate
3.  All the Standbys have, at least, Received all the ArchiveLogs (if there was a delay in shipping ArchiveLogs, you can manually copy them to the Standbys and use the ALTER DATABASE REGISTER LOGFILE command at the Standbys)
4.  No actual recovery from Online Redo Logs {that were not shipped to the Standby} was required -- this is the case when the Shutdown is Normal or Immediate
Categories: DBA Blogs

MLOps with TensorFlow and Kubernetes Powered by RabbitMQ and FastAPI

Andrejus Baranovski - Sun, 2021-09-26 06:26
I show how to run TensorFlow model training and data processing containers in a single Pod in Kubernetes. The model training container runs as the main container and data processing runs as a sidecar. Running both containers in a single Pod allows them to share files in common storage. I'm using a persistent volume to store the TensorFlow model and stats data from the data processing container. This video walks through the complete use case of data processing, model training and ML microservices communication.

 

Serializable Transaction

Tom Kyte - Sat, 2021-09-25 02:06
Hi Tom, On page 130-131 of your book, "expert one-on-one Oracle," you gave an example of two sessions querying and inserting into tables a and b. The result is that both tables contain a row with the value zero in them. My question is that wouldn't the result be the same even if the isolation_level is not serializable? Not sure what you're demonstrating by this example. Thank - peter
Categories: DBA Blogs

Partitioning question

Tom Kyte - Sat, 2021-09-25 02:06
Let's say we have a table of documents with key of doc_id. This table contains millions of documents (rows). There is also a table that assigns one or more industry codes to each doc:

create table doc_ind (
  doc_id integer,
  ind_code varchar2(10)
);

This table will therefore also contain millions of rows. The industry code is part of the default set of industry codes used by our business. Now let's say that we sign deals with clients that want to use their own industry classifications. I don't want to create new tables for each client (there could eventually be hundreds of clients and therefore hundreds of new tables) and have some scheme where logic is built into the application to derive the correct table_name to query based on the client. My first thought is to create a table like:

create table doc_client_ind (
  client_id varchar2(10),
  doc_id integer,
  ind_code varchar2(10)
);

The client id could be any value - usually resembling the client name. Because each new client would add millions of rows to this table, I want to use partitioning. After reading up on partitioning in your book, my situation does not seem to fit perfectly into standard scenarios. At first, it seemed I would need to use hash partitioning. However, the number of distinct client_id values in this table will be very small to start with and grow over time. It is not clear how I would easily manage hash partitioning well over time. In order to use range partitioning, it seems to me that client_id's would need to be assigned values that have a natural ascending sort order (i.e., c0001, c0002, etc.). Then I could have one partition per client and create them on an ongoing, as-needed basis. I have some questions:
1. Are there other options (with or without partitioning) I don't see to manage this?
2. Is there a way to manage this effectively using the original, essentially random, client id values?
3. What do you suggest given the limited info above?
4. Given your suggestion in question 3, what are the disadvantages and caveats I need to be aware of?
Thanks - Brian
Categories: DBA Blogs

removing attribute from very large xml

Tom Kyte - Sat, 2021-09-25 02:06
I have an xml of size 52kb and need to remove namespaces from the xml. Below is the code I'm using, which was working fine when the xml was 3-4 kb, but when the size increased the error thrown is:

ORA-19011: Character string buffer too small
*Cause: The string result asked for is too big to return back
*Action: Get the result as a lob instead

PROCEDURE remove_attributes IS
  V_DOM_DOC dbms_xmldom.DOMDocument;
  V_ROOT_ELIMENT DBMS_XMLDOM.DOMELEMENT;
  V_XMLTYPE XMLTYPE;
BEGIN
  SELECT xml_col INTO V_XMLTYPE FROM table1;
  V_DOM_DOC := dbms_xmldom.newDOMDocument(V_XMLTYPE);
  V_ROOT_ELIMENT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_DOM_DOC);
  -- Remove exact namespace.
  dbms_xmldom.removeAttribute( V_ROOT_ELIMENT, 'xmlns' );
  V_XMLTYPE := DBMS_XMLDOM.GETXMLTYPE(V_DOM_DOC);
  UPDATE table1 SET XML_COL_UPDT = V_XMLTYPE;
  update table1 set XML_COL_UPDT = replace(XML_COL_UPDT, 'rlc:', null);
  DBMS_OUTPUT.PUT_LINE ('XMLNS ATTRIBUTE REMOVED');
END remove_attributes;
Categories: DBA Blogs

Long Term Backup Retention [LTR] On Azure SQL Managed Instance

Online Apps DBA - Fri, 2021-09-24 06:13

In this blog, I have covered the overview of Long-Term Backup Retention [LTR] On Azure SQL Managed Instance. Also how to create, configure, restore a backup, and many more things which are essential to learn as a database administrator. What Is Long-Term Backup Retention Most of the applications have regulatory, consistency or other business purposes […]

The post Long Term Backup Retention [LTR] On Azure SQL Managed Instance appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Create a PDF in Memory

Tom Kyte - Thu, 2021-09-23 13:46
Is there a way to create a pdf in memory and save it in a blob? We don't want to create a database directory and create the actual pdf on the server. So, I am looking for a way to create it in memory and insert it into a blob field using pl/sql or a database procedure written in Java.
Categories: DBA Blogs

Copy data with BLOB using @dblink to my Staging database

Tom Kyte - Thu, 2021-09-23 13:46
Hi :)! I have a database @dblink and I need to copy files with BLOB from @dblink to my staging database. What can I do? I can't CREATE tables because of restrictions, so we only should use TRUNCATE, UPDATE and INSERT. Thank you so much in advance!
Categories: DBA Blogs

I want to automate this using Ansible

Tom Kyte - Thu, 2021-09-23 13:46
Fix Text: Remove any demonstration and sample databases, database applications, objects, and files from the DBMS. To remove an account and all objects owned by that account (using BI as an example): DROP USER BI CASCADE; To remove objects without removing their owner, use the appropriate DROP statement (DROP TABLE, DROP VIEW, etc.).

This is the check: make sure all example schemas are removed.

select distinct(username) from dba_users
where username in ('BI','HR','OE','PM','IX','SH','SCOTT');

Expected result: no rows returned. If rows are returned, then drop the users:

sqlplus / as sysdba
drop user bi cascade;
drop user hr cascade;
drop user oe cascade;
drop user pm cascade;
drop user ix cascade;
drop user sh cascade;
drop user scott cascade;
Categories: DBA Blogs

Flashback Query using Flashback Data Archive and minvalue does not work as expected

Tom Kyte - Thu, 2021-09-23 13:46
Hello, I created a Flashback Data Archive and gave it a retention of one year. Now I have enabled flashback archive for a test table. My problem is that minvalue does not work as I expected. My expectation is that the specified query will return ALL values that are present in the flashback archive. In fact, it only returns the data from the last 15 min. Is this the correct behavior? If yes, how can I query all previous versions of a row with a query? The given example shows the DDL of the test objects and an example query with timestamps when they were executed. Sorry for not using LiveSQL, but I wasn't able to recreate the problem there. <code>
create table test (
  tst_id number generated by default as identity,
  tst varchar2(255)
);

create flashback archive flashback_test tablespace users_flashback retention 1 year;

alter table test flashback archive flashback_test;

-- 15.09.21 11:41:28,534508000 +02:00
insert into test(tst) values('Test1');
-- 15.09.21 11:43:15,736558000 +02:00
update test set tst = 'Test2' where tst_id = 1;
-- 15.09.21 11:45:47,551388000 +02:00
update test set tst = 'Test3' where tst_id = 1;

select tst, versions_starttime, versions_endtime
from test versions between scn minvalue and maxvalue
where tst_id = 1;

-- run at 15.09.21 11:48:09,833296000 +02:00:
tst    versions_starttime   versions_endtime
Test3  15.09.21 11:45:47
Test2  15.09.21 11:43:22    15.09.21 11:45:47
Test1  15.09.21 11:41:22    15.09.21 11:43:22

-- run at 15.09.21 11:58:20,512213000 +02:00:
Test3  15.09.21 11:45:47
Test2  15.09.21 11:43:22    15.09.21 11:45:47
Test1                       15.09.21 11:43:22

-- run at 15.09.21 11:59:52,966693000 +02:00:
Test3  15.09.21 11:45:47
Test2                       15.09.21 11:45:47

-- run at 15.09.21 12:04:07,629023000 +02:00:
Test3
</code>
Categories: DBA Blogs

How to find the chr(10) or chr(13) is in between a CLOB string

Tom Kyte - Thu, 2021-09-23 13:46
I would need to replace the chr(10) or chr(13) from the string. However i would need to exclude those chr(10) or chr(13) in middle of the string. eg: (2S,4R)-1-{1-[(2-acetamido-2-deoxy-??-D- [(3-{5-[(2-acetamido-2-deoxy-??-Dgalactopyranosyl)oxy]pentanamido} 5,11,18-trioxo-14-oxa-6,10,17-triazanonacosan-29-oyl}-4-hydroxypyrrolidin-2- yl]methyl all-P-ambo-2'-O-methyl-P-thioguanylyl-(3'?5')-2'-O-methyl-Pthiouridylyl-(3'?5')-2'-O-methylcytidylyl-(3'?5')-2'-O-methyladenylyl-(3' In the above example, i need not replace the chr(10) or chr(13), which is in between the string. Need to replace only at the end of the string. Please help to provide plsql script for this. Thank you
Categories: DBA Blogs

Designing Good Audit Trails for an Oracle Database

Pete Finnigan - Thu, 2021-09-23 11:46
I have been asked to speak at the UKOUG Autumn Tech event. This is an online conference event and the agenda grid is live and I will speak at 15:00 to 15:45 BUT the link to the details of my....[Read More]

Posted by Pete On 23/09/21 At 09:58 AM

Categories: Security Blogs

Azure Solutions Architect | Day 1 Review and Q/A

Online Apps DBA - Thu, 2021-09-23 01:16

Azure Active Directory [AZ-303/304] Azure Active Directory ➪ is a cloud-based identity and access management service offered by Microsoft that helps employees sign in and access resources. Azure AD Join ➪ lets you join the devices directly to Azure AD without joining on-premises Active Directory. At the same time, it is user productive and secure. […]

The post Azure Solutions Architect | Day 1 Review and Q/A appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Exadata Cloud@Customer New Releases 2021

Online Apps DBA - Thu, 2021-09-23 00:48

Oracle Cloud has announced many new features on OCI the entire year. So it’s time to take a round trip on the top 4 new releases related to Exadata Cloud@Customer so that you don’t miss out on any important updates. Exadata is a database appliance designed by Oracle that has the ability to supply support […]

The post Oracle Exadata Cloud@Customer New Releases 2021 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Working With Jupyter Notebook

Online Apps DBA - Thu, 2021-09-23 00:11

* What Is Anaconda? Anaconda is a free and open-source distribution of the Python language for data science and machine learning-related applications. It can be installed on Windows, Linux, and macOS systems. Conda is an open-source, cross-platform package management system. Anaconda comes with very nice tools like JupyterLab, Jupyter Notebook, Spyder, Glueviz, Visual Studio […]

The post  Working With Jupyter Notebook appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Azure RBAC vs Azure Policy vs Azure Blueprints

Online Apps DBA - Wed, 2021-09-22 23:47

➽ The blog post – https://k21academy.com/az30419 will cover an overview of azure governance and its services and their differences. ➽ What comes to your mind when you see the word Governance? Is it a rule, or is it policy? Whatever it may be, don’t you think the company needs Governance to run effectively and efficiently. […]

The post Azure RBAC vs Azure Policy vs Azure Blueprints appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

B2B data sharing

Tom Kyte - Wed, 2021-09-22 19:26
Hi guys, I have a little question on how does B2B data sharing work ? Say I am the department of home affairs and I want to share the data I hold (data that lives into an Oracle db) to others departments of the country. Especially the data about HR. So how to do with that : - give a SQLplus access to each department ? - create a web service that gives the desired data ? What's commonly done in this kind of situations ? Thanks in advance. Amine
Categories: DBA Blogs

Python for Data Science

Online Apps DBA - Wed, 2021-09-22 06:47

Data Science ⯮Data science is a field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data.Python⯮ Python is a general-purpose, object-oriented, high-level programming language. Its design philosophy emphasizes code readability. Python can handle every job, from data cleaning to data visualization to website development to executing embedded systems. Why […]

The post Python for Data Science appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[New Update] Oracle Cloud (OCI) Support Center

Online Apps DBA - Wed, 2021-09-22 06:40

If you are working with Oracle Cloud, use their services and need support from oracle. Earlier customers used My Oracle support from a non-cloud platform only. But now you can raise & manage service requests from the OCI console. Oracle launched Oracle Cloud Infrastructure (OCI) Support Center on July 8th, 2021, where Cloud support has […]

The post [New Update] Oracle Cloud (OCI) Support Center appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Text mining in R

Rittman Mead Consulting - Wed, 2021-09-22 05:04

As data becomes increasingly available in the world today, the need to organise and understand it also increases. Since roughly 80% of the data out there is in unstructured format, text mining becomes an extremely valuable practice for organisations to generate helpful insights and improve decision-making. So, I decided to experiment with some data in the programming language R and its text mining package “tm” – one of the most popular choices for text analysis in R – to see how helpful the insights drawn from the social media platform Twitter were in understanding people’s sentiment towards the US elections in 2020.

What is Text Mining?

Unstructured data needs to be interpreted by machines in order to understand human languages and extract meaning from this data, also known as natural language processing (NLP) – a genre of machine learning. Text mining uses NLP techniques to transform unstructured data into a structured format for identifying meaningful patterns and new insights.

A fitting example would be social media data analysis; since social media is becoming an increasingly valuable source of market and customer intelligence, it provides us raw data to analyse and predict customer needs. Text mining can also help us extract sentiment behind tweets and understand people’s emotions towards what is being sold.

Setting the scene

Which brings us to my analysis here on a dataset of tweets made regarding the US elections that took place in 2020. There were over a million tweets made about Donald Trump and Joe Biden which I put through R’s text mining tools to draw some interesting analytics and see how they measure up against the actual outcome – Joe Biden’s victory. My main aim was to perform sentiment analysis on these tweets to gain a consensus on what US citizens were feeling in the run up to the elections, and whether there was any correlation between these sentiments and the election outcome.

I found the Twitter data on Kaggle, containing two datasets: one of tweets made on Donald Trump and the other, Joe Biden. These tweets were collected using the Twitter API where the tweets were split according to the hashtags ‘#Biden’ and ‘#Trump’ and updated right until four days after the election – when the winner was announced after delays in vote counting. There was a total of 1.72 million tweets, meaning plenty of words to extract emotions from.

The process

I will outline the process of transforming the unstructured tweets into a more intelligible collection of words, from which sentiments could be extracted. But before I begin, there are some things I had to think about for processing this type of data in R:

1. Memory space – Your laptop may not provide you the memory space you need for mining a large dataset in RStudio Desktop. I used RStudio Server on my Mac to access a larger CPU for the size of data at hand.

2. Parallel processing – I first used the ‘parallel’ package as a quick fix for memory problems encountered creating the corpus. But I continued to use it for improved efficiency even after moving to RStudio Server, as it still proved to be useful.

3. Every dataset is different – I followed a clear guide on sentiment analysis posted by Sanil Mhatre. But I soon realised that although I understood the fundamentals, I would need to follow a different set of steps tailored to the dataset I was dealing with.

First, all the necessary libraries were downloaded to run the various transformation functions: tm, wordcloud and syuzhet for the text mining processes; stringr for stripping symbols from tweets; parallel for parallel processing of memory-consuming functions; and ggplot2 for plotting visualisations.

I worked on the Biden dataset first and planned to implement the same steps on the Trump dataset given everything went well the first time round. The first dataset was loaded in and stripped of all columns except that of tweets as I aim to use just tweet content for sentiment analysis.

The next steps require parallelising computations. First, clusters were set up based on (the number of processor cores – 1) available in the server; in my case, 8-1 = 7 clusters. Then, the appropriate libraries were loaded into each cluster with ‘clusterEvalQ’ before using a parallelised version of ‘lapply’ to apply the corresponding function to each tweet across the clusters. This is computationally efficient regardless of the memory space available.
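The cluster setup described above can be sketched as follows. This is an illustrative outline, not the author's exact code, and it assumes the tm and stringr packages are installed on the server.

```r
library(parallel)

# One cluster per processor core, minus one (8 cores -> 7 clusters here)
n_clusters <- detectCores() - 1
cl <- makeCluster(n_clusters)

# Load the required packages into every worker before any parallel calls
clusterEvalQ(cl, {
  library(tm)
  library(stringr)
})

# Memory-heavy per-tweet work is then dispatched with the parallelised
# version of lapply, e.g. parLapply(cl, tweets, some_cleaning_function)

stopCluster(cl)
```

The worker count of cores minus one leaves a core free for the main R session, which matches the 8-1 = 7 arithmetic above.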

So, the tweets were first cleaned by filtering out the retweet, mention, hashtag and URL symbols that cloud the underlying information. I created a larger function with all relevant subset functions, each replacing different symbols with a space character. This function was parallelised as some of the ‘gsub’ functions are inherently time-consuming.
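A minimal sketch of such a cleaning function in base R; the exact patterns used in the original analysis are not shown in the post, so these regular expressions are illustrative.

```r
# Replace retweet markers, mentions, hashtags and URLs with spaces.
# The patterns here are illustrative stand-ins for the original gsub calls.
clean_tweet <- function(x) {
  x <- gsub("RT\\s+", " ", x)          # retweet markers
  x <- gsub("@\\w+", " ", x)           # @mentions
  x <- gsub("#\\w+", " ", x)           # hashtags
  x <- gsub("https?://\\S+", " ", x)   # URLs
  x
}

clean_tweet("RT @voter Election day! #Election2020 https://t.co/abc")
```

Each `gsub` substitutes a space rather than an empty string so that adjacent words do not get fused together; the later `stripWhitespace` step collapses the extra spaces.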

A corpus of the tweets was then created, again with parallelisation. A corpus is a collection of text documents (in this case, tweets) that are organised in a structured format. ‘VectorSource’ interprets each element of the character vector of tweets as a document before ‘Corpus’ organises these documents, preparing them to be cleaned further using some functions provided by tm. Steps to further reduce complexity of the corpus text being analysed included: converting all text to lowercase, removing any residual punctuation, stripping the whitespace (especially that introduced in the customised cleaning step earlier), and removing English stopwords that do not add value to the text.
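A sketch of the corpus construction and tm clean-up steps just described, shown here without the parallelisation and with `tweets_clean` standing in for the cleaned character vector of tweets:

```r
library(tm)

# tweets_clean: character vector of tweets after the custom cleaning step
corpus <- Corpus(VectorSource(tweets_clean))

# Standard tm clean-up, as described above
corpus <- tm_map(corpus, content_transformer(tolower))       # lowercase
corpus <- tm_map(corpus, removePunctuation)                  # residual punctuation
corpus <- tm_map(corpus, stripWhitespace)                    # extra spaces
corpus <- tm_map(corpus, removeWords, stopwords("english"))  # English stopwords
```

`tolower` is a base function rather than a tm transformation, which is why it must be wrapped in `content_transformer` before being passed to `tm_map`.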

The corpus list had to be split into a matrix, known as a Term Document Matrix, describing the frequency of terms occurring in each document; the rows represent terms and the columns documents. This matrix was still too large to process further without removing any sparse terms, so a sparsity level of 0.99 was set and the resulting matrix only contained terms appearing in at least 1% of the tweets. It then made sense to cumulate sums of each term across the tweets and create a data frame of the terms against their calculated cumulative frequencies.

I went on to experiment with wordclouds initially to get a sense of the output words. Upon observation, I realised common election terminology and US state names were also clouding the tweets, so I filtered out a character vector of them i.e. ‘trump’, ‘biden’, ‘vote’, ‘Pennsylvania’ etc., alongside the more common Spanish stopwords, without adding an extra translation step. My criterion was to remove words that would not logically fit under any NRC sentiment category (see below). This removal method worked better than the one tm provides, which proved useless here and filtered none of the specified words. It was useful to watch the wordcloud distribution change as I removed the corresponding words; I started to understand whether the outputted words made sense regarding the elections and the process they were put through.
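These steps might look roughly like this; the banned-word vector is a small illustrative subset, not the full list used in the analysis.

```r
library(tm)
library(wordcloud)

tdm <- TermDocumentMatrix(corpus)              # rows = terms, cols = tweets
tdm <- removeSparseTerms(tdm, sparse = 0.99)   # keep terms in >= 1% of tweets

# Cumulative frequency of each remaining term across all tweets
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
word_freq <- data.frame(word = names(freqs), freq = freqs,
                        stringsAsFactors = FALSE)

# Filter election terminology, state names, etc. (illustrative subset)
banned <- c("trump", "biden", "vote", "election", "pennsylvania")
word_freq <- word_freq[!word_freq$word %in% banned, ]

# Display the 100 most frequent remaining words
wordcloud(word_freq$word, word_freq$freq, max.words = 100)
```

A sparsity argument of 0.99 tells `removeSparseTerms` to drop any term absent from more than 99% of documents, which is what restricts the matrix to terms appearing in at least 1% of the tweets.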

The entire process was executed several times, involving adjusting parameters (in this case: the sparsity value and the vector of forbidden words), and plotting graphical results to ensure its reliability before proceeding to do the same on the Trump dataset. The process worked smoothly and the results were ready for comparison.

The results

First on the visualisation list was wordclouds – a compact display of the 100 most common words across the tweets, as shown below.

[Figures: Joe Biden's analysis wordcloud; Joe Biden's analysis barplot; Donald Trump's analysis wordcloud; Donald Trump's analysis barplot]

The bigger the word, the greater its frequency in tweets. Briefly, it appears the word distributions for both parties are moderately similar, with the biggest words being common across both clouds. This can be seen on the bar charts on the right, with the only differing words being ‘time’ and ‘news’. There remain a few European stopwords that tm left in both corpora, the English ones being more popular. However, some of the English ones can be useful sentiment indicators e.g., ‘can’ could indicate trust. Some smaller words are less valuable as they cause ambiguity in categorisation without a clear context e.g., ‘just’, ‘now’, and ‘new’ may be coming from ‘new york’ or pointing to anticipation for the ‘new president’. Nonetheless, there are some reasonable connections between the words and each candidate; some words in Biden’s cloud do not appear in Trump’s, such as ‘victory’, ‘love’, ‘hope’. ‘Win’ is bigger in Biden’s cloud, whilst ‘white’ is bigger in Trump’s cloud, as well as occurrences of ‘fraud’. Although many of the terms lack the context for us to base full judgement upon, we already get a consensus of the kind of words being used in connotation to each candidate.

Analysing further, emotion classification was performed to identify the distribution of emotions present in the run up to the elections. The syuzhet library adopts the NRC Emotion Lexicon – a large, crowd-sourced dictionary of words tallied against eight basic emotions and two sentiments: anger, anticipation, disgust, fear, joy, sadness, surprise, trust, negative, positive respectively. The terms from the matrix were tallied against the lexicon and the cumulative frequency was calculated for each sentiment. Using ggplot2, a comprehensive bar chart was plotted for both datasets, as shown below.
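A sketch of that tallying step using syuzhet and ggplot2; weighting each term's NRC scores by its cumulative frequency is my reading of the description above, so treat the weighting line as an assumption.

```r
library(syuzhet)
library(ggplot2)

# word_freq: data frame of terms and their cumulative frequencies.
# get_nrc_sentiment() returns one row per term, with columns for the
# eight emotions plus the negative/positive sentiments.
nrc <- get_nrc_sentiment(word_freq$word)

# Weight each term's NRC tallies by how often the term appeared,
# then total per sentiment (assumed weighting, see lead-in)
sentiment_totals <- colSums(nrc * word_freq$freq)

sent_df <- data.frame(sentiment = names(sentiment_totals),
                      count = as.numeric(sentiment_totals))

ggplot(sent_df, aes(x = sentiment, y = count)) +
  geom_col() +
  labs(title = "NRC sentiment distribution", x = NULL, y = "weighted count")
```

Multiplying the `nrc` data frame by the frequency vector works element-wise down each column, so each term's row is scaled by that term's frequency before the per-sentiment totals are taken.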

Side-by-side comparison of Biden and Trump's sentiment distribution.

Some revealing insights can be drawn here. Straight away, there is an absence of anger and disgust in Biden’s plot whilst anger is very much present in that of Trump’s. There is 1.6 times more positivity and 2.5 times more joy pertaining to Biden, as well as twice the amount of trust and 1.5 times more anticipation about his potential. This is strong data supporting him. Feelings of fear and negativity, however, are equal in both; perhaps the audience were fearing the other party would win, or even what America’s future holds regarding either outcome. There was also twice the sadness and surprise pertaining to Biden, which makes me wonder if citizens were expressing the potential emotions they would feel if Trump won (since the datasets were only split based on hashtags), alongside being genuinely sad or surprised that Biden is one of their options.

In the proportional bar charts, there is a wider gap between positivity and negativity regarding Biden than Trump, meaning a lower proportion of people felt negatively about Biden. On the other hand, there is still around 13% trust in Trump, and a higher proportion of anticipation about him. Only around 4% of the words express sadness and surprise for him, which is around 2% lower than for Biden – intriguing. We also must remember to factor in the period after the polls opened, when the results were being updated and broadcast, which may have also affected people’s feelings – surprise and sadness may have risen for both Biden and Trump supporters whenever Biden took the lead. Also, there was a higher proportion fearing Trump’s position, and anger may have crept in as Trump’s support coloured the bigger states.

[Figures: Proportional distribution of Biden-related sentiments; Proportional distribution of Trump-related sentiments; Side-by-side comparison of proportional sentiment distribution]

Conclusions
Being on the other side of the outcome, it is more captivating to observe the distribution of sentiments across Twitter data collected through the election period. Most patterns we observed in the data pointed to Joe Biden as the next POTUS, with a few exceptions when a couple of negative emotions were also felt regarding the current president; naturally, not everyone will be fully confident in every aspect of his pitch. Overall, however, we saw clear anger only towards Trump, along with less joy, trust and anticipation. These visualisations, plotted using R’s tm package in a few lines of code, helped us draw compelling insights that supported the actual election outcome. It is indeed impressive how text mining can be performed with ease in R (once you have the technical aspects figured out) to create inferential results instantly.

Nevertheless, there were some limitations. We must consider that since the tweets were split according to the hashtags ‘#Biden’ and ‘#Trump’, it is possible the same tweets appear in both datasets. This may mean an overspill of emotions towards Trump in the Biden dataset and vice versa. Also, the analysis would have been clearer if we had contextualised the terms’ usage; maybe considering phrases instead would build a better picture of what people were feeling. Whilst plotting the wordclouds, as I filtered out a few foreign stopwords, more crept into the cloud each time, which calls for a more solid translation step before removing stopwords, so that all terms would then be in English. I also noted that despite trying to remove the “ ’s” character, which was in the top 10, it still filtered through to the end, serving as an anomaly in this experiment, as every other word in my custom vector was removed.

This experiment can be considered a success for an initial dip into the world of text mining in R, seeing that there is relatively strong correlation between the prediction and the outcome. There are several ways to improve this data analysis which can be aided with further study into various areas of text mining, and then exploring if and how R’s capabilities can expand to help us achieve more in-depth analysis.

My code for this experiment can be found here.

Categories: BI & Warehousing

Pages

Subscribe to Oracle FAQ aggregator