Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 14 hours 10 min ago

How to prevent sqlldr from aborting with WHEN clause

Fri, 2020-11-20 13:26
The file names will be the same. A bad file has ONLY the 1000 record; a good file has many record types. I can load the first record for both the good file and the bad file, but the good file aborts because it skips all the other record types. I was using the WHEN clause to load JUST the 1000 record.

This is an example of a bad file:

<code>1000,payment file failed,002 - Duplicate File.</code>

This is an example of a good file:

<code>1000,1.0,TEMPSUA,10142020071021,10162020172131
4000,1.0,814605760,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605770,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605780,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605790,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605810,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605820,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605830,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605840,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605850,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
5000,9,0,9</code>

I have a table with 1 header record and multiple detail records. A good file and a bad file will have the same file name. Only the header record will alert us to a failed file. I have a 'PRETEST' table that I only want to load the header record into, and then a 'PRETEST' script to test the header record. I use a WHEN clause and load only 1 record, but the step aborts because it creates a discard file with all the other good records. How can I prevent this from aborting? Thank you, Sherry Borden

<code>SQL*Loader: Release 19.0.0.0.0 - Production on Wed Nov 18 10:44:55 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Control File:   C:\AppWorx\sql\F_JPM_TF_RESPONSE_PRETEST_LOAD.ctl
Data File:      3140121.tmp
  Bad File:     3140121.bad
  Discard File: 3140121.dis
 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array:     250 rows, maximum of 1048576 bytes
Continuation:   none specified
Path used:      Conventional

Table TEMPLE_FINANCE.JPM_SUA_RESPONSE_PRETEST, loaded when JPM_REC_TYPE = 0X31303030(character '1000')
Insert option in effect for this table: REPLACE
TRAILING NULLCOLS option in effect

   Column Name                  Position   Len  Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
JPM_REC_TYPE                        FIRST     *   ,  O(") CHARACTER
JPM_S_OR_F                           NEXT     *   ,  O(") CHARACTER
JPM_ERRCODE_DESCRIP                  NEXT     *   ,  O(") CHARACTER

Record 2: Discarded - failed all WHEN clauses.
Record 3: Discarded - failed all WHEN clauses.
Record 4: Discarded - failed all WHEN clauses.
Record 5: Discarded - fa...</code>
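A hedged workaround sketch: SQL*Loader deliberately returns the warning exit code EX_WARN (typically 2) when rows land in the discard file, and the scheduler treats any non-zero exit as a failure. Reading the header through an external table sidesteps the exit code entirely, since skipped detail records never surface as an error. The directory object and column definitions below mirror the control file shown above but are assumptions:

<code>-- a minimal sketch, assuming a directory object DATA_DIR pointing at the data file location
CREATE TABLE jpm_sua_response_pretest_ext (
  jpm_rec_type        VARCHAR2(10),
  jpm_s_or_f          VARCHAR2(200),
  jpm_errcode_descrip VARCHAR2(400)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    LOAD WHEN (jpm_rec_type = '1000')         -- only the header record
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('3140121.tmp')
)
REJECT LIMIT UNLIMITED;                        -- detail records are silently skipped, no abort

INSERT INTO temple_finance.jpm_sua_response_pretest
SELECT * FROM jpm_sua_response_pretest_ext;</code>

If sqlldr must stay, another option is to have the calling AppWorx step treat SQL*Loader's warning exit code as success when the log shows only WHEN-clause discards.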
Categories: DBA Blogs

Performing sum of all matched substrings from a string using regular expression

Fri, 2020-11-20 13:26
I have a database table named T_USAGE with a column GENERAL_1. The GENERAL_1 column contains values like the following:

<code>14348860:1T:24:|120|1120|2000*14348860:1T:24:|120|1220|3000*14348860:1T:24:|120|1120|879609299148</code>

I have to sum the substrings enclosed between | (pipe) and * (asterisk). In the value above there are two such substrings (2000, 3000). Using the regexp_substr() function, I am able to identify only the first substring: <code>select regexp_substr('input', '\|([0-9])+\*') test from dual;</code> How do I identify all occurrences and perform the addition? Please provide a SQL query if possible. The expected output is (2000 + 3000) = 5000.
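A sketch of one standard approach: walk the occurrences with CONNECT BY LEVEL and let the sixth argument of regexp_substr return just the captured group, then SUM the lot. Shown against the literal sample value; running it per row of the real T_USAGE table would need the usual PRIOR tricks (see the split-to-rows question further down this feed):

<code>-- a minimal sketch against the sample value from the question
select sum(to_number(regexp_substr(str, '\|(\d+)\*', 1, level, null, 1))) as total
from  (select '14348860:1T:24:|120|1120|2000*14348860:1T:24:|120|1220|3000*14348860:1T:24:|120|1120|879609299148' as str
       from dual)
connect by level <= regexp_count(str, '\|\d+\*');   -- two matches here, so TOTAL = 5000</code>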
Categories: DBA Blogs

How to audit all Select and DML by a user?

Fri, 2020-11-20 13:26
Good afternoon, How can we audit all SELECT and DML statements by a user? I tried this: <code>AUDIT ALL BY JCANTU;</code> Then I ran a few selects, but they didn't appear in the audit trail, so I ended up just doing a SQL trace. Is AUDIT ALL supposed to create an audit log entry when I select from a table, so that the audit trail shows I performed a select and names the table? Thanks,
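A hedged sketch: in traditional auditing, SELECT TABLE, INSERT TABLE, UPDATE TABLE and DELETE TABLE belong to the "additional statement options", which the ALL shortcut does not cover, so they have to be audited explicitly (and the AUDIT_TRAIL parameter must be set, which needs a restart):

<code>-- a minimal sketch, assuming traditional (non-unified) auditing
-- AUDIT_TRAIL must be DB or DB,EXTENDED (spfile change + restart)
AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY jcantu BY ACCESS;

-- then check the trail
SELECT timestamp, username, owner, obj_name, action_name
FROM   dba_audit_trail
WHERE  username = 'JCANTU'
ORDER  BY timestamp;</code>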
Categories: DBA Blogs

RMAN : how to restore a dropped tablespace if no catalog and no PITR

Fri, 2020-11-20 13:26
Hello experts, I am on 12.2, multi-tenant architecture, no RMAN catalog, controlfile autobackup enabled. I have a problem restoring a dropped tablespace with RMAN. I created it and made a complete backup of my container, including the PDB and the tablespace.

<code>SQL> CREATE TABLESPACE ZZTBS DATAFILE '/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf' size 10m
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
...
Starting backup at 02-NOV-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf
input datafile file number=00011 name=/u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf
input datafile file number=00009 name=/u01/app/oracle/oradata/orcl12c/orcl/system01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/orcl12c/orcl/users01.dbf
input datafile file number=00016 name=/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf
input datafile file number=00013 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf
input datafile file number=00014 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf
channel ORA_DISK_1: starting piece 1 at 02-NOV-20
channel ORA_DISK_1: finished piece 1 at 02-NOV-20
piece handle=/u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp tag=TAG20201102T102548 comment=NONE
...</code>

We can see that the backup is OK: BS Key 2, File 16, and, most important, the Name column is filled with my tablespace's datafile.

<code>RMAN> list backup;

List of Backup Sets
===================
...
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2       Full    1.41G      DISK        00:00:34     02-NOV-20
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20201102T102548
        Piece Name: /u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp
  List of Datafiles in backup set 2
  Container ID: 3, PDB Name: ORCL
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  9    Full 2166604    02-NOV-20              NO     /u01/app/oracle/oradata/orcl12c/orcl/system01.dbf
  10   Full 2166604    02-NOV-20              NO     /u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf
  11   Full 2166604    02-NOV-20              NO     /u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf
  12   Full 2166604    02-NOV-20              NO     /u01/app/oracle/oradata/orcl12c/orcl/users01.dbf
  13   Full 2166604    02-NOV-20              NO     /u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf
  14   Full 2166604    02-NOV-20              NO     /u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf
  16   Full 2166604    02-NOV-20              NO     /u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf</code>

I drop my tablespace:

<code>SQL> drop tablespace ZZTBS INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.</code>

The problem is that, after the drop, the control file holds no more references to my tablespace. So when I use RMAN, connected to the PDB, I get an error saying it does not know my tablespace:

<code>RMAN> LIST BACKUP OF TABLESPACE ZZTBS;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of list command at 11/02/2020 10:28:10
RMAN-20202: Tablespace not...</code>
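A hedged sketch of one direction: without a catalog, the backup metadata lives only in the control file, so the first step is usually to bring back a control file from an autobackup taken before the drop; once mounted with it, LIST BACKUP sees the ZZTBS datafile again and a point-in-time recovery to just before the drop becomes possible. The timestamp below is illustrative:

<code>RMAN> -- a minimal sketch, assuming the default autobackup location is reachable
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP NOMOUNT;
RMAN> RUN {
2>      SET UNTIL TIME "to_date('02-NOV-20 10:27:00','DD-MON-RR HH24:MI:SS')";
3>      RESTORE CONTROLFILE FROM AUTOBACKUP;
4>    }
RMAN> ALTER DATABASE MOUNT;
RMAN> LIST BACKUP OF TABLESPACE ORCL:ZZTBS;   -- the backup set is visible again</code>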
Categories: DBA Blogs

OLAP - How can I check that the database uses it

Fri, 2020-11-20 13:26
Hello, This is my first post, please be understanding :). I have questions about OLAP:

1. In Oracle 11.2 OLAP was free; in Oracle 12.2 does OLAP require a license?
2. What happens when I import a database that uses OLAP into an environment without OLAP?
3. How do I check whether the database actually uses OLAP? <code>select name, first_usage_date, last_usage_date from dba_feature_usage_statistics where name like '%OLAP%' and first_usage_date is not null;</code> Is that enough?
4. How do I check that the database is a warehouse?

Thank you. Regards, Krzysztof
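A hedged supplement to the feature-usage query in point 3: two more places to look are V$OPTION (is the OLAP option linked into this binary at all) and the analytic-workspace dictionary views (is anyone actually storing OLAP objects):

<code>-- a minimal sketch; empty results suggest OLAP is not really in use
select parameter, value from v$option where parameter = 'OLAP';

select owner, aw_name
from   dba_aws;          -- analytic workspaces are the core OLAP storage objects

select owner, cube_name
from   all_cubes;        -- OLAP cubes defined in the database</code>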
Categories: DBA Blogs

More than 270 columns in a Table

Tue, 2020-11-17 12:06
We are using Oracle version 11.2.0.4. We have a table that already has ~270 columns, and we just got a request from the dev team to add a couple more. Considering the row chaining that sets in beyond ~255 columns, we are asking the team not to add the new columns but instead to plan on dropping existing ones so that the total stays within ~255. The dev team is asking whether we can show some evidence of the overhead we are currently bearing from those additional columns.

I was trying to see if we can relate the statistic "table fetch continued row" to database time. I do see in DBA_HIST_SYSSTAT that ~400 million "table fetch continued row" are recorded per hourly AWR snapshot, but I am not sure how that can be converted into the amount of DB time it contributes. Is there any way to do that? Also, I am not able to associate this statistic with any SQL_ID, so I wanted to know if there is an AWR/ASH view that stores "table fetch continued row" for specific SQL_IDs.

Secondly, I manually ran a query that in reality executes millions of times per day from the application. But when I fetched "table fetch continued row" from V$SESSTAT for that session, I saw a value of "0". So at least at this stage the table does not seem to suffer from row chaining. But how do we ensure that after adding a couple more columns we will still be safe from the row-chaining symptom?

The table is range partitioned on column CRT_DT, holds ~1 TB of data with NUM_ROWS = 1,400,752,800 and AVG_ROW_LEN of 236 at the global level, and spans ~73 partitions.

SELECT * FROM TRAN_TAB WHERE ID = :B2 AND CRT_DT = to_date(:B1 ,'MM/DD/YYYY HH24:MI:SS');

Plan hash value: 2186597613

<code>-------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |             |       |       |     3 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE            |             |     1 |   265 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| TRAN_TAB    |     1 |   265 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    INDEX UNIQUE SCAN               | TRAN_TAB_UK |     1 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------------</code>

I see a few blogs describing issues with columns beyond ~255, but I am still trying to relate them to this database: will we be affected by such issues in the future (if we add N more columns), or are we already being impacted without knowing it? https://jonathanlewis.wordpress.com/2018/02/28/255-columns-3/
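A hedged way to get the per-query evidence the dev team wants: snapshot the session-level statistic around a controlled run of the hot query on a clone that has the extra columns populated. The delta is exactly the number of extra row-piece fetches the added columns cause; the statistic name is standard, the test setup is an assumption:

<code>-- a minimal sketch: run in the test session before and after the candidate query
select sn.name, st.value
from   v$mystat st
       join v$statname sn on sn.statistic# = st.statistic#
where  sn.name = 'table fetch continued row';

-- note: intra-block chaining beyond 255 columns only bites when the trailing
-- columns are non-NULL (trailing NULLs are not stored), so populate the new
-- columns in the test rows before measuring</code>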
Categories: DBA Blogs

Error ORA-00942 while exporting database 19c (unix) with exp 12c (windows).

Sun, 2020-11-15 23:26
I get an ORA-00942 while exporting a 19c database with the exp 12.2 client installed on Windows Server 2016. CATEXP.SQL has already been executed. The parameter file:

<code>File = ..\Export\Export1.dmp
Log = ..\Log\Export1.log
Full = N
Grants = Y
Indexes = Y
Rows = Y
Constraints = Y
Compress = Y
Consistent = Y
Statistics = none</code>

<code>exp UserId="SAMADMIN/*****@SAMT" ...

Export: Release 12.2.0.1.0 - Production on Thu Nov 12 17:56:07 2020
Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)

. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user SAMADMIN
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user SAMADMIN
About to export SAMADMIN's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export SAMADMIN's tables via Conventional Path ...
. . exporting table                            ACC        166 rows exported
. . exporting table                        ACC_TAB         13 rows exported
...
. . exporting table                      Z_IN_MODE          1 rows exported
. . exporting table                      Z_OUT_CRA          0 rows exported
. . exporting table                     Z_OUT_CRAD          0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
EXP-00056: ORACLE error 942 encountered
ORA-00942: table or view does not exist
EXP-00000: Export terminated unsuccessfully
exp exited with error code 1</code>

Thanks. Regards. Marc
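A hedged pointer: classic exp is desupported against 19c dictionaries and trips over objects that no longer exist (here apparently while exporting snapshot logs); the usual advice is to switch to Data Pump, which the same 12.2 client installation ships with. A sketch, with directory and file names as assumptions:

<code>REM a minimal sketch: Data Pump export of the same schema
REM (DATA_PUMP_DIR is a server-side directory object; names illustrative)
expdp SAMADMIN/*****@SAMT schemas=SAMADMIN directory=DATA_PUMP_DIR ^
      dumpfile=export1.dmp logfile=export1.log</code>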
Categories: DBA Blogs

"snapshot too old: rollback segment number %s with name \"%s\" too small"

Sun, 2020-11-15 23:26
Some general info about the setup: we have one DB server that is used for all reporting. That server has DB links to the ADGs of the different applications. In this particular case I have a query that consists of two queries combined with a UNION ALL: query 1 executes on remote A, query 2 on remote B. Executed separately they work just fine; combined in one UNION ALL I get the error: "snapshot too old: rollback segment number %s with name \"%s\" too small". When I look up the error, the advice always comes back to "try to run at a different time when no DML is being executed", etc., but that does not fit my case. I can perfectly execute the two queries separately, in parallel with the unioned one, and get results, while the UNION ALL instantly fails. The word "instantly" is important here; it is not the case that the query runs for hours. Query 1 = 120 s; query 2 = 90 s; the union errors in < 1 s. The result is reproducible every minute of the day.
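A hedged workaround sketch: once both remote ADGs sit in one distributed query, they must agree on a single read-consistent SCN, which is one way an instant ORA-01555 can appear; materializing each branch in its own transaction keeps the two standbys independent. Table and column names are illustrative:

<code>-- a minimal sketch, assuming a staging table on the reporting server
insert into report_stage (id, amount)
  select id, amount from tx@remote_a;
commit;

insert into report_stage (id, amount)
  select id, amount from tx@remote_b;
commit;

select * from report_stage;   -- the former UNION ALL, now a plain local read</code>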
Categories: DBA Blogs

flashback_transaction_query

Sun, 2020-11-15 23:26
I am trying to view the transactions that have taken place in the database, but whenever I query FLASHBACK_TRANSACTION_QUERY it returns 'no rows selected'. Please, is there something I need to do that I am not doing? I need your help. Below are the SQL statements I ran:

<code>SQL> ed
Wrote file afiedt.buf

  1  select name, course, id, versions_operation,versions_starttime as starttime, versions_xid from dept
  2  versions between timestamp
  3  to_timestamp('13-nov-2020 02:58:45', 'dd-mon-yyyy hh24:mi:ss')
  4* and to_timestamp('13-nov-2020 03:09:07', 'dd-mon-yyyy hh24:mi:ss')
SQL> /

NAME       COURSE     ID         V STARTTIME                  VERSIONS_XID
---------- ---------- ---------- - -------------------------- ----------------
Stephen    Mech Eng   5          U 13-NOV-20 03.00.56 AM      0200010015050000
Moses      Chem101    2          D 13-NOV-20 02.59.57 AM      03000F00DD040000
Moses      Chem101    2
Belle      chem101    3
Moniq      Mech Eng   5
Serah      Phy201     2          I 13-NOV-20 03.03.20 AM      0700150038040000
Ian        Phy101     1          I 13-NOV-20 03.03.20 AM      0700150038040000
Ehiedu     micro203   4
Loveth     Med        6

9 rows selected.

SQL> ed
Wrote file afiedt.buf

  1  select xid, start_scn,logon_user, table_name, undo_sql from flashback_transaction_query
  2* where xid=hextoraw('0200010015050000')
SQL> /

no rows selected

SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.

SQL> select xid, start_scn,logon_user, table_name, undo_sql from flashback_transaction_query
  2  where xid=hextoraw('0200010015050000');

no rows selected

SQL> select * from dba_flashback_txn_report;

no rows selected</code>

I am using Oracle 12cR2 on the Oracle Linux operating system. Thank you.
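A hedged checklist: FLASHBACK_TRANSACTION_QUERY is mined from redo/undo, and it stays empty unless minimal supplemental logging was enabled before the transactions of interest, and the querying user holds the right privilege. A sketch (the grantee name is a placeholder):

<code>-- a minimal sketch of the usual prerequisites
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;     -- must be on BEFORE the DML happens
GRANT SELECT ANY TRANSACTION TO some_user;    -- placeholder user

-- then re-run the DML and query again
select xid, logon_user, table_name, undo_sql
from   flashback_transaction_query
where  xid = hextoraw('0200010015050000');</code>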
Categories: DBA Blogs

Looking for other ways to uncompress LOBs

Fri, 2020-11-13 16:26
Hi, I am trying to uncompress the following LOB using ALTER TABLE:

<code>OWNER                          TABLE_NAME           COLUMN_NAME               COMPRE
------------------------------ -------------------- ------------------------- ------
ALTIUS_AS_OWNER                SURVEY               XML_AVAIL                 MEDIUM</code>

<code>alter table ALTIUS_AS_OWNER.SURVEY MODIFY LOB (XML_AVAIL) (NOCOMPRESS KEEP_DUPLICATES) PARALLEL 4;</code>

But it is taking a lot of time, more than 8 hours, and causing locks on the table. The question is: can I use the DBMS_REDEFINITION package to uncompress the column without causing locks on the table? If so, would this statement be okay?

<code>BEGIN
  DBMS_REDEFINITION.REDEF_TABLE(
    uname                      => 'ALTIUS_AS_OWNER',
    tname                      => 'SURVEY',
    table_compression_type     => 'NULL',
    table_part_tablespace      => 'NULL',
    index_key_compression_type => 'NULL',
    index_tablespace           => 'NULL',
    lob_compression_type       => 'COMPRESS NONE',
    lob_tablespace             => 'NULL',
    lob_store_as               => 'NULL');
END;</code>
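A hedged observation on the call above: the string 'NULL' is not the same as passing NULL; the parameters all default to NULL, so the usual pattern is to name only the one you change, and the documented value for turning SecureFiles compression off appears to be 'NOCOMPRESS' rather than 'COMPRESS NONE'. A sketch, to be verified against the DBMS_REDEFINITION documentation for your release:

<code>-- a minimal sketch; parameter value to be verified against the docs
BEGIN
  DBMS_REDEFINITION.REDEF_TABLE(
    uname                => 'ALTIUS_AS_OWNER',
    tname                => 'SURVEY',
    lob_compression_type => 'NOCOMPRESS');   -- omit parameters you do not change
END;
/</code>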
Categories: DBA Blogs

Fatal NI connect error 28791

Thu, 2020-11-12 22:06
Hi Tom, reading an OBIEE 12 log, I found the following error:

<code><txt>Fatal NI connect error 28791, connecting to: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS).......</code>

It seems that when OBIEE attempts to access the DB it falls into this error, and it seems to occur often, even during times when no user is running reports. I would appreciate very much any suggestion you could give me. Thank you very much, Natascia
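A hedged first diagnostic step: with a TCPS endpoint in play, client-side SQL*Net tracing on the OBIEE host usually shows whether the failure happens during the TLS handshake or later; the trace settings below are standard sqlnet.ora parameters, the directory is illustrative:

<code># a minimal sketch: sqlnet.ora on the OBIEE host
DIAG_ADR_ENABLED = OFF
TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = /tmp/nettrace
TRACE_TIMESTAMP_CLIENT = ON</code>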
Categories: DBA Blogs

Undo Tablespace; autoextend or not to autoextend that is the question

Thu, 2020-11-12 22:06
Tom, I am looking for guidance on whether the undo tablespace data files should have autoextend on or off. Undo auto-tuning is enabled. I am working through expanding the size of the UNDO tablespace so that my users can take advantage of flashback query. The current tablespace size and workload supports just under one day's worth of daily change, and based upon the happy accidents some of my users are making, it would be beneficial to expand this to a couple of weeks. The database I have inherited is a modestly sized database, 20-ish TB; the undo tablespace is 14 files (~500 GB), all fully pre-allocated in size, but autoextend was turned on. There have been no ORA-01555 errors within the past 12 months. If this were a traditional tablespace I would have turned autoextend off already. UNDO is a different beast, and after a bunch of web research I am getting competing advice. But if everything works the way the Oracle documentation says it does... (grain of salt always) it seems to me that the best method for my database is going with autoextend off, based upon the calculations that auto-tuning does. * Auto-tuning with autoextend off will default to a retention threshold of 85% of the tablespace, using UNDO_RETENTION as a guide. I plan on changing the UNDO_RETENTION parameter from the default 15 minutes upwards, maybe as high as 3 weeks. I plan on increasing the size of the undo tablespace and the UNDO_RETENTION parameter in alternating stages to track performance, growth and impact to backups. Am I crazy? Is there a better approach? Thanks! Mark
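A hedged sizing aid for the staged approach described above: V$UNDOSTAT already records what auto-tuning has been able to deliver and the peak undo generation rate, which together give a defensible target size before each expansion step:

<code>-- a minimal sketch: current tuned retention vs. peak undo generation
select max(tuned_undoretention)                          as tuned_retention_sec,
       max(undoblks / ((end_time - begin_time) * 86400)) as peak_undo_blocks_per_sec
from   v$undostat;

-- rough bytes needed ~= peak_undo_blocks_per_sec * db_block_size * desired_retention_sec</code>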
Categories: DBA Blogs

Extracting data elements from Json within PL/SQL

Thu, 2020-11-12 03:46
I am building a package procedure that will accept a string of JSON as the input. In SQL I can select a single value from the JSON object, but using SELECT x INTO var FROM ... in PL/SQL throws an error:

<code>ORA-06550: line 10, column 12:
PL/SQL: ORA-19200: Invalid column specification</code>

<code>with jdata as (
  select treat('{"practitionerId":12345,"gender":"F"}' as json) as jrow
  from dual
)
select j.jrow.practitionerId
from jdata j;

declare
  p_input_json   varchar2(32767) := '{"practitionerId":12345,"gender":"F"}';
  p_practitioner int;
begin
  with jdata as (
    select treat(p_input_json as json) as jrow
    from dual
  )
  select j.jrow.practitionerId
  into   p_practitioner
  from   jdata j;
end;
/</code>

After several hours of searching the docs and ye olde internet, I cannot find a method of extracting scalars from JSON in PL/SQL. Can you point me in the right direction?
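A hedged sketch of two alternatives that avoid the dot notation entirely: JSON_VALUE inside a SQL statement, or the JSON_OBJECT_T PL/SQL API, both available from 12.2 onwards:

<code>-- a minimal sketch, assuming Oracle 12.2+
declare
  p_input_json   varchar2(32767) := '{"practitionerId":12345,"gender":"F"}';
  p_practitioner number;
  jo             json_object_t;
begin
  -- option 1: JSON_VALUE inside a SQL statement
  select json_value(p_input_json, '$.practitionerId' returning number)
  into   p_practitioner
  from   dual;

  -- option 2: the PL/SQL JSON object API, no SQL round trip
  jo := json_object_t.parse(p_input_json);
  p_practitioner := jo.get_number('practitionerId');

  dbms_output.put_line(p_practitioner);
end;
/</code>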
Categories: DBA Blogs

Error Logging using LOG ERRORS INTO

Thu, 2020-11-12 03:46
I have a requirement to insert records (or run any other DML) into a table, but when error records are encountered the inserts must continue and the errors must be logged. Logically I thought of using LOG ERRORS INTO a table. I have a few questions pertaining to this:

1. Can a collection or a temporary table be used instead of an error log table (which is created using DBMS_ERRLOG.create_error_log or manually)?
2. Is LOG ERRORS INTO the only way to "continue the insert in case of erred records and return log info for the erred records"?
3. Will LOG ERRORS INTO still work if the RETURNING INTO clause is used in the DML statement?

Thank you!
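For reference alongside the questions, a minimal sketch of the LOG ERRORS pattern itself (table names are illustrative):

<code>-- a minimal sketch of DML error logging
begin
  dbms_errlog.create_error_log(dml_table_name => 'TARGET_T');  -- creates ERR$_TARGET_T
end;
/

insert into target_t
select * from source_t
log errors into err$_target_t ('batch 1') reject limit unlimited;

select ora_err_number$, ora_err_mesg$, ora_err_tag$ from err$_target_t;</code>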
Categories: DBA Blogs

"alter database rename" online-redo causes "WARNING: Cannot delete Oracle managed file"

Thu, 2020-11-12 03:46
Hello team, I'm restoring a DB backup from tape, using RMAN, on the same host into a different, new instance. The source DB/instance is running and uses ASM for online redo logs, spfile, controlfiles, datafiles, etc. The RMAN restore/recover commands work perfectly. Just before I do an OPEN RESETLOGS, I rename the online redo logs of the recovered DB on the new instance, and here is where I need some advice:

<code>alter database rename file '+DATA/SOURCEDB_TLRSCAN1/ONLINELOG/group_1.260.1048356907'
  to '+DATA/NEWDB_TLRSCAN1/ONLINELOG/group_1.260.1048356907';</code>

Completed! The rename was done and the new DB can be started, but this warning in the alert log bothers me:

<code>WARNING: Cannot delete Oracle managed file +DATA/SOURCEDB_TLRSCAN1/ONLINELOG/group_1.260.1048356907
Completed: alter database rename file '+DATA/SOURCEDB_TLRSCAN1/ONLINELOG/group_1.260.1048356907'
  to '+DATA/NEWDB_TLRSCAN1/ONLINELOG/group_1.260.1048356907'</code>

I suppose Oracle tries to delete the online redo files, but the ASM files are locked by the source instance/database. Is there a safer way to rename the online redo logs and not get these warnings? Thank you
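A hedged alternative sketch: since OPEN RESETLOGS recreates the online redo logs anyway, pointing the new instance at its own OMF destination (or mapping the paths with LOG_FILE_NAME_CONVERT) lets Oracle build fresh logs under the new DB's directory, so no rename of the source's managed files is needed. To be tested on a scratch restore first:

<code>-- a minimal sketch in the NEW instance before OPEN RESETLOGS (values illustrative)
alter system set db_create_online_log_dest_1 = '+DATA' scope=spfile;
-- or map old paths to new ones for the restored database:
alter system set log_file_name_convert = 'SOURCEDB_TLRSCAN1','NEWDB_TLRSCAN1' scope=spfile;</code>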
Categories: DBA Blogs

function name as column name

Thu, 2020-11-12 03:46
Hi, I have a SELECT INTO query in a stored procedure which uses a column GROUP_ID in the WHERE clause. But it is not filtering the data, as the column name is the same as the Oracle function name GROUP_ID. I cannot change the column name. How do I use it in a PL/SQL WHERE clause?

<code>Select lmt_id into l_lmt_id from table1 t where t.group_id = p_grp_id;</code>
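A hedged note: an unqualified group_id can be parsed as the GROUP_ID() function, but a table-alias-qualified reference, as already written above, resolves to the column; if filtering still misbehaves, the usual second suspect is a PL/SQL variable or parameter shadowing a name in the query. A sketch of both defenses (the procedure shell is illustrative):

<code>-- a minimal sketch with names from the question
create or replace procedure get_lmt(p_grp_id in number, l_lmt_id out number) as
begin
  select t.lmt_id                 -- alias-qualify every column reference
  into   l_lmt_id
  from   table1 t
  where  t.group_id = p_grp_id;   -- t.group_id is the column, never the function
end;
/</code>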
Categories: DBA Blogs

Export tables from different schemas using DBMS_DATAPUMP api

Wed, 2020-11-11 09:26
I have a requirement to export tables from different schemas using the DBMS_DATAPUMP API. The script below fails after exporting 0.5 GB of data with these errors:

<b>ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_XML_OBJECTS [JOB:"MySchama3"]
ORA-10260: limit size () of the PGA heap set by event 10261 exceeded
ORA-10260: limit size (1048576) of the PGA heap set by event 10261 exceeded
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 11252</b>

Script:

<code>h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => exportJobName);
dbms_datapump.set_parallel(handle => h1, degree => 15);
dbms_datapump.add_file(handle => h1, filename => exportLogFile, directory => exportLogDir, filetype => 3);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_LIST', value => q'|'MySchema1','MySchema2','MySchema3'|');
dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => q'|in ('Table1','Table2','Table3') |', object_path => 'TABLE');
dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
dbms_datapump.add_file(handle => h1, filename => exportJobName || '_EXPDAT_' || to_char(systimestamp,'dd-mm-yyyy_hh24-mi-ss-FF') || '_%U.DMP', directory => exportStageDir, filetype => 1);
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
dbms_datapump.set_parameter(handle => h1, name => 'ENCRYPTION_ALGORITHM', value => 'AES256');
dbms_datapump.set_parameter(handle => h1, name => 'ENCRYPTION_PASSWORD', value => exportEncryptionPwd);
dbms_datapump.set_parameter(handle => h1, name => 'FLASHBACK_SCN', value => flashbackSCN);
dbms_datapump.start_job(h1);
dbms_datapump.wait_for_job(h1, job_state);
dbms_datapump.detach(h1);
exception
  when others then
    raise_application_error(-20001, 'An error was encountered - ' || SQLCODE || ' -ERROR- ' || SQLERRM);</code>
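A hedged first check before touching the script: ORA-10260 says the PGA heap cap came from event 10261, which is normally present only because someone (or a workaround) set it, so confirm whether that event is actually set at the instance level before tuning the Data Pump job itself:

<code>-- a minimal sketch: is event 10261 (or a PGA cap) set on this instance?
select name, value
from   v$parameter
where  name in ('event', 'pga_aggregate_limit');</code>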
Categories: DBA Blogs

JSON - PL/SQL ORA-30625: method dispatch on NULL SELF argument is disallowed

Wed, 2020-11-11 09:26
Sample JSON:

<code>[
  {
    "term-id": "BA000000",
    "bank-id": "111",
    "location": "Poslovalnica banke",
    "address": "Cankarjeva ulica 2",
    "city": "Ljubljana",
    "post-code": "1000",
    "printer": "true",
    "deposit": "false",
    "accessible": "true",
    "cards": "DECPVR",
    "special-payments": "false",
    "BNA": "false",
    "transaction-receipt": "true",
    "latitude": 46.051671,
    "longitude": 14.505122
  }
]</code>

I am trying to utilize pljson:

<code>declare
  w_req       t_http_request := t_http_request();
  w_res       t_http_response;
  w_vrni      clob;
  w_json      pljson;
  w_jsonValue pljson_value;
  w_jsonList  pljson_list;
  w_test      varchar2(100);
begin
  w_req.url := 'https://api.bankart.si/psd2/hub/v1/' || 'ATMList';
  w_req.add_header('x-ibm-client-id', 'client-id');
  w_res := https_client.doGet(w_req, 'DB');
  w_vrni := hibis_util.convertBlobToClob(w_res.content_blob, 'AL32UTF8');
  w_jsonList := pljson_list(w_vrni);
  if w_jsonList is not null and w_jsonList.count > 0 then
    for i in 1..w_jsonList.count loop
      w_json := pljson(w_jsonList.get(i));
      w_jsonValue := w_json.get('term-id');
      w_test := w_jsonValue.get_string;
      dopl(w_test);
    end loop;
  end if;
end;</code>

I am able to extract, say, the location or address values from this JSON, but when I want to extract the term-id or bank-id elements I get the error PL/SQL ORA-30625: method dispatch on NULL SELF argument is disallowed. Maybe because of the "-" sign in between? Thanks!
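A hedged reading of the error: ORA-30625 means a member method was called on a NULL object instance, i.e. w_json.get('term-id') returned NULL and get_string was then invoked on it; guarding the call makes the loop survive and shows which keys really come back empty. A self-contained sketch, assuming the pljson library is installed:

<code>-- a minimal sketch: never call get_string on a NULL pljson_value
declare
  w_json      pljson := pljson('{"term-id":"BA000000","bank-id":"111"}');
  w_jsonValue pljson_value;
begin
  w_jsonValue := w_json.get('term-id');
  if w_jsonValue is not null then              -- guard: get() returns NULL for a missing pair
    dbms_output.put_line(w_jsonValue.get_string);
  else
    w_json.print;                              -- dump the object to inspect the real key names
  end if;
end;
/</code>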
Categories: DBA Blogs

split delimited column (by line feeds) to rows

Wed, 2020-11-11 09:26
My data is like the following:

<code>ID    HOSTS
----- -----
ID123 host1
      host2
      host3
ID124 host4
      host5</code>

The HOSTS column lists several values separated by return characters (not commas). I want to output my result as follows:

<code>ID123 host1
ID123 host2
ID123 host3
ID124 host4
ID124 host5</code>

I have seen solutions where the source is a comma-delimited list, using LEVEL and CONNECT BY, but in this case it is line feeds / return characters. Thanks for your help :)
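A hedged sketch of the same LEVEL/CONNECT BY pattern with chr(10) as the delimiter; the table name is illustrative, and if the data carries Windows line endings a chr(13) trim belongs in there too:

<code>-- a minimal sketch; T is an illustrative table name
select id,
       trim(regexp_substr(hosts, '[^' || chr(10) || ']+', 1, level)) as host
from   t
connect by level <= regexp_count(hosts, chr(10)) + 1
       and prior id = id
       and prior sys_guid() is not null;   -- stops the row generator from cross-joining rows</code>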
Categories: DBA Blogs

"Filter" step during chunk based processing.

Wed, 2020-11-11 09:26
Team: Please find below the test case (modelled on our business requirements) along with the execution plan (to show the predicates section) and the SQL Monitor report for timings; this was run on Oracle 18c on the ExaCC platform. Please help us understand why the optimizer generated the "filter" at step 2 of the plan. In our real execution we could see that that particular step took around 12 to 15 minutes (the result set is getting buffered and the filter is getting evaluated), so how can we get rid of that "filter"?

<code>drop table driver purge;
drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table driver as
select a.* from all_objects a, (select rownum from dual connect by level <= 10);

create table t1 as
select a.* from all_objects a, (select rownum from dual connect by level <= 100);

create table t2 as
select t1.* from t1;

create table t3( object_id number, data_object_id number );

exec dbms_parallel_execute.create_task(task_name=>'DEMO_TASK');

begin
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name=>'DEMO_TASK',
    table_owner=>user,
    table_name=>'DRIVER',
    by_row=>false,
    chunk_size=>100);
end;
/

select count(*) from user_parallel_execute_chunks where task_name='DEMO_TASK';

declare
  l_sql long;
begin
  l_sql :=' insert into t3( object_id, data_object_id)
            select t1.object_id, t2.data_object_id
            from t1, t2, driver t3
            where t1.object_id = t2.object_id
            and t2.object_id = t3.object_id
            and t3.rowid between :start_id and :end_id ';
  dbms_parallel_execute.run_task(
    task_name=>'DEMO_TASK',
    sql_stmt=>l_sql,
    language_flag=>dbms_sql.native,
    parallel_level=>4);
end;
/

demo@PDB1> select * from table( dbms_xplan.display_cursor('9cbx808m23rkg'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------
SQL_ID  9cbx808m23rkg, child number 0
-------------------------------------
insert into t3( object_id, data_object_id) select t1.object_id,
t2.data_object_id from t1, t2, driver t3 where t1.object_id =
t2.object_id and t2.object_id = t3.object_id and t3.rowid between
:start_id and :end_id

Plan hash value: 3603417986

----------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                        |        |       |       |       |   303K(100)|          |
|   1 |  LOAD TABLE CONVENTIONAL                | T3     |       |       |       |            |          |
|*  2 |   FILTER                                |        |       |       |       |            |          |
|*  3 |    HASH JOIN                            |        |    61M|  1825M|    22M|   303K  (1)| 00:00:12 |
|*  4 |     HASH JOIN                           |        |   622K|    15M|       |   148K  (1)| 00:00:06 |
|*  5 |      TABLE ACCESS STORAGE BY ROWID RANGE| DRIVER |  6267 |   104K|       | 13586   (1)| 00:00:01 |
|   6 |      TABLE ACCESS STORAGE FULL          | T2     |    25M|   215M|       |   135K  (1)| 00:00:06 |
|   7 |     TABLE ACCESS STORAGE FULL           | T1     |    25M|   119M|       |   135K  (1)| 00:00:06 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(CHARTOROWID(:END_ID)>=CHARTOROWID(:START_...</code>
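A hedged observation plus an experiment: the step-2 FILTER compares only the two binds to each other (CHARTOROWID(:END_ID) >= CHARTOROWID(:START_ID)), a guard against an empty rowid range, so what usually dominates the runtime is re-hashing the two big tables for every chunk rather than the filter itself. One experiment worth trying (the hints are standard; the indexes are assumptions this test case does not create) is to let the chunked DRIVER rows drive indexed lookups:

<code>-- a hedged experiment, assuming indexes on t1(object_id) and t2(object_id)
insert into t3 (object_id, data_object_id)
select /*+ leading(t3 t2 t1) use_nl(t2) use_nl(t1) */
       t1.object_id, t2.data_object_id
from   t1, t2, driver t3
where  t1.object_id = t2.object_id
and    t2.object_id = t3.object_id
and    t3.rowid between :start_id and :end_id;</code>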
Categories: DBA Blogs
