Feed aggregator

AWS: How to delete a static website via aws cli

Dietrich Schroff - Thu, 2018-11-22 14:22
After the creation of a static website in S3 via cli, now the deletion:

First try was:

$ aws s3api delete-bucket --bucket my.webtest



An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty
Ok. This will not work. First get the objects:

$ aws s3api list-objects --bucket my.webtest

{
    "Contents": [
        {
            "LastModified": "2018-11-17T19:18:53.000Z",
            "ETag": "\"e56b419be959169c15260cd721735e47\"",
            "StorageClass": "STANDARD",
            "Key": "index.html",
            "Owner": {
                "DisplayName": "d.schroff",
                "ID": "6c301aed95f62fb17532da6c93209c898a1e07051e520c6bb7fab30769cc495c"
            },
            "Size": 568
        }
    ]
}
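
Before the bucket can be dropped it has to be emptied. For the single index.html object from the listing above, the deletion could look like this (just a sketch; with more objects you would loop over the Contents list or use aws s3 rm with --recursive):

$ aws s3api delete-object --bucket my.webtest --key index.html
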
and the bucket can be deleted:
$ aws s3api delete-bucket --bucket my.webtest
A crosscheck via web console:


And the website is not there anymore:
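
If you want to verify this from the command line as well, listing the bucket names is a quick crosscheck (a sketch; the deleted bucket should no longer show up):

$ aws s3api list-buckets --query "Buckets[].Name"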

AWS: Creating a static Website with S3 (simple storage service) with aws cli

Dietrich Schroff - 7 hours 25 min ago
There is a nice tutorial on how to create a static webpage using Amazon S3:
https://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html

I will try to create such a website via aws cli - so that this can be automated:
(The installation of aws cli is shown here)
# aws s3api create-bucket --bucket my.webtest --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

{
    "Location": "http://my.webtest.s3.amazonaws.com/"
}

Then create a website.json file:

$ cat website.json 

{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "error.html"
    }
}

and run

$ aws s3api put-bucket-website --bucket my.webtest --website-configuration file://website.json

After that the web console should show the static website hosting configuration with the index and error documents.
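
This can also be verified via the cli (a quick sketch; the call should return exactly the configuration that was just uploaded):

$ aws s3api get-bucket-website --bucket my.webtest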

Next step is to create the file policy.json:

$ cat policy.json 

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForGetBucketObjects",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my.webtest/*"]
    }]
}

and run

aws s3api put-bucket-policy --bucket my.webtest --policy file://policy.json

You can check via:
$ aws s3api get-bucket-policy --bucket my.webtest

{
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"PublicReadForGetBucketObjects\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::my.webtest/*\"}]}"
}
Via the web console you can check this as well. Then upload your html page:

$ aws s3 cp TestWebPage.html s3://my.webtest/index.html

upload: ./TestWebPage.html to s3://my.webtest/index.html  
 And here we go:
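
A quick check from the command line (assuming the usual website endpoint format for eu-west-1; the exact hostname is also shown in the static website hosting settings):

$ curl http://my.webtest.s3-website-eu-west-1.amazonaws.com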


That was easy. Ok - DNS resolution via Amazon Route 53 is still missing, but with these commands you are able to deploy a static website without clicking around...




Migration from 11g to 12c changes execution plan (Adaptive plan)

Tom Kyte - Fri, 2018-11-16 15:06
Hi, we are working on a PeopleSoft migration and a database migration too. We're migrating Oracle 11.2.0.3 to 12.2.0.1, and we have an issue with a PeopleSoft query. The query on the current database environment (11.2.0.3) has an execution plan with minimal cost ...
Categories: DBA Blogs

pushing predicate into union-all view

Tom Kyte - Fri, 2018-11-16 15:06
Hi, LiveSQL link: https://livesql.oracle.com/apex/livesql/s/hjml6z0yg45qznob5sebg53vk I have the big table with an index on ID: create table tst1 as select level id, mod(level, 10) code from dual connect by level < 1000000; create...
Categories: DBA Blogs

EBS Releases 12.1 and 12.2 certified with SLES 12

Steven Chan - Fri, 2018-11-16 12:08

I am pleased to announce that Oracle E-Business Suite Releases 12.1.3 and 12.2.6 (or higher) are now certified with SUSE Linux Enterprise Server (SLES) 12 on x86-64.

Installations of E-Business Suite on this operating system require specific patches to the latest startCD prior to installing, followed by the application of the 12.1.3 RUP or the 12.2.6 RUP (or higher) for EBS 12.1 and 12.2 respectively. Cloning of existing EBS 12.1.3 or 12.2 environments to SLES 12 is also certified using the standard Rapid Clone process.

There are also requirements to upgrade technology components such as the Oracle Database (to 11.2.0.4 or 12.1.0.2) and Fusion Middleware components as necessary. All requirements, known issues, patches needed, etc. are noted in the Installation and Upgrade Notes (IUN) below and must be reviewed and implemented.

For more information on requirements, please review the following documents:

 

Categories: APPS Blogs

Nvarchar to Varchar2 conversion (UTF8 to AL32UTF8)

Tom Kyte - Thu, 2018-11-15 20:46
We are planning to convert all NVarchar fields to Varchar2 fields as we're going to change our character set and since Oracle recommends the AL32UTF8 character set encoding. My question: is it 100% sure that all characters from Nvarchar (UTF8) can be conv...
Categories: DBA Blogs

sql plan management - difference in defining parameters at system and session level

Tom Kyte - Thu, 2018-11-15 20:46
Hi Tom, I am very new to performance tuning. there's something that I am unclear about sql plan management. which one is faster - 1. setting OPTIMIZER_CAPTURE_SQL_PLAN_BASELINE to TRUE at session level (inside function body) and OPTIMIZE...
Categories: DBA Blogs

Moving Oracle DB from one server to another

Tom Kyte - Thu, 2018-11-15 20:46
Hi, I am having an Oracle 11g database in an AIX linux server. I am planning to move this to a different server with same OS. I will be using same version of Oracle database in target DB as well. I have multiple schema in source database and in t...
Categories: DBA Blogs

SQL Slowdown ? A short list of potential reasons

Hemant K Chitale - Thu, 2018-11-15 20:14
Jonathan Lewis has published a short list of potential reasons why you might see a slowdown in SQL execution.  With newer releases 12.2, 18c and 19c, the list may have to be expanded.



Categories: DBA Blogs

AWS: Billing - how to delete a route 53

Dietrich Schroff - Thu, 2018-11-15 14:58
After playing around with AWS containers I took a look at my billing page:

So let's delete this service.
But after removing the ECS cluster and the task definition, an entry at Route 53 still remains:



The resource hostedzone/Z3JCO1N1BVHCKX can only be managed through servicediscovery.amazonaws.com (arn:aws:servicediscovery:eu-west-1:803404058350:namespace/ns-so7m3qbqbatzmlgn)


But the solution is the aws cli (for installation take a look here):
schroff@zerberus:~/AWS$ aws servicediscovery list-services
{
    "Services": [
        {
            "Id": "srv-46ffbkbwzupvblsb",
            "Arn": "arn:aws:servicediscovery:eu-west-1:803404058350:service/srv-46ffbkbwzupvblsb",
            "Name": "my-nginx-service"
        },
        {
            "Id": "srv-nicoewsbpufb3tlk",
            "Arn": "arn:aws:servicediscovery:eu-west-1:803404058350:service/srv-nicoewsbpufb3tlk",
            "Name": "my-ecs-service-on-fargate"
        }
    ]
}

schroff@zerberus:~/AWS$ aws servicediscovery delete-service --id srv-46ffbkbwzupvblsb
schroff@zerberus:~/AWS$ aws servicediscovery delete-service --id srv-nicoewsbpufb3tlk


and

schroff@zerberus:~/AWS$ aws servicediscovery list-namespaces

{
    "Namespaces": [
        {
            "Type": "DNS_PRIVATE",
            "Id": "ns-so7m3qbqbatzmlgn",
            "Arn": "arn:aws:servicediscovery:eu-west-1:803404058350:namespace/ns-so7m3qbqbatzmlgn",
            "Name": "local"
        }
    ]
}
Take the id and delete this namespace:
schroff@zerberus:~/AWS$ aws servicediscovery delete-namespace --id=ns-so7m3qbqbatzmlgn

{
    "OperationId": "4kdit33kf7kfuawscpfgifcrdktynen5-jog7l6h7"
}
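
The returned OperationId can be used to check whether the delete has actually completed (a small sketch, using the id from the output above):

schroff@zerberus:~/AWS$ aws servicediscovery get-operation --operation-id 4kdit33kf7kfuawscpfgifcrdktynen5-jog7l6h7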

And the hosted zone was gone:

Oracle JET UI on Top of Oracle ADF With Visual Builder

Shay Shmeltzer - Thu, 2018-11-15 13:22

At Oracle OpenWorld this year I did a session about the future of Oracle ADF, and one of the demos I did there was showing the powerful combination of Oracle ADF backend with a new Oracle JET UI layer and how Oracle Visual Builder makes this integration very simple.

While we have many happy Oracle ADF customers, we do hear from some of them about new UI requirements that might justify thinking about adopting a new UI architecture for some modules. These types of requirements align with an industry trend towards adopting a more client-centric UI architecture that leverages the power of JavaScript on the client. While ADF (which is more of a server-centric architecture) does let you leverage JavaScript on the client and provides hook points for that in ADF Faces, some customers prefer a more "puristic" approach for new user interfaces that they are planning to build. Oracle's solution for such a UI architecture is based on Oracle JET - an open source set of libraries we developed and share with the community at http://oraclejet.org.

Oracle Visual Builder provides developers with a simpler approach to building Oracle JET based UIs - for both web and on-device mobile applications. Focusing on a visual UI design approach, it drastically reduces the amount of manual coding you need to do to create JET based UIs.

UIs that you build in Visual Builder connect at the back to REST services, and this is where you can leverage Oracle ADF. In version 12 of JDeveloper we introduced the ability to publish ADF Business Components as REST services through a simple wizard. Note that out-of-the-box you get a very powerful set of services that support things like query by example, pagination, sorting and more. If you haven't explored this functionality already, check out the videos showing how to do it here, and this video covering cloud hosting these services.

Once you have this ADF based REST services layer - you'll be glad to hear that in Visual Builder we have specific support to simplify consuming these REST services. Specifically - we understand the meta-data descriptions that these REST services provide and are then able to create the service and endpoint mappings for you.

ADF Describe Dialog in Service Connection

You leverage our "Service from specification" dialog to add your ADF services to your Visual Builder app - and from that point on, it's quite simple to build new JET UIs accessing the data.

In the video below I show how simple it is to build a JET-based on-device mobile app that leverage a set of REST services that were created from Oracle JDeveloper 12. Check it out:

Categories: Development

num_index_keys

Jonathan Lewis - Thu, 2018-11-15 07:13

The title is the name of an Oracle hint that came into existence in Oracle 10.2.0.3 and made an appearance recently in a question on the rarely used “My Oracle Support” Community forum (you’ll need a MOS account to be able to read the original). I wouldn’t have found it but the author also emailed me the link asking if I could take a look at it.  (If you want to ask me for help – without paying me, that is – then posting a public question in the Oracle (ODC) General Database or SQL forums and emailing me a private link is the strategy most likely to get an answer, by the way.)

The question was about a very simple query using a straightforward index – with a quirky change of plan after upgrading from 10.2.0.3 to 12.2.0.1. Setting the optimizer_features_enable to ‘10.2.0.3’ in the 12.2.0.1 system re-introduced the 10g execution plan. Here’s the query:


SELECT t1.*
   FROM   DW1.t1
  WHERE   t1.C1 = '0001' 
    AND   t1.C2 IN ('P', 'F', 'C')
    AND   t1.C3 IN (
                    '18110034450001',
                    '18110034450101',
                    '18110034450201',
                    '18110034450301',
                    '18110034450401',
                    '18110034450501'
          );
 

Information supplied: t1 holds about 500 million rows at roughly 20 rows per block, the primary key index is (c1, c2, c3, c4), there are just a few values for each of c1, c2 and c4, while c3 is “nearly unique” (which, for clarity, was expanded to “the number of distinct values of c3 is virtually the same as the number of rows in the table”).

At the moment we don’t have any information about histograms and we don’t known whether or not “nearly unique” might still allow a few values of c3 to have a large number of duplicates, so that’s something we might want to follow up on later.

Here are the execution plans – the fast one (from 10g) first, then the slow (12c) plan – and you should look carefully at the predicate section of the two plans:


10g (pulled from memory with rowsource execution statistics enabled)
--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |      1 |        |      6 |00:00:00.01 |      58 |      5 |
|   1 |  INLIST ITERATOR             |                  |      1 |        |      6 |00:00:00.01 |      58 |      5 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1               |     18 |      5 |      6 |00:00:00.01 |      58 |      5 |
|*  3 |    INDEX RANGE SCAN          | PK_T1            |     18 |      5 |      6 |00:00:00.01 |      52 |      4 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR
              "T1"."C2"='P')) AND (("C3"='18110034450001' OR "C3"='18110034450101' OR
              "C3"='18110034450201' OR "C3"='18110034450301' OR "C3"='18110034450401' OR
              "C3"='18110034450501')))

 

12c (from explain plan)
---------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                  |     1 |   359 |     7   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR                     |                  |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1               |     1 |   359 |     7   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | PK_T1            |     1 |       |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND ("T1"."C2"='C' OR "T1"."C2"='F' OR
              "T1"."C2"='P'))
       filter("C3"='18110034450001' OR "C3"='18110034450101' OR
              "C3"='18110034450201' OR "C3"='18110034450301' OR
              "C3"='18110034450401' OR "C3"='18110034450501')
  

When comparing plans it's better, of course, to present the same sources from the two systems; it's not entirely helpful to have the generated plan from explain plan in one version and a run-time plan with stats in the other – given the choice I'd like to see the run-time from both. Despite this, I felt fairly confident that the prediction would match the run-time for 12c and that I could at least guess the "starts" figure for 12c.

The important thing to notice is the way that the access predicate in 10g has split into an access predicate followed by a filter predicate in 12c. So 12c is going to iterate three times (once for each of the values 'C', 'F', 'P') and then walk a potentially huge linked list of index leaf blocks looking for 6 values of c3, while 10g is going to probe the index 18 times (3 combinations of c2 x 6 combinations of c3) to find "nearly unique" rows, which means probably one leaf block per probe.

The 12c plan was taking minutes to run, the 10g plan was taking less than a second. The difference in execution time was probably the effect of the 12c plan ranging through (literally) thousands of index leaf blocks.

There are many bugs and anomalies relating to in-list iteration and index range scans and cardinality calculations – here’s a quick sample of v$system_fix_control in 12.2.0.1:


select optimizer_feature_enable ofe, sql_feature, bugno, description
from v$system_fix_control
where
	optimizer_feature_enable between '10.2.0.4' and '12.2.0.1'
and	(   sql_feature like '%CBO%'
	 or sql_feature like '%CARDINALITY%'
	)
and	(    lower(description) like '%list%'
	 or  lower(description) like '%iterat%'
	 or  lower(description) like '%multi%col%'
	)
order by optimizer_feature_enable, sql_feature, bugno
;

OFE        SQL_FEATURE                      BUGNO DESCRIPTION
---------- --------------------------- ---------- ----------------------------------------------------------------
10.2.0.4   QKSFM_CBO_5259048              5259048 undo unused inlist
           QKSFM_CBO_5634346              5634346 Relax equality operator restrictions for multicolumn inlists

10.2.0.5   QKSFM_CBO_7148689              7148689 Allow fix of bug 2218788 for in-list predicates

11.1.0.6   QKSFM_CBO_5139520              5139520 kkoDMcos: For PWJ on list dimension, use part/subpart bits

11.2.0.1   QKSFM_CBO_6818410              6818410 eliminate redundant inlist predicates

11.2.0.2   QKSFM_CBO_9069046              9069046 amend histogram column tracking for multicolumn stats

11.2.0.3   QKSFM_CARDINALITY_11876260    11876260 use index filter inlists with extended statistics
           QKSFM_CBO_10134677            10134677 No selectivity for transitive inlist predicate from equijoin
           QKSFM_CBO_11834739            11834739 adjust NDV for list partition key column after pruning
           QKSFM_CBO_11853331            11853331 amend index cost compare with inlists as filters
           QKSFM_CBO_12591120            12591120 check inlist out-of-range values with extended statistics

11.2.0.4   QKSFM_CARDINALITY_12828479    12828479 use dynamic sampling cardinality for multi-column join key check
           QKSFM_CARDINALITY_12864791    12864791 adjust for NULLs once for multiple inequalities on nullable colu
           QKSFM_CARDINALITY_13362020    13362020 fix selectivity for skip scan filter with multi column stats
           QKSFM_CARDINALITY_14723910    14723910 limit multi column group selectivity due to NDV of inlist column
           QKSFM_CARDINALITY_6873091      6873091 trim histograms based on in-list predicates
           QKSFM_CBO_13850256            13850256 correct estimates for transitive inlist predicate with equijoin

12.2.0.1   QKSFM_CARDINALITY_19847091    19847091 selectivity caching for inlists
           QKSFM_CARDINALITY_22533539    22533539 multi-column join sanity checks for table functions
           QKSFM_CARDINALITY_23019286    23019286 Fix cdn estimation with multi column stats on fixed data types
           QKSFM_CARDINALITY_23102649    23102649 correction to inlist element counting with constant expressions
           QKSFM_CBO_17973658            17973658 allow partition pruning due to multi-inlist iterator
           QKSFM_CBO_21057343            21057343 order predicate list
           QKSFM_CBO_22272439            22272439 correction to inlist element counting with bind variables

There are also a number of system parameters relating to inlists that are new (or have changed values) in 12.2.0.1 when compared with 10.2.0.3 – but I’m not going to go into those right now.

I was sufficiently curious about this anomaly that I emailed the OP to say I would be happy to take a look at the 10053 trace files for the query – the files probably weren’t going to be very large given that it was only a single table query – but in the end it turned out that I solved the problem before he’d had time to email them. (Warning – don’t email me a 10053 file on spec; if I want one I’ll ask for it.)

Based on the description I created an initial model of the problem – it took about 10 minutes to code:


rem     Tested on 12.2.0.1, 18.3.0.1

drop table t1 purge;

create table t1 (
	c1 varchar2(4) not null,
	c2 varchar2(1) not null,
	c3 varchar2(15) not null,
	c4 varchar2(4)  not null,
	v1 varchar2(250)
)
;

insert into t1
with g as (
	select rownum id 
	from dual
	connect by level <= 1e4 -- > hint to avoid wordpress format issue
)
select
	'0001',
	chr(65 + mod(rownum,11)),
	'18110034'||lpad(1+100*rownum,7,'0'),
	lpad(mod(rownum,9),4,'0'),
	rpad('x',250,'x')
from
	g,g
where
        rownum <= 1e5 -- > hint to avoid wordpress format issue
;


create unique index t1_i1 on t1(c1, c2, c3, c4);

begin
        dbms_stats.gather_table_stats(
                null,
                't1',
                method_opt => 'for all columns size 1'
        );
end;
/

alter session set statistics_level = all;
set serveroutput off

prompt	==========================
prompt	Default optimizer features
prompt	==========================

select
        /*+ optimizer_features_enable('12.2.0.1') */
	t1.*
FROM	t1
WHERE
	t1.c1 = '0001' 
AND	t1.c2 in ('H', 'F', 'C')
AND	t1.c3 in (
		'18110034450001',
		'18110034450101',
		'18110034450201',
		'18110034450301',
		'18110034450401',
		'18110034450501'
	)
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

select 
        /*+ optimizer_features_enable('10.2.0.3') */
	t1.*
FROM	t1
WHERE
	t1.c1 = '0001' 
AND	t1.c2 in ('H', 'F', 'C')
AND	t1.c3 in (
		'18110034450001',
		'18110034450101',
		'18110034450201',
		'18110034450301',
		'18110034450401',
		'18110034450501'
	)
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

alter session set statistics_level = all;
set serveroutput off

The two queries produced the same plan – regardless of the setting for optimizer_features_enable – it was the plan originally used by the OP’s 10g setting:


-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |    20 (100)|      0 |00:00:00.01 |      35 |
|   1 |  INLIST ITERATOR             |       |      1 |        |            |      0 |00:00:00.01 |      35 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     18 |      2 |    20   (0)|      0 |00:00:00.01 |      35 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     18 |      2 |    19   (0)|      0 |00:00:00.01 |      35 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR "T1"."C2"='H')) AND
              (("T1"."C3"='18110034450001' OR "T1"."C3"='18110034450101' OR "T1"."C3"='18110034450201' OR
              "T1"."C3"='18110034450301' OR "T1"."C3"='18110034450401' OR "T1"."C3"='18110034450501')))

There was one important difference between the 10g and the 12c plans – in 10g the cost of the table access (hence the cost of the total query) was 20; in 12c it jumped to 28 – maybe there’s a change in the arithmetic for costing the iterator, and maybe that’s sufficient to cause a problem.

Before going further it’s worth checking what the costs would look like (and, indeed, if the plan is possible in both versions) if we force Oracle into the “bad” plan. That’s where we finally get to the hint in the title of this piece. If I add the hint /*+ num_index_keys(t1 t1_i1 2) */ what’s going to happen ? (Technically I’ve included a hint to use the index, and specified the query block name to make sure Oracle doesn’t decide to switch to a tablescan):


select
        /*+
            optimizer_features_enable('12.2.0.1')
            index_rs_asc(@sel$1 t1@sel$1 (t1.c1 t1.c2 t1.c3 t1.c4))
            num_index_keys(@sel$1 t1@sel$1 t1_i1 2)
        */
        t1.*
FROM        t1
WHERE
        t1.c1 = '0001'
AND        t1.c2 in ('H', 'F', 'C')
AND        t1.c3 in (
                '18110034450001',
                '18110034450101',
                '18110034450201',
                '18110034450301',
                '18110034450401',
                '18110034450501'
        )
;

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |   150 (100)|      0 |00:00:00.01 |     154 |      1 |
|   1 |  INLIST ITERATOR                     |       |      1 |        |            |      0 |00:00:00.01 |     154 |      1 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      3 |     18 |   150   (2)|      0 |00:00:00.01 |     154 |      1 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |      3 |     18 |   142   (3)|      0 |00:00:00.01 |     154 |      1 |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR "T1"."C2"='H')))
       filter(("T1"."C3"='18110034450001' OR "T1"."C3"='18110034450101' OR "T1"."C3"='18110034450201' OR
              "T1"."C3"='18110034450301' OR "T1"."C3"='18110034450401' OR "T1"."C3"='18110034450501'))

This was the plan from 12.2.0.1 – and again the plan for 10.2.0.3 was identical except for costs which became 140 for the index range scan and 141 for the table access. At first sight it looks like 10g may be using the total selectivity of the entire query as the scaling factor for the index clustering_factor to find the table cost while 12c uses the cost of accessing the table for one iteration (rounding up) before multiplying by the number of iterations.

Having observed this detail I thought I’d do a quick test of what happened by default if I requested 145 distinct values of c3. Both versions defaulted to the access/filter path rather than the pure access path – but again there was a difference in costs. The 10g index cost was 140 with a table access cost of 158, while 12c had an index cost of 179 and a table cost of 372. So both versions switch plans at some point – do they switch at the same point ? Reader, I could not resist temptation, so I ran a test loop. With my data set the 12c version switched paths at 61 values in the in-list and 10g switched at 53 values –

Conclusion: there’s been a change in the selectivity calculations for the use of in-list iterators, which leads to a change in costs, which can lead to a change in plans; the OP was just unlucky with his data set and stats. Possibly there’s something about his data or stats that makes the switch appear with a much smaller in-list than mine.

Footnote:

When I responded to the thread on MOSC with the suggestion that the problem was in part due to statistics and might be affected by out of date stats (or a histogram on the (low-frequency) c2 column) the OP noted that stats hadn’t been gathered since some time in August – and found that the 12c path changed to the efficient (10g) one after re-gathering stats on the table.

 

Oracle Accelerates Data Insights for Retailers with Oracle Digital Assistant

Oracle Press Releases - Thu, 2018-11-15 07:00
Press Release
Oracle Accelerates Data Insights for Retailers with Oracle Digital Assistant

Integration of Core Retail Technology with Conversational AI Powers Targeted and Contextual Offers that Engage Customers and Drive Results

Redwood Shores, Calif.—Nov 15, 2018

Enabling retailers to build personalized customer experiences as well as voice-enabled assistants to help employees work smarter and more productively, Oracle Retail solutions are now integrated with Oracle Digital Assistant. Together, these offerings empower retailers to find answers to critical business questions such as, "What's the current margin on our new BOGO offer for trendsetters?" faster than ever before.

Announced at Oracle OpenWorld 2018, Oracle Digital Assistant leverages artificial intelligence (AI) to understand context, derive intent, and identify and learn user behaviors and patterns to automate routine tasks proactively on behalf of the user. By integrating the technology with Oracle Retail Offer Optimization Cloud Service, analysts can easily streamline location-specific sales forecasting, promotional deployment and performance, approval automation and target prioritization.

To see these technologies in action, visit: https://youtu.be/UcDTQa7Ff-w.

"Most retailers think about conversational AI in the context of the store or e-commerce, however, retailers can now apply conversational AI to their core operations', via voice or text, to accelerate productivity and optimize processes," said Mike Webster, Senior Vice President and General Manager, Oracle Retail. "Smart digital interactions are an integral part of our everyday life as we query Alexa, Google Home and Siri for recommendations. This latest integration between Oracle Digital Assistant and Oracle Retail Offer Optimization Cloud Service brings the power and simplicity of voice to retail operations, speeding time to insight and action."

Built on Oracle Cloud Infrastructure, Oracle Digital Assistant goes well beyond standard chatbots available today that provide simple, single skilled, linear responses. By applying AI for natural language processing (NLP), natural language understanding (NLU) and machine learning (ML), Oracle is uniquely positioned to leverage its breadth and depth in enterprise applications to offer a digital assistant that can truly span the enterprise.

"Going forward digital assistants will transform how merchants, planners, and marketers collaborate, engage their company's information assets, and how they work," said Greg Girard, program director of intelligent product merchandising and marketing, IDC Retail Insights. "As digital assistants become more conversationally and analytically skillful and more aware of their users' intent and context we'll see more incisive decisions, made quicker, to deliver better business outcomes. Retailers should bring digital assistants into their digital transformation planning assumptions now."

In April, Oracle launched the next generation of promotion, markdown and offer optimization capabilities as a cloud service with the launch of Oracle Retail Offer Optimization Cloud Service. With these new updates retailers can analyze promotion and pricing decisions for the entire product lifecycle while providing consumers with targeted and contextual offers.

Contact Info
Matt Torres
Oracle
+1.415.595.1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

[BLOG] 15 Must Know Things on Oracle EBS (R12) on Cloud for Beginners

Online Apps DBA - Thu, 2018-11-15 03:32

Are you a Beginner who wants to move ahead in the journey towards learning EBS on Cloud? If yes, then visit: https://k21academy.com/ebscloud16 to learn about: ✔Various Cloud Service Models ✔The 2 main tiers of Oracle EBS (R12) ✔The ways to deploy EBS on Cloud & much more…

The post [BLOG] 15 Must Know Things on Oracle EBS (R12) on Cloud for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Log DML, DDL and DCL user activity

Tom Kyte - Thu, 2018-11-15 02:26
Hello, Ask TOM Team. I want to know if there's a straightforward (not using triggers or things like that lol) way to log DML, DDL and DCL user activity on specific objects (12c). I do not know if Database Vault can help me with that. Any Do...
Categories: DBA Blogs

DB link is not working between 2 databases

Tom Kyte - Thu, 2018-11-15 02:26
Hi, We have 2 databases DB A and DB B. we have created db link between 2 dbs as a2b; in DB A, we have below table_A and data, create table table_a (emp_id number, emp_name varchar2(30)) / insert into table_a values (1,'Test1') / ...
Categories: DBA Blogs

ORA-64610: bad depth indicator with Utl_Call_Stack

Tom Kyte - Thu, 2018-11-15 02:26
Hi, I have a database in Oracle 12.2.0. There I have deployed a PL/SQL logic which is called from an update trigger. From this logic, the following code segment is called to get the formatted call stack. FUNCTION Format_Stack___ ...
Categories: DBA Blogs

Super Lock an Oracle Database

Pete Finnigan - Thu, 2018-11-15 02:26
I started this blog post a few weeks ago and kept adding to it from time to time, but I have been so incredibly busy helping people secure data in their Oracle databases that it has taken a long time to....[Read More]

Posted by Pete On 14/11/18 At 02:20 PM

Categories: Security Blogs

How to Launch PeopleSoft Cloud Manager Using a Pre-built Image for Oracle Cloud Infrastructure

PeopleSoft Technology Blog - Wed, 2018-11-14 23:58

With the latest updates to Oracle Cloud Infrastructure, you now have a cool new way of launching instances.  This new feature makes it easier than ever before to launch a PeopleSoft Cloud Manager instance.  We received your feedback on how downloading and uploading Cloud Manager was taking too much time.  I’m very happy to say that we’ve heard you, and now we have delivered a surprisingly simple solution.

Before creating the Cloud Manager instance, you should have set up your networking (VCN and subnet) and configured all required security rules.

To launch a new Cloud Manager instance, navigate to your tenancy and go to the dashboard, as shown here, or the Instances page.  From the dashboard, click the option Create a VM instance. 

You will now see the new pages to create an instance as shown below.  Enter the name for your instance and select the availability domain (AD) in which you want to create your Cloud Manager instance. 

The next step is to select the Cloud Manager image. By default, the Oracle Linux 7.5 image is chosen.  Click Change Image Source to search for the Cloud Manager image.

On the Browse All Images page, select the Oracle Images tab.  Here you’ll find the latest Cloud Manager image for OCI.  Select the image and accept the terms and restrictions after reading them. Click Select Image to use the chosen image.

Choose the Virtual Machine instance type and select a VM shape of your choice. 

Select a compartment in which the Cloud Manager instance will be created.  Select a VCN and a subnet for the Cloud Manager instance configuration. Click Create to deploy the instance.

There you go!  Creating a Cloud Manager instance is now so easy. 

Note that you still need to download the Oracle Linux Image for Cloud Manager from My Oracle Support, upload it to object storage and import it as a custom image.  After setting up the Cloud Manager instance, continue from the Downloading the Oracle Linux Image and Uploading to Object Storage step in the install guide here.

After you have the Oracle Linux image, and the Cloud Manager instance is provisioned and running, SSH into the instance and follow the instructions in the install guide to run the Cloud Manager Instance Configuration Wizard.
