Feed aggregator

Docker Birmingham – November 2019

Tim Hall - Thu, 2019-11-07 01:58

Yesterday evening I went to the Docker Birmingham meetup, sponsored by Black Cat Technology Solutions.

This event was a single longer session by Matt Todd called “Make Data Science Great Again (Part 2)”. I didn’t see part 1, but he gave a summary, so that wasn’t too much of a problem.

Matt started off by introducing the scientific method and discussed reducing variables when testing, so you know the impact of a change. The suggestion being that development and data science should be the same. What better way to reduce variables than to package up a data science lab to make sure everyone is working on the same thing, so there is no/less variability between researchers, and they can focus on their work, not piecing together the kit.

He then went on to discuss Cloud Native Application Bundles (CNAB), and how they can be used to more reliably package multi-container applications, guaranteeing dependencies to a level greater than that possible by using Docker Compose alone. As an example he demoed his Digital Scientists Lab, which is a CNAB bundle containing a bunch of common kit used by data scientists (Jupyter, Spark, Flink, Kafka, RabbitMQ, Nifi, Elastic Stack etc.), which he could in theory give to several people to run experiments. It’s just his playground, but it gives you an idea of what’s possible. Using Nifi to link stuff together looked cool!
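For anyone who hasn’t met CNAB before, a bundle is described by a bundle.json manifest. Here’s a minimal sketch of one (the name, image, and field set are illustrative assumptions, not Matt’s actual bundle):

```json
{
  "schemaVersion": "v1.0.0",
  "name": "digital-scientists-lab",
  "version": "0.1.0",
  "description": "Sketch of a multi-container data-science lab bundle",
  "invocationImages": [
    { "imageType": "docker", "image": "example/lab-installer:0.1.0" }
  ]
}
```

The invocation image is the container that knows how to install, upgrade, and uninstall everything else in the bundle, which is where the extra reliability over plain Docker Compose comes from.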

I started to make a few mental connections when he discussed the relationship to experimental data, because I look after the infrastructure for a research publishing system, and being able to keep not only the published research, but also the experimental data and potentially a way to reproduce the research findings, is pretty important. It keeps those academics honest, if you know what I mean. It’s funny how a few links to something you know a little about can make other stuff start falling into place.

It was all a bit mind-blowing, but in a good way. I’m still only scratching the surface of this stuff, but it’s really good to see what else is going on in this space. I’ve added a few more things to my list of things to play around with.

It’s a couple of months until the next event, but there’s a CNCF event next month, so watch out for that!

Thanks to the folks at Black Cat Technology Solutions for sponsoring and organising the event, and to Matt Todd for doing a great session. See you soon!




Free Oracle Cloud: 15. The request could not be mapped to any database

Dimitri Gielis - Wed, 2019-11-06 13:25
This post is the last post of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

At some point you might face the following message: "The request could not be mapped to any database":

Oracle is monitoring usage on your Always Free Account and whenever it finds there's no activity for 7 days, it will stop your database automatically. It will preserve the data in the database, but it won't be accessible anymore.

To fix the issue, log in to your Oracle Cloud Account and go to your Autonomous Database:

You will see the database is in a stopped state. Click the Start button:

The state will change to Starting...

And after a minute it becomes available again:

The above behavior is written in the end-user documentation:
Inactivity Monitoring and Database Stoppage
Persistently inactive Always Free Autonomous Databases are detected and handled as follows:
  • After being inactive for 7 days, the database will be stopped automatically, preserving its stored data. Inactivity measurements leading up to 7 days are based on database connections. Successfully making a SQL*Net or HTTPS connection resets these measurements to zero.
  • A database that is automatically or manually stopped and stays inactive for 90 days, cumulative, may be reclaimed and permanently deleted. Inactivity measurements leading up to 90 days are based on the database being inactive or in the stopped state. Starting a stopped database resets these measurements to zero.
    Start an Always Free Autonomous Database by clicking the Start button on the Oracle Cloud Infrastructure console. Start a stopped Always Free Autonomous Database before 90 days to avoid losing access to its data.
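Given those rules, a common workaround (a sketch only, not something the documentation prescribes) is to have an external client, e.g. SQLcl or SQL*Plus driven from a daily cron job, make one trivial SQL*Net connection and run nothing more than:

```sql
-- One successful connection plus a trivial query per day resets the
-- 7-day inactivity counter. (The 90-day counter for a *stopped* database
-- is different: that one only resets when you start the database.)
select sysdate from dual;
```
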

But this week there were some people complaining that although they had activity, their database was stopped anyway. I witnessed the same behavior in my account, so I reached out to Oracle and they confirmed their code to identify inactivity is not properly accounting for APEX/ORDS usage. They are already working on a fix, which they hope to apply very soon. I will update this post when I get confirmation the fix is in the data centers.

Categories: Development

Table Space

Jonathan Lewis - Wed, 2019-11-06 11:59

There’s a request on the Oracle Developer Forum for assistance to write a report that shows the total space allocation for a table – including its indexes and LOB columns. There is no requirement to allow for partitioned tables, but it’s not entirely clear whether the code should allow for index-organized tables and clustered tables, and there’s no comment about nested tables.

The OP has made an attempt to write a suitable query, but the query joins dba_tables to dba_indexes, then to dba_lobs, then three times to dba_segments (all the joins being outer joins) before aggregating on table name. Unsurprisingly this produces the wrong results because (for example) if a table has two indexes the join from dba_tables to dba_indexes will double the bytes reported for the table. As Andrew Sayer points out in the thread, it would be better to aggregate on each of the separate component types before joining.

This introduces an important, generic, principle to writing code.

  • If it’s difficult to write a single statement, can you break the task down into separate components that are easier to handle?
  • If you can express the problem as a small set of simpler components, can you then combine the components?
  • If this approach works, is any loss of efficiency balanced by a gain in clarity and maintainability?
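As a sketch of those first two bullets (a hypothetical mini-report, not the OP’s query), you could aggregate table bytes and index bytes separately, then join the two one-row-per-table results so nothing gets double-counted:

```sql
select
        t.table_name,
        t.tab_bytes + nvl(i.ind_bytes,0)        total_bytes
from    (
        select  segment_name table_name, sum(bytes) tab_bytes
        from    dba_segments
        where   owner = 'TEST_USER'
        and     segment_type = 'TABLE'
        group by segment_name
        ) t
left outer join (
        select  ind.table_name, sum(seg.bytes) ind_bytes
        from    dba_indexes  ind
        join    dba_segments seg
        on      seg.owner        = ind.owner
        and     seg.segment_name = ind.index_name
        where   ind.table_owner = 'TEST_USER'
        group by ind.table_name
        ) i
on      i.table_name = t.table_name
;
```

Because each inline view is already down to one row per table before the join, a table with two indexes can no longer double its own bytes.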

In the case of the OP, this isn’t a query that’s going to be running every few seconds – maybe it will be once per day, maybe three or four times per day – so there’s no need to be worried about making it as efficient as possible. Let’s go for simplicity.

Here’s a query that reports the space allocated for a simple heap table. It references dba_tables and dba_segments, so has to be run by a user with DBA privileges, and at the moment it’s restricted to reporting a single user.

Wait – before I write the query I ought to create a testbed to see if the query is working. So let’s take an empty schema and create a few objects. Let’s create

  • a simple heap table with two indexes and two LOB columns.
  • an Index Organized Table (IOT) with an overflow and two LOB columns.
  • a table with two columns that are a nested table type

Here’s some suitable code:

rem     Script:         sum_table_space.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem     Last tested 

prompt  ============================================================
prompt  Simple heap table - copy of all_objects - then add two CLOBs
prompt  ============================================================

create table test_user.t1 as select * from all_objects;

alter table test_user.t1 add (c1 clob) lob (c1) store as basicfile t1_lob1 (disable storage in row);
alter table test_user.t1 add (c2 clob) lob (c2) store as basicfile t1_lob2 (disable storage in row);

update test_user.t1 set c1 = 'x', c2 = 'x' where rownum <= 125;

create index test_user.t1_i1 on test_user.t1(object_name, object_type, object_id);
create index test_user.t1_i2 on test_user.t1(object_type, object_name, object_id);

prompt  =========================================================
prompt  Simple IOT table - with a couple of CLOBs in the overflow
prompt  =========================================================

create table test_user.iot1 (
        id              number          not null,
        ind_pad         varchar2(20)    not null,
        n2              number          not null,
        n3              number          not null,
        padding         varchar2(50),
        ct1             clob,
        ct2             clob,
        constraint iot1_pk primary key (id, ind_pad)
)
organization index
        including n3
        overflow
lob(ct1) store as basicfile iot_lob1(
        disable storage in row
)
lob(ct2) store as basicfile iot_lob2(
        disable storage in row
)
;

insert into test_user.iot1
select  /* values for the first five columns reconstructed - lost from the original listing */
        rownum, rpad('x',20,'x'), rownum, rownum, rpad('x',50,'x'),
        case when mod(rownum,100) = 0 then rpad('x',100,'x') end,
        case when mod(rownum,100) = 0 then rpad('x',100,'x') end
from    dual
connect by rownum <= 10000
;


prompt  ====================================================
prompt  Now create types to allow creation of a nested table
prompt  ====================================================

create type test_user.jpl_item as object (n1 number, v1 varchar2(32));

create type test_user.jpl_tab_type as table of test_user.jpl_item;

create table test_user.demo_nest (
        id      number,
        nest1   test_user.jpl_tab_type,
        nest2   test_user.jpl_tab_type
)
nested table nest1 store as t_nested1
return as locator
nested table nest2 store as t_nested2
return as locator
;

create unique index test_user.tn1_pk on test_user.t_nested1(nested_table_id, n1) compress 1;
create unique index test_user.tn2_pk on test_user.t_nested2(nested_table_id, n1) compress 1;

create index  test_user.tn1_v1 on test_user.t_nested1(v1);
create index  test_user.tn2_v1 on test_user.t_nested2(v1);

insert into test_user.demo_nest values (
        1,      -- id value and collection constructors reconstructed
        test_user.jpl_tab_type(test_user.jpl_item(1,'Item One one'),
                               test_user.jpl_item(2,'Item One two')),
        test_user.jpl_tab_type(test_user.jpl_item(1,'Item One one'),
                               test_user.jpl_item(2,'Item One two'))
);

insert into test_user.demo_nest values (
        2,      -- id value and collection constructors reconstructed
        test_user.jpl_tab_type(test_user.jpl_item(1,'Item Two one'),
                               test_user.jpl_item(2,'Item Two two')),
        test_user.jpl_tab_type(test_user.jpl_item(1,'Item Two one'),
                               test_user.jpl_item(2,'Item Two two'))
);

commit;


You’ll notice that I’ve prefixed every table, index and type name with a schema name. This is because I set up this test to run as a DBA so I’m creating the objects while connected with a different id.

Of course, before trying to report allocations summed by base table, it would be sensible to produce a simple list of the segments we should see so that we know when we’ve reported all of them. So let’s start with that very simple report:

column bytes format 999,999,999,999
column segment_name format a25

break on report on owner on object_name skip 1
compute sum of bytes on report

select  segment_name, segment_type, bytes
from    dba_segments
where   owner = 'TEST_USER'
order by
        segment_type, segment_name
;

SEGMENT_NAME              SEGMENT_TYPE                  BYTES
------------------------- ------------------ ----------------
IOT1_PK                   INDEX                     1,048,576
SYS_C008380               INDEX                     1,048,576
SYS_C008381               INDEX                     1,048,576
SYS_FK0000075579N00002$   INDEX                     1,048,576
SYS_FK0000075579N00004$   INDEX                     1,048,576
T1_I1                     INDEX                     5,242,880
T1_I2                     INDEX                     5,242,880
TN1_PK                    INDEX                     1,048,576
TN1_V1                    INDEX                     1,048,576
TN2_PK                    INDEX                     1,048,576
TN2_V1                    INDEX                     1,048,576
SYS_IL0000075565C00027$$  LOBINDEX                  1,048,576
SYS_IL0000075565C00028$$  LOBINDEX                  1,048,576
SYS_IL0000075572C00006$$  LOBINDEX                  1,048,576
SYS_IL0000075572C00007$$  LOBINDEX                  1,048,576
IOT_LOB1                  LOBSEGMENT                1,048,576
IOT_LOB2                  LOBSEGMENT                1,048,576
T1_LOB1                   LOBSEGMENT                2,097,152
T1_LOB2                   LOBSEGMENT                2,097,152
T_NESTED1                 NESTED TABLE              1,048,576
T_NESTED2                 NESTED TABLE              1,048,576
DEMO_NEST                 TABLE                     1,048,576
SYS_IOT_OVER_75572        TABLE                     1,048,576
T1                        TABLE                    12,582,912
sum                                                47,185,920

So we have a list of segments, and we have a sum of bytes to aim for. One thing you might notice, though, is that the name “IOT1” has “disappeared”; instead the thing we see as a “TABLE” is its overflow segment, called SYS_IOT_OVER_75572 (the number being the object_id of the table we originally defined). We will have to do something in our code to translate that sys_iot_over_75572 to iot1 if we want to make our final report easy to comprehend.

There are a few other oddities in the list, but some of them we may be familiar with already – the indexes with names like SYS_IL0000075565C00027$$ are the LOB indexes for the four LOBs we created. (I deliberately created basicfile LOBs in case the OP wasn’t using the securefile LOBs that most systems are now using.) The 75565 in the sample here is the object_id of the base table; the C00027 tells us that the LOB is column 27 of the table definition.

You may be wondering about the indexes like SYS_C008380 and SYS_FK0000075579N00002$. We may need to work on those – but I’ll tell you the answer straight away – the former is a unique index on the “nest1” column  in the demo_nest table, the latter is a “foreign key” index on the (invisible) nested_table_id column in the nested tables.

Side note: although the sys_fk0000075579N00002$ index looks as if it’s supposed to be a “foreign key” index, as suggested by the name, the dependency, and the function, there’s no foreign key constraint declared between the parent table and the nested table. Also, if you check dba_indexes, the generated column has the value ‘N’, even though the name of the index was clearly generated by Oracle internal code. (The index at the other end of the join – sys_c008380 – is, however, flagged with generated = ‘Y’.)
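If you want to check those flags yourself, a query along these lines (a sketch, again needing DBA privileges) shows how the system-named indexes are marked:

```sql
select  table_name, index_name, index_type, generated
from    dba_indexes
where   table_owner = 'TEST_USER'
and     (index_name like 'SYS_C%' or index_name like 'SYS_FK%')
order by
        table_name, index_name
;
```
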

Let’s start building our solution with the easiest bit.

select
        tab.owner,
        cast('TABLE' as varchar2(11))   object_type,
        tab.table_name,
        nvl(seg_tab.bytes,0)            bytes
from
        dba_tables        tab
left outer join
        dba_segments      seg_tab
on
        seg_tab.owner        = tab.owner
and     seg_tab.segment_name = tab.table_name
where
        tab.owner            = 'TEST_USER'
and     tab.nested          != 'YES'
;

OWNER           OBJECT_TYPE TABLE_NAME                      BYTES
--------------- ----------- -------------------- ----------------
TEST_USER       TABLE       T1                         12,582,912
                TABLE       SYS_IOT_OVER_75572          1,048,576
                TABLE       DEMO_NEST                   1,048,576
                TABLE       IOT1                                0
***************                                  ----------------
sum                                                    14,680,064

Immediately we have to correct two problems – we can see the original iot1 table, but it doesn’t have an associated data segment; it has a “top”, which is the primary key index segment, and an “overflow”, which is designated by Oracle as a table segment. So we need to check the columns in dba_tables to work out how to get the more appropriate name reported for the table segment, and add a predicate to hide the original name. Here’s “tables only mark 2”:

select
        tab.owner,
        cast('TABLE' as varchar2(11))   object_type,
        case
                 when tab.iot_type = 'IOT_OVERFLOW'
                        then tab.iot_name
                else tab.table_name
        end                             object_name,
        case
                 when tab.iot_type = 'IOT_OVERFLOW'
                        then tab.table_name
        end                             auxiliary_name,
        nvl(seg_tab.bytes,0)            bytes
from
        dba_tables        tab
left outer join
        dba_segments      seg_tab
on
        seg_tab.owner        = tab.owner
and     seg_tab.segment_name = tab.table_name
where
        tab.owner            = 'TEST_USER'
and     tab.nested          != 'YES'
and     nvl(tab.iot_type,'NORMAL') != 'IOT'
;

OWNER           OBJECT_TYPE OBJECT_NAME          AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       TABLE       T1                                                   12,582,912
TEST_USER       TABLE       IOT1                 SYS_IOT_OVER_75572               1,048,576
TEST_USER       TABLE       DEMO_NEST                                             1,048,576

I’ve identified the table type using the iot_type column in dba_tables, hidden rows where the iot_type is ‘IOT’, and reported the iot_name (with table_name as an auxiliary name) when the iot_type is ‘IOT_OVERFLOW’. And we can now check that the result is consistent with the content of dba_segments that lists segment_type = ‘TABLE’.
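That cross-check is just the earlier segment listing restricted to table segments, for example:

```sql
select  segment_name, bytes
from    dba_segments
where   owner = 'TEST_USER'
and     segment_type = 'TABLE'
order by
        segment_name
;
```
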

So let’s move on to indexes.

select
        ind.table_owner                 owner,
        cast('INDEX' as varchar2(11))   object_type,
        ind.table_name,
        index_name                      auxiliary_name,
        nvl(seg_ind.bytes,0)            bytes
from
        dba_indexes       ind
left outer join
        dba_segments      seg_ind
on
        ind.owner      = seg_ind.owner
and     ind.index_name = seg_ind.segment_name
where
        ind.table_owner = 'TEST_USER'
;

OWNER           OBJECT_TYPE TABLE_NAME           AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       INDEX       T1                   SYS_IL0000075565C00027$$         1,048,576
TEST_USER       INDEX       T1                   SYS_IL0000075565C00028$$         1,048,576
TEST_USER       INDEX       T1                   T1_I1                            5,242,880
TEST_USER       INDEX       T1                   T1_I2                            5,242,880
TEST_USER       INDEX       IOT1                 IOT1_PK                          1,048,576
TEST_USER       INDEX       IOT1                 SYS_IL0000075572C00007$$         1,048,576
TEST_USER       INDEX       IOT1                 SYS_IL0000075572C00006$$         1,048,576
TEST_USER       INDEX       T_NESTED1            SYS_FK0000075579N00002$          1,048,576
TEST_USER       INDEX       T_NESTED2            SYS_FK0000075579N00004$          1,048,576
TEST_USER       INDEX       DEMO_NEST            SYS_C008380                      1,048,576
TEST_USER       INDEX       DEMO_NEST            SYS_C008381                      1,048,576
TEST_USER       INDEX       T_NESTED1            TN1_PK                           1,048,576
TEST_USER       INDEX       T_NESTED2            TN2_PK                           1,048,576
TEST_USER       INDEX       T_NESTED1            TN1_V1                           1,048,576
TEST_USER       INDEX       T_NESTED2            TN2_V1                           1,048,576

A quick check shows that we’ve picked up the 15 indexes reported by dba_segments – although I’ve labelled them all just as “INDEX” while dba_segments does label LOB indexes differently. As you can see I’ve reported the table name in each case with the index name as the “auxiliary” name. This will be of use when I’m summing up the space associated with each table.
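If you want to see how dba_segments distinguishes the two labels, a quick breakdown (same idea as the first segment listing) is:

```sql
select  segment_type, count(*) segments, sum(bytes) bytes
from    dba_segments
where   owner = 'TEST_USER'
and     segment_type in ('INDEX','LOBINDEX')
group by
        segment_type
;
```
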

On to the LOBs – also easy:

select
        lob.owner,
        cast('LOB' as varchar2(11))   object_type,
        lob.table_name,
        lob.column_name               auxiliary_name,
        nvl(seg_lob.bytes,0)          bytes
from
        dba_lobs    lob
left outer join
        dba_segments      seg_lob
on
        seg_lob.owner        = lob.owner
and     seg_lob.segment_name = lob.segment_name
where
        lob.owner            = 'TEST_USER'
;

OWNER           OBJECT_TYPE TABLE_NAME           AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       LOB         T1                   C1                               2,097,152
TEST_USER       LOB         T1                   C2                               2,097,152
TEST_USER       LOB         IOT1                 CT1                              1,048,576
TEST_USER       LOB         IOT1                 CT2                              1,048,576

And finally the nested tables:

select
        nst.owner,
        cast('NESTED' as varchar2(11))   object_type,
        nst.parent_table_name            object_name,
        table_name                       auxiliary_name,
        nvl(seg_nst.bytes,0)             bytes
from
        dba_nested_tables    nst
left outer join
        dba_segments          seg_nst
on
        seg_nst.owner        = nst.owner
and     seg_nst.segment_name = nst.table_name
where
        nst.owner            = 'TEST_USER'
;

OWNER           OBJECT_TYPE OBJECT_NAME          AUXILIARY_NAME                       BYTES
--------------- ----------- -------------------- ------------------------- ----------------
TEST_USER       NESTED      DEMO_NEST            T_NESTED1                        1,048,576
TEST_USER       NESTED      DEMO_NEST            T_NESTED2                        1,048,576

A quick check tells us that the four pieces of code produce an exact match for all the segments reported in dba_segments – so all we have to do now is stitch the four queries together with UNION ALL, then sum(bytes) by owner and table_name.

There are 3 little problems to deal with though.

  • We have the predicate “where {owner} = ‘TEST_USER'” appearing 4 times in total, which is a bit messy. If we put this outside the UNION ALL, will Oracle be smart enough to push it back inside the UNION ALL as part of its query transformation? It seems to, but the plan (which is several hundred lines long – so I’m not going to look too closely) does change a little, though on my little test system it didn’t make much difference to the performance. Initially it looks as if it might be okay to create a view from this query with no restriction on table owner, and allow users to query the view with a schema name.
  • The code to report indexes reports the indexes on the nested tables under t_nested1 and t_nested2 – we want the indexes to be associated with table demo_nest, so we need to refine that bit of the code. It needs an outer join to dba_nested_tables to supply the parent_table_name if it exists.
  • On my little system the query takes several seconds to run. Should I worry about that? Not until I’ve tested it on a much bigger system, and not until I know how frequently it needs to run. It may be good enough as it stands, and the ease with which I modified the first version of my code to handle the nested tables indexing issue is an indicator of the benefits of keeping a complex job as the sum of its parts when it’s reasonable to do so.

Let’s go straight to the final (so far) SQL:

select
        owner, object_name, sum(bytes)
from    (
        select
                tab.owner,
                cast('TABLE' as varchar2(11))   object_type,
                case
                         when tab.iot_type = 'IOT_OVERFLOW'
                                then tab.iot_name
                        else tab.table_name
                end                             object_name,
                case
                         when tab.iot_type = 'IOT_OVERFLOW'
                                then tab.table_name
                end                             auxiliary_name,
                nvl(seg_tab.bytes,0)            bytes
        from
                dba_tables        tab
        left outer join
                dba_segments      seg_tab
        on
                seg_tab.owner        = tab.owner
        and     seg_tab.segment_name = tab.table_name
        where
                tab.owner            = 'TEST_USER'
        and     tab.nested          != 'YES'
        and     nvl(tab.iot_type,'NORMAL') != 'IOT'
        union all
        select
                ind.table_owner                  owner,
                cast('INDEX' as varchar2(11))    object_type,
                nvl(
                        nst.parent_table_name,
                        ind.table_name
                )                                table_name,
                index_name                       auxiliary_name,
                nvl(seg_ind.bytes,0)             bytes
        from
                dba_indexes       ind
        left outer join
                dba_nested_tables nst
        on
                nst.owner       = ind.table_owner
        and     nst.table_name  = ind.table_name
        left outer join
                dba_segments      seg_ind
        on
                seg_ind.owner         = ind.owner
        and     seg_ind.segment_name  = ind.index_name
        where
                ind.table_owner = 'TEST_USER'
        union all
        select
                lob.owner,
                cast('LOB' as varchar2(11))   object_type,
                lob.table_name,
                lob.column_name               auxiliary_name,
                nvl(seg_lob.bytes,0)          bytes
        from
                dba_lobs    lob
        left outer join
                dba_segments      seg_lob
        on
                seg_lob.owner        = lob.owner
        and     seg_lob.segment_name = lob.segment_name
        where
                lob.owner            = 'TEST_USER'
        union all
        select
                nst.owner,
                cast('NESTED' as varchar2(11))   object_type,
                nst.parent_table_name            object_name,
                table_name                       auxiliary_name,
                nvl(seg_nst.bytes,0)             bytes
        from
                dba_nested_tables    nst
        left outer join
                dba_segments          seg_nst
        on
                seg_nst.owner        = nst.owner
        and     seg_nst.segment_name = nst.table_name
        where
                nst.owner            = 'TEST_USER'
        )
where
        owner = 'TEST_USER'
group by
        owner, object_name
order by
        owner, object_name
;
OWNER           OBJECT_NAME                SUM(BYTES)
--------------- -------------------- ----------------
TEST_USER       IOT1                        6,291,456
TEST_USER       DEMO_NEST                  11,534,336
TEST_USER       T1                         29,360,128
sum                                        47,185,920

I wrote this note because my response to the original query was an informal suggestion of taking this approach; then I thought it might be worth writing a general comment about solving big problems by starting with little problems and stitching the pieces together, using this case as a concrete example – then I just went ahead to finish the thing off within the boundaries of the original requirements.

There’s a reason, though, why I tend to avoid publishing SQL that might be useful – it’s always possible to overlook bits and end up with people trying to use code that’s not totally appropriate to their system. In this case, of course, there’s a total (but deliberate) failure to handle clusters, partitions and subpartitions, but I don’t know how many other of the “less commonly used” bits of Oracle I haven’t considered. I do know that I haven’t touched on domain indexes (such as text indexes with all their convoluted collections of tables and indexes) but there may be some oddities of (e.g.) advanced queueing, replication, and audit vault that fall outside the range of the possibilities I’ve covered above.

Update 7th Nov 2019

The OP from OTN reported seeing some nested tables in the output – which I thought I’d catered for – so I modified the code I have on my sandbox to add a nested table to the IOT, added a couple of extra indexes on the parent of the nested table, changed a couple of columns and object names to quoted mixed case, and everything still seemed to work okay.

It turned out that the OP had nested tables that were nested 5 deep – which means I need a hierarchical query to connect the 2nd to 5th (or nth) layer of nesting back to the parent to report the total volume against the top level table.
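As a starting point (a sketch only, untested against deep nesting), the hierarchical walk through dba_nested_tables might look like this, using connect_by_root to attribute every layer of nesting to its top-level table:

```sql
select  connect_by_root nst.parent_table_name   top_table,
        nst.table_name                          nested_table,
        level                                   depth
from    dba_nested_tables nst
where   nst.owner = 'TEST_USER'
start with
        nst.parent_table_name not in (
                select table_name
                from   dba_nested_tables
                where  owner = 'TEST_USER'
        )
connect by
        prior nst.table_name = nst.parent_table_name
;
```

The start-with clause picks out parents that are not themselves nested tables, and the top_table column is what the space report would then group on.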

Watch this space – but not for a day or two.


Once-Reticent Utilities Accelerating Move to the Cloud

Oracle Press Releases - Wed, 2019-11-06 07:00
Press Release
Once-Reticent Utilities Accelerating Move to the Cloud
Survey shows desire to boost customer experience driving utilities’ cloud adoption, although cybersecurity remains a top concern

Redwood Shores, Calif.—Nov 6, 2019

A large majority (71 percent) of utilities now use cloud software, up from just 45 percent three years ago, according to a new Oracle survey. Improving customer experience was noted as the key driver for increased cloud investment. But while progress has been made, 85 percent of respondents still cite security as a top concern and noted that regulatory acceptance remains a barrier to cloud adoption.

“Just a few years ago, utilities looked at cloud computing with skepticism at best. Today, many realize it’s essential to their survival,” said Brian Bradford, vice president, industry solutions, Oracle. “Utilities are increasingly seeing cloud as a fundamental asset in meeting ever-evolving expectations, mitigating security concerns, and turning data into an opportunity to modernize and evolve their operations to better serve constituents and customers.”

The report, conducted by research firm Zpryme, surveyed more than 150 global utility leaders to better understand the pace and breadth of cloud computing adoption. Respondents spanned investor-owned, municipal, cooperative and district/federal utilities representing electric, gas, water, and renewables. To download the full report, visit: https://etsinsights.com/reports/the-acceleration-of-cloud-computing-for-utilities/

Changing Customer Expectations

Customers are increasingly looking for more information regarding their overall utility usage and bill and how to lessen the impact of both. And they expect this with the high level of service and personalization that they are enjoying in other sectors, such as retail and banking. Utilities see cloud technologies as a means to provide the speed and control to meet these customer expectations. So, it was no surprise to see continued investment in customer experience and engagement (31 percent) and customer information (29 percent) cloud systems in the top five investment priorities for utilities in the coming years.

Drowning in Data

Sixty-four percent of utilities noted that cloud computing is critical to their company’s future success. A key driver is the need to better manage an ever-growing pool of data coming from smart meters, IoT sensors, customers’ home energy devices, and more. Utilities realize that it is becoming too great a task and risk to manage this influx without a significant investment in tools that can capture and analyze grid edge, supply chain, and customer data quickly. Utilities hope to use the technological flexibility provided by cloud computing to innovate by using data to solve business problems from grid optimization to managing distributed energy resources (DERs).

“Utilities are seeing the impact that digital transformation is having on their business model,” said Jason Rodriguez, CEO of Zpryme. “The challenge is so much greater than simply capturing millions of disparate data points. Utilities need IT tools to manage and drive better business decisions, and cloud computing provides the agility to more effectively integrate data and use it predictively.”

Security Concerns Remain

Utilities remain concerned about security (85 percent) and privacy (81 percent) and also rated these as the biggest barriers they face when it comes to using or expanding cloud computing. Concerns, however, were not around the vulnerability of cloud computing technology itself, but rather a growing barrage of increasingly sophisticated cybersecurity threats. Whether it is IoT devices, field area networks, or 5G networks, utilities are operating in a hyper-connected digital ecosystem that can provide new and varied opportunities for nefarious actors to expose their networks. Utilities recognize that they need to stay vigilant and that will require help from both regulators and vendors.

Regulatory Right-Sizing

While improving, many regulators globally still don’t allow for utilities to earn a rate of return on their cloud investments, which impedes upgrading legacy IT systems. While utilities recognize the role that regulators play in approving their prudent expenditures, 26 percent cited regulatory acceptance as a continued barrier to cloud adoption. The majority of utility respondents (75 percent), however, also note that regulators can play a strong role in helping protect customer privacy and security.

Clouds Ahead

Many utilities dipped their toe in the cloud waters with enterprise applications, such as enterprise resource planning (ERP) or human capital management (HCM), and 74 percent plan to spend more on cloud in the next 3-5 years. While a move to better serve and engage customers is driving this next phase of cloud adoption, operational system investments are not far behind. Forty-three percent of utilities have increased their operational spending by at least 25 percent over 2015 levels.

To learn more about Oracle’s utility solutions, please visit: www.oracle.com/utilities

Contact Info
Kristin Reeves
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

Turning On or Tuning Out? Aiconix Applies AI to See What Holds TV Viewers’ Interest

Oracle Press Releases - Wed, 2019-11-06 06:00
Turning On or Tuning Out? Aiconix Applies AI to See What Holds TV Viewers’ Interest

By Guest Blogger, Oracle—Nov 6, 2019

Why does one video go viral and another lose viewers? News agencies, publishers, and entertainment companies are becoming increasingly methodical about measuring audiences’ engagement. Aiconix, a German startup based in Hamburg, is applying artificial intelligence to ferret out the insights needed to help organizations answer those questions.

Deploying groups of testers in a neurolab in its home city, aiconix uses electroencephalogram technology to measure viewers’ emotions, including surprise or boredom, while watching video clips. It has developed AI software called aingine, based on that data, that can predict whether certain factors in clips, such as abrupt subject matter shifts or too little connection between a main theme and a protagonist’s words, cause viewers to switch off. In addition, the software can identify what roles speakers play in a video, help journalists quickly identify and label subjects in video clips, and automatically produce transcripts with time codes, taking away some of the grunt work of TV production.

Thanks to a reliable and high-performing IT cloud infrastructure from Oracle, aiconix can offer its customers fast, secure performance with the ability to scale. The company, part of the Oracle Global Startup Ecosystem, is using Oracle Cloud Infrastructure, including state-of-the-art graphics processors, to train its AI system. Oracle’s startup program connects promising startups with Oracle’s cloud resources and enterprise expertise.

“We needed a reliable data center in Germany where our aingine could run, scale, and grow with us,” says Eugen Gross, CEO and co-founder of aiconix. “Oracle Cloud Infrastructure provides us with a range of cloud resources and features we need. In addition, being part of the Global Startup Ecosystem gives us the personal support and direct access to customers.”

So far, the company has attracted customers at organizations including a large European press agency and a German news photography agency.

Read More Stories from Oracle Cloud

Aiconix is one of thousands of customers on their journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes.

How to scale up a Patroni cluster

Yann Neuhaus - Wed, 2019-11-06 04:43

During the preparation of my presentation for pgconf.eu I ran into one big issue: I had to stop my cluster to add a new node. That was not the way I wanted to achieve this. I want a high-availability solution that can be scaled up without any outage. Thanks to a little hint during pgconf.eu I was able to find a solution. In this post I will show how to scale up manually, without using a playbook.

Starting position

We start with a 3-node Patroni cluster, which can be created using this blog post.
Now we want to add a fourth node to the existing etcd and Patroni cluster. In case you also need a playbook to install a fourth node, check out my GitHub repository.

Scale up the etcd cluster

This step is only needed when you want to scale up your etcd cluster as well. Scaling up a Patroni cluster does not require scaling up the etcd cluster: you can, of course, scale up Patroni without adding more etcd members. But maybe someone also needs to scale up their etcd cluster and is searching for a solution. If not, just jump to the next step.

Be sure the etcd and patroni services are not started on the fourth node.

postgres@patroni4:/home/postgres/ [PG1] systemctl status etcd
● etcd.service - dbi services etcd service
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
postgres@patroni4:/home/postgres/ [PG1] systemctl status patroni
● patroni.service - dbi services patroni service
   Loaded: loaded (/etc/systemd/system/patroni.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
postgres@patroni4:/home/postgres/ [PG1]

Make the following adjustments in the etcd.conf of the 4th node.

postgres@patroni4:/home/postgres/ [PG1] cat /u01/app/postgres/local/dmk/etc/etcd.conf
name: patroni4
data-dir: /u02/pgdata/etcd
initial-cluster-state: 'existing'
initial-cluster: patroni1=,patroni2=,patroni3=,patroni4=

Next, add the new etcd member to the existing etcd cluster. You can execute this on any existing member of the cluster.

postgres@patroni1:/home/postgres/ [PG1] etcdctl member add patroni4
Added member named patroni4 with ID dd9fab8349b3cfc to cluster


Now you can start the etcd service on the 4th node.

postgres@patroni4:/home/postgres/ [PG1] sudo systemctl start etcd
postgres@patroni4:/home/postgres/ [PG1] systemctl status etcd
● etcd.service - dbi services etcd service
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-17 16:39:16 CEST; 9s ago
 Main PID: 8239 (etcd)
   CGroup: /system.slice/etcd.service
           └─8239 /u01/app/postgres/local/dmk/bin/etcd --config-file /u01/app/postgres/local/dmk/etc/etcd.conf
postgres@patroni4:/home/postgres/ [PG1]

After a short check, we can see that node 4 has been added to the existing cluster:

postgres@patroni4:/home/postgres/ [PG1] etcdctl member list
dd9fab8349b3cfc: name=patroni4 peerURLs= clientURLs= isLeader=false
16e1dca5ee237693: name=patroni1 peerURLs= clientURLs= isLeader=false
28a43bb36c801ed4: name=patroni2 peerURLs= clientURLs= isLeader=false
5ba7b55764fad76e: name=patroni3 peerURLs= clientURLs= isLeader=true

Scale up Patroni

Scaling up the Patroni cluster is also really easy.
Adjust the host entry in the patroni.yml on the new node.

postgres@patroni4:/home/postgres/ [PG1] cat /u01/app/postgres/local/dmk/etc/patroni.yml | grep hosts

Afterwards, start the Patroni service.

postgres@patroni4:/home/postgres/ [PG1] sudo systemctl start patroni
postgres@patroni4:/home/postgres/ [PG1] systemctl status patroni
● patroni.service - dbi services patroni service
   Loaded: loaded (/etc/systemd/system/patroni.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-17 17:03:19 CEST; 5s ago
  Process: 8476 ExecStartPre=/usr/bin/sudo /bin/chown postgres /dev/watchdog (code=exited, status=0/SUCCESS)
  Process: 8468 ExecStartPre=/usr/bin/sudo /sbin/modprobe softdog (code=exited, status=0/SUCCESS)
 Main PID: 8482 (patroni)
   CGroup: /system.slice/patroni.service
           ├─8482 /usr/bin/python2 /u01/app/postgres/local/dmk/bin/patroni /u01/app/postgres/local/dmk/etc/patroni.yml
           ├─8500 /u01/app/postgres/product/11/db_5/bin/postgres -D /u02/pgdata/11/PG1/ --config-file=/u02/pgdata/11/PG1/postgresql.conf --listen_addresses= --max_worker_processes=8 --max_locks_per_transact...
           ├─8502 postgres: PG1: logger
           ├─8503 postgres: PG1: startup   waiting for 000000020000000000000006
           ├─8504 postgres: PG1: checkpointer
           ├─8505 postgres: PG1: background writer
           ├─8506 postgres: PG1: stats collector
           ├─8507 postgres: PG1: walreceiver
           └─8513 postgres: PG1: postgres postgres idle

To be sure everything runs correctly, check the status of the Patroni cluster:

postgres@patroni4:/home/postgres/ [PG1] patronictl list
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
|   PG1   | patroni1 | |        | running |  2 |       0.0 |
|   PG1   | patroni2 | |        | running |  2 |       0.0 |
|   PG1   | patroni3 | | Leader | running |  2 |       0.0 |
|   PG1   | patroni4 | |        | running |  2 |       0.0 |

Using the playbooks revealed one flaw: the host entry in patroni.yml only points to localhost. When starting the fourth node, Patroni does not look for all the other hosts, it only checks its own availability. This works fine for an initial cluster, but not when you want to extend one.
And: always keep in mind that an etcd cluster needs an uneven number of members, so don’t add only a fourth etcd node.
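
The reasoning behind the odd-membership rule can be sketched with a little quorum arithmetic (a generic sketch, not from the original post): etcd accepts writes only while a majority of members is reachable.

```shell
# Quorum for an N-member etcd cluster is floor(N/2) + 1;
# the cluster survives N - quorum member failures.
for n in 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

A 4-member cluster has a quorum of 3 and still tolerates only one failure, exactly like a 3-member cluster, so the extra even node adds a machine that can fail without adding any resilience.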

Cet article How to scale up a Patroni cluster est apparu en premier sur Blog dbi services.

Galera Cluster 4 with MariaDB 10.4.8

Yann Neuhaus - Wed, 2019-11-06 02:31

Last month, at a new customer, I had to install the latest version of the MariaDB server, 10.4.8, to set up a Galera Cluster with 3 master nodes.
The good news was that this version ships with the latest version of the Galera plugin from Codership: Galera Cluster 4.0.
As usual, installation & configuration were quite easy.

$ sudo yum -y install MariaDB-server
$ sudo yum list installed|grep -i mariadb
MariaDB-client.x86_64 10.4.8-1.el7.centos @mariadb-main
MariaDB-common.x86_64 10.4.8-1.el7.centos @mariadb-main
MariaDB-compat.x86_64 10.4.8-1.el7.centos @mariadb-main
MariaDB-server.x86_64 10.4.8-1.el7.centos @mariadb-main
galera.x86_64 26.4.2-1.rhel7.el7.centos @mariadb-main

New Features

But now I just want to introduce some of the interesting new high-level features available in this version.

Streaming Replication

In previous versions, large, long-running write transactions frequently caused conflicts during
certification-based replication, and because of this, transactions were often aborted and rolled back.
Some of our customers were really suffering and complaining because of this problem and limitation.
Now, the node that initiates a big transaction no longer waits until the commit:
it breaks the transaction into fragments, certifies them, and replicates them to the other master nodes while the transaction is still running.
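
As an illustration (a sketch based on the Galera 4 documentation, not part of the original post), streaming replication is controlled per session via the wsrep_trx_fragment_unit and wsrep_trx_fragment_size variables; a fragment size of 0, the default, disables it:

```sql
-- Enable streaming replication for one large batch job,
-- replicating a fragment after every 10000 rows written.
SET SESSION wsrep_trx_fragment_unit = 'rows';
SET SESSION wsrep_trx_fragment_size = 10000;

START TRANSACTION;
-- ... large batch INSERT/UPDATE work here ...
COMMIT;

-- Disable streaming replication again for normal short transactions.
SET SESSION wsrep_trx_fragment_size = 0;
```

Note that rolling back a streamed transaction is expensive, since already-replicated fragments must be rolled back on all nodes, so this is best reserved for transactions that are unlikely to fail.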

New System Tables

Looking at the mysql database, we can see 3 new system tables.
They contain information that was already available in status variables, but it is now persistent:

MariaDB [mysql]> show tables from mysql like 'wsrep_%';
| Tables_in_mysql (wsrep_%) |
| wsrep_cluster             |
| wsrep_cluster_members     |
| wsrep_streaming_log       |
3 rows in set (0.000 sec)

These 3 new tables should give database administrators a better overview of the current status of the cluster.

MariaDB [mysql]> select * from wsrep_cluster;
| cluster_uuid                         | view_id | view_seqno | protocol_version | capabilities |
| 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | 3       | 1967       | 4                | 184703       |
1 row in set (0.000 sec)

cluster_uuid is the uuid of the cluster, corresponding to the status variable: wsrep_cluster_state_uuid
view_id is the number of cluster configuration changes, corresponding to the status variable wsrep_cluster_conf_id
view_seqno is the latest Galera sequence number, corresponding to the status variable: wsrep_last_committed
protocol_version is the MariaDB wsrep patch version, corresponding to the status variable: wsrep_protocol_version
capabilities is the capabilities bitmask provided by the Galera library.

MariaDB [mysql]> select * from wsrep_cluster_members;
| node_uuid                            | cluster_uuid                         | node_name | node_incoming_address |
| 6542be69-ffd5-11e9-a2ed-a363df0547d5 | 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | node1     | AUTO                  |
| 6ae6fec5-ffd5-11e9-bb70-da54860baa6d | 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | node2     | AUTO                  |
| 7054b852-ffd5-11e9-8a45-72b6a9955d28 | 6c41b92b-e0f9-11e8-9924-de3112d0ce21 | node3     | AUTO                  |
3 rows in set (0.000 sec)

This system table displays the current membership of the cluster.
It contains a row for each member of the cluster.
node_uuid is the unique identifier of the cluster node.
cluster_uuid is the unique identifier of the cluster. It must be the same for all members.
node_name is self-explanatory.
node_incoming_address stores the IP address and port for client connections.

MariaDB [mysql]> select * from wsrep_streaming_log;
Empty set (0.000 sec)

This system table contains rows only while a transaction with streaming replication enabled is in progress.

Synchronization Functions

These new SQL functions can be used in wsrep synchronization operations.
It is possible to use them to obtain the GTID (Global Transaction ID).
WSREP_LAST_SEEN_GTID(): returns the GTID of the last write transaction observed by the client
WSREP_LAST_WRITTEN_GTID(): returns the GTID of the last write transaction made by the client
WSREP_SYNC_WAIT_UPTO_GTID(): blocks the client until the node applies and commits the given transaction
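
A hedged sketch of how these functions might be combined for a causal read across two nodes (the table t1 and shipping the GTID between sessions via the application are assumptions for illustration, not part of the post):

```sql
-- Session on node 1: write, then capture the GTID of that write.
INSERT INTO t1 (id, val) VALUES (42, 'hello');
SELECT WSREP_LAST_WRITTEN_GTID();
-- The application keeps the returned GTID, e.g. '6c41b92b-...:1968'.

-- Session on node 2: block (here up to 5 seconds) until that transaction
-- has been applied and committed locally, then read safely.
SELECT WSREP_SYNC_WAIT_UPTO_GTID('6c41b92b-...:1968', 5);
SELECT val FROM t1 WHERE id = 42;
```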


These new features, and especially streaming replication, which is a real improvement and a huge boost for large
transaction support, should satisfy users and DBAs and hopefully improve their opinion of MariaDB Galera Cluster.
In another blog, I will try to demonstrate how this streaming replication works.

Cet article Galera Cluster 4 with MariaDB 10.4.8 est apparu en premier sur Blog dbi services.

Create a Kubernetes cluster with Google Kubernetes Engine

Yann Neuhaus - Wed, 2019-11-06 01:16

Nowadays the market for cloud providers is very competitive. Large companies are fighting a very hard battle over the services they provide. Each offers a wide range of more or less identical products with specific features for each.

From my point of view, having deployed Kubernetes clusters in several environments (cloud and on-premises), I pay particular attention to Google Cloud for its Google Kubernetes Engine offering. The deployment of a Kubernetes cluster is very fast and gives us a test/production environment in a few minutes.

Therefore, in this blog post, we will explain how to create a Kubernetes cluster in Google Cloud with some useful additional resources.


A Google account is needed. You can create one by following the sign-up link: https://cloud.google.com. Otherwise, you can use the free tier account: https://cloud.google.com/free/?hl=en.

Create your project

Go to the cloud portal through the following link: https://console.cloud.google.com/home/dashboard

The first step is the creation of a project. Before creating any resource, you need to create a project in order to encapsulate all your resources within it. To create a project properly, follow the steps below:

Enter your project name and click on create:

After a few seconds, your project will be created, and you will have access to the home dashboard:

Create your cluster

Once the project is created and ready to use, let’s create now our Kubernetes cluster. Click on the Kubernetes Engine menu and clusters sub-menu to begin the creation process.

Once the Kubernetes Engine API is enabled, we can click on the create cluster button and configure our cluster as needed.

We choose a standard cluster with 3 cluster nodes. You can edit the resources of your cluster according to your needs. For our example, we kept the default configuration provided by the API.

Click on the create button and after a few minutes, your cluster is ready for usage.

Start using your Kubernetes cluster

The Google Cloud SDK is needed to use your Kubernetes cluster from your favorite client platform. To install it, follow the instructions here:

Once the Cloud SDK is properly installed, we can initialize our environment with the following steps:

mehdi@MacBook-Pro: gcloud init
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
  account: mehdi.bada68@gmail.com
  disable_usage_reporting: 'True'
  project: jx-k8s-2511
Pick configuration to use:
 [1] Re-initialize this configuration [default] with new settings
 [2] Create a new configuration
Please enter your numeric choice:  1
Your current configuration has been set to: [default]
You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).
Choose the account you would like to use to perform operations for
this configuration:
 [1] mehdi.bada68@gmail.com
 [2] Log in with a new account
Please enter your numeric choice:  1
You are logged in as: [mehdi.bada68@gmail.com].
Pick cloud project to use:
 [1] kubernetes-infra-258110
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  1
Your current project has been set to: [kubernetes-infra-258110].
Do you want to configure a default Compute Region and Zone? (Y/n)?  Y
Which Google Compute Engine zone would you like to use as project default?
If you do not specify a zone via a command-line flag while working
with Compute Engine resources, the default is assumed.
Please enter numeric choice or text value (must exactly match list
item):  8

Now log in to gcloud:

mehdi@MacBook-Pro: gcloud auth login
You are now logged in as [mehdi.bada68@gmail.com].
Your current project is [kubernetes-infra-258110].  You can change this setting by running:
  $ gcloud config set project PROJECT_ID

Update your ~/.kube/config file with the credentials of the new cluster created before:

mehdi@MacBook-Pro: gcloud container clusters get-credentials standard-cluster-1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for standard-cluster-1.

Your kubectl client is now connected to your remote GKE cluster.

mehdi@MacBook-Pro: kubectl get nodes -o wide
NAME                                                STATUS   ROLES    AGE   VERSION          INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-standard-cluster-1-default-pool-1ac453ab-6tj4   Ready       56m   v1.13.11-gke.9   Container-Optimized OS from Google   4.14.145+        docker://18.9.7
gke-standard-cluster-1-default-pool-1ac453ab-s242   Ready       56m   v1.13.11-gke.9    Container-Optimized OS from Google   4.14.145+        docker://18.9.7
gke-standard-cluster-1-default-pool-1ac453ab-w0j0   Ready       56m   v1.13.11-gke.9   Container-Optimized OS from Google   4.14.145+        docker://18.9.7

Deploy Kubernetes Dashboard

After configuring the kubectl client we can start deploying resources on the Kubernetes cluster. One of the most popular resources in Kubernetes is the dashboard. It gives users and admins a graphical view of all cluster resources.

Download the dashboard deployment locally:

curl -o dashboard.yaml  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Then apply the deployment:

mehdi@MacBook-Pro: kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Create an admin Service Account and Cluster Role Binding that you can use to securely connect to the dashboard with admin-level permissions:

mehdi@MacBook-Pro: vi admin-sa.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

mehdi@MacBook-Pro: kubectl apply -f admin-sa.yaml
serviceaccount/admin created
clusterrolebinding.rbac.authorization.k8s.io/admin created

First, retrieve the authentication token for the admin service account, as below:

mehdi@MacBook-Pro: kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin | awk '{print $1}')
Name:         admin-token-dpsl9
Namespace:    kubernetes-dashboard
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 888de3dc-ffff-11e9-b5ca-42010a800046

Type:  kubernetes.io/service-account-token

ca.crt:     1119 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1kcHNsOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijg4OGRlM2RjLWZmZmYtMTFlOS1iNWNhLTQyMDEwYTgwMDA0NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.DBrfylt1RFDpHEuTy4l0BY-kRwFqm9Tvfne8Vu-IZVghy87vVWtsCatjt2wzCtMjX-I5oB0YAYmio7pTwPV-Njyd_VvbWupqOF7yiYE72ZXri0liLnQN5qbtyOmswsjim0ehG_yQSHaAqp21cQdPXb59ItBLN7q0-dh8wBRyOMAVLttjbmzBb02XxtJlALYg8F4hAkyHjJAzHAyntMylUXyS2gn471WUYFs1usDDpA8uZRU3_K6oyccXa-xqs8kKRB1Ch6n4Cq9TeMKkoUyv0_alEEQvwkp_uQCl2Rddk7bLNnjfDXDPC9LXOT-2xfvUf8COe5dO-rUXemHJlhPUHw

Copy the token value.

Access the Kubernetes dashboard using the kubectl proxy command:

mehdi@MacBook-Pro: kubectl proxy
Starting to serve on

The dashboard is now available at the following link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Choose the token authentication and paste the value from the previous output.

You now have access to the Kubernetes dashboard and have deployed your first Kubernetes resource!

Deploy an Ingress Load Balancer

In order to access your cluster service externally, we need to create an ingress load balancer for our GKE cluster. The ingress load balancer will make HTTP/HTTPS applications accessible publicly through the creation of an external IP address for the cluster.

Before creating the ingress, we need to deploy a test application for our example. Let’s deploy an NGINX server.

mehdi@MacBook-Pro: vi nginx-deployment.yaml

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: nginx

mehdi@MacBook-Pro: kubectl apply -f nginx-deployment.yaml

deployment.apps/nginx-deployment unchanged
service/nginx created

Create the ingress resource and deploy it as follows:

mehdi@MacBook-Pro: vi basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80

mehdi@MacBook-Pro: kubectl apply -f basic-ingress.yaml
ingress.extensions/basic-ingress created

Verify the status of the ingress:

mehdi@MacBook-Pro: kubectl get ing -o wide
NAME            HOSTS   ADDRESS         PORTS   AGE
basic-ingress   *   80      8h

The ingress resource has been properly created. We can see the result directly from the Google Cloud dashboard.

The NGINX service is now available via the Ingress Load Balancer and can be accessed through:

Cet article Create a Kubernetes cluster with Google Kubernetes Engine est apparu en premier sur Blog dbi services.

Which is Bigger – KB or MB?

VitalSoftTech - Tue, 2019-11-05 09:49

A computer noob like myself often gets confused about storage units in terms of the memory of a computer. All these KBs, MBs and GBs boggle my mind. Does that happen to you too? The question that baffles me the most is, which is Bigger – KB or MB? In addition to answering this question, […]

The post Which is Bigger – KB or MB? appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Honors Excellence in Construction and Engineering

Oracle Press Releases - Tue, 2019-11-05 09:18
Press Release
Oracle Honors Excellence in Construction and Engineering Customer honorees in the Americas region include Austin Commercial, Ameren, BlueCross BlueShield of South Carolina, DPR Construction and Facebook

Redwood Shores, Calif.—Nov 5, 2019

Oracle is today recognizing those Americas customers who are charting new paths and reimagining the building industry through technology with the 2019 Oracle Construction and Engineering Excellence Awards. Building on the Oracle Aconex Connect Awards, this new awards program highlights noteworthy customers that are successfully improving project delivery across industries.

Project- and asset-intensive organizations face numerous challenges, including growing project complexity, rising competition, fragmented project supply chains, and increasing risks. Selected from a large number of entrants, this year’s winners demonstrated compelling results in overcoming these challenges through their use of Oracle’s Construction and Engineering solution suite, including Oracle’s Primavera solutions, Oracle Aconex and Oracle Textura Payment Management.

“The Oracle Construction and Engineering Excellence Awards winners represent organizations of all sizes that have demonstrated remarkable success from the digital transformation of their businesses across the project and asset lifecycle,” said Mark Webster, senior vice president and general manager, Oracle Construction and Engineering. “From significant productivity gains, to better payment outcomes across the supply chain, to unlocking project intelligence for smarter decision making, the improvements realized by these organizations underscore their leadership in driving project delivery forward, with benefits that extend to all stakeholders. We congratulate the honorees and are thrilled to recognize their accomplishments.”

Select results from the 2019 Oracle Construction and Engineering Excellence Awards winners:

  • Austin Commercial achieved enterprise-wide standardization with Oracle Textura Payment Management, reducing pay application processing time from four to five days to just two days.
  • Ameren Corporation reduced the time spent evaluating project justifications by 90 percent (from 30-60 minutes, to just three to five) with Oracle’s Primavera P6 Enterprise Project Portfolio Management (EPPM) and Oracle’s Primavera Unifier.
  • BlueCross BlueShield of South Carolina accomplished organizational transformation with the help of Oracle’s Primavera Unifier and is now able to close internal work orders 50 percent faster than previously.
  • DPR Construction eliminated 360 accounting hours per week through its implementation of Oracle Textura Payment Management and a direct link to its ERP system, meaning tens of millions of dollars now reach subcontractors up to seven days faster.
  • Sage Homes Northwest sped up work package assignment to new projects by 66 percent utilizing Oracle Primavera Cloud.

Customers were nominated for the 2019 Oracle Construction and Engineering Excellence Awards in the following categories:

  • Data-driven Insights
  • Enterprise-wide Standardization
  • Innovation in Sustainability
  • Innovator of the Year (Individual award)
  • Platform Expansion
  • Social Impact Project of the Year
  • Solutions Implementation
  • Systems Integration

In addition, honorees were recognized for their achievements in the following segments:  Energy and Resources, Industrial Manufacturing, Public Infrastructure, and Residential and Commercial.

Below is a full list of 2019 Oracle Construction and Engineering Excellence Awards winners in the Americas:

  • Ameren Corporation (Ameren) – Platform Expansion in Energy and Resources, Primavera P6 Enterprise Project Portfolio Management (EPPM), Primavera Unifier
  • AMP United, LLC – Enterprise-wide Standardization in Residential and Commercial, Oracle Primavera Cloud
  • ATCO Ltd. (ATCO) – Enterprise-wide Standardization in Energy and Resources, Primavera P6 EPPM, Primavera Unifier
  • Austin Industries (Austin Commercial) – Enterprise-wide Standardization in Public Infrastructure, Oracle Textura Payment Management
  • BlueCross BlueShield of South Carolina – Enterprise-wide Standardization in Residential and Commercial, Primavera Unifier
  • DPR Construction – Systems Integration in Residential and Commercial, Oracle Textura Payment Management
  • Facebook – Innovator of the Year, Jason Mennig of the Data Center Engineering and Construction business unit, and Company award for Solutions Implementation in Residential and Commercial, Primavera P6 EPPM
  • FirstEnergy Corp. – Systems Integration in Energy and Resources, Primavera P6 EPPM, Primavera Unifier and Oracle Primavera Portfolio Management (OPPM)
  • Hampton Roads Sanitation District – Systems Integration in Public Infrastructure, Primavera P6 EPPM, Primavera Unifier
  • Intercorp Retail – Solutions Implementation in Residential and Commercial, Primavera Unifier
  • Lendlease Corporation – Innovator of the Year for Data-driven Insights in Residential and Commercial, Michael Mull, Oracle Textura Payment Management, Textura Business Analytics and Innovator of the Year for Enterprise-wide Standardization in Residential and Commercial, Dominic Giannola, Oracle’s Primavera P6 EPPM
  • Matthew Harris – Innovator of the Year for Solutions Implementation in Public Infrastructure, Oracle Aconex
  • Monteith Construction Corp. – Solutions Implementation in Residential and Commercial, Oracle Primavera Cloud
  • Oceaneering International, Inc. – Innovator of the Year for Enterprise-wide Standardization in Energy and Resources, Dalbert Varnell Jr., Primavera P6 EPPM
  • Sage Homes Northwest – Systems Integration in Residential and Commercial, Oracle Primavera Cloud
  • VESTA – Enterprise-wide Standardization in Residential and Commercial, Oracle’s Primavera Unifier

Read about our Global 2019 Oracle Construction and Engineering Excellence Awards winners: https://www.oracle.com/corporate/awards/construction-engineering/winners.html

Contact Info
Judi Palmer
Brent Curry
H+K Strategies
+1 312.255.3086
About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Brent Curry

  • +1 312.255.3086

Australia’s Favorite Department Store Gets More Agile with Oracle

Oracle Press Releases - Tue, 2019-11-05 07:00
Press Release
Australia’s Favorite Department Store Gets More Agile with Oracle
Myer implements Oracle Retail technology to turbocharge merchandise, inventory, and planning processes

Redwood Shores, Calif. and Melbourne, Australia—Nov 5, 2019

Myer, Australia’s largest department store chain, embarked on a digital transformation to deliver more compelling products to customers. Carrying many products across 61 stores, Myer offers a wide range of cosmetics, clothing, footwear, toys, and homewares. By embracing Oracle Retail, Myer transformed its end-to-end merchandise systems, inventory processes, and planning capabilities, resulting in reduced stock and improved sales, margin, and markdown reduction. Today, Myer is better able to deliver the right inventory to delight customers across its properties.

“I am optimistic about the ability of our Oracle Merchandise Planning tools to improve the efficiency of our buying teams and to deliver a more profitable, more localised product range for our customers,” said Allan Winstanley, Chief Merchandise Officer, Myer.

The project was a business-led initiative supported by IT. The team established clear goals and expectations to achieve immediate business benefits and support long-term growth objectives. Myer built a solid foundation with Oracle Retail Merchandise Financial Planning and Oracle Retail Size Profile Optimization with support from Tata Consulting Services (TCS), a Platinum and Cloud Elite level member of Oracle PartnerNetwork (OPN).

Phase one of the implementation delivered immediate results, with a stock reduction in time for holiday planning. In phase two, a cross-functional solution development team adopted an agile implementation methodology. The team worked in three-week sprints to deliver regular solution functionality for seasonal strategies, item planning, clustering, option planning, and assortment planning from Oracle Retail Planning and Optimization. This approach allowed Myer to enjoy accelerated benefits, and the project paid for itself.

“The project was delivered across various teams, including Merchandise, IT and Tata Consulting Services (TCS), where we transitioned from waterfall to an agile project delivery approach, which ensured the success of the project,” said Troy Smith, Chief Information Officer, Myer.

“Automate and simplify. We always encourage our customers to embrace these attributes and focus on operational excellence,” said Mike Webster, senior vice president and general manager, Oracle Retail. “By establishing a strong foundation, Myer can pivot to the customer and localize the assortment based on store demographics, climate, and other store attributes.”

The Myer Oracle project received multiple awards, including the 2018 ACS Gold Disruptor Award, the 2018-2019 TCS APAC 4.0 EXCELERATE Best Project Embracing Business 4.0 award, and the 2017-2018 Myer Inspirational Team Award.

Contact Info
Kaitlin Ambrogio
Oracle PR
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website, www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Handling PostgreSQL installations from packages

Yann Neuhaus - Sun, 2019-11-03 13:18
In this blog I will show how to handle a PostgreSQL installation with a customized PGDATA using the packages provided by the PostgreSQL community.

One issue with the packages is the hard-coded PGDATA, which will be overwritten in the service file with each update of PostgreSQL. This blog entry is based on PostgreSQL 12 with CentOS 7 and CentOS 8.

In my opinion a few things are missing from a minimal installation: the net-tools package and nano as an editor (I prefer nano over vi).

CentOS 7:

$ yum install net-tools
$ yum install nano

CentOS 8:

$ dnf install net-tools
$ dnf install nano

When using the PostgreSQL repository it is important to exclude PostgreSQL from the CentOS repositories.

On CentOS 7 you need to edit the CentOS-Base repo file to exclude PostgreSQL from the Base and Updates repositories.

$ nano /etc/yum.repos.d/CentOS-Base.repo

# CentOS-Base.repo
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.

[base]
name=CentOS-$releasever - Base
#exclude PostgreSQL from os repository
exclude=postgresql*

#released updates
[updates]
name=CentOS-$releasever - Updates
#exclude PostgreSQL from os repository
exclude=postgresql*

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus

On CentOS 8 it is just one command to exclude PostgreSQL from the distribution repository:

$ dnf -y module disable postgresql

Add the PostgreSQL repository to CentOS 7; in this example it is PostgreSQL 12:

$ yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

And the same for CentOS 8

$ dnf install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm

Now it is time to install PostgreSQL 12 from the PostgreSQL repository, BUT NO INITDB at this point.

CentOS 7:

$ yum install postgresql12 postgresql12-server postgresql12-contrib

CentOS 8:

$ dnf install postgresql12 postgresql12-server postgresql12-contrib

Now it is time to create the override file for the PostgreSQL service file; the steps are identical on CentOS 7 and CentOS 8.

In my example PGDATA is /pg_data/12/data, mounted as its own volume.

So edit the postgresql-12.service file with systemctl edit:

$ systemctl edit postgresql-12.service

And add the needed content for your customized PGDATA:
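The screenshot showing the override content is not reproduced here; based on the PGDATA path used in this post, a minimal sketch of the drop-in would typically look like this (assuming the /pg_data/12/data location from above):

```ini
[Service]
Environment=PGDATA=/pg_data/12/data
```

Because systemd merges drop-in files over the packaged unit, this Environment setting survives package updates that rewrite the original service file.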


Save the change; it will create a /etc/systemd/system/postgresql-12.service.d/override.conf file which will be merged with the original service file.

To check the content:

$ cat /etc/systemd/system/postgresql-12.service.d/override.conf

Reload systemd:

$ systemctl daemon-reload

Hopefully your PGDATA is owned by the postgres user; if not, make sure that it is:

$ chown -R postgres:postgres /pg_data/

Create the PostgreSQL instance as root user:

$ /usr/pgsql-12/bin/postgresql-12-setup initdb
Initializing database ... OK

Here it is:

[root@centos-8-blog /]# cd /pg_data/12/data/
[root@centos-8-blog data]# ls
base          pg_dynshmem    pg_multixact  pg_snapshots  pg_tblspc    pg_xact
global        pg_hba.conf    pg_notify     pg_stat       pg_twophase  postgresql.auto.conf
log           pg_ident.conf  pg_replslot   pg_stat_tmp   PG_VERSION   postgresql.conf
pg_commit_ts  pg_logical     pg_serial     pg_subtrans   pg_wal

From now on PostgreSQL minor updates can be done with yum update on CentOS 7 or dnf update on CentOS 8 in one step, with no extra downtime.

But be careful, before running yum update or dnf update STOP ALL POSTGRESQL INSTANCES!

This also works in environments with many instances: you need a service file and an override.conf for each instance, and any additional instance needs to be created with initdb -D instead of postgresql-12-setup initdb.
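As a sketch of what an additional instance could look like, run as root; the paths, port handling and unit name here are assumptions for illustration, not taken from the original post:

```shell
# create a second data directory and initialize it directly with initdb -D
mkdir -p /pg_data/12/data2
chown postgres:postgres /pg_data/12/data2
sudo -u postgres /usr/pgsql-12/bin/initdb -D /pg_data/12/data2

# remember to set a different port in /pg_data/12/data2/postgresql.conf

# give the second instance its own unit plus its own override.conf
cp /usr/lib/systemd/system/postgresql-12.service \
   /etc/systemd/system/postgresql-12-data2.service
systemctl edit postgresql-12-data2.service   # set Environment=PGDATA=/pg_data/12/data2
systemctl daemon-reload
systemctl start postgresql-12-data2.service
```

Each instance then gets its own drop-in directory, so minor updates via the package manager leave every instance's PGDATA setting intact.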

This method also works with SLES 12.


Cet article Handling PostgreSQL installations from packages est apparu en premier sur Blog dbi services.

Bridge network missing Gateway – Docker Issue

DBASolved - Sun, 2019-11-03 11:41

Here is a little something for you.  I’m working on building a demo of Oracle GoldenGate Microservices between three (3) containers. In order to do this, I wanted to setup a dedicated network between the containers. In order to setup a dedicated network, I needed to configure a network for the containers to use.  Docker […]

The post Bridge network missing Gateway – Docker Issue appeared first on DBASolved.

Categories: DBA Blogs

My Personal Thanks to the Chicago Police Department - the First Real Proving Ground for Oracle APEX

Joel Kallman - Sat, 2019-11-02 07:18

In 2001, the Chicago Police Department took a chance on APEX.  And with all thanks to them for the opportunity they provided us, Oracle APEX is what it is today.  We owe them a big debt of gratitude.  Let me explain.

As many people know, the genesis of Oracle APEX was an internal development project that began in 1999, to build a Web-based HTML calendar for use by Oracle employees.  My manager, Mike Hichwa, was the inventor of Oracle Web DB.  And when faced with the assignment of creating a new HTML calendar application for the company, the choices were a) WebDB, b) a lovingly hand-crafted PL/SQL application from scratch, or c) a yet-to-be-created application metadata framework (using Mike's lessons learned from WebDB).  We went with the latter, and Mike began the creation of the APEX framework while I developed a calendar application which was "programmed" to use this new framework.  Mocked and doubted by many at Oracle, we went live with the first production application in 3 months, rolled out to thousands of employees.  Having Tom Kyte to help us was instrumental in our success.

Over the next 18 months, we evolved this framework and created a number of other internal applications.  We thought we were ready to offer this framework for customers to use.  But one of the best things happened for APEX at that time.  When Larry Ellison was visiting New York City, Mike traveled to meet with him and brief him on the state of the framework, as well as Mike's aspirations to offer this framework as another tool from Oracle.  The advice offered by Larry to Mike - prove the framework with 30 real-world customers before you consider taking this live.  Invaluable guidance.

In 2001, Mike and I had an internal meeting in Chicago with Oracle Consulting.  The back-end information system for the Chicago Police Department (CPD), the Criminal History Record Information System (CHRIS), was written in Oracle Forms.  It had been developed over many years, and was a joint effort between Oracle Consulting and the Chicago Police Department.  The purpose of this meeting, at the time, was to discuss possible alternatives to the next state of CHRIS.  This meeting was ultimately precipitated by the estimated hardware requirements to run the next version of Oracle Forms.  They had estimated that the backend database server requirements alone would require 4 very large and very expensive new Sun Enterprise 10000 servers.  This was a lot of money to be spent on hardware with effectively no net gain in functionality for their end users.  We proposed APEX ("Flows", at the time), and they went with it.

Over a period of more than a year, a number of today's APEX product development team members worked directly, onsite, with Oracle Consulting and Chicago Police Department to move the functionality of their Oracle Forms applications to APEX.  It wasn't a 1-to-1 mapping, and it required a level of application and UI redesign.  But we were able to capitalize on the existing data structures and business logic, already present in the database.  The Oracle Forms applications and APEX apps were able to easily co-exist, because they were built on the same foundation.  There were also new systems developed as part of this effort, named CLEAR.  You can still read about CLEAR from this article from 2004.

This entire exercise was hugely beneficial to us.  We thought we were ready to go to market.  But once we dug into the requirements of a large-scale enterprise system like CHRIS, it uncovered many significant gaps in the functionality of the APEX framework.  Fortunately, we owned the framework and were able to simultaneously fill those functional gaps in APEX.  As a simple example, at the time there was no way to manage vectors of information in APEX session state.  This real-world requirement resulted in today's APEX collections.  When you own the framework and you are concurrently building the app, you can do anything!

Scalability was another concern.  While the original calendar application we wrote for Oracle had more than 25,000 users, let's face it - the use of a calendar is occasional throughout the day.  Contrast this with CHRIS, which had more than 10,000 total users, the vast majority who would interact with CHRIS frequently throughout the day.  The heavy concurrent usage of these applications provided us numerous opportunities to tune and optimize the APEX execution engine.  And talk about mission-critical applications - "business" slows to a crawl if you can't look up information about a person or log evidence.  And when business slows to a crawl, public safety is jeopardized.

Fast forward to 2019, and here we are with a large global community of hundreds of thousands of developers.  There are dedicated conferences, stickers, bloggers, videos, meetup groups, awards, books, podcasts, webinars, hosting providers, cloud services, partners & consulting companies, and thousands upon thousands of real-world successes from around the globe.  Much of our success can be traced to this proving ground, which was afforded us by the Chicago Police Department.

The purpose of this blog post is simple - I wish to offer my personal, sincere thanks to the Chicago Police Department for the gamble they took on us.  There was no true guarantee that APEX was going to exist beyond a "skunkworks" project, but they still forged ahead, given some assurances from Oracle and the alternatives.  They banked on us and they won.  Their real-world use cases stretched us and the technology in ways we had never imagined.  We learned so many valuable lessons during this project, and all of it resulted in a much more scalable, hardened, proven system by the time APEX was first offered as an Oracle Database feature in 2004.  We will forever be grateful to them.

For the record, these internal systems still run on Oracle APEX today, and are used by thousands of Chicago Police Department employees every day.  Now that is longevity, and a great investment.  Amidst today's rapid technology churn, this remains an extraordinary success story.

Patch by City of Chicago - http://www.publicsafetypatches.org/IL/Police/, Public Domain, Link

Global non partitioned index on table partitions

Tom Kyte - Fri, 2019-11-01 18:47
Hi, I have recently got some sql statements that is not performing well. <code>select * from v where a=? and b not in(,,,,....) and c =? and rownum<-100 </code> where v is a view. Original sql statement is similar to above statement. From explai...
Categories: DBA Blogs

Event Based Job is not working

Tom Kyte - Fri, 2019-11-01 18:47
Hi, I am struggling several days with following issue. I am trying to implement event based job. At start all was working fine. But after several payload type modifications + several times recreated queue + recreated scheduled job ... schedu...
Categories: DBA Blogs

Timestamp Functions and Presentation Variables in Oracle Cloud Analytics

Rittman Mead Consulting - Fri, 2019-11-01 04:12

One of the most popular Rittman Mead blog posts over the last 10 years is Timestamps and Presentation Variables. As we are seeing more and more migrations to OAC, we decided to review and revise this post for the latest version of Oracle Cloud Analytics (OAC), 105.4.0-140 as of October 2019. Read more about the latest updates here.


One could say that creating a chart is not the most complex task in the world of Business Intelligence, but we would argue that creating a meaningful report that perfectly illustrates the message hidden in the data, and therefore adds value for management, is nowhere close to easy! A good way to make a report as informative as possible is to use trends and comparisons, and a perfect tool for that is the set of time analysis functions: for example, comparing sales in a period of time this year to the same period the year before, or measuring the similarity or dissimilarity of sales in different months of the year.

Demo Platform

I have used a free trial instance of OAC for this demo. If you haven’t done so yet, sign up for a free 30-day trial Oracle Cloud account (different to an Oracle account). Use the account to access the Oracle Cloud Infrastructure (OCI) console, which is Oracle's latest move towards one integrated cloud platform to manage all your Oracle cloud applications, platforms, and infrastructure in one place.
From the OCI console it takes 5 to 10 minutes to get your free trial instance of OAC up and running. For a detailed step-by-step guide to creating a new instance, read here.

Demo Goals

In this blog post I intend to show you how to combine the power of timestamp functions and presentation variables to create robust, repeatable reports. We will create a report that displays a year over year analysis for any rolling number of periods, by week or month, from any date in time, all determined by the user. This entire demo will only use values from a date and a revenue field.


TIMESTAMPADD() manipulates data of the data types DATE and DATETIME based on a calendar year.

Syntax: TIMESTAMPADD(interval, expr, timestamp)
Example: TIMESTAMPADD(SQL_TSI_MONTH, 12,Time."Order Date")
Description: Adds a specified number of intervals to a timestamp, and returns a single timestamp.

Read more about other calendar functions.

Building Filters

Starting to build our demo, the filter below returns all dates greater than or equal to 7 days ago including the current date.
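The filter screenshot from the original post is not reproduced here; expressed as logical SQL it would look roughly like the following sketch (the "Periods"."Day Date" column name is illustrative):

```sql
"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_DAY, -7, CURRENT_DATE)
```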

In other words we have now a functional filter to select all the rows where Date >= a week ago.

As a good practice, always include a second filter giving an upper limit to the time filter. For example "Periods"."Day Date" < CURRENT_DATE would confirm that there won’t be any records that you don’t want in the mix and therefore no unnecessary strain on the system.

Let’s go one step further: instead of going 7 days back, we could try to include all the previous days in the current month, in other words dates >= the first day of the month. In this scenario, we can use the DAYOFMONTH() function to get the calendar day of any date. From here it will be easy to calculate the number of days in the month so far. Our new filter would look like this:
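The original screenshot is missing; a plausible reconstruction of the expression, using the same illustrative column name, is:

```sql
"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_DAY,
    (DAYOFMONTH(CURRENT_DATE) * -1) + 1, CURRENT_DATE)
```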

For example, if today is October 16th, DAYOFMONTH(CURRENT_DATE) would equal 16. Thus, we would subtract 16 days from CURRENT_DATE to go back to September 30th, and adding one will give us October 1st.

Presentation Variables

A presentation variable is a variable that can be created from the front end (Analytics) as part of one of the following types of dashboard prompts:

  • Column prompt: associated with a column, and the values it can take come from the column values. For information on working with column prompts, see Creating a Column Prompt.
  • Variable prompt: not associated with any column; you define the values it can take. For information on working with variable prompts, see Creating a Variable Prompt.

Each time a user selects a value in the column or variable prompt, the value of the presentation variable is set to the value that the user selects and will then be sent to any references of that filter throughout the dashboard page. This could be filters, formulas and even text boxes.

The first presentation variable we could introduce is to replace the CURRENT_DATE with a prompted value. Let’s call this presentation variable pv_Date:

  • Use the syntax @{pv_Date} to call this variable in the reports.
  • For variables of type string, surround the name in single quotes: ‘@{pv_String}’
  • It is good practice to assign a default value to the presentation variables so that you can work with your report before publishing it to a dashboard. For example the default value for the pv_Date is CURRENT_DATE so the new syntax would be @{pv_Date}{CURRENT_DATE}

Demo Time!

Our updated filter after replacing the CURRENT_DATE looks like below. We will refer to this filter later as Filter 1 (F1).

The filter is starting to take shape. Now let's say we are going to always be looking at a date range of six months before the selected date. All we would need to do is create a nested TIMESTAMP function. To do this, we will “wrap” our current TIMESTAMP with another that will subtract six months:

Now we have a filter to select dates that are greater than or equal to the first day of the month of any given date and all the six months prior to that.

To take this one step further, we can create another presentation variable called pv_n to allow users to determine the number of months to include in this analysis from a dashboard prompt.

Here is the updated version of our filter using the number of periods presentation variable and a default value of 6, @{pv_n}{6}. We will refer to the following filter as Filter 2 (F2).
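The screenshot of Filter 2 is not reproduced here; one plausible reconstruction of the expression, with both presentation variables and their defaults in place, is:

```sql
"Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -@{pv_n}{6},
    TIMESTAMPADD(SQL_TSI_DAY,
        (DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) * -1) + 1,
        @{pv_Date}{CURRENT_DATE}))
```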

Our TIMESTAMPADD function is now fairly robust and will give us any date greater than or equal to the first day of the month from n months ago from any given date. Now we will see what we just created in action by creating date ranges to allow for a Year over Year analysis for any number of months. Consider the following filter set:
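The filter set itself was shown as a screenshot in the original post; a sketch of what it would look like as logical SQL follows. This is a reconstruction: the upper bound uses `<` so that dates in the current month are excluded (as the explanation below notes), and the second branch simply wraps both bounds in a one-year TIMESTAMPADD.

```sql
(   "Periods"."Day Date" <  TIMESTAMPADD(SQL_TSI_DAY,
        (DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) * -1) + 1, @{pv_Date}{CURRENT_DATE})
AND "Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -@{pv_n}{6},
        TIMESTAMPADD(SQL_TSI_DAY,
            (DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) * -1) + 1, @{pv_Date}{CURRENT_DATE})))
OR
(   "Periods"."Day Date" <  TIMESTAMPADD(SQL_TSI_YEAR, -1,
        TIMESTAMPADD(SQL_TSI_DAY,
            (DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) * -1) + 1, @{pv_Date}{CURRENT_DATE}))
AND "Periods"."Day Date" >= TIMESTAMPADD(SQL_TSI_YEAR, -1,
        TIMESTAMPADD(SQL_TSI_MONTH, -@{pv_n}{6},
            TIMESTAMPADD(SQL_TSI_DAY,
                (DAYOFMONTH(@{pv_Date}{CURRENT_DATE}) * -1) + 1, @{pv_Date}{CURRENT_DATE}))))
```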

This may appear pretty intimidating at first, but if we break it into parts we can start to understand its purpose. Notice we are using the exact same filters from before, Filter 1 and Filter 2. What we have done here is filter on two time periods, separated by the OR statement.

  • The first date range defines the period as being the most recent completed n months from any given prompted date value, using a presentation variable with a default of today. Dates in the current month have been removed from the set by Filter 1.
  • The second time period, after the OR statement, is the exact same as the first only it has been wrapped in another TIMESTAMP function subtracting a year, giving you the exact same time frame for the year prior.

This allows us to create a report that can run a year over year analysis for a rolling n month time frame determined by the user.

A note on nested TIMESTAMPS: you will always want to create nested TIMESTAMPS with the smallest interval first. Then you will wrap intervals as necessary. In this case our smallest increment is day, wrapped by month, wrapped by year.

Let’s Go Crazy

A more advanced trick: if you use real-time or near real-time reporting, using CURRENT_DATE may be how you want to proceed. Otherwise, instead of using today as your default date value, use yesterday’s date, since most data is only as current as yesterday. Using yesterday is especially valuable when pulling reports on the first day of the month or year: you generally want the entire previous time period rather than the empty beginning of a new one. To implement this, wherever you have @{pv_Date}{CURRENT_DATE}, replace it with @{pv_Date}{TIMESTAMPADD(SQL_TSI_DAY,-1,CURRENT_DATE)}

One more change to make our filter extra-flexible is to use a new presentation variable to determine whether to display values year over year, by month, or by week. This can be done by inserting a variable into your SQL_TSI_MONTH and DAYOFMONTH statements: changing them to SQL_TSI_@{pv_INT}{MONTH} and DAYOF@{pv_INT}{MONTH}, where pv_INT is the name of our variable.

Start by creating a dummy variable in your prompt to allow users to select either MONTH or WEEK. You can try something like this: CASE MOD(DAY("Time"."Date"),2) WHEN 0 THEN 'WEEK' WHEN 1 THEN 'MONTH' END

The updated filter now looks like this:

In order for our interaction between Month and Week to run smoothly we have to factor in one last consideration: if we take the date December 1st, 2019 and subtract one year we get December 1st, 2018. However, if we take the first day of this week, Sunday December 15th, 2019, and subtract one year we get Saturday December 15th, 2018. In our analysis this will cause an extra partial week to show up for prior years. To get around this we will add a case statement: if '@{pv_INT}{MONTH}' = 'WEEK' THEN subtract 52 weeks from the first day of the week ELSE subtract 1 year from the first day of the month. With this, our final filter set will look like this:

With the use of these filters and some creative dashboarding, you can construct a report that easily allows you to view a year over year analysis from any date in time for any number of periods either by month or by week.

Filtered by Week intervals;

The formula below will give you the value of period rolling to use in the analysis;

In this post, we created a cloud version of the amazing demo previously described by Brian Hall. As demonstrated, timestamp functions and their power have been among the interesting topics in visualisation and reporting for as long as we at Rittman Mead can remember, and they can still be used in the realm of the Oracle Cloud services in much the same way as in the past.

Feel free to get in touch and let us know your thoughts and comments.

Categories: BI & Warehousing

pg_auto_failover: Setup and installation

Yann Neuhaus - Fri, 2019-11-01 02:25

When I attended PGIBZ 2019 earlier this year, I talked with Dimitri about pg_auto_failover and I promised to have a look at it. Well, almost half a year later, and after we’ve met again at pgconf.eu, it is time to actually do that. You probably already know that Citus Data was acquired by Microsoft earlier this year and that Microsoft has seemed committed to open source for a few years now. pg_auto_failover is one of the projects they contribute back to the PostgreSQL community. This will be a multi-post series, and in this very first post it is all about getting it up and running. In a following post we will then look at failover and switchover scenarios.

As usual, when you need auto failover you need at least three nodes and pg_auto_failover is no exception to that. The following graphic is stolen from the pg_auto_failover github page:

We have one PostgreSQL master, one PostgreSQL replica and in addition a monitoring host. In my case that maps to:

  • pg-af1.ti.dbi-services.com – master
  • pg-af2.ti.dbi-services.com – replica
  • pg-af3.ti.dbi-services.com – monitor/cluster management

All of these nodes run CentOS 8 and I will be building from source code as that gives the most flexibility. As pg_auto_failover depends on PostgreSQL (of course), the first step is to install PostgreSQL on all three nodes (PostgreSQL 12 in this setup). If you need further information on how to do that, you can e.g. check here. Basically these steps have been executed on all three nodes (given that the postgres user already exists and sudo is configured):

[postgres@pg-af1 ~]$ sudo dnf install -y gcc openldap-devel python36-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget unzip sysstat xorg-x11-xauth systemd-devel bash-completion python36 policycoreutils-python-utils make git
[postgres@pg-af1 ~]$ wget https://ftp.postgresql.org/pub/source/v12.0/postgresql-12.0.tar.bz2
[postgres@pg-af1 ~]$ tar -axf postgresql-12.0.tar.bz2
[postgres@pg-af1 ~]$ cd postgresql-12.0
[postgres@pg-af1 postgresql-12.0]$ sudo mkdir -p /u01 /u02
[postgres@pg-af1 postgresql-12.0]$ sudo chown postgres:postgres /u01 /u02
[postgres@pg-af1 postgresql-12.0]$ PGHOME=/u01/app/postgres/product/12/db_0/
[postgres@pg-af1 postgresql-12.0]$ SEGSIZE=2
[postgres@pg-af1 postgresql-12.0]$ BLOCKSIZE=8
[postgres@pg-af1 postgresql-12.0]$ WALSEGSIZE=64
[postgres@pg-af1 postgresql-12.0]$ ./configure --prefix=${PGHOME} \
> --exec-prefix=${PGHOME} \
> --bindir=${PGHOME}/bin \
> --libdir=${PGHOME}/lib \
> --sysconfdir=${PGHOME}/etc \
> --includedir=${PGHOME}/include \
> --datarootdir=${PGHOME}/share \
> --datadir=${PGHOME}/share \
> --with-pgport=5432 \
> --with-perl \
> --with-python \
> --with-openssl \
> --with-pam \
> --with-ldap \
> --with-libxml \
> --with-libxslt \
> --with-segsize=${SEGSIZE} \
> --with-blocksize=${BLOCKSIZE} \
> --with-systemd \
> --with-extra-version=" dbi services build"
[postgres@pg-af1 postgresql-12.0]$ make all
[postgres@pg-af1 postgresql-12.0]$ make install
[postgres@pg-af1 postgresql-12.0]$ cd contrib
[postgres@pg-af1 contrib]$ make install
[postgres@pg-af1 contrib]$ cd ../..
[postgres@pg-af1 ~]$ rm -rf postgresql*

We will go for an installation from source code of pg_auto_failover as well (again, on all three nodes):

postgres@pg-af1:/home/postgres/ [pg120] git clone https://github.com/citusdata/pg_auto_failover.git
postgres@pg-af1:/home/postgres/ [pg120] cd pg_auto_failover/
postgres@pg-af1:/home/postgres/pg_auto_failover/ [pg120] make
postgres@pg-af1:/home/postgres/pg_auto_failover/ [pg120] make install
postgres@pg-af1:/home/postgres/pg_auto_failover/ [pg120] cd ..
postgres@pg-af1:/home/postgres/ [pg120] rm -rf pg_auto_failover/

That’s it, quite easy. What I especially like is that there are no dependencies on Python or any other libraries except for PostgreSQL itself. What the installation gives us is basically pg_autoctl:

postgres@pg-af1:/home/postgres/ [pg120] pg_autoctl --help
pg_autoctl: pg_auto_failover control tools and service
usage: pg_autoctl [ --verbose --quiet ]

Available commands:
+ create   Create a pg_auto_failover node, or formation
+ drop     Drop a pg_auto_failover node, or formation
+ config   Manages the pg_autoctl configuration
+ show     Show pg_auto_failover information
+ enable   Enable a feature on a formation
+ disable  Disable a feature on a formation
run      Run the pg_autoctl service (monitor or keeper)
stop     signal the pg_autoctl service for it to stop
reload   signal the pg_autoctl for it to reload its configuration
help     print help message
version  print pg_autoctl version

The first step in setting up the cluster is to initialize the monitoring node:

postgres@pg-af3:/home/postgres/ [pg120] pg_autoctl create --help
pg_autoctl create: Create a pg_auto_failover node, or formation

Available commands:
pg_autoctl create
monitor    Initialize a pg_auto_failover monitor node
postgres   Initialize a pg_auto_failover standalone postgres node
formation  Create a new formation on the pg_auto_failover monitor

postgres@pg-af3:/home/postgres/ [pg120] sudo mkdir -p /u02/pgdata
postgres@pg-af3:/home/postgres/ [pg120] sudo chown postgres:postgres /u02/pgdata
postgres@pg-af3:/home/postgres/ [pg120] unset PGDATABASE
postgres@pg-af3:/home/postgres/ [] pg_autoctl create monitor --pgdata /u02/pgdata/PG12/af
INFO  Initialising a PostgreSQL cluster at "/u02/pgdata/PG12/af"
INFO   /u01/app/postgres/product/12/db_0/bin/pg_ctl --pgdata /u02/pgdata/PG12/af --options "-p 5432" --options "-h *" --wait start
INFO  Granting connection privileges on
INFO  Your pg_auto_failover monitor instance is now ready on port 5432.
INFO  pg_auto_failover monitor is ready at postgres://autoctl_node@pg-af3:5432/pg_auto_failover
INFO  Monitor has been succesfully initialized.

Once that succeeds you’ll see a new PostgreSQL instance running, along with the pg_auto_failover background worker processes:

postgres@pg-af3:/home/postgres/ [af] ps -ef | grep "postgres:"
postgres  5958  5955  0 14:15 ?        00:00:00 postgres: checkpointer
postgres  5959  5955  0 14:15 ?        00:00:00 postgres: background writer
postgres  5960  5955  0 14:15 ?        00:00:00 postgres: walwriter
postgres  5961  5955  0 14:15 ?        00:00:00 postgres: autovacuum launcher
postgres  5962  5955  0 14:15 ?        00:00:00 postgres: stats collector
postgres  5963  5955  0 14:15 ?        00:00:00 postgres: pg_auto_failover monitor
postgres  5964  5955  0 14:15 ?        00:00:00 postgres: logical replication launcher
postgres  5965  5955  0 14:15 ?        00:00:00 postgres: pg_auto_failover monitor worker
postgres  5966  5955  0 14:15 ?        00:00:00 postgres: pg_auto_failover monitor worker

The initialization of the monitor node also created a new database and two roles:

postgres@pg-af3:/home/postgres/ [af] psql postgres
psql (12.0 dbi services build)
Type "help" for help.

postgres=# \l
                                  List of databases
       Name       |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------------+----------+----------+-------------+-------------+-----------------------
 pg_auto_failover | autoctl  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres         | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                  |          |          |             |             | postgres=CTc/postgres
 template1        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                  |          |          |             |             | postgres=CTc/postgres
(4 rows)

postgres=# \du
                                       List of roles
  Role name   |                         Attributes                         | Member of
--------------+------------------------------------------------------------+-----------
 autoctl      |                                                            | {}
 autoctl_node |                                                            | {}
 postgres     | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

What we got in the new database is the pgautofailover extension:

pg_auto_failover=# \dx
                       List of installed extensions
      Name      | Version |   Schema   |         Description
----------------+---------+------------+------------------------------
 pgautofailover | 1.0     | public     | pg_auto_failover
 plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

For our management kit to work properly, a few PostgreSQL parameters need to be set:

pg_auto_failover=# alter system set log_truncate_on_rotation = 'on';
pg_auto_failover=# alter system set log_filename = 'postgresql-%a.log';
pg_auto_failover=# alter system set log_rotation_age = '1440';
pg_auto_failover=# alter system set log_line_prefix = '%m - %l - %p - %h - %u@%d - %x';
pg_auto_failover=# alter system set log_directory = 'pg_log';
pg_auto_failover=# alter system set log_min_messages = 'WARNING';
pg_auto_failover=# alter system set log_autovacuum_min_duration = '60s';
pg_auto_failover=# alter system set log_min_error_statement = 'NOTICE';
pg_auto_failover=# alter system set log_min_duration_statement = '30s';
pg_auto_failover=# alter system set log_checkpoints = 'on';
pg_auto_failover=# alter system set log_statement = 'ddl';
pg_auto_failover=# alter system set log_lock_waits = 'on';
pg_auto_failover=# alter system set log_temp_files = '0';
pg_auto_failover=# alter system set log_timezone = 'Europe/Zurich';
pg_auto_failover=# alter system set log_connections=on;
pg_auto_failover=# alter system set log_disconnections=on;
pg_auto_failover=# alter system set log_duration=on;
pg_auto_failover=# select pg_reload_conf();
(1 row)
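Rather than typing each ALTER SYSTEM statement interactively, the same settings can be collected into a script and fed to psql in one go. A minimal sketch with a subset of the parameters above (the file path and the non-interactive invocation are my additions, not part of the original setup):

```shell
# Hypothetical: collect a few of the settings above into a script,
# then apply it against the monitor database with:
#   psql -d pg_auto_failover -f /tmp/logging.sql
cat > /tmp/logging.sql <<'SQL'
alter system set log_checkpoints = 'on';
alter system set log_lock_waits = 'on';
alter system set log_temp_files = '0';
select pg_reload_conf();
SQL
wc -l < /tmp/logging.sql
```

Keeping the settings in a script also makes it trivial to apply the same logging configuration consistently on all three nodes.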

What we need for the other nodes is the connection string to the monitoring node:

postgres@pg-af3:/home/postgres/ [af] pg_autoctl show uri
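The command prints the same postgres:// URI the monitor reported during initialization. As a sketch, the string has this shape (user autoctl_node, the monitor host and port, database pg_auto_failover — host and port values assumed from this setup; in practice simply copy the output of pg_autoctl show uri):

```shell
# Shape of the monitor connection string used on the data nodes below
# (values assumed from this setup: monitor runs on pg-af3, port 5432)
MONITOR_HOST=pg-af3
MONITOR_PORT=5432
MONITOR_URI="postgres://autoctl_node@${MONITOR_HOST}:${MONITOR_PORT}/pg_auto_failover"
echo "${MONITOR_URI}"
```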

Once we have that we can proceed with creating the master instance on the first host:

postgres@pg-af1:/home/postgres/ [pg120] unset PGDATABASE
postgres@pg-af1:/home/postgres/ [] sudo mkdir /u02/pgdata
postgres@pg-af1:/home/postgres/ [] sudo chown postgres:postgres /u02/pgdata
postgres@pg-af1:/home/postgres/ [] pg_autoctl create postgres --pgdata /u02/pgdata/12/PG1 --nodename pg-af1.it.dbi-services.com --monitor postgres://autoctl_node@pg-af3:5432/pg_auto_failover
INFO  Found pg_ctl for PostgreSQL 12.0 at /u01/app/postgres/product/12/db_0/bin/pg_ctl
INFO  Registered node pg-af1.it.dbi-services.com:5432 with id 1 in formation "default", group 0.
INFO  Writing keeper init state file at "/home/postgres/.local/share/pg_autoctl/u02/pgdata/12/PG1/pg_autoctl.init"
INFO  Successfully registered as "single" to the monitor.
INFO  Initialising a PostgreSQL cluster at "/u02/pgdata/12/PG1"
INFO  Postgres is not running, starting postgres
INFO   /u01/app/postgres/product/12/db_0/bin/pg_ctl --pgdata /u02/pgdata/12/PG1 --options "-p 5432" --options "-h *" --wait start
INFO  The user "postgres" already exists, skipping.
INFO  The database "postgres" already exists, skipping.
INFO  FSM transition from "init" to "single": Start as a single node
INFO  Initialising postgres as a primary
INFO  Transition complete: current state is now "single"
INFO  Keeper has been succesfully initialized.

Once the master is up, bring up the replica on the second node:

postgres@pg-af2:/home/postgres/ [pg120] pg_autoctl create postgres --pgdata /u02/pgdata/12/PG1 --nodename pg-af2.it.dbi-services.com --monitor postgres://autoctl_node@pg-af3:5432/pg_auto_failover
17:11:42 INFO  Registered node pg-af2.it.dbi-services.com:5432 with id 2 in formation "default", group 0.
17:11:42 INFO  Writing keeper init state file at "/home/postgres/.local/share/pg_autoctl/u02/pgdata/12/PG1/pg_autoctl.init"
17:11:42 INFO  Successfully registered as "wait_standby" to the monitor.
17:11:42 INFO  FSM transition from "init" to "wait_standby": Start following a primary
17:11:42 INFO  Transition complete: current state is now "wait_standby"
17:11:47 INFO  FSM transition from "wait_standby" to "catchingup": The primary is now ready to accept a standby
17:11:47 INFO  The primary node returned by the monitor is pg-af1.it.dbi-services.com:5432
17:11:47 INFO  Initialising PostgreSQL as a hot standby
17:11:47 INFO  Running /u01/app/postgres/product/12/db_0/bin/pg_basebackup -w -h pg-af1.it.dbi-services.com -p 5432 --pgdata /u02/pgdata/12/backup -U pgautofailover_replicator --write-recovery-conf --max-rate 100M --wal-method=stream --slot pgautofailover_standby ...
17:11:49 INFO  pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/2000028 on timeline 1
pg_basebackup: starting background WAL receiver
0/23699 kB (0%), 0/1 tablespace (/u02/pgdata/12/backup/backup_label )
136/23699 kB (0%), 0/1 tablespace (/u02/pgdata/12/backup/global/4184  )
23708/23708 kB (100%), 0/1 tablespace (...data/12/backup/global/pg_control)
23708/23708 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/2000100
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: base backup completed

17:11:49 INFO  Postgres is not running, starting postgres
17:11:49 INFO   /u01/app/postgres/product/12/db_0/bin/pg_ctl --pgdata /u02/pgdata/12/PG1 --options "-p 5432" --options "-h *" --wait start
17:11:50 INFO  PostgreSQL started on port 5432
17:11:50 INFO  Transition complete: current state is now "catchingup"
17:11:50 INFO  Keeper has been succesfully initialized.

The next step is to start the so-called keeper process (the process which communicates with the monitoring node about state changes):

postgres@pg-af1:/home/postgres/ [] pg_autoctl run --pgdata /u02/pgdata/12/PG1
INFO  Managing PostgreSQL installation at "/u02/pgdata/12/PG1"
INFO  The version of extenstion "pgautofailover" is "1.0" on the monitor
INFO  pg_autoctl service is starting
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".
INFO  Calling node_active for node default/1/0 with current state: single, PostgreSQL is running, sync_state is "", current lsn is "0/0".

To integrate that into systemd:

postgres@pg-af2:/home/postgres/ [PG1] pg_autoctl show systemd
20:28:43 INFO  HINT: to complete a systemd integration, run the following commands:
20:28:43 INFO  pg_autoctl -q show systemd --pgdata "/u02/pgdata/12/PG1" | sudo tee /etc/systemd/system/pgautofailover.service
20:28:43 INFO  sudo systemctl daemon-reload
20:28:43 INFO  sudo systemctl start pgautofailover
[Unit]
Description = pg_auto_failover

[Service]
WorkingDirectory = /u02/pgdata/12/PG1
Environment = 'PGDATA=/u02/pgdata/12/PG1'
User = postgres
ExecStart = /u01/app/postgres/product/12/db_0/bin/pg_autoctl run
Restart = always
StartLimitBurst = 0

[Install]
WantedBy = multi-user.target

postgres@pg-af2:/home/postgres/ [PG1] pg_autoctl -q show systemd --pgdata "/u02/pgdata/12/PG1" | sudo tee /etc/systemd/system/pgautofailover.service
[Unit]
Description = pg_auto_failover

[Service]
WorkingDirectory = /u02/pgdata/12/PG1
Environment = 'PGDATA=/u02/pgdata/12/PG1'
User = postgres
ExecStart = /u01/app/postgres/product/12/db_0/bin/pg_autoctl run
Restart = always
StartLimitBurst = 0

[Install]
WantedBy = multi-user.target

postgres@pg-af2:/home/postgres/ [PG1] systemctl list-unit-files | grep pgauto
pgautofailover.service                      disabled
20:30:57 postgres@pg-af2:/home/postgres/ [PG1] sudo systemctl enable pgautofailover.service
Created symlink /etc/systemd/system/multi-user.target.wants/pgautofailover.service → /etc/systemd/system/pgautofailover.service.

If you are on CentOS/Red Hat 8 you will also need the following, otherwise the service will not start:

postgres@pg-af1:/u01/app/postgres/local/dmk/ [PG1] sudo semanage fcontext -a -t bin_t /u01/app/postgres/product/12/db_0/bin/pg_autoctl
postgres@pg-af1:/u01/app/postgres/local/dmk/ [PG1] restorecon -v /u01/app/postgres/product/12/db_0/bin/pg_autoctl

After rebooting all the nodes (to confirm that the systemd service is working as expected) the state of the cluster reports one primary and a secondary/replica as expected:

postgres@pg-af3:/home/postgres/ [af] pg_autoctl show state
                      Name |   Port | Group |  Node |     Current State |    Assigned State
---------------------------+--------+-------+-------+-------------------+------------------
pg-af1.it.dbi-services.com |   5432 |     0 |     1 |           primary |           primary
pg-af2.it.dbi-services.com |   5432 |     0 |     2 |         secondary |         secondary

The various states are documented here.

Remember: As this is based on PostgreSQL 12 there will be no recovery.conf on the replica. The replication parameters have been added to postgresql.auto.conf automatically:

postgres@pg-af2:/u02/pgdata/12/PG1/ [PG1] cat postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
primary_conninfo = 'user=pgautofailover_replicator passfile=''/home/postgres/.pgpass'' connect_timeout=5 host=''pg-af1.it.dbi-services.com'' port=5432 sslmode=prefer sslcompression=0 gssencmode=disable target_session_attrs=any'
primary_slot_name = 'pgautofailover_standby'
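If you need the slot name in a script (for example, to cross-check it against pg_replication_slots on the primary), it can be pulled out of postgresql.auto.conf with sed. A sketch using a stand-in copy of the file (the /tmp path is hypothetical; on the replica the real file lives in the data directory shown above):

```shell
# Stand-in copy of the file shown above (path is hypothetical)
cat > /tmp/postgresql.auto.conf <<'EOF'
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
primary_slot_name = 'pgautofailover_standby'
EOF
# Extract the value between the single quotes
sed -n "s/^primary_slot_name = '\(.*\)'$/\1/p" /tmp/postgresql.auto.conf
```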

That’s it for the setup. Really easy and simple, I like it. In the next post we’ll have a look at controlled switch-overs and fail-over scenarios.

The article pg_auto_failover: Setup and installation appeared first on Blog dbi services.

Getting Started with APEX Plugins

Jeff Kemp - Thu, 2019-10-31 22:21

There is a great deal you can build in APEX without even thinking about using a plugin. The development platform has an excellent range of built-in functionality and components so that almost all user requirements can be covered using standard APEX features. Restricting yourself to only the built-in, declarative features (i.e. “low code”) of the tool will result in applications that are easy to maintain and resistant to regression issues when upgrading. However, there will come a time when some custom code is required, whether some special data processing (using PL/SQL), some special user interface styling (using CSS), or some special user interface behaviour (using JavaScript or jQuery).

After you have built an application with (a minimum of) custom code like this, it sometimes happens that the same user requirement crops up again elsewhere; whether on another page, another application, or another workspace. The first time you encounter this you will probably just do a “copy and paste” to replicate the custom code. When you encounter the requirement a third time, you should be asking yourself, “how can I make this code more re-usable” – because chances are one day you’ll need to fix a bug or make an enhancement to that code, and you’ll want it to be easy to deploy the change wherever it has been used. This is where an APEX plugin could be a suitable solution.

This post is mostly based on a talk given in October 2019 at the Australian Oracle User Group conference in Perth, Western Australia.

What is an APEX Plugin?

An APEX Plugin is a Shared Component that can be used in any number of pages in the application. When used, it generates a Region, an Item, a Process, or a Dynamic Action; or, it may implement an Authentication or Authorization Scheme. It is a self-contained set of attributes, code, and (optionally) file attachments. It may be copied to another application, or exported as a SQL script for importing into an application in another workspace or another APEX instance.

Where can I get started?

In my opinion, a great way to get started learning about plugins is to examine a lot of plugins that others have created. The two sources I would recommend you look at are:

  1. Install the Sample Database Application – go into its Shared Components -> Plugins, and look at each of the plugins therein. Look through each attribute and examine the code; read the help for each attribute as well. Look at how the plugins are used in the application.
  2. Browse the plugins at apex.world, and install a few that seem interesting to you.

When looking at these plugins, be aware that many of them have been created by people just like you and me – at differing levels of experience and skill; so they may or may not perfectly reflect “best practice”. This especially goes for the plugins published by yours truly :). In addition, depending on how long ago a plugin was written, it might be more complex than it needs to be, as it might not take advantage of recent enhancements to the APEX plugin API.

Plugin Concepts
Plugin Types

Plugin Type – a plugin may be for a page Region, a page Item, a page Process or a Dynamic Action; or it may implement an Authentication Scheme or an Authorization Scheme. For example, if you create or install an Item Plugin, it will appear in the list of item types available for adding to a page.

PL/SQL and Callbacks – if needed, you can embed PL/SQL code that APEX will call when rendering the plugin component or for other events such as handling an AJAX callback or validating an item.

The Callbacks attributes allow you to specify the name of the function for APEX to call; these might be functions that you define within the PL/SQL Code section, or they can call functions in a database package you have defined.

The functions you define must have the exact parameters and return types as documented in the help for each Callback attribute.

Standard & Custom Attributes – some plugin types have Standard Attributes (defined by APEX), and can have additional Custom Attributes (defined by you). For example, a Region plugin has standard attributes that define whether it supports a Data Source or supports the CSS Classes attribute.

If needed you can define up to 15 custom Application attributes. These would be set by the developer when they first install the plugin into their application, and their values apply to all usages of the plugin across their application.

You can also define up to 25 custom Component attributes, which are set on each usage of the plugin separately. The developer will find these in the Attributes node.

Each Custom attribute can be any of a long list of different types, including Text, TextArea, Yes/No, Number, Checkboxes, Select list, Item name, Icon, PL/SQL Code, JavaScript Code, and many others. You can provide a default value for each attribute if needed.

Files, JS and CSS – you can attach any number of resources, including JavaScript, CSS scripts, images, or anything else required by your plugin. To load JavaScript or CSS files on pages where your plugin is used, you can simply enter them in the File URLs to Load, or call the APEX API routines in your render PL/SQL code (e.g. APEX_JAVASCRIPT.add_library or APEX_CSS.add_file).

When you export the plugin, all the attached resources will be included in the SQL script.

Plugin Events – if your plugin needs to allow your developers to add Dynamic Actions that respond to events raised by your plugin (e.g. to respond to user actions) you can define any number of custom Events here.


I’ve written these instructions to get you started creating plugins. They assume you are already reasonably familiar with building simple applications in APEX, and are comfortable editing PL/SQL.


Getting Your Hands Dirty with TensorFlow 2.0 and Keras API

Andrejus Baranovski - Thu, 2019-10-31 15:00
Diving into the technical details of regression model creation with TensorFlow 2.0 and the Keras API. In TensorFlow 2.0, Keras comes out of the box with the TensorFlow library. The API is simplified and more convenient to use.

Read the complete article here.

