Feed aggregator

2019 Public Appearances (What In The World)

Richard Foote - Wed, 2019-08-14 21:29
I’ll be presenting at a number of Oracle events over the remainder of the year. Details as follows:

Oracle Open World – San Francisco (16-19 September 2019)
Categories: DBA Blogs

Find Docker Container IP Address?

DBASolved - Wed, 2019-08-14 11:38

This is just a simple post for later reference, if I need it …

In setting up some docker containers for testing Oracle GoldenGate, I needed to find the IP address of the container where my database was running (I keep my database in a separate container in order not to rebuild it every time).

To find the address of my database container, I had to use the docker “inspect” command. This command returns low-level information on Docker objects.

The syntax is as follows:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name_or_id>
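
For example, a quick way to capture the address in a shell variable – the container name “oracle-db” here is just a placeholder, not from my setup:

# "oracle-db" is a placeholder; substitute your own container name or id
DB_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' oracle-db)
echo "Database container IP: ${DB_IP}"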

 

 

Enjoy!!!!

Categories: DBA Blogs

Identifying the TNS_ADMIN for a deployment in GoldenGate Microservices

DBASolved - Wed, 2019-08-14 10:30

Setting up network routing/access with Oracle GoldenGate Microservices (12.3 and later) can be an interesting experience. As many in the Oracle space know, you simply have to set TNS_ADMIN to point to the location where your sqlnet.ora and tnsnames.ora files are located. This would normally look like this:


export TNS_ADMIN=${ORACLE_HOME}/network/admin


or


export TNS_ADMIN=${ORA_CLIENT_HOME}/network/admin

 

These examples still work for Oracle GoldenGate Classic; however, when we start looking at this through the lens of Microservices, it changes a bit. Within the Oracle GoldenGate Microservices architecture the TNS_ADMIN environment variable has to be set “per deployment”. Depending on the number of deployments within the architecture, it is possible to have 1+N TNS_ADMIN variables.

As an illustration, it would look something like this:

As you can see this is specific to the Microservices architecture and how to setup network routing for individual deployments.

Setting TNS_ADMIN

How do you set the TNS_ADMIN environment variable for each deployment? It is quite simple when you are building a deployment using the Oracle GoldenGate Configuration Assistant (OGGCA). Prior to running OGGCA, you can set the TNS_ADMIN variable at the OS level and OGGCA will pick it up for that run and the configuration of that specific deployment.
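
As a rough sketch (the deployment path and OGG_HOME variable are illustrative only):

# set TNS_ADMIN for this deployment before launching OGGCA
export TNS_ADMIN=/opt/app/oracle/gg_deployments/Atlanta/etc/network
$OGG_HOME/bin/oggca.sh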

Alternatively, you don’t have to set it at the OS level: during the OGGCA walkthrough, you can set the variable manually. OGGCA will not move past the environment variables step until it is provided.

Changing TNS_ADMIN

After building a deployment, you may want to change the location of your network-related files. This can be done either from the HTML5 web page for the deployment or from the REST API.

To change TNS_ADMIN from the HTML5 pages within Oracle GoldenGate Microservices, you need to start at the ServiceManager Overview page. At the bottom of this page, there is a section called “Deployments”.

Then select the deployment you want to work with. After clicking on the deployment name, you should be on the “Deployment Information” page. This page has two tabs at the top: the first tab is related to details of the deployment, and the second tab is related to configuration for the deployment.

The second tab – Configuration – is where you can set/change the environment variables for the deployment. In this case, we want to modify the TNS_ADMIN environment variable.

 

To the right of the variable in the “Actions” column, click on the pencil icon. This will allow you to edit the environment variable. Change it to the new location and save it. You may need to restart the deployment (hint: that step is on the ServiceManager Overview page).

At this point, you should be able to change the location of your TNS_ADMIN variable. This is also handy for Oracle GoldenGate Microservices on Marketplace … just saying.

Using REST API

This same process can be done quickly using the REST API. The sample code below is only a sample and has not been tested. Use at your own risk!

curl -X PATCH \
  https://<ip_address>/services/v2/deployments/alpha \
  -H 'cache-control: no-cache' \
  -d '{
    "oggHome":"/opt/app/oracle/product/19.1.0/oggcore_1",
    "oggEtcHome":"/opt/app/oracle/gg_deployments/Atlanta/etc",
    "oggVarHome":"/opt/app/oracle/gg_deployments/Atlanta/var",
    "environment":{
    	"tns_admin":"/opt/app/oracle/product/18.1.0/network/admin"
    },
    "status":"restart"
}'

Enjoy!!!

Categories: DBA Blogs

gather_system_stats

Jonathan Lewis - Wed, 2019-08-14 08:20

What happens when you execute dbms_stats.gather_system_stats() with the ‘Exadata’ option ?

Here’s what my system stats look like (12.2.0.1 test results) after doing so. (The code to generate the two different versions is at the end of the note).
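
For reference, the gathering call itself is just the following one-liner (assuming you have the privilege to gather system statistics):

execute dbms_stats.gather_system_stats('EXADATA')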


System Stats
============
Status: COMPLETED
Timed: 13-Aug-2019 15:00:00 - 13-Aug-2019 15:00:00
--------------------------------------------------
CPUSPEED        :
CPUSPEEDNW      :          918
IOSEEKTIM       :           10
IOTFRSPEED      :      204,800
MAXTHR          :
MBRC            :          128
MREADTIM        :
SLAVETHR        :
SREADTIM        :

PL/SQL procedure successfully completed.

MBRC       :          128
MREADTIM   :
SREADTIM   :
CPUSPEED   :
CPUSPEEDNW :          918
IOSEEKTIM  :           10
IOTFRSPEED :      204,800
MAXTHR     :
SLAVETHR   :

PL/SQL procedure successfully completed.

All the code does is set the MBRC, IOSEEKTIM, and IOTFRSPEED to fixed values and the only real gather is the CPUSPEEDNW. The parameters showing blanks are deliberately set null by the procedure – before I called the gather_system_stats() every parameter had a value. You could also check the SQL trace file (with bind capture enabled) to see the statements that deliberately set those parameters to null if you want more proof.

What are the consequences of this call (assuming you haven’t also done something with the calibrate_io() procedure)? Essentially Oracle now has information that says a single (8KB) block read request will take marginally over 10 milliseconds, and a multiblock read request of 1MB will take just over 15 milliseconds: in other words “tablescans are great, don’t use indexes unless they’re really precisely targeted”. To give you a quantitative feel for the numbers: given the choice between doing a tablescan of 1GB to pick 1,500 randomly scattered rows and using a perfect index the optimizer would choose the index.

To explain the time calculations: Oracle has set an I/O seek time of 10 ms, and a transfer rate of 204,800 bytes per ms (200 MB/s), with the guideline that a “typical” multiblock read is going to achieve 128 blocks. So the optimizer believes a single block read will take 10 + 8192/204800 ms = 10.04ms, while a multiblock read request for 1MB will take 10 + 1048576/204800 ms = 15.12 ms.
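
Restating that arithmetic as a quick query (purely illustrative – the figures are just the ones reported above):

select
        10 + 8192/204800                sreadtim_ms,    -- single (8KB) block read
        10 + (128 * 8192)/204800        mreadtim_ms     -- 128 block (1MB) multiblock read
from
        dual
;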

It’s also important to note that Oracle will use the 128 MBRC value in its calculation of the cost of the tablescan – even if you’ve set the db_file_multiblock_read_count parameter for the session or system to something smaller; and if you have set the db_file_multiblock_read_count that’s the maximum size of multiblock read that the run-time engine will use.

Code

Here are the two procedures I used to report the values above. You will only need the privilege to execute the dbms_stats package for the second one, but you’ll need the privilege to access the SYS table aux_stats$ to use the first. The benefit of the first one is that it can’t go out of date as versions change.


rem
rem     Script:         get_system_stats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2002
rem
rem     Last tested
rem             18.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

set linesize 180
set trimspool on
set pagesize 60
set serveroutput on

spool get_system_stats

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        for r1 in (
                select  rownum rn, pname
                from    sys.aux_stats$
                where   sname = 'SYSSTATS_MAIN'
        ) loop
                dbms_stats.get_system_stats(m_status, m_start, m_stop, r1.pname, m_value);
                if r1.rn = 1 then
                        dbms_output.put_line('System Stats');
                        dbms_output.put_line('============');
                        dbms_output.put_line('Status: ' || m_status);
                        dbms_output.put_line(
                                'Timed: ' ||
                                to_char(m_start,'dd-Mon-yyyy hh24:mi:ss') ||
                                ' - ' ||
                                to_char(m_stop ,'dd-Mon-yyyy hh24:mi:ss')
                        );
                        dbms_output.put_line('--------------------------------------------------');
                end if;
                dbms_output.put_line(rpad(r1.pname,15) ||  ' : ' || to_char(m_value,'999,999,999'));
        end loop;
end;
/

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MBRC', m_value);
        dbms_output.put_line('MBRC       : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MREADTIM', m_value);
        dbms_output.put_line('MREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SREADTIM', m_value);
        dbms_output.put_line('SREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEED', m_value);
        dbms_output.put_line('CPUSPEED   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEEDNW', m_value);
        dbms_output.put_line('CPUSPEEDNW : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOSEEKTIM', m_value);
        dbms_output.put_line('IOSEEKTIM  : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOTFRSPEED', m_value);
        dbms_output.put_line('IOTFRSPEED : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MAXTHR', m_value);
        dbms_output.put_line('MAXTHR     : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SLAVETHR', m_value);
        dbms_output.put_line('SLAVETHR   : ' || to_char(m_value,'999,999,999'));
end;
/

spool off

Effects Of Technology on Society; A Boon or Bane for Us?

VitalSoftTech - Wed, 2019-08-14 00:11

We live in a day and age that it is virtually impossible to separate the effects of technology from human life. With every passing day, our dependency on technology is increasingly on the rise. Humankind has depended on technology for business, travel, education, healthcare, agriculture, communication but most of all, comfort and recreation. Technology has […]

The post Effects Of Technology on Society; A Boon or Bane for Us? appeared first on VitalSoftTech.

Categories: DBA Blogs

Menu Popup with Declarative List

Jeff Kemp - Tue, 2019-08-13 23:04

In the past when I’ve needed to add half a dozen or more buttons to a page, I’ve sometimes encased them in a Collapsible region so that the user can slide them out of the way instead of clogging up the screen. Recently however I’ve started (sparingly) using a Menu Popup, as per this tutorial. The issue I have with this method, however, is that the menu items are defined in a shared component (a List) which means it’s not defined on a per-page basis.

Some of the actions simply need to do a Submit on the page, which is simple enough: set the URL Target to something like:
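
For example (the request value here is just a placeholder):

javascript:apex.page.submit({request:'SAVE',showWait:true});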

In other cases, the action needs to do something more specific to the page, e.g. show a region:

apex.theme.openRegion("popupQuestion")

Or the action might need to navigate to another page, passing parameters based on specific items on the page. This means the list, defined in Shared Components, now has hardcoded elements that are only useful for that one page; more to the point, they are defined outside of the page – I’d rather that everything specific to a page is defined within that page’s definition.

The approach I’m using now is to use a custom trigger. Each list item has its URL Target set to something like:
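
For example (the event name matches the Dynamic Action described below; the third-parameter value is a placeholder):

javascript:apex.event.trigger(document,'menuAction','OPEN_POPUP');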

The third parameter is set to a unique code that the page can use to identify which menu option was chosen. This parameter will be passed to this.data in the custom trigger’s event handler.

On the page, I have a Dynamic Action with the following attributes:

  • Event: Custom
  • Custom Event: menuAction
  • Selection Type: JavaScript Expression
  • JavaScript Expression: document
  • True Action: Execute JavaScript Code, e.g.:
switch(this.data) {
  case 'OPEN_POPUP':
    apex.theme.openRegion("popupQuestion");
    break;

  default:
    apex.page.submit({request:this.data,showWait:true});
}

Note that to do a simple Submit on the page, all I need to do is set the request on the third parameter of the menu item’s URL. If I want to do something slightly different for a particular request, I can put an extra “case” in the JavaScript code to handle it.

The benefit of this approach is that this trigger becomes the jumping-off point for all such menu actions for this page. In theory I could re-use the same List on multiple pages (if the items in the list are generic enough) but have different behaviour occur for each menu item specific to each page. The only challenge with this approach might be if you needed some conditions on each menu item, e.g. so they are shown or hidden in specific circumstances. If the condition for a menu item references a particular page item the List will no longer be generic and re-usable. For this reason, I usually still use a separate List for each menu for each page.

Perhaps in a future release of APEX we will gain the ability to define a List on a Page instead of in Shared Components. In the meantime, if you are interested in all the details on this method (including a solution for implementing a redirect to another page, or to open a modal page), refer to this tip.

Multi-table

Jonathan Lewis - Tue, 2019-08-13 07:34

Here’s a problem (and I think it should be called a bug) that I first came across about 6 years ago, then forgot for a few years until it reappeared some time last year and then again a few days ago. The problem has been around for years (getting on for decades), and the first mention of it that I’ve found is MoS Bug 2891576, created in 2003, referring back to Oracle 9.2.0.1. The problem still exists in Oracle 19.2 (tested on LiveSQL).

Here’s the problem: assume you have a pair of tables (call them parent and child) with a referential integrity constraint connecting them. If the constraint is enabled and not deferred then the following code may fail, and if you’re really unlucky it may only fail on rare random occasions:


insert all
        into parent({list of parent columns}) values({list of source columns})
        into child ({list of child columns})  values({list of source columns})
select
        {list of columns}
from    {source}
;

The surprising Oracle error is “ORA-02291: integrity constraint ({owner.constraint_name}) violated – parent key not found”, and the reason is simple (and documented in MoS note 265826.1 Multi-table Insert Can Cause ORA-02291: Integrity Constraint Violated for Master-Detail tables): the order in which the insert operations take place is “indeterminate” so that child rows may be inserted before their parent rows (and for the multi-table insert the constraint checks are not postponed until the statement completes as they are, for instance, for updates to a table with a self-referencing RI constraint).

Two possible workarounds are suggested in Doc ID 265826.1

  • drop the foreign key constraint and recreate it after the load,
  • make the foreign key constraint deferrable and defer it before the insert so that it is checked only on commit (or following an explicit call to make it immediate)

The second option would probably be preferable to the first but it’s still not a very nice thing to do and could leave your database temporarily exposed to errors that are hard to clean up. There are some details of the implementation of deferrable constraints in the comments of this note on index rebuilds if you’re interested in the technicalities.
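
As an illustration only (using the parent/child table and constraint names from the simple demo further down), the second workaround might look like this:

alter table child drop constraint fk1;

alter table child add constraint fk1
        foreign key (id) references parent (id)
        deferrable initially immediate
;

set constraint fk1 deferred;

insert all
        into parent ( id ) values ( id )
        into child  ( id ) values ( id )
select  100 id from dual
;

commit;         -- the foreign key is checked only at this point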

A further option which seems to work is to create a (null) “before row insert” trigger on the parent table – this appears to force the parent into a pattern of single row inserts and the table order of insertion then seems to behave. Of course you do pay the price of an increase in the volume of undo and redo. On the down-side, Bug 2891576 MULTITABLE INSERT FAILS WITH ORA-02291 WHEN FK & TRIGGER ARE PRESENT can also be found on MoS, leading 265826.1 to suggest disabling triggers if their correctness is in some way dependent on the order in which your tables are populated. That dependency threat should be irrelevant if the trigger is a “do nothing” trigger. Sadly there’s a final note that I should mention: Bug 16133798 : INSERT ALL FAILS WITH ORA-2291 reports the issue as “Closed: not a bug”.

There is a very simple example in the original bug note demonstrating the problem, but it didn’t work on the version of Oracle where I first tested it, so I’ve modified it slightly to get it working on a fairly standard install. (I suspect the original was executed on a database with a 4KB block size.)


drop table child purge;
drop table parent purge;

create table parent (id number primary key);

create table child  (id number, v1 varchar2(4000),v2 varchar2(3920));
alter table child add constraint fk1 foreign key (id) references parent (id);
 
create or replace trigger par_bri
before insert on parent
for each row
begin
        null;
end;
/

insert all
        into parent ( id ) values ( id )
        into child  ( id ) values ( id )
select  100 id from dual
;

In the model above, and using an 8KB block in ASSM, the code as is resulted in an ORA-02291 error. After changing the varchar2(3920) to varchar2(3919) the insert succeeded, and when I kept the varchar2(3920) but created the trigger the insert succeeded.

Fiddling around in various ways and taking some slightly more realistic table definitions here’s an initial setup to demonstrate the “randomness” of the failure (tested on various versions up to 18.3.0.0):


rem
rem     Script:         insert_all_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem             10.2.0.5
rem              9.2.0.8
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             small_vc,
        lpad(rownum,100,'0')            medium_vc,
        lpad(rownum,200,'0')            big_vc
from
        generator       v1
;

create table parent(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint par_pk primary key(id)
)
segment creation immediate
;

create table child(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint chi_pk primary key(id),
        constraint chi_fk_par foreign key (id) references parent(id)
)
segment creation immediate
;

create table child2(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint ch2_pk primary key(id),
        constraint ch2_fk_par foreign key (id) references parent(id)
)
segment creation immediate
;

I’ve created a “source” table t1, and three “target” tables – parent, child and child2. Table parent has a declared primary key and both child and child2 have a referential integrity constraint to parent. I’m going to do a multi-table insert selecting from t1 and spreading different columns across the three tables.

Historical note: When I first saw the “insert all” option of multi-table inserts I was delighted with the idea that it would let me query a de-normalised source data set just once and insert the data into a normalised set of tables in a single statement – so (a) this is a realistic test from my perspective and (b) it has come as a terrible disappointment to discover that I should have been concerned about referential integrity constraints (luckily very few systems had them at the time I last used this feature in this way).

The multi-table insert I’ve done is as follows:


insert all
        into parent(id, small_vc)  values(id, small_vc)
        into child (id, medium_vc) values(id, medium_vc)
        into child2(id, medium_vc) values(id, medium_vc)
--      into child2(id, big_vc)    values(id, big_vc)
select
        id, small_vc, medium_vc, big_vc
from
        t1
where
        rownum <= &m_rows_to_insert
;

You’ll notice that I’ve allowed user input to dictate the number of rows selected for insertion and I’ve also allowed for an edit to change the column that gets copied from t1 to child2. Although it’s not visible in the create table statements, I’ve also tested the effect of varying the size of the big_vc column in t1.

Starting with the CTAS and multi-table insert as shown the insert runs to completion if I select 75 rows from t1, but if I select 76 rows the insert fails with “ORA-02291: integrity constraint (TEST_USER.CHI_FK_PAR) violated – parent key not found”. If I change the order of the inserts into child and child2 the violated constraint is TEST_USER.CH2_FK_PAR – so Oracle appears to be applying the inserts in the order they appear in the statement in some circumstances.

Go back to the original order of inserts for child and child2, but use the big_vc option for child2 instead of the medium_vc. In this case the insert succeeds for 39 rows selected from t1, but fails reporting constraint TEST_USER.CH2_FK_PAR when selecting 40 rows. Change the CTAS to define big_vc as lpad(rownum,195) and the insert succeeds with 40 rows selected and fails on 41 (still on the CH2_FK_PAR constraint); change big_vc to lpad(rownum,190) and the insert succeeds on 41 rows selected, fails on 42.

My hypothesis on what’s happening is this: each table in the multitable insert list gets a buffer of 8KB (maybe matching one Oracle block if we were to try different block sizes). As the statement executes the buffers will fill and, critically, when the buffer is deemed to be full (or full enough) it is applied to the table – so if a child buffer fills before the parent buffer is full you can get child rows inserted before their parent, and it looks like Oracle isn’t postponing foreign key checking to the end of statement execution as it does with other DML – it’s checking as each array is inserted.

Of course there’s a special boundary condition, and that’s why the very first test with 75 rows succeeds – neither of the child arrays gets filled before we reach the end of the t1 selection, so Oracle safely inserts the arrays for parent, child and child2 in that order. The same boundary condition occurs in the first of every other pair of tests that I’ve commented on.

When we select 76 rows from t1 in the first test the child and child2 arrays hit their limit and Oracle attempts to insert the child rows first – but the parent buffer is far from full so its rows are not inserted and the attempted insert results in the ORA-02291 error. Doing a bit of rough arithmetic the insert was for 76 rows totalling something like: 2 bytes for the id, plus a length byte, plus 100 bytes for the medium_vc plus a length byte, totalling 76 * 104 = 7,904 bytes.

When we switch to using the big_vc for child2 the first array to fill is the child2 array, and we have 3 sets of results as we shorten big_vc:

  • 40 * ((1 + 2) + (1 + 200)) = 8160
  • 41 * ((1 + 2) + (1 + 195)) = 8159
  • 42 * ((1 + 2) + (1 + 190)) = 8148

While I’m fairly confident that my “8KB array” hypothesis is in the right ballpark I know I’ve still got some gaps to explain – I don’t like the fact that I’ve got a break point around 7,900 in the first example and something much closer to 8,192 in the other three examples. I could try to get extra precision by running up a lot more examples with different numbers and lengths of columns to get a better idea of where the error is appearing – but I’m sufficiently confident that the idea is about right so I can’t persuade myself to make the effort to refine it. An example of an alternative algorithm (which is actually a better fit though a little unexpected) is to assume that the normal 5 byte row overhead (column count, lock byte, flags and 2-byte row directory entry) has been included in the array sizing code, and the insert takes place at the point the incoming row breaks, or just touches, the limit. In this case our 4 results would suggest the following figures:

  • 75 * 109 = 8175
  • 39 * 209 = 8151
  • 40 * 204 = 8160
  • 41 * 199 = 8159

With these numbers we can see 8KB (8,192 bytes) very clearly, and appreciate that the one extra row would take us over the critical limit, hence triggering the insert and making the array space free to hold the row.

Bottom Line

If you’re using the multi-table “insert all” syntax and have referential integrity declared between the various target tables then you almost certainly need to ensure that the foreign key constraints are declared as deferrable and then deferred as the insert takes place, otherwise you may get random (and, until now, surprisingly inexplicable) ORA-02291 foreign key errors.

A possible alternative workaround is to declare a “do nothing” before row insert trigger on the top-level parent table, as this seems to switch the process into single-row inserts on the top-most parent, which forces the other array inserts to take place with their parent row using small array sizes and protects against the foreign key error. This is not an officially sanctioned workaround, though, and may only have worked by accident in the examples I tried.

It is possible, if the 8KB working array hypothesis is correct, that you will never see the ORA-02291 if the volume of data (number of rows * row length) for the child rows of any given parent row is always less than the size of the parent row – but that might be a fairly risky thing to hope for in a production system. It might be much better to pay the overhead of deferred foreign key checking than having a rare, unpredictable error appearing.

 

Obvious But Not For Oracle Obviously

Michael Dinh - Mon, 2019-08-12 13:38

While dropping a RAC database, I found the error ORA-01081: cannot start already-running ORACLE – shut it down first in the dbca log.

Looking up error: Cause is obvious

$ oerr ora 01081
01081, 00000, "cannot start already-running ORACLE - shut it down first"
// *Cause:  Obvious
// *Action:

Here is the process for 12.1.0:

$ ps -ef|grep pmon
oracle   41777     1  0 Aug09 ?        00:00:30 asm_pmon_+ASM2

$ srvctl config database
DBFS

$ srvctl status database -d DBFS -v
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Instance DBFS1 is not running on node node1
Instance DBFS2 is not running on node node2
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

$ dbca -silent -deleteDatabase -sourceDB DBFS
Connecting to database
9% complete
14% complete
19% complete
23% complete
28% complete
33% complete
38% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instances and datafiles
66% complete
80% complete
95% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/DBFS.log" for further details.

$ cat /u01/app/oracle/cfgtoollogs/dbca/DBFS.log
The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. 
All information in the database will be deleted. Do you want to proceed?
Connecting to database
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 14%
DBCA_PROGRESS : 19%
DBCA_PROGRESS : 23%
DBCA_PROGRESS : 28%
DBCA_PROGRESS : 33%
DBCA_PROGRESS : 38%
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ORA-01081: cannot start already-running ORACLE - shut it down first
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DBCA_PROGRESS : 47%
Updating network configuration files
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 52%
Deleting instances and datafiles
DBCA_PROGRESS : 66%
DBCA_PROGRESS : 80%
DBCA_PROGRESS : 95%
DBCA_PROGRESS : 100%
Database deletion completed.

$ srvctl config database
$

Patching or reimaging your ODA?

Yann Neuhaus - Mon, 2019-08-12 08:30
Introduction

One of the key features of the ODA (Oracle Database Appliance) is the ability to patch the entire stack every three months, the goal being to keep all the components up-to-date. Most customers won’t patch that often, but one patch a year is quite a good average. But when the time for patching comes, so does the anxiety for the DBA. And it’s totally justified.

Why can ODA patching be a nightmare?

First of all, patching all the products is a complex operation. ODA is not a real appliance: it’s classic hardware composed of parts from various vendors, with nearly standard software, including Linux, Grid Infrastructure, ASM, ACFS and database engines. And all these products need to be patched together. If you were allowed to patch the ODA components separately, it could last quite a long time. Yes, Oracle provides a single patch for the ODA, but it’s just a bundle of dozens of patches. It’s easier to apply, and together the patches are certified, but it’s still a complex operation to bring all the modules to the target patch level. This is why you can encounter multiple problems: for example, if you installed your own RPMs onto the system (unable to update the OS), if you lack free space (unable to complete the patching), if your databases have a specific configuration, or if you discover that there is a bug in the patch related to the version you come from and linked to your ODA model.

Also, some of the patches are not cumulative, meaning that you cannot directly upgrade to the latest version. You sometimes need to apply 4 or 5 patches to upgrade, making the patching even more uncertain.

Starting from these facts, you may think about reimaging, and you’re probably right.

What are the advantages and drawbacks of reimaging?

For sure, reimaging has a lot of advantages:

  • Guarantee of success (you start from scratch)
  • Cleaner ODA (no question about that)
  • Make sure you are able to do the reimage (in case of you really need it)
  • Make sure your documentation is good (and this is the only way to validate it!)
  • Avoid later problems if the patching didn’t work correctly

These are the drawbacks:

  • Longer than a single patch successfully applied on the first try (who knows)
  • Need to erase everything and restart as if it were a new ODA
  • You need to know how your ODA was installed and configured (not so simple if someone did the job for you)
  • You probably need another ODA with Data Guard or DBVisit to limit the downtime
Can reimaging be quicker than patching?

Patching lasts about 3 hours if everything is OK. But that’s only for one patch and only if everything is OK. From my patching experience, you probably need to plan 1 day for the first ODA you patch.

Reimaging also lasts about 3 hours (more or less depending on your ODA version). But that’s only for reinstalling the software without any database. You will need to restore all your databases and do all the things you did at the first deployment: copy your scripts, set up your additional software, restore your crontabs and your specific configuration, put back monitoring, and so on.

So, reimaging is probably longer, but you are quite sure to redeploy your ODA in a known time. This is a strong argument. “It will take 8 hours” is always better than “it would take between 3 and 8 hours. Or maybe more. If I succeed”.

How to proceed with patches?

If you need to patch regularly, try to apply the patch on an ODA you can live without. If something goes wrong, you can decide to reimage very quickly instead of opening an SR on MOS. Please don’t get stuck because a patch is not applying correctly; it’s a waste of time.

If you patch every year, consider redeploying instead of patching. It’s probably more work but it will take the same amount of time, with a guarantee of success (you will love that point). Also, you will ensure that you are able to reimage completely. Reimaging is sometimes also needed if you move your ODA to another datacenter with a network change, so you could have to reimage for reasons other than patching.

How to make sure that you are able to reimage?

This is the key: be able to reimage

Rules to follow:

  • restrict the access on your ODA to only people concerned about the appliance
  • document every change you make on the server, even a simple chmod
  • never use the GUI to deploy the appliance: deploy your ODA using odacli and save the deployment json file outside of the ODA
  • never use the GUI to create the databases: create the database with odacli and backup the used parameters in the documentation
  • use scripts to configure your databases (avoid one-shot changes)
  • install other products only if necessary: do you really need a backup tool on ODA? NFS backups are great and easy to configure without installing anything
  • install only RPMs manually from Oracle ISOs and only if needed
  • do everything from the command line and avoid using vi. Text editors prevent you from being able to repeat the exact same operation. For example, replace vi /etc/fstab by echo "srv-nfs:/orabackups /backup nfs rw,bg,hard,nolock,nointr" >> /etc/fstab
  • always consider your ODA not so critical by having the possibility to restore your databases elsewhere (meaning on another ODA), or adopt Data Guard or DBVisit for all your databases that cannot afford to be down for hours (even development databases are production for developers!)
  • keep the install zipfiles corresponding to your version somewhere secured to avoid searching for them on MOS the day you need to reimage

Regarding the scripts, I always create a scripts folder in /home/oracle on the ODA, and each database has 3 dedicated scripts to speed up the database recreation if needed: create_SID.sh, configure_SID.sql and tbs_SID.sql. The first script is for the odacli database creation, the first SQL script is for specific configuration (controlfile multiplexing for example, disabling the recycle bin or enabling archive_lag_target, etc.), and the second SQL script is for tablespace creation. The target is to be able to recreate the database even for a datapump-based restore. Make sure to back up these scripts somewhere else.
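
As an illustration only – the odacli options vary between ODA releases, so check “odacli create-database -h” on your appliance for the exact syntax – a create_SID.sh can be as simple as:

#!/bin/bash
# create_MYDB.sh -- illustrative sketch only; adjust options to your release and standards
odacli create-database -n MYDB -dh <dbhome_id> -cs AL32UTF8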

A few words about RPMs: for me the best way to install additional RPMs on an ODA is to download the Oracle Linux ISO corresponding to the version on your ODA (the ISO you would use if you needed to deploy a normal server), mount the ISO on your ODA and pick up only the RPMs you need from it (you can also put these few RPMs in /home/oracle/RPMs).

Conclusion

Reimaging should always be considered as an alternative to patching. Or the best way. Companies that have already integrated this are happier with their ODAs, and are getting the best from these appliances.


Video : Vagrant : Oracle Database Build (19c on OL8)

Tim Hall - Mon, 2019-08-12 02:18

Today’s video is an example of using Vagrant to perform an Oracle database build.

In this example I was using Oracle 19c on Oracle Linux 8. It also installs APEX 19.1, ORDS 19.2, SQLcl 19.2, with ORDS running on Tomcat 9 and OpenJDK 12.

If you’re new to Vagrant, there is an introduction video here. There’s also an article if you prefer to read that.

If you want to play around with some of my other Vagrant builds, you can find them here.

If you want to read about some of the individual pieces that make up this build, you can find them here.

The star of today’s video is Noel Portugal. It’s been far too long since I’ve seen you dude!

Cheers

Tim…


Troubleshooting

Jonathan Lewis - Sun, 2019-08-11 15:28

Here’s a question to provoke a little thought if you’ve got nothing more entertaining to do on a Sunday evening.  What threats do you think of when you see a statement like the following in (say) an AWR report, or in a query against v$sql ?

update tableX set
        col001 = :1, col002 = :2, col003 = :3, ...
        -- etc. 
        -- the names are supposed to indicate that the statement updates 302 columns
        -- etc.
        col301 = :301, col302 = :302
where
        pk_col = :303
;

I’ll be writing up some suggestions tomorrow (Monday, UK BST), possibly linking to a few other articles for background reading.

Update

The first three comments have already hit the high points, but I’m going to jot down a few notes anyway.

The first two things that really (should) make an impact are:

  • There’s a table in the application with (at least) 303 columns – anything over 255 is a potential disaster area
  • An update statement that updates 302 columns is probably machine generated by a non-scalable application

A secondary detail that might be useful is recognising the pattern of the text – lower case for column names, simple “:nnn” for bind variables. As it stands I don’t recognise the possible source for this style, but I know it’s not SQL embedded in PL/SQL (which would be all capitals with “:Bnnn” as bind variable names) and it’s not part of a distributed query from a remote database (which would be in capitals with quoted names, and table aliases like “A1, A2, …”), and it’s not “raw” Hibernate code which produces meaningless table and column aliases based on numbers with a “t” for table and “c” for column.

So let’s think about possible problems and symptoms relating to the two obvious issues:

Wide tables

Once you have more than 255 (real) columns in a table – even if that count includes columns that have been marked unused – Oracle will have to split rows into “rowpieces” that do not exceed 255 columns and chain those pieces together. Oracle will try to be as efficient as possible – with various damage-limiting code changes appearing across versions – attempting to store these row pieces together and keeping the number to a minimum, but there are a number of anomalies that can appear that have a significant impact on performance.

Simply having to visit two row pieces to pick up a column in the 2nd row piece (even if it is in the same block) adds to the cost of processing; but when you have to visit a second block to acquire a 2nd (or 3rd, or 4th) rowpiece the costs can be significant. As a quirky little extra, Oracle’s initial generation of row-pieces creates them from the end backwards – so a row with 256 columns starts with a row-piece of one column followed by a rowpiece of 255 columns: so you may find that you have to fetch multiple row pieces for virtually every row you access.

It’s worth noting that row splitting is based only on columns that have been used in the row. If your data is restricted to the first 255 columns of a row then the entire row can be stored as a single row piece (following the basic rule that “trailing nulls take no space”); but as soon as you start to update such a row by populating columns past the 255 boundary Oracle will start splitting from the end – and it may create a new trailing row-piece each time you populate a column past the current “highest” column. In an extreme case I’ve managed to show an example of a single row consisting of 746 row pieces, each in a different block (though that’s a bug/feature that’s been addressed in very recent versions of Oracle).

If rows have been split across multiple blocks then one of the nastier performance problems appears with direct path read tablescans. As Oracle follows the pointer to a secondary row piece it will do a physical read of the target block then immediately forget the target block so, for example, if you have inserted 20 (short) rows into a block then updated all of them in a way that makes them split and all their 2nd row pieces go to the same block further down the table you can find that Oracle does a single direct path read that picks up the head pieces, then 20 “db file sequential read” calls to the same block to pick up the follow-on pieces. (The same effect appears with simple migrated rows.) Contrarily, if you did the same tablescan using “db file scattered read” requests then Oracle might record a single, highly deceptive “table fetch continued row” because it managed to pin the target block and revisit it 20 times.

Often a very wide row (large number of columns) means the table is trying to hold data for multiple types of object. So a table of 750 columns may use the first 20 columns for generic data, columns 21 to 180 for data for object type A, 181 to 395 for data for object type B, and so on. This can lead to rows with a couple of hundred used columns and several hundred null columns in the middle of each row – taking one byte per null column and burning up lots of CPU as Oracle walks a row to find a relevant column. A particularly nasty impact can appear from this type of arrangement when you upgrade an application: imagine you have millions of rows of the “type A” above which use only the first 180 columns. For some reason the application adds one new “generic” column that (eventually) has to be populated for each row – as the column is populated for a type A row the row grows by 520 (null counting) bytes and splits into at least 3 pieces. The effect could be catastrophic for anyone who had been happy with their queries reporting type A data.

One of the difficulties of handling rows that are chained due to very high column counts is that the statistics can be quite confusing (and subject to change across versions). The most important clue comes from “table fetch continued row”; but this can’t tell you whether your “continued” rows are migrated or chained (or both), which table they come from, and whether you’ve been fetching the same small number multiple times or many different rows. Unfortunately the segment statistics (v$segstat / v$segment_statistics) don’t capture the number of continued fetches by segment – it would be nice if they did since it ought to be a rare (and therefore low-cost) event. The best you can do, probably, is to look at the v$sql_monitor report for queries that report tablescans against large tables but report large numbers of single block reads in the tablescan – and for repeatable cases enable SQL trace with wait tracing enabled against suspect queries to see if they show the characteristic mix of direct path reads and repeated db file sequential reads.
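
As a simple starting point (just a sketch) you can at least watch the session-level figure grow while a suspect query runs:

select  sn.name, ms.value
from    v$statname sn, v$mystat ms
where   ms.statistic# = sn.statistic#
and     sn.name = 'table fetch continued row'
;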

Update every column

The key performance threat in statements that update every column – including the ones that didn’t change – is that Oracle doesn’t compare before and after values when doing the update. Oracle’s code path assumes you know what you’re doing so it saves every “old” value to an undo record (which gets copied to the redo) and writes every “new” value to a redo change vector.  (Fortunately Oracle does check index definitions to see which index entries really have suffered changes, so it doesn’t visit index leaf blocks unnecessarily). It’s possible that some front-end tool that adopts this approach has a configuration option that switches from “one SQL statement for all update” to “construct minimal statement based on screen changes”.

The simple trade-off between these two options is the undo/redo overhead vs. parsing and optimisation overheads as the tool creates custom statements on demand. In the case of the table with more than 255 columns, of course, there’s the added benefit that an update of only the changed columns might limit the update to columns that are in the first rowpiece, eliminating the need (some of the time) to chase pointers to follow-up pieces.

Limiting the update can help with undo and redo, of course, but if the tool always pulls the entire row to the client anyway you still have network costs to consider. With the full row pulled and then updated you may find it takes several SQL*Net roundtrips to transfer the whole row between client and server. In a quick test on a simple 1,000 column table with an update that set every column in a single row to null (using a bind variable) I found that a default setup couldn’t even get 1,000 NULLs (let alone “real values”) across the network without incurring a wait on “SQL*Net more data from client”.


variable b1 number
exec :b1 := null;

update t1 set
        col0001 = :b1,
        col0002 = :b1,
        ...
        col1000 = :b1
;

Although “SQL*Net message to/from client” is typically (though not always correctly) seen as an ignorable wait, you need to be a little more suspicious of losing time to “SQL*Net more data to/from client”. The latter two waits mean you’re sending larger volumes of information across the network and maybe you can eliminate some of the data or make the transfer more efficient – perhaps a simple increase in the SDU (session data unit) in the tnsnames.ora, listener.ora, or sqlnet.ora (for default_sdu_size) might be helpful.
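
For example, a purely illustrative tnsnames.ora entry with an increased SDU might look like this:

ORCL =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )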

Warning

One of the features of trouble-shooting from cold is that you don’t know very much about the system you’re looking at – so it’s nice to be able to take a quick overview of a system looking for any potentially interesting anomalies and take a quick note of what they are and what thoughts they suggest before you start asking questions and digging into a system. This article is just a couple of brief notes along the lines of: “that doesn’t look nice – what questions does it prompt?”

Oracle JET - How To Reference JavaScript Module in HTML

Andrejus Baranovski - Sun, 2019-08-11 07:06
I will explain how to reference a JavaScript module function in Oracle JET HTML. In previous Oracle JET versions, we were using $root to access the parent appController module. The $root syntax looks a bit like magic; it is better to reference the module through a predefined variable. I will show you how.

The sample app comes with a number converter, which helps to format numeric values. For convenience and better reuse, the number formatting function is moved to a separate module. Our goal is to call the function self.convertNumericValue from the converter module inside our module HTML:


Import the converter module into your target module, making sure to define a variable for the import. Then define a local variable and assign it the value pointing to the imported module. This allows calling functions from the imported module anywhere within our target module:
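
A minimal sketch of that pattern (the module path and names here are my own illustration, not the exact code from the sample app):

define(['ojs/ojcore', 'knockout', 'utils/numberConverter'],
  function (oj, ko, converterModule) {
    function TargetViewModel() {
      var self = this;
      // local variable pointing at the imported module, so the HTML template
      // can call its functions, e.g. converter.convertNumericValue(...)
      self.converter = converterModule;
    }
    return new TargetViewModel();
  }
);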


To demonstrate how it works, I will include a call to format the number for a table column. I have defined a table column template for that reason:


Within the template, I'm calling the converter (the function is called by referencing the local variable) to format the number:


It works well; the column with salary values is formatted by the function from the imported module:


A sample application is available on my GitHub repo.

EDB EPAS 12 comes with interval partitioning

Yann Neuhaus - Sat, 2019-08-10 09:21

While community PostgreSQL 12 has been in beta for quite some time now (currently beta 3), it usually takes some time until EDB EPAS is available on top of the next major PostgreSQL release. Yesterday EDB finally released a beta and you can register for access here, release notes here. One of the new features is interval partitioning, which you might already know from Oracle. Basically, you do not need to create range partitions in advance; the system will create the partitions for you once you add data that does not fit into any of the current partitions. Let's see how that works.

Without interval partitioning you would need to create a range partitioned table like this (note that this is Oracle syntax which is supported by EPAS but not by community PostgreSQL):

create table my_part_tab ( id int
                         , dummy text
                         , created date
                         )
                         partition by range (created)
                         ( partition my_part_tab_1 values less than (to_date('01.02.2019','DD.MM.YYYY'))
                         );

The issue with that is that once you want to add data that does not fit into any of the current partitions you will run into errors like this:

edb=# insert into my_part_tab (id,dummy,created) values (1,'aaa',to_date('05.01.2019','DD.MM.YYYY'));
INSERT 0 1
edb=# select * from my_part_tab;
 id | dummy |      created       
----+-------+--------------------
  1 | aaa   | 05-JAN-19 00:00:00
(1 row)

edb=# insert into my_part_tab (id,dummy,created) values (1,'aaa',to_date('05.02.2019','DD.MM.YYYY'));
psql: ERROR:  no partition of relation "my_part_tab" found for row
DETAIL:  Partition key of the failing row contains (created) = (05-FEB-19 00:00:00).
edb=# 

Only when you manually create the required partition will you be able to store the data (or it goes to a default partition, which comes with its own issues):

edb=# alter table my_part_tab add partition my_part_tab_2 values less than (to_date('01.03.2019','DD.MM.YYYY'));
ALTER TABLE
edb=# insert into my_part_tab (id,dummy,created) values (1,'aaa',to_date('05.02.2019','DD.MM.YYYY'));
INSERT 0 1
edb=# select * from my_part_tab;
 id | dummy |      created       
----+-------+--------------------
  1 | aaa   | 05-JAN-19 00:00:00
  1 | aaa   | 05-FEB-19 00:00:00
(2 rows)
edb=# \d+ my_part_tab
                                     Partitioned table "public.my_part_tab"
 Column  |            Type             | Collation | Nullable | Default | Storage  | Stats target | Descripti
---------+-----------------------------+-----------+----------+---------+----------+--------------+----------
 id      | integer                     |           |          |         | plain    |              | 
 dummy   | text                        |           |          |         | extended |              | 
 created | timestamp without time zone |           |          |         | plain    |              | 
Partition key: RANGE (created) NULLS LAST
Partitions: my_part_tab_my_part_tab_1 FOR VALUES FROM (MINVALUE) TO ('01-FEB-19 00:00:00'),
            my_part_tab_my_part_tab_2 FOR VALUES FROM ('01-FEB-19 00:00:00') TO ('01-MAR-19 00:00:00')

Of course it is not a big deal to create the partitions for the next 20 years in advance but there is a more elegant way of doing this by using interval partitioning:

drop table my_part_tab;
create table my_part_tab ( id int
                         , dummy text
                         , created date
                         )
                         partition by range (created)
                         interval (numtoyminterval(1,'month'))
                         ( partition my_part_tab_1 values less than (to_date('01.02.2019','DD.MM.YYYY'))
                         );

Having the table partitioned like that, new partitions will be created on the fly as required:

edb=# insert into my_part_tab (id,dummy,created) values (1,'aaa',to_date('05.01.2019','DD.MM.YYYY'));
INSERT 0 1
edb=# insert into my_part_tab (id,dummy,created) values (1,'aaa',to_date('05.02.2019','DD.MM.YYYY'));
INSERT 0 1
edb=# insert into my_part_tab (id,dummy,created) values (1,'aaa',to_date('05.03.2019','DD.MM.YYYY'));
INSERT 0 1
edb=# \d+ my_part_tab
                                     Partitioned table "public.my_part_tab"
 Column  |            Type             | Collation | Nullable | Default | Storage  | Stats target | Descripti
---------+-----------------------------+-----------+----------+---------+----------+--------------+----------
 id      | integer                     |           |          |         | plain    |              | 
 dummy   | text                        |           |          |         | extended |              | 
 created | timestamp without time zone |           |          |         | plain    |              | 
Partition key: RANGE (created) INTERVAL ('1 mon'::interval)
Partitions: my_part_tab_my_part_tab_1 FOR VALUES FROM (MINVALUE) TO ('01-FEB-19 00:00:00'),
            my_part_tab_sys138880102 FOR VALUES FROM ('01-FEB-19 00:00:00') TO ('01-MAR-19 00:00:00'),
            my_part_tab_sys138880103 FOR VALUES FROM ('01-MAR-19 00:00:00') TO ('01-APR-19 00:00:00')

edb=# 

A nice addition which is not (yet) available in community PostgreSQL.


OracleCode One 2019 - My Sessions Pick

Kuassi Mensah - Fri, 2019-08-09 18:14
OracleCode One 2019 is a few weeks ahead and, like many, you may not yet have made up your mind on which sessions to attend, or why you should attend at all.

Here is a selection of sessions that I highly recommend (full disclosure, I am involved in these sessions).




And you don't want to miss this exciting hands-on lab.
See you there.

Troubleshooting

Jonathan Lewis - Fri, 2019-08-09 07:33

An anecdote with a moral.

Many years ago – in the days of Oracle 7.2.3, when parallel query and partition views were in their infancy and when RAC was still OPS (Oracle Parallel Server) – I discovered a bug that caused parallel queries against partition views to crash (Maxim – mixing two new features is a good way to find bugs). I no longer recall the details of the exact syntax, but the problem revolved around the way Oracle used to rewrite queries for parallel execution. In outline it was something like the following:


create or replace view v1
as
select * from t1
union all
select * from t2
;

select  /*+ parallel(v1 2) */ * 
from    v1 
where   pv_col between 1 and 10
and     date_col = to_date('1-Apr-1999','dd-mm-yyyy')

If you had followed the rules about partition views then Oracle would generate some code that managed to combine the partitioning definitions with the view definition and query predicates and come up with rewritten code for the parallel query slaves that looked something like (e.g.)

select  {list of columns}
from    t1
where   pv_col between 1 and 10 
and     pv_col >= 0 and pv_col < 3000 -- > comment to avoid wordpress format issue 
and     date_col = to_date(:SYS_B1,'dd-mm-yyyy') 
union all 
select  {list of columns} 
from    t2 
where   pv_col between 1 and 10 
and     pv_col >= 3000 and pv_col < 6000 -- > comment to avoid wordpress format issue
and     date_col = to_date(:SYS_B2,'dd-mm-yyyy')

In this case I’ve defined my partitions (using constraints on the underlying tables) so that t1 will hold rows where pv_col >= 0 and pv_col < 3000, and t2 will hold rows where pv_col >= 3000 and pv_col < 6000. As you can see the optimizer has expanded the query with the view text and  pulled the constraints into the query and will be able to bypass all access to t2 because it can reduce the pv_col predicates on t2 into the contradiction “1 = 0”. Here’s the basic form of the execution plan we’d be looking for with partition elimination:


-----------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |    15 |  1500 |     2   (0)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 |    15 |  1500 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    VIEW                 | V1       |    15 |  1500 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|   4 |     UNION-ALL           |          |       |       |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR  |          |    12 |  1308 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL | T1       |    12 |  1308 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |      FILTER             |          |       |       |            |          |  Q1,00 | PCWC |            |
|   8 |       PX BLOCK ITERATOR |          |     1 |   109 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  9 |        TABLE ACCESS FULL| T2       |     1 |   109 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------


Note the FILTER at operation 7 – the predicate there is actually “NULL IS NOT NULL” for new versions of Oracle, but would have been “1 = 0” for older versions, so operations 8 and 9 would not get executed.

Notice, also, that my date predicate changes from a literal to a bind variable. This just happened to be the way things were done for parallel query in that version of Oracle. So now we can get to the bug. Our data was actually partitioned by day using a date(-only) column, and the initial target was to keep two years of data. The trouble was that certain queries kept crashing with Oracle error “ORA-12801: error signaled in parallel query server”.

Naturally I created a simplified model (smaller tables, fewer partitions in the views) to try and track down the problem – and the problem disappeared. So I took a crashing query from production, and started creating partition views with fewer and fewer tables until the query stopped crashing, and what I discovered was the following:

  • If you had 2 dates in the query it crashed if the view held 128 or more tables
  • If you had 3 dates in the query it crashed if the view held 86 or more tables

Think about the arithmetic for a moment: 2 * 128 = 256, 3 * 86 = 258. Does this give you a clue?

What it suggested to me was that someone in Oracle Development had used a single byte to index the array of bind variables that they defined for use with the generated parallel query text, so when you had 2 dates and needed 256 bind variables the counter overflowed; when you had 3 dates and needed 258 bind variables the counter overflowed. Having made the hypothesis I predicted that a query would crash if there were 4 dates and 64 partitions, but survive if there were only 63 partitions. (And I was right.)

When I called this in to Oracle support (who remembers those early days when you were supposed to “phone” Oracle support on a “land-line”) and suggested the source of the problem I was told that there was “no limit on the number of bind variables a query could handle in Oracle”. Notice how this is essentially a true statement – but has nothing to do with my suggestion.

Several months (maybe even a couple of years) later – long after the client had upgraded to 7.3.2 then 7.3.4 and I was no longer on site – I got a call from Oracle support who wanted to close the TAR (as SRs used to be known) because they’d discovered the problem and it was … see above. I got the feeling that no-one had considered my suggestion for a long time because they “knew” it had to be wrong.

The moral(s) of the story
  • Listen to the question carefully – you may not have heard what you were assuming you would hear.
  • Listen to the answer carefully – it may sound like a convincing response to your question while not being relevant to the actual question.
  • “It’s not supposed to do that” isn’t the same as “That didn’t happen” or (to mis-quote a well-known philosophical problem): “you can’t turn an ‘ought not’ into a ‘did not’”

One thing that’s worth emphasising is that everyone (and that does include me) will occasionally hear what they’re expecting to hear and completely misunderstand the point of the question. So when someone says something which is clearly very silly, pause for thought, then ask them, with care and precision, if they just said what you thought they said – maybe what they said and what you heard were very different. (The same is true for twitter, list servers, forums etc. – it’s very easy to misinterpret a short piece of text, and it may be the way it’s been written but it may be the way it’s been read.)

 

Split Partition

Jonathan Lewis - Fri, 2019-08-09 07:02

This is a little case study on “pre-emptive trouble-shooting”, based on a recent question on the ODC database forum asking about splitting a range-based partition into two at a value above the current highest value recorded in a max_value partition.

The general principle for splitting (range-based) partitions is that if the split point is above the current high value Oracle will recognise that it can simply rename the existing partition and create a new, empty partition, leaving all the indexes (including the global and globally partitioned indexes) in a valid state. There are, however, three little wrinkles to this particular request:

  • first is that the question relates to a system running 10g
  • second is that there is a LOB column in the table
  • third is that the target is to have the new (higher value) partition(s) in a different tablespace

It’s quite possible that 10g won’t have all the capabilities of partition maintenance of newer versions, and if anything is going to go wrong LOBs are always a highly dependable point of failure, and since all the examples in the manuals tend to be very simple examples maybe any attempt to introduce complications like tablespace specification will cause problems.

So, before you risk doing the job in production, what are you going to test?

In Oracle terms we want to check the following

  • Will Oracle have silently copied/rebuilt some segments rather than simply renaming old segments and creating new, empty segments.
  • Will the segments end up where we want them
  • Will all the indexes stay valid

To get things going, the OP had supplied a framework for the table and told us about two indexes, and had then given us two possible SQL statements to do the split, stating that he (or she) had tested them and they both worked. Here’s the SQL (with a few tweaks) that creates the table and indexes. I’ve also added some data – inserting one row into each partition.

rem
rem     Script:         split_pt_lob.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2019
rem
rem     Last tested 
rem             12.2.0.1
rem             10.2.0.5
rem

define m_old_ts = 'test_8k'
define m_new_ts = 'assm_2'

drop table part_tab purge;

create table part_tab(
  pt_id            NUMBER,
  pt_name          VARCHAR2(30),
  pt_date          DATE default SYSDATE,
  pt_lob           CLOB,
  pt_status        VARCHAR2(2)
)
tablespace &m_old_ts
lob(pt_lob) store as (tablespace &m_old_ts)
partition by range (pt_date)
(
  partition PRT1 values less than (TO_DATE('2012-01-01', 'YYYY-MM-DD')),
  partition PRT2 values less than (TO_DATE('2014-09-01', 'YYYY-MM-DD')),
  partition PRT_MAX values less than (MAXVALUE)
)
/

alter table part_tab
add constraint pt_pk primary key(pt_id)
/

create index pt_i1 on part_tab(pt_date, pt_name) local
/

insert into part_tab(
    pt_id, pt_name, pt_date, pt_lob, pt_status
)
values(
    1,'one',to_date('01-Jan-2011'),rpad('x',4000),'X'
)
/

insert into part_tab(
    pt_id, pt_name, pt_date, pt_lob, pt_status
)
values(
    2,'two',to_date('01-Jan-2013'),rpad('x',4000),'X'
)
/

insert into part_tab(
    pt_id, pt_name, pt_date, pt_lob, pt_status
)
values(
    3,'three',to_date('01-Jan-2015'),rpad('x',4000),'X'
)
/

commit;

execute dbms_stats.gather_table_stats(null,'part_tab',cascade=>true,granularity=>'ALL')

We were told that

The table has
– Primary Key on pt_id column with unique index (1 Different table has FK constraint that refers to this PK)
– Composite index on pt_date and pt_name columns

This is why I’ve added a primary key constraint (which will generate a global index) and created an index on (pt_date,pt_name) – which I’ve created as a local index since it contains the partitioning column.

The description of the requirement was:

  • The Task is to split partition(PRT_MAX) to a different tablespace
  • New partition(s) won’t have data at the moment of creation

And the two “tested” strategies were:

alter table part_tab split partition PRT_MAX at(TO_DATE('2019-08-01', 'YYYY-MM-DD')) into (
        PARTITION PRT3    tablespace &m_old_ts,
        PARTITION PRT_MAX tablespace &m_new_ts
);

alter table part_tab split partition PRT_MAX at(TO_DATE('2019-08-01', 'YYYY-MM-DD')) into (
        PARTITION PRT3    tablespace &m_old_ts LOB (pt_lob) store as (TABLESPACE &m_old_ts), 
        PARTITION PRT_MAX tablespace &m_new_ts LOB (pt_lob) store as (TABLESPACE &m_new_ts)
)
;
 

If we’re going to test these strategies properly we will need queries similar to the following:


break on object_name skip 1
select object_name, subobject_name, object_id, data_object_id  from user_objects order by object_name, subobject_name;

break on index_name skip 1
select index_name, status from user_indexes;
select index_name, partition_name, status from user_ind_partitions order by index_name, partition_name;

break on segment_name skip 1
select segment_name, partition_name, tablespace_name from user_segments order by segment_name, partition_name;

First – what are the object_id and data_object_id for each object before and after the split. Have we created new “data objects” while splitting, or has an existing data (physical) object simply changed its name.

Secondly – are there any indexes or index partitions that are no longer valid

Finally – which tablespaces do physical objects reside in.

On a test run of the first, simpler, split statement here are the before and after results for the object_id and data_object_id, followed by the post-split results for index and segment details:


Before Split
============

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23677          23677
                                 PRT2                        23678          23678
                                 PRT_MAX                     23679          23679
                                                             23676

PT_I1                            PRT1                        23690          23690
                                 PRT2                        23691          23691
                                 PRT_MAX                     23692          23692
                                                             23689

PT_PK                                                        23688          23688

SYS_IL0000023676C00004$$         SYS_IL_P252                 23685          23685
                                 SYS_IL_P253                 23686          23686
                                 SYS_IL_P254                 23687          23687

SYS_LOB0000023676C00004$$        SYS_LOB_P249                23681          23681
                                 SYS_LOB_P250                23682          23682
                                 SYS_LOB_P251                23683          23683
                                                             23680          23680

After split
===========

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23677          23677
                                 PRT2                        23678          23678
                                 PRT3                        23693          23679
                                 PRT_MAX                     23679          23694
                                                             23676

PT_I1                            PRT1                        23690          23690
                                 PRT2                        23691          23691
                                 PRT3                        23700          23692
                                 PRT_MAX                     23699          23699
                                                             23689

PT_PK                                                        23688          23688

SYS_IL0000023676C00004$$         SYS_IL_P252                 23685          23685
                                 SYS_IL_P253                 23686          23686
                                 SYS_IL_P257                 23697          23687
                                 SYS_IL_P258                 23698          23698

SYS_LOB0000023676C00004$$        SYS_LOB_P249                23681          23681
                                 SYS_LOB_P250                23682          23682
                                 SYS_LOB_P255                23695          23683
                                 SYS_LOB_P256                23696          23696
                                                             23680          23680


INDEX_NAME                       STATUS
-------------------------------- --------
PT_I1                            N/A
PT_PK                            VALID
SYS_IL0000023676C00004$$         N/A


INDEX_NAME                       PARTITION_NAME         STATUS
-------------------------------- ---------------------- --------
PT_I1                            PRT1                   USABLE
                                 PRT2                   USABLE
                                 PRT3                   USABLE
                                 PRT_MAX                USABLE

SYS_IL0000023676C00004$$         SYS_IL_P252            USABLE
                                 SYS_IL_P253            USABLE
                                 SYS_IL_P257            USABLE
                                 SYS_IL_P258            USABLE


SEGMENT_NAME              PARTITION_NAME         TABLESPACE_NAME
------------------------- ---------------------- ------------------------------
PART_TAB                  PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_I1                     PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_PK                                            TEST_8K

SYS_IL0000023676C00004$$  SYS_IL_P252            TEST_8K
                          SYS_IL_P253            TEST_8K
                          SYS_IL_P257            TEST_8K
                          SYS_IL_P258            TEST_8K

SYS_LOB0000023676C00004$$ SYS_LOB_P249           TEST_8K
                          SYS_LOB_P250           TEST_8K
                          SYS_LOB_P255           TEST_8K
                          SYS_LOB_P256           TEST_8K

Before the split partition PRT_MAX – with 4 segments: table, index, LOB, LOBINDEX – has object_id = data_object_id, with the values: 23679 (table), 23692 (index), 23683 (LOB), 23687 (LOBINDEX); and after the split these reappear as the data_object_id values for partition PRT3 (though the object_id values are larger than the data_object_id values) – so we infer that Oracle has simply renamed the various PRT_MAX objects to PRT3 and created new, empty PRT_MAX objects.

We can also see that all the indexes (including the global primary key index) have remained valid. We also note that the data_object_id of the primary key index has not changed, so Oracle didn’t have to rebuild it to ensure that it stayed valid.

There is a problem, though: the LOB and LOBINDEX segments for the new PRT_MAX partition are not in the desired target tablespace. So we need to check the effects of the second version of the split command where we add the specification of the LOB tablespaces. This is what we get – after rerunning the entire test script from scratch:


OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23727          23727
                                 PRT2                        23728          23728
                                 PRT_MAX                     23729          23729
                                                             23726

PT_I1                            PRT1                        23740          23740
                                 PRT2                        23741          23741
                                 PRT_MAX                     23742          23742
                                                             23739

PT_PK                                                        23738          23738

SYS_IL0000023726C00004$$         SYS_IL_P272                 23735          23735
                                 SYS_IL_P273                 23736          23736
                                 SYS_IL_P274                 23737          23737

SYS_LOB0000023726C00004$$        SYS_LOB_P269                23731          23731
                                 SYS_LOB_P270                23732          23732
                                 SYS_LOB_P271                23733          23733
                                                             23730          23730


OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23727          23727
                                 PRT2                        23728          23728
                                 PRT3                        23743          23743
                                 PRT_MAX                     23729          23744
                                                             23726

PT_I1                            PRT1                        23740          23740
                                 PRT2                        23741          23741
                                 PRT3                        23750          23750
                                 PRT_MAX                     23749          23749
                                                             23739

PT_PK                                                        23738          23738

SYS_IL0000023726C00004$$         SYS_IL_P272                 23735          23735
                                 SYS_IL_P273                 23736          23736
                                 SYS_IL_P277                 23747          23747
                                 SYS_IL_P278                 23748          23748

SYS_LOB0000023726C00004$$        SYS_LOB_P269                23731          23731
                                 SYS_LOB_P270                23732          23732
                                 SYS_LOB_P275                23745          23745
                                 SYS_LOB_P276                23746          23746
                                                             23730          23730

INDEX_NAME                       STATUS
-------------------------------- --------
PT_I1                            N/A
PT_PK                            UNUSABLE
SYS_IL0000023726C00004$$         N/A

INDEX_NAME                       PARTITION_NAME         STATUS
-------------------------------- ---------------------- --------
PT_I1                            PRT1                   USABLE
                                 PRT2                   USABLE
                                 PRT3                   UNUSABLE
                                 PRT_MAX                USABLE

SYS_IL0000023726C00004$$         SYS_IL_P272            USABLE
                                 SYS_IL_P273            USABLE
                                 SYS_IL_P277            USABLE
                                 SYS_IL_P278            USABLE

SEGMENT_NAME              PARTITION_NAME         TABLESPACE_NAME
------------------------- ---------------------- ------------------------------
PART_TAB                  PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_I1                     PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_PK                                            TEST_8K

SYS_IL0000023726C00004$$  SYS_IL_P272            TEST_8K
                          SYS_IL_P273            TEST_8K
                          SYS_IL_P277            TEST_8K
                          SYS_IL_P278            ASSM_2

SYS_LOB0000023726C00004$$ SYS_LOB_P269           TEST_8K
                          SYS_LOB_P270           TEST_8K
                          SYS_LOB_P275           TEST_8K
                          SYS_LOB_P276           ASSM_2


Before looking at the more complex details the first thing that leaps out to hit the eye is the word UNUSABLE – which appears for the status of the (global) primary key index and the PRT3 partition of the local index PT_I1. The (empty) PRT_MAX LOB and LOBINDEX partitions are where we wanted them, but by specifying the location we seem to have broken two index segments that will need to be rebuilt.

It gets worse, because if we check the data_object_id of the original PRT_MAX partition (23729) and its matching index partition (23742) we see that they don’t correspond to the (new) PRT3 data_object_id values which are 23743 and 23750 respectively – the data has been physically copied from one data object to another completely unnecessarily; moreover the same applies to the LOB and LOBINDEX segments – the data object ids for the PRT_MAX LOB and LOBINDEX partitions were 23733 and 23737, the new PRT3 data object ids are 23745 and 23747.

If you did a test with only a tiny data set you might not notice the implicit threat that these changes in data_object_id tell you about – you’re going to be copying the whole LOB segment when you don’t need to.

Happy Ending (maybe)

A quick check with 12.2 suggested that Oracle had got much better at detecting that it didn’t need to copy LOB data and invalidate indexes with the second form of the code; but the OP was on 10g – so that’s not much help. However it was the thought that Oracle might misbehave when you specified tablespaces that made me run up this test – in particular I had wondered if specifying a tablespace for the partition that would end up holding the existing data might trigger an accident, so here’s a third variant of the split statement I tested, with the results on the indexes, segments, and data objects. Note that I specify the tablespace only for the new (empty) segments:


alter table part_tab split partition PRT_MAX at(TO_DATE('2019-08-01', 'YYYY-MM-DD')) into (
    PARTITION PRT3,
    PARTITION PRT_MAX tablespace &m_new_ts  LOB (pt_lob) store as (TABLESPACE &m_new_ts)
)
/

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23752          23752
                                 PRT2                        23753          23753
                                 PRT_MAX                     23754          23754
                                                             23751

PT_I1                            PRT1                        23765          23765
                                 PRT2                        23766          23766
                                 PRT_MAX                     23767          23767
                                                             23764

PT_PK                                                        23763          23763

SYS_IL0000023751C00004$$         SYS_IL_P282                 23760          23760
                                 SYS_IL_P283                 23761          23761
                                 SYS_IL_P284                 23762          23762

SYS_LOB0000023751C00004$$        SYS_LOB_P279                23756          23756
                                 SYS_LOB_P280                23757          23757
                                 SYS_LOB_P281                23758          23758
                                                             23755          23755

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23752          23752
                                 PRT2                        23753          23753
                                 PRT3                        23768          23754
                                 PRT_MAX                     23754          23769
                                                             23751

PT_I1                            PRT1                        23765          23765
                                 PRT2                        23766          23766
                                 PRT3                        23775          23767
                                 PRT_MAX                     23774          23774
                                                             23764

PT_PK                                                        23763          23763

SYS_IL0000023751C00004$$         SYS_IL_P282                 23760          23760
                                 SYS_IL_P283                 23761          23761
                                 SYS_IL_P287                 23772          23762
                                 SYS_IL_P288                 23773          23773

SYS_LOB0000023751C00004$$        SYS_LOB_P279                23756          23756
                                 SYS_LOB_P280                23757          23757
                                 SYS_LOB_P285                23770          23758
                                 SYS_LOB_P286                23771          23771
                                                             23755          23755
INDEX_NAME                       STATUS
-------------------------------- --------
PT_I1                            N/A
PT_PK                            VALID
SYS_IL0000023751C00004$$         N/A

INDEX_NAME                       PARTITION_NAME         STATUS
-------------------------------- ---------------------- --------
PT_I1                            PRT1                   USABLE
                                 PRT2                   USABLE
                                 PRT3                   USABLE
                                 PRT_MAX                USABLE

SYS_IL0000023751C00004$$         SYS_IL_P282            USABLE
                                 SYS_IL_P283            USABLE
                                 SYS_IL_P287            USABLE
                                 SYS_IL_P288            USABLE

SEGMENT_NAME              PARTITION_NAME         TABLESPACE_NAME
------------------------- ---------------------- ------------------------------
PART_TAB                  PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_I1                     PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_PK                                            TEST_8K

SYS_IL0000023751C00004$$  SYS_IL_P282            TEST_8K
                          SYS_IL_P283            TEST_8K
                          SYS_IL_P287            TEST_8K
                          SYS_IL_P288            ASSM_2

SYS_LOB0000023751C00004$$ SYS_LOB_P279           TEST_8K
                          SYS_LOB_P280           TEST_8K
                          SYS_LOB_P285           TEST_8K
                          SYS_LOB_P286           ASSM_2

All the indexes and index partitions stay valid; the new empty segments all end up in the target tablespace, and all the data object ids for the old PRT_MAX partitions became the data object ids for the new PRT3 partitions. Everything we want, and no physical rebuilds of any data sets.

Moral:

When you’re testing, especially when you’re doing a small test while anticipating a big data set, don’t rely on the clock; check the data dictionary (and trace files, if necessary) carefully to find out what activity actually took place.
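
One simple way to do that check (just a sketch: the snapshot table name is mine) is to capture user_objects before the split, then look for data_object_id values that have disappeared afterwards; any value that vanishes tells you a segment was physically re-created rather than simply renamed.

create table before_split as
select  object_name, subobject_name, object_id, data_object_id
from    user_objects
;

--      ... run the split here, then:

select  b.object_name, b.subobject_name, b.data_object_id
from    before_split b
where   b.data_object_id is not null
and     b.data_object_id not in (
                select  data_object_id
                from    user_objects
                where   data_object_id is not null
        )
;

An empty result from the second query is what you are hoping to see.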

Footnote:

It’s possible that there are ways to fiddle around with the various default attributes of the partitioned table to get the same effect – but since 12.2 is much better behaved anyway there’s no point in me spending more time looking for alternative solutions to a 10g problem.

 

Rolling out patched 19c home using gold image

Bobby Durrett's DBA Blog - Thu, 2019-08-08 18:49

For Oracle versions 11.2 through 18 I have been applying quarterly patches to a test database and then gathering the Oracle home into a tar file. When we want to roll out a new database server with the latest patches we untar the file and clone the Oracle home with a command like this:

$ORACLE_HOME/oui/bin/runInstaller -clone -silent ...

This command no longer works for 19c so now I am using a new command like this:

$ORACLE_HOME/runInstaller -silent -responseFile ...

Before running this command I had to unzip my gold image zip file into $ORACLE_HOME. I created the gold image zip file using a command like this:

./runInstaller -createGoldImage -destinationLocation ...

I ran this through MobaXterm to use their X server. I created the response file when I initially installed 19c on this test server. Then I patched the Oracle home with the July 2019 PSU and finally ran the above command to create the gold image.
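
For reference, the rough end-to-end sequence looks like this. The paths, zip name and response file name are placeholders from my environment, not values to copy literally:

# On the patched test server: package the patched home as a gold image
cd $ORACLE_HOME
./runInstaller -createGoldImage -destinationLocation /u01/install/gold

# On the new database server: unpack the gold image into the new Oracle home
mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1
cd /u01/app/oracle/product/19.0.0/dbhome_1
unzip -q /u01/install/gold/db_home_2019_07.zip

# Register the unzipped home silently, reusing the saved response file
./runInstaller -silent -responseFile /u01/install/db_install.rsp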

Some useful links that I ran into:

Franck Pachot’s post about doing a silent 18c install using the new runInstaller

Oracle support document that says the old Oracle home cloning does not work in 19c:

19.x:Clone.pl script is deprecated and how to clone using gold-image (Doc ID 2565006.1)

Oracle documentation about installing silent with response file

DBA Stackexchange post about how you have to use a response file because the command line options don’t work

This is kind of terse but it has some of the key information. I may update it later if I can.

Bobby

Categories: DBA Blogs

2million PageViews

Hemant K Chitale - Wed, 2019-08-07 23:53

This blog has now achieved 2 million PageViews:




(The "drop" at the end is the count for only the first week of August 2019).

Although this blog began in December 2006, the PageViews counts start with 8,176 in July 2010. So, effectively, this blog has had 2 million PageViews in 9 years.

The first 1million PageViews were achieved in March 2015.

Unfortunately, the rate at which I have been publishing has declined since 2017 (36 posts in 2017, 30 in 2018 and only 8 so far this year).  I apologise for this.  Hopefully, I should be able to add more posts in the coming months.




Categories: DBA Blogs

Where is My Money Going? Checking the balance with Oracle Cloud Account Metering REST APIs

Rittman Mead Consulting - Wed, 2019-08-07 08:19
How much did I spend so far?

It's a typical question we ask ourselves daily and what do we do? Probably check the bank account status on our phone and yell at ourselves for all the money we trashed the previous night at the pub.

The Cloud

One of the great benefits of the cloud is that there is no big upfront cost required to start playing with the latest tool or technology: we just need to fill in a few forms, write down the credit card details and there we go! However, the cloud doesn't mean free: most of the time we pay based on resource and time consumption and things can become pretty expensive if we don't manage our resources wisely.

The main Oracle Cloud Dashboard offers a preview of the Month to Date Cost and, by clicking on it, we can easily understand the cost per product. In our example we spent £322.80 month to date, of which £262.80 was on Oracle Analytics Classic.


We can go another step down the line: if, for example, we have multiple versions of the same product, we'll see a line for each version or licensing method. In our case, all the money comes from a single B88303 - OAC Enterprise Edition OCPU per Hour product with an overall 60 hours of uptime (OAC billing is per hour).


However, this requires a manual login into the Oracle Cloud to check the balance, which is not optimal if we want to display this information in external tools or automate part of the cost-checking procedures. Fortunately, we can retrieve the same information with Oracle Cloud Account Metering REST APIs.

Oracle Cloud Account Metering REST APIs

Oracle Cloud Account Metering REST APIs expose a lot of useful information about our Oracle Cloud account via REST APIs. We can, for example, check our subscription details, credit promotions, resource usage, cost and quotas. All we need to test the REST APIs is cURL, a command-line utility for sending HTTP requests. The syntax to retrieve the data is

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/<BASE_PATH>/<RESOURCE_NAME>

Where

  • <TENANT_ID> is the identity domain ID, you can find it under the Oracle Analytics Cloud -> Overview
  • <BASE_PATH> is the base URI of the resource, e.g. /metering/api/v1
  • <RESOURCE_NAME> is the name of the specific resource we are requesting
Checking the Cost

If, as per the example below, we want to understand the cost, we simply need to call the usagecost resource passing the <ACCOUNT_ID> parameter which can be found in the Overview page of every service we already have in our account.


The basic cURL command to check the cost then becomes the following

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/metering/api/v1/usagecost/<ACCOUNT_ID>?startTime=<START_TIME>&endTime=<ENDTIME>&timeZone=<TIMEZONE>" 

Where on top of the parameters defined above we have

  • <START_TIME> and <END_TIME> with the format YYYY-MM-DDTHH:mm:ss.sssZ e.g. 2019-08-01T00:00:00.000Z
  • <TIMEZONE> we specify which timezone to use for the date and time filter

So if like before, we're aiming to understand the cost from the beginning of the month, our suffix becomes

<ACCOUNT_ID>?startTime=2019-08-01T00:00:00.000Z&endTime=2019-08-10T23:59:00.000Z&timeZone=Europe/Rome

The result is in JSON format, which we can easily parse with the command-line tool jq.

curl -X GET ... | jq '.'

The output is

{
  "accountId": "<ACCOUNT_ID>",
  "items": [
    ...
    {
      "subscriptionId": "...",
      "subscriptionType": "PRODUCTION",
      "serviceName": "ANALYTICS",
      "resourceName": "ANALYTICS_EE_PAAS_ANY_OCPU_HOUR",
      "currency": "GBP",
      "gsiProductId": "B88303",
      "startTimeUtc": "2019-08-01T00:00:00.000",
      "endTimeUtc": "2019-08-10T23:00:00.000",
      "serviceEntitlementId": "...",
      "costs": [
        {
          "computedQuantity": 60,
          "computedAmount": 262.8,
          "unitPrice": 4.38,
          "overagesFlag": "Y"
        }
      ]
    },
    ..,
  "canonicalLink": "/metering/api/v1/usagecost/<ACCOUNT_ID>?timeZone=Europe%2FRome&startTime=2019-08-01T00%3A00%3A00.000Z&endTime=2019-08-10T23%3A59%3A00.000Z"
}

As expected, we get, within the items section, an entry for every product and license type we have used. In our case we have the "serviceName": "ANALYTICS", with the Enterprise Edition option billed per hour ("resourceName": "ANALYTICS_EE_PAAS_ANY_OCPU_HOUR") and we used it for 60 hours with a unit price of £4.38 for a total amount of £262.80, perfectly in line with what we see on the webpage.

We can further filter our query using one of the following parameters:

  • computeType: the nature of the cost (Usage, Overcharge...)
  • datacenter: the datacenter for which cost needs to be retrieved
  • dcAggEnabled: to roll up the cost by datacenter
  • resourceName: the type of resource billing (e.g. ANALYTICS_EE_PAAS_ANY_OCPU_HOUR)
  • serviceEntitlementId: the Id of the service, can be found in the Overview page
  • serviceName: the name of the service e.g. ANALYTICS
  • usageType: the type of usage we want to be reported either TOTAL, HOURLY or DAILY

Unfortunately, none of the above filters allows us to check the cost associated with a precise instance of the service. If, for example, we have two instances with the same edition and type of billing, we can't determine, with the above call, what the cost associated with each of the two instances is, since it's rolled up at instance type level. But all is not lost! We can achieve more granularity in the billing metrics by using the /tagged REST API and properly performing instance tagging on our services.

Instance Tagging

We can group instances of various services with Tags. Tags are labels that we can attach to an instance to group them based on our company rules. Oracle allows two types of tagging for resources: free-form and defined.

With free-form tagging we can append any key-value label to our instances, e.g. we may want to tag an instance as Environment:Training, with Environment being the key and Training being the value.


The problem with free-form tagging is that we don't have control over which tag keys get associated with a certain resource, and it's an error-prone method since we have to type a key and value every time (and they're not visible for cost-tracking).

If, instead, we want to use a more controlled approach to tagging, we can then go for defined tagging: while with free-form anyone was able to associate any key or value, with defined tagging we create a namespace which will contain a set of tag keys.


Once the namespace is created, we can then create the set of keys within it. In this case, we create two pre-defined keys, Environment and Project; please note that we flagged the COST-TRACKING checkbox to be able to use the tags with the Oracle Cloud Account Metering APIs.


Please be aware that there are limits on the number of namespaces, tags per resource and cost-tracking tags; the details are available in the tagging documentation.

Now it's time to attach the defined tags to our instances: we can do so in the web UI during instance creation, or afterwards by selecting "Add Tags". More information under the related documentation.


After we added the tags marked for cost-tracking to our instances we may have to wait up to 5 hours to see them in the "My Services" or via the REST APIs.

Querying Tagged Resources

There is an API within the Oracle Cloud Account Metering REST APIs which allows querying the cost associated with tagged resources. The call is very similar to the one we used above, with the additional /tagged path element and the tags=.... parameter. Taking the example above, if we want to see the consumption associated with instances tagged as Operations:Project=Training then the call is the following

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/metering/api/v1/usagecost/<ACCOUNT_ID>/tagged?startTime=<START>&endTime=<END>&timeZone=<TZ>&tags=operations:Project=Training"

And the result is

{
  "accountId": "<ACCOUNT_ID>",
  "items": [
    {
      "subscriptionId": "...",
      "subscriptionType": "PRODUCTION",
      "serviceName": "ADWC",
      "resourceName": "ADWC_PAAS_BYOL_OCPU_HOUR",
      "currency": "GBP",
      "gsiProductId": "B89039",
      "startTimeUtc": "2019-08-01T00:00:00.000",
      "endTimeUtc": "2019-08-10T23:00:00.000",
      "serviceEntitlementId": "...",
      "costs": [
        {
          "computedQuantity": 23.0,
          "computedAmount": 8.06235468,
          "unitPrice": 0.35053716,
          "overagesFlag": "N"
        }
      ]
    }
  ],
  "canonicalLink": "/metering/api/v1/usagecost/<ACCOUNT_ID>/tagged?timeZone=UTC&startTime=2019-08-01T00%3A00%3A00.000Z&endTime=2019-08-10T23%3A59%3A00.000Z&usageType=TOTAL&tags=operations%3AProject%3DTraining"
}

23 hours of ADWC usage for a total of £8.06, which is also visible from the My Services webpage.


Appending the following jq command to the cURL call also displays the relevant information, like serviceName and the cost details, as separate columns

jq --raw-output '.items[] | "\(.serviceName)\t\(.subscriptionType)\t\(.resourceName)\t\(.currency)\t\(.costs[].computedAmount)\t\(.costs[].computedQuantity)\t\(.costs[].unitPrice)\t\(.costs[].overagesFlag)"' 

And the result is

ADWC	PRODUCTION	ADWC_PAAS_BYOL_OCPU_HOUR	GBP	8.06235468	23	0.35053716	N
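
If all you need is a single month-to-date total across every item, a small jq expression can sum the computedAmount values. This is just a sketch reusing the same placeholders and JSON shape as above:

curl -s -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/metering/api/v1/usagecost/<ACCOUNT_ID>?startTime=<START_TIME>&endTime=<END_TIME>&timeZone=UTC" \
	| jq '[.items[].costs[].computedAmount] | add'

The output is a single number, which makes it easy to feed into a monitoring script or a spending alert.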
Summary

Oracle Cloud Account Metering REST APIs offer an easy way to expose the Oracle cloud usage and cost externally. Used smartly in conjunction with instance tagging they provide a way to ensure cost and usage tracking down to the single resource or project.

If on the other way, the integration with REST APIs is not what you need, but you're looking into ways of getting notified when you're spending too much, check out the Alerts section of Managing and Monitoring Oracle Cloud.

Edit: You can also download your billing information as CSV from the Oracle Cloud web UI, which is very handy for one-off analysis. If, on the other hand, you want to automate the export of billing information, then Oracle Cloud Account Metering REST APIs are the way to go!

Categories: BI & Warehousing

[Solved] Oracle R12.2 AppsDBA: Adop Hotpatch Issue

Online Apps DBA - Wed, 2019-08-07 05:58

[Solved] Oracle R12.2 AppsDBA: Adop Hotpatch Issue While applying patch 27254132, did you encounter the Adop Hotpatch issue? If YES!! Check this https://k21academy.com/appsdba60 which covers the root cause & fixes of the issue encountered while applying Patch 27254132 and things like: ✔ ADOP Overview ✔ What is Hotpatch? ✔ What is this Issue, that […]

The post [Solved] Oracle R12.2 AppsDBA: Adop Hotpatch Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Pages

Subscribe to Oracle FAQ aggregator