Feed aggregator

SUSE Expert Day Zürich

Yann Neuhaus - Tue, 2019-05-28 01:44

On May 16th I visited the SUSE Expert Day in Zürich.
An interesting agenda was waiting for me, all under the topic: “My kind of open”

After a small welcome coffee, SUSE started with the keynote by Markus Wolf (Country Manager and Sales Director, ALPS Region). After a short introduction of the Swiss SUSE team, he talked about the IT transformation and his vision for the next years in IT – nice to hear that IT is getting even more complex than it is now.
One slide that really impressed me:

Amazing, isn’t it?

As a customer story, Nicolas Christener, CEO and CTO of the adfinis sygroup, showed with an impressive example what you can achieve with the SUSE Cloud Application Platform and what matters for the end customer. He also mentioned the great collaboration with SUSE during the project. It’s really nice to know that you get the help and support from SUSE that you need, especially in new pioneering projects.

As the third speaker, Bo Jin (Sales Engineer and Consultant at SUSE) was on stage. Really impressive knowledge! He talked a lot about Cloud Foundry as well as Kubernetes, the Cloud Application Platform and CaaS. A highlight for me was his really impressive demo of pushing code to Cloud Foundry and deploying from GitHub into a container. Everything seems to be really easy to manage.

Last but not least we got some insight into the SUSE Manager and how it can help you centralize system administration and patch handling, as well as AutoYaST profiles and kickstart files. The tool is suitable for Ubuntu, CentOS, Red Hat and, of course, SUSE servers. Everything is handled centrally for almost all distributions. That makes life much easier.
Bo Jin also showed us kernel live patching in a demo and gave us some background information. Did you know, for example, that even with Kernel Live Patching enabled you have to reboot at least once every 12 months?

In a nutshell – nice to see how passionate and innovative SUSE is; they presented great tools. Even though they were only able to show us most of the tools briefly – I can’t wait to test them!

The article SUSE Expert Day Zürich appeared first on Blog dbi services.

Clarification of Application Management Pack for Oracle Utilities

Anthony Shorten - Mon, 2019-05-27 19:27

As the Product Manager for a number of products, including Oracle Application Management Pack for Oracle Utilities (the Oracle Enterprise Manager plugin for managing Oracle Utilities Application Framework products), I see a number of requests come across my desk that need clarification. I wanted to post a few responses to some common issues we are seeing in the field and how to address them.

  • Read the Installation Documentation. This is obvious, but the installation documentation covers not only the installation of the pack but also which features to enable in the Oracle Utilities Application Framework to maximize the benefits of the pack. For customers on older versions of the Oracle Utilities Application Framework, some of the advanced features of the pack are not available, as the supporting capabilities do not exist in those versions. For example, most of the metrics were added in Oracle Utilities Application Framework 4.x.
  • Clustering Support Changes. In earlier versions of Oracle Utilities Application Framework with older versions of Oracle WebLogic, to use clustering you needed to use the localhost hostname. Whilst this worked for those versions (and has been used with later versions), Oracle recommends using the actual host name in the configuration, in particular for the JMX integration used by the Oracle Utilities Application Framework. Using localhost as the host name may prevent Oracle Enterprise Manager and the pack from recognizing the active ports and will result in the target not being recognized.
  • Native Install Transition. A few years ago, the Oracle Utilities Application Framework transitioned to use Oracle WebLogic natively rather than the original embedded mode which was popular in legacy versions of the product. The main reason for moving to native mode was to allow customers full flexibility when using the Oracle WebLogic Domain so that the license value was increased and advanced configuration was supported. To support that transition a decision was made in respect to the pack:
    • Embedded Mode Customers. It is recommended that customers still on the old embedded mode that have not transitioned to a native installation, yet, use the Start / Stop functionality on the Oracle Utilities targets to manage the availability of Oracle Utilities targets. 
    • Native Mode Customers. Customers who have transitioned to the native installation are recommended to use the Oracle WebLogic targets to start and stop the product (as this is one of the benefits of the native mode). It is NOT recommended to use the start and stop functionality on the Oracle Utilities targets during the transition period. The current release of Enterprise Manager and the Application Management Pack for Oracle Utilities cannot manage across target types at present.

Essentially, if you are in embedded mode, use start/stop on the Oracle Utilities targets; if you are native, use start/stop on the Oracle WebLogic targets.
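The rule above can be sketched as a tiny decision function. This is purely illustrative – the function name and mode labels are hypothetical, not part of the pack itself:

```python
def start_stop_target(install_mode: str) -> str:
    """Return which Enterprise Manager target type should drive
    start/stop for an Oracle Utilities environment.

    install_mode -- 'embedded' or 'native' (illustrative labels
    mirroring the two installation modes discussed above).
    """
    mode = install_mode.lower()
    if mode == "embedded":
        # Embedded-mode customers use the Oracle Utilities targets.
        return "Oracle Utilities target"
    if mode == "native":
        # Native-mode customers use the Oracle WebLogic targets.
        return "Oracle WebLogic target"
    raise ValueError(f"unknown install mode: {install_mode!r}")
```

For example, `start_stop_target("native")` returns the WebLogic target, matching the recommendation above.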

Choiceology with Katy Milkman

Michael Dinh - Mon, 2019-05-27 15:15

Good listening that I thought I’d share with you.

RSS Feed:
Choiceology with Katy Milkman – Exposing the psychological traps that lead to expensive mistakes

You can listen and subscribe to Choiceology with Katy Milkman for free in any podcast player—such as Apple Podcasts, Google Podcasts or Spotify.

Re-partitioning 2

Jonathan Lewis - Mon, 2019-05-27 14:20

Last week I wrote a note about turning a range-partitioned table into a range/list composite partitioned table using features included in 12.2 of Oracle. But my example was really just an outline of the method and bypassed a number of the little extra problems you’re likely to see in a real-world system, so in this note I’m going to bring in an issue that you might run into – and which I’ve seen appearing a number of times: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION.

It’s often the case that a system has a partitioned table that’s been around for a long time, and over its lifetime it may have had (real or virtual) columns added, made invisible, dropped, or marked unused. As a result you may find that the apparent definition of the table is not the same as the real definition of the table – and that’s why Oracle has given us (in 12c) the option to “create table for exchange”.

You might like to read a MoS note giving you one example of a problem with creating an exchange table prior to this new feature: ORA-14097 At Exchange Partition After Adding Column With Default Value (Doc ID 1334763.1). I’ve created a little model by cloning the code from that note.


rem
rem     Script:         pt_exchange_problem.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem

create table mtab (pcol number)
partition by list (pcol) (
        partition p1 values (1),
        partition p2 values (2)
);

alter table mtab add col2 number default 0 not null;

prompt  ========================================
prompt  Traditional creation method => ORA-14097
prompt  ========================================

create table mtab_p2 as select * from mtab where 1=0;
alter table mtab exchange partition P2 with table mtab_p2;

prompt  ===================
prompt  Create for exchange
prompt  ===================

drop table mtab_p2 purge;
create table mtab_p2 for exchange with table mtab;
alter table mtab exchange partition P2 with table mtab_p2;



Here's the output from running this on an instance of 18.3:


Table created.

Table altered.

========================================
Traditional creation method => ORA-14097
========================================

Table created.

alter table mtab exchange partition P2 with table mtab_p2
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

===================
Create for exchange
===================

Table dropped.


Table created.


Table altered.

So we don’t have to worry about problems creating an exchange table in Oracle 12c or later. But we do still have a problem if we’re trying to convert our range-partitioned table into a range/list composite partitioned table using the “double-exchange” method. In my simple example I used a “create table” statement to create an empty table that we could exchange into; but without another special version of the “create table” command I won’t be able to create a composite partitioned table that is compatible with the simple table that I want to use as my intermediate table.

Here’s the solution to that problem – first in a thumbnail sketch:

  • create a table for exchange (call it table C)
  • alter table C modify to change it to a composite partitioned table with one subpartition per partition
  • create a table for exchange (call it table E)
  • Use table E to exchange partitions from the original table to the (now-partitioned) table C
  • Split each partition of table C into the specific subpartitions required

And now some code to work through the details – first the code to create and populate the partitioned table.


rem
rem     Script:         pt_comp_from_pt_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem

drop table t purge;
drop table pt_range purge;
drop table pt_range_list purge;

-- @@setup

create table pt_range (
        id              number(8,0)     not null,
        grp             varchar2(1)     not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range(id) (
        partition p200 values less than (200),
        partition p400 values less than (400),
        partition p600 values less than (600)
)
;

insert into pt_range
select
        rownum-1,
        mod(rownum,2),
        lpad(rownum,10,'0'),
        rpad('x',100,'x')
from
        all_objects
where
	rownum <= 600 -- > comment to avoid WordPress format issue
;

commit;

Then some code to create the beginnings of the target composite partitioned table. We create a simple heap table “for exchange”, then modify it to be a composite partitioned table with a named starting partition and high_value and a template defining a single subpartition then, as a variant on the example from last week, specifying interval partitioning.


prompt	==========================================
prompt	First nice feature - "create for exchange"
prompt	==========================================

create table pt_range_list for exchange with table pt_range;

prompt	============================================
prompt	Now alter the table to composite partitioned
prompt	============================================

alter table pt_range_list modify
partition by range(id) interval (200)
subpartition by list (grp) 
subpartition template (
        subpartition p_def      values(default)
)
(
	partition p200 values less than (200)
)
;

If you want to do the conversion from range partitioning to interval partitioning you will have to check very carefully that your original table will be able to convert safely – which means you’ll need to check that the “high_value” values for the partitions are properly spaced to match the interval you’ve defined and (as a special requirement for the conversion) that there are no omissions from the current list of high values. If your original table doesn’t match these requirements exactly you may end up trying to exchange data into a partition where it doesn’t belong; for example, if my original table had partitions with high values of 200, 600 and 800 then there may be values in the 200-399 range currently stored in the original “600” range partition which shouldn’t go into the new “600” interval partition. You may find you have to split (and/or merge) a few partitions in your range-partitioned table before you can do the main conversion.
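As a quick sanity check before attempting the conversion, the spacing rule can be verified outside the database. Here's a small sketch (assuming you have already pulled the numeric high_value entries from user_tab_partitions into an ascending list; the function name is mine, not an Oracle API) that flags high values that aren't multiples of the interval, or that leave a gap:

```python
def check_interval_ready(high_values, interval):
    """Return a list of problems that would block a clean
    range -> interval conversion.

    high_values -- numeric partition high values in ascending
                   partition_position order (e.g. [200, 400, 600])
    interval    -- the interval you plan to declare (e.g. 200)
    """
    problems = []
    for hv in high_values:
        if hv % interval != 0:
            problems.append(f"high value {hv} is not a multiple of {interval}")
    for prev, nxt in zip(high_values, high_values[1:]):
        if nxt - prev != interval:
            problems.append(
                f"gap between {prev} and {nxt}: rows in that range "
                f"could end up in the wrong interval partition"
            )
    return problems
```

Running it against the example in the text, `check_interval_ready([200, 600, 800], 200)` reports the missing 400 boundary, telling you a split is needed before the conversion.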

Now we create the table that we’ll actually use for the exchange and go through each exchange in turn. Because I’ve got an explicitly named starting partition, the first exchange takes only two steps – exchange out, exchange in. But because I’m using interval partitioning in the composite partitioned table, I’m doing a “lock partition” before the second exchange on all the other partitions, as this will bring the required target partition into existence. I’m also using the “[sub]partition for()” syntax to identify the pairs of [sub]partitions – this isn’t necessary for the original range-partitioned table, of course, but it’s the only way I can identify the generated subpartitions that will appear in the composite partitioned table.


create table t for exchange with table pt_range;

prompt	=======================================================================
prompt	Double exchange to move a partition to become a composite subpartition
prompt	Could drive this programatically by picking one row from each partition
prompt	=======================================================================

alter table pt_range exchange partition p200 with table t;
alter table pt_range_list exchange subpartition p200_p_def with table t;

alter table pt_range exchange partition for (399) with table t;
lock  table pt_range_list partition for (399) in exclusive mode;
alter table pt_range_list exchange subpartition for (399,'0') with table t;

alter table pt_range exchange partition for (599) with table t;
lock  table pt_range_list partition for (599) in exclusive mode;
alter table pt_range_list exchange subpartition for (599,'0') with table t;

prompt	=====================================
prompt	Show that we've got the data in place
prompt	=====================================

execute dbms_stats.gather_table_stats(user,'pt_range_list',granularity=>'ALL')

break on partition_name skip 1

select  partition_name, subpartition_name, num_rows 
from    user_tab_subpartitions 
where   table_name = 'PT_RANGE_LIST'
order by
        partition_name, subpartition_name
;

Now that the data is in the target table we can split each default subpartition into the four subpartitions that we want for each partition. To cater for the future, though, I’ve first modified the subpartition template so that each new partition will have four subpartitions (though the naming convention won’t be applied, of course; Oracle will generate system names for all new partitions and subpartitions).


prompt  ================================================
prompt  Change the subpartition template to what we want
prompt  ================================================

alter table pt_range_list
set subpartition template(
        subpartition p_0 values (0),
        subpartition p_1 values (1),
        subpartition p_2 values (2),
        subpartition p_def values (default)
)
;

prompt  ====================================================
prompt  Second nice feature - multiple splits in one command
prompt  Again, first split is fixed name.
prompt  We could do this online after allowing the users in
prompt  ====================================================

alter table pt_range_list split subpartition p200_p_def
        into (
                subpartition p200_p_0 values(0),
                subpartition p200_p_1 values(1),
                subpartition p200_p_2 values(2),
                subpartition p200_p_def
        )
;

alter table pt_range_list split subpartition for (399,'0')
        into (
                subpartition p400_p_0 values(0),
                subpartition p400_p_1 values(1),
                subpartition p400_p_2 values(2),
                subpartition p400_p_def
        )
;

alter table pt_range_list split subpartition for (599,'0')
        into (
                subpartition p600_p_0 values(0),
                subpartition p600_p_1 values(1),
                subpartition p600_p_2 values(2),
                subpartition p600_p_def
        )
;

Finally a little demonstration that we can’t add an explicitly named partition to the interval partitioned table; then we insert a row to generate the partition and show that it has 4 subpartitions.

Finishing off we rename everything (though that’s a fairly pointless exercise).


prompt  ==============================================================
prompt  Could try adding a partition to show it uses the new template
prompt  But that's not allowed for interval partitions: "ORA-14760:"
prompt  ADD PARTITION is not permitted on Interval partitioned objects
prompt  So insert a value that would go into the next (800) partition
prompt  ==============================================================

alter table pt_range_list add partition p800 values less than (800);

insert into pt_range_list (
        id, grp, small_vc, padding
)
values ( 
        799, '0', lpad(799,10,'0'), rpad('x',100,'x')
)
;

commit;

prompt  ===================================================
prompt  Template naming is not used for the subpartitions,
prompt  so we have to use the "subpartition for()" strategy 
prompt  ===================================================

alter table pt_range_list rename subpartition for (799,'0') to p800_p_0;
alter table pt_range_list rename subpartition for (799,'1') to p800_p_1;
alter table pt_range_list rename subpartition for (799,'2') to p800_p_2;
alter table pt_range_list rename subpartition for (799,'3') to p800_p_def;

prompt  ==============================================
prompt  Might as well clean up the partition names too
prompt  ==============================================

alter table pt_range_list rename partition for (399) to p400;
alter table pt_range_list rename partition for (599) to p600;
alter table pt_range_list rename partition for (799) to p800;

prompt  =======================================
prompt  Finish off by listing the subpartitions 
prompt  =======================================

execute dbms_stats.gather_table_stats(user,'pt_range_list',granularity=>'ALL')

select  partition_name, subpartition_name, num_rows 
from    user_tab_subpartitions 
where   table_name = 'PT_RANGE_LIST'
order by
        partition_name, subpartition_name
;

It’s worth pointing out that you could do the exchanges (and the splitting and renaming at the same time) through some sort of simple PL/SQL loop – looping through the named partitions in the original table and using a row from the first exchange to drive the lock and second exchange (and splitting and renaming). For example, something like the following, which doesn’t have any of the error-trapping and defensive mechanisms you’d want to use on a production system:



declare
        m_pt_val number;
begin
        for r in (select partition_name from user_tab_partitions where table_name = 'PT_RANGE' order by partition_position) 
        loop
                execute immediate
                        'alter table pt_range exchange partition ' || r.partition_name ||
                        ' with table t';
        
                select id into m_pt_val from t where rownum = 1;
        
                execute immediate 
                        'lock table pt_range_list partition for (' || m_pt_val || ') in exclusive mode';
        
                execute immediate
                        'alter table pt_range_list exchange subpartition  for (' || m_pt_val || ',0)' ||
                        ' with table t';
        
        end loop;
end;
/

If you do go for a programmed loop you have to be really careful to consider what could go wrong at each step of the loop and how your program is going to report (and possibly attempt to recover) the situation. This is definitely a case where you don’t want code with “when others then null” appearing anywhere, and don’t be tempted to include code to truncate the exchange table.

 

Oracle Integration Cloud Services (OIC) Training: Step By Step Activity Guides/Hands-On Lab Exercise

Online Apps DBA - Mon, 2019-05-27 09:00

Oracle Integration Cloud Services (OIC): Step By Step Activity Guides/Hands-On Lab Exercise Find out How You Can be an Expert in Integration Cloud Service by going through the Hands-On Lab Exercises at https://k21academy.com/oic05 & get in-depth, Stepwise Study for: ► Hands-On Activity Guides that you must perform in order to learn Oracle Integration Cloud Service(OIC) […]

The post Oracle Integration Cloud Services (OIC) Training: Step By Step Activity Guides/Hands-On Lab Exercise appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

RMAN Incremental & Demo Part 1 (Level 0)

Zed DBA's Oracle Blog - Mon, 2019-05-27 07:15

This blog post is part of the “RMAN Back to Basics” series, which can be found here.

Incremental Backup

An incremental backup only backs up those data blocks that have changed since the last backup.

Types of Incremental Backups

There are 2 types of Incremental Backups:

  1. Level 0 backups are the base for subsequent backups.  A level 0 copies all blocks containing data, similar to a full backup; the only difference is that a full backup is never included in an incremental strategy.  Level 0 backups can be backup sets or image copies.
  2. Level 1 backups are subsequent backups of a level 0, by default backing up all blocks changed since the most recent level 0 or 1, known as a differential incremental backup.  More on the different types of level 1 backups is discussed in detail here.
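As an illustration only – this is not how RMAN is implemented internally – the block-selection rule for the two levels can be sketched like this, treating each block's last-change SCN as what an incremental backup compares against:

```python
def blocks_to_back_up(block_scns, last_backup_scn, level):
    """Illustrative sketch of incremental block selection.

    block_scns      -- {block_id: last-change SCN} for blocks containing data
    last_backup_scn -- checkpoint SCN of the most recent level 0/1 backup
    level           -- 0 or 1
    """
    if level == 0:
        # Level 0: copy every block containing data (the baseline).
        return set(block_scns)
    # Level 1 (differential): only blocks changed since the most
    # recent level 0 or level 1 backup.
    return {b for b, scn in block_scns.items() if scn > last_backup_scn}
```

So with a last level 0/1 checkpoint SCN of 200, a level 1 backup picks up only the blocks whose last change happened at a higher SCN, while a level 0 picks up everything regardless.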
Incremental Level 0 Demo

We take an incremental level 0 backup using my script 5_incremental_level_0.sh:

[oracle@dc1sbxdb001 demo]$ ./5_incremental_level_0.sh
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base remains unchanged with value /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database, then the incremental (level 0) backup is taken:

---------------------------------------
Step 2: Take Incremental Level 0 Backup
---------------------------------------

Content of 5_incremental_level_0.cmd file:

BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
HOST 'read Press Enter to LIST BACKUP';
LIST BACKUP;

Press Enter to continue

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/5_incremental_level_0.cmd'

Recovery Manager: Release 12.2.0.1.0 - Production on Mon May 20 17:13:13 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
2> HOST 'read Press Enter to LIST BACKUP';
3> LIST BACKUP;
4> 
Starting backup at 20-MAY-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=15 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 20-MAY-19
channel ORA_DISK_1: finished piece 1 at 20-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_20/o1_mf_nnnd0_INCR_LEVEL_0_gg5njx01_.bkp tag=INCR LEVEL 0 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 20-MAY-19

Starting Control File and SPFILE Autobackup at 20-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_20/o1_mf_s_1008782004_gg5nk4cs_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 20-MAY-19


host command complete

We use HOST within RMAN just to wait for input before moving on, for demo purposes.  Next we list the backups within RMAN using ‘LIST BACKUP‘:

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 498.93M DISK 00:00:08 16-MAY-19 
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20190516T173912
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008437952_gfv4kjko_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 353836 Ckp time: 16-MAY-19

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
3 77.83M DISK 00:00:01 16-MAY-19 
BP Key: 3 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP PLUS ARCHIVELOG
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4y7m2_.bkp

List of Archived Logs in backup set 3
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 13 332298 16-MAY-19 354044 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
4 Full 498.94M DISK 00:00:06 16-MAY-19 
BP Key: 4 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174603
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_TAG20190516T174603_gfv4yco3_.bkp
List of Datafiles in backup set 4
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
5 3.50K DISK 00:00:00 16-MAY-19 
BP Key: 5 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP PLUS ARCHIVELOG
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4ym4v_.bkp

List of Archived Logs in backup set 5
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 14 354044 16-MAY-19 354066 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
6 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 6 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174612
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008438372_gfv4ynmv_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 354077 Ckp time: 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
7 Full 8.31M DISK 00:00:00 17-MAY-19 
BP Key: 7 Status: AVAILABLE Compressed: NO Tag: TAG20190517T165453
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_17/o1_mf_s_1008521693_gfxpbfs7_.bkp
SPFILE Included: Modification time: 17-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 458035 Ckp time: 17-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
8 Incr 0 511.64M DISK 00:00:05 20-MAY-19 
BP Key: 8 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 0
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_20/o1_mf_nnnd0_INCR_LEVEL_0_gg5njx01_.bkp
List of Datafiles in backup set 8
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
9 Full 8.31M DISK 00:00:00 20-MAY-19 
BP Key: 9 Status: AVAILABLE Compressed: NO Tag: TAG20190520T171324
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_20/o1_mf_s_1008782004_gg5nk4cs_.bkp
SPFILE Included: Modification time: 20-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 571056 Ckp time: 20-MAY-19

Recovery Manager complete.

Press Enter to continue

Finally, we update the demo log table:

-------------------------------------
Step 3: Updating and viewing demo log
-------------------------------------
Calling 'sqlplus / as sysdba'
To update and view demo log

1 row created.

Commit complete.

WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode
16-MAY-19 05.40.23.000000 PM Full Backup
16-MAY-19 05.46.17.000000 PM Full Backup plus Archive Logs
17-MAY-19 04.55.12.000000 PM Image Copy
20-MAY-19 05.13.44.000000 PM Incremental Level 0

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$
Reference Scripts
  1. 5_incremental_level_0.sh
  2. 5_incremental_level_0.cmd

To download both in one zip: 5_incremental_level_0.zip

The rest of the series
  1. Oracle Database File Placement Best Practice & Create Database Demo
  2. RMAN Full Backup & Demo
  3. RMAN Image Copy & Demo
  4. RMAN Incremental & Demo Part 1 (Level 0)
  5. RMAN Incremental & Demo Part 2 (Level 1)
  6. RMAN Incremental with Block Change Tracking & Demo
  7. RMAN Incremental Differential vs Cumulative & Demo
  8. RMAN Incremental Updating Backup & Demo Part 1
  9. RMAN Incremental Updating Backup & Demo Part 2
  10. Flashback
  11. RMAN Block Media Recovery
  12. RMAN Recover database with only FRA
  13. RMAN Obsolete

Please Note: Links to the blog posts will be released daily and updated here.

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Categories: DBA Blogs

[Solved] RW-50016: Error: – {0} Not Created During EBS R12.1 Installation

Online Apps DBA - Mon, 2019-05-27 06:55

[Solved] RW-50016: Error: – {0} Not Created During EBS R12.1 Installation If you’ve faced the issue while installing Oracle EBS R12.1 with the below RW-50016: Error: – {0} was not created, then we have got you covered in this blog! Check Troubleshooting at https://k21academy.com/appsdba55 & know 1) What This issue All About? 2) Symptom & […]

The post [Solved] RW-50016: Error: – {0} Not Created During EBS R12.1 Installation appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to change Remote Trail Location at Source– Golden Gate 12c

Syed Jaffar - Sun, 2019-05-26 02:29

There was a requirement to modify the remote trail location because of a typo in the mount point. To change the remote trail location in GoldenGate, first verify the extract trail information using the command below.

GGSCI (hie-p-ggate) 48> INFO EXTTRAIL

       Extract Trail: /golgengate/ggs_home/dirdat/test/ru
        Seqno Length: 6
   Flip Seqno Length: yes
             Extract: DPMP1
               Seqno: 0
                 RBA: 0
           File Size: 500M

       Extract Trail: D:\app\oracle\product\ogg_1\lt
        Seqno Length: 6
   Flip Seqno Length: yes
             Extract: EXT1
               Seqno: 0
                 RBA: 0
           File Size: 100M
GGSCI (hie-p-ggate) 49>

First, delete the current remote trail configuration.

GGSCI (hie-p-ggate) 50> delete rmttrail /golgengate/ggs_home/dirdat/test/ru, extract dpmp1
Deleting extract trail /golgengate/ggs_home/dirdat/test/ru for extract DPMP1

After removing it, add the remote trail again with the new location.


GGSCI (hie-p-ggate) 51> add rmttrail /goldengate/ggs_home/dirdat/test/ru, extract dpmp1
RMTTRAIL added.


After deleting and re-adding, we can see the extract trail for the target with the new location.

GGSCI (hie-p-ggate) 52> INFO EXTTRAIL

       Extract Trail: /goldengate/ggs_home/dirdat/test/ru
        Seqno Length: 6
   Flip Seqno Length: yes
             Extract: DPMP1
               Seqno: 0
                 RBA: 0
           File Size: 500M

       Extract Trail: D:\app\oracle\product\ogg_1\lt
        Seqno Length: 6
   Flip Seqno Length: yes
             Extract: EXT1
               Seqno: 0
                 RBA: 0
           File Size: 100M



GGSCI (hie-p-ggate) 53>
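
The delete-and-add pair above can also be scripted so both steps run together. A minimal sketch that writes them into a GGSCI OBEY file; note that stopping and restarting the pump (DPMP1) around the change is an assumption added here for safety, it is not shown in the session above:

```shell
#!/bin/sh
# Sketch: bundle the remote trail change into one GGSCI OBEY file.
# Paths and the extract name (DPMP1) are taken from the example above;
# adjust them for your environment.
OBEY_FILE=change_rmttrail.oby

cat > "$OBEY_FILE" <<'EOF'
-- Stop the pump first so no trail files are written mid-change (assumption)
STOP EXTRACT DPMP1
DELETE RMTTRAIL /golgengate/ggs_home/dirdat/test/ru, EXTRACT DPMP1
ADD RMTTRAIL /goldengate/ggs_home/dirdat/test/ru, EXTRACT DPMP1
START EXTRACT DPMP1
INFO EXTTRAIL *
EOF

echo "Wrote $OBEY_FILE - run it from GGSCI with: OBEY $OBEY_FILE"
```

Running the file from GGSCI applies the whole sequence in one go, which is handy when the same fix has to be repeated on several environments.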


[Blog] Increasing Service Limit in Oracle Cloud (OCI)

Online Apps DBA - Sun, 2019-05-26 02:07

[OCI: Service Limit & How to Increase] When you create a FREE Cloud trial account, Limits are set, meaning you can create only 1-2 Resources/ Service (VM, Load Balancer, etc ) While this is OK when you start but as you deploy business applications like RAC or Dataguard, EBS (R12), Peoplesoft, JD Edwards or HA […]

The post [Blog] Increasing Service Limit in Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Configuring Oracle DB data source in JBoss EAP 7.1

Yann Neuhaus - Sun, 2019-05-26 02:05

Introduction

This blog explains how to install and use an Oracle database JDBC driver in JBoss EAP 7.1 standalone instance and in a domain deployment.

Oracle JDBC driver installation
The first step is to install the JDBC driver in the JBoss installation. This can be done by copying the files to the right directory or, more cleanly, by using the JBoss CLI.
I will use the JBoss CLI for this.

Start the JBoss CLI without connecting to any JBoss instance.

/opt/jboss-eap-7.1/bin/jboss-cli.sh

Then use the module add CLI command to install the Oracle JDBC driver to the right place.

module add --name=com.oracle --resources=/home/pascal/jdbc_drivers/ojdbc8.jar --dependencies=javax.api,javax.transaction.api

This will place the driver in the following directory:

$JBOSS_HOME/modules/com/oracle/main

Note: This CLI command has to be run on each host participating in a domain deployment.
Once the module is installed, it can be used to declare the JDBC driver inside the JBoss instance.

Create a Data-Source

For a standalone instance using the default profile:
a. Start jboss-cli.sh to connect to the standalone server and declare the JDBC driver in the JBoss instance

/opt/jboss-eap-7.1/bin/jboss-cli.sh -c --controller=192.168.56.21:9990
[standalone@192.168.56.21:9990 /] /subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.driver.OracleDriver)
{"outcome" => "success"}
[standalone@192.168.56.21:9990 /]

b. Confirm the JDBC driver has been declared successfully.

[standalone@192.168.56.21:9990 /] /subsystem=datasources/jdbc-driver=oracle:read-resource
{
    "outcome" => "success",
    "result" => {
        "deployment-name" => undefined,
        "driver-class-name" => undefined,
        "driver-datasource-class-name" => undefined,
        "driver-major-version" => undefined,
        "driver-minor-version" => undefined,
        "driver-module-name" => "com.oracle",
        "driver-name" => "oracle",
        "driver-xa-datasource-class-name" => "oracle.jdbc.driver.OracleDriver",
        "jdbc-compliant" => undefined,
        "module-slot" => undefined,
        "profile" => undefined,
        "xa-datasource-class" => undefined
    }
}

c. Create the data-source pointing to the Oracle Database

[standalone@192.168.56.21:9990 /] data-source add --name=testOracleDS --jndi-name=java:/jdbc/testOracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@vm12:1521/orcl --user-name=scott --password=tiger --jta=true --use-ccm=true --use-java-context=true --enabled=true --max-pool-size=10 --min-pool-size=5 --flush-strategy="FailingConnectionOnly"

d. Confirm the Datasource creation and parameters.

[standalone@192.168.56.21:9990 /] /subsystem=datasources/data-source=testOracleDS:read-resource
{
    "outcome" => "success",
    "result" => {
        "allocation-retry" => undefined,
        "allocation-retry-wait-millis" => undefined,
        "allow-multiple-users" => false,
        "authentication-context" => undefined,
        "background-validation" => undefined,
        "background-validation-millis" => undefined,
        "blocking-timeout-wait-millis" => undefined,
        "capacity-decrementer-class" => undefined,
        "capacity-decrementer-properties" => undefined,
        "capacity-incrementer-class" => undefined,
        "capacity-incrementer-properties" => undefined,
        "check-valid-connection-sql" => undefined,
        "connectable" => false,
        "connection-listener-class" => undefined,
        "connection-listener-property" => undefined,
        "connection-url" => "jdbc:oracle:thin:@vm12:1521/orcl",
        "credential-reference" => undefined,
        "datasource-class" => undefined,
        "driver-class" => undefined,
        "driver-name" => "oracle",
        "elytron-enabled" => false,
        "enabled" => true,
        "enlistment-trace" => false,
        "exception-sorter-class-name" => undefined,
        "exception-sorter-properties" => undefined,
        "flush-strategy" => "FailingConnectionOnly",
        "idle-timeout-minutes" => undefined,
        "initial-pool-size" => undefined,
        "jndi-name" => "java:/jdbc/testOracleDS",
        "jta" => true,
        "max-pool-size" => 10,
        "mcp" => "org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool",
        "min-pool-size" => 5,
        "new-connection-sql" => undefined,
        "password" => "tiger",
        "pool-fair" => undefined,
        "pool-prefill" => undefined,
        "pool-use-strict-min" => undefined,
        "prepared-statements-cache-size" => undefined,
        "query-timeout" => undefined,
        "reauth-plugin-class-name" => undefined,
        "reauth-plugin-properties" => undefined,
        "security-domain" => undefined,
        "set-tx-query-timeout" => false,
        "share-prepared-statements" => false,
        "spy" => false,
        "stale-connection-checker-class-name" => undefined,
        "stale-connection-checker-properties" => undefined,
        "statistics-enabled" => false,
        "track-statements" => "NOWARN",
        "tracking" => false,
        "transaction-isolation" => undefined,
        "url-delimiter" => undefined,
        "url-selector-strategy-class-name" => undefined,
        "use-ccm" => true,
        "use-fast-fail" => false,
        "use-java-context" => true,
        "use-try-lock" => undefined,
        "user-name" => "scott",
        "valid-connection-checker-class-name" => undefined,
        "valid-connection-checker-properties" => undefined,
        "validate-on-match" => undefined,
        "connection-properties" => undefined,
        "statistics" => {
            "jdbc" => undefined,
            "pool" => undefined
        }
    }
}

At this stage, the data-source is available to all applications deployed in the standalone server. I started with the standalone.xml default profile configuration file. To have additional subsystems, another standalone profile configuration file should be used.
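
Before relying on the data-source, it is worth checking that JBoss can actually open a connection to the Oracle database. A hedged sketch using the standard test-connection-in-pool CLI operation (the controller address is the one from the example; the script only prints the command when jboss-cli.sh is not installed locally):

```shell
#!/bin/sh
# Sketch: validate the testOracleDS data-source by asking JBoss to take a
# connection from the pool. Falls back to printing the command when the
# CLI is not available on this machine.
CLI=/opt/jboss-eap-7.1/bin/jboss-cli.sh
OP="/subsystem=datasources/data-source=testOracleDS:test-connection-in-pool"

# Keep a copy of the operation for reuse/documentation
echo "$OP" > test_datasource.cli

if [ -x "$CLI" ]; then
  "$CLI" -c --controller=192.168.56.21:9990 --command="$OP"
else
  echo "jboss-cli.sh not found; run manually:"
  echo "$CLI -c --controller=192.168.56.21:9990 --command=\"$OP\""
fi
```

A successful run returns {"outcome" => "success"}; a failure here usually points to wrong credentials or an unreachable connection-url rather than a driver problem.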
For a JBoss domain using the Full-ha profile
The domain I’m using in my tests has a domain controller and two slave hosts, each running two servers, organized in two server groups.
a. Start jboss-cli.sh to connect to the domain master

/opt/jboss-eap-7.1/bin/jboss-cli.sh -c --controller=192.168.56.21:9990

b. Register the Oracle JDBC driver

[domain@192.168.56.21:9990 /] /profile=full-ha/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.driver.OracleDriver)
{
    "outcome" => "success",
    "result" => undefined,
    "server-groups" => {"Group2" => {"host" => {
        "host1" => {"server-two" => {"response" => {
            "outcome" => "success",
            "result" => undefined
        }}},
        "host2" => {"server-four" => {"response" => {
            "outcome" => "success",
            "result" => undefined
        }}}
    }}}
}
[domain@192.168.56.21:9990 /]

In the JBoss domain I used for the testing, the full-ha profile was used when creating the Group2 server group.
c. Confirm the JDBC driver has been declared successfully.

[domain@192.168.56.21:9990 /] /profile=full-ha/subsystem=datasources/jdbc-driver=oracle:read-resource
{
    "outcome" => "success",
    "result" => {
        "deployment-name" => undefined,
        "driver-class-name" => undefined,
        "driver-datasource-class-name" => undefined,
        "driver-major-version" => undefined,
        "driver-minor-version" => undefined,
        "driver-module-name" => "com.oracle",
        "driver-name" => "oracle",
        "driver-xa-datasource-class-name" => "oracle.jdbc.driver.OracleDriver",
        "jdbc-compliant" => undefined,
        "module-slot" => undefined,
        "profile" => undefined,
        "xa-datasource-class" => undefined
    }
}
[domain@192.168.56.21:9990 /]

d. Create the Data source pointing to the Oracle Database

[domain@192.168.56.21:9990 /] data-source add --profile=full-ha --name=testOracleDS --jndi-name=java:/jdbc/testOracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@vm12:1521/orcl --user-name=scott --password=tiger --jta=true --use-ccm=true --use-java-context=true --enabled=true --max-pool-size=10 --min-pool-size=5 --flush-strategy="FailingConnectionOnly"

e. Confirm the data source has been created correctly

[domain@192.168.56.21:9990 /] /profile=full-ha/subsystem=datasources/data-source=testOracleDS:read-resource
{
    "outcome" => "success",
    "result" => {
        "allocation-retry" => undefined,
        "allocation-retry-wait-millis" => undefined,
        "allow-multiple-users" => false,
        "authentication-context" => undefined,
        "background-validation" => undefined,
        "background-validation-millis" => undefined,
        "blocking-timeout-wait-millis" => undefined,
        "capacity-decrementer-class" => undefined,
        "capacity-decrementer-properties" => undefined,
        "capacity-incrementer-class" => undefined,
        "capacity-incrementer-properties" => undefined,
        "check-valid-connection-sql" => undefined,
        "connectable" => false,
        "connection-listener-class" => undefined,
        "connection-listener-property" => undefined,
        "connection-url" => "jdbc:oracle:thin:@vm12:1521/orcl",
        "credential-reference" => undefined,
        "datasource-class" => undefined,
        "driver-class" => undefined,
        "driver-name" => "oracle",
        "elytron-enabled" => false,
        "enabled" => true,
        "enlistment-trace" => false,
        "exception-sorter-class-name" => undefined,
        "exception-sorter-properties" => undefined,
        "flush-strategy" => "FailingConnectionOnly",
        "idle-timeout-minutes" => undefined,
        "initial-pool-size" => undefined,
        "jndi-name" => "java:/jdbc/testOracleDS",
        "jta" => true,
        "max-pool-size" => 10,
        "mcp" => "org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool",
        "min-pool-size" => 5,
        "new-connection-sql" => undefined,
        "password" => "tiger",
        "pool-fair" => undefined,
        "pool-prefill" => undefined,
        "pool-use-strict-min" => undefined,
        "prepared-statements-cache-size" => undefined,
        "query-timeout" => undefined,
        "reauth-plugin-class-name" => undefined,
        "reauth-plugin-properties" => undefined,
        "security-domain" => undefined,
        "set-tx-query-timeout" => false,
        "share-prepared-statements" => false,
        "spy" => false,
        "stale-connection-checker-class-name" => undefined,
        "stale-connection-checker-properties" => undefined,
        "statistics-enabled" => false,
        "track-statements" => "NOWARN",
        "tracking" => false,
        "transaction-isolation" => undefined,
        "url-delimiter" => undefined,
        "url-selector-strategy-class-name" => undefined,
        "use-ccm" => true,
        "use-fast-fail" => false,
        "use-java-context" => true,
        "use-try-lock" => undefined,
        "user-name" => "scott",
        "valid-connection-checker-class-name" => undefined,
        "valid-connection-checker-properties" => undefined,
        "validate-on-match" => undefined,
        "connection-properties" => undefined
    }
}
[domain@192.168.56.21:9990 /]

At this stage, all servers in the server group Group2 have been targeted with the data-source, and all applications deployed to those servers can use it.
As the data-source has been targeted to the profile, all server groups created with this profile will allow their JBoss server instances to use it.

Cet article Configuring Oracle DB data source in JBoss EAP 7.1 est apparu en premier sur Blog dbi services.

[Oracle Integration Cloud] Create Instance: Step-by-Step

Online Apps DBA - Sat, 2019-05-25 06:53

[Oracle Integration Cloud] Create Instance to Practice, Step-by-Step! Best way to learn integration on Cloud using ‘Oracle Integration Cloud’ is by doing it on Cloud and for that, you’ll need an Oracle Integration Cloud Instance. You can now create a FREE Trial Cloud Integration Instance Get Step by Step document at https://k21academy.com/oic12 covering, ▪ Two […]

The post [Oracle Integration Cloud] Create Instance: Step-by-Step appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle JDeveloper: Is this IDE dead? Are there new releases?

Dietrich Schroff - Fri, 2019-05-24 12:22
Some weeks ago I read Java Magazine (http://www.javamagazine.mozaicreader.com)

It reported on a survey about Java conducted in 2018. One of the questions was which application server you use in production:

This was not really amazing.
But the question about the IDE gave a really surprising result:



More developers use IntelliJ than Eclipse?
Not really surprising was the usage of Oracle's JDeveloper: Only 1% of the developers use this tool.

You have to keep in mind that when using Oracle's SOA Suite / BPEL / ESB you are forced to use JDeveloper - which means this SOA stack is not used by many people out there.

If you take a look on the Oracle homepage you will find:


The last version was released in August 2017 - this does not look like living software.

For architects and consultants JDeveloper can be very useful because it has a very nice visualization of XSDs and XSLTs (Take a look here). So hopefully JDeveloper will stay alive...

RMAN Image Copy & Demo

Zed DBA's Oracle Blog - Fri, 2019-05-24 09:38

This blog post is part of the “RMAN Back to Basics” series, which can be found here.

Image Copy

An image copy backup is an exact copy of a datafile, including the free space. Image copies are not stored in RMAN backup pieces but as actual datafiles, and are therefore bit-for-bit copies.
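
The operational payoff of image copies being real datafiles is that RMAN can switch to a copy instead of restoring it, which is near-instant. The command file below is a generic recovery sketch, not part of the demo scripts; datafile 4 (USERS in the demo database) is picked only as an example:

```shell
#!/bin/sh
# Sketch: generate an RMAN command file that repairs a lost datafile by
# switching to its image copy, then recovering it forward with archived logs.
CMD_FILE=switch_to_copy.cmd

cat > "$CMD_FILE" <<'EOF'
# Take the damaged datafile offline, point the control file at the
# image copy, then recover and bring it back online:
SQL 'alter database datafile 4 offline';
SWITCH DATAFILE 4 TO COPY;
RECOVER DATAFILE 4;
SQL 'alter database datafile 4 online';
EOF

echo "Wrote $CMD_FILE - run with: rman target / cmdfile=$CMD_FILE"
```

Because no data is copied back during the SWITCH, the outage is limited to the recovery of changes since the copy was taken.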

Image Copy Demo

We take an image copy backup using my script 4_image_copy.sh:

[oracle@dc1sbxdb001 demo]$ ./4_image_copy.sh 
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base remains unchanged with value /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database, then next the image copy is taken:

------------------------------
Step 2: Take Image Copy Backup
------------------------------

Content of 4_image_copy.cmd file:

BACKUP AS COPY DATABASE TAG 'IMAGE COPY';
HOST 'read Press Enter to LIST BACKUP';
LIST COPY;

Press Enter to continue

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/4_image_copy.cmd'

Recovery Manager: Release 12.2.0.1.0 - Production on Fri May 17 16:54:31 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

RMAN> BACKUP AS COPY DATABASE TAG 'IMAGE COPY';
2> HOST 'read Press Enter to LIST BACKUP';
3> LIST COPY;
4> 
Starting backup at 17-MAY-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=32 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
output file name=/u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_system_gfxp9txc_.dbf tag=IMAGE COPY RECID=1 STAMP=1008521678
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
output file name=/u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_sysaux_gfxpb246_.dbf tag=IMAGE COPY RECID=2 STAMP=1008521685
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
output file name=/u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_undotbs1_gfxpb9b0_.dbf tag=IMAGE COPY RECID=3 STAMP=1008521691
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
output file name=/u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_users_gfxpbdgo_.dbf tag=IMAGE COPY RECID=4 STAMP=1008521693
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 17-MAY-19

Starting Control File and SPFILE Autobackup at 17-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_17/o1_mf_s_1008521693_gfxpbfs7_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 17-MAY-19

host command complete

We use HOST within RMAN simply to wait for input before moving on, for demo purposes.  Next we list the backup within RMAN using ‘LIST COPY’:

specification does not match any control file copy in the repository
List of Datafile Copies
=======================

Key File S Completion Time Ckp SCN Ckp Time Sparse
------- ---- - --------------- ---------- --------------- ------
1 1 A 17-MAY-19 458020 17-MAY-19 NO 
Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_system_gfxp9txc_.dbf
Tag: IMAGE COPY

2 2 A 17-MAY-19 458023 17-MAY-19 NO 
Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_sysaux_gfxpb246_.dbf
Tag: IMAGE COPY

3 3 A 17-MAY-19 458027 17-MAY-19 NO 
Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_undotbs1_gfxpb9b0_.dbf
Tag: IMAGE COPY

4 4 A 17-MAY-19 458029 17-MAY-19 NO 
Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/datafile/o1_mf_users_gfxpbdgo_.dbf
Tag: IMAGE COPY

List of Archived Log Copies for database with db_unique_name ZEDDBA
=====================================================================

Key Thrd Seq S Low Time 
------- ---- ------- - ---------
3 1 15 A 16-MAY-19
Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/archivelog/2019_05_17/o1_mf_1_15_gfxp940y_.arc

Recovery Manager complete.

Press Enter to continue

Finally, we update the demo log table:

-------------------------------------
Step 3: Updating and viewing demo log
-------------------------------------
Calling 'sqlplus / as sysdba'
To updated and view demo log

1 row created.

Commit complete.

WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode
16-MAY-19 05.40.23.000000 PM Full Backup
16-MAY-19 05.46.17.000000 PM Full Backup plus Archive Logs
17-MAY-19 04.55.12.000000 PM Image Copy

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$
Reference Scripts
  1. 4_image_copy.sh
  2. 4_image_copy.cmd

To download all 2 in one zip: 4_image_copy.zip

The rest of the series
  1. Oracle Database File Placement Best Practice & Create Database Demo
  2. RMAN Full Backup & Demo
  3. RMAN Image Copy & Demo
  4. RMAN Incremental & Demo Part 1 (Level 0)
  5. RMAN Incremental & Demo Part 2 (Level 1)
  6. RMAN Incremental with Block Change Tracking & Demo
  7. RMAN Incremental Differential vs Cumulative & Demo
  8. RMAN Incremental Updating Backup & Demo Part 1
  9. RMAN Incremental Updating Backup & Demo Part 2
  10. Flashback
  11. RMAN Block Media Recovery
  12. RMAN Recover database with only FRA
  13. RMAN Obsolete

Please Note: Links to the blog posts will be released daily and updated here.

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Categories: DBA Blogs

[Solved] EBS R12.2: TNS-12560: TNS:protocol adapter error, TNS-00584: Valid node checking configuration error

Online Apps DBA - Fri, 2019-05-24 06:59

[Solved] EBS R12.2: TNS-12560: TNS: protocol adapter error, TNS-00584: Valid node checking configuration error Are you an Apps DBA and have encountered with TNS: protocol adapter error, TNS-00584 & searching for the Solution? Then our New Blog Post at https://k21academy.com/appsdba56 can help you to figure out: ✔ What is this Issue, that occurs while starting […]

The post [Solved] EBS R12.2: TNS-12560: TNS:protocol adapter error, TNS-00584: Valid node checking configuration error appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Re-partitioning – 18

Jonathan Lewis - Fri, 2019-05-24 06:50

In yesterday’s note on the options for converting a range-partitioned table into a composite range/list partitioned table I mentioned that you could do this online with a single command in 18c, so here’s some demonstration code to support that claim:


rem
rem     Script:         pt_comp_from_pt_18.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem

create table pt_range (
        id              number(8,0)     not null,
        grp             varchar2(1)     not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range(id) (
        partition p200 values less than (200),
        partition p400 values less than (400),
        partition p600 values less than (600)
)
;

insert into pt_range
select
        rownum - 1,
        mod(rownum,2),
        lpad(rownum,10,'0'),
        rpad('x',100,'x')
from
        all_objects
where
        rownum <= 600 -- > comment to avoid WordPress format issue
;

commit;

alter table pt_range modify
partition by range(id) interval (200)
subpartition by list (grp) 
subpartition template (
        subpartition p_0 values (0),
        subpartition p_1 values (1),
        subpartition p_2 values (2),
        subpartition p_def values (default)
)
(
        partition p200 values less than (200)
)
-- online
;

execute dbms_stats.gather_table_stats(null, 'pt_range', granularity=>'all')

break on partition_name skip 1

select  partition_name, subpartition_name, num_rows 
from    user_tab_subpartitions 
where   table_name = 'PT_RANGE'
order by
        partition_name, subpartition_name
;

Run this (with or without the online option) and you’ll (probably) see the Oracle error “ORA-00604: error occurred at recursive SQL level 1” with one of two underlying errors:

    ORA-01950: no privileges on tablespace 'SYSTEM'
    ORA-01536: space quota exceeded for tablespace 'SYSTEM'

So what’s gone wrong – it ought to work.

After enabling a 10046 trace I repeated the “alter table” command then scanned the trace file for the text “err=1950” (that being the error I’d received on my first attempt) and scanned backwards for the “PARSING IN CURSOR” line with a matching cursor id:


ERROR #139721552722200:err=1950 tim=26541227462

PARSING IN CURSOR #139721552722200 len=182 dep=1 uid=104 oct=1 lid=0 tim=26541224560 hv=2451601965 ad='7f1377267890' sqlid='0wsjfgk920yjd'
create table  "TEST_USER"."SYS_RMTAB$$_H124028"  ( src_rowid rowid not null , tgt_rowid rowid not null) 
    segment creation immediate nologging 
    tablespace  "SYSTEM"  
    rowid_mapping_table
END OF STMT

The code is trying to create a “rowid_mapping_table” in the system tablespace and I have no quota for the tablespace. (The 124028 in the table name relates to the object_id of the table I was trying to modify, by the way.)

The source of the error offered a big clue about a possible workaround, so I gave myself a quota (unlimited) on the system tablespace (alter user test_user quota unlimited on system) and that made it possible for the restructuring to take place. It’s not really an appropriate workaround for a production system though – especially if you’re using the online option and the table is subject to a lot of change.  (Note – this “rowid_mapping_table” and a “journal” table are created even if you haven’t selected the online option.)
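
The quota workaround described above, wrapped for sqlplus as a minimal sketch. TEST_USER matches the trace above; sqlplus is only invoked if it is present, otherwise the generated script is left for a DBA to review:

```shell
#!/bin/sh
# Sketch: grant the SYSTEM quota needed for the online repartition, and the
# closest available "undo" afterwards (there is no revoke-quota command).
SQL_FILE=quota_workaround.sql

cat > "$SQL_FILE" <<'EOF'
-- Allow the recursive mapping table to be created in SYSTEM ...
alter user test_user quota unlimited on system;
-- ... and after the restructure, the best we can do is a zero quota:
alter user test_user quota 0 on system;
EOF

if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s "/ as sysdba" @"$SQL_FILE"
else
  echo "sqlplus not found - review $SQL_FILE and run it as a DBA"
fi
```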

Footnotes
  • The problem has been fixed in 19c (tested on LiveSQL)  and is listed on MoS as Bug 27580976 : INTERNAL RECURSIVE MAPPING TABLE MISTAKENLY PLACE IN SYSTEM FOR ONLINE OPS. There are no patches for 18c at present.
  • After I’d finished testing the quota workaround I tried to deprive myself of the quota on the system tablespace. I may have missed something in the manuals but it looks like the only way to do this is to give myself a zero quota (or, as I have done occasionally in the past, drop user cascade) because there’s no option for “quota denied” or “revoke quota”. This is why you may get one of two different messages after the ORA-00604. If you’ve never had a quota on the system tablespace you’ll get the “ORA-1950: no privileges” message; if you’ve had a quota at some time in the past and then had it set to zero, you’ll get the “ORA-01536: space quota exceeded” message.

 

The problem with Googling for solutions

Tim Hall - Fri, 2019-05-24 05:35

I started to write a post, then realised I’ve already written it several times before, with the most coherent of them here.

So instead I’m going to change it up a little and tell a story.

I’m a generalist, and as you will know it’s really hard to be good at everything, so clearly there are some things I’m “not so good at”. Like most people, I use Google a lot, and my Google-fu is strong.

A couple of weeks ago we did a security scan of an existing system, which revealed some security flaws. It was a non-Oracle product, so I didn’t have a recipe to follow and I started Googling for solutions. The product in question is very popular, and there were lots of responses to my Google search, with most of the top results coming from Stack Exchange (Stack Overflow). Happy days, I thought, as the Stack Exchange sites are effectively peer-reviewed, in that the correct answers are usually up-voted.

I looked at the first few different threads and people were saying the same thing. The highest up-voted answer on each thread gave a very direct and simple parameter value to solve the issue I had, so I was happy…

I followed the advice, set the parameter, restarted the service and tested. It didn’t do what everyone claimed it would. Armed with the parameter name, I searched the product documentation, and clearly the parameter didn’t do what the Stack Exchange answers said it did.

That seemed very odd, so I assumed these must be answers that were correct for an old version of the product. I checked the docs for previous versions. Same result. After reading the docs I found the real answer, implemented it, tested it and it worked.

What is really worrying is that the answers on several threads on Stack Exchange were wrong. Those incorrect answers had been up-voted by lots of people, which suggests readers agreed with them, even though the solutions could *never* have worked. This seems to indicate one of two things to me.

  • People read the answer, found it plausible (which it was), and up-voted it without trying it.
  • People actually used this solution, thought it was correct and up-voted it, but clearly never tested their systems, or they would have seen it didn’t work and that they still have the same security flaw.

One of the things I say in that post linked above is.

“Remember, even when you have built up a list of trusted sources, you should still constantly test what they say. Everyone can make mistakes.”

That’s really important because the internet is full of great information, but it’s also full of bullshit. Being able to tell the difference is really important, and the only way to do that is to test it, or do further research if it’s something you can’t test for yourself…

Cheers

Tim…

The problem with Googling for solutions was first posted on May 24, 2019 at 11:35 am.

Latest Blog Posts by Oracle ACE Associates - May 5-11, 2019

OTN TechBlog - Fri, 2019-05-24 05:00
Magic Beans?

This is the last of three posts that list blog posts published by members of the Oracle ACE Program, May 5-11, 2019. Normally I publish the complete list of all posts by ACE Program members during a given week. But during that particular week these folks must have been hyper-caffeinated or revved up on particularly effective nutritional supplements. Whatever the cause, they generated an unusually large number of posts -- more than 70 in a single week. Whatever it takes!

 

Oracle ACE Associate Alfredo Abate
Senior Oracle Systems Architect, Brake Parts Inc LLC
McHenry, Illinois

 

Oracle ACE Associate Omar Shubeilat
Cloud Solution Architect EPM, PrimeQ (ANZ)
Sydney, Australia

 

Oracle ACE Associate Robin Chatterjee
Head of Oracle Exadata Centre of Excellence, Tata Consultancy Services
Kolkata, India

 

Sandra Flores
Arquitecto de soluciones, DevX MX
Mexico City, Mexico

 

Oracle ACE Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

 
Additional Resources

Demo: GraphQL with Oracle Database SODA and node-oracledb

Christopher Jones - Fri, 2019-05-24 02:18

This is a GraphQL demo that stores data in an Oracle Database SODA collection. GraphQL is a query language for services. Oracle Database SODA is a NoSQL-like API for storing documents in Oracle Database. The demo follows on from my previous "Demo: GraphQL with Oracle Database and node-oracledb" which shows the same demonstration using relational tables and SQL queries. Read that post to learn about GraphQL and GraphQL queries. Also see the associated SODA presentation Getting Started with GraphQL APIs on Oracle Database with Node.js which shows some SODA features.

Demo prerequisites:

  • Oracle Database 18.3, or later

  • Oracle Client libraries 18.5 or 19.5, or later

Steps to run the demo:

  1. Download the source code from here and extract it with tar -xf graphql_oracle_soda.tgz

  2. Change to the graphql_oracle_soda directory, edit dbconfig.js and set your database schema credentials.

  3. Run npm install. This will install dependencies and run setup.js to create the SODA schema.

  4. Run npm start to start the GraphQL server.

  5. Load the URL http://localhost:3000/graphql

  6. In the browser, execute GraphQL requests shown in SAMPLE_QUERIES.txt
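For instance, a query along these lines fetches a single blog by its id (illustrative only; the actual requests are in SAMPLE_QUERIES.txt, and the field names here are assumed from the schema described in the previous post):

```graphql
query {
  blog(id: 1) {
    id
    title
    content
  }
}
```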

Since my previous blog discussed the behavior of the GraphQL inputs and outputs, let's look at the SODA implementation. In setup.js a collection is created:

const sodaCollectionName = "myBlogs";

let collection = await soda.openCollection(sodaCollectionName);

This method will actually open the collection if it already exists, so the next commands drop the collection so the demo can recreate it and run with a known, clean collection:

await collection.drop();

await conn.execute(
  `begin
     execute immediate 'drop sequence "mySequence"';
   exception
     when others then
       if sqlcode <> -2289 then raise; end if;
   end;`);

The cleanup is in two parts because the collection that will be created uses a sequence for the document keys. While SODA will auto-create the sequence, the collection.drop() method doesn't automatically drop the created sequence.
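The "drop it if it exists, and swallow only the 'does not exist' error" pattern used by that PL/SQL block is a general one. A minimal standalone sketch in plain JavaScript (hypothetical helper names, no database required):

```javascript
// Sketch of the "drop, but ignore 'does not exist'" pattern.
// dropFn is any async drop operation; notFound decides which
// errors are safe to swallow (e.g. ORA-02289 for a sequence).
async function dropIgnoringNotFound(dropFn, notFound) {
  try {
    await dropFn();
    return true;                     // object existed and was dropped
  } catch (err) {
    if (!notFound(err)) throw err;   // unexpected error: re-raise
    return false;                    // object was already gone
  }
}
```

In the demo, the anonymous PL/SQL block plays this role, treating ORA-02289 ("sequence does not exist") as the ignorable case and re-raising anything else.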

The collection is then created using custom metadata:

const metaData = {
  "schemaName": config.user.toUpperCase(),
  "tableName": "myBlogs",
  "keyColumn": { "name": "ID", "assignmentMethod": "SEQUENCE",
                 "sequenceName": "mySequence" },
  "contentColumn": { "name": "JSON_DOCUMENT", "sqlType": "BLOB",
                     "compress": "NONE", "cache": true,
                     "encrypt": "NONE", "validation": "STANDARD" },
  "versionColumn": { "name": "VERSION", "method": "SHA256" },
  "lastModifiedColumn": { "name": "LAST_MODIFIED" },
  "creationTimeColumn": { "name": "CREATED_ON" },
  "readOnly": false
};

collection = await soda.createCollection(sodaCollectionName, {metaData: metaData});

I only needed to specify the metadata because I wanted to change the default key generation from a SHA hash to be a sequence, to align with GraphQL's use of integers for identifiers.

The final part of the setup is creation of a couple of documents in the collection:

await collection.insertOne({"title": "Blog Title 1", "content": "This is blog 1"});
await collection.insertOne({"title": "Blog Title 2", "content": "This is blog 2"});

I could have done all the setup in a SQL file, but decided to be more Node.js-centric in this particular demo.

Let's look at the GraphQL code in graphql_oracle_soda.js. The GraphQL typedefs and resolvers are the same as the previous SQL example - as is the output from running the demo. The updated helpers that interact with the database are the interesting parts.

In the new SODA example, the helper to get one document aka blog is:

async function getOneBlogHelper(id) {
  let conn = await oracledb.getConnection();
  let soda = conn.getSodaDatabase();
  let collection = await soda.openCollection(sodaCollectionName);
  let document = await collection.find().key(id.toString()).getOne();
  let j = null;
  if (document) {
    let c = document.getContent();
    j = {id: id, title: c.title, content: c.content};
  }
  await conn.close();
  return j;
}

The SODA filter find() is used to locate the SODA document with the requested id. The document content is extracted with getContent() and the document values are mapped to a JavaScript object that is returned back up via the GraphQL resolver. The other helpers are similarly straightforward.
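Stripped of the database calls, the resolver's remaining job is just mapping a document's content to the GraphQL blog shape. Roughly (a hypothetical standalone version of that mapping, not code from the demo):

```javascript
// Map a SODA-style document id and content object to the GraphQL
// blog shape. Returns null when no document was found, mirroring
// what getOneBlogHelper() returns in that case.
function documentToBlog(id, content) {
  if (!content) return null;
  return { id: id, title: content.title, content: content.content };
}
```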

This simple demo shows how to use some of Oracle's exciting SODA features. SODA APIs are available in a number of languages including Java and Python, letting you access your stored data from familiar environments. If you do interesting things with SODA, let us know.

Latest Blog Posts by Oracle ACEs - May 5-11, 2019

OTN TechBlog - Thu, 2019-05-23 11:20
Skills on display...

As I explained in Tuesday's blog post, members of the Oracle ACE Program were extraordinarily busy blogging the week of May 5-11, 2019, so much so that I've had to break the list of the latest posts into separate lists for the three levels in the program: ACE Directors, ACEs, and ACE Associates. Tuesday's post featured the ACE Director posts. Today it's the ACEs' turn. Posts from ACE Associates will follow soon.

 

Oracle ACE Anoop Johny
Senior Developer, Wesfarmers Chemicals, Energy & Fertilisers
Perth, Australia

Oracle ACE Anton Els
VP Engineering, Dbvisit Software Limited
Auckland, New Zealand

 

Oracle ACE Atul Kumar
Founder & CTO, K21 Academy
London, United Kingdom

 

Oracle ACE Jhonata Lamim
Senior Oracle Consultant, Exímio IT Solutions
Brusque, Brazil

 

Oracle ACE Kyle Goodfriend
Vice President, Planning & Analytics, Accelytics Inc.
Columbus, Ohio

 

Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France

 

Oracle ACE Marcelo Ochoa
System Lab Manager, Facultad de Ciencias Exactas - UNICEN
Buenos Aires, Argentina

 

Oracle ACE Marco Mischke
Group Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

 

Oracle ACE Martin Berger
Database Teamleader, Hutchison Drei Austria GmbH
Vienna, Austria

 

Oracle ACE Miguel Palacios
General Manager, Global Business Solutions Perú
Peru

 

Oracle ACE Peter Scott
Principal/Owner, Sandwich Analytics
Marcillé-la-Ville, France

 

Oracle ACE Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Oracle ACE Rodrigo De Souza
Solutions Architect, Innive Inc.
Tampa, Florida

 

 

Additional Resources
