Feed aggregator

ATG Rollup 4 and my Custom schema

Fadi Hasweh - Mon, 2007-07-16 01:38
After applying the ATG Rollup 4 patch no. 4676589 on our HP-UX server successfully, we started to receive the following error, but only on our customized schema and not on the standard schemas.
The error appeared whenever we tried to run any procedure from this customized schema; we kept getting the following, even though it used to work fine before the patch:
"
ORA-00942: table or view does not exist
ORA-06512: at "APPS.FND_CORE_LOG", line 23
ORA-06512: at "APPS.FND_CORE_LOG", line 158
ORA-06512: at "APPS.FND_PROFILE", line 2468
ORA-06512: at "APPS.XX_PACKAGE_PA", line 682
ORA-06512: at line 4
"

After checking on Metalink we got a hint from note 370000.1. The note does not apply to exactly the same case, but it did give us the hint, and the solution was as follows:

connect as APPLSYS
GRANT SELECT ON FND_PROFILE_OPTIONS TO SUPPORT;
GRANT SELECT ON FND_PROFILE_OPTION_VALUES TO SUPPORT;


SUPPORT is my customized (custom) schema.
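To confirm the grants are in place afterwards, a quick dictionary check (using the same grantee and object names as above) is:

SELECT grantee, table_name, privilege
FROM   dba_tab_privs
WHERE  grantee = 'SUPPORT'
AND    table_name IN ('FND_PROFILE_OPTIONS', 'FND_PROFILE_OPTION_VALUES');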
Have an error free day ;-)

fadi

Blogging away!

Menon - Sat, 2007-07-14 18:00
For a long time, I wanted to create a web site with some articles that reflected my thoughts on databases and J2EE. During the 15-odd years of my experience in the software industry, I have realized that there is a huge gap between the middle-tier folks in Java and the database folks (or the backend folks). In fact my book - Expert Oracle JDBC Programming - was largely inspired by my desire to fill this gap for Java developers who develop Oracle-based applications. Although most of my industry experience has been in developing Oracle-based applications, during the last 2 years or so I have had the opportunity to work with MySQL and SQL Server databases as well. This has given me a somewhat unique perspective on developing Java applications that use a database (a pretty large spectrum of applications).

This blog will contain my opinions on this largely controversial subject (think database-independence for example), on good practices related to Java/J2EE and database programming (Oracle, MySQL and SQL Server). From time to time, it will also include any other personal ramblings I may choose to add.

Feel free to give comments on any of my posts here.

Enjoy!

Using dbx collector

Fairlie Rego - Sat, 2007-07-14 08:45
It is quite possible to have a single piece of SQL which consumes more and more CPU over time without an increase in logical I/O for the statement or an increase in hard parsing.

The reason could be extra CPU burned over time in an Oracle source code function which has not been instrumented as a wait in the RDBMS kernel. One way to find out which function in the Oracle source code is the culprit is via the dbx collector feature available in Sun Studio 11. I guess dtrace would also help, but I haven't played with it. This tool can also be used to diagnose increased CPU usage of Oracle tools across different RDBMS versions.

Let us take a simple example of how to run this tool against a simple insert statement.

SQL> create table foo ( a number);

Table created.

> sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Sat Jul 14 23:46:03 2007

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Enter user-name: /

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> set sqlp sess1>>
sess1>>

Session 2
Find the server process servicing the previously spawned sqlplus session and attach to it via the debugger.

> ps -ef | grep sqlplus
oracle 20296 5857 0 23:47:38 pts/1 0:00 grep sqlplus
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus
> ps -ef | grep 17205
oracle 20615 5857 0 23:47:48 pts/1 0:00 grep 17205
oracle 17237 17205 0 23:46:04 ? 0:00 oracleTEST1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus

> /opt/SUNWspro/bin/dbx $ORACLE_HOME/bin/oracle 17237

Reading oracle
==> Output trimmed for brevity.

dbx: warning: thread related commands will not be available
dbx: warning: see `help lwp', `help lwps' and `help where'
Attached to process 17237 with 2 LWPs
(l@1) stopped in _read at 0xffffffff7bfa8724
0xffffffff7bfa8724: _read+0x0008: ta 64
(dbx) collector enable


Session 1
==================================================================
begin
for i in 1..1000
loop
insert into foo values(i);
end loop;
end;
/

Session 2
==================================================================

(dbx) cont
Creating experiment database test.3.er ...
Reading libcollector.so

Session 1
==================================================================
PL/SQL procedure successfully completed.

sess1>>exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

Session 2
=========

execution completed, exit code is 0
(dbx) quit

The debugger creates an experiment directory (test.3.er in this example).
You can analyse the collected data by using the Analyzer, which is a GUI tool.

> export DISPLAY=10.59.49.9:0.0
> /opt/SUNWspro/bin/analyzer test.3.er



You can also generate a callers-callees report using the following syntax

/opt/SUNWspro/bin/er_print test.3.er
test.3.er: Experiment has warnings, see header for details
(/opt/SUNWspro/bin/er_print) callers-callees

Before and after snapshots of the performance problem would help in identifying the function in the code which consumes more CPU with time.
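If the GUI is not available, a rough text-only alternative (an untested sketch; before.er and after.er stand for hypothetical experiments collected during a good and a bad period, and it assumes er_print accepts the functions command on the command line - otherwise run it interactively as above) is to dump the per-function CPU times from each experiment and diff the two:

/opt/SUNWspro/bin/er_print -functions before.er > before.txt
/opt/SUNWspro/bin/er_print -functions after.er > after.txt
diff before.txt after.txt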

Architectural Differences in Linux

Solution Beacon - Fri, 2007-07-13 16:55
In this second edition in the Evaluating Linux series of posts I want to discuss what is both one of the strengths and weaknesses of Linux, namely the architectural differences between it and the traditional UNIX platforms. The relevant architectural differences between Linux and UNIX (AIX, HP-UX, Solaris: take your pick) can be viewed as several broad categories: hardware differences, filesystem

Oracle Ireland employee # 74 signing off...

Donal Daly - Fri, 2007-07-13 08:29
I will shortly be starting my life outside Oracle after some 15 years there. My last day is today.

I've enjoyed it immensely and am proud of our accomplishments. It really doesn't seem like 15 years, and I have been lucky to work on some very exciting projects with some very clever people, many of whom have become friends. I look forward to hearing about all the new releases coming from Database Tools in the future.

Next it is two weeks holidays in France (I hope the weather gets better!) and then the beginning of my next adventure in a new company. More on that later.

I think I'll continue to blog on database tools topics.

Eclipse JSF Tools Turns 1.0

Omar Tazi - Thu, 2007-07-12 18:54
I would like to congratulate Raghu Srinivasan from Oracle (Eclipse JSF Tools Project Lead) and his team for helping the community produce its first official release of the JSF Tools Project. A couple of weeks ago the Eclipse Foundation announced the Europa release which, among other things, included Web Tools Platform (WTP) 2.0, of which the JSF Tools Project v1.0 is an important piece.

JSF Tools v1.0 is a key milestone as it simplifies the development of JavaServer Faces applications in the Eclipse environment. The highlights of this release include performance improvements, a new Web Page Editor, and a graphical editor for building HTML/JSP/JSF web pages. This release is also extensible by design: it comes with an extensibility framework that allows third-party developers to come up with their own enhancements.

This release is yet another milestone in delivering "productivity with choice" to our customers. For more information on other recent activities around Oracle's involvement with Eclipse check out this blog entry.

- Download Eclipse Europa: http://download.eclipse.org/webtools/downloads/drops/R2.0
- Release notes for Eclipse WTP 2.0:
http://www.eclipse.org/webtools/releases/2.0

Can a change in execution plan change the results?

Rob Baillie - Thu, 2007-07-12 08:15
We've been using Oracle Domain indexes for a while now in order to search documents and get back a ranked order of things that meet certain criteria. The documents are related to people, and we augment the basic text search with other filters and score metrics based on the 'people' side of things to get an overall 'suitability' score for the results in a search. Without giving too much away about the business I work with, I can't really tell you much more about the product than that, but it's probably enough background for this little gem.

We've known for a while that the domain index 'score' returned from a 'contains' clause is based not only on the document to which that score relates, but also on the rest of the set that is searched. An individual document score does not live in isolation; rather it lives in the context of the whole result set. No problem. As I say, we've known this for a while and so have our customers. Quite a while ago they stopped asking what the numbers mean and learned to trust them.

However, today we realised something. Since the results are affected by the result set that is searched, this means that the results can be affected by the order in which the optimizer decides to execute a query.

I can't give you a full end to end example, but I can assure you that the following is most definitely the case on one of our production domain indexes (names changed, obviously):

We have a two column table 'document_index', which contains 'id' and 'document_contents'. Both columns have an index: the ID being the primary key and the other being a domain index.

The following SQL gives the related execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, :1, 1 ) > 0
AND id = :2

SELECT STATEMENT
  TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
    DOMAIN INDEX SCOTT.DOCUMENT_INDEX_IDX01

However, the alternative SQL gives this execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, 'Some text', 1 ) > 0
AND id = :2

SELECT STATEMENT
  TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
    INDEX UNIQUE SCAN SCOTT.DOCUMENT_INDEX_PK

Normally, this kind of change in execution path wouldn't be a problem. But as stated earlier, the result of a score operation against a domain index is dependent not just on the individual records, but on the context of the whole result set. The first execution gives you a score for the single document in the context of all the documents in the table; the second gives you a score within the context of just that document. The scores are different.

Now obviously, this is an extreme example, but more subtle examples will almost certainly exist if you combine the domain index lookups with any other where clause criteria. This is especially true if you're using literal values instead of bind variables, in which case you may find the execution path changing between calls to the 'same' piece of SQL.

My advice? Well, we're going to split our domain index lookups from all the rest of the filtering criteria. That way we can prepare the set of documents we want the search to be within and know that the scoring algorithm will be applied consistently.
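One way to express that split in a single statement (an illustrative sketch only, reusing the table and bind names above and not verified against our real schema) is to score against the full document set first and only then apply the remaining filters, for example by materialising the scored set in a subquery:

WITH scored AS (
  SELECT /*+ MATERIALIZE */ id, SCORE( 1 ) AS text_score
  FROM   document_index
  WHERE  CONTAINS( document_contents, :1, 1 ) > 0
)
SELECT id, text_score
FROM   scored
WHERE  id = :2;

The MATERIALIZE hint is only there to discourage the optimizer from merging the subquery back into the outer filter; whether that is enough should be verified against the actual execution plan.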

How to OBFUSCATE passwords and ENCRYPT sensitive fields in BPEL PM?

Arvind Jain - Wed, 2007-07-11 15:19
Here is a small tip on security while using Oracle BPEL Process Manager.

Many times you have to supply passwords and other sensitive information in your BPEL PM project files (*.bpel, *.xml, *.wsdl). How do you ensure that these are not visible as clear text to others who do not have access to the source code? Here is a quick tip on using the encryption="encrypt" XML attribute.

Where can this be used?

- to obfuscate password info while accessing a partnerlink that refers to a WebService secured by Basic Authentication ... login/password.

Example:

Suppose you have a partnerlink definition with LOGIN/PASSWORD info as shown below, and you want to obfuscate the password, i.e. you do not want to see the clear text "cco-pass".

(sample)
<partnerLinkBinding name="PartnerProfileService">
<property name="wsdlLocation">PartnerProfileWSRef.wsdl</property>
<property name="basicUsername">cco-userid</property>
<property name="basicPassword">cco-pass</property>
<property name="basicHeaders">credentials</property>
</partnerLinkBinding>

Add the encryption="encrypt" attribute to sensitive fields; this will cause the value to be encrypted at deployment. The new XML will look like:


(sample)
<partnerLinkBinding name="PartnerProfileService">
<property name="wsdlLocation">PartnerProfileWSRef.wsdl</property>
<property name="basicUsername">cco-userid</property>
<property name="basicPassword" encryption="encrypt">cco-pass</property>
<property name="basicHeaders">credentials</property>
</partnerLinkBinding>


Then deploy your process and the password will be encrypted.
Have fun encrypting things !!

Backing Up and Recovering Voting Disks

Pankaj Chandiramani - Tue, 2007-07-10 21:31

Backing Up and Recovering Voting Disks

What is a voting disk and why is it needed?
The voting disk records node membership information. A node must be
able to access more than half of the voting disks at any time.

For example, if you have seven voting disks configured, then a node must
be able to access at least four of the voting disks at any time. If a
node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster.

Backing Up Voting Disks

When should you back up the voting disk?

  1.       After installation
  2.       After adding nodes to or deleting nodes from the cluster
  3.       After performing voting disk add or delete operations

To make a backup copy of the voting disk, use the Linux dd command. Perform this operation on every voting disk as needed where voting_disk_name is the name of the active voting disk and backup_file_name is the name of the file to which you want to back up the voting disk contents:
dd if=voting_disk_name of=backup_file_name

If your voting disk is stored on a raw device, use the device name in place of voting_disk_name. For example:
dd if=/dev/sdd1 of=/tmp/voting.dmp
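If you are not sure which files or devices are currently configured as voting disks, you can list them first (10g Release 2 Clusterware syntax):

crsctl query css votedisk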

Note : When you use the dd command for making backups of the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process before taking a backup of the voting disk.

Recovering Voting Disks

If a voting disk is damaged, and no longer usable by Oracle Clusterware, you can recover the voting disk if you have a backup file.

dd if=backup_file_name of=voting_disk_name

Categories: DBA Blogs

Another Trinidad Milestone

Omar Tazi - Tue, 2007-07-10 19:04
Last week the Apache MyFaces Trinidad team announced another milestone, the release of Trinidad v 1.2.1. This release comes with a JavaServer Faces 1.2 component library initially based on parts of Oracle's ADF Faces. Featured tags in this release include: breadcrumbs, navigation panels, panes, and tabbed panels. More tags can be found on this page. JSF 1.1 is still supported via Trinidad v 1.0.1.

Trinidad 1.2.1 binary and source distributions can be found in the central Maven repository under group id "org.apache.myfaces.trinidad". Downloads are available here.

If you need more frequent information on Trinidad, visit Matthias' blog.

Administering OCR

Pankaj Chandiramani - Mon, 2007-07-09 20:52

Administering OCR
We will see how OCR (Oracle Cluster Registry) backup and recovery is done.

Backup
Oracle Clusterware automatically creates an OCR backup every 4 hours and retains the last 3 backups. The CRSD process also creates and manages a backup for each full day and a weekly backup at the end of the week.
Default backup Location : $CRS_HOME/cdata/$clustername

Other than the automated backups, you can export the content to a file at any time.
eg : $ ocrconfig -export emergency_export.ocr

You can see the list of OCR backups by using:
$ ocrconfig -showbackup

The backup directory above is the default; you can change it with the command below:
$ ocrconfig -backuploc <directory>

Restore
The OCR can be restored (if you have a backup) with the command below.

NOTE: Should you need to restore, make sure all CRS daemons on all nodes are stopped.

To perform a restore, execute the command:

$ cd CRS_Home/cdata/crscluster
$ ocrconfig -restore  week.ocr

If you had exported using the command above and want to restore it, then you can use import.
IMPORTANT: Importing a backup when CRS daemons are running will only corrupt OCR.  

$ ocrconfig -import emergency_export.ocr

If anything is wrong, you can use the OCRDUMP command to dump all the info to a file and check it:
$ ocrdump OCR_DUMP

Also you can use :

$ ocrcheck
to check the status of the OCR.

Categories: DBA Blogs

On the Road and Upcoming Talks

Marcos Campos - Mon, 2007-07-09 20:51
This week I am going to be in San Francisco. I have been invited to give a talk at the San Francisco Bay ACM Data Mining SIG on Wednesday. The title of the talk is In-Database Analytics: A Disruptive Technology. Here is a link with information on the talk. On Friday morning, I am presenting at the ST Seminar at Oracle's headquarters. The title of that talk is In-Database Mining: The I in BI.
Categories: BI & Warehousing

SQL Techniques Tutorials: Pattern Matching Over Rows (New SQL Snippets Tutorial)

Joe Fuda - Mon, 2007-07-09 16:00

This topic was inspired by Tom Kyte's So, in your opinion ... blog post about a new SQL feature Oracle is considering (described at Pattern matching in sequences of rows).

I'll admit I've never tackled this kind of pattern matching before and I didn't understand the entire paper. It's a pretty dense read. From what I can tell though, using the new feature would be a lot like applying regular expressions to rows of values. This got me thinking. Instead of adding a whole new feature for this, why not simply convert the rows into strings and then use existing regular expression support to do the pattern matching?
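To illustrate the general idea before getting to the solution (an illustrative sketch only, using a hypothetical ticker(symbol, day_no, price) table and the LISTAGG function, which arrived in a later release than the one current when this was written): reduce each symbol's price history to a string of movement flags, then search that string with a regular expression.

SELECT symbol
FROM  (SELECT symbol,
              LISTAGG(move) WITHIN GROUP (ORDER BY day_no) AS moves
       FROM  (SELECT symbol,
                     day_no,
                     -- flag each day as U(p), D(own) or S(ame) versus the prior day
                     CASE
                       WHEN price > LAG(price) OVER (PARTITION BY symbol ORDER BY day_no) THEN 'U'
                       WHEN price < LAG(price) OVER (PARTITION BY symbol ORDER BY day_no) THEN 'D'
                       ELSE 'S'
                     END AS move
              FROM   ticker)
       GROUP BY symbol)
WHERE REGEXP_LIKE(moves, 'D+U+');  -- symbols whose price fell and then rose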

Even if the feature described in the paper does something more sophisticated than this, tackling the requirement with existing functionality using simple string aggregation logic and regular expressions sounded like a fun challenge. Here's my stab at a solution.


...

Using SERVICE_NAMES in Oracle

Hampus Linden - Sun, 2007-07-08 12:42
The use of "SERVICE_NAMES" in Oracle is quite an old and probably well known feature but perhaps not everyone is familiar with it yet.
I got asked today about a recovery scenario where the administrator had a failed instance (broken data files, no logs, no backups, just a nightly exp). A new database had been created with dbca, but with a new name, to test importing the exp file.
All worked fine, but there was a problem with the database name. The application had the service name set in a number of config files, and there were also a number of ETL scripts with service names hardcoded. The thinking at the time was to delete the old instance, remove all traces of it (oratab etc.) and then create it *again* with the same name.
Now hold on here: we have tested the imp in a new database, all is fine, and all we want to do is allow connections using the old database instance name?
That's pretty much a one-liner, not a new database.
We can simply add the new name we want to "listen on" to the SERVICE_NAMES parameter.
Easy peasy.

OK, here is what we should do. It's quite a long example for something simple.
But hey, I just want to make it clear.
oracle@htpc:admin$ lsnrctl status
-- What's the current db_name and service_names?
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:31:22

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 7 min. 43 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:24 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string peggy
SQL> show parameter service_names

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string peggy
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- What can we connect to?
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/peggy

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:53 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:31:58 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:

-- Ouch, that's the one we want!

oracle@htpc:admin$ rsqlplus hlinden/hlinden as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:32:01 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- Here is the 'one-liner'
SQL> alter system set service_names='peggy,dog' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 599785472 bytes
Fixed Size 2022600 bytes
Variable Size 167772984 bytes
Database Buffers 423624704 bytes
Redo Buffers 6365184 bytes
Database mounted.
Database opened.
SQL>
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

-- Let's see what changed. What can we connect to now?
oracle@htpc:admin$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-JUL-2007 18:33:57

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 08-JUL-2007 18:23:39
Uptime 0 days 0 hr. 10 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/10g/network/admin/listener.ora
Listener Log File /u01/oracle/10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=htpc)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "dog" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy" has 2 instance(s).
Instance "peggy", status UNKNOWN, has 1 handler(s) for this service...
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggyXDB" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
Service "peggy_XPT" has 1 instance(s).
Instance "peggy", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@htpc:admin$ rsqlplus hlinden/hlinden@//htpc/dog

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 8 18:34:18 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-- It works, but where are we?

SQL> select sys_context('userenv','SERVICE_NAME') from dual;

SYS_CONTEXT('USERENV','SERVICE_NAME')
------------------------------------------------------------------------------------------------------------------------
dog

SQL> select sys_context('userenv','DB_NAME') from dual;

SYS_CONTEXT('USERENV','DB_NAME')
------------------------------------------------------------------------------------------------------------------------
peggy
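
As an aside (an untested variation on the example above): SERVICE_NAMES is a dynamic parameter, so with an spfile you can usually avoid the bounce by changing it in memory and in the spfile at once and then asking the instance to re-register with the listener:

SQL> alter system set service_names='peggy,dog' scope=both;
SQL> alter system register;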

Issues while installing oracle redhat linux

Fadi Hasweh - Sun, 2007-07-08 04:25
A couple of days ago my friend Ghassan, who is also a certified Apps 11i DBA, and I were doing an installation on my new PC. While installing Oracle Linux on the new Intel 965 motherboard, the installer froze at the "PCI: Probing PCI hardware (bus 00)" stage. After checking the Red Hat forums we found out that we needed to start the installation with the following command from the command line to overcome this issue:
linux all-generic-ide pci=nommconf
That solved our issue. After the installation finished and the system rebooted, we had to do the following at the GRUB screen:

At the GRUB menu, select the kernel you want to edit and press (e) to edit it. Then move the cursor down to the "kernel" line, press (e) to edit that line, and add all-generic-ide pci=nommconf to the end of the kernel line.
Then press ENTER to accept the changes, and press (b) to boot.

Once the server booted successfully, we made the change permanent by adding all-generic-ide pci=nommconf to the kernel line in the /boot/grub/grub.conf file.
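For illustration only (a hypothetical grub.conf kernel entry; your kernel version, root device and other options will differ), the edited line ends up looking something like this:

kernel /vmlinuz-2.6.9-55.EL ro root=/dev/VolGroup00/LogVol00 rhgb quiet all-generic-ide pci=nommconf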

I guess this issue is not only related to Oracle Linux but applies in general to all Red Hat installs on this Intel motherboard (from what we saw on the Red Hat forums). Anyway, after that we did the installation of the Vision database, and the installation went successfully but with one small issue (RW-50004) at step 2 of 5.
The log showed ORA-01031: insufficient privileges. This error was down to a typo in the oracle user's group: the group was ordba instead of oradba, so we fixed that and the installation completed successfully.


Hope that helped, and thank you guys for your help.
Fadi
P.S. I got the Red Hat issue resolved mostly from http://www.linuxquestions.org/questions/showthread.php?t=479778 and some other forums.

SOA Suite 10.1.3.3 patchset and "lost" instances

Clemens Utschig - Fri, 2007-07-06 14:09
It has been a long time since I last blogged - mainly due to the huge number of miles flown over the last 2 months.

After being in Europe for conferences, where I did evangelism on our next generation infrastructure based on Service Component Architecture, it was time to revisit my roots and do some consulting on a POC as well as help one of our customers with their performance problems.

Meanwhile, while I was on the road, we released the 10.1.3.3 patchset, which includes - among many small fixes here and there - some really cool enhancements, e.g.
  1. A fault policy framework for BPEL, which allows you to specify policies for faults (e.g remoteFault) outside of the BPEL process and trigger retry activities, or submit the activity with a different payload from the console.

  2. Performance fixes for Oracle ESB - which boost the performance way up

  3. Several SOAP related fixes, especially around inbound and outbound WS-Security - if you happen to have Siebel and use WS-Sec features, the patch will make you happy

  4. Adapters: several huge improvements on performance and scalability

  5. BPEL 2 ESB: several fixes with transaction propagation as well as more sophisticated tracking
You can download it from metalink - patch number is 6148874

After working on 10.1.3.3 for the last 3 weeks, we added an enhancement to implement a federated ESB, where one ESB system binds to another via UDDI. The enhancement request's number is 6133448; it will be part of 10.1.3.4 (our next patch release) and works exactly the way it works today in BPEL 10.1.3.1.

Back to my performance adventure.
The customer reported that under high load on their 10.1.3.1 instance, a lot of async instances (that were submitted to the engine) "got lost", meaning they could not find any trace of a running instance, nor had the target systems that were called from the process been updated. Strange, isn't it?

A quick look into the recovery queue (basically a select against the invoke_message table) revealed that a lot of instances had been scheduled (status 0) - but somehow they stayed in the queue. Huh, why? Restarting the server helped: some instances were created, but still way too many weren't.

Checking the settings that we preseed, we figured out that there is an issue with them. Looking into the Developer's Guide, it states:

"the sum of dspMaxThreads of ALL domains should be <= the number of listener threads on the workerbean".

Hmm - checking orion-ejb-jar.xml, section workerbean, in the application-deployments/orabpel/ejb_ob_engine folder revealed
  1. there are no listener-threads set and
  2. there are 40 ReceiverThreads
What does that mean? Given that we seed each domain with dspMaxThreads at 100, if you have five domains, 500 workerbean threads would be needed - way too much. And what happened to listener-threads?

<message-driven-deployment name="WorkerBean" instances="100" resource-adapter="BPELjms">

A quick check with the JMS engineering team enlightened me on that. As we use JMS connectors now, you need to change ReceiverThreads to match the above formula:

<config-property>
  <config-property-name>ReceiverThreads</config-property-name>
  <config-property-value>40</config-property-value>
</config-property>

- and tune the dispatcherThreads on the domains to a reasonable level.

Next question: what are dispatcherThreads, and what does the engine need them for?

"ReceiverThreads specifies the maximum number of MDBs that can process BPEL requests asynchronously across all domains. Each domain can allocate a subset of these threads using the dspMaxThreads property; however, the sum of dspMaxThreads across all domains must not exceed the ReceiverThreads value.

When a domain decides that it another thread to execute an activity asychronously, it will send a JMS message to a queue; this message then gets picked up by a WorkerBean MDB, which will end up requesting the dispatcher for some work to execute. If the number of WorkerBean MDBs currently processing activities for the domain is sufficient, the dispatcher module may decide not to request for another MDB. The decision to request or an MDB is based on the current number of active MDBs, the current number pending (that is, where a JMS message has been sent but an MDB has not picked up the message), and the value of dspMaxThreads.

Setting both ReceiverThreads and dspMaxThreads to an appropriate value is important for maximizing throughput and minimizing thread context switching. If there are more dspMaxThreads specified than ReceiverThreads, the dispatcher modules for all the domains will think there are more resources they can request for than actually exist. In this case, the number of JMS messages in the queue will continue to grow as long as request load is high, thereby consuming memory and cpu. If the value of dspMaxThrads is not sufficient for a domain's request load, throughput will be capped.
Another important factor to consider is the value for ReceiverThreads - more threads does not always correlate with higher throughput. The higher the number of threads, the more context switching the JVM must perform. For each installation, the optimal value for ReceiverThreads needs to be found based on careful analysis of the rate of Eden garbage collections and cpu utilization. For most installation, a starting value of 40 should be used; the value can be adjusted up or down accordingly. Values greater than 100 are rarely suitable for small to medium sized boxes and will most likely lead to high cpu utilization just for JVM thread context switching alone."

With all the above in place, and a tuned dehydration store, we got them back on track: even under high load all messages were picked up and ended up as instances. Recap:
  1. Make sure your ReceiverThreads setting matches the sum of dspMaxThreads across all domains, and that both are set appropriately.

  2. If you have external adapters in use that connect e.g. to AQ, make sure AQ is tuned and also the adapter - this is where you are most likely to get timeouts, which would also contribute to recoverable messages.

Verifying a Virtual X-Server (Xvfb) Setup

Solution Beacon - Thu, 2007-07-05 17:52
With E-Business Suite Release 11i and Release 12, an X-Server display is required for correct configuration. The application framework uses this for generating dynamic images, graphs, etc. It is also needed by reports produced in bit-map format. Note that for functionality using Java technology, the “headless” support feature can be implemented (requires J2SE 1.4.2 or higher). However, reports

Handy "Alert Debugging" tool

Rob Baillie - Wed, 2007-07-04 13:29
One of the coolest things about OO Javascript is that methods can be written to as if they are variables. This means that you can re-write functions on the fly. Bad for writing maintainable code if you're not structured; fantastic for things like MVC controllers (rather than using the controller to forward calls on to the model, you use it to rewire the view so that it calls the model directly, and all without the view even realising it!).

What I didn't realise was that the standard window object (and probably so many others out there) can have its methods overwritten like any other. Probably the simplest example of that proves to be incredibly useful... changing the alert function so that the dialog becomes a confirm window. Clicking cancel means that no further alerts are shown to the user. Great for when you're writing Javascript without a debugger and have to resort to 'alert debugging'.

window.alert = function(s) {
    // show the message as a confirm dialog; if the user clicks Cancel,
    // swap alert for a no-op so no further messages are shown
    if ( !confirm(s) ) window.alert = function() {};
};
In case you're wondering... I found it embedded in the comments on this post: http://www.joehewitt.com/blog/firebug_for_iph.php. Cheers Menno van Slooten

BAAG

Herod T - Wed, 2007-07-04 12:24

I joined the BAAG party a while back - Battle Against Any Guess.

Go and give it a read, especially those of you who send emails with a subject of PLZ HELP or URGENT PLZ or something similar.


Access migration to Application Express without direct SQL Access

Donal Daly - Wed, 2007-07-04 09:05
I got asked recently how to complete an Access migration when you don't have direct SQL access to the Oracle instance where Oracle Application Express is installed (e.g. apex.oracle.com).

For dealing with the application part, it is not an issue as the Application Migration Workshop feature of APEX (3.0+) allows you to load the results from the Oracle Migration Workbench Exporter for Microsoft Access, so you can capture the meta data for Access Forms and Reports. You can even download a copy of the exporter from the workshop itself.

The challenge is really the schema and data migration part using Oracle SQL Developer (1.2+). By default SQL Developer expects to be able to make a SQL connection to the target Oracle database. However I did think about this use case as we were designing this new Migration Workbench tool. I will describe a solution below.

The only requirement is that you have SQL access to some Oracle database (9iR2+), because the workbench is driven by an underlying migration repository. You could use the Express Edition of Oracle for this purpose, which is totally free, if you don't have SQL access to an existing Oracle database.

So let me outline the main steps involved:
  1. Start SQL Developer 1.2
  2. Make sure you set the following preference: Tools -> Preferences -> Migration -> Generation Options: Least Privilege Schema Migration
  3. Create a connection to your Access database. Make sure you can browse the tables in the access database and see the data
  4. Export the table data to csv format: For each table you want to migrate, use the context menu associated with tables to export as csv format. Make sure you select an encoding that matches your target database. I try to keep everything in UTF-8
  5. Create a connection to an Oracle schema.
  6. Create a migration repository in this connection. You can do this via the context menu on a connection
  7. From your Access connection, context menu, select: Capture Microsoft Access. This will launch the exporter and initiate the capture step of the migration.
  8. Take your captured model and now create an Oracle (converted) model by selecting the captured model and via the context menu: Convert to Oracle Model
  9. With your converted model, you can now create an object creation script using the context menu: Generate
  10. The result of step 9 is presented in a SQL Worksheet. You can edit this to remove objects you are not interested in, then via File -> Save As, save the contents to a SQL file.
  11. Log in to your APEX Workspace
  12. To execute the object creation script you have just created, go to SQL Workshop -> SQL Scripts -> Upload.
  13. Once the script is uploaded, view it and select the RUN action. This should create all your schema objects; view the results to make sure all the objects were created successfully. You should now be able to view these schema objects in the SQL Workshop -> Object Browser.
  14. To load the CSV files, use Utilities -> Data Load/Unload -> Load, selecting Load Spreadsheet Data. Do this for each table you want to load data into. Select Load To: Existing Table and Load From: Upload File. You may need to apply appropriate format masks to get the data to load properly.
Notes:
  1. You should complete the schema and data migration part of your migration prior to creating a migration project via the Application Migration Workshop.
  2. You may have some post-migration cleanup steps: if you had Access auto-increment columns in your tables, you will need to reset the values of the sequences that were created (see the sketch after these notes).
  3. Another option to explore, depending on your data, would be to export the data from the Access tables as SQL INSERT statements; then it is just a simple matter of uploading and running that SQL script via APEX.
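
For note 2, here is a minimal sketch of resetting one of the generated sequences after the data load (hypothetical names: a CUSTOMERS table keyed by ID and a CUSTOMERS_SEQ sequence created by the migration); it can be run from SQL Workshop -> SQL Commands:

DECLARE
  l_start NUMBER;
BEGIN
  -- restart the sequence just past the highest key loaded from Access
  SELECT NVL(MAX(id), 0) + 1 INTO l_start FROM customers;
  EXECUTE IMMEDIATE 'DROP SEQUENCE customers_seq';
  EXECUTE IMMEDIATE 'CREATE SEQUENCE customers_seq START WITH ' || l_start;
END;
/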
