Charles Schultz

Tales of a Sojourner in the land of Oracle

Debug mode for MOS

Wed, 2011-07-27 08:06
Had an SR in which I learned about a debug mode for FLASH MOS (I tried it in HTML, no go *grin*). Hold down the Control key and click on the My Oracle Support logo in the upper left-hand corner:


Here is a short video, recorded with CamStudio (Oracle's recommended tool):

MOS 5.3: Search and Knowledge

Wed, 2011-07-20 09:19
After my initial MOS 5.3 post, I received some most excellent feedback and an opportunity to speak with Jan Syssauw. He was exceedingly gracious and honored me with a 1-hour phone conference in which we covered many different aspects of Search. In response, I thought it would only be appropriate to follow up here.

I have a lot of respect for Jan (pronounced Yan). He is a very good listener and has a very professional demeanor, accepting my comments (and sometimes criticism) without betraying how personally he cares for his team and his product. And yet he does care - Jan very much wants Search and the KB to be an awesome experience, so I appreciate that he is reaching out to see what at least one user thinks.

In retrospect, I think I may have disproportionately emphasized certain features or areas by leveling the playing field (from my point of view), perhaps giving the impression that I thought all areas were equal and required an equal degree of attention. Let me correct that. :) I'll focus first on what I think the big issues are, and leave the small things for another time.

Search Suggestions
First and foremost, "Search Suggestions" are a huge improvement. I initially called these "type-ahead" or "autofill", but the proper term (so I have learned) is "search suggestions". I really like the direction they are going with this. We talked about some of the nuances; for example, the issue with versions I raised in my initial post (typing "Oracle server ent" gives me several suggestions, but none of them are 11g and only one is 10g). The reason Jan gave for this was that there are lots of suggestions that simply fall off the end of the list - if I set my preferences to see only 5 "search suggestions", even more of those suggestions fall off. Jan hopes that the more popular and more relevant suggestions will start to percolate to the top of the list. I still find it odd that Oracle 7.3 would supersede 11g.... *grin*

Need for Speed
Ok, so FLASH is REALLY slow for me, and I REALLY do not like the way FLASH was implemented for MOS. However, setting FLASH aside and focusing on Search, I was struck again by a need for speed. The two issues are closely related, but I am going to break them into two paragraphs to help spell it out.

First, there are times when, after typing in a somewhat generic term (e.g., "install"), I have to wait several seconds (I counted 9 seconds to retrieve 1.1 million records). What about first_rows_k?? *grin* Seriously, Oracle's flagship product is supposed to be the database, and they call themselves the "Information Company". While 9 seconds out of my life is peanuts, other companies (do I need to spell it out?) don't have you sitting around for a few seconds wondering how your fingernails got a little dirty. I have even filed a bug on a case where you can do a search on nothing and the backend server times out - as if I wanted to see every single document. *laugh*

Second, when we were talking about the limits for the maximum number of Search Suggestions and the Number of Records to Display, Jan mentioned a couple of times that setting the limits higher would negatively impact performance. How much? If I want, let's get extreme, 100 search suggestions and 1000 results per page, will my search slow down to 10 seconds, or 10 minutes?

Search needs to be fast! Freakishly fast. They need Jimmy Johns to come and tell them about fast! *grin*


Preferences, options, filters....
This one is quite tricky; I only partially found the right words when I spoke to Jan, and I hope I communicated it clearly enough. I and other customers have asked for more options, more flexibility, more controls. And by golly, Oracle delivered! We now have Preferences and various ways to set filters via PowerView, Product, Task/Intent, etc. In my opinion, the way these options are presented to the user is a bit confusing. Take PowerViews, for example. PowerViews came out a little while ago (MOS 5.1, 5.2?) and thus are a separate form of customization from, say, refining a search via the Knowledge Browser options (by Product, Task, etc). If I set a PowerView to view only Solaris platforms, that affects all my searches. So even though the PowerView summary is at the top of the page, I often forget I have a PowerView set and am rather puzzled when various search results are filtered out. Imagine trying to look for patches specific to Red Hat if you have a PowerView set for Solaris. :)

I do think it is awesome that Oracle is giving us, the customer, this much flexibility and control, and I applaud that. I only hope that we can work together to make the presentation of that power and flexibility more aesthetic and easier to grasp. In some ways, it is almost like going from an FPS game with simple controls to a flight simulator with tons of controls. I think this will be a matter of acclimation and user training for advanced users who want to take advantage of such knobs and levers; all other users wanting a quick, "just get me the results like Google" approach should not have to worry about these extra bells and whistles. Perhaps one way to approach this is to have an "Advanced Search" preference and consolidate all such advanced features in one area. Maybe. At the very least, have one preference to toggle the kitchen sink on and off. :)


And some lower-priority items
Some other low-hanging fruit: reduce confusion by consolidating the use of "Task" and "Intent". My guess is that a bunch of folks got together and brainstormed on what to call this, but could not come to a final decision without offending others, so they compromised. :) So I will say, as one customer, just call them Tasks. Forget about "Intent". In my mind, "intent" sounds like you want to practice your ESP or something. But you knew that before I typed it, right....

Redo the way an article is displayed. For a while now, articles have been sliding in from right to left, squeezing the left-hand side. Either get rid of that effect, or make it happen in 0.4 seconds max.

I like the "intent" (tongue in cheek) behind the "auto-detection" features. When on the conference with Jan, he demonstrated what this auto-detection is currently doing; for example, if you type "install" in the search box, you wait a few seconds than get a screenful of suggested articles. At the top of the list is a box asking you to clarify your intent (hmm... there is that word again), which can be used to help filter results. A similar thing happens if you search for database (refine by product, for example). This is somewhat similar to what we developed at "The Bridge".

MOS 5.3

Thu, 2011-07-14 01:01
re: Document 1326487.1.  My Oracle Support Release 5.3 Features

I knew that new features/enhancements were coming to MOS this past weekend, and I was looking forward to seeing what they were. I had heard that Search was improving. So when I read this Release Document, I was just a little underwhelmed at the level of detail. The "Release Highlights" all sound good, but I wanted meat. And then just today I realized that the details are actually in another document, My Oracle Support Release 5.3 Detailed Benefits Table (Doc ID 1329876.1)

There are more details in that document, but still not quite what I was hoping for. Nonetheless, I decided to look into them.

Please note, I am a bit cynical in my comments. I realize a lot of hard work went into these enhancements, and there is a ton of "stuff" going on behind the scenes that I do not know about.


[EDIT: Egads, Blogger really mucked up my tables!!! Holy off-the-page, batman!]


Knowledge preferences
  • Customize your preference page: the Knowledge preferences page on the Settings tab allows you to customize your knowledge settings
There are only 4 things you can customize, "Search Term Suggestion" (and associated number, max of 10), "Search Intent Clarification", "Search Result Set Length" (with a measly max of 25), and default "Searched Sources". That's it. I am disappointed. The "Searched Sources" only has two options, either the KB or ALL. Some preference. Better than nothing, I guess.



Support Identifier Management
  • Ability to delete multiple Support Identifiers at once
  • New window added for sizing control to change the number of rows displayed for Support Identifiers and managed users




Meh. Could be helpful for organizations with really large numbers of Support Identifiers to manage. Glad we don't have that many.




Search Improvements
  • Search Term Suggestion: the search box has a type-ahead feature offering search suggestions using the knowledge base dictionary
  • Faster search results






Here we go, the fabled Search Improvements!
I have put this to the test already and it does help a bit. The only downside is that it is only available in FLASH (of course), and FLASH slows things down for me.




Message Center on the Dashboard
  • This new feature consolidates all pending user actions for Automated Service Requests, Configurations, and new user Support Identifier requests




I had to look carefully for this - it is really small, in the upper right-hand corner, and often occluded by the annoying green slider message box that I was told was going away. Automated Service Requests (ASR) are the domain of Sun products, thus not relevant for me personally. I could not care less if a "pending user request" popped up, since I get emailed about them anyway. I am actively looking for a way to make this "Message Center" go away. Not sure why this is under the "Search Improvements" section.



Auto-detection of product names
  • Quickly refine your search results for a product


This is still a bit buggy. When I type "Oracle server ent", I get 8 products, 6 of which are more than 6 years old; nothing for 11g or 9i, and only one for 10g. "Quickly refine"?!? Or maybe I am simply using it incorrectly - it is not really obvious to me.


Auto-detection of common tasks and intents
  • Quickly refine your search results based on selected task


Not exactly sure how this works. Obviously it is not relevant for the search bar on every page, since there is no "task" selected. And it seems you can only select a "task" once a product is selected; there is no way to skip right to it. When I did look for a "task", the "Customer Recommended" document was #23 out of 25. I see that the list is sorted by date, but I am not finding any way to sort by, say, reverse date, or by "Customer Recommended" for that matter. Or even by rating. The "Refine Search" bar on the left is really slow.



'Customer Recommended' indicator on search hit list
  • Benefit from the feedback of other users who liked knowledge base documents



Mentioned above. It is not clear exactly how a document gets the fabled rating. Seems similar to the "like" button.



More comprehensive and intelligent search tips
  • Additional guidance to get the most out of the search engine


Not much to say about this one - I'll have to take their word for it. The "type-ahead" auto-fill thing is probably the most visible aspect, and I like it so far.



More search sources in the form of additional product documentation
  • Expands the knowledge base of possible solutions



So, now, I can search for JDEdwards and Sun documentation. Umm.. yippee?



Ability to set default search source to "All Sources" (in preferences)
  • Expands the knowledge base of possible solutions



Mentioned above, but don't oversell it. A small step in the right direction. :)



Browse / Navigation Improvements
  • Redesigned Knowledge Management tab with new "Browse Knowledge" region
  • Easier to find content that solves your problem




Not seeing how this works. Thus not easier at all. How do I do a search for a specific product and a specific intent with my keywords? Seems like when I enter keywords in the search bar, it completely ignores what I selected for product and intent. Where are the enhancements?



Direct navigation to "premium content" using task, product and release refinementEasier to find content that solves your problem



Easier, eh? Still can't find it. I must be really stupid. :)



Removed product browse tree (in favor of Knowledge Guidance)
  • Easier to find content that solves your problem




Umm... they just replaced "tree" with "hierarchy". And moved it into a drop-down button. Oh wait, they had that before as well.




Health Recommendations
  • Ability to set lifecycle (Production, Stage, Test, Development) of multiple Systems at once
  • Easier to maintain and manage Systems and Targets





Interesting. I guess that is a good thing, and it seems to work. Gotta love the little warning message (which should be removed, imo):
"The change will take approximately 2 seconds to complete. A message will inform you of the final result."


I am not going to comment on the other 5 sections of "Health Recommendations" because I do not use them. For folks that do, I can see how these might be good things, but given what I have seen so far with other "enhancements", I have to question how much of an improvement this actually is.



Guided Resolution
  • Task-based Advisor helps to guide you through a series of steps in a task
  • Helps you solve problems based on the process or workflow



I have no idea what this means. Where does one find this "Task-based Advisor"? Going to have to skip this whole section because I have no idea what they are talking about. It does recall to mind the old Libraries they used to keep. Again, sounds like a good idea, just not really obvious. Or maybe I really am dumb.



My Oracle Support Community
  • New user profile enhancement
  • Allows users to enter biographical or specialized information and control over emails sent from Oracle





Not finding anything I want to fill out in this "profile enhancement". Provide my education?? Manage Certification logos? Gah! And the only knob for "controlling emails" is a radio button that says "receive emails based on your community subscriptions". Not what I was expecting at all.



Ability to upload documents, pictures, and videos
  • Add information in the format that communicates your question or situation clearly



This sounds like a good thing - I did not realize you could not do that before. But I only have 8 Community points, so....



Allow users to view and manage the documents they subscribe to
  • Helps to manage solutions so you can solve problems faster



This made me laugh.






General notes:

  • still see annoying green slider box in upper right.
  • when displaying a document, still see the annoying "slide left" effect that takes a couple seconds.
  • Good ideas and intention. Disappointed with the implementation.
  • still hate the lack of any decent "back" functionality (a FLASH thing). Clicking through breadcrumbs erases options I might have selected earlier. (Big problem with search!!)
  • new tab for "Advanced Customer Services". Really, an advertisement?!? How do I delete that?



To Do:

  • Learn more about Health Recommendations and Guided Resolutions. Dare I have high hopes?

Bug 11858963: optimization goes wrong with FIRST_ROWS_K (11g)?

Fri, 2011-06-17 08:07
At the beginning of March, I noticed some very odd things in a 10053 trace of a problem query I was working on. I also made some comments on Kerry Osborne's blog related to this matter. Oracle Support turned this into a new bug (11858963), apparently a regression introduced by Fix 4887636. I was told that this bug will not be fixed in 11gR1 (as 11.1.0.7 is the terminal release), but a fix will be included in future 11gR2 patches.

If you have access to SRs, you can follow the history in SR 3-314198695. For those that cannot, here is a short summary.

We had a query that suffered severe performance degradation after upgrading from 10.2.0.4 to 11.1.0.7. I attempted to use SQLT but initially ran into problems with the different versions of SQLT, so I did the next best thing and looked at the 10053 traces directly. After a bit of digging, I noticed several cases where the estimated cardinality was completely off. For example:


First K Rows: non adjusted N = 1916086.00, sq fil. factor = 1.000000
First K Rows: K = 10.00, N = 1916086.00
First K Rows: old pf = 0.1443463, new pf = 0.0000052
Access path analysis for FRRGRNL
***************************************
SINGLE TABLE ACCESS PATH (First K Rows)
Single Table Cardinality Estimation for FRRGRNL[FRRGRNL] 
Table: FRRGRNL Alias: FRRGRNL
Card: Original: 10.000000 Rounded: 10 Computed: 10.00 Non Adjusted: 10.00



So, the idea behind FIRST_ROWS_K is that you want the entire query to be optimized (Jonathan Lewis would spell it with an "s") for the retrieval of the first K rows. Makes sense, sounds like a good idea. The problem I had with this initial finding is that every single rowsource was being reduced to having a cardinality of K. That is just wrong. Why is it wrong? Let's say you have a table with, um, 1916086 rows. Would you want the optimizer to pretend it has 10 rows and make it the driver of a Nested Loop? Not me. Or likewise, would you want the optimizer to think "Hey, look at that, 10 rows, I'll use an index lookup". Why would you want FIRST_ROWS_K to completely obliterate ALL your cardinalities?
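If you want to watch the scaling happen on your own system, here is a minimal sketch (the table and column names are made up; only the optimizer_mode settings and the 10053 event are the real moving parts):

-- compare cardinality estimates under ALL_ROWS vs FIRST_ROWS_10
alter session set optimizer_mode = all_rows;
explain plan for select * from big_table where status = 'OPEN' order by created_date;
select * from table(dbms_xplan.display);   -- note the Rows column

alter session set optimizer_mode = first_rows_10;
alter session set events '10053 trace name context forever, level 1';
explain plan for select * from big_table where status = 'OPEN' order by created_date;
select * from table(dbms_xplan.display);   -- watch the estimates collapse toward K=10
alter session set events '10053 trace name context off';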

I realize I am exposing some of my naivete above. Mauro, my Support Analyst, corrected some of my false thinking with the following statement:

The tables are scaled under First K Rows during the different calculations (before the final join order is identified) but I cannot explain any further how / when / why.
Keep in mind that the CBO tweak -> cost -> decide (CBQT is an example)
Unfortunately we cannot discuss of the CBO algorithms / behaviora in more details, they are internal materials.
Regarding the plans yes, they are different, the "bad plan" is generated with FIRST_ROWS_10 in 11g
The "good" plan is generated in 10.2.0.4 (no matter which optimizer_mode you specify, FIRST_ROWS_10 is ignored because of the limitation) or in 11g when you disable 4887636 (that basically reverts the optimizer_mode to ALL_ROWS).
Basically the good plan has never been generated under FIRST_ROWS_10 since because of 4887636 FIRST_ROWS_10 has never been used before



I still need to wrap my head around "the limitation" in 10.2.0.4 and how we never used FIRST_ROWS_K for this particular query, but I believe that is exactly what Fix 4887636 was supposed to be addressing.

Here are some of the technical details from Bug 11858963:

]]potential performance degradation in fkr mode
]]with fix to bug4887636 enabled, if top query block
]]has single row aggregation
REDISCOVERY INFORMATION:
fkr mode, top query block contains blocking construct (i.e, single row aggregation). Plan improves with 4887636 turned off
WORKAROUND:
_fix_control='4887636:off'
I assume fkr mode is FIRST_ROWS_K, shortened to F(irst)KR(ows). The term "blocking construct" is most interesting - why would a single row aggregation be labeled as a "blocking construct"?

Also, this was my first introduction to turning a specific fix off. That in itself is kinda cool.
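For reference, here is roughly what toggling that fix looks like - a sketch only, so test it before leaning on it (the opt_param hint form is something I have seen reported to work for _fix_control, but treat it as an assumption; the query is hypothetical):

-- the workaround from the bug note, at the session level
alter session set "_fix_control" = '4887636:OFF';

-- confirm what the session is actually using
select bugno, value, description
  from v$session_fix_control
 where bugno = 4887636;

-- reportedly this can also be scoped to a single statement
select /*+ opt_param('_fix_control' '4887636:0') */ count(*) from big_table;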

alert.log appears not to be updated

Mon, 2011-05-16 15:09
After a few days of spinning my wheels and subjecting the poor recipients of oracle-l to multiple posts, I have identified an issue in Oracle code that I believe needs to be looked at.

First, some background.
We are running Oracle EE 11.1.0.7 on Solaris 10. We also have a job that occasionally bzips (compresses) the alert.log. The logic in the job is supposed to check whether the file is actively being written to before zapping it, but by pure chance (so it would seem), in this particular case the alert.log was still open by the database when the file was scorched. This led to the appearance of the alert.log not receiving any more updates from the database. We attempted to bounce the database, which had no discernible effect. I also changed the diagnostic_dest, which took us from slightly strange to absolutely bizarre, and which opens the door for the rest of this post.


What I found
After changing diagnostic_dest several times, posting on oracle-l and the Oracle Community forums, playing tag with an Oracle Support Analyst, and running lots of truss commands against sqlplus, I started to focus on this result from truss:
access("./alert.log", F_OK)              = 0

Now, you may notice that this access call returns 0, meaning the file in question ("./alert.log") exists. This caused no small amount of head-scratching. I got the same results no matter which directory I ran the commands from. On my system, I only had two files with this name, one in $ORACLE_HOME/dbs and one in $DIAG/trace. Neither was actively updated by the database. It was not clear to me, at first, that Oracle was finding one of these log files, especially since it never did anything with it. I searched file descriptors in /proc/*/fd and found nothing. I even grepped all text files for keywords that should show up in this particular alert.log.

For the life of me, I could not figure out which directory ./alert.log was in. When I compared to other databases, this same access call always returned Err#2 ENOENT. So I knew this must be key, but I was not sure exactly how. On a whim, I decided to delete the alert.log in $ORACLE_HOME/dbs. Lo and behold, the problem seemed to go away magically.

The BUG
So here is the root problem, in my opinion. The Oracle code is looking for $ORACLE_HOME/dbs/alert.log, but completely fails to write to the file if it is found. Instead, the branch simply exits out. How is that helpful?

In retrospect....
I believe that when I changed diagnostic_dest to a non-existent directory, Oracle automatically created alert.log in $ORACLE_HOME/dbs. I learned a few tidbits along the way. :) One can use KSDWRT to write messages to the alert.log; Dan Morgan's library (still hosted by PSOUG) shows this. I also learned a little more about truss and dtrace as I was researching this issue.
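For the curious, the KSDWRT trick looks roughly like this - a sketch only, since DBMS_SYSTEM is undocumented, requires an explicit grant from SYS, and should really stay on test systems:

-- 2 = alert.log, 1 = trace file, 3 = both (so I have read; verify on a sandbox)
begin
  sys.dbms_system.ksdwrt(2, 'test message: checking which alert.log actually gets written');
end;
/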

Now the hard part; convincing Oracle that this is a problem and needs to be corrected.

"The Bridge": Day 3 (part 2)

Tue, 2011-05-10 08:53
I have received some pictures (not all, but most of the important ones).

First, a recap of Day 2:
Our "realistic" picture evolved a little bit; Ahjay added some grouping tags ("WHERE", "WHAT") which we incorporated from there on out.


And here is what our OBJECT list finally looked like; complete with attributes and verbs:



Day 3
Hard at work.


After hashing things out in the morning, we finally had something akin to a prototype forming at our fingertips.

I really struggled with the overall complexity; I wanted simplicity. As a compromise, we worked very hard to make as much as possible optional, attempting to capitalize on pre-filled defaults and "quickfill" options, trying to use the technology and data that should already be available to reduce user interaction. For instance, the user might be presented with the most recent Products at the top of the list. Or you could set your default QuickFill option (Previous SR, Profile or OCM) in your global Preferences. You will also see, at the top left, blue stickies for "Support Recommended" and "Product specific tips"; these are to be dynamically populated as you type and fill in information - the more information the user provides, the more relevant and specific the search becomes. I do not have any pictures, but on one of our white sheets we put in a meter as a gimmick to relate how more information upfront helps the user and the analyst focus on the problem (akin to the Password Strength Meter).

Near the end of the day, our final draft prototype was looking like this:


Again, you can see how "insta search" is being populated in the right-hand side, hopefully not too distracting, but also hopefully to be filled with information that would perhaps prevent an SR or guide a customer down the right path. Again, we are assuming huge improvements to Search. :)  This picture also demonstrates one possible "multi-screen" approach, trying to cram in as much as possible "above the fold". I argued for the "one-screen" approach, but compromised and suggested that a Preference be added to allow either one-page or multiple pages.

Another thing that might be slightly less obvious is that we are trying to keep the big picture in mind, or "tell a story" as Kelli put it. We are trying to describe a problem, which has a beginning (ie, the environment), a middle or body (the Description) and an ending (optional files, template questions, further elaboration, etc).

In the end, it still feels like way too much complexity to me. I noted earlier that I really want to talk to a human to route the issue (which obviates the whole "Category" mess). I do not mind filling in all the technical details, but what if you had a "Contact Analyst" button that, like Amazon and many other companies, auto-dialed you (the user) and attempted to get an IHUB person on the phone asap? Yes, I realize from Oracle's standpoint this is impractical. But does anyone else want that?

It will be interesting to see what comes out of this project. I think I am excited. The workshop itself was definitely very productive, eye-opening and an awesome experience that I am fully thankful to Oracle for.

Before we all parted ways, we did get a group photo. Say "Cheese!"

"The Bridge": Day 3 (part 1)

Thu, 2011-05-05 21:33
Still no pictures yet, so this is Part 1 of Day 3.

Day 3 was crunch time; by 5 pm we were aiming to have a working prototype. Because we expanded our scope (rather significantly) and spent so much time on tangential (but very important and sometimes relevant) details, the idea of getting a working prototype seemed rather dubious. But I think we did it. To a degree.

Picking up where we left off, we started to tackle the actual UI design itself. We had already done a lot of work on Search, so we needed to focus on the SR part of it. I came in a little earlier and drew up my own mock ups - they are horribly cluttered, but I personally think they are kinda cool. :) Basically, my mockup capitalizes on the vast similarities between Search and Creating an SR; providing keywords (ie, title), a product (and version) and you can start going to town. Category is a bit tricky, and I will cover it a little more in the last paragraph, but if you can nail down Category you can potentially narrow down your Search (called "Task Intent") rather dramatically and better yet, you are primed to punch in and route an SR. So why not do both in parallel? Maybe even on the same screen. You start filling in information, and in one pane you start seeing search results aggregated by facets (like what Advanced Search does now, but much more dynamic and insta-search), while at the same time your "Create SR" button lights up. And maybe even a "Post to Forums" button. I briefly argued for this approach, and I readily admitted that the huge downside is that the screen gets very cluttered very fast. I think we adopted a hybrid (eg, compromise), where the "Related articles" shows up insta-matically in a somewhat unobtrusive region floating off to the side.

We did a couple of usability tests; frankly, I think we need specific "Usability Test" training to learn how to do these better. :) I was not entirely satisfied with the particular way we approached this topic. But the good news is that we discovered many holes in our current prototype. Late in the day, we voted and started to tackle some of the more critical (or easy-to-fix) issues. Near the top of that list was whether or not to display the entire SR Creation process as one page or multiple pages. Again, some were very concerned about cluttering the screen and wanted "screen-sized" sections. I want everything on one page. In the end I posited that the user should have a preference for how he/she wants to view this process. We will see what happens with that.

Actually, this topic consumed a bit of time. After we green-lighted the idea of multiple pages, we got to work going through several permutations of possible screen layouts. Again, I found it ironic that we kept coming back to a design that is very similar to what we have today in MOS. Granted, we added a lot of behind-the-scenes features that auto-fill (and insta-search) as much as possible - that is not to be overlooked. But our final "look and feel" does not diverge much from the current design, in my opinion. In fact, if I count correctly, our final design may actually look more complicated. It is hard to say without having a real GUI to step through. Even though it looks more complicated, we are actively working to allow the user to input as little as possible to get the SR filed.

I have mentioned this previously, but it bears repeating. We were very much biased by the current implementation. In some ways, we spent a huge chunk of time trying to "fix" and patch current brokenness, instead of redesigning from the ground up. This is not to say we did not think outside the box (or at least try to). And right now as I type this, I cannot think of one single "out of the box" new thing we pushed. Maybe I am simply tired and not remembering well.

Another point of discussion that came up, and in retrospect I wish we had spent more time on, is the current super-criticality of "categories". Currently, SRs are routed based on the sub-category (or category if no sub exists). These are filtered by which product one chooses. In our experience, choosing the most appropriate sub/category is often tedious and seems like a relatively useless step from the user's point of view. We briefly talked about driving the sub/category off keywords in the Description field, and doing it in the "insta-search" way (you start typing, and the list of possible sub/categories to choose from grows smaller). But the bigger issue, in my opinion, is all about the routing in the first place. Oracle has placed a lot of emphasis on building automated logic to get the SR to a specialist team. I have a problem with that, at least how it is done currently. In my personal "Blue Sky" vision (Day 1), I created a big easy "Create SR" button, with no requirements whatsoever. How the heck is that any good? Well, think about it, what happens? Rather, what if you changed the button to say "Chat with a human being"? By the end, we made comparisons to various other companies (e.g., Amazon) that allow you to fill in call-back information; a computer actually calls you 1 second later, and then attempts to connect you to a live person. I love that concept!! As you can imagine, the managers and directors and support representatives at the meeting hated that idea. :) Yes, currently, it is hugely impractical - the IHub would be drowned into oblivion. Currently. But if we are thinking Utopian thoughts.... There are other ideas to simplify routing. For instance, drastically reduce the number of routes. How? Well.... we didn't talk about that, yet. :)

"The Bridge": Day 2

Wed, 2011-05-04 01:00
It is late and I am exhausted. And I have no pictures from today (/me looks at Richard Miller).


We started off on a good foot, having already taken a good stab at objectifying the tasks. We further hashed out more objects, added more attributes, and added verbs and relationships. Some objects were much easier than others; for instance, Product really only consists of a product name and version number in the scope of an SR. Yes, it is a "child" member of other objects.

In retrospect, we got bogged down in many areas, and sometimes it seemed like those areas were really minor and we were spinning our wheels. But the "spinning wheel" did demonstrate that even in our small group there is ambiguity and misunderstanding of core elements. For example, is a Primary Contact a subset of a "Contact Type"? Or is it different enough from other contacts (e.g., Secondary, Manager) that it deserves its own type? One member argued quite vociferously that it is its own object because it is handled differently - for instance, populated from a User Profile - while the others are not. After ripping up and redoing Contacts in various permutations, we finally decided on a single Contact object with various conditional properties and verbs.

There were other examples of the same thing, I just do not remember them off the top of my head. These little excursions took up a bit of time. On top of that, we also delved heavily into the Knowledge Base and Search, since we had decided to expand our scope the previous day. While much of our journey through this topic is quite useful in the context of filing and resolving an SR, it consumed time as well. So even though we had covered a bit of ground, Richard Miller declared that we were several hours behind in the late afternoon. :) I am not sure what that means for tomorrow.

Some very newsworthy things came out of our session. I have not signed any non-disclosure agreements, but I do think the Managers want to keep a lot of new developments under wraps. So I'll go about it indirectly. We chatted up some more "Blue Sky" features as we discussed things we did not like about the current implementation. One of the key features of our new approach is using Search heavily at the outset of a possible SR creation process. I know, you are thinking this is a HORRID idea. But if Search were actually much improved (in terms of performance and relevance), we see this as being a huge boon. We could be wrong; time will tell. Basically, you have a form that provides an opportunity for the user to provide a ton of information. Much of the form is optional, but the idea is that the more you provide, the better the search results. Using ideas like Google's word completion and instant results, and eBay's and Amazon's left-hand pane of refining and drilling down, we explained how these kinds of features would significantly enhance the user's perception of Search by providing fast, dynamic feedback on the criteria entered. On top of that, the user may have a chance to save the search filters/results and shove all the pertinent information entered into an SR, or maybe even a Community forum post. Some of the above ideas have already been developed and we saw some simple demos, like using quickfill and/or word completion in various areas. Very nice to see that they are already making inroads in that direction.


I am particularly torn about the latest prototype GUI mock-up that our group has achieved so far. I claim my own role in it, so I am not blaming everyone else. I say I am torn because the pages/screens that we "developed" today still look very busy and crammed full of things to fill out. It almost looks like we have merely re-arranged the existing SR fields that one normally fills out. I think the key importance in our approach today is that we are aiming for two things:
 - allow as many optional fields as possible
 - thus giving the user a choice between providing less detail and possibly a more vague search, or more detail and possibly a more accurate search

We are both assuming that search will be improved significantly, and providing ideas on how exactly to do that. The dynamic feedback mechanism is crucial, I think, since it gives the user a good idea as to how many documents are returned and how to refine the search. I think. It looks good on paper right now. :)

Ok, that's it for me. I hope to procure more pictures tomorrow.

Do people really do this in real life?

Tue, 2011-05-03 10:44
"My name is Newton Sequeira and I am an Author Relationship Executive at Packt Publishing. Packt recently green lit a book on Oracle 11g R2 RAC Administration Cookbook and we are now searching for an author to develop the book.
I was reading through your blog and wondered whether you might be interested in this project?
Thanks for considering this proposal. I would appreciate if you could please let me know your views."



This simply scares me. Do publishers really approach potential authors this way? Believe you me, I am the very last person any publisher would want to be writing this particular book. Even if I were to complete such an ambitious project, I would be the laughing stock of the Oracle community. For at least a year. And based on what little I know of author-publisher relationships, the author endures crushing timelines and relentless editors and enjoys a very small fraction of the royalties. Yeah, awesome incentive there.


If you see my name on this book, don't buy it. Please.

"The Bridge": Day 1

Tue, 2011-05-03 00:42
Today we had a great session. Obvious introductions were first; another functional user who files a lot of SRs for Finance and HR modules, a couple of upper-level managers, an IHUB engineer and a front-end "user experience" ADF developer (was that too redundant?).

Honestly, I was a bit overwhelmed at first, having never done task flows in a group like this before. But I liked the concept. Our moderator/taskmaster Mitch is a good guy, and at times we tried his patience. :) We started off defining what we thought the "Big Picture" is - note how we labeled it "Create SR". At one point, one of the managers said, half-jokingly, "My god, what have we created?!?":


As you can see, we have lots of stickies. Mitch loves sticky notes (aka, post-it notes). We identified key processes and showed how they related to each other, and finally we marked it up with pink stickies for problematic areas. We also identified some "out of scope" topics. This first go was a really rough draft but provided a framework from which to build.

After we had the current picture in mind, Mitch asked us to dream about what we wanted it to look like. To dream a little. To think of a Blue Sky. Spelling errors (wrong to call them typos when you write them out? *grin*) were the trademark of the evening, but we pushed forward. The following picture is what the other user and I came up with:


I did the one on the left. I kept it really basic, because that is what Mitch indicated. And I wanted to emphasize how we need to keep the process simple and as fast as possible. The other user representative has a lot of experience filing SRs and so has essentially figured out how to "game" the system to make it work fast. Included on the other side are a number of additional (and some optional) items, some of which overlap mine. I do not have a picture of the diagram the managers/developers came up with, but it wanted more forms and more questions answered. :) After getting it all down, we users then marked each step with how desirable it was (H = High, M = Medium, L = Low), and the developer group marked how feasible it was (H = Hard, E = Easy). The goal was to find as many highly desirable and easily feasible points as possible. I kinda think we did not pay too much attention to that. Oh well. The next phase was putting these two (Realistic + Desirable) together. In the middle, we experienced a scope change, because we started to see how important "Search" is to this process, and how we users would much rather find existing information that solves our problem than file an SR in the first place. Thus our scope evolved into "Solving Problems", and this "Step 1" reflects how many of the things that could be used to initialize an SR could actually be pointed at the Knowledge Base. Like so:



Lest you despair (some consider KB to be a four-letter word), we had lots of talks about improving the KB search functions, and especially about focusing on the Advanced Search capabilities. This reflects the combination of our "Blue Sky" ideas - simplifying the existing framework and trying to think of the bare minimum needed to go search for information, while still providing plenty of robust functionality for power users who want to provide a ton of extra detail.

Here is a shot of us "in action" - you can briefly glimpse the chaos:



In the end, we also worked on "Step 2", which was the SR Creation portion of it, and discussed at length how these two steps play together, and how Sev 1 changes the ballgame a little (more often than not, if you file a Sev 1, you are not going to take the time to Search). With 30 minutes left of the night, we dove into objectifying the tasks. We merely scratched the surface, but I think we all felt it was significant progress.

I am certainly very impressed by this process. There are some obvious inter-group challenges when certain folks dominate the discussion, but overall we are making wonderful progress and I am very happy we are having these discussions. I only hope we are not drastically pushing the managers and developers in a direction we will later regret. :) I am also struck by the complexity and the number of pieces involved. We very briefly touched on how OCM/EM play a role in providing data to the SR creation process, and we obviously tackled the bigger "purple elephant" of Search and the Knowledge Base (how many people are using it and finding what they need?).

And now I am mentally exhausted. And need to grab some sleep.

MOS Workshop: Fixing SRs

Mon, 2011-04-25 11:42
So I am heading to Oracle the first week of May (May 2-4) to talk about improving MOS, specifically the SR creation process.

I have two similar previous posts on this topic:
http://orajourn.blogspot.com/2011/04/heading-out-to-talk-to-mos-devs-in-may.html
http://orajourn.blogspot.com/2011/04/mos-mashup-summary-or-saga.html

The agenda is:

The Service Request process is undergoing a redesign, and a specific customer-intensive feedback-type session called “The Bridge” is being used to evaluate changes to the design. This process works over a 3-day period with two customers, the business owner, a lead developer, a designer and two facilitators to help the structured process move forward.

The results are extensive requirements and user interfaces which are tested and approved during these sessions by development, customers, and business owners. This process works because the customers are in the room.



I ask for your help because I am but one person. I have filed a goodly number of SRs, ranging from the stupid to the complex, so I feel comfortable that I can represent my own thoughts on what I would like to see improved. And I'll detail those ideas below. But if there are other pressing matters that I miss, please speak up! :)


Overall, I want the SR creation process to be easier on me, the client. I see the merit of Configurations and supplying an extra level of detail to the SR Analyst, and I see how the OCM intends to make the collection of such configurations mostly transparent. But above and beyond what is best for the analyst, I want to have a satisfying and confident SR creation experience.


Currently, it takes a minimum of 17 steps (skipping all optional steps) in both the Flash and HTML versions to get to the point of hitting the "Create SR" button (different steps to be sure, but they amount to the same thing). Some of the steps are completely redundant, some are nonsense. I would contend that 95% of those steps can be deferred until after the SR is created - basically, you just need the SR to end up in the right Support group. A note about the OCM - In the HTML version, I found that it was faster (according to the wall clock) to not use the OCM because the pop-up window to choose the system/host can take a long time to churn through the available systems (at least for us). The Flash version is a bit smarter and fills-in as you type, which is perhaps one of the best things about the Flash version.


What information does Oracle need at an absolute minimum to file the SR with the right group? Well, for starters, how about displaying all the possible groups? Currently (in both the HTML and Flash versions), the LOV (List of Values) that populates the "Problem" drop-down menu is determined by the Product that is chosen. Personally, I would prefer to pick an area of Support to send my SR to, instead of having to wade through various menus that play out like a "choose your own adventure" story.


Once the proper group within Support is selected, I want a "File it NOW!" button. All the other information can be entered after the SR is filed. I would even be ok with Oracle spitting out a message like "the analyst is going to ask a lot of questions unless you can provide more details". This makes sense. If all you have done is quickly file an SR without providing the product, version and some details of the problem, what is the analyst supposed to do? Practice ESP?


Next, I want the ability to fill in information that is pertinent to my case. If I use a configuration, I want a list of configurations that gives priority to usage, such that the configs I use more often percolate to the top. Same with the products and versions. I want the whole operation geared around getting it done as fast as possible. I want it all to be saved as I go, so that if my connection is lost or I time out (emergency meeting with the boss), I can slide back into it where I left off with no hassles.


In terms of "Related Knowledge" or other relevant documents, I do not mind if Oracle wants to spin extra cycles looking and filtering for possible metalink docs that might help me out. Just do not be obnoxious about it. Run the search in the background and populate a sidebar that I can click on at my convenience. In fact, I would want all related docs to be here, including any others that the analyst might find and possible bugs.


I want my SR to be filed with an analyst who shares my working hours. I prefer they speak my language proficiently, but initially and more importantly, I want to know that when I am at work, so is my analyst. I want the option of specifying different work hours. There has been a bit of talk about indicating the skill level of the DBA filing the SR so as to get a competent analyst, and this idea has been shot down with good reason. Rather, I want Oracle to provide top quality analysts from the get-go. If you have a newbie who is taking the SR, fine, but make sure there is some oversight from an escalation manager right off the bat. I do not want to escalate the SR simply because I am smarter than the analyst.


Lastly, I want my experience to be completely independent of my browser choice. I realize this is a huge obstacle as HTML "standards" are not standard at all.


Here are some things that Oracle is doing well, and I want the basic functionality to be retained. In both the HTML and Flash versions, there is an attachment link where you can view uploaded files. I like how the Flash version allows you to map a system after filing the SR. Although, I do not like how you have to change other parameters as well just to make that stick. I like how entries in the SR can be filtered and/or sorted. I like the concept of the OCM (as mentioned previously) - I think there is still untapped potential there. I like how the Flash version allows one to navigate the various sections of the SR creation process (the HTML version only has a "back" and a "next" button). Pre-filled values - the more the merrier.


I am toying with the idea of generating a step-by-step example of my concerns. I have already done two recorded WebEx sessions with Oracle about this, and it would be simpler just to make those public. :) But I did not record them, Oracle did.

MOS Mashup: the summary or the saga?

Wed, 2011-04-20 15:18
Jonathan Lewis started a small conversation; what I gleaned from that thread is that he and Tanel (and other experts?) mainly use MOS for "bug hunting" and looking up specific documents, not so much for filing SRs. Therefore their comments revolved around the utility of the site in that context. However, the general consensus is that they do not use the Flash version, only the HTML version.

Lots of threads on oracle-l - I'll provide the first thread from the freelists and let you read through it if you like.

Robert Freeman "Do you ask the question: How do I work with Oracle Support....?":
Lots of varied comments here, great for "mining" what users are expecting. If I have a ton of free time, I would love to go through and categorize what I find here more thoroughly. In summary, 1 overtly positive comment, 12 negative and 11 on the fence (both good and bad). This thread was mostly about the analysts and not MOS, per se, but a few comments did tickle MOS (negative).

Jared Still "Just my opinion - the move of MOS to Flash is still a bunch of crap":
14 negative comments and 2 "neutral" - I did not find a single person who absolutely loved Flash, let alone MOS. In fact, the majority seem to feel that Flash is REALLY BAD(tm) and the HTML version is passable. General sense that the design was driven from top-heavy management structure, not from collected opinions of the user-base. A little progress seen on fixing bugs, but not nearly enough. MOS still excruciatingly slow.

Yon Huang "Anything Flash MOS can do HTML MOS cannot?":
A number of browser differences ("you got an error in XXXX browser, try the YYYY browser"). Some comments about how Flash was initially better at creating SRs, but now it seems the HTML version is more robust. With the possible exception of annoying timeouts. As if it would take an hour to file an SR, say it ain't so!!!

Andrew Kerber "more MOS pain":
I think the initial issue might not have been the interface (MOS) itself, but more about how some documents are not published ("unpublished"). I agree, I also find this practice highly annoying. If I cannot see it, don't mention it.

Don Granaman "Obtuse errors at MOS":
I also have seen a number of these errors, even recently. This can be generalized into a category of the "Unexplainable", strange messages that pop up for no apparent reason with no apparent solution path. Or like when the entire GUI is in Japanese.

Amit Bansal "Problems with MOS":
Browser and performance issues.

Jon Crisler "Metalink fiasco":
This long strand of messages wandered all over the place and I could not bring myself to read all of them. There are some good efforts to point to specific problems and possible solutions. I am dearly hoping that someone categorized that already.... you know, reinventing the wheel and all. :)

There is a ton more on oracle-l - what I have above only scratches the surface. Not to mention the proliferation of myriad blogs. But two I do want to mention are from the folks at Oracle who have started a couple blogs which have garnered their own collection of colorful ideas.

Chris Warticki's "Support":
I actually took it as a good sign when Oracle briefly pulled the plug on Chris after a noticeably contentious article. Chris knows there are issues with the GUI and Support in general, and he tries really hard to put a positive spin on all of it. It's just that there is only so much positive spin one can put on.... anyway, many of the folks who commented on the oracle-l articles are active here as well.

Support Portal - maintained by members of the Dev team:
I have had a lot of great conversations with Richard Miller, and I am glad he started blogging a bit more. As I blogged about earlier (a long time ago it seems), they have been doing a great job of collecting feedback and Richard did write a series of posts (1, 2, 3) about that collection process. Good stuff. The only major downside is... what did they actually do with all that awesome feedback? How is MOS better for it? *pause* I do not hear anyone singing the praises of MOS.

Whew.... that is a lot of stuff. Here is my Very Basic, Gross Summary(tm).
Customers want the online Support Site to be very fast and they want it to work. They do not want to see silly little nonsense messages. They do not want to jump through hoops and tie themselves in knots to do basic things. Customers want to talk with and interact with humans. Not monkeys reading scripts. Not a cumbersome website. Customers want a powerful search utility that helps them find documents and information quickly. Lastly, customers expect that when they are asked for their feedback, something will magically happen. When nothing happens, the pool of that feedback can quickly turn sour and/or dry.

Heading out to talk to MOS Devs in May

Tue, 2011-04-19 15:17
I have been invited to a workshop to talk about enhancements to MOS. I am dearly hoping to collect and possibly organize feedback from the user community in general. So here is what I am looking for:
1) What do you like about MOS? Ie, the things you do not want to see changed.
2) What do you not like about MOS? The more specific the better, and bonus points for suggesting an alternative.

I'll be compiling my own list in the next few days. I realize many user communities have tossed this topic around ad nauseam, so this little effort is mostly my feeble way to gather all that wonderful feedback into a small, concise package that can be communicated in a very clear and distinct manner.

Concepts Guide: 11/27 - Oracle Utilities

Fri, 2010-06-25 15:14
Wow, this chapter was hugely disappointing! I mean, it makes for a better sales pitch than a technical introduction to useful features. I believe I could summarize this chapter with pseudo-code:


# tongue-in-cheek chapter summary, as runnable shell (assuming a get_list_of_products function)
for product in $(get_list_of_products)
do
    echo "Overview of $product"
    echo "$product is a powerful utility to manage your data quickly"
done



What is even more disappointing is that I have used all of these features/products (with the exception of the Data Pump API) and know first-hand that they are all quite useful and handy. Data Pump in particular is blazingly fast at moving raw data (but amazingly slow with the subsequent DDL like indexes and stats). I mean, I could go on and say a number of excellent things about these products and the specific "things" they do, only scratch the surface at that, and still surpass what is covered in the Concepts Guide.
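Since I called out the Data Pump API as the one piece I have not used, here is a minimal sketch of what it looks like - assuming a DATA_PUMP_DIR directory object and a SCOTT schema, neither of which comes from the chapter:

declare
  h         number;
  job_state varchar2(30);
begin
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(handle => h, filename => 'scott.dmp', directory => 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR', value => '= ''SCOTT''');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, job_state);   -- blocks until the job finishes
end;
/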

The one thing I did learn was that I did not realize the DBID could be used to set the DBNAME. I'll have to give that a try sometime.

Woot, two chapters in one day!

Concepts Guide: 10/27 - Application Architecture

Fri, 2010-06-25 14:57
Again, I am struck by the archaic terminology (minicomputers and mainframes?). In a way, I guess the fact that the underlying technologies have not changed all that much speaks to the stability of those particular designs. And that's a good thing, right?

The architecture described in the first few pages is interesting. With a title like "Application Architecture", I was misled into thinking this chapter was more about the application, but rather it is about the fundamental pieces that Oracle has built to interface with various applications. I am a bit cautious about the apparent benefits of scaling vertically and horizontally; obviously, everyone wants the option to scale if needed. While vertical scaling seems to be the most common solution, I am a bit discouraged by how hard Oracle PR has pushed horizontal scaling in the form of RAC, almost as if it were a panacea for all functional and performance issues. But I digress.

I was excited to see the section "How Oracle Net Services work". As with previous technical material in this document, I was again disappointed with the high-level summary provided, instead of the real nuts and bolts. Ironically, in light of the coverage, I was surprised to find mention of "industry-standard higher level protocols"; it seems like a bit of bandwidth spent advertising how compliant they are. I would think the reader would be more interested in the details that specifically relate to how Oracle talks to itself, leaving the underlying transport systems for a book of another scope. The whole point of an API is to abstract away the details that one does not really care about. So I was glad to move on to the next section about the Listener and Services.

Yet my concern did not stop there. Check out this quote from the Listener section:
When multiple databases or instances run on one computer, as in Real Application Clusters, service names enable instances to register automatically with other listeners on the same computer. A service name can identify multiple instances, and an instance can belong to multiple services. Clients connecting to a service do not have to specify which instance they require.

Wow. Ok, so RAC runs on one computer?!? Since when? I have to admit that I am greatly impressed by how PMON communicates not only with the local listener, but also remote listeners on different computers. But there is no mention of local_listener, remote_listeners or how those play a huge role. Worse, "services" have not even been covered in sufficient detail yet; it would probably help to point out that while a service may map to multiple instances, all such instances must be part of the same database. Regardless, I have to repeat that I am duly impressed by the slickness we call "services" (head nod to Jeremy Schneider for his paper on making it a little more public). If only more beans were spilled out of the can here in the Concepts Guide.
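To give a taste of how slick services really are (and what the Concepts Guide could have shown), here is a minimal sketch for a single 11g instance - the service name is made up:

begin
  dbms_service.create_service(service_name => 'reporting_svc',
                              network_name => 'reporting_svc');
  dbms_service.start_service('reporting_svc');
end;
/
-- PMON registers the running service with the listener(s) named in
-- local_listener/remote_listener; a client then connects with
-- SERVICE_NAME = reporting_svc and never has to name an instance.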

And then the chapter ends right there. Egads! 6 pages covers Application Architecture?!?

Concepts Guide: 9/27 - Process Architecture

Fri, 2010-06-04 13:26
"Figure 9-1 can represent multiple concurrent users running an application on the same computer as Oracle. This particular configuration usually runs on a mainframe or minicomputer."

Wow, this section of the documentation must have been recycled for a number of years. =)

Good pictures, descriptions of various processes.

In general, I like the "See also" sections, but I wish the link would go directly to the relevant section of the reference, instead of the top-most TOC page.

This section confused me:
"When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.

The statistic DBWR checkpoints displayed by the System_Statistics monitor in Enterprise Manager indicates the number of checkpoint requests completed."

If the CKPT process is responsible for updating the datafile headers and DBWR is responsible for something else (writing blocks to disk), why is the statistic called DBWR checkpoints? That is quite misleading, and perhaps leads to the confusion that spawned the warning about the DBWR in the first place. =)
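For what it is worth, the statistic really does show up under that name; a quick way to eyeball it and its cousins (my query, not the Guide's):

select name, value from v$sysstat where name like '%checkpoint%';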

Both PMON and SMON "check regularly". What is "regularly"?

While there are a lot of good ideas embedded in Oracle, it is surprising that some of them still have such an antiquated and/or obfuscated interface. For example, the job scheduling system. The job queue processes are quite cool, but using them is a pain in the arse. The EMGC GUI is not too shabby, but what really sucks is the API; what about a simple API for those of us who do command-line work? VPD and Streams are the same way (I have not yet seen any GUI for VPD). At least Shared Server is a little easier to grasp and implement, but it is still very easy to shoot yourself in the foot.
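To be fair, the command-line route is not impossible, just wordy. A minimal sketch using DBMS_SCHEDULER (the job name and action are made up for illustration):

begin
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_STATS',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_stats.gather_schema_stats(''TEST''); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => TRUE);
end;
/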

In terms of performance in the context of Shared Server, would not the immediate results from FIRST_ROWS_n operations be queued as well? So it would be possible for queued results to actually return more slowly than with a dedicated server?


Overall I found this chapter disappointingly light on details, or examples for that matter. I would love to see the program flow, end-to-end, of requesting, establishing, executing and concluding a transaction. Likewise, the last few sections (under "The Program Interface") don't really say much at all - they read more like a dictionary or appendix, with nothing that really describes what things are, how they work, or the role they play in the larger picture. I mean, they do a little, but not a whole lot.

VPD + bad ANYDATA practices can really bite

Thu, 2010-02-18 13:20
After several days of intense testing, 4 SRs with Oracle Support (and another with the ERP vendor), and the very helpful information from Maxim Demenko about "out-of-range" date values, I have developed a testcase that demonstrates how using bad ANYDATA practices in the context of VPD can really mess you up.

Some background:
We have an application that recently started to utilize ANYDATA. Unfortunately, the application did not implement validation checks, and the nature of ANYDATA makes table check constraints a near impossibility (I have not found any good ways to go about it). So we (not I, but colleagues) developed VPD rules to validate data. After a month of testing, a tester noticed that we had some really funny dates, ranging from 4290 BC to 5090 BC.

We tried tracing (10046, 10053, 10730), but nothing jumped out at us; except we may have uncovered a new bug, but more on that in a second. We tried using LogMiner, but Oracle Support finally convinced us that LogMiner does not support ANYDATA. :-( Finally we just started shooting in the dark, testing different combinations of rules and data inputs.

We stumbled upon the fact that using CAST to convert ANYDATA into a datatype has bad consequences. In particular, if you try something like cast(some_anydata_column as varchar2(1)) and the column is a DATE, for example, you get an ORA-3113/ORA-7445 (under 10.2.0.4 + the January 2010 PSU). The fine folks who had written our RLS policies had used CAST extensively, and the ironic part is that no errors were being generated on the application side. Instead, bad dates were sneaking into the dataset.

After reading the documentation a bit more, I discovered that ANYDATA is an object type (much to my surprise), and it has member functions. We had a hard time trying to figure out exactly how to use the member functions, since one needs an instance to call them on, and the documentation does not give any examples, let alone explain the usage of "object-oriented" in a relational database. Finally I stumbled upon using the static sys.anydata functions to build the instance, which seemed to work well for us.
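A stripped-down sketch of the pattern (mine, not the production code): the static sys.anydata call builds the instance, and the member function pulls the value back out.

set serveroutput on
declare
  v_any sys.anydata;
  v_chr varchar2(10);
  v_ret pls_integer;
begin
  v_any := sys.anydata.convertvarchar2('Y');   -- static function builds the ANYDATA instance
  v_ret := v_any.getvarchar2(v_chr);           -- member function call against that instance
  dbms_output.put_line(v_chr);
end;
/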

Why did Oracle develop ANYDATA?!? It seems anti-RDBMS. And it makes things messy for us DBA types. As I explained to my colleagues, object-oriented data buckets are great for developers, up until they break. Then they are a pain to figure out.

I still have an outstanding question of exactly how the ANYDATA column overflows into the DATE field and gives us whacked out dates. If any Oracle gurus out there want to chime in, please do so.

Here is the code I used to replicate our issue:

drop user test cascade;
drop user test_no_vpd cascade;

create user test_no_vpd identified by test4#;
grant create session, EXEMPT ACCESS POLICY to test_no_vpd;

create user test identified by test3#;
grant create session, alter session, resource, create any context to test;
grant execute on dbms_rls to test;
connect test/test3#;

CREATE TABLE GORSDAV (
GORSDAV_TABLE_NAME VARCHAR2(30 CHAR) NOT NULL,
GORSDAV_VALUE SYS.ANYDATA NOT NULL,
GORSDAV_ACTIVITY_DATE DATE NOT NULL,
pill_1 number default 1,
pill_2 number default 2,
pill_3 number default 3)
;

insert into gorsdav values ('some_table_1',sys.anydata.convertnumber(1),sysdate,0,0,0);
insert into gorsdav values ('some_table_1',sys.anydata.convertdate(sysdate),sysdate,0,0,0);
insert into gorsdav values ('some_table_1',sys.anydata.convertvarchar2('Y'),sysdate,1,0,0);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('Yes'),sysdate,0,0,0);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('Y'),sysdate,0,0,3);
insert into gorsdav values ('some_table_2',sys.anydata.convertvarchar2('No'),sysdate,0,0,0);
insert into gorsdav values ('some_table_3',sys.anydata.convertvarchar2('MaybeSo'),sysdate,0,0,0);

commit;

-- Using FGAC example from http://www.orafusion.com/art_fgac.htm

-- A dummy procedure to satisfy the CREATE CONTEXT command; does not actually do anything

PROMPT Create Application Role Procedure
create or replace procedure
set_testapp_role(p_user varchar2 default sys_context('userenv', 'session_user')) is
v_ctx varchar2(16) := 'testapp_ctx';
begin
dbms_session.set_context(v_ctx,'rolename','APP_OWNER');
end;
/


PROMPT Create context
create or replace context testapp_ctx using set_testapp_role;


-- This is just a mock up test; I am not concerned about real-life roles or security,
-- thus I am returning the same predicate no matter who the user is

PROMPT Create security function
create or replace function testapp_security_function (p_schema varchar2, p_object varchar2)
return varchar2 is
begin
return '(sys.anydata.accessvarchar2(gorsdav_value) = ''Y'' and pill_1 = 1) or pill_1 <> 1';
end;
/


PROMPT Create RLS Table Policy
declare
begin
DBMS_RLS.ADD_POLICY (
object_schema => 'TEST',
object_name => 'GORSDAV',
policy_name => 'TESTAPP_POLICY',
function_schema => 'TEST',
policy_function => 'TESTAPP_SECURITY_FUNCTION',
statement_types => 'SELECT,UPDATE,INSERT,DELETE',
update_check => TRUE,
enable => TRUE,
static_policy => FALSE);
end;
/

PROMPT Inserting a control row into the table to show the date and insert are fine
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('123456789'),sysdate,0,0,0);
commit;

PROMPT Selecting data from table - should return eight rows with no errors
select * from gorsdav;

-- The following function uses CAST to get the varchar2 data; however, a majority of the
-- data is larger than the CAST target, thus we get an error. Even if we use varchar2(200),
-- some datatypes are DATE and NUMBER.

PROMPT Create "bad" security function
create or replace function testapp_security_function (p_schema varchar2, p_object varchar2)
return varchar2 is
begin
return '((cast(gorsdav_value as varchar2(1)) = ''Y'' and pill_1 = 1) or pill_1 <> 1)';
end;
/

PROMPT Inserting into table - this will work with no problems.
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('Y'),sysdate,0,2,0);

commit;


PROMPT Inserting into table - this will complete successfully, but will insert a "bad" date
insert into gorsdav values ('some_table_4',sys.anydata.convertvarchar2('123456789'),sysdate,0,0,0);

commit;

-- PROMPT Selecting data from table - should hang for about 10 seconds and kick you out with
-- PROMPT ORA-3113 and ORA-7445 in the alert.log
-- select * from gorsdav;

grant select on test.gorsdav to test_no_vpd;

PROMPT Connecting as a non-VPD user (exempt access policy)
connect test_no_vpd/test4#
select * from test.gorsdav;



TreeDump

Tue, 2010-01-05 15:36
Daniel Morgan alerted me to an index rebuild package he worked on; as I was reading up on sys_op_lbid (which is incredibly interesting, btw), I came across Richard Foote's "Index Internals", another awesome read.

Getting past the humorous myth busters (does anyone escape his righteous wrath?!? *grin*), I was delighted to learn about treedump:
alter session set events 'immediate trace name treedump level &index_object_id'; -- smartquotes removed
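To plug in the object_id, something like this works (the index name below is just a placeholder):

column object_id new_value index_object_id
select object_id from dba_objects
 where owner = 'TEST' and object_name = 'GORSDAV_PK' and object_type = 'INDEX';
alter session set events 'immediate trace name treedump level &index_object_id';

The dump lands in a trace file under user_dump_dest.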

I had been experimenting with block dumps on index blocks, trying to slog my way through the various header and pagetable blocks. While that is eye-opening in itself, the treedump really paints a human-understandable picture. As Richard has stated in other documents, indexes rarely ever look like the typical pyramid scheme (the one that everyone on the Planet uses, including himself *smile*); instead, more often than not, the "trees" get really wide very fast, and are usually rather shallow (not too many levels deep). The treedump not only exposes this commonality, but succinctly demonstrates why. Namely, each branch block of size DB_BLOCK_SIZE can contain hundreds or thousands of references to child blocks (be they further branch blocks or leaf blocks).

Mr. Foote's "Internals" presentation goes on to divulge other useful tidbits and I intend to revisit it to reinforce my learning. He is one smart dude.

My only request now is a live, dynamic graphical representation of the index "tree"; when teaching students (for example) how indexes are built, having a visual component really helps to emphasize what is going on. When I do them by hand (whiteboard and marker), I find it also goes a long way toward demystifying otherwise confusing (and thus myth-prone) concepts such as deleted index entries, "fragmentation", "unbalanced" indexes and index block splits. What can I say, I am visually oriented.

Thanks to Dan Morgan for vigilantly working to improve Oracle for us lower-lifeforms (ie, the index rebuild/coalesce package is really helpful). Without his nudging me, I would not have yet found sys_op_lbid, nor explored the internals a bit more.

Concepts Guide: 8/27 - Memory Architecture

Thu, 2009-10-08 14:08
Like the picture on the 2nd page. Yes, I am visually oriented.

On page 4:
When automatic SGA memory management is enabled, the sizes of the different SGA
components are flexible and can adapt to the needs of a workload without requiring
any additional configuration.

However, the SGA components are flexible regardless of the setting for automatic SGA memory management (aka automatic shared memory management, ASMM). While I agree that ASMM as a conceptual feature is rather cool, there are known issues when allocations and deallocations oscillate rapidly, causing locks on the underlying pools and decreasing performance overall.

To be truly effective, dynamic memory management would allow the kernel to dynamically allocate memory segments from the OS. As it stands, 100% of greater(SGA_TARGET, SGA_MAX_SIZE) is allocated when the instance starts; if SGA_MAX_SIZE is greater, the memory is allocated but not used, saved away for a potential future use. How is that dynamic?
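A quick way to see the gap for yourself (my query, nothing from the Guide):

select name, round(bytes/1024/1024) mb, resizeable from v$sgainfo order by bytes desc;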

From page 5:
"As the workload changes, memory is redistributed to ensure optimal performance."

*cough cough* Unless of course you hit a bug....

Good overview and discussion of the SGA and buffer cache in general.

Page 6:
"If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer
to the write list and continues to search."

I did not realize that the user process could move dirty buffers to the write list; I thought only the DBWriter processes could do that. Hmm...

Slightly disappointed that there is no talk about the negative ramifications of setting a large buffer cache, specifically how it adversely affects the time to scan the LRU. The positive effects are fairly obvious. =) I was also hoping for more coverage of the Redo Log buffer. But the couple of paragraphs in the document (and more later in Chapter 9?) at least introduce the concept. I still find it strange that the defaults are so small.

By the time I get to page 10, I am ready for another picture; it would be nice to start with the overall picture, and then "zoom in" on each section to go along with the text a little bit.

Data Dictionary Cache, aka Row Cache. Is this the same cache that 11g is using to store sql results?

Like the detailed steps on page 12; this is the kind of detail I have been waiting for. And now I can rip them up. =) The first bullet of #2 states that the ANALYZE command (when used to update/modify stats) flushes the cursor from the shared pool. People still use the ANALYZE command in 10g? I thought the dbms_stats procedures were preferred and more popular. In any event, the underlying mechanics are still the same; if you alter the stats, the cursor is no longer valid and is dropped. Now if only we had a way to surgically splice out individual cursors without having to make relatively "impactful" changes...
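(There is such a knife, as it turns out - a hedged sketch of my own, assuming DBMS_SHARED_POOL.PURGE is available, which it is natively in 11g and on 10.2.0.4 only once the backported fix is enabled:)

-- identify the cursor, then purge just that one object from the shared pool
select address, hash_value from v$sqlarea where sql_id = '&sql_id';
exec dbms_shared_pool.purge('&address,&hash_value', 'C')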

Bullet 3 is interesting, I had no idea that changing the global name flushed the shared pool. Wow.

The couple of paragraphs about the Large Pool are succinct, a bit too much so. How does Oracle decide when to use Large Pool memory, as opposed to the Shared Pool? What are some good rules of thumb in terms of how much to allocate (probably covered in the Performance Tuning Guide, but the Concepts Guide should say _something_)? No Large Pool Advisor? =)

The section on the Streams Pool is even worse. It does not tell me what exactly the memory is used for, how to tune it, what to do with it, nothing. And the 2nd paragraph is quite confusing; the Streams Pool may come from different parent pools depending on how SGA_TARGET is set? Who signed off on that little bit of tomfoolery?

The section on Dynamic SGA is almost misleading. The shared memory allocated to the server instance does not actually change at all; 100% of the requested memory (even if it is not used) is given during instance startup. This concept of a "dynamic memory usage" feature totally ignores that fact. So if the instance gets 50GB of memory but only uses 500MB, you are wasting 49.5GB for no good reason. That is not dynamic memory.

"Both the buffer cache and the SGA pools can grow and shrink at runtime according to some internal, Oracle-managed policy."

This is the Concepts Guide, for crying out loud! Some internal, Oracle-managed policy?!? Who wrote that?!? Worse, this particular policy has some serious drawbacks, especially earlier in 10g. We observed that Oracle would thrash itself trying to resize the "dynamic" pools many times a second, apparently oscillating between two different "optimal" thresholds. Some policy.
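For the curious, this is roughly how we spotted the thrashing (a sketch, not the exact query we ran):

select component, oper_type, count(*) ops
  from v$sga_resize_ops
 group by component, oper_type
 order by ops desc;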

LOCK_SGA: I have never heard of this parameter. And, oh, here is the reason why; we run Solaris. Good thing the Concepts Guide cleared that up. So, this is the Concepts Guide, right? I am not reading some PR BS, am I? Why would LOCK_SGA be a bad thing? If you are running a Production database, especially on a host that is dedicated to Oracle, why would I ever want memory pages to swap out?
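(For reference, and this is me talking rather than the Guide: LOCK_SGA is a static parameter, so turning it on takes an spfile change and a bounce.)

alter system set lock_sga = true scope=spfile;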

SHARED_MEMORY_ADDRESS: Again, never heard of this one. In fact, Google is suspiciously quiet on this one as well. Why would I want to set this parameter? Don't get me wrong, I am glad the Concepts Guide at least says *something*, but some explanation would be nice. I assume this parameter is an offset; if you set the same thing in multiple databases, I would have a hard time believing that this parameter is actually meant to grab an absolute OS memory address. That would be just silly. Actually, I am surprised this is not an underscore parameter. Which makes me wonder, is this short little blurb the only thing that keeps this parameter from being "internally supported"?


Didn't realize the PGA had two independently maintained subsections (the persistent area and the run-time area). Good to know.

It is puzzling that the author would choose to include v$sesstat and v$sysstat as means of collecting performance statistics for the PGA without giving any insight whatsoever as to what to look for; in fact, just browsing v$statname (the lookup table for the statistic#) does not make this any clearer either. I personally find it quite unhelpful when a document that purports to "demystify" something flippantly references a source that is not at all obvious.
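For example, this at least surfaces the PGA-related rows (a sketch of where one might start looking):

select sn.name, st.value
  from v$sesstat st, v$statname sn
 where st.statistic# = sn.statistic#
   and st.sid = sys_context('userenv', 'sid')
   and sn.name in ('session pga memory', 'session pga memory max');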

I'll wrap up with that. I still find it hard to wade through these chapters; drains my brain. *grin* Overall I felt that I learned a few small things about the Memory Architecture, but I still feel like I have a ways to go before I fully grasp the skeleton that holds everything together. I hope this becomes more lucid as the chapters progress.

Concepts Guide: 7/27 - Data Dictionary

Wed, 2009-09-23 09:36
I still cannot believe "someone" had the gall to reduce the data dictionary to 5 pages. =) And one blank page to rub salt into the wound.

I had an interesting challenge when trying to explain the dictionary to a colleague. So we all have a general idea of what the data dictionary is and what it does (just read the first page of this chapter in the docs). When does it get built? Interesting question. Usually we say that catalog.sql builds the dictionary, or even sql.bsq. However, did you realize that 601 fixed tables (10.2.0.4) exist in memory just for starting the instance (NOTE: no database!)? Try this one on for size:
/u01/app/oracle/product/10.2.0.4/dbs: echo "db_name = SAC" > initSAC.ora
/u01/app/oracle/product/10.2.0.4/dbs: . oraenv
ORACLE_SID = [DUMMY] ? SAC
ORACLE_HOME = [/u01/app/oracle] ? /u01/app/oracle/product/10.2.0.4
/u01/app/oracle/product/10.2.0.4/dbs: sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Sep 23 09:32:35 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SAC_SQL > startup nomount
ORACLE instance started.

Total System Global Area 419430400 bytes
Fixed Size 2286584 bytes
Variable Size 114105352 bytes
Database Buffers 268435456 bytes
Redo Buffers 34603008 bytes


SAC_SQL > select count(*) from X$KQFTA;

COUNT(*)
----------
601

If you want all the names of those fixed tables, select KQFTANAM.
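Something like this (no output shown, since the list runs to 601 rows):

SAC_SQL > select kqftanam from x$kqfta order by kqftanam;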

So, even without running any scripts, we have "data dictionary" objects residing in memory. This helps me to understand what the dictionary is and what it does. For an analogy, it is like the overhead address space a program uses to store all its global variables. In this case, it just happens to be organized into relational tables with gobbledygook table/column names, normalized up the wazoo! =)

I have to confess, I was quite disappointed with this chapter. Of the 5 pages, the first 4 tell you not to change anything and give a brief overview of the differences between the USER, ALL and DBA views. The last page starts to get just a tiny bit juicy and at least mentions dynamic performance views, but then, like a cheap battery, dies out too soon.

There is some really cool stuff locked away in the x$ tables, let alone the rest of the db metadata. For a quick glimpse under the covers, I found eygle's list quite informative and helpful; it is allegedly a copy of Metalink note 22241.1, which is supposed to be internal only (I cannot see it at this point in time). It merely lists the fixed tables to demonstrate the naming convention, but it also gives us an idea of how things are organized and what types of things the kernel tracks. I hope one day Oracle wakes up and realizes the advantage of sharing more information like this.
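Of course, you can also pull the list straight out of the database (a quick sketch of my own):

select name from v$fixed_table where name like 'X$%' order by name;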

Yong Huang also has a listing.
