Feed aggregator

Speed Bumps

Mary Ann Davidson - Tue, 2007-08-14 12:15

Summertime tends to be the time of year when people naturally slow down. (In some cases, it is because of absolutely unbearable heat -- who wants to move fast when it's 98 degrees out?) Summer is the season when you take vacations, change gears and get away from work. You realize there is a big wide world out there that comes to you through other vectors than email and the Internet. Even if you are working, there are a lot of distractions, like warm summer nights, sun-dappled days for bike rides, warm water and late sunsets for surfing. Summer is just one big speed bump telling you to slow down, observe, to pay attention to something other than the daily commute. You need those speed bumps.


I was out surfing recently at my usual surf habitat in Pacifica. Normally, you catch a wave, the face slopes; you slot into it and keep riding the face of the wave. My best surfing buddy, Kerry, calls me "the Queen of Trim," because when I catch a wave, I am really good at finding the absolutely right spot on the board to keep it in perfect trim: slotted into the wave so I can keep riding until there is no more wave. In my opinion, "kicking out" before the ride is done is the 8th deadly sin and a waste of a perfectly good wave.


The local surf break I frequent has a bit at the north end of the beach that's kind of steep, so depending on the tides, you may be riding in and find that you meet the backwash of the previous wave -- or two. Those backwash bumps are really a surprise when you hit them, because you end up riding UP the back of a wave and down the front again, sometimes more than once. Fun. Strange. Shakes up the surf session. Surfing is not, in general, supposed to be like riding a roller coaster. Except when it is. You remember not to surf on autopilot or you will wipe out when you hit the "speed bump." It's the ocean's way of getting you to pay attention.


Some of the speed bumps I get are in my inbox: I hear from people I haven't heard from in a while, and I end up being distracted from my "regular work" -- only the distraction turns out to be pretty important. I got an email recently from a colleague and friend -- Jaime Chanaga. I was impressed with Jaime the first time I met him (I think we were on a panel together at some security-fest or another). Jaime is among the few -- the very few -- who, as a CISO several years ago, was already putting questions in his RFPs asking about the kind of care his vendors took in the way they built security into products.


Anyway, Jaime recently emailed me with a PPT he had done on security excellence, which, in my own nosy parker way, I suggested he turn into a blog (he did). His security excellence principles include things like valuing people and leading with integrity. Why is his blog a good speed bump? Because you could read an endless procession of Management Tomes without ever finding useful advice like this. His advice is good, solid, and timeless. I know this because I spent two years getting an expensive graduate business degree from Wharton and I am not really sure that most people there know or care about the difference between management and leadership. (Unfortunately, too many people who espouse "principle-centered leadership" -- or whatever the latest business buzzword is -- are not practitioners of it: "look after people" really means "look after (self; by using) people.")


Since I know him, I can say with confidence that Jaime practices what he preaches. His suggestions for security excellence are good reminders for those days when you are feeling like crankily cutting corners with people you work with or who work for you. Thank you, Jaime, for a speed bump reminder -- your email -- that being a decent and good person of high integrity is among the most valuable business skills there are, for security gurus and others. (These principles are mom-approved, too.)


I am playing fast and loose with the term "speed bump," but I would like to extend it to various other cautionary signs that call for some attention on the road from where you've been to where you are going. I'll add that some of these signs have real meaning here in Idaho.


"Game crossing," for example, does not mean a bunch of geeks with X-boxes are likely to cross Highway 20, no sirree. Not only do "Game Crossing" signs in Idaho mean that herds of critters -- like elk -- move along various corridors that cross highways, the state of Idaho adds a flag to the Game Crossing signs during the seasons when the animals are most active. It's Idaho Fish and Game's way of saying, "Pay attention: we really mean it!" I just read a blog entry by the former police chief of Hailey, Idaho talking about near misses with large critters. His theory is that suicidal and vengeful deer cross the highway to target people who have hunting licenses. (I feel compelled to add that Brian's blog entry provides a far better sense of a road trip across America than most anything else I have read.)


Game crossing signs, aside from mitigating the risk of "bull elk as unwanted hood ornament," do help you slow down and look at the scenery, which in Idaho is pretty spectacular. On the road from Boise to Ketchum, I've seen elk, moose, deer, antelope, peregrine falcons (Idaho is the home of the World Center for Birds of Prey, and has done more than any other state to help bring the peregrine falcon back from near-extinction), coyote, skunk, owls, porcupine, sand hill cranes, and foxes. There are wolves in the northern part of the Wood River Valley, though I've never seen them. Only last night, I saw a mother mule deer and twin fawns crossing Sun Valley Road at dusk. I hope I never get jaded at seeing such amazing animals. (It sure beats doing that 512th email of the day.)


The other speed bumps that really mean something around here are the fire warning signs. This summer has been unusually hot, dry, and windy. A perfect (fire) storm. Large parts of the west are burning, in some cases because of lightning strikes, in other cases because of stupid and careless individuals. A sign in a campground that says "No Campfires" means no campfires. "Please do not set off illegal fireworks," means it, too. The first fire of the summer here in Blaine County burned one of my favorite walks in Sun Valley -- Trail Creek. The entire mountain is blackened and looks like a moonscape. All because someone carelessly set a fire during high burn conditions. If you are camping or hiking in the west this summer, be careful, folks. Fire warning signs really mean something this year and there is no Undo button if you get it wrong.


Since I happily set up a topic on cautionary notes/speed bumps, I am going to add one myself, and it goes to the always contentious, World-Wrestling-Federation-has-never-seen-anything-like-it area of how to handle security vulnerabilities. I'm going to avoid the pothole -- for now -- of responsible disclosure vs. full disclosure except to note that, like everything else in life, there is no perfect disclosure policy that optimizes on all parameters. And in fact, there are no perfect patching policies, either. Only in Never-Never Land or Oz can you patch every single security vulnerability of every single severity in real time on all affected versions such that patch application is real time and perfect, too. That should be obvious -- when we get into these discussions -- but it isn't.


The gritty reality is that life is constrained. No company or organization or individual has infinite resources ("resource" includes time, expertise, people, pretty much anything that you need to effect positive change but doesn't grow on trees). By definition, something that is not infinite or free -- or both -- is constrained. For example, I don't have to pay to use the ocean, but my surfing is constrained by the number of other people out in the water, the time between swells (sometimes it's a good 20 minutes between waves, some days it's more like "catch a wave, paddle out, catch another wave right away, come in after 45 minutes because you are exhausted") and so on. Even surfing is constrained because while there are an infinite number of waves -- over time -- there are only so many during the time I am out surfing and each wave holds two people at most.


We shouldn't always attribute evil motives to the fact that organizations live with constraints. Constraints affect both vendors and their customers. Vendors cannot always create patches for every single issue on every single old version of product (sometimes, a fix would require an architectural change, which we all know can't happen on old products in all cases since architectural fixes aren't always backportable). Constrained resources also apply to the companies who apply patches. At a minimum, companies need their systems to be up some reasonable amount of time so that people can work (one reason that companies really, truly hate taking systems down for "emergency fixes" unless it's truly an emergency -- and not a manufactured emergency, either).


Even if a vendor could create the equivalent of The Security Patch That Ate Cleveland (e.g., a patch that includes fixes for all security vulnerabilities from the beginning of time), the amount of work for a customer to actually apply The Security Patch That Ate Cleveland is equivalent to upgrading to a new product version (containing all those fixes already), which many customers do regularly for other reasons. Living with constraints means that as a vendor, you sit around and try, within some basic principles, to figure out how to do the most effective good for the most people. It doesn't mean doing the minimum and hoping nobody notices, but it does mean weighing a lot of different factors in trying to make the best use of time and people to protect the most customers you can in the most cost-effective way for them.


Small digression: this is a good time to give a quick recap of the amount of work that does go into fixing security issues, which is why we continue to work to avoid these problems in the first place (through coding standards, better vulnerability testing tools, process improvements, and so on). So, here goes: Oracle has multiple large product stacks, each of which has multiple supported versions, each of which runs on multiple operating systems (the last major release of the Oracle database alone shipped on something like 19 OSs). The stacks are interdependent (for example, Oracle eBusiness Suite runs on Oracle Application Server and the Oracle Database).  And of course, we have made many product acquisitions that also have "other Oracle product" dependencies.


In short, it's not enough just to fix a product problem where it occurs; you need to make sure it does not break something else that depends on the product, otherwise the "fix" is useless to a customer. And of course, all the moving parts (patches) across the company have to come out on the same day, because you don't want customers bringing down, for example, Oracle Application Server one week to apply an Oracle Database (client-side) patch, the second week to fix an Oracle Application Server bug, and the third week to patch some Oracle Collaboration Suite components that run on the middle tier. The interdependencies, the time constraint (we have a fixed delivery date that can't move) and the "ripple" effect are the main reasons why fixing a security issue is something measured on a calendar, not a stopwatch. I try to explain this in terms of the well-known Mastercard ads:


  • Two line code change fixing security bug -- 20 minutes
  • Finding all similar/related bugs, on all affected product versions, fixing them, thoroughly testing the fix across all versions and product dependencies -- 3 months
  • Handing customers a fix that doesn't break anything else -- priceless

Where does this leave us? With a speed bump that says, in effect, newer versions of products -- almost any vendor's products -- are probably, all other things being equal, "more secure." This seems obvious, but it is worth stating. Vendors -- most of us -- know more about secure development and secure coding than we did even three or four years ago. Newer products reflect that. Also, even if we can't fix every single security issue on old product versions, we certainly are going to fix it in new versions. Preferably, as soon as we can because it is just good business and common sense to do this.


I think I should pause now and comment on a predictable screaming point: the idea, unfortunately but widely promulgated, that all security issues should be fixed at exactly the same time for everybody. If they aren't, the conventional wisdom goes, the vendor is being evil-minded towards their customers.


With all due respect, that is a lot of hooey, for reasons that should be obvious.


Suppose I am a developer building a housing development containing 100 houses. Suppose also that 20 houses into my development, I realize that I have a problem with leaky bathroom sinks. In fact, it's systemic enough that I need a different sink to be dropped in. I can't just fix the leaks -- I need to swap out the sink. I have several choices here:


  • I can finish the 80 other houses with the old leaky sinks, so everybody's house is equally leaky. Then, I rip out 100 sinks and retrofit them all at the same time. In the construction business (and I know this because I used to work in construction) this option is called "How Dumb Can You Be?"


  • I can use the new sinks in the next house I build (number 21) and in the rest of them (houses 22-100). I can then go back and fix the 20 houses with the old bad sinks. We can argue about the exact timing of who, in houses 1 through 20, gets the new sinks when, but one thing is clear -- if I am just about to drop a bathroom sink into house number 21, and I have a chance to put a WORKING sink in that doesn't leak, that's what I should do. Having 100 flooded bathrooms to prove a principle of equality is a recipe for being out of business. It's also really dumb, expensively so, for everybody.


Note that I am not disputing that maybe, the sink design should have been reviewed earlier, or more carefully. Or that the contractor should learn from the sink problem so new housing developments have better sinks (maybe the architect was at fault, or maybe the contractor had a bad supplier). These are all good points, and valid ones. But the reality is, and I also know this from working in construction, there is just no such thing as a building that goes up with absolutely no change orders (contract modifications due to something needing to be fixed during construction). I spent about 80% of my first job in the Navy negotiating change orders to construction contracts, and about 20% of my time managing the actual contracts. And I had mostly good architects, good engineers, and good contractors.


Oracle has tried to optimize fixing critical -- and we mean critical -- security issues in reasonably consumable chunks, four times a year for people on older product versions. We call these "critical patch updates." I also note that we actually fix security issues going forward (in new versions and patch sets) FIRST. This means, all things being equal, newer versions and patch sets have more security fixes, and generally have them sooner. We do this because, if we have a product train leaving the station, and there is a critical security problem, it makes sense -- and protects customers best -- if we get that critical issue fixed going forward if we possibly can. We bundle many -- but not all -- security fixes into critical patch updates and release those four times a year. Because of the amount of overhead in fixing, backporting, testing and integrating combined fixes, there may be and often is a time lag to get a fix into older product versions (which is when we announce the fix -- when a patch goes out for those older versions). Just like the folks in houses 1-20 might have to wait to get new sinks (while people in houses 21-100 don't have that problem). It's not a perfect solution, but it is better than not fixing anything going forward to the point that everybody ends up having to install The Security Patch That Ate Cleveland.


The most unpleasant conversations I ever have with customers, and I have had several of these, occur when a customer has never applied any security fixes (in the days when we did security alerts) and is running on a version that was (at that time) long out of support. (Even with the new support models we've put in place, we do not issue security alerts or CPUs for all versions.) The customer is now running on what can only be construed as an archeological version of the product (as in, Oracle7 or Oracle 7.2, and I am not making that up), and yet it is a mission-critical application. The customer wants to know if they are "at risk" from unpatched security vulnerabilities. I think I can safely say that they are. And I have. I also tell them that we do not do security analysis on out-of-support versions of the product, but in many cases an issue we are fixing (via a critical patch update) has been in the code awhile and probably does exist in the really old, out-of-support product version.


I know people like nice, stable versions of product; who doesn't? (I drove a Honda CRX for 17 years and only got a new car since the Honda was coming up on 300,000 miles and I couldn't retrofit cup holders into it.) But I tell customers they need to plan on some regular maintenance, and -- all things being equal -- newer versions of product are more secure. If I had to put this into rote form, it would be: "Dear customer: it is in your interests to upgrade from time to time, because we cannot fix every single security issue of every single severity on every single old version. Nobody can. We try as best we can to protect the most customers to the best of our ability. Part of that also means making newer versions better. Please don't move to the second-from-oldest supported version to 'get current,' please move to the latest and greatest product version if you possibly can."


I offer the above as my own speed bump -- a chance for people racing along the highway of security and in particular, security vulnerability handling to stop, look, observe, and slow down.


For more information:


Jaime Chanaga's good advice on security leadership (July 16 blog entry):




The Hailey, Idaho (former) police chief's blog on Vengeful Deer, Jesus and Bob and Big Water:




Pictures of the Trail Creek fire:




The World Center for Birds of Prey:




The release of the new Idaho quarter (with a peregrine falcon on it!):




Platform Migration from Sun-Solaris to HP-UX PA RISC

Madan Mohan - Tue, 2007-08-14 10:22


----> For Customer Specific patch

1. Apply the Platform Migration patch 3453499 (ADX.F)
2. Make sure you have zip 2.3 installed on the source machine.
3. Generate and upload the manifest of customer-specific files.
- Log into the source as the applmgr user and source the APPL_TOP environment file.
- Generate the customer-specific file manifest by executing the command below. It generates the file adgenpsf.txt under $APPL_TOP/admin/$TWO_TASK/out:

- perl $AD_TOP/bin/adgenpsf.pl
4. Go to http://updates.oracle.com/PlatformMigration, use your Metalink username and password, and follow the instructions on the screen to upload the manifest file "adgenpsf.txt" created in step 3.

----> For export / import

5. Apply the AD minipack F 2141471 (conditional).
6. Apply the Applications consolidated export/import utility patch 4872830.
7. If source is on 11.5.7, then apply the materialized views patch 2447246.
8. Apply latest Applications database preparation scripts patch 4775612.
9. Identify the global_name:
- select global_name from global_name;
10. Create the export parameter file "exp_parameter.dat".

1. Run the Rapid Install to create the 9.2.0 Home without the database portion.
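Step 10's parameter file contents are not shown above. As a sketch only (the dump file names, buffer size, and options below are assumptions -- follow the platform migration document for the exact parameters for your release), an exp parameter file might look like:

```
full=y
file=exp_appsdata.dmp
log=exp_appsdata.log
buffer=10485760
statistics=none
compress=n
```

You would then run the export with something like exp parfile=exp_parameter.dat, supplying credentials as the migration document instructs.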

Cost Allocation flexfield and Costing process

RameshKumar Shanmugam - Mon, 2007-08-13 17:40
Following are the important key flexfields in HR and Payroll:


  • Job Flexfield
  • Position Flexfield
  • Grade Flexfield
  • People Group Flexfield
  • Cost Allocation Flexfield
  • Competence Flexfield
  • Personal Analysis Flexfield
  • Soft Coded KeyFlexfield
  • Bank Details KeyFlexField
Setting up the Cost Allocation flexfield is a mandatory step in the Payroll setup.

The Cost Allocation flexfield is used to accumulate employee costing information. If we use Oracle Payroll, we can accumulate the costs associated with the payroll and transfer them to GL; if we are not using Oracle Payroll, we can interface the costing information to a third-party payroll system.

A few important points should be taken care of before creating the Cost Allocation flexfield:

  • If we are planning to integrate with GL, then the number of segments in the Cost Allocation flexfield should be the same as or more than in the Accounting flexfield.

  • We should have at least one segment in the Cost Allocation flexfield; otherwise we will run into errors when defining a payroll, or in any other form that uses this flexfield.

The Cost Allocation flexfield makes use of qualifiers; we can use the segment qualifiers to control the level at which costing information can be entered in the system. The various levels at which we can cost are:

  • Element entry
  • Assignment
  • Organization
  • Element Link
  • Payroll

If an element is not costed at any level, the final costing information will accumulate in the suspense account defined in the payroll form.

Use the GL Map window to map the Cost Allocation flexfield to the GL Accounting flexfield. We should map each Cost Allocation segment to a GL segment for each payroll; any additional segments in the Cost Allocation flexfield should be mapped to 'Null'.

Following are the processes that you need to run to transfer the costing information from Oracle Payroll to GL:

  • Costing process
  • Costing of Payment
  • Transfer to GL

Try it out!!!

Categories: APPS Blogs


Herod T - Mon, 2007-08-13 14:04
Due to the LARGE amount of spam this blog is getting, I am going to switch comments to registered bloggers only. Sorry all, but I have had enough of deleting the SPAM posts. Death to all spammers.

Oracle Flow Manufacturing is gaining referenceable early majority customers

Chris Grillone - Mon, 2007-08-13 13:43

Oracle Flow Manufacturing is crossing the chasm of the technology adoption life cycle and gaining referenceable customers in the early majority. An extremely detailed market analysis was conducted to prioritize the next enhancements to Flow.

Assignment action interlock rule failure

RameshKumar Shanmugam - Sun, 2007-08-12 17:02
During a Payroll Run rollback, many times we may come across the error message
APP-PAY-07507: Assignment action interlock rule failure

This may be due to multiple reasons, but one such reason is that there may be some future sequenced process which has to be rolled back before we roll back the current process.

To find the list of all future sequenced processes, execute the following query:

select distinct pact.payroll_action_id, pact.effective_date, pact.action_type
  from pay_action_classifications class,
       pay_payroll_actions pact,
       pay_assignment_actions act,
       per_assignments_f ass,
       per_periods_of_service pos,
       pay_assignment_actions act1,
       per_assignments_f ass1
 where pos.person_id = ass1.person_id
   and ass1.assignment_id = act1.assignment_id
   and ass.period_of_service_id = pos.period_of_service_id
   and act.assignment_id = ass.assignment_id
   and act.action_sequence > act1.action_sequence
   and act.action_status in ('C', 'S', 'M')
   and act.payroll_action_id = pact.payroll_action_id
   and pact.action_type = class.action_type
   and class.classification_name = 'SEQUENCED'

This query will return the list of all future sequenced processes which need to be rolled back before we roll back the current process.

Hope this helps :)
Categories: APPS Blogs

KDD 2007

Marcos Campos - Sun, 2007-08-12 08:34
For the next couple of days I am going to be attending the KDD (Knowledge Discovery in Databases) 2007 conference (conference website) along with some other Oracle colleagues. KDD is one of the primary conferences on data mining. This year it will take place in San Jose, CA, from August 12 to 15. Oracle is a Gold sponsor for the event and will have a large presence at the conference.
Categories: BI & Warehousing

Oracle Database 11g available for download

Hampus Linden - Sat, 2007-08-11 15:40
Well, it's about time. Oracle finally made 11g available for download. Only 32-bit Linux so far though and I have a feeling we'll have to wait a while for most other platforms (possibly a 64-bit Linux download soon).

Download it here.

Lots of new cool stuff to blog about, I'm away on holiday for a week but my recently upgraded lab machines at home are sitting there waiting. Fun times when I get home.

Welcome back

Oracle WTF - Fri, 2007-08-10 05:16

Our guest administrator "Splogger" has now left the building, along with his page of helpful links to items on Amazon.com and a range of gentlemen's health products.

Suspiciously, a couple of days before he arrived we were taken off air by Blogger's spambots, presumably alerted by the amount of irrelevant, repetitive, and nonsensical text and links to Viagra sites they found here. From what I read, it seems possible that the Blogger automated suspension to prevent blog spam might have actually left the account vulnerable to blog spammers. As ironies go, that is up there with rain on your wedding day and good advice that you just didn't take.

11g , get -set - go !!!!

Pankaj Chandiramani - Fri, 2007-08-10 01:55

11g for linux is available for download @OTN from here.
Read all about the new features for HA, DB Replay, etc. from http://www.oracle.com/technology/products/database/oracle11g/index.html

Categories: DBA Blogs

Tune up your JDeveloper

Wijaya Kusumo - Tue, 2007-08-07 10:35
JDeveloper is slow, or is it? I'm using Oracle JDeveloper 11g - Technical Preview (Studio Edition Version, and it was incredibly slow. Opening a project, drawing a database diagram, switching between applications, etc. are really testing my patience. I'm using a pretty good notebook: Win XP, dual-core processor, 2 GB RAM, and plenty of HD space. After a few searches here and there, the

Handy script to find out eligible workflow data for purging

Fadi Hasweh - Tue, 2007-08-07 02:33
I have the "Purge Obsolete Workflow Runtime Data" concurrent request scheduled to run on a weekly basis, but I found out that this request is not purging all the data that can be purged. I searched Metalink for similar cases and found more than one note talking about the same issue. One of the notes, 165316.1 (bde_wf_data.sql - Query Workflow Runtime Data That Is Eligible For Purging), has the bde_wf_data.sql script that can be downloaded from Metalink. This script will create a bde_wf_data.lst file that looks like a script but needs some cleansing; the file has commands like the following


These commands purge the data that is eligible to be purged. At the end of the .lst file there are also statements to delete/build the table stats for the following tables

Since the script does a lot of purging/deleting from those tables, the stats need to be built again.
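The generated commands themselves are not reproduced here, but as a sketch of the kind of calls bde_wf_data.lst ends up containing (the item type, age cutoff, and table names below are illustrative assumptions; your generated file will differ):

```sql
-- Purge runtime data for closed items of an assumed item type,
-- older than 30 days (SQL*Plus syntax)
exec WF_PURGE.TOTAL('OEOL', null, sysdate - 30);

-- Rebuild stats on the workflow runtime tables after the heavy deletes
exec FND_STATS.GATHER_TABLE_STATS('APPLSYS', 'WF_ITEMS');
exec FND_STATS.GATHER_TABLE_STATS('APPLSYS', 'WF_ITEM_ACTIVITY_STATUSES');
```

The generated .lst file will contain one such set of statements per eligible item type, which is why it needs the cleansing mentioned above before it can be run as a script.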


During the search I also found note 144806.1 (A Detailed Approach To Purging Oracle Workflow Runtime Data), which I very much recommend reading.

Have a nice bug-free day

Some thoughts on the ACE program

Peter Khos - Sat, 2007-08-04 10:54
You know, with the recent revamping of the Oracle ACE program, the recent spat between a couple of well-known individuals in the Oracle community, and the subsequent related blog entries in the Oracle blogosphere, I wonder where these two individuals fit within the Oracle ACE program. A quick check of the Oracle ACE site reveals that one of the individuals is already an Oracle ACE but not the other.

Don and Jonathan at it again

Herod T - Fri, 2007-08-03 13:13

Once again they are at it.


All I have to say on the matter is: Don Burleson and his employees' comments, scripts, "how-to's" and expert advice have screwed up more than one thing, mostly due to me trusting them without actually paying attention to what was going on. Nothing from Jonathan Lewis has ever failed me.
Don Burleson has an interesting outlook on life -- check out his personal blog. I won't link to it, but just google "don burleson blog personal" and it is the first hit.

Rather enlightening to see his view on life.

Oracle APPS DBA Interview Questions

Madan Mohan - Thu, 2007-08-02 01:34
Q1. What is the wdbsvr.app file used for? What is the full path of this file? What is its significance?

Ans: The wdbsvr.app file is used by the mod_plsql component of Apache to connect to the database. The file is located at $IAS_ORACLE_HOME/Apache/modplsql/cfg.

Q2. Where would I find a .rf9 file, and what exactly does it do?

Ans: These files are used during the restart of a patch, in case the patch failed for some reason.

Q3. Where is appsweb.cfg or appsweb_$CONTEXT.cfg stored, and why is it used?

Ans: This file is defined by the environment variable FORMS60_WEB_CONFIG_FILE and is usually in the directory $OA_HTML/bin on the forms tier. It is used by any forms client session: when a user tries to access forms, f60webmx picks up this file and, based on this configuration file, creates a forms session for the user/client.

Q4. Can you clone from a multi-node system to a single-node system, and vice versa?

Ans: Yes.

Q5. What is a .dbc file? There are a lot of dbc files under $FND_SECURE; how is it determined which dbc file to use from $FND_SECURE?

Ans: dbc, as the name says, is a database connect descriptor file which stores the database connection information used by the application tier to connect to the database. This file is in the directory $FND_TOP/secure, also called FND_SECURE.
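As a sketch only, a .dbc file holds simple key=value connection entries of this general kind (the keys and values below are illustrative assumptions, not taken from any particular system):

```
TWO_TASK=PROD
FNDNAM=APPS
GWYUID=APPLSYSPUB/PUB
APPS_JDBC_DRIVER_TYPE=THIN
```

The file an application tier actually uses is the one matching its context; never edit these by hand on a running system -- they are managed by the standard tools.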

Q6. What things do you do to reduce patch timing?

Ans: # Merging patches via admrgpch
# Using various adpatch options like nocompiledb or nocompilejsp
# Using a defaults file
# Using a staged APPL_TOP during upgrades
# Increasing the batch size (might have a negative impact)

Q7. Can you apply a patch without putting Applications 11i in maintenance mode?

Ans: Yes, use options=hotpatch with adpatch. From AD.I onwards we otherwise need to enable maintenance mode in order to apply apps patches.

Q8. What is the adident utility used for?

Ans: The adident (AD Identification) utility in Oracle Apps is used to find the version of any file.
For example: adident Header <filename>

Q9. How can you license a product after installation?

Ans: By using the AD utility adlicmgr to license a product in Oracle Apps.

Q10. What is MRC? What do you do to enable MRC in Apps?

Ans: MRC is also called Multiple Reporting Currencies in Oracle Apps. By default your currency is US dollars, but if your organization's operating books are in another currency, then you as the apps DBA need to enable MRC in Apps.

Q11. What is access_log in Apache? What entries are recorded in access_log? Where is the default location of this file?

Ans: access_log in Oracle Application Server records all users accessing Oracle Applications 11i. The file location is defined in httpd.conf, with the default location at $IAS_ORACLE_HOME/Apache/Apache/logs. Entries in this file are defined by the LogFormat directive in httpd.conf. A typical entry in access_log is - - [10/Sep/2006:18:37:17 +0100] "POST /OA_HTML/OA.jsp?.... HTTP/1.1" 200 28035,
where 200 is the HTTP status code and the last number, 28035, is the bytes downloaded (the size of the page).
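Entries of this shape are produced by the Common Log Format. A minimal sketch of the relevant httpd.conf directives (the alias name "common" and the CustomLog path are assumptions about any particular configuration):

```
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log common
```

Here %h is the client host, %t the timestamp, %r the request line, %>s the final status code (the 200 above), and %b the response size in bytes (the 28035 above).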

Q12. What is session time out parameter & where all you define these values ?

Ans: In order to answer, you first have to understand what kinds of sessions exist in Apps 11i and what idle timeout is.
In Apps there are two broad categories of session:
- Self Service Application sessions (served by the web server, iAS Apache & JServ; e.g. iRecruitment, iProcurement)
- Forms sessions (served by the forms server; e.g. System Administrator)

What is Session Idle time ?
If an Oracle Apps client is not doing any activity for some time (the application user steps out for coffee or takes a phone call), the session during that time is called an idle session. For security reasons, for performance, and to free up system resources, Oracle Applications terminates the client session (both forms and self service) once the idle time reaches the value mentioned in the configuration.

From FND.G / 11.5.9, with the introduction of AppsLocalLogin.jsp to enter the application, the profile option "ICX: Session Timeout" is used only to determine the forms session idle timeout value. This can be confusing, as earlier this profile option controlled forms as well as self service application sessions. Now session.timeout is used to control the idle session timeout for Self Service Applications (served by JServ via the JVM).

From where ICX : Session Timeout & session.timeout get values ?

AutoConfig determines the values of the profile option "ICX: Session Timeout" and of "session.timeout" from the entry in the context file ($APPL_TOP/admin/SID_hostname.xml) with parameter s_sesstimeout, whose value is in milliseconds; the ICX: Session Timeout profile value is therefore s_sesstimeout / (1000 * 60). The same value is also set in zone.properties under $IAS_ORACLE_HOME/Apache/Jserv, again in milliseconds, e.g. session.timeout = 600000 (equal to 10 minutes).

session.timeout in zone.properties is in milliseconds; the ICX session timeout in profile option ICX: Session Timeout is in minutes. So ICX session timeout = 30 and session.timeout = 1800000 both mean 30 minutes.

P.S. ICX Session Timeout was introduced in FND.D, so if your FND version is below D you might not see this profile option.

Important things an Apps DBA should consider while setting the session timeout value:
1. If you set session.timeout too high, idle self service sessions linger after users abandon them; these long idle sessions drain JVM resources and can result in java.lang OutOfMemory issues.
2. If you set it too low, users stepping out for tea or sitting idle for a short while have to log in to the application again, which can be annoying.

The rule of thumb is a session timeout of 30 minutes.
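The minutes-to-milliseconds relationship above is easy to sanity-check in the shell; this sketch simply applies the s_sesstimeout / (1000 * 60) formula to the 30-minute example:

```shell
# session.timeout (zone.properties) is in milliseconds;
# ICX: Session Timeout (profile option) is in minutes.
session_timeout_ms=1800000
icx_timeout_min=$(( session_timeout_ms / (1000 * 60) ))
echo "session.timeout=${session_timeout_ms} ms is ICX: Session Timeout=${icx_timeout_min} min"
```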

Q13. Where is applications start/stop scripts stored ?

Ans: $COMMON_TOP/admin/scripts/$CONTEXT_NAME

Q14. What are main configuration files in Web Server (Apache) ?

Ans: Main configuration files in Oracle Apps Web Server are

# httpd.conf, apps.conf, oracle_apache.conf, httpd_pls.conf
# jserv.conf, ssp_init.txt, jserv.properties, zone.properties
# plsql.conf, wdbsvr.app

Q15. How to check if Apps 11i System is Autoconfig enabled ?

Ans: Under $AD_TOP/bin check for the file adcfginfo.sh; if it exists, use
adcfginfo.sh contextfile=<context file> show=enabled

If this file is not there, look at any configuration file under APPL_TOP; if the system is AutoConfig enabled, you will see an entry like
# AutoConfig automatically generates this file. It will be read and .......

Q16. How to check if Oracle Apps 11i System is Rapid Clone enabled ?

Ans: For a system to be Rapid Clone enabled, it should be AutoConfig enabled (see above for how to confirm this). You should have the Rapid Clone patches applied; Rapid Clone is part of the Rapid Install product, whose family pack name is ADX. By default, all Apps 11i instances at 11.5.9 and above are AutoConfig and Rapid Clone enabled.

Q17. What is plssql/database cache?

Ans: In order to improve performance, mod_plsql (an Apache component) caches some database content to file. This database/plsql cache usually comes in two types: session and plsql.
# The session cache is used to store session information.
# The plsql cache is used to store PL/SQL content used by mod_plsql.

Q18. How to determine Oracle Apps 11i Version ?

Ans: select RELEASE_NAME from fnd_product_groups;

You should see output like
11.5.9 or 11.5.10

Q19. What is RRA/FNDFS ?

Ans: Report Review Agent (RRA), also referred to by its executable FNDFS, is the default text viewer in Oracle Applications 11i for viewing output files and log files. Note that RRA is not the Report Server; many Apps DBAs confuse the two.

Q20. What is PCP in Oracle Applications 11i ? In what scenarios PCP is Used ?

Ans: PCP stands for Parallel Concurrent Processing. Usually you have one concurrent manager executing your requests, but you can configure concurrent managers running on two machines (you need to do some additional setup to configure Parallel Concurrent Processing). Then for some of your requests the primary CM node is on machine1 and the secondary CM node on machine2, while for other requests the primary CM is on machine2 and the secondary CM on machine1.

If you are running GL month-end reports or annual taxation reports, these reports might take a couple of days. Some of these requests are very resource intensive, so you can have one node running long-running, resource-intensive requests while the other processes your day-to-day short-running requests.
Another scenario is when your requests are very critical and you want high resilience for your concurrent processing node: you can configure PCP so that if node1 goes down you still have a CM node available to process your requests.

Q21. Output & Logfiles for requests executed on source Instance not working on cloned Instance?

Ans: Here is the exact problem description: you cloned an Oracle Apps instance from PRODBOX to another box, with instance name say CLONEBOX, on 1st of August. You can view CM log/output files generated after 1st of August only, because those were generated on CLONEBOX itself, but you are unable to view the log/output files from before 1st of August. What will you do, and where do you check?
The log and output file paths and locations are stored in the table FND_CONCURRENT_REQUESTS. Check with

select logfile_name, logfile_node_name, outfile_name, outfile_node_name from fnd_concurrent_requests where request_id=&requestid ;

where requestid is the ID of the request whose log or out files you cannot see. You should see output like
/u01/PRODBOX/log/l123456.req, host1, /u01/PRODBOX/out/o123456.out, host1
Update these values according to your cloned instance's variables.
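As a rough sketch of the clean-up step (the instance names and file paths are the illustrative ones from the question, not real values), the stored path can be rewritten from the source to the cloned instance with sed:

```shell
# Path as stored in FND_CONCURRENT_REQUESTS on the source instance
# (illustrative value from the question above).
old='/u01/PRODBOX/log/l123456.req'

# Rewrite the instance-specific part for the clone (illustrative names).
new=$(printf '%s\n' "$old" | sed 's/PRODBOX/CLONEBOX/')
echo "$new"
```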

Q22. How to confirm if Report Server is Up & Running ?

Ans: The Report Server is started by the executable rwmts60 on the concurrent manager node; this file is under $ORACLE_HOME/bin. Execute a command on your server like
ps -ef | grep rwmts60
You should get output like
applmgr ....... rwmts60 name=REP60_VISION
where VISION is your instance name.
Alternatively, you can submit a request like "Active Users" with display set to PDF, then check the output and log file to see if the Report Server can display PDF files.
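A minimal sketch of the check, run here against a mocked ps line (on a real CM node you would pipe the live `ps -ef` output instead):

```shell
# Mocked ps -ef line for the report server process (illustrative values).
ps_output='applmgr  1234  1  0 10:00 ?  00:00:05 rwmts60 name=REP60_VISION'

# Count matching lines; a non-zero count means the report server is up.
count=$(printf '%s\n' "$ps_output" | grep -c 'rwmts60')
echo "rwmts60 processes found: $count"
```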

Q23. What is difference between ICM, Std Managers & CRM in Concurrent Manager ?

Ans: # ICM stands for Internal Concurrent Manager, which controls the other managers. If it finds other managers down, it checks on them and tries to restart them. You can think of it as the administrator of the other concurrent managers. It has other tasks as well.
# Standard Managers are the normal managers which act on requests and do batch or single request processing.
# CRM, an acronym for Conflict Resolution Manager, is used to resolve conflicts between managers and requests. If a request is submitted whose execution clashes with, or is defined not to run alongside, a particular type of running request, then such requests are assigned to the CRM for incompatibility and conflict resolution.

Q24. What is use of Apps listener ? How to start Apps listener ? How to confirm if Apps Listener is Up & Running ?

Ans: The Apps listener usually runs on all Oracle Applications 11i nodes with the listener alias APPS_$SID, and is mainly used to listen for requests for services like FNDFS & FNDSM.

In Oracle 11i, the script adalnctl.sh starts your apps listener. You can also start it with the command
- lsnrctl start APPS_$SID (replace SID with your instance SID name)

To confirm it is up, execute
lsnrctl status APPS_$SID (replace SID with your instance name)
So if your SID is VISION, use lsnrctl status APPS_VISION; the output should be like
Services Summary...
FNDFS has 1 service handler(s)
FNDSM has 1 service handler(s)
The command completed successfully
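A small sketch of how that output could be checked in a script; the lsnrctl output here is mocked from the sample above, and a healthy apps listener should report handlers for both FNDFS and FNDSM:

```shell
# Mocked `lsnrctl status APPS_VISION` output (from the sample above).
lsnr_output='Services Summary...
FNDFS has 1 service handler(s)
FNDSM has 1 service handler(s)
The command completed successfully'

# Count the service-handler lines; expect one each for FNDFS and FNDSM.
handlers=$(printf '%s\n' "$lsnr_output" | grep -c 'service handler')
echo "service handlers: $handlers"
```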

Q25. What is Web Listener ?

Ans: The Web Listener is the web server listener, listening for web (HTTP) service requests. This listener is started by adapcctl.sh and defined by the Listen and Port directives in httpd.conf for the web server. When you type a request like http://becomeappsdba.blogspot.com:80 to access the application, the port number 80 is the Web Listener port.

Q26. How will you find Invalid Objects in database ? How to compile Invalid Objects in database ?

Ans: Using the query
SQLPLUS> select count(*) from dba_objects where status = 'INVALID';

To compile them:
- using ADADMIN
- using utlrp.sql, which is shipped with Oracle.

Q27. How to compile JSP in Oracle Apps ?

Ans: Use the ojspCompile.pl Perl script shipped with Oracle Apps to compile JSP files. This script is under $JTF_TOP/admin/scripts. A sample invocation is
perl ojspCompile.pl --compile --quiet

Q28. What is difference between adpatch & opatch ? Can you use both adpatch & opatch in Apps ?

Ans: Yes , we can use both adpatch and opatch in Apps. adpatch is an ad utility used for applying apps patches, whereas opatch is a utility used to apply rdbms patches.

Q29. Where will you find forms configuration details apart from xml file ? What is forms server executable Name ?

Ans: Forms startup configuration is in the script adfrmctl.sh, and appsweb_$CONTEXT_NAME.cfg (defined by the environment variable FORMS60_WEB_CONFIG_FILE) is used for each forms client connection a user initiates.
- f60srvm is the forms server executable name.

Q30. What are different modes of forms in which you can start Forms Server and which one is default ?

Ans: There are two modes in which we can start forms.
- Socket Mode
- Servlet Mode.

By default, forms are configured to start in socket mode.

Q31. How you will start Discoverer in Oracle Apps 11i ?

Ans: In order to start Discoverer you can use the script addisctl.sh under $OAD_TOP/admin/scripts/$CONTEXT_NAME, or startall.sh under $ORACLE_HOME/discwb4/util (on the middle/application tier).

Q32. How many ORACLE_HOMEs are there in Oracle Apps and what is the significance of each ?

Ans: There are three ORACLE_HOMEs in Oracle Apps: two on the application tier (middle tier) and one on the database tier.
# ORACLE_HOME 1: On the application tier, used to store the 8.0.6 techstack software. This is used by forms, reports & discoverer. ORACLE_HOME should point to this home while applying an Apps patch.
# ORACLE_HOME 2: On the application tier, used by the iAS (web server) techstack software. This is used by the web listener and contains Apache.
# ORACLE_HOME 3: On the database tier, used by the database software, usually an 8i, 9i or 10g database.

Q33. Where is HTML Cache stored in Oracle Apps Server ?

Ans: The Oracle HTML cache is available at $COMMON_TOP/_pages; for some earlier versions you might find it in $OA_HTML/_pages.

Q34. Where is plssql cache stored in Oracle Apps ?

Ans: Usually two types of cache, session & plsql, are stored under $IAS_ORACLE_HOME/Apache/modplsql/cache.

Q35. What happens if you don't give cache size while defining Concurrent Manager ?

Ans: Let's first understand what cache size is in a concurrent manager. When a manager picks requests from the FND_CONCURRENT_REQUESTS queue, it picks up the number of requests defined by its cache size in one shot and works through them before going to sleep. So, in my view, if you don't define a cache size while defining the CM, it takes the default value of 1, i.e. it picks up one request per cycle.
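A toy sketch of the effect of cache size (the numbers are made up): with the default cache size of 1, five queued requests need five wake-up cycles, while a cache size of 5 would fetch them all in one shot:

```shell
# Number of queued requests and the manager's cache size (illustrative).
requests=5
cache_size=1

# Each cycle picks up to cache_size requests,
# so cycles = ceil(requests / cache_size).
cycles=$(( (requests + cache_size - 1) / cache_size ))
echo "cycles needed: $cycles"
```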

Q36. What are few profile options which you update after cloning ?

Ans: Rapid Clone updates the profile options set at site level. If you have any profile options set at other levels (server, responsibility, user, ...), reset them. One example:

- Site Name

Q39. How to retrieve SYSADMIN password ?

Ans: If the forgot-password link is enabled and the SYSADMIN account is configured with a mail ID, use the forgot-password link; otherwise you can reset the SYSADMIN password via FNDCPASS.

Q40. If you have done two node Installation, First machine : Database and concurrent processing server. 2nd machine: form,web Which machine have admin server/node?

Ans: The admin server will always reside on the machine where concurrent processing resides.

Q41. What is GWYUID, Where GWYUID defined & what is its used in Oracle Applications ?

Ans: GWYUID stands for Gateway User ID and password, usually APPLSYSPUB/PUB.
GWYUID is defined in the dbc (database connect descriptor) file. It is used by thick clients to connect to the database.

Q42. What is TWO_TASK in the Oracle Database ?

Ans: TWO_TASK holds the TNS alias you are going to use to connect to the database. Assume you have a database client with a TNS alias PROD defined, to connect to database PROD on machine teachmeoracle.com listening on port 1521. The usual way to connect is sqlplus username/passwd@PROD. If you don't want to append @PROD, you can set TWO_TASK=PROD and then simply use sqlplus username/passwd; SQL*Plus will then connect to the TNS alias defined by the value of TWO_TASK, i.e. PROD.

Q43. What is difference between GUEST_USER_PWD (GUEST/ORACLE) & GWYUID ?

Ans: GUEST_USER_PWD (GUEST/ORACLE) is used by the JDBC thin client, whereas GWYUID is used by thick clients, e.g. forms connections.

Q44. How to check number of forms users at any time ?

Ans: Forms Connections initiate f60webmx connections so you can use
ps -ef | grep f60webmx | wc -l
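The same pipeline run against a mocked ps listing (three hypothetical f60webmx processes, i.e. three forms users):

```shell
# Mocked ps output: one f60webmx process per connected forms user
# (illustrative process IDs).
ps_output='applmgr 111 1 0 f60webmx
applmgr 222 1 0 f60webmx
applmgr 333 1 0 f60webmx'

# Count the processes; tr strips the padding some wc implementations add.
users=$(printf '%s\n' "$ps_output" | grep f60webmx | wc -l | tr -d ' ')
echo "forms users: $users"
```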

Q45. What is 0 & Y in FNDCPASS, FNDLOAD or WFLOAD ?

Ans: 0 & Y are flags for FND executables like FNDCPASS & FNDLOAD, where
0 is the request ID (request ID 0 is assigned to requests that are not submitted via the Submit Concurrent Request form), and
'Y' indicates the method of invocation, i.e. the program is invoked directly from the command line, not from the Submit Request form.

Q46. In a Multi Node Installation, How will you find which node is running what Services ?

Ans: You can query the table FND_NODES and check the columns SUPPORT_CP (concurrent manager), SUPPORT_FORMS (forms server), SUPPORT_WEB (web server), SUPPORT_ADMIN (admin server), and SUPPORT_DB (database tier).
You can also check the same in the context file (the XML file under APPL_TOP/admin).

Q47. If your system has more than one Jinitiator, how will the system know, which one to pick. ?

Ans: When a client makes a forms connection in Oracle Applications, the forms client session uses the configuration file defined by the environment variable FORMS60_WEB_CONFIG_FILE, also called the appsweb config file. These days this file has the format appsweb_$CONTEXT.cfg. The Jinitiator version defined by the parameter jinit_ver_name in this file is the one that will be used.

Q48. While applying Apps patch using adpatch, if you want to hide the apps password, how will that be possible ?

Ans: using flags=hidepw

Q49. What is importance of IMAP Server in Java Notification Mailer ?

Ans: IMAP stands for Internet Message Access Protocol, and the Java Notification Mailer requires an IMAP server for inbound processing of notification mails.

Q50. What is difference between Socket & Servlet Mode in Apps Forms ?

Ans: When forms run in SOCKET mode, there are dedicated connections between the client machine and the forms server (started by adfrmctl.sh). When forms run in servlet mode, the forms requests are fulfilled by JServ in Apache. In that case there is an additional JVM for forms requests, and you don't start forms via adfrmctl.sh.

Q51. a. How to find OUI version ?
b. How to find Database version ?
c. How to find Oracle Workflow Cartridge Release Version ?
d. How to find opatch Version ?
e. How to find Version of Apps 11i ?
f. How to find the Discoverer Version installed with Apps ?
g. How to find Workflow Version embedded in Apps 11i ?
h. How to find version of JDK Installed on Apps ?

Ans: OUI
OUI stands for Oracle Universal Installer. In order to find the installer version, execute ./runInstaller -help from the OUI location, which is $ORACLE_HOME/oui/bin.
You will get output like
Oracle Universal Installer, Version Production Copyright (C) 1999, 2005, Oracle. All rights reserved.
The OUI version appears in that first line.

Database
select * from v$version;

Oracle Workflow
Log in to the database as the owf_mgr user and issue
select wf_core.translate('WF_VERSION') from dual;

opatch
$ORACLE_HOME/OPatch/opatch version

Apps 11i
select RELEASE_NAME from fnd_product_groups;

Discoverer
Discoverer installed with Apps in the same ORACLE_HOME as the 8.0.6 techstack is usually 3i or 4i. To find the version, log in to the application tier, go to $ORACLE_HOME/discwb4/bin and execute
strings dis4ws | grep -i 'discoverer version'

Workflow embedded in 11i
Run the appropriate Workflow version query as the apps user; the output indicates the embedded Workflow version (e.g. 2.6.0).
You can also use the script wfver.sql in $FND_TOP/sql to find the version of Workflow in Apps.

JDK in Apps
There might be multiple JDKs installed on the operating system, e.g. JDK 1.3.1, 1.4.2 or 1.5. In order to find which version of the JDK your Apps is using, open your context file ($SID_$HOSTNAME.xml under $APPL_TOP/admin) and look for the variable JDK_TOP (oa_var="s_jdktop"). Whatever value is assigned to that parameter, go to that directory, cd into bin, and execute ./java -version. For example, if the entry is /usr/jdk, then cd /usr/jdk/bin and run ./java -version; you will see output like

java version "1.4.2_10"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_10-b03)
Java HotSpot(TM) Client VM (build 1.4.2_10-b03, mixed mode)
Which means you are using JDK 1.4.2 in Oracle Applications 11i.

Q52. If by mistake you/someone deleted FNDLIBR can this executable be restored if Yes, How & if no, what will you do ?

Ans: Yes, you can restore the FNDLIBR executable:
- Run adadmin on the concurrent manager node.
- Select option 2, the Maintain Applications Files menu.
- Then select 1, Relink Applications programs.
- When prompted "Enter list of products to link ('all' for all products) [all]", enter FND.
- When prompted "Generate specific executables for each selected product [No] ?", answer YES.
- From the list of executables, select FNDLIBR.
This will create a new FNDLIBR executable.

Q53. What is .pls files which you see with apps ?

Ans: A .pls file is a PL/SQL file. In an apps patch these files contain code to create a package spec, a package body, or both.

Q54. What are .ldt & .lct files which you see in apps patch or with FNDLOAD ?

Ans: .ldt & .lct stand for Loader data file & Loader configuration file, used frequently for migrating customizations, profile options, configuration data, etc. across instances.

Q55. What are .odf file in apps patch ?

Ans: odf stands for Object Description File, used to create tables & other database objects.

Q56. Where to find Form Server log files ?

Ans: The Form Server startup log file's default location is $OAD_TOP/admin/log/$CONTEXT_NAME/f60svrm.txt.
The Forms Runtime Diagnostics default location is $ORACLE_HOME/forms60/log/$CONTEXT_NAME.

Q57. How to convert pll to pld file or pld file to pll ?

Ans: pll -> pld:
f60gen module=MSCOSCW3.pll module_type=library userid=apps/ module_access=file output_file=MSCOSCW1.pld script=yes

pld -> pll:
f60gen module=MSCOSCW3.pld userid=apps/ module_type=library module_access=file output_file=MSCOSCW1.pll parse=y batch=yes compile_all=special

Q58. Is APPS_MRC Schema exists for MRC in 11.5.10 and higher ?

Ans: No, the apps_mrc schema is dropped with an 11.5.10 upgrade or a new 11.5.10 install. It is replaced by a more integrated architecture.

Q59.If APPS_MRC schema is not used in 11.5.10 and higher then How MRC is working ?

Ans: For products like Payables and Receivables which use MRC, if MRC is enabled then each currency-related transaction table in the base schema now has an associated MRC subtable.

Q60. When you apply C driver patch does it require database to be Up & Why ?

Ans: Yes, the database & the DB listener should be up when you apply any driver patch in apps. Even if the driver is not updating any database object, a connection is required to validate the apps & other schemas and to upload patch history information into database tables.

Q61. Can C driver in apps patch create Invalid Object in database ?

Ans: No, the C driver only copies files on the file system. Database objects might be invalidated during the D driver, when those objects are created/dropped/modified.

Q62. Why does a worker fail in an Oracle Apps patch, and what are a few scenarios in which it failed for you ?

Ans: This question sounds simple but is asked quite often in Apps DBA interviews. An apps patch worker can fail when it doesn't find the expected data, objects, or files, or anything else the driver is trying to update/edit/modify. Possible symptoms: underlying tables/objects are invalid, a prerequisite patch is missing, login information is incorrect, or there is an inconsistency in seeded data.

Q63. What is dev60cgi & f60cgi ?

Ans: cgi stands for Common Gateway Interface, and these are script aliases in Oracle Apps used to access the forms server. The forms server is usually accessed directly via http://hostname:port/dev60cgi/f60cgi

Q64. What is difference between mod_osso & mod_ose in Oracle HTTP Server ?

Ans: mod_osso is the Oracle Single Sign-On module, whereas mod_ose is the module for the Oracle Servlet Engine.
mod_osso is a module in Oracle's HTTP Server that serves as the conduit between the Oracle Apache server & the Single Sign-On server, whereas mod_ose is another module in Oracle's HTTP Server that serves as the conduit between Oracle Apache & the Oracle Servlet Engine.

Q65. What is difference between COMPILE_ALL=SPECIAL and COMPILE=ALL while compiling Forms ?

Ans: Both the options will compile all the PL/SQL in the resultant .FMX, .PLX, or .MMX file but COMPILE_ALL=YES also changes the cached version in the source .FMB, .PLL, or .MMB file. This confuses version control and build tools (CVS, Subversion, make, scons); they believe you've made significant changes to the source. COMPILE_ALL=SPECIAL does not do this.

Q66. What is ps -ef or ps command in Unix ? for work ex < 1 yr

Ans: ps is a Unix/Linux utility that reports the status of processes. It is used mainly to find out whether a service/process is running or not.

Q67. What is GSM in Oracle application E-Business Suite ?

Ans: GSM stands for Generic Service Management framework. Oracle E-Business Suite consists of various components like Forms, Reports, Web Server, Workflow, and Concurrent Manager.
Earlier, each service was started on its own, but managing these services is hard given that they can be on various machines distributed across the network. So Generic Service Management is an extension of concurrent processing which manages all your services and provides fault tolerance (if some service is down, the ICM, through FNDSM & other processes, will try to restart it, even on a remote server). With GSM all services are centrally managed via this framework.

Q68. What is FNDSM ?

Ans: FNDSM is an executable & the core component of GSM (the Generic Service Management framework discussed above). You start FNDSM services via the APPS listener on all nodes in the application tier in E-Business Suite.

Q69. What is iAS Patch ?

Ans: iAS patches are patches released to fix bugs associated with IAS_ORACLE_HOME (the web server component). Usually these are shipped as shell scripts, and you apply an iAS patch by executing the shell script. Note that by default ORACLE_HOME points to the 8.0.6 ORACLE_HOME; if you are applying an iAS patch, export ORACLE_HOME to point to iAS. You can do the same by executing the environment file under $IAS_ORACLE_HOME.

Q70. If we run autoconfig which files will get effected ?

Ans: In order to check the list of files changed during AutoConfig, you can run the adchkcfg utility, which generates an HTML report. This report lists all files & profile options that are going to change when you run AutoConfig.

Q71. What is difference between .xml file & AutoConfig ?

Ans: AutoConfig is the utility that configures your Oracle Applications environment. The .xml file is the repository of all configuration, from which AutoConfig picks up values and populates the related files.

Q72. What is .lgi files ?

Ans: .lgi files are created during patching, along with .log files. .lgi files are informative log files containing information related to the patch; you can check them to see what activities the patch has done.

Q73. How will you skip worker during patch ?

Ans: If adctrl shows six options, then the seventh, hidden option is to skip a worker. (If seven options are visible, then the eighth option is the one to skip a worker, depending on the AD version.)

Q74. Which two tables created at start of Apps Patch & drops at end of Patch ?

Ans: FND_INSTALLED_PROCESSES & AD_DEFERRED_JOBS are the tables created at the start of an apps patch (mainly the d or unified driver) and dropped at the end.

Q75. How to compile an Oracle Reports file ?

Ans: The utility adrepgen is used to compile reports. The syntax is given below:

adrepgen userid=apps\ source=$PRODUCT_TOP\srw\filename.rdf dest=$PRODUCT_TOP\srw\filename.rdf stype=rdffile dtype=rdffile logfile=x.log overwrite=yes batch=yes dunit=character

Q76. What is difference between AD_BUGS & AD_APPLIED_PATCHES ?

Ans: AD_BUGS holds information about the various Oracle Applications bugs whose fixes have been applied (i.e. patched) in the Oracle Applications installation.
AD_APPLIED_PATCHES holds information about the "distinct" Oracle Applications patches that have been applied. If two patches happen to have the same name but differ in content (e.g. "merged" patches), they are considered distinct and this table will therefore hold two records.

Q77. What exactly happens when you put an Oracle Apps instance in maintenance mode ?

Ans: Maintenance mode provides a clear separation between normal runtime operation of Oracle Applications and system downtime for maintenance. Enabling the maintenance mode feature
a) shuts down the Workflow Business Events System and
b) sets up function security so that no Oracle Applications functions are available to users.

Used only during AutoPatch sessions, maintenance mode ensures optimal performance and reduces downtime when applying a patch. (Source Metalink Note: 233044.1)

Q78. What is profile options, What are various type of profile options ?


Q79. If users complaining Oracle Applications 11i system is running slow , what all things you will check at broad level ?


Q80. Why appsutil directory under Database ORACLE_HOME used for ?

Ans: All the template files, startup scripts & XML files used by AutoConfig on the database tier are maintained here.

Q81. How to create User in Oracle Applications 11i ? Can you delete a User ?

Ans: A new user can be created using the Security --> Define --> User menu. No, a user cannot be deleted, but it can be end-dated.

Q82. What is Single Sign On ? ( If you are using portal 3.0.9 or 10G )?

Ans: As the name says, a Single Sign-On server is a set of services (software) which enables you to log in to an application once and then log in to partner applications without needing to log in again. Let's assume I have configured a single SSO server for Portal, E-Business Suite, Collaboration Suite and some other applications; now if I log in to any one of them and then wish to log in to the other applications, I should be able to do so without supplying passwords again.

Q83. How to configure portal with 11i ? ( If you are using portal 3.0.9 or 10G )?

Q84. What is content of dbc file & why its important ?

Ans: The DBC file is quite important: whenever Java or any other program (like forms) wants to connect to the database, it uses the dbc file. A typical dbc file contains entries such as the GWYUID discussed in Q41.

Q85. There are lot of dbc file under $FND_SECURE, How its determined that which dbc file to use from $FND_SECURE ?

Ans: This value is determined from the profile option "Applications Database ID".
The name can be picked from s_dbc_file_name in the XML file.

Q86. Info Regarding Inventory.

Ans: What is oraInventory ?
oraInventory is the repository (directory) which records Oracle software products & their ORACLE_HOME locations on a machine. This inventory is nowadays in XML format, called the XML inventory, whereas in the past it used to be in binary format, the binary inventory.
There are basically two kinds of inventory: the Global Inventory (also called the Central Inventory) and the Local Inventory (also called the Oracle Home Inventory).

Global Inventory ?
The Global Inventory holds information about the Oracle products on a machine. These products can be various Oracle components like the database, Oracle Application Server, Collaboration Suite, SOA Suite, Forms & Reports, or Discoverer server. The Global Inventory location is determined by the file oraInst.loc in /etc (on Linux) or /var/opt/oracle (on Solaris). If you want to see the list of Oracle products on a machine, check the file inventory.xml under ContentsXML in oraInventory. (Please note: if you have multiple global inventories on the machine, check all oraInventory directories.)

You will see entry like
<HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"/>

Local Inventory ?
The inventory inside each Oracle home is called the local inventory, or oracle_home inventory. This inventory holds information for that oracle_home only.

Can I have multiple Global Inventory on a machine ?
- A quite common question is whether you can have multiple global inventories, and the answer is YES, you can, but if you are upgrading or applying a patch, point the inventory pointer oraInst.loc to the respective location. If you are using a single global inventory and you wish to uninstall any software, remove it from the global inventory as well.

What to do if my Global Inventory is corrupted ?
- No need to worry if your global inventory is corrupted; you can recreate the global inventory on the machine using the Universal Installer and attach an already installed Oracle home with the option

./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc
ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name"

Do I need to worry about oraInventory during oracle Apps 11i cloning ?
- No. Rapid Clone will update both the global & local inventories with the required information; you don't have to worry about the inventory during Oracle Apps 11i cloning.

Q87. What is the database holding capacity of Oracle ?

- The database holding capacity of Oracle 9i is 512 PB (petabytes).
- The database holding capacity of Oracle 10g is 8 trillion terabytes.

Q88. How to find Operation System Version (Unix/Linux) ?

For Solaris, use the command
uname -a
You will see output like
SunOS servername 5.8 Generic_117350-23 sun4u sparc SUNW,Sun-Fire-V240
For Red Hat Linux, use the command
cat /etc/*release*
You will see output like
Red Hat Enterprise Linux AS release 3 (Taroon Update 6)

This means you are on Solaris 5.8 or Linux AS 3 respectively.
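A portable way to grab the same information in a script is uname; this sketch works on both Solaris and Linux:

```shell
# uname -s gives the OS name (SunOS, Linux, ...), uname -r the release.
os_name=$(uname -s)
os_release=$(uname -r)
echo "$os_name $os_release"
```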

Q89. How to find if your Operating System is 32 bit or 64 Bit ?

For Solaris, use the command
isainfo -v
If you see output like
32-bit sparc applications
then your OS is 32-bit only, but if you see output like

64-bit sparcv9 applications
32-bit sparc applications
then your OS is 64-bit & can support both 32- and 64-bit applications.
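On Linux (and most modern Unixes) the equivalent one-liner is getconf LONG_BIT, which prints the native word size directly; on Solaris, isainfo -b gives the same answer:

```shell
# LONG_BIT is 32 on a 32-bit OS and 64 on a 64-bit OS.
bits=$(getconf LONG_BIT)
echo "OS word size: ${bits}-bit"
```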

Q90. Can I run 64 bit application on 32 bit Operating system ?

You can run a 32-bit application (like Oracle Application Server; all Oracle application servers are 32-bit) on both 32- and 64-bit operating systems, but a 64-bit application, like a 64-bit database, can run only on a 64-bit operating system.

Q91. How to find if your database is 32 bit or 64 bit(Useful in applying Patches) ?

Execute "file $ORACLE_HOME/bin/oracle"; you should see output like

/u01/db/bin/oracle: ELF 64-bit MSB executable SPARCV9 Version 1
which means you are on 64-bit Oracle.
If your Oracle is 32-bit, you should see output like
oracle: ELF 32-bit MSB executable SPARC Version 1
Now you know which bitness of patch to download.
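The same `file` technique works on any binary; as a stand-in for $ORACLE_HOME/bin/oracle, this sketch inspects /bin/sh, and on most systems the 32-bit/64-bit marker appears in the output:

```shell
# `file` prints the executable format, which includes the word size.
info=$(file /bin/sh)
echo "$info"
```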

Webinar: Release 12 Accounting Setup Manager 101

Solution Beacon - Tue, 2007-07-31 17:23
This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. The webinar will be presented on August 8th at 1:30pm CDT, and registration is available here.
Title: Release 12 Accounting Setup Manager 101
Abstract: Learn the basics of the Release 12 Accounting Setup Manager from this exciting presentation.

AJAX and the Refresh Button

Adam Winer - Tue, 2007-07-31 17:12
JSF relies heavily on <input type="hidden" name="javax.faces.ViewState"> for its lifecycle. This hidden field carries all UI state for the page. Whether that's client-side state (with the entire page Base64-encoded) or server-side state (with a simple token), it's important that the right field be delivered with any JSF postback for the page to function correctly.

As a result, AJAX implementations in JSF typically need not only to submit this value to the server when posting an AJAX request, but also to update it as necessary when a request completes. While looking at an ADF Rich Client bug recently, I was rudely reminded that the Refresh/Reload button doesn't always behave as you might imagine, and thought it was worth delving into the details. (I'll be talking about JSF, but the behavior is generic to DHTML and applies outside of JSF, and the code samples are just raw HTML.)

Take the following page:

<form name="foo">
<div id="valCtr">
<input name="val" type="hidden" value="1">
</div>

<a href="#" onclick="document.forms.foo.val.value =
    parseInt(document.forms.foo.val.value) + 1; return false;">
Increment</a>

<a href="#" onclick="alert(document.forms.foo.val.value); return false;">
Display</a>
</form>

Now, in your favorite browser, try the following:
  • Click Display (you'll see 1)
  • Click Increment a couple of times
  • Click Display again (you'll see 3)
  • Click or select Refresh/Reload (but not Shift-Refresh)
  • And Display once more. You'll still see 3 (unless you're using Safari)
  • Now, Shift-Refresh, and Display. Now we're back at 1.
What have we seen? Refresh has re-queried the HTML for the page, but instead of resetting the value of our hidden input field back to 1, it's stored the updated value of 3! This isn't a bug - so says this Bugzilla bug (and all 61 duplicates!) Microsoft would agree with Mozilla here. Of the big 3, only Safari doesn't overwrite form fields on reload. (The caching behavior is in fact very handy for Back/Forward, and is exploited by the Really Simple History framework.) Shift-refresh fully reloads the page, and drops the form element cache.

This can lead to big problems in a JSF application. Take the following scenario:
  • A page initially renders with state token 1 in a hidden field
  • An AJAX request updates the state token to 2
  • The user hits Reload, and the new HTML contains state token 3
  • But the browser ignores it, and overwrites it with state token 2!
Now we've got a page in state "3", but a token claiming it's really in state "2". This is bad. As always, let's see what alternatives we've got, and whether they suffer from the same problem.

First, how about creating the DOM on the fly?

<script type="text/javascript">
function incrementViaDOM()
{
  var value = parseInt(document.forms.foo.val.value) + 1;
  var newField = document.createElement("input");
  newField.name = "val";
  newField.type = "hidden";
  newField.value = "" + value;
  var oldField = document.forms.foo.val;
  var parent = oldField.parentNode;
  parent.replaceChild(newField, oldField);
}
</script>

<a href="#" onclick="incrementViaDOM(); return false;">
Increment with replaceChild</a>

In Firefox, this still doesn't help. The hidden field value is still cached. (And this example doesn't work at all in IE... see this entry for why and how to fix it.)

OK, how about innerHTML?

<script type="text/javascript">
function increment()
{
  var value = parseInt(document.forms.foo.val.value) + 1;
  var valCtr = document.getElementById("valCtr");
  valCtr.innerHTML = "<input name=\"val\" " +
      "type=\"hidden\" value=\"" + value + "\">";
}
</script>

<a href="#" onclick="increment(); return false;">
Increment with innerHTML</a>

Now this works... changes made by innerHTML are not remembered by Firefox or Internet Explorer (or Safari). So, if you don't want a hidden field cached, update with innerHTML and browsers won't hassle you.

Alternatively, you could look at tackling this issue during the refresh, by forcibly resetting the value from Javascript:

<input name="val" type="hidden" value="1">
<script type="text/javascript">
// Inline attempt to force the field back to the server-rendered value:
document.forms.foo.val.value = "1";
</script>
This works in Firefox, but does not in IE. Whatever code overwrites the value of the hidden field runs after this inline script, but before the page's onload handler. So, if you want to tackle this problem while refreshing, you'll have to do it in onload.
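A sketch of the onload approach follows. The form and field names come from the sample page above; the tiny `document` stand-in is purely hypothetical, so the logic can be demonstrated outside a browser:

```javascript
// Hypothetical stand-in for the browser DOM: the field holds the stale
// cached value "3" that the browser restored after Reload.
var document = { forms: { foo: { val: { value: "3" } } } };

// In a real page this function would be registered as window.onload,
// i.e. it runs after the browser's form-cache restore has already
// overwritten the field.
function onloadResetHiddenField(serverRenderedValue) {
  document.forms.foo.val.value = serverRenderedValue;
}

onloadResetHiddenField("1");
console.log(document.forms.foo.val.value); // prints "1"
```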

To (finally) come back to JSF, there's a better way to solve this problem, at least for the state token: use a StateManager that doesn't generate new tokens for AJAX requests, but instead reuses the old token. New tokens are important when you're rendering a new page, but are a waste of space when you're just working on a single page. And, as a nice side-effect, this makes the Reload issue moot. (MyFaces Trinidad 1.0.2 will include this token-reuse optimization, though it's always used innerHTML for updating the state token, so it hasn't been hit by this bug.)

So, to summarize, if you have a programmatically modified hidden input field that needs to be Reload-proof, two techniques look good:
  • Use innerHTML to update the field
  • Use onload to set the hidden input field value

If you've got any other tricks, I'd be happy to know.

Know Thyself

Mary Ann Davidson - Tue, 2007-07-31 07:47

I am trying something a bit different with this blog entry, which is to tag team it with a colleague. The idea for this entry came out of an email exchange inside Oracle about identity and privacy. It was an interesting enough discussion that a third colleague thought it would be worthy of a blog entry. From my perspective, it's a chance to work with a great colleague and vent my spleen while doing so. What's better than that?

I should do a brief introduction here to my co-blogger, Roger Sullivan, who is VP of Business Development for Oracle Identity Management. I could provide Roger's more formal CV but suffice it to say that on a "street creds" level, Roger is a very well respected identity management kahuna (e.g., he is also President of the Liberty Alliance).  On a more personal level, a few years ago, Oracle acquired a company of which Roger was then-CEO. I remember punching the air with multiple "woo hoos" when I heard the news, because I knew Roger and thought that the fact we were "acquiring" him as part of the deal was at least as important as the technology (which itself was pretty cool).
The original email exchange Roger and I (and others) had was around federated identity. There is probably a fancier definition of federated identity with which Roger can enlighten us, but as a practical matter you can think of it as being able to have an "identity" that is recognized in multiple arenas that have some business relationship among them. For example, suppose I go to a web site for FooHotels. FooHotels has a business relationship with a car rental agency, RentHotCars (they offer joint car rental/hotel packages, and you get hotel points for renting cars and vice versa). From a consumer standpoint, I'd like to be able to go to a single web site, book a hotel and rent a car (and get all those points!) without having to separately identify myself to each entity. In particular, if I jump to RentHotCars.com from FooHotels.com, I'd like to be able to do that without typing in yet another cryptic username and password. I can do that through the magic of federated identity.
One of the beauties of federated identity is that the standards work in this area was driven by real businesses (not merely technologists in search of a problem to solve "elegantly"). As such, a lot of work to-date has been pragmatic, implementable, and focused on a clear problem that it would be useful to solve. And Roger has been a leader in developing those standards. Woo hoo!
<Roger> Mary Ann, you are too kind.  I think it is safe to say that all of the Identity Management acquisitions, including the recent announcement of Bharosa joining the family, have been great moves for Oracle and our customers.  The synergies created by the union of these "best of breed" companies have been truly remarkable and have catapulted Oracle into a leadership position in the Identity Management market.
Mary Ann has provided a great description of the essentials of Federation.  What else would I say to a former U.S. Navy officer with a loaded weapon?  (Read on...)
Federated Identity Management provides benefits to each of the participants.  For the Service Provider, federation extends their reach into their markets.  For the Identity Provider, it provides a singularly secure mechanism of storing the essential identity elements.  And for the individual user, it provides a safe and secure means of disclosing only the required information necessary for the task at hand.
<Mary Ann> To me, the business need (and utility as a user) of being able to have a federated identity is a different issue from having (or needing to have) a single identity recognized by lots of disparate entities that don't have a business relationship. This lack of business need (dare I say Need to Know?), coupled with many people's natural desire for privacy, means to me that we don't actually need a single identity, and in fact, I'd really resist it. I do resist it.
For example, I have multiple "identity cards" in my wallet (I have lots of them in fact, judging by the thickness of my wallet). These identities are issued by different organizations, each of which has a different authority to issue them (and which don't, in general, need to know of my other personas). For example, I have a concealed weapons permit in the state of Idaho (which required me, among other things, to take a gun safety class and pass an FBI check). I also have an American Express card. American Express neither knows nor cares that I am licensed to carry concealed in Idaho, and the state of Idaho presumably neither knows nor cares what my credit rating is, as long as I pay my taxes on time. This is one reason why I have two cards: it reflects the fact that these identities have no relationship with each other, and the "authorities" that recognize "Mary Ann's identity" are unrelated to each other. They don't even need to know I am the same person.
<Roger> Mary Ann has hit upon the fundamental issue that concerns many who are using web-based services.
Consider the following example of how we interact in our daily (non-electronic) lives.  Let's say that I'm driving home from work and receive a call from my wife asking me to stop at the store for some milk.  I don't have much cash, so I swing by my bank's ATM machine.  This simple example involves at least four distinct identity elements.  [Pause now while the "Jeopardy" music plays and you try to identify at least four forms of identity.]
The cell call will probably have my wife's name on the screen as I answer because her number is passed through the cell network and I have that number associated with her name in my contact list.  There is reasonably strong assurance that it will be her on the other end, unless she has loaned the phone to someone else.
At the ATM machine, I'll need to insert my ATM card (something I have) and enter a PIN code (something I know) in order to provide multi-factor authentication that it's really me trying to access my bank account.  This is good for the bank and especially good for the security of my funds.  I like providing these extra steps.
At the convenience store, the only "authentication" required is that of the $5.00 bill that I present for the milk.  For some transactions, there is an implicit right to anonymity.  Neither the store clerk nor I have any need to know the other's identity for this transaction.  If, on the other hand, I looked considerably younger than I do and was buying beer, the clerk would have an obligation for me to provide proof of legal age on some sort of recognized identity card.  I know that this is expected in the various day-to-day transactions in which I participate.
What we want to achieve in the electronic world is the same kind of humanly intuitive interaction with web-based services: the minimal set of appropriate credentials - and only those - provided for the transaction at hand.
[PS: The 4th form was my driver's license with beaucoup identity information on it.  If I want the privilege of driving, then I will agree to carry and provide this authentication mechanism to the local constable on reasonable demand.]
<Mary Ann> Unfortunately, more and more of our representative identities are linked through the magic (or curse) of Social Security Numbers (SSNs). As such, we have the "collapsing" of identity, sort of the metaphysical equivalent of having one persona instead of many, which has made it a lot easier for identity thieves to flourish. It used to be the case that losing a single credit card did not lead to an identity theft nightmare. You just called the company and cancelled the card (I only did this once, when I "lost" a credit card that later turned up).  Now, if you lose (or misplace) one "identity" that is based on your SSN, or someone else gets access to it, they can often "link" to all your other personas. You didn't lose one card, you lost your entire identity.
The entire identity theft explosion has been fueled by too darn many people using Social Security Number for purposes for which it was never intended: as a single unique identifier (that enables linkages between non-related personas, many times for no purpose that could possibly be construed as beneficial to a user).  Putting it differently, if my magazine subscription record is leaked, so people know that I take Surfer's Journal, I really would not care. (So what? It's a great publication and no harm to me if people know I read it.)  However, if my "subscription record" were to include my Social Security Number (for no good reason, and there is no good reason to use it), I am in deep kimchee because that SSN allows a bad guy access to many other Mary Ann personas. Bad guy can "become" me.
If a Social Security Number is ubiquitous (and it is) but not secret (it has been used as people's health care number and driver's license number, conveniently located on your health care card or driver's license in 12 point font, bold) and the key to "who you are," it is the perfect storm of one "key" (which is not secret and cannot be changed) unlocking your entire identity, so that it can be stolen. The only worse thing would be substituting your fingerprint as your "unique identifier," which is also not secret and can't be changed.
I confess to being a professional crank on this issue. Some years ago, I registered for a class at UC Berkeley Extension (which had and presumably still has excellent classes). They wanted my Social Security Number in order for me to register. Here's how the dialogue went:
UCB: "We'd like your Social Security Number."
Me: "Why, is this a taxable event? Because if it isn't, you shouldn't be using SSN."
UCB: "We want to uniquely identify you."
Me: "That's your problem. I know perfectly well who I am. Besides, if you want a unique identifier, use a database sequence number. Trust me; I know about these things."
I won the argument through sheer stubbornness. (In the interests of full disclosure, I am told that UC Berkeley no longer uses SSN as student identifiers, so good on them. Perhaps in my own small, cranky way, I helped them see the light.)
I should note that Oracle has technology (transparent data encryption, woo hoo!) that can encrypt columns (or entire tablespaces, as of Oracle Database 11g) of sensitive information like Social Security Numbers. It's a nice weapon in the security person's arsenal. A great one, in fact. But the entire world is not Oracle, and encryption, as wonderful as it is, doesn't help you if the data is "inappropriately accessed" in decrypted form.  Eventually, data - any data - has to be decrypted to use it.
<Roger> As Mary Ann points out, Oracle provides sophisticated security mechanisms in order to protect the information that our solutions manage.  In addition, we strongly believe that these solutions must be based on industry proven standards.  This provides our customers with the investment protection and flexibility that web-based applications require in a connected world.  There is a very long list of boring acronyms that detail the identity-related standards that our products support.  Moreover, there are equally long lists of standards for virtually every product area in the company.  A fundamental criterion for the successful integration of acquired companies is their adherence to relevant standards.  This enables Oracle to quickly achieve a return on its acquisition investment.

<Mary Ann> Clearly, there are times when an organization needs to collect sensitive information (like your employer, who needs your SSN for tax reporting purposes). Many organizations take appropriate care to secure that information. Those aren't the folks that make my blood boil.
You do have to ask, and I have, why is really sensitive data being collected by so many people who do not need it and who don't secure it? Especially when it is users and not "collectors" who pay the costs of recovery from identity theft? It's like someone demanding to borrow my car (something of value to me), getting drunk (not acting responsibly with my property), wrecking my car, and leaving me to pay the repair bill. It's a rotten deal. Any reasonable person would say, "You shouldn't borrow my car; get your own. If you are going to borrow it and you wreck it in a drunken stupor, you pay for it. The entire megillah."
<Roger> Identity standards have been well established for the means of collecting and "connecting the bits" so that the necessary identity information can be protected and securely shared for legitimate purposes.  The danger is that, once the bits are connected for illicit purposes, they can spread at light speed through web infrastructure.  The identity industry is beginning to establish policy standards that go beyond simple interoperability of the technology.  Rules can be created so that there are secure electronic safeguards in place to ensure that only legitimate connections can be made, thereby significantly reducing the opportunity for identity theft.  Additionally, one can assure that the mechanisms for managing the data and performing risk analysis can be thoroughly audited.  Establishing, implementing, deploying, and auditing these standards-based solutions will add enormous confidence to those who are using web-based services.  And that will yield comparable market growth.
<Mary Ann> I like what I am hearing from you, Roger! Federated identity is a wonderful thing when there is a business purpose for linking my identities, as there so often is. As someone who is forever being asked to create another username and password for yet another website, I wish we had more of this where it makes sense. However, federated identity is most assuredly not "having a single persona that relies on an immutable non-secret for security." Only God and my mom - well, maybe only God - knows or should know all the different facets of my persona, or anybody's persona. I for one would like to keep it that way.
<Roger> So, Mary Ann, does your spleen feel sufficiently vented?
<Mary Ann> On this topic, yes! Thanks for adding color, detail and expertise to the discussion, and for being part of the solution, not the problem.

For more information:

Liberty Alliance:


Surfer's Journal, a great publication:


Roger Sullivan's blog:


Oracle's acquisition of Bharosa:


Useful Metalink NOTE ID's

Madan Mohan - Tue, 2007-07-31 02:18
RDBMS and E-Business Suite Installation and Configuration
118218.1 11i: Installing a Digital Certificate on both the Server and Client
252217.1 Requirements for Installing Oracle 9iR2 on RHEL3
146469.1 Installation & Configuration of Oracle Login server & Portal3i
146468.1 Installation of Oracle9i Application Server(9iAS)
152775.1 XML gateway installation
165700.1 Multiple Jserv configuration
207159.1 Documentation of 9iAS
210514.1 Express Server WebIV Note numbers
170931.1 Notes on Motif troubleshooting
177610.1 Oracle Forms in Applications FAQ
258021.1 How to monitor the progress of a materialized view refresh (MVIEW)
330250.1 Tips & Tricks To Make Apache Work With Jserv
139684.1 Oracle Applications Current Patchset Comparison Utility - patchsets.sh
236469.1 Using Distributed AD in Applications Release 11.5.
96630.1 Cash Management Overview
233428.1 Sharing the Application Tier File System in Oracle Applications 11i
243880.1 Shared APPL_TOP FAQ
241370.1 Concurrent Manager Setup and Configuration Requirements in an 11i RAC Environment
209721.1 How to Change the Port Number on one Machine, When we Use Multiple Collaboration Suite Tiers
177377.1 How to change passwords in Portal (Database and lightweight user passwords)
304748.1 Internal: E-Business Suite 11i with Database FAQ
166213.1 SPFILE internals ** INTERNAL ONLY **
216208.1 Oracle9i Application Server (9iAS) with Oracle E-Business Suite Release

11i Troubleshooting
186981.1 Oracle Application Server with Oracle E-Business Suite Release 11i

Physical Standby
Note:180031.1 Creating a Data Guard physical standby
Note:214071.1 Creating a Data Guard physical standby with Data Guard Manager
Note:232649.1 Configuring gap resolution
Note:232240.1 Performing a switchover
Note:227196.1 Performing a failover
Note:187242.1 Applying Patchsets with Physical Standby in Place

Logical Standby
Note:186150.1 Creating a logical standby
Note:214071.1 Creating a logical standby with Data Guard Manager
Note:232240.1 Performing a switchover
Note:227196.1 Performing a failover
Note:233261.1 Tuning Log Apply Services
Note:215020.1 Troubleshooting Logical Standbys
Note:210989.1 Applying Patchsets with Logical Standby in Place
Note:233519.1 Known Issues with Logical Standby

Dataguard General Information
Note:205637.1 Configuring Transparent Application Failover with Data Guard
Note:233509.1 Data Guard Frequently Asked Questions
Note:225633.1 Using SSH with 9i Data Guard
Note:233425.1 Top Data Guard Bugs
Note:219344.1 Usage, Benefits and Limitations of Standby RedoLogs
Note:201669.1 Setup and maintenance of Data Guard Broker using DGMGRL
Note:203326.1 Data Guard 9i Log Transportation on RAC
Note:239100.1 Data Guard Protection Modes Explained

Dataguard Configuration Best Practices
Note:240874.1 Primary Site and Network Configuration Best Practices
Note:240875.1 9i Media Recovery Best Practices

Frequently Asked Questions
68993.1 Concurrent Managers on NT
1013526.102 Changing and Resetting Release 11 Applications Passwords
74924.1 ADI (Applications Desktop Integrator) Installation
114226.1 How to Set Up Apache and JSERV w/ Oracle XSQL, JSP, and Developer
146469.1 Installing and Configuring Oracle Login Server and Oracle Portal 3i with Oracle Applications 11i
146468.1 Installing Oracle9i Application Server with Oracle Applications 11i
62463.1 Detailed Guide on How the Intelligent Agent Works
104452.1 Troubleshooting (Concurrent Manager Unix specific)
122662.1 How to change the hostname or domainname of your portal
231286.1 Configuring the Oracle Workflow 2.6 Java-based Notification Mailer with Oracle Applications 11i
230688.1 Basic ApacheJServ Troubleshooting with IsItWorking.class
204015.1 Export/Import Process for Oracle Applications Release 11i Database Instances Using Oracle8i EE
158818.1 Migrating the Workflow Mailer to the APPLMGR Account
185431.1 Troubleshooting Oracle Applications Manager OAM 2.0 for 11i
177089.1 OAM11i Standalone Mode Setup and Configuration
172174.1 WF 2.6: Oracle Workflow Notification Mailer Architecture in Release 11i
166021.1 Oracle Applications Manager 11i - Pre-requisite Patches
166115.1 Oracle Applications Manager 11i integrated with Oracle Applications 11i
165041.1 Generic Service Management Functionality
204090.1 Generic Service Management Configuration using Applications Context Files
139863.1 Configuring and Troubleshooting the Self Service Framework with Oracle Applications (latest version)
187735.1 Workflow FAQ - All Versions
166830.1 Setting up Real Application Cluster (RAC) environment on Linux - Single node
158868.1 Step by Step, Oracle 9iAS Installation Process
123243.1 Scheduling Web Reports Via Oracle Reports Server CGI
165195.1 Using AutoConfig to Manage System Configurations with Oracle Applications 11i

RMAN and Backup & Restore
60545.1 How to Extract Controlfiles, Datafiles, and Archived Logs from RMAN Backupsets

10gR2 Setup Installation, ASM,CRS, RAC , Troubleshooting
471165.1 Additional steps to install 10gR2 RAC on IBM zSeries Based Linux (SLES10)
414163.1 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA Failures)
467753.1 Veritas clusterware 5.0 not recognized by Oracle due to the fact that Veritas
467176.1 RAC: Installing RDBMS Oracle Home Hangs The Oui
466975.1 Step to remove node from Cluster when the node crashes due to OS or H/w
330358.1 CRS 10g R2 Diagnostic Collection Guide
401132.1 How to install Oracle Clusterware with shared storage on block devices
392207.1 CSSD Startup fails with NSerr (12532,12560) transport:(502,0,0) during Install
333166.1 CSSD Startup Fails with NSerr (12546,12560) transport:(516,0,0) During install
330929.1 CRS Stack Fails to Start After Reboot ORA-29702 CRS-0184
463255.1 Enable trace for gsd issues on 10gR2 RAC
338924.1 CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs
462616.1 Reconfiguring the CSS disktimeout of 10gR2 Clusterware for Proper LUN Failover
461884.1 How To Disable Fatal Mode Oprocd On HP-UX Itanium 10gR2
404474.1 Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4
329530.1 Using Redhat Global File System (GFS) as shared storage for RAC
458324.1 Increased 'Log File Sync' waits in 10gR2
341214.1 How To clean up after a Failed (or successful) Oracle Clusterware Installation
454638.1 srvctl command failed - An unexpected exception has been detected in native
276434.1 Modifying the VIP or VIP Hostname of a 10g Oracle Clusterware Node
383123.1 PRKP-1001 CRS-215 srvctl Can not Start 2nd Instance
358620.1 How To Recreate Voting And OCR Disk In 10gR1/2 RAC
200346.1 RAC: Frequently Asked Questions
220970.1 RAC: Frequently Asked Questions
269320.1 Removing a Node from a 10g RAC Cluster
430266.1 How to install 10gR2 and 9iR2 on the same node with different UDLM requirement
283684.1 How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
391790.1 Unable To Connect To Cluster Manager Ora-29701
294430.1 CSS Timeout Computation in RAC 10g (10g Release 1 and 10g Release 2)
414177.1 Executing root.sh errors with "Failed To Upg Oracle Cluster Registry Config
390483.1 DRM - Dynamic Resource management
390880.1 OCR Corruption after Adding/Removing voting disk to a cluster when CRS stack
309542.1 How to start/stop the 10g CRS ClusterWare
387205.1 The DB Cannot Start With CRS And ASM
270512.1 Adding a Node to a 10g RAC Cluster
395156.1 Startup (mount) of 2nd RAC instance fails with ORA-00600 [kccsbck_first]
363777.1 How to Completely Remove a Service so that its Service_id Can Be Reused
391112.1 Database Resource Manager Spins Lmon To 100% Of Cpu
365530.1 Permissions not set correctly after 10gR2 installation
357808.1 Diagnosability for CRS / EVM / RACG
284752.1 10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
332180.1 ASMCMD - ASM command line utility
371434.1 Using Openfiler iSCSI with an Oracle database
338047.1 cluvfy ERROR: Unable to retrieve database release version
183408.1 Raw Devices and Cluster Filesystems With Real Application Clusters
367564.1 Server Reboots When Rolling Upgrading CRS(10gr1 -> 10gr2)
358545.1 Root.sh is failing with CORE dumps, during CRS installation
343092.1 How to setup Linux md devices for CRS and ASM
295871.1 How to verify if CRS install is Valid
331934.1 RAC Single Instance (ASM) startup fails with ORA-27300/ORA-27301/ORA-27302
341974.1 10gR2 RAC Scheduling and Process Prioritization
341971.1 10gR2 RAC GES Statistics
341969.1 10gR2 RAC OS Best Practices
341965.1 10gR2 RAC Reference
341963.1 10gR2 RAC Best Practices
313540.1 Manually running cvu to verify stages during a CRS/RAC installation
331168.1 Oracle Clusterware consolidated logging in 10gR2
339710.1 Abnormal Program Termination When Installing 10gR2 on RHAS 4.0
337937.1 Step By Step - 10gR2 RAC with ASM install on Linux(x86) - Demo
280209.1 10g RAC Performance Best Practices

216664.1 FAQ: Cloning Oracle Applications Release 11i
230672.1 Cloning Oracle Applications Release 11i with Rapid Clone
135792.1 Cloning Oracle Applications Release 11i

139516.1 Discoverer 4i with Oracle Applications 11i
257798.1 Discoverer 10g (9.0.4) with Oracle Applications 11i
139516.1 Installation of Discoverer 4i

165195.1 Using AutoConfig to Manage System Configurations with Oracle Applications 11i
218089.1 Autoconfig FAQ

Real Application Clusters(RAC)
181503.1 Real Application Clusters Whitepapers (OTN)
280209.1 10g RAC Performance Best Practices (INTERNAL ONLY)
302806.1 IBM General Parallel File System (GPFS) and Oracle RAC on AIX 5L and IBM eServer pSeries
270512.1 Adding a Node to a 10g RAC Cluster
137288.1 Manual Database Creation in Oracle9i (Single Instance and RAC)
292776.1 10g RAC Lessons Learned
280216.1 10g RAC Reference (INTERNAL ONLY)
269320.1 Removing a Node from a 10g RAC Cluster
226561.1 9iRAC Tuning Best Practices (INTERNAL ONLY)
220178.1 Installing and setting up ocfs on Linux - Basic Guide
208375.1 How To Convert A Single Instance Database To RAC In A Cluster File System Configuration
255359.1 Automatic Storage Management (ASM) and Oracle Cluster File System (OCFS) in Oracle10g
341963.1 10gR2 RAC Best Practices (INTERNAL ONLY)
273015.1 Migrating to RAC using Data Guard
329530.1 Using Redhat Global File System (GFS) as shared storage for RAC
270901.1 How to Dynamically Add a New Node to an Existing 9.2.0 RAC Cluster
203326.1 Data Guard 9i Log Transportation on RAC
169539.1 A Short Description of HA Options Available in 9i
160120.1 Oracle Real Application Clusters on Sun Cluster v3
226569.1 9iRAC Most Common Performance Problem Areas (INTERNAL ONLY)
251578.1 Step-By-Step Upgrade of Oracle Cluster File System (OCFS v1) on Linux
247135.1 How to Implement Load Balancing With RAC Configured System Using JDBC
139436.1 Understanding 9i Real Application Clusters Cache Fusion
285358.1 Creating a Logical Standby from a RAC Primary Using a Hot Backup
222288.1 9i Rel 2 RAC Running on IBM’s General Parallel File System
226567.1 9iRAC Related Init.ora Parameters (INTERNAL ONLY)
210889.1 RAC Installation with a NetApp Filer in Red Hat Linux Environment
341965.1 10gR2 RAC Reference (INTERNAL ONLY)
341969.1 10gR2 RAC OS Best Practices (INTERNAL ONLY)
226566.1 9iRAC Related Latches (INTERNAL ONLY)
220970.1 RAC: Frequently Asked Questions
268202.1 Dynamic node addition in a Linux cluster
332257.1 Using Oracle Clusterware with Vendor Clusterware FAQ
245079.1 Steps to clone a 11i RAC environment to a non-RAC environment
235158.1 How To Enable/Disable Archive Log Mode on Oracle9i Real Application Clusters
210022.1 How To Add A New Instance To The Existing Two Nodes RAC Database Manually
317516.1 Adding and Deleting a Cluster Node on 10gR2 / Linux
271685.1 How to Run Autoconfig for RAC Environment on Apps Tier Only
278816.1 How to Setup Parallel Concurrent Processing using Shared APPL_TOP for RAC Environment
334459.1 How to change hostname in RAC environment
250378.1 Migrating Applications 11i to use Oracle9i RAC (Real Application Clusters).
295998.1 How to solve corruptions on OCFS file system
345081.1 How to Rename a RAC Database in a 10g Real Application Clusters Environment
312051.1 How To Remove Ocfs From Linux Box.

228516.1 How to copy (export/import) Portal database schemas of IAS 9.0.2 to another database
330391.1 How to copy (export/import) Portal database schemas of IAS 10.1.2 to another database

125767.1 Upgrading Developer 6i with Oracle Applications 11i
216550.1 RDBMS upgrade to 9.2.0
161779.1 Upgrade of the HTTP Server
212005.1 Upgrade Oracle Applications to 11.5.8
139863.1 Self-Service Framework Upgrade
112867.1 Express Server & OFA upgrade
124606.1 Jinitiator upgrade
130091.1 Upgrading Oracle Applications 11i to use JDK 1.3
144069.1 Upgrading to Workflow 2.6 with Oracle Applications 11i
159657.1 Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i

Note 230627.1 - 9i Export/Import Process for Oracle Applications Release 11i
Note 331221.1 - 10g Export/Import Process for Oracle Applications Release 11i
Note 362205.1 - 10g Release 2 Export/Import Process for Oracle Applications Release 11i
Note 277650.1 - How to Use Export and Import when Transferring Data Across Platforms or Acros...
Note 243304.1 - 10g: Transportable Tablespaces Across Different Platforms
Note 341733.1 - Export/Import DataPump Parameters INCLUDE and EXCLUDE - How to Load and Unload..


Database Upgrade from 8i to 9i

Madan Mohan - Tue, 2007-07-31 01:53
Pre-Upgrade Tasks

1) Make sure that you have 9iR2 software (9.2.0 CD Dump) before starting the installation.

You first need to download the software to the server. The details of the software to download are as follows:

i. For Solaris:
Disk1: Part# A99349-01
Disk2: Part# A99350-01
Disk3: Part# A99351-01
ii. For Linux:
Disk1: Part# A99339-01
Disk2: Part# A99340-01
Disk3: Part# A99341-01

2) Log in as the orxxxxxx user and make sure that you have at least 4 GB of free space in the /xxxxxx/oracle mount point. If you do not have enough free space, send a TAR to the SA team to add more space to the /xxxxxx/oracle mount point.

3) Make sure that the oraInst.loc file exists in the locations given below and has 777 permissions. If the file is missing or does not have 777 permissions, send a TAR to the SA team to create the file or grant 777 on it.
a. On Solaris: /var/opt/oracle/oraInst.loc
b. On Linux: /etc/oraInst.loc
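The existence and permission checks in step 3 can be scripted so they are not missed. A minimal sketch (the function name and output messages are my own; the Solaris/Linux paths are the ones listed above):

```shell
#!/bin/sh
# Sketch: verify that an oraInst.loc file exists and is world-writable (777).
# Pass the platform path as an argument: /var/opt/oracle/oraInst.loc on
# Solaris, /etc/oraInst.loc on Linux.
check_orainst() {
    f="$1"
    if [ ! -f "$f" ]; then
        echo "MISSING: $f"
        return 1
    fi
    # ls -l is more portable than GNU stat for a permission check
    perms=$(ls -l "$f" | cut -c2-10)
    if [ "$perms" = "rwxrwxrwx" ]; then
        echo "OK: $f has 777"
        return 0
    fi
    echo "WRONG PERMS: $f ($perms)"
    return 1
}
```

If either check fails, raise the TAR with the SA team as described above rather than fixing permissions yourself.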

4) Look for the oraInventory on your instance. Per EBSO standards, the oraInventory location is "/xxxxxx/oracle/product/oraInventory"; if it is not in this location, you need to search for it. Once you know the oraInventory location and have privileges to update the oraInst.loc file, make sure that the contents of oraInst.loc point to that oraInventory location. If the file already existed with other values, update it accordingly.

5) Make sure that you have /igold symbolic link pointing to /xxxxxx on the file system.
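The symlink check in step 5 can be done with readlink rather than by inspection. A minimal sketch, assuming a POSIX shell with readlink available (the function name and messages are illustrative):

```shell
#!/bin/sh
# Sketch: confirm a symbolic link resolves to the expected target.
# In this runbook, /igold should point at the /xxxxxx base mount point.
check_link() {
    link="$1"; expected="$2"
    target=$(readlink "$link") || { echo "NOT A SYMLINK: $link"; return 1; }
    if [ "$target" = "$expected" ]; then
        echo "OK: $link -> $target"
    else
        echo "MISMATCH: $link -> $target (expected $expected)"
        return 1
    fi
}
# typical use:  check_link /igold /xxxxxx
```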

Preparing the System for Upgrade

1) Declare a blackout on all components of the APPS instance and shut down all middle-tier services of the instance. Keep the database server and DB listener up.

2) Check the free space in the SYSTEM tablespace; if less than 1 GB is free, add another datafile to create free space. Similarly, make sure that you have a minimum of 750 MB free in the RBS tablespace. An example of how to add a datafile is given below.

SQL> select tablespace_name, round(sum(bytes)/1024/1024) free_space from dba_free_space where tablespace_name in ('SYSTEM','RBS') group by tablespace_name;

SQL> alter tablespace SYSTEM add datafile '/xxxxxx/oradata02/data02/systemxx.dbf' size 1000m autoextend on next 25m maxsize 1800m;

3) Make sure that the value of maxextents for all rollback segments is unlimited. Run the query given below to check this; a value of 32765 means unlimited. If the value is less than 32765, alter the rollback segment to make maxextents unlimited.

SQL> select segment_name, max_extents,status from dba_rollback_segs;
SQL> alter rollback segment rbsXX storage (maxextents unlimited);

4) Set the values of the following parameters in the initXXXXXX.ora file as given below.
db_domain =
aq_tm_processes = 0
job_queue_processes = 0
log_archive_start = false
_system_trig_enabled = FALSE

5) Search for any "event=" entries set in the initXXXXXX.ora or ifilecbo.ora files. If you find any event, comment out that entry. Also, some of the initialization parameters given above may not be present in the initXXXXXX.ora file; in that case, check for the parameter in the ifilecbo.ora file.
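Hunting for event= entries by eye is error-prone; a small sed wrapper can comment them out in one pass. A sketch (the function name and the .bak backup convention are my own, not part of the original procedure):

```shell
#!/bin/sh
# Sketch: comment out any uncommented "event=" lines in an init.ora-style
# file, keeping a backup copy first.
comment_events() {
    f="$1"
    cp "$f" "$f.bak"
    # prefix '# ' on lines that set an event; already-commented lines
    # start with '#' and therefore do not match the anchored pattern
    sed 's/^ *event *=/# &/' "$f.bak" > "$f"
    grep -n '^# *event' "$f" || echo "no event entries found in $f"
}
# typical use:  comment_events initXXXXXX.ora ; comment_events ifilecbo.ora
```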

6) Alter the database to NOARCHIVELOG mode and shut it down. Also shut down the DB listener.

$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> archive log list
SQL> shutdown immediate
SQL> exit
$ lsnrctl stop XXXXXX

7) Create new ORACLE_HOME and set environment for that.

a. Create a new directory "920" under the product directory for the new HOME
$ cd /xxxxxx/oracle/product
$ mkdir 920

b. Copy the environment file from old ORACLE_HOME (817) to new ORACLE_HOME (920)
$ cd /xxxxxx/oracle/product/920
$ cp ../817/.env .

c. Edit the environment file in the new ORACLE_HOME and change all references from "817" to "920" by performing a global replace in "vi".
$ vi .env
:1,$ s/817/920/g

d. Edit the ".profile" file and, on Linux only, the ".bash_profile" file, changing all references from "817" to "920"

$ cd $HOME
$ vi .profile
:1,$ s/817/920/g
$ vi .bash_profile
:1,$ s/817/920/g
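The interactive vi substitutions above can also be scripted with sed, which is handy when several files need the same 817-to-920 replace. A sketch (the function and backup naming are my own):

```shell
#!/bin/sh
# Sketch: non-interactive equivalent of the vi command :1,$ s/817/920/g,
# applied to each file given as an argument, keeping a .bak copy.
replace_home_refs() {
    for f in "$@"; do
        cp "$f" "$f.bak"
        sed 's/817/920/g' "$f.bak" > "$f"
    done
}
# typical use:  replace_home_refs .env "$HOME/.profile" "$HOME/.bash_profile"
```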

e. Log out of the orxxxxxx user, log in again, and make sure that the following environment variables point to the new ORACLE_HOME, i.e. "/xxxxxx/oracle/product/920".
$ echo $ORACLE_HOME
$ echo $TNS_ADMIN
$ echo $LD_LIBRARY_PATH

8) At this stage we are ready to perform the upgrade. Review all the steps in the first two sections and make sure that you have followed all of them, then proceed to the next section and perform the upgrade.

Performing 920 Upgrade

1) Log in as the orxxxxxx user and make sure that environment variables such as ORACLE_HOME, TNS_ADMIN and LD_LIBRARY_PATH point to the new 920 ORACLE_HOME. Also make sure that the /igold symbolic link points to /xxxxxx and that the oraInst.loc file has been correctly updated. All of these checks were discussed in the previous sections of this document.

2) Start a Reflection X session and connect to the orxxxxxx user using the F-Secure SSH client. Run xclock to verify that you can run a GUI. If you are performing the upgrade from a remote location (for example, from India), do not run the Installer from your own PC; use VNC Viewer to connect to a desktop in the US and run the upgrade from that PC. For Your Place customers, the normal SSH session is enabled to run GUI installers and there is no performance hit from any location, so you can run the installer without opening Reflection or VNC Viewer.

3) Start "runInstaller" from Disk1 of the CD set downloaded earlier and choose the following options during installation:

a. File Locations:
ORACLE_HOME path=/igold/oracle/product/920
Do not give the actual location of the 920 ORACLE_HOME (/xxxxxx/oracle/product/920) here. We deliberately use "igold", as it helps in the patching of cloned instances.

b. Select a Product:
Oracle 9i Database

c. Type of Installation:
Enterprise Edition

d. Database Configuration
Software Only

4) Download the PatchSet (Patch# 3095277), unzip it in a temporary directory, and run the cpio command given below for your platform. It will create a new Disk1 directory.
$ unzip 9204_solaris_release.cpio.z
$ cpio -idmv < 9204_solaris_release.cpio
$ cpio -idmv < 9204_lnx32_release.cpio
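Only one of the two cpio archives applies to a given server. A small helper can pick the right archive name from uname -s (the helper name is illustrative; the archive names are the ones quoted above):

```shell
#!/bin/sh
# Sketch: map the OS name to the 9.2.0.4 patchset archive for that platform.
patchset_archive() {
    case "$1" in
        SunOS) echo 9204_solaris_release.cpio ;;
        Linux) echo 9204_lnx32_release.cpio ;;
        *)     echo "unsupported platform: $1" >&2; return 1 ;;
    esac
}
# typical use:  cpio -idmv < "$(patchset_archive "$(uname -s)")"
```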

5) Start "runInstaller" from the /xxxxxx/oracle/product/oui directory to install the PatchSet files and choose the following options:

a. File Locations:
Source Path: /Disk1/stage/products.jar
ORACLE_HOME path=/igold/oracle/product/920

b. Choose the OUI installation and complete it. Exit the installer; do not choose to continue with "Next Install". You must restart the installer.

c. Start the installer again with the same File Location values as given above and choose the PatchSet installation.

6) Relink the Oracle executables to remove igold dependencies, and verify that the libraries referenced after relinking come from the correct ORACLE_HOME location and not from the igold link.
$ cd $ORACLE_HOME/bin
$ ./relink all
$ ldd lsnrctl
$ ldd sqlplus
$ ldd oracle
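Eyeballing ldd output across three binaries is easy to get wrong; a small check function can flag any library still resolved through the igold link. A sketch (the function name and messages are my own):

```shell
#!/bin/sh
# Sketch: scan ldd-style output for any library path containing "igold";
# after relinking, everything should resolve through the real ORACLE_HOME.
check_ldd_output() {
    if echo "$1" | grep -q igold; then
        echo "STALE: still linked via /igold"
        return 1
    fi
    echo "OK: no igold references"
}
# typical use:  check_ldd_output "$(ldd $ORACLE_HOME/bin/oracle)"
```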

7) Copy the initXXXXXX.ora and ifilecbo.ora files from the old ORACLE_HOME to the new 920 ORACLE_HOME. Do not change the value of any initialization parameter; that will be done in later steps.
$ cd $ORACLE_HOME/dbs
$ cp ../../817/dbs/initXXXXXX.ora .
$ cp ../../817/dbs/ifilecbo.ora .

8) Perform the DB upgrade from 8.1.7 to 9.2.0 using the manual scripts as given below. These upgrade scripts may take 4-5 hours to complete. The "startup migrate" statement will throw an "ORA-32004: obsolete and/or deprecated parameter(s) specified" error; ignore it at this point, as it will be taken care of in later steps. Once the upgrade scripts complete, query the "dba_registry" view to make sure that the Oracle components have been upgraded.
$ sqlplus "/ as sysdba"
SQL> startup migrate
SQL> spool db_upgrade.log
SQL> @?/rdbms/admin/u0801070.sql
SQL> spool off
SQL> spool dbcmp_upgrade.log
SQL> @?/rdbms/admin/cmpdbmig.sql
SQL> spool off
SQL> SELECT comp_name, status, substr(version,1,10) as version from dba_registry;
Oracle9i Catalog Views VALID
Oracle9i Packages and Types VALID
JServer JAVA Virtual Machine VALID
Oracle9i Java Packages VALID
Oracle XDK for Java UPGRADED
Oracle interMedia Text LOADED
Oracle9i Real Application Clusters INVALID
Oracle interMedia LOADED
Oracle Spatial LOADED

9) Shut down the instance, start it up again, and execute the utl_recomp package to recompile invalid objects using parallel workers. This may take 3-4 hours.
a. SQL> shutdown immediate
b. SQL> startup
c. SQL> @?/rdbms/admin/utlrcmp.sql
d. SQL> exec utl_recomp.recomp_parallel(6)

10) Upgrade Oracle Text, Oracle interMedia and Oracle Spatial as given in the following steps. Run the "catpatch.sql" script to complete the installation of the patchset, then query the dba_registry view to verify the upgrade.
a. Upgrade Oracle Spatial
SQL> spool spatial_upgrade.log
SQL> connect / as sysdba
SQL> @?/md/admin/mdprivs.sql
SQL> connect mdsys/mdsys
SQL> @?/md/admin/c81Xu9X.sql
SQL> spool off
b. Upgrade Oracle interMedia
SQL> spool intermedia_upgrade.log
SQL> connect / as sysdba
SQL> @?/ord/im/admin/imdbma.sql
SQL> @?/ord/admin/u0801070.sql
SQL> @?/ord/im/admin/u0801070.sql
SQL> connect ordsys/ordsys
SQL> @?/ord/im/admin/imchk.sql
SQL> spool off
c. Upgrade Oracle Text
SQL> spool text_upgrade.log
SQL> connect / as sysdba
SQL> @?/ctx/admin/s0900010.sql
SQL> connect ctxsys/ctxsys
SQL> @?/ctx/admin/u0900010.sql
SQL> connect / as sysdba
SQL> @?/ctx/admin/s0902000.sql
SQL> connect ctxsys/ctxsys
SQL> @?/ctx/admin/u0902000.sql
SQL> spool off
d. Complete Patchset
SQL> shutdown immediate
SQL> startup migrate
SQL> spool patch.log
SQL> @?/rdbms/admin/catpatch.sql
SQL> spool off
e. Compile invalids
SQL> shutdown immediate
SQL> startup
SQL> exec utl_recomp.recomp_parallel(4)
f. Verify the upgrade
SQL> SELECT comp_name, status, substr(version,1,10) as version from dba_registry;
Oracle9i Catalog Views VALID
Oracle9i Packages and Types VALID
JServer JAVA Virtual Machine VALID
Oracle9i Java Packages VALID
Oracle XDK for Java VALID
Oracle interMedia Text VALID
Oracle9i Real Application Clusters INVALID
Oracle interMedia VALID
Oracle Spatial VALID

11) Copy the tnsnames.ora and listener.ora files from the TNS_ADMIN directory of the old ORACLE_HOME (817) to the TNS_ADMIN directory of the new ORACLE_HOME (920) and change references from the 817 ORACLE_HOME to the 920 ORACLE_HOME. Then start the DB listener and make sure that "tnsping" works.

12) Update initXXXXXX.ora and ifilecbo.ora files as given below. These values have been taken from Note# 216205.1.

a. Update the following parameters in initXXXXXX.ora file:
Set the value of "aq_tm_processes" back to the original value it had before starting the upgrade
Set the value of "job_queue_processes" back to the original value it had before starting the upgrade
Set "compatible = 9.2.0"
Set "_system_trig_enabled = TRUE"
Set "log_archive_start = true"

b. Add the following new parameters in initXXXXXX.ora file:
nls_length_semantics = BYTE
pga_aggregate_target = 1000M
workarea_size_policy = AUTO

c. Comment out the following parameters in initXXXXXX.ora, as these are obsolete in the 9iR2 database:

d. Update the following parameter in the ifilecbo.ora file:
Set "optimizer_features_enable = 9.2.0"

e. Comment out the following parameters in ifilecbo.ora, as these are obsolete in the 9iR2 database:
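The parameter edits in step 12 can be scripted with a small set-or-append helper, which avoids missing a parameter that lives in ifilecbo.ora rather than initXXXXXX.ora. A sketch (the helper and the example values below are illustrative; use the values dictated by Note# 216205.1):

```shell
#!/bin/sh
# Sketch: set a "name = value" line in an init.ora-style file, replacing an
# existing entry for that name or appending one if it is absent.
set_param() {
    f="$1"; name="$2"; value="$3"
    if grep -q "^${name}[ =]" "$f"; then
        sed "s|^${name}[ ]*=.*|${name} = ${value}|" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    else
        echo "${name} = ${value}" >> "$f"
    fi
}
# typical use:
#   set_param initXXXXXX.ora compatible 9.2.0
#   set_param initXXXXXX.ora pga_aggregate_target 1000M
```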

13) Restart the database and alter it to ARCHIVELOG mode. Make sure that you do not get the "ORA-32004: obsolete and/or deprecated parameter(s) specified" error while starting the database. If you do, check the offending parameter name in the alertXXXXXX.log file and comment it out in the init.ora.
$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> shutdown immediate
SQL> startup

14) Execute the post-install scripts:
SQL> conn / as sysdba
SQL> @?/javavm/install/jvmsec3.sql
SQL> @?/javavm/install/jvmsec5.sql
SQL> conn apps/
SQL> @/patch/115/sql/adgrn9i.sql apps

15) Apply APPS patches required for 9iR2 database.

a. Apply FND Patch# 2838093
b. If adpatch hangs while executing "adinvset.pls" for more than 10 minutes, you may be hitting Bug# 2651057. Apply Patch# 2651057 to fix the issue.
c. Apply AD Patch# 2361208

16) Complete the upgrade and start APPS services

a. Run "Re-create grants and synonyms" from adadmin
b. Run "Compile APPS schema" from adadmin
c. Start all APPS services
d. Expire the blackout
e. Perform health checks and release the instance to the customer
f. Rename old ORACLE_HOME as 817_old
$ mv /xxxxxx/oracle/product/817 /xxxxxx/oracle/product/817_old

