Feed aggregator

11g whitepapers @ OTN

Pankaj Chandiramani - Mon, 2007-07-23 02:09

I have seen a couple of good technical whitepapers at OTN; below is the link to them.
These cover the complete series: new features, security, HA, etc.

http://www.oracle.com/technology/products/database/oracle11g/index.html

Categories: DBA Blogs

How we solved an ORA-02049 (Timeout: Distributed Transaction Waiting for Lock) error on our Apps customized module

Fadi Hasweh - Sun, 2007-07-22 05:00
We have a customized Point of Sale module that is integrated with our standard Apps CRM and financial modules. We faced a serious issue on this customized module: when users tried to make a sale through it, they received an ORA-02049 (Timeout: Distributed Transaction Waiting for Lock) and had to keep retrying until the sale went through. The error used to show up on a daily basis, at peak hours only, but we could not tell what was causing it. A simple search for the error on Metalink returned note 1018919.102, which advises increasing the distributed_lock_timeout value in the INIT.ORA file. The default value was 60 seconds, so we increased it to 300 seconds, even though we don't have any distributed transactions on the system; all the transactions were local. We restarted, and the problem became worse, because end users now had to wait five minutes (300 seconds) before receiving the ORA-02049 error, so we had to set the value back to 60 seconds.
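
For reference, the change suggested by the note amounts to a one-line edit in the INIT.ORA file (a sketch, using the value described above; the parameter file location varies by installation):

distributed_lock_timeout = 300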

After that we tried to trace the error using different event trace levels, but with no luck; we were not able to determine what was causing it.

We thought it was a database bug, and Oracle advised us to upgrade the database from 9.2.0.5 to 9.2.0.7. We did, but the issue was still there.

After a month of investigation, tracing and snapshots of the system while the problem was happening, we managed to find out what was causing it: a bitmap index built on the table we were inserting into.

When one end user made a sale and, for some reason, did not commit the transaction, and another end user tried to make a sale at the same time, the second user hit a lock on the table and the error popped up.

We solved the issue by dropping the bitmap index and creating a normal B-tree index, even though the column has only three distinct values.
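
A minimal sketch of the fix, with hypothetical object names (the underlying cause is that a single bitmap index entry covers many rows, so uncommitted DML from one session can block inserts from another, while a B-tree entry locks only the rows actually touched):

DROP INDEX xxpos_sale_status_bmx;
CREATE INDEX xxpos_sale_status_idx ON xxpos_sale (status);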

Anydata and anytype in 9i

Adrian Billington - Sun, 2007-07-22 03:00
An introduction to generic types in Oracle 9i. October 2002 (updated July 2007)
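
As a taste of what the article covers, here is a minimal sketch (illustrative, not taken from the article) of the SYS.ANYDATA generic type:

DECLARE
  l_data SYS.ANYDATA;
BEGIN
  l_data := SYS.ANYDATA.ConvertNumber(42);   -- wrap a NUMBER in the generic container
  DBMS_OUTPUT.PUT_LINE(l_data.GetTypeName);  -- reports the wrapped type: SYS.NUMBER
END;
/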

Encapsulating bulk pl/sql exceptions

Adrian Billington - Sun, 2007-07-22 03:00
Using object features to encapsulate FORALL .. SAVE EXCEPTIONS error-handling. July 2007
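
For context, here is a minimal sketch (not from the article) of the raw FORALL .. SAVE EXCEPTIONS pattern that the article encapsulates; target_table is a hypothetical single-column table:

DECLARE
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
  TYPE t_id_tab IS TABLE OF NUMBER;
  l_ids t_id_tab := t_id_tab(1, 2, 3);
BEGIN
  FORALL i IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
    INSERT INTO target_table (id) VALUES (l_ids(i));
EXCEPTION
  WHEN bulk_errors THEN
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX
        || ' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
    END LOOP;
END;
/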

The "Golden Rule" of People Management

Peter Khos - Sat, 2007-07-21 13:53
I was going to blog about the current spat between Jonathan Lewis and Don Burleson on the OTN forums over LGWR and LGWR_IO_SLAVES, but then decided that it wasn't worth the web space it occupies. So I will blog about a non-technical subject: managing people. People management is a complex subject and there are numerous books published by folks smarter than I am on the subject. Here's my take...

Problems with CVS removes?

Rob Baillie - Fri, 2007-07-20 10:58
Accidentally removed a file in CVS that you want to keep?

Sounds like a stupid question, because when you know the answer to this problem it just seems blindingly obvious.
But what if you've issued a 'remove' against a file in CVS and, before you commit the remove, you decide that
you made a mistake and still want to keep it?

I.e. you issued (for example):

> cvs remove -f sheep.php

But have not yet issued:

> cvs commit -m removed sheep.php

I've heard workarounds such as:
  • Edit the "entries" file in the relevant CVS directory in your workspace, removing the reference to the file.
    This makes the file appear unknown to CVS.
  • Perform an update in that directory. This gets the repository version of the file and updates the "entries"
    file correctly


All you actually need to do is re-add the file:

> cvs add sheep.php

U sheep.php
cvs server: sheep.php, version 1.6, resurrected

When used in this way, the add command will issue an update against the file and retrieve the repository version of the file.

A word of warning though: if you had uncommitted changes in that file before you issued the remove, CVS isn't going to recover them for you...

How about if you've removed a file, but your version of the file is out of date and so you can't commit it?

So you've issued the following:

> cvs remove -f sheep.txt

cvs server: scheduling 'sheep.txt' for removal
cvs server: use 'cvs commit' to remove this file permanently

> cvs commit -m removed sheep.txt

cvs server: Up-to-date check failed for 'sheep.txt'
cvs server: correct above errors first!

You can't issue an update because you get the following:

> cvs update sheep.txt

cvs server: conflict: removed sheep.txt was modified by second party
C sheep.txt

Again, add the file.

> cvs add sheep.txt

U sheep.txt
cvs server: sheep.txt, version 1.6, resurrected

This gets you the most up-to-date version from the repository, which you can then check for changes (you wouldn't want to just remove it now that someone's added new content, would you?)

Once you've convinced yourself that it's still a good idea to delete it, just issue the remove and commit.

Simple when you know how!

EnterpriseDB, Oracle-compatible open-source RDBMS

Peter Khos - Thu, 2007-07-19 23:35
I recently received an e-mail from Ziff-Davis about a free webinar taking a look at EnterpriseDB, which is billed as the "World's Leading Oracle-compatible" database. It went on to describe how FTD (the flower company) saved 83% in Oracle licensing costs and got a 400% improvement in performance by moving their reporting database from Oracle to EnterpriseDB Advanced Server. That got my interest, but...

Introducing the Solution Beacon Release 12 Webinar Series

Solution Beacon - Thu, 2007-07-19 17:42
We're pleased to announce our first Release 12 Webinar Series! These live webinars range from 30 to 60 minutes and are intended to get people informed about the new Oracle E-Business Suite Release 12. Topics include a Technical Introduction for Newcomers, Security Recommendations, and reviews of the new features in the apps modules, so whether your interest is functional or technical you're...

Top 10 areas to address before taking Oracle BPEL Process Manager 10.1.3 to a Production Implementation

Arvind Jain - Mon, 2007-07-16 17:33
Here is a summary of the article I am writing on how to adopt BPEL PM in a production environment. It is based on the 10.1.3 release of BPEL PM. If you need specific details, please drop me a line.

1) Version Management (Design Time)
When choosing a source-safe or version control system for business processes, the considerations are quite different from those for Java or C++ code components. The average user or designer of business processes is not code-savvy and cannot be expected to manually merge code (*.bpel or *.wsdl files, for example). BPEL PM lacks design-time version management of business processes in the JDeveloper IDE. What is needed is a process-based development and merge environment, with visibility into the process repository; the requirements differ from those of a component-based repository. Consider using a good BPMN / BPA tool.

2) Version Governance (Run Time)
While BPEL PM can maintain version numbers for deployed BPEL processes, it is still left to an administrator or a business analyst to decide which process version will be active at a given point in time and what the naming and versioning standards will be. Since every deployed BPEL process is a service, it becomes critical to apply SOA governance methodology to control the various deployed and running BPEL processes.

3) SOAP over JMS (over SSL)
Most big corporations and multinationals have policies that restrict HTTP traffic from the outside world into the intranet. Moreover, they have policies that require the use of a messaging layer or an ESB as a service intermediary for persistence, logging, security and compliance reasons. BPEL PM support for bidirectional SSL-enabled JMS communication is not out of the box; it needs to be tried and tested within your organization, and workarounds need to be implemented.

4) Authentication & Authorization - Integration with LDAP / Active Directory
SOA governance requires authentication and authorization for service access based on a corporate repository and the roles defined within it. This is also critical for BPEL Human Workflow (HWF). Make sure to do a small pilot / POC of integration with your corporate identity repository before taking BPEL PM to production.

5) Integration with Rules Engine
BPEL should be used for orchestration only, not for coding programming logic or hard-coded rules; hence it is important to have a separate rules engine. Many rules engines on the market support Java facts, and the BPEL engine, being a Java engine, should integrate with these out of the box. But some rules engines have the limitation that they can take only XML facts, in which case there is an overhead in marshalling from Java to XML and back. So make sure you have sorted out rules engine integration prior to a BPEL production implementation.

6) Implementation Architecture
BPEL processes and projects can and will expand to occupy all available resources within your organization. These business processes are highly visible within a company and have strict SLAs to meet. Make sure you have a proven and tested reference architecture for clustering, high availability and disaster recovery. There have to be a provisioning process, a deployment process and a process lifecycle governance methodology in place before you can fire up all engines in a production environment.

7) Throughput Consideration
BPEL PM is by nature an interpretation engine, and hence there is a performance hit when running long-running processes and doing heavy transformations. Plan on doing some stress and load testing on the servers running your business processes to get a ballpark estimate of end-to-end processing time and of how much load the BPEL server can take. Specifically, do capacity planning based on the results of these pilot load and stress tests.

8) Design of BPEL Process (Payload Size, BPEL Variables - Pass by Reference or by Value)
Designing a business process is more of an art than a science, and the same holds for BPEL business processes. It is important to understand what the best practices in your organization will be in terms of payload size, the length of the various processes, and how they are orchestrated. Are you passing around big XML payloads that could be avoided by changing the process and passing by reference instead? Would that also make your process more efficient and create true business services from these processes? Give these questions some consideration and spend some whiteboarding sessions with business and IT analysts before creating a BPEL process.

9) Schema Consideration - Canonical Data Model & Minimal Transformations
The most cost- and resource-intensive step in any integration or process orchestration scenario is transformation. In an orchestration engine like BPEL PM especially, the XML payload goes through multiple massaging steps. If you can design your process flow with a minimum of such steps, the end-to-end performance of the business process will improve. It is also a best practice to have an enterprise-wide canonical data model derived from an industry-wide standard such as OASIS, RosettaNet or ebXML.

10) Administration - Multiple BPEL Console, Central HWF Server, Customized UI or use existing UI?
BPEL PM is easy to use and makes process orchestration almost a zero-coding activity. It is also pretty easy to learn; hence, once the floodgates are opened, an enterprise suddenly has a bunch of deployed BPEL processes and a bunch of BPEL developers.

It is very critical for an enterprise-scale deployment to figure out ways to provision BPEL server instances and to give relevant developers selective access to the BPEL Console. The BPEL Console is a powerful tool, and there is not much role-based security functionality in it beyond the concept of domains. The options are to create your own administration / console UI using the BPEL Server APIs, or to have a BPEL administrator take care of such requests.
BPEL PM comes with a built-in Human Workflow (HWF) server, but in an enterprise you might want a centralized HWF server. All of these points need to be given thought before putting BPEL PM into a production environment.

10 @ Sun

Siva Doe - Mon, 2007-07-16 15:31

The title should really say '10 @ Sun; 15 w/ Sun'. When I joined Larsen and Toubro (L&T) in 1992, little did I know that those pizza boxes named SPARCstation 1 and Sun 3/xx (Motorola CPUs?) were made by a company that I was going to work for about 4 years later. It was fun playing with the SS1, writing PostScript programs that drew directly on the root window. The 3 series was running SunView (I am sure quite a few would remember this GUI). My impression is that it was as fast and responsive as GNOME currently is on my Ultra 20 ;)
It has been a roller coaster ride with Sun. I have had moments of extreme happiness (probably the news that Sun stock was doing $120+) and also the complete opposite. I have been with Sun IT doing application development, later did system administration with ITOps, and am now with the engineering teams.
I greatly admire Sun as a company and can't think of working for anyone else; I am afraid I would be too biased to work anywhere else. The freedom that you get here is awesome. One has to work at Sun to believe and feel it. I am proud to be part of Sun's efforts, with open source in particular.
I hope I will be around to write '15 @ Sun' and '20 @ Sun'. Thanks to all my colleagues who have been making my life at Sun a great one. Thank you, Sun.

ATG Rollup 4 and my Custom schema

Fadi Hasweh - Mon, 2007-07-16 01:38
After applying the ATG Rollup 4 patch (4676589) on our HP-UX server successfully, we started to receive the following error, but only on our customized schema, not on the standard schemas.
The error appeared whenever we tried to run any procedure from the customized schema; we kept getting the following, even though everything worked fine before the patch:
"
ORA-00942: table or view does not existORA-06512: at "APPS.FND_CORE_LOG", line 23ORA-06512: at "APPS.FND_CORE_LOG", line 158ORA-06512: at "APPS.FND_PROFILE", line 2468ORA-06512: at "APPS.XX_PACKAGE_PA", line 682ORA-06512: at line 4
"

After checking on Metalink we got a hint from note 370000.1. The note does not apply to exactly the same case, but it did give us the hint, and the solution was as follows:

connect as APPLSYS
GRANT SELECT ON FND_PROFILE_OPTIONS TO SUPPORT;
GRANT SELECT ON FND_PROFILE_OPTION_VALUES TO SUPPORT;


(SUPPORT is my customized schema.)
Have an error-free day ;-)

fadi

Blogging away!

Menon - Sat, 2007-07-14 18:00
For a long time I have wanted to create a web site with articles reflecting my thoughts on databases and J2EE. During my 15-odd years of experience in the software industry, I have realized that there is a huge gap between the middle-tier folks in Java and the database folks (or the back-end folks). In fact, my book - Expert Oracle JDBC Programming - was largely inspired by my desire to fill this gap for Java developers who develop Oracle-based applications. Although most of my industry experience has been in developing Oracle-based applications, during the last 2 years or so I have had the opportunity to work with MySQL and SQL Server databases as well. This has given me a somewhat unique perspective on developing Java applications that use a database (a pretty large spectrum of applications).

This blog will contain my opinions on this largely controversial subject (think database independence, for example) and on good practices related to Java/J2EE and database programming (Oracle, MySQL and SQL Server). From time to time it will also include any other personal ramblings I may choose to add.

Feel free to give comments on any of my posts here.

Enjoy!

Using dbx collector

Fairlie Rego - Sat, 2007-07-14 08:45
It is quite possible to have a single piece of SQL that consumes more and more CPU over time without an increase in logical I/O for the statement or in the amount of hard parsing.

The reason could be extra CPU burned over time in an Oracle source code function that has not been instrumented as a wait in the RDBMS kernel. One way to find out which function in the Oracle source code is the culprit is via the dbx collector facility available in Sun Studio 11. I guess DTrace would also help, but I haven't played with it. This approach can also be used to diagnose increased CPU usage of Oracle tools across different RDBMS versions.

Let us take a simple example of how to run this tool on a simple insert statement.

SQL> create table foo ( a number);

Table created.

> sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Sat Jul 14 23:46:03 2007

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Enter user-name: /

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> set sqlp sess1>>
sess1>>

Session 2
Find the server process servicing the previously spawned sqlplus session and attach to it via the debugger.

> ps -ef | grep sqlplus
oracle 20296 5857 0 23:47:38 pts/1 0:00 grep sqlplus
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus
> ps -ef | grep 17205
oracle 20615 5857 0 23:47:48 pts/1 0:00 grep 17205
oracle 17237 17205 0 23:46:04 ? 0:00 oracleTEST1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus

> /opt/SUNWspro/bin/dbx $ORACLE_HOME/bin/oracle 17237

Reading oracle
==> Output trimmed for brevity.

dbx: warning: thread related commands will not be available
dbx: warning: see `help lwp', `help lwps' and `help where'
Attached to process 17237 with 2 LWPs
(l@1) stopped in _read at 0xffffffff7bfa8724
0xffffffff7bfa8724: _read+0x0008: ta 64
(dbx) collector enable


Session 1
==================================================================
begin
for i in 1..1000
loop
insert into foo values(i);
end loop;
end;
/

Session 2
==================================================================

(dbx) cont
Creating experiment database test.3.er ...
Reading libcollector.so

Session 1
==================================================================
PL/SQL procedure successfully completed.

sess1>>exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

Session 2
=========

execution completed, exit code is 0
(dbx) quit

The debugger creates a directory, test.3.er in this case (the numeric suffix increments with each experiment).
You can analyse the collected data using the Analyzer GUI tool.

> export DISPLAY=10.59.49.9:0.0
> /opt/SUNWspro/bin/analyzer test.3.er



You can also generate a callers-callees report using the following syntax:

/opt/SUNWspro/bin/er_print test.3.er
test.3.er: Experiment has warnings, see header for details
(/opt/SUNWspro/bin/er_print) callers-callees

A before and after image of the performance problem would help in diagnosing which function in the code consumes more CPU over time.

Architectural Differences in Linux

Solution Beacon - Fri, 2007-07-13 16:55
In this second edition in the Evaluating Linux series of posts I want to discuss what is both one of the strengths and one of the weaknesses of Linux, namely the architectural differences between it and the traditional UNIX platforms. The relevant architectural differences between Linux and UNIX (AIX, HP-UX, Solaris: take your pick) can be viewed as several broad categories: hardware differences, filesystem...

Oracle Ireland employee # 74 signing off...

Donal Daly - Fri, 2007-07-13 08:29
I will shortly be starting my life outside Oracle after some 15 years there. My last day is today.

I've enjoyed it immensely and am proud of our accomplishments. It really doesn't seem like 15 years, and I have been lucky to work on some very exciting projects with some very clever people, many of whom have become friends. I look forward to hearing about all the new releases coming from Database Tools in the future.

Next it is two weeks holidays in France (I hope the weather gets better!) and then the beginning of my next adventure in a new company. More on that later.

I think I'll continue to blog on database tools topics.

Eclipse JSF Tools Turns 1.0

Omar Tazi - Thu, 2007-07-12 18:54
I would like to congratulate Raghu Srinivasan from Oracle (Eclipse JSF Tools Project Lead) and his team for helping the community produce its first official release of the JSF Tools Project. A couple of weeks ago the Eclipse Foundation announced the Europa release, which among other things included Web Tools Platform (WTP) 2.0, of which the JSF Tools Project v1.0 is an important piece.

JSF Tools v1.0 is a key milestone, as it simplifies the development of JavaServer Faces applications in the Eclipse environment. The highlights of this release include performance improvements, a new Web Page Editor, and a graphical editor for building HTML/JSP/JSF web pages. This release is also extensible by design: it comes with an extensibility framework that allows third-party developers to come up with their own enhancements.

This release is yet another milestone in delivering "productivity with choice" to our customers. For more information on other recent activities around Oracle's involvement with Eclipse check out this blog entry.

- Download Eclipse Europa: http://download.eclipse.org/webtools/downloads/drops/R2.0
- Release notes for Eclipse WTP 2.0:
http://www.eclipse.org/webtools/releases/2.0

Can a change in execution plan change the results?

Rob Baillie - Thu, 2007-07-12 08:15
We've been using Oracle Domain indexes for a while now in order to search documents and get back a ranked order of things that meet certain criteria. The documents are related to people, and we augment the basic text search with other filters and score metrics based on the 'people' side of things to get an overall 'suitability' score for the results in a search. Without giving too much away about the business I work with I can't really tell you much more about the product than that, but it's probably enough background for this little gem.

We've known for a while that the domain index 'score' returned from a 'contains' clause is based not only on the document to which that score relates, but also on the rest of the set that is searched. An individual document score does not live in isolation; rather it lives in the context of the whole result set. No problem. As I say, we've known this for a while and so have our customers. Quite a while ago they stopped asking what the numbers mean and learned to trust them.

However, today we realised something. Since the results are affected by the result set that is searched, this means that the results can be affected by the order in which the optimizer decides to execute a query.

I can't give you a full end-to-end example, but I can assure you that the following is most definitely the case on one of our production domain indexes (names changed, obviously). We have a two-column table 'document_index', which contains 'id' and 'document_contents'. Both columns have an index: the id is the primary key, and the other column has a domain index.

The following SQL gives the related execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, :1, 1 ) > 0
AND id = :2

SELECT STATEMENT
 TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
  DOMAIN INDEX SCOTT.DOCUMENT_INDEX_IDX01

However, the alternative SQL gives this execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, 'Some text', 1 ) > 0
AND id = :2

SELECT STATEMENT
 TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
  INDEX UNIQUE SCAN SCOTT.DOCUMENT_INDEX_PK

Normally, this kind of change in execution path wouldn't be a problem. But as stated earlier, the result of a score operation against a domain index depends not just on the individual records but on the context of the whole result set. The first execution gives you a score for the single document in the context of all the documents in the table; the second gives you a score within the context of just that document. The scores are different.

Now obviously, this is an extreme example, but more subtle examples will almost certainly exist if you combine the domain index lookups with any other where clause criteria. This is especially true if you're using literal values instead of bind variables, in which case you may find the execution path changing between calls to the 'same' piece of SQL.

My advice? Well, we're going to split our domain index lookups from all the rest of the filtering criteria; that way we can prepare the set of documents we want the search to be within, and know that the scoring algorithm will be applied consistently.
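
As a sketch of one possible safeguard (not from the original post, and worth verifying on your own system; the index name is the hypothetical one used above), an INDEX hint can pin the text-index access path so the scoring context stays the same in both forms of the query:

SELECT /*+ INDEX( d document_index_idx01 ) */ id, SCORE( 1 )
FROM document_index d
WHERE CONTAINS( document_contents, :1, 1 ) > 0
AND id = :2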

How to OBFUSCATE passwords and ENCRYPT sensitive fields in BPEL PM?

Arvind Jain - Wed, 2007-07-11 15:19
Here is a small tip on security while using Oracle BPEL Process Manager.

Many times you have to supply passwords and other sensitive information in your BPEL PM project files (*.bpel, *.xml, *.wsdl). How do you ensure that these are not visible as clear text to others who do not have access to the source code? Here is a quick tip on using the encryption="encrypt" attribute.

Where can this be used?

- to obfuscate password info while accessing a partner link that refers to a web service secured by Basic Authentication (login/password).

Example:

Suppose you have a partnerlink definition with LOGIN PASSWORD info as shown below, and you want to obfuscate the password, i.e. you do not want to see the clear text "cco-pass".

(sample)
<partnerLinkBinding name="PartnerProfileService">
<property name="wsdlLocation">PartnerProfileWSRef.wsdl</property>
<property name="basicUsername">cco-userid</property>
<property name="basicPassword">cco-pass</property>
<propertyname="basicHeaders">credentials</property>
</partnerLinkBinding>

Add the attribute encryption="encrypt" to sensitive fields; this will cause the value to be encrypted at deployment. So the new XML will look like:


(sample)
<partnerLinkBinding name="PartnerProfileService">
<property name="wsdlLocation">PartnerProfileWSRef.wsdl</property>
<property name="basicUsername">cco-userid</property>
<property name="basicPassword" encryption="encrypt">cco-pass</property>
<property name="basicHeaders">credentials</property>
</partnerLinkBinding>


Then deploy your process, and the password will be encrypted.
Have fun encrypting things!

Backing Up and Recovering Voting Disks

Pankaj Chandiramani - Tue, 2007-07-10 21:31

What is a voting disk and why is it needed?
The voting disk records node membership information. A node must be
able to access more than half of the voting disks at any time.

For example, if you have seven voting disks configured, then a node must
be able to access at least four of the voting disks at any time. If a
node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster.

Backing Up Voting Disks

When should you back up a voting disk?

  1. After installation
  2. After adding nodes to or deleting nodes from the cluster
  3. After performing voting disk add or delete operations

To make a backup copy of a voting disk, use the Linux dd command. Perform this operation on every voting disk as needed, where voting_disk_name is the name of the active voting disk and backup_file_name is the name of the file to which you want to back up the voting disk contents:
dd if=voting_disk_name of=backup_file_name

If your voting disk is stored on a raw device, use the device name in place of voting_disk_name. For example:
dd if=/dev/sdd1 of=/tmp/voting.dmp

Note: When you use the dd command to back up the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process before taking the backup.
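
If you are not sure which voting disks are configured, you can list them first (a sketch assuming Oracle Clusterware 10g Release 2 syntax; the output format varies by release):

crsctl query css votedisk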

Recovering Voting Disks

If a voting disk is damaged and no longer usable by Oracle Clusterware, you can recover it if you have a backup file:

dd if=backup_file_name of=voting_disk_name

Categories: DBA Blogs
