RAC

Configuring a private DNS server on Openfiler for use with Oracle RAC 12c on VirtualBox


This post was previously published on the AMIS blog.

To build an Oracle 12c RAC database on VirtualBox you need at least shared storage for ASM and a DNS server for the SCAN addresses. Several methods can be used for this, but for the storage in my private project I chose Openfiler, an open-source storage management tool, running in a separate VirtualBox VM. It plays the role a SAN would in real life (the complete system will be three VMs: two RAC nodes and one storage VM). Openfiler version: 2.99.

OK, storage is covered, but what about DNS? The quickest and dirtiest way to accomplish this is to run Dnsmasq on every RAC node. A nice blog post about that approach can be found here.

But what I want is a separate DNS server, just as in real life. The perfect candidate for this is the separate Openfiler VM.
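
To give an idea of what this boils down to: the single SCAN name has to resolve to three IP addresses, handed out round-robin. A minimal sketch of the zone records that would go on the Openfiler box (the domain, host names and addresses below are made-up examples, not my actual configuration):

  ; forward zone records for a hypothetical domain, e.g. mydomain.local
  rac-scan   IN  A   192.168.56.71
  rac-scan   IN  A   192.168.56.72
  rac-scan   IN  A   192.168.56.73
  rac01      IN  A   192.168.56.61
  rac02      IN  A   192.168.56.62

A quick nslookup of rac-scan against the Openfiler box should then return all three addresses.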

By |June 7th, 2014|Categories: Database, RAC|0 Comments

Oracle on RHEL 6, using ASM with or without ASMLib

This is from a while ago, but worth mentioning for those customers with a roadmap based on Red Hat Linux. The status regarding RHEL 6 is the following:

– Oracle has indeed certified RHEL 6 for the Oracle database, see also the comments in this article, and my separate blog post about it.

– Red Hat recently announced an extended lifecycle (support) for versions 5 and 6, from 7 to 10 years.

– Red Hat is making things harder by shipping its RHEL 6 kernel source as one big tarball, without breaking out the individual patches. Distribution in this form satisfies the GPL, but it makes life hard for anybody else wanting to see what has been done with this kernel.

Maybe as a result of this, Oracle states the following about ASMLib in note 1089399.1:

For RHEL 6, Oracle will provide ASMLib software and updates only when configured with a kernel distributed by Oracle. Oracle will not provide ASMLib packages for kernels distributed by Red Hat as part of RHEL 6. ASMLib updates will be delivered via Unbreakable Linux Network (ULN), which is available to customers with Oracle Linux support. ULN works with both Oracle Linux or Red Hat Linux installations, but ASMLib usage will require replacing any Red Hat kernel with a kernel provided by Oracle.

Red Hat has written this general article about it and this technical article on how to deal with it: using ASM with or without ASMLib.
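
For completeness: the usual way to run ASM without ASMLib is to give the candidate disks fixed names, ownership and permissions through udev rules. A minimal sketch of such a rule on RHEL 6; the WWID returned by scsi_id, the device name and the owner/group are placeholders, adapt them to your own environment:

  # /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative example)
  KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="360000000000000000000000000000001", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

After a start_udev the disk shows up as /dev/asm-disk1 with the right ownership, and ASM can pick it up via the asm_diskstring.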

Initially I stated in this post that you could not use ASM with RHEL 6, but Tim Hall corrected me on this; read his comment on this post. Thanks (and cheers…)!

Hope it helps somebody.

 

By |January 4th, 2012|Categories: Database, RAC|11 Comments

Multipath timeout issues with extended 11.2.0.2 cluster setup. Part II

The second and final post about an issue with a RAC configuration with two SANs. The problem was an I/O freeze of several minutes when crashing one of the two SANs. I ended the first post with a ‘cliffhanger’ because we had a solution but had not tested it yet. Now we have tested it.

Let me start with a recap of the setup from the first post.

Setup:

3 HP DL380 G6 systems with a basic RHEL 5u5 x86_64 installation (2 x RAC cluster nodes, 1 x NFS voting node)

2 HP EVA 6400 SAN systems with 2 controllers each (resulting in 8 paths per device)

Oracle 11.2.0.2

Test: power off one SAN. Default result / problem: an I/O freeze of several minutes. Oracle didn’t like it and started to evict, shut down and start up, which is expected behaviour after such a long I/O freeze. But this is not what you intend when installing a RAC with two SANs…
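
For reference: while running such a test you can follow the state of the individual paths on the cluster nodes, for example with (device names and output will of course differ per environment):

  # show the multipath topology and the status (active/failed) of each path
  multipath -ll
  # follow what multipathd and the FC layer report while the SAN is powered off
  tail -f /var/log/messages | grep -iE 'multipath|rport'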

By |January 3rd, 2012|Categories: Database, RAC|0 Comments

Param ‘_datafile_write_errors_crash_instance’, TRUE or FALSE?

Since 11.2.0.2 there’s a new parameter, "_datafile_write_errors_crash_instance", with which you can prevent the instance from crashing when a write error on a datafile occurs. But should I use it or not? The official text about this behaviour:

This fix introduces a notable change in behaviour in that
from 11.2.0.2 onwards an I/O write error to a datafile will
now crash the instance.

Before this fix I/O errors to datafiles not in the system tablespace
offline the respective datafiles when the database is in archivelog mode.
This behavior is not always desirable. Some customers would prefer
that the instance crash due to a datafile write error.
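
So with the default (TRUE) the instance crashes on a datafile write error; if you prefer the old behaviour (offline the datafile, keep the instance up) the parameter can be set to FALSE. Since it is an underscore parameter I would only do this after consulting Oracle Support; a sketch:

  SQL> -- revert to the pre-11.2.0.2 behaviour: offline the datafile instead of crashing
  SQL> alter system set "_datafile_write_errors_crash_instance"=FALSE scope=spfile sid='*';
  SQL> -- takes effect after an instance restart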

By |August 26th, 2011|Categories: Database, RAC|0 Comments

Multipath timeout issues with extended 11.2.0.2 cluster setup

We were setting up a two-node Oracle Grid Infrastructure (RAC) extended cluster on top of RHEL 5.5 according to the standard Oracle documentation, with of course a third NFS node as voting node, and using ASM to create "host-based" mirrored block devices for the Oracle software.
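
That "host-based" mirroring comes down to an ASM diskgroup with normal redundancy, one failure group per SAN and a quorum failgroup on the NFS node for a voting file. Roughly along these lines; the diskgroup name and disk paths are only illustrative:

  SQL> create diskgroup DATA normal redundancy
         failgroup san1 disk '/dev/mapper/san1_lun1'
         failgroup san2 disk '/dev/mapper/san2_lun1'
         quorum failgroup nfs disk '/voting_disk/vote1'
         attribute 'compatible.asm' = '11.2';

ASM then keeps a mirror copy of every extent on each SAN, while the small NFS quorum disk only carries a voting file.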

The setup is as follows:

3 HP DL380 G6 systems with a basic RHEL 5u5 x86_64 installation (2 x RAC cluster nodes, 1 x NFS voting node)

2 HP EVA 6400 SAN systems with 2 controllers each (resulting in 8 paths per device)

Oracle 11.2.0.2

We chose this configuration instead of a configuration with Data Guard because of our strict failover-time requirement in case of a node or SAN disaster: failover should complete within 30 seconds. This post raises the question whether we made the right decision…

The following analysis and testing was, by the way, the work of my colleague Chris Verhoef, a former Red Hat consultant:

With this setup we are facing the issue that if we lose a complete SAN, the I/Os to the ASM diskgroups are blocked for approximately 3 to 4 minutes. Oracle does not like this: about 70 seconds after such a freeze, the RDBMS starts to reboot (expected behaviour). To shorten this time we have done some testing with the following parameters (a sketch of where they live in multipath.conf follows the list):

checker timeout

no_path_retry

dev_loss_tmo
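
These are all device-mapper-multipath / SCSI settings; roughly they end up in /etc/multipath.conf like this. The values below are only the kind of thing we experimented with, not a recommendation, and depending on the device-mapper-multipath version dev_loss_tmo may have to be set via /sys/class/fc_remote_ports/.../dev_loss_tmo instead:

  # /etc/multipath.conf (test values, illustrative only)
  defaults {
          # maximum time (s) the path checker waits for a response
          checker_timeout   5
          # fail I/O immediately when all paths are gone instead of queueing
          no_path_retry     fail
          # time (s) before a lost FC remote port is removed
          dev_loss_tmo      10
  }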

By |August 25th, 2011|Categories: Database, RAC|0 Comments