This is the second post of my little project of building my own RAC on my Windows 7 desktop (64-bit, 8 GB RAM), using 3 VMs (VMware Workstation 7.1.3 build-324285):

  • 2 VMs for the two RAC nodes, based on OEL 5.5, with 11.2.0.2 for the infrastructure and the database.
  • 1 VM as my own SAN, based on Openfiler 2.3 (free), for ASM.

My goal: get a little bit of hands-on experience with Openfiler and do some testing with RAC 11gR2, especially 11.2.0.2. Maybe it's helpful for somebody else, so I'll post my experiences. These are the steps I'm going to perform.

– The first post handled the following:

1. Planning my installation
2. Create a VM with Openfiler
3. Configure Openfiler

– The second post (this post):

4. Create a VM as node rac1 and configure ASM
5. Create a VM as node rac2
6. Install Oracle RAC Infrastructure
7. Install Oracle RAC software
8. Install Oracle RAC database

A little summary of my actions in the former post:

For my own project I used Internet resources of course, and a few of them are listed below.

A useful blog to set up storage with Openfiler:
Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI, by Jeffrey Hunter
http://www.appsdba.info/docs/oracle_apps/RAC/Install_Open_Filer.pdf

First of all you need software:

VMware Workstation (not free) or Server (free). I used Workstation here, which has some advantages like cloning and using snapshots.
Openfiler 2.3 – 64-bit ISO file (not the appliance!)
Oracle Enterprise Linux 5.5
Oracle software 11.2.0.2 (Linux) from My Oracle Support (not a patch, a full install!). 11.2.0.1 is still available on OTN.

Creating 3 VMs:

Node (VM)    Instance   DB name            Memory   O.S.
Rac1         Racdb1     Racdb.jobacle.nl   2.5 GB   OEL 5.5 (x86_64)
Rac2         Racdb2     Racdb.jobacle.nl   2.5 GB   OEL 5.5 (x86_64)
Openfiler1   –          –                  768 MB   Openfiler 2.3 (64-bit)

Remember: the swap space of rac1 and rac2 should be equal to or larger than 3 GB.

Network:

Node (VM)    Public IP (bridged, eth0)   Private IP (host-only, eth1)   Virtual IP
Rac1         192.168.178.151             192.168.188.151                192.168.178.251
Rac2         192.168.178.152             192.168.188.152                192.168.178.252
Openfiler1   192.168.178.195             192.168.188.195                –

SCAN:

SCAN name   SCAN IP
Rac-scan    192.168.178.187

Oracle software on local storage (rac1 + rac2):

Software     O.S. user   Primary group   Suppl. groups             Home dir       Oracle base / Oracle home
Grid infra   grid        oinstall        asmadmin,asmdba,asmoper   /home/grid     /u01/app/grid / /u01/app/11.2.0/grid
Oracle RAC   oracle      oinstall        dba,oper,asmdba           /home/oracle   /u01/app/oracle / /u01/app/oracle/product/11.2.0/dbhome_1

Shared storage (total of 80 GB):

Storage           Filesystem   Volume size   ASM name      ASM redundancy   Openfiler volume name
OCR/Voting disk   ASM          2 GB          +CRS          External         Racdb_crs1
DB files          ASM          39 GB         +RACDB_DATA   External         Racdb_data1
Fast Rec. area    ASM          39 GB         +FRA          External         Racdb_fra1

What I need in this post is the infrastructure software and then the database software. The content of the 7 downloaded files:

Oracle Database (includes Oracle Database and Oracle RAC):
Note: you must download both zip files to install Oracle Database.
p10098816_112020_platform_1of7.zip
p10098816_112020_platform_2of7.zip
Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart):
p10098816_112020_platform_3of7.zip
Oracle Database Client:
p10098816_112020_platform_4of7.zip
Oracle Gateways:
p10098816_112020_platform_5of7.zip
Oracle Examples:
p10098816_112020_platform_6of7.zip
Deinstall:
p10098816_112020_platform_7of7.zip
 

First I'll use zip file number 3, then 1 and 2.

——————————//————————

4. Create a VM as node rac1

In the last post, creating and configuring Openfiler, I created a VM with two network devices (public and private network). Same trick again, not that exciting, so I will not repeat the steps.

Now I installed OEL 5.5, just a normal, default installation. I did not document this here either. Sorry. The result is the following:

[root@rac1 media]# uname -a
Linux rac1.jobacle.nl 2.6.18-194.el5 #1 SMP Mon Mar 29 22:10:29 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
My network (eth0=public=bridged network in VMware, eth1=private=host-only in VMware):
[root@rac1 media]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:B2:0F:F7
inet addr:192.168.178.151 Bcast:192.168.178.255  Mask:255.255.255.0
……..
eth1      Link encap:Ethernet  HWaddr 00:0C:29:B2:0F:01
inet addr:192.168.188.151 Bcast:192.168.188.255  Mask:255.255.255.0
…..
lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
….

One thing of vital importance: am I able to reach my SAN (Openfiler)?

[root@rac1 media]# ping 192.168.188.195
PING 192.168.188.195 (192.168.188.195) 56(84) bytes of data.
64 bytes from 192.168.188.195: icmp_seq=1 ttl=64 time=1.47 ms
64 bytes from 192.168.188.195: icmp_seq=2 ttl=64 time=0.172 ms
64 bytes from 192.168.188.195: icmp_seq=3 ttl=64 time=0.206 ms

Should be all right…

Now it's time to get ready for checking and installing some Oracle stuff, starting with the Grid Infrastructure.

I downloaded the 7 (!) files from http://support.oracle.com (not a patch, a full install!), software 11.2.0.2 (Linux 64-bit). Number 3 is the first to use:

p10098816_112020_platform_3of7.zip : Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart)

I put it on my rac1 node and only unzipped it.

Let's see the prerequisites:

1.5 GB of RAM for Grid Infrastructure alone, or 2.5 GB for Grid + RAC.
1.5 GB for swap.
5.5 GB of space for the Grid home.
1 GB for /tmp.
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo
df -h /tmp
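
To save some squinting at the output, a little hedged sketch that wraps these checks and compares them with the thresholds listed above (the limits are simply the ones from this list, nothing official):

#!/bin/sh
# quick prerequisite check; thresholds roughly 2.5 GB RAM (grid + RAC), 1.5 GB swap, 1 GB free in /tmp
MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
SWAP_KB=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
TMP_KB=$(df -Pk /tmp | awk 'NR==2 {print $4}')
[ "$MEM_KB" -ge 2621440 ]  && echo "Memory OK" || echo "Memory too small: ${MEM_KB} kB"
[ "$SWAP_KB" -ge 1572864 ] && echo "Swap OK"   || echo "Swap too small: ${SWAP_KB} kB"
[ "$TMP_KB" -ge 1048576 ]  && echo "/tmp OK"   || echo "/tmp too small: ${TMP_KB} kB available"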

IP addresses in the /etc/hosts file for the moment (no DNS yet):

# Public (eth0)
192.168.178.151                               rac1.jobacle.nl rac1
192.168.178.152                               rac2.jobacle.nl rac2
192.168.178.195                               openfiler1.jobacle.nl openfiler1
 
# Private (eth1)
192.168.188.151                               rac1-priv.jobacle.nl rac1-priv
192.168.188.152                               rac2-priv.jobacle.nl rac2-priv
192.168.188.195                               openfiler1-priv.jobacle.nl openfiler1-priv
 
# VIP
192.168.178.251                               rac1-vip.jobacle.nl rac1-vip
192.168.178.252                               rac2-vip.jobacle.nl rac2-vip
 
#SCAN
192.168.178.178                               rac-scan.jobacle.nl rac-scan
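
A quick, hedged loop to verify that the live addresses from this hosts file actually answer (the VIPs and the SCAN address will of course not respond yet):

for h in rac1 rac2 openfiler1 rac1-priv rac2-priv openfiler1-priv; do
  ping -c 1 -W 1 $h > /dev/null 2>&1 && echo "$h reachable" || echo "$h NOT reachable"
done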

Creating users and directories:

groupadd -g 1000 oinstall
groupadd -g 1031 dba
groupadd -g 1032 oper
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1022 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir -p /home/grid
chown -R grid:oinstall /home/grid
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
mkdir -p /home/oracle
chown -R oracle:oinstall /home/oracle
chmod -R 775 /u01/
passwd grid
passwd oracle
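
Not in my original notes, but handy at this point: a hedged sketch of the environment I would give both users in their ~/.bash_profile, based purely on the locations from the planning table above (the SIDs +ASM1 and racdb1 are for rac1; on rac2 they become +ASM2 and racdb2, and the Oracle home should match whatever you actually choose at install time):

# /home/grid/.bash_profile (rac1)
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH

# /home/oracle/.bash_profile (rac1)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=racdb1
export PATH=$ORACLE_HOME/bin:$PATH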

Checking packages (for 11.2 only the 64-bit packages are needed):

rpm -aq | grep  binutils-2.17.50.0.6
rpm -aq | grep  compat-libstdc++-33-3.2.3
rpm -aq | grep  elfutils-libelf-0.125
rpm -aq | grep  elfutils-libelf-devel-0.125
rpm -aq | grep  gcc-4.1.2
rpm -aq | grep  gcc-c++-4.1.2
rpm -aq | grep  glibc-2.5-24
rpm -aq | grep  glibc-common-2.5
rpm -aq | grep  glibc-devel-2.5
rpm -aq | grep  glibc-headers-2.5
rpm -aq | grep  ksh-20060214
rpm -aq | grep  libaio-0.3.106
rpm -aq | grep  libaio-devel-0.3.106
rpm -aq | grep  libgcc-4.1.2
rpm -aq | grep  libstdc++-4.1.2
rpm -aq | grep  libstdc++-devel-4.1.2
rpm -aq | grep  make-3.81
rpm -aq | grep  numactl-devel-0.9.8.x86_64
rpm -aq | grep  sysstat-7.0.2
rpm -aq | grep  unixODBC-2.2.11
rpm -aq | grep  unixODBC-devel-2.2.11
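
The same check as a small hedged loop, so a missing package stands out immediately (the list is simply the one above, without the version numbers):

for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
         libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
         numactl-devel sysstat unixODBC unixODBC-devel; do
  rpm -q $p > /dev/null 2>&1 && echo "$p: $(rpm -q $p | head -1)" || echo "MISSING: $p"
done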

The numactl-devel-0.9.8.x86_64 package was missing in my case; it is located on CD #3, in the directory 'Server'.

rpm -Uvh numactl-devel-0.9.8.x86_64.rpm

Separate action: the cvuqdisk package.

Locate the software [rac1]:  /media/grid/rpm/cvuqdisk-1.0.9-1.rpm
 
On both nodes:
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
 
rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation…
cvuqdisk-1.0.9-1.rpm

 

Packages for ASM:

Check what your kernel is with uname -r:
[root@rac1 sbin]# uname -r
2.6.18-194.el5
[root@rac1 sbin]# uname -p
x86_64
[root@rac1 sbin]# uname -i
x86_64

Then download the right packages from:

http://www.oracle.com/technetwork/topics/linux/downloads/rhel5-084877.html

Install them:

rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
 
Check what you have…
[root@rac1 media]# rpm -aq |grep oracleasm
oracleasm-support-2.1.3-1.el5
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-194.el5-2.0.5-1.el5

Configure the ASM driver:

[root@rac1 media]# oracleasm configure -i
Configuring the Oracle ASM library driver.
 
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
 
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Load and start the oracleasm module:

[root@rac1 init.d]# /etc/init.d/oracleasm start
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Initiating the iSCSI service:

rpm -Uvh iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm   (2nd CD, directory 'Server')
service iscsi start
chkconfig iscsid on
chkconfig iscsi on

Discovering the iSCSI targets on openfiler1:

iscsiadm -m discovery -t st -p openfiler1-priv

At first I had no output at all. It turned out that the file /etc/initiators.deny on openfiler1 was populated. I commented out all the lines, did a 'service iscsi-target restart' on openfiler1, tried the discovery again and:

192.168.188.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1
192.168.188.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.188.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
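
The discovery only records the targets; on OEL 5 the iscsi service will usually log in by itself at the next start, but to log in right away and make sure the login happens automatically at boot, something like this should work. A hedged sketch, using the portal and IQNs from the output above:

for t in iqn.2006-01.com.openfiler:racdb.crs1 \
         iqn.2006-01.com.openfiler:racdb.data1 \
         iqn.2006-01.com.openfiler:racdb.fra1; do
  iscsiadm -m node -T $t -p 192.168.188.195 --login
  iscsiadm -m node -T $t -p 192.168.188.195 --op update -n node.startup -v automatic
done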

5. Create the second node, rac2

At this point I decided to clone my VM and create the second node, rac2.

After that: power up rac2 (while the 'old' rac1 is still down).

Log in as root.

Change the hostname from rac1 to rac2:

# cd /etc/sysconfig
# vi network    (change the hostname)
# hostname rac2
# service network restart
Change the IP addresses manually or through the GUI (System → Administration → Network):
# service network restart

Setting up SSH connectivity between the nodes:

I use this script for the users grid and oracle (change the home directories and the name check accordingly):

File: nodeinfo:
rac1
rac2

File ssh_setup.sh:
if [ `whoami` != "grid" ]
then
echo "You must execute this as grid!!!";exit
fi
LOCAL=`hostname -s`
REMOTE=`sed -n 2p $HOME/nodeinfo`
ssh $REMOTE rm -rf /home/grid/.ssh
rm -rf /home/grid/.ssh
ssh-keygen -t rsa -f $HOME/.ssh/id_rsa -N '' -q
touch $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
chmod 400 $HOME/.ssh/authorized_keys
ssh -o StrictHostKeyChecking=no -q $LOCAL /bin/true
scp -o StrictHostKeyChecking=no -q -r $HOME/.ssh $REMOTE:
# and now add the host keys for the FQDN hostnames
ssh -o StrictHostKeyChecking=no -q ${LOCAL} /bin/true
ssh -o StrictHostKeyChecking=no -q ${REMOTE} /bin/true
ssh -o StrictHostKeyChecking=no -q ${LOCAL}.jobacle.nl /bin/true
ssh -o StrictHostKeyChecking=no -q ${REMOTE}.jobacle.nl /bin/true
scp -q $HOME/.ssh/known_hosts $REMOTE:$HOME/.ssh/known_hosts
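
Roughly how I use it: run it on rac1 only, once as grid and once as oracle (after changing the hard-coded home directories and the whoami check), and then verify that both directions work without a password prompt:

[grid@rac1 ~]$ ./ssh_setup.sh
[grid@rac1 ~]$ ssh rac2 date
[grid@rac1 ~]$ ssh rac2.jobacle.nl date
[grid@rac2 ~]$ ssh rac1 date          (and check the way back from rac2 as well)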

Run cluvfy as precheck:

# ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

In my case a fixup script was needed: /tmp/CVU_11.2.0.2.0_grid

The response file looked like this:

SYSCTL_LOC="/sbin/sysctl"
INSTALL_USER="grid"
FILE_MAX_KERNEL="6815744"
IP_LOCAL_PORT_RANGE="9000 65500"
RMEM_MAX="4194304"
WMEM_MAX="1048576"
AIO_MAX_NR="1048576"
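
For reference: these map onto the usual Oracle kernel settings, so the same effect can be had by hand. A hedged sketch of roughly what the fixup script does for you, made persistent in /etc/sysctl.conf:

cat >> /etc/sysctl.conf <<EOF
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
EOF
sysctl -p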

Only some kernel parameters. I ran the fixup script on both nodes, checked again, and the check was still not completely successful:

Check: Swap space
Node Name     Available                 Required                  Comment
————  ————————  ————————  ———-
rac2          1.9687GB (2064376.0KB)    2.9458GB (3088872.0KB)    failed
rac1          1.9687GB (2064376.0KB)    2.9458GB (3088872.0KB)    failed
Result: Swap space check failed

I think I can live with that….
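
If you don't want to live with it, a swap file is added quickly enough. A hedged sketch for roughly 1.5 GB extra, which should be enough to satisfy the 3 GB requirement on these VMs:

dd if=/dev/zero of=/extraswap bs=1M count=1536
chmod 600 /extraswap
mkswap /extraswap
swapon /extraswap
echo "/extraswap swap swap defaults 0 0" >> /etc/fstab
grep SwapTotal /proc/meminfo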

Moving on with the iSCSI configuration.

When I boot, the node automatically logs in to the iSCSI targets. I can see how they are mapped:

# partprobe
# fdisk -l
In my case /dev/sdg (41 GB), /dev/sdi (2 GB), /dev/sdh (41 GB).

Make it persistent (the same device names every time you boot):

On both nodes:

Make a rule (it's a bit outside a DBA's job, I don't know exactly what I'm doing, but all right…):

# vi /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

Create the script mentioned in the rule:

# vi /etc/udev/scripts/iscsidev.sh
#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
  exit 1
fi

# Check if it is a QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ "$check_qnap_target_name" = "iqn.2004-04.com.qnap" ]; then
  target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

# chmod 755 /etc/udev/scripts/iscsidev.sh
# service iscsi stop
# service iscsi start

Now the iSCSI targets are 'mapped' to drives:

# ls -l /dev/iscsi/*

Keep in mind that the mapping of iSCSI target names to local SCSI device names will be different on each of our RAC nodes (it may even change on a particular node after a reboot). This does not present a problem, as we are using the local device names presented to us by udev.

We've got three disks now that need to be partitioned (on node rac1):

/dev/iscsi/crs1/part   -> /dev/sdi
/dev/iscsi/data1/part  -> /dev/sdh
/dev/iscsi/fra1/part   -> /dev/sdg

[root@rac1 scripts]# fdisk /dev/iscsi/crs1/part

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won’t be recoverable.
 
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
 
Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1012, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1012, default 1012): 1012
 
Command (m for help): p
 
Disk /dev/iscsi/crs1/part: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes
 
Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/crs1/part1               1        1012     2258753   83  Linux
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

Two more to do:

[root@rac1 scripts]# fdisk  /dev/iscsi/data1/part

[root@rac1 scripts]# fdisk  /dev/iscsi/fra1/part
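
I simply repeated the same answers for both disks. If you prefer to script it, feeding fdisk the answers also works; a hedged sketch that creates one primary partition spanning each remaining disk (the two empty lines accept the default first and last cylinder):

for d in /dev/iscsi/data1/part /dev/iscsi/fra1/part; do
fdisk $d <<EOF
n
p
1


w
EOF
done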

Then verify the symbolic links of the partitions:

# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
 
ip-192.168.188.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdi
ip-192.168.188.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdi1
ip-192.168.188.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdg
ip-192.168.188.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdg1
ip-192.168.188.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdh
ip-192.168.188.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sdh1

After this partitioning, we need to create ASM disks (node rac1 as root):

[root@rac1 init.d]# /usr/sbin/oracleasm createdisk CRSVOL /dev/iscsi/crs1/part1
Writing disk header: done
Instantiating disk: done
[root@rac1 init.d]# /usr/sbin/oracleasm createdisk DATAVOL /dev/iscsi/data1/part1
Writing disk header: done
Instantiating disk: done
 
[root@rac1 init.d]# /usr/sbin/oracleasm createdisk FRAVOL /dev/iscsi/fra1/part1
Writing disk header: done
Instantiating disk: done

Check on rac1:

[root@rac1 init.d]# /usr/sbin/oracleasm listdisks
CRSVOL
DATAVOL
FRAVOL

On rac2, only run a 'scandisks':

[root@rac2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks…
Scanning system for ASM disks…

Check on rac2:

[root@rac2 ~]# /usr/sbin/oracleasm listdisks
CRSVOL
DATAVOL
FRAVOL

6. Installing the infrastructure….

Screens:

1: Skip software updates

2: Install and configure Oracle Grid Infrastructure for a Cluster

3: Advanced Installation

4: Languages: default English

5: Cluster name: rac-cluster

SCAN name: rac-scan
Port: 1521
GNS: not yet.

6: Cluster node information: hostnames rac1.jobacle.nl / rac2.jobacle.nl (added), including the VIP names rac1-vip.jobacle.nl / rac2-vip.jobacle.nl.

7: Specify network usage. 192.168.178.0 = public / 192.168.188.0 = private.

8: Storage options: ASM

9: Create ASM disk group. Here I made a mistake. What you have to fill in here is the ASM disk group that will hold the voting disk, OCR and ASM spfile. I filled in the data group… and only noticed it when the installation was finished. But it was good practice to move all of that stuff to the CRS disk group afterwards (see the sketch at the end of this step) :-)

10: ASM passwords. All the same. I chose 'oracle'.

11: Failure isolation: do not use IPMI.

12: Privileged Operating System Groups: asmdba, asmoper, asmadmin

13: Installation location: base=/u01/app/grid,  software=/u01/app/11.2.0/grid

14: Inventory location: /u01/app/oraInventory

15: Prerequisite check. Oh… there's a warning:

PRVF-5150 : Path ORCL:VOL1 is not a valid path on all nodes (and for the other disks as well)
 
This turned out to be an 'Unpublished bug':
 
Note [ID 1210863.1]: Unpublished bug 10026970
 
 
/etc/init.d/oracleasm status
 
Checking if ASM is loaded:                                  [  OK  ]
Checking if /dev/oracleasm is mounted:                [  OK  ]
## Both should be [OK]
 
 
/etc/init.d/oracleasm listdisks
 
CRSVOL
DATAVOL
FRAVOL
 
 
Check users:
 
id <grid user>
uid=1001(grid) gid=1000(oinstall) groups=1000(oinstall)
 
/usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
 
## both ORACLEASM_UID and ORACLEASM_GID should match id output for grid user
So: I ignored the warning.

16. Summary

17. Install

Then the window appears with the scripts to run as root:

Rac1: /u01/app/oraInventory/orainstRoot.sh
Rac2: /u01/app/oraInventory/orainstRoot.sh
Rac1: /u01/app/11.2.0/grid/root.sh
Rac2: /u01/app/11.2.0/grid/root.sh

This takes a while.

It should look something like this (and you'll see my mistake: the voting disk ends up in the data disk group…):

Running Oracle 11g root script…
 
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/app/11.2.0/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
 
 
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization – successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies – this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
 
ASM created and started successfully.
 
Disk Group RAC_DATA created successfully.
 
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk ac2d8720175b4fd6bfce07023ae69299.
Successfully replaced voting disk group with +RAC_DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                  File Name       Disk group
--  -----    -----------------                  ---------       ----------
 1. ONLINE   ac2d8720175b4fd6bfce07023ae69299   (ORCL:FRAVOL)   [RAC_DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.RAC_DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.RAC_DATA.dg' on 'rac1' succeeded
ACFS-9200: Supported
ACFS-9200: Supported
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster … succeeded

A shorter list on rac2:

Running Oracle 11g root script…
 
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/app/11.2.0/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
 
 
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization – successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies – this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster … succeeded
You have new mail in /var/spool/mail/root

Now we have a running Grid Infrastructure on nodes rac1 and rac2. Next step: installing the database software and creating the database.
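
But first the repair of my screen-9 mistake: moving the OCR, the voting disk and the ASM spfile out of the data disk group into +CRS, once that disk group has been created in ASM (with asmca, for example). Roughly the commands I used, reconstructed afterwards, so treat this as a hedged sketch rather than a recipe. As root, from the grid home:

/u01/app/11.2.0/grid/bin/ocrconfig -add +CRS
/u01/app/11.2.0/grid/bin/ocrconfig -delete +RAC_DATA
/u01/app/11.2.0/grid/bin/crsctl replace votedisk +CRS
/u01/app/11.2.0/grid/bin/ocrcheck
/u01/app/11.2.0/grid/bin/crsctl query css votedisk

And for the ASM spfile, as the grid user in sqlplus / as sysasm (followed by a restart of the cluster stack so ASM picks up the new location):

SQL> create pfile='/tmp/asmpfile.ora' from spfile;
SQL> create spfile='+CRS' from pfile='/tmp/asmpfile.ora';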

7. Install Oracle RAC software (database)

When you've installed the Grid Infrastructure correctly, you will discover that the easiest part is the installation of the database (the software and the database itself).

Database software install

Unzip the files:

# p10098816_112020_Linux-x86-64_1of7.zip

# p10098816_112020_Linux-x86-64_2of7.zip

Both unzip into a subdirectory 'database'.

I unzipped the files as root and will install the database as user oracle, so I changed the ownership of the directory:

# chown -R oracle:oinstall database

Logging in as user oracle (the easiest way to get a working X Windows session…), I started runInstaller in the database subdirectory:

# ./runInstaller

Screen 1, Configure Security Updates: everything blank, including 'wish to receive security updates'.

Screen 2,  Download software updates: third option, skip software updates.

Screen 3,  Installation options: Install database software only.

Screen 4, Grid installation options: second option, Oracle RAC database installation (rac1 and rac2 should be visible and selected). You can also test and set up the SSH connectivity for the oracle user here.

Screen 5, Select product languages: English

Screen 6, Select  database edition: Enterprise Edition

Screen 7, Specify Installation Location: base= /u01/app/oracle, software location=/u01/app/oracle/product/11.2/db_1

Screen 8, Privileged Operating System Group: OSDBA group=dba, OSOPER group=oper.

Screen 9, Prerequisite check: swap space failed. Ignore.

Screen 10, Summary.

Screen 11, Install. You will notice that during the 'Setup files' phase the installed home on rac1 is copied to rac2. Run root.sh as root on rac1 first, then on rac2. Not at the same time! At the end return to the GUI and press OK.

Screen 12, Installation successful!

8. Install Oracle RAC  database.

# cd /u01/app/oracle/product/11.2/db_1/bin/

# ./dbca

Welcome screen: Oracle RAC Database

Screen 1, Operations: Create a database.

Screen 2, Database templates: General purpose or Transaction Processing

Screen 3, Database identification: Admin-Managed, nodes rac1 and rac2 ('Select All'!).

Screen 4, Management options: blank.

Screen 5, Database credentials: Use the same Admin Password for All Accounts (e.g. oracle)

Screen 6, Database file locations: Use Oracle-Managed Files, '+RACDB_DATA' (in my case), select 'Multiplex Redo Logs and Control Files', and define the two disk groups '+RACDB_DATA' and '+FRA'.

Screen 7, Recovery configuration: Specify Flash Recovery Area: +FRA, and Enable Archiving.

Screen 8, Database Content: blank.

Screen 9, Initialization Parameters: a memory size of 250 MB is enough for me, character set AL32UTF8, the rest left at the defaults.

Screen 10, Database storage:  kept it default.

Screen 11, Creation options: Create Database and also generate the database creation scripts.

Summary screen: OK. Let's go… First the scripts, then the database.

After a (long?) while, finished.

Post-installation tasks:

– Change /etc/oratab and add a line for the instance, because only the database name 'racdb' had been added by asmca. In my case on node rac1: 'racdb1:/u01/app/oracle/product/11.2/db_1:N'

On node rac2: 'racdb2:/u01/app/oracle/product/11.2/db_1:N'

– Check whether the database is running on both nodes: set the environment with oraenv and run: # srvctl status database -d racdb

Instance racdb1 is running on node rac1

Instance racdb2 is running on node rac2
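
Two more quick checks I like at this point; they are not from my original notes, so consider it a hedged sketch using the paths from this post. As grid, the complete cluster resource overview, and as oracle, the instances as seen from the database itself:

/u01/app/11.2.0/grid/bin/crsctl stat res -t

. oraenv          (enter racdb1)
sqlplus -S / as sysdba <<EOF
select inst_id, instance_name, host_name, status from gv\$instance;
EOF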

It looks like a working concept….