At the moment my current job doesn’t offer me the opportunity to play with RAC and related stuff, so I decided to build my own RAC on my Windows 7 desktop (64-bit, 8 GB RAM), using 3 VMs (VMware Workstation, 7.1.3 build-324285):
- 2 VMs for the two RAC nodes, based on OEL 5.5, with 11.2.0.2 for both infrastructure and database.
- 1 VM as my own storage server, based on Openfiler 2.3 (free), with ASM as storage manager.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. Openfiler supports CIFS, NFS, HTTP/DAV, FTP, however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. A 500GB internal hard drive will be connected to the network storage server (sometimes referred to in this article as the Openfiler server) through an internal embedded SATA II controller. The Openfiler server will be configured to use this disk for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware as well as all Oracle ASM volumes.
This piece of text is taken from another website: http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle11gRAC/CLUSTER_10.shtml
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
My goal: get a little bit of hands-on experience with Openfiler and do some testing with RAC 11gR2, especially 11.2.0.2. Maybe it’s helpful for somebody else, so I’ll post my experiences. These are the steps I’m going to perform:
– The first post (this post):
1. Planning my installation
2. Create a VM with Openfiler
3. Configure Openfiler
– The second post:
4. Create a VM as node rac1 and configure ASM
5. Create a VM as node rac2
6. Install Oracle RAC infrastructure
7. Install Oracle RAC database software
8. Install Oracle RAC database
1. Planning my installation
A very important part of a RAC installation. Consistency and a structured approach help a lot during this kind of installation.
For my own project I used Internet resources of course, and a few of them are listed below.
A useful guide to set up storage with Openfiler: “Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI”, by Jeffrey Hunter: http://www.appsdba.info/docs/oracle_apps/RAC/Install_Open_Filer.pdf
First of all you need software:
- VMware Workstation (not free) or Server (free). I used Workstation here, which has some advantages like cloning and snapshots. For just running the VMs you can use VMware Player.
- Openfiler 2.3, 64-bit ISO file (not the appliance!)
- Oracle Enterprise Linux 5.5
- Oracle software 11.2.0.2 (Linux) from My Oracle Support (not a patch, a full install!). 11.2.0.1 is still available on OTN.
Creating 3 VMs:
Node (VM) | Instance | DB name | Memory | O.S. |
Rac1 | racdb1 | racdb.jobacle.nl | 2.5 GB | OEL 5.5 (x86_64) |
Rac2 | racdb2 | racdb.jobacle.nl | 2.5 GB | OEL 5.5 (x86_64) |
Openfiler1 | – | – | 768 MB | Openfiler 2.3 (64-bit) |
Network :
Node (VM) | Public IP (bridged, eth0) | Private IP (host-only, eth1) | Virtual IP |
Rac1 | 192.168.178.151 | 192.168.188.151 | 192.168.178.251 |
Rac2 | 192.168.178.152 | 192.168.188.152 | 192.168.178.252 |
Openfiler1 | 192.168.178.195 | 192.168.188.195 | – |
Scan:
SCAN name | SCAN IP |
Rac-scan | 192.168.178.187 |
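The addressing plan above ends up in /etc/hosts on both RAC nodes. A sketch, written to a temp file here for illustration; the .jobacle.nl domain and the -vip hostnames are my assumption based on the DB name, and with a single SCAN address in /etc/hosts (instead of 3 in DNS) the 11.2 installer raises a warning you can ignore in a test setup:

```shell
# Hypothetical /etc/hosts entries matching the plan above; written to a
# temp file here for illustration -- copy the lines into /etc/hosts on
# both rac1 and rac2.
cat <<'EOF' > /tmp/hosts.rac
# Public (eth0, bridged)
192.168.178.151   rac1.jobacle.nl      rac1
192.168.178.152   rac2.jobacle.nl      rac2
# Private (eth1, host-only)
192.168.188.151   rac1-priv.jobacle.nl rac1-priv
192.168.188.152   rac2-priv.jobacle.nl rac2-priv
# Virtual IP
192.168.178.251   rac1-vip.jobacle.nl  rac1-vip
192.168.178.252   rac2-vip.jobacle.nl  rac2-vip
# SCAN (one address here instead of DNS round-robin)
192.168.178.187   rac-scan.jobacle.nl  rac-scan
EOF
grep -c '^192\.168\.' /tmp/hosts.rac   # 7 address lines
```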
Oracle Software on local storage (rac1 + rac2):
Software | O.S. user | Primary group | Suppl. groups | Home dir | Oracle base / Oracle home |
Grid infra | grid | oinstall | asmadmin, asmdba, asmoper | /home/grid | /u01/app/grid and /u01/app/11.2.0/grid |
Oracle RAC | oracle | oinstall | dba, oper, asmdba | /home/oracle | /u01/app/oracle and /u01/app/oracle/product/11.2.0/dbhome_1 |
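The user/group plan above translates to commands like these, run as root on both nodes. A sketch: the numeric UIDs/GIDs are my own choice, not from the plan — the only requirement is that they are identical on rac1 and rac2:

```shell
# Groups (same GIDs on both nodes -- the IDs below are an assumption)
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1300 dba
groupadd -g 1301 oper

# Software owners, with the supplementary groups from the table above
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid   -m grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba         -d /home/oracle -m oracle

# Base directories from the table above
mkdir -p /u01/app/grid /u01/app/11.2.0/grid /u01/app/oracle
chown -R grid:oinstall /u01/app
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
```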
Shared storage (total of 80GB):
Storage | Filesystem | Volume size | ASM name | ASM redundancy | Openfiler Volume name |
OCR/Voting disk | ASM | 2GB | +CRS | External | Racdb_crs1 |
DB-files | ASM | 39GB | +RACDB_DATA | External | Racdb_data1 |
Fast Rec.area | ASM | 39GB | +FRA | External | Racdb_fra1 |
2. Create a VM with OpenFiler.
I decided not to use the appliance, but to install Openfiler from the ISO file, and later on add a disk of 80 GB (/dev/sdb) for configuring my ASM-based shared storage. On that disk I’ll create a volume group of 80 GB. You can also choose to create several separate disks and then create one volume group spanning those disks.
Detailed steps:
2.1 Create a VM
2.2 Add a disk of 80 GB
Download Openfiler iso-file (64-bits) from here.
In VMware Workstation, click on menu item VM, ‘New’:
In compatibility screen, click next.
Browse and click your Openfiler – iso-file, ‘next’.
Guest Operating system : Linux. Version , I chose OEL.
VM-name: Openfiler1.
Processor configuration: Default
Memory: 768 MB (I tried 512 MB, but it started swapping).
Next screens: network bridged, and I/O: LSI logic.
First I’m using a disk of 8GB for the installation of Openfiler. Later on I’ll add a disk of 80GB for Oracle datafiles.
Create new virtual disk , in the next screen: ‘Disk type: SCSI’
Disk: 8 GB, split into multiple files. This saves space, at the cost of some performance.
Disk file: default.
Ready? You get the start screen of VM-workstation.
Now we also add a network adapter, host-only (for private networking).
Click ‘edit virtual machine settings’
Select network adapter, click add.
Click next.
Network adapter type: host-only (=vmnet1)
Click on ok twice.
Check if the cd points to the iso-file of Openfiler, then power on this virtual machine. This will lead to:
Press enter
Skip the media test.
Screen: welcome to OpenFiler NSA. Click next.
Screen: keyboard configuration: U.S. International. Click next
Screen: disk partitioning setup: Automatic Partitioning. Click next.
Warning screen: ‘Would you like to initialize…’. Click OK.
Automatic partitioning: remove all partitions, review the partitions created.
Warning screen: erase all data? Click OK.
Screen: disk setup. Click next.
Edit eth0: -> Activate on boot, ip: 192.168.178.195, mask: 255.255.255.0
Edit eth1: -> Activate on boot, ip: 192.168.188.195, mask 255.255.255.0
Set the hostname, insert the gateway.
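For reference, the static settings above end up in a RHEL-style interface file (OEL 5.5 on the RAC nodes uses the same layout; Openfiler’s rPath base is similar). A sketch for Openfiler’s public interface — exact keys may vary per distribution:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.178.195
NETMASK=255.255.255.0
```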
Screen: time zone: pick the appropriate time zone; I left ‘System clock uses UTC’ unchecked.
Screen: set root password.
Screen: about to install. Click next.
Installing…..
Screen: congratulations (I hope..). Click ‘reboot’.
Logging in with ‘root’ and the password you provided.
You get the prompt.
Then log in through the browser: https://192.168.178.195:446 . You may see some warnings about certificates; accept them.
Log in with username ‘openfiler’ and password ‘password’ to see if everything went all right.
Shut down (at the prompt: ‘shutdown -h now’, or through the GUI) to add the disk for the RAC shared storage.
After shutting down:
Edit virtual machine settings
Choose add, Create a new Virtual Disk.
SCSI, Independent, persistent.
80 GB, split into multiple files.
Click finish.
Now we’ve got a standard Openfiler, just like the appliance they offer, but with an extra disk of 80 GB.
3. Configure Openfiler
After installing Openfiler and adding a disk we have to configure:
3.1 Configure iSCSI services
3.2 Configure network access
3.3 Partition the physical storage
3.4 Create a volume group
3.5 Create all logical volumes
3.6 Create new iSCSI targets for each of the logical volumes
3.1 Configure iSCSI services.
Power on VM Openfiler1.
Log in through a browser: https://192.168.178.195:446 (username: ‘openfiler’ /password: ‘password’)
Click tab ‘Services’
Click link ‘Enable’ at row ‘iSCSI target server’
3.2 Configure network access :
Click tab ‘System’, scroll down:
Name | Network/Host | Netmask | Type |
rac1-public | 192.168.178.151 | 255.255.255.0 | Shared |
rac1-priv | 192.168.188.151 | 255.255.255.0 | Shared |
rac2-public | 192.168.178.152 | 255.255.255.0 | Shared |
rac2-priv | 192.168.188.152 | 255.255.255.0 | Shared |
3.3 Partition the physical storage
Click tab ‘Volumes’
Click the link ‘create new physical volumes’.
Click link /dev/sdb
Scroll down and click button ‘Create’.
Now we’ve created a Linux physical volume (partition /dev/sdb1).
Next we will create a volume group. This volume group will be named ‘racdbvg’ and will contain the newly created primary partition of 80 GB.
Click on Volumes – tab, you will get:
Fill in ‘racdbvg’ in volume group name, click on the checkbox of the partition of 80GB and click on ‘Add Volume Group’.
Next step is to create ‘logical volumes’ in this Volume Group (racdbvg):
Remember the planning of the disks, see the last column :
Storage | Filesystem | Volume size | ASM name | ASM redundancy | Openfiler Volume name |
OCR/Voting disk | ASM | 2GB (2208M) | +CRS | External | Racdb_crs1 |
DB-files | ASM | 39GB (39840M) | +RACDB_DATA | External | Racdb_data1 |
Fast Rec.area | ASM | 39GB (39840M) | +FRA | External | Racdb_fra1 |
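For reference, what the GUI does in these steps corresponds roughly to these standard LVM commands on the Openfiler box. A sketch only — the web interface is the supported way to do this on Openfiler:

```shell
# Physical volume on the 80 GB disk's partition, the volume group,
# and the three logical volumes from the plan above
pvcreate /dev/sdb1
vgcreate racdbvg /dev/sdb1
lvcreate -L 2208M  -n racdb_crs1  racdbvg
lvcreate -L 39840M -n racdb_data1 racdbvg
lvcreate -L 39840M -n racdb_fra1  racdbvg
# Verify the result
lvs racdbvg
```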
Navigate to Volume-tab:
Click ‘Add Volume’
Fill in the name of the volume, size and filesystem (iSCSI) for all three volumes.
Result:
These volumes are the iSCSI disks that can be presented to our iSCSI clients (racnode1 and racnode2) on the network.
Unfortunately we are not finished yet. The iSCSI clients still cannot access (‘see’) the drives. We have to create an iSCSI target for each of the three volumes, and map each volume to its iSCSI target (so-called ‘LUN mapping’).
Steps for each of the volumes (three times!):
– Create a unique Target IQN (the name for the new iSCSI target).
– Map one of the created volumes to that Target IQN.
– Grant both Oracle RAC nodes access to the new iSCSI target.
3.6 Create a unique Target IQN
Target IQN | iSCSI Volume Name | Volume description |
iqn.2006-01.com.openfiler:racdb.crs1 | racdb-crs1 | ASM CRS Vol1 |
iqn.2006-01.com.openfiler:racdb.data1 | racdb-data1 | ASM Data Vol1 |
iqn.2006-01.com.openfiler:racdb.fra1 | racdb-fra1 | ASM FRA Vol1 |
Navigate in the Volume-tab to iSCSI-targets:
A name is generated, like this: iqn.2006-01.com.openfiler:tsn.4c0aa6d6d32a .
For better understanding, change this to iqn.2006-01.com.openfiler:racdb.crs1, like this:
Click ‘Add’. You get a page to modify some settings for this target; I didn’t change anything.
Then click on the LUN-mapping (grey sub-tab). Be sure it’s the right iSCSI-target you’re in.
Click on ‘map’ for the specific target.
Now we need to grant the two RAC nodes access to the new iSCSI targets through the storage (private) network.
Click on ‘network ACL’ in the grey sub-tab.
Allow access for rac1-priv and rac2-priv. Click ‘Update’.
Perform this three times.
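As a preview of part II: from rac1 and rac2 (with the iscsi-initiator-utils package installed on OEL), discovery and login against Openfiler’s private address look roughly like this. A sketch only — we don’t run this yet at this point:

```shell
# Discover the three targets on Openfiler's private interface
iscsiadm -m discovery -t sendtargets -p 192.168.188.195

# Log in to one of them (repeat for racdb.data1 and racdb.fra1)
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 \
         -p 192.168.188.195 -l

# The LUNs then show up as local SCSI disks on the node
fdisk -l
```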
Now we are done for the moment. The storage is ready for the RAC nodes. More in part II.
Part II of this series: https://www.jobacle.nl/?p=1040