Wednesday, February 25, 2015

Step-by-step process of building a 2-node 11g R2 (11.2.0.3) RAC on OEL 5.4

This document walks through a real-world installation of a two-node RAC on OEL 5.4, using Openfiler 2.99.1 as the shared storage server.

Linux Servers are configured as follows:

Server Configuration:

Nodes                RAC1                     RAC2                     OPENFILER-STORAGE
Instance Name        racdb1                   racdb2                   -
Database Name        racdb                    racdb                    -
Operating System     OEL 5.4 (x86_64)         OEL 5.4 (x86_64)         Openfiler 2.99.1 (x86_64)
Public IP            192.168.7.151            192.168.7.154            192.168.7.159
                     Subnet: 255.255.255.0    Subnet: 255.255.255.0    Subnet: 255.255.255.0
                     Gateway: 192.168.7.254   Gateway: 192.168.7.254
Private IP           192.168.8.151            192.168.8.154            -
                     Subnet: 255.255.255.0    Subnet: 255.255.255.0
Virtual IP           192.168.7.152            192.168.7.155            -
SCAN Name            racnode-cluster-scan     racnode-cluster-scan     -
SCAN IP              192.168.7.161            192.168.7.161            -


Oracle Software Components

Software Component    OS User   Primary Group   Supplementary Groups        Oracle Base / Oracle Home
Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /u01/app/grid
                                                                            /u01/app/11.2.0/grid
Oracle RAC            oracle    oinstall        dba, oper, asmdba           /u01/app/oracle
                                                                            /u01/app/oracle/product/11.2.0/dbhome_1

Storage Components

Storage Component    File System   Volume Size   ASM Volume Group Name   ASM Redundancy   Openfiler Volume Name
OCR/Voting Disk      ASM           2GB           +CRS                    External         racdb-crs1
Database Files       ASM           20GB          +RACDB_DATA             External         racdb-data1
Fast Recovery Area   ASM           8GB           +FRA                    External         racdb-fra1



Software to be downloaded:
  1. Oracle Enterprise Linux Release 5 Update 4
  2. Oracle Database 11g Release 2 and Grid Infrastructure (11.2.0.3.0)
  3. openfileresa-2.99.1-x86_64-disc1.iso
  4. oracleasmlib-2.0.4-1.el6.x86_64.rpm
A conceptual look at what the environment will look like once all of the hardware components are connected:



Step 1:
Install the Linux Operating System

Perform the following installation on Oracle RAC node1 in the cluster.
For now, install the following package groups:
  • Desktop Environments
      • GNOME Desktop Environment
  • Applications
      • Editors
      • Graphical Internet
      • Text-based Internet
  • Development
      • Development Libraries
      • Development Tools
      • Legacy Software Development
  • Servers
      • Server Configuration Tools
  • Base System
      • Administration Tools
      • Base
      • Java
      • Legacy Software Support
      • System Tools
      • X Window System
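If any of these package groups were missed during the initial OS install, they can usually be added afterwards with yum once the OEL 5.4 media is configured as a repository. A minimal sketch (the group names are assumed to match the installer list above; confirm them first):

# Run as root; confirm the exact group names with: yum grouplist
yum groupinstall "Development Libraries" "Development Tools" "Legacy Software Development"
yum groupinstall "Administration Tools" "System Tools" "Legacy Software Support"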
Step 2:
For Enterprise Linux 5.4 (x86_64)
The following RPMs should be installed on Oracle RAC node1 in the cluster:
rpm -Uvh libaio-devel-0.3.106-3.2.i386.rpm
rpm -Uvh libaio-devel-0.3.106-3.2.x86_64.rpm
rpm -Uvh libstdc++44-devel-4.4.0-6.el5.x86_64.rpm
rpm -Uvh sysstat-7.0.2-3.el5.x86_64.rpm
rpm -Uvh unixODBC-2.2.11-7.1.i386.rpm
rpm -Uvh unixODBC-2.2.11-7.1.x86_64.rpm
rpm -Uvh unixODBC-devel-2.2.11-7.1.i386.rpm
rpm -Uvh unixODBC-devel-2.2.11-7.1.x86_64.rpm
rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm
rpm -Uvh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
rpm -Uvh iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm
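Before continuing, it is worth confirming that every package listed above actually installed; a quick check (the oracleasm kernel package name is assumed to match the running kernel):

[root@racnode1 ~]# rpm -q libaio-devel unixODBC unixODBC-devel sysstat oracleasm-support iscsi-initiator-utils
[root@racnode1 ~]# rpm -q oracleasm-$(uname -r)

Any package reported as "not installed" needs to be revisited before moving on.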
Step 3:
Our example Oracle RAC configuration will use the following network settings:

Update the /etc/hosts file on Oracle RAC node1 in the cluster.
/etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
#::1            localhost6.localdomain6 localhost6
# Public Network - (eth0)

192.168.7.151 rac1.aibl.com  rac1
192.168.7.154 rac2.aibl.com  rac2

#Private Interconnect- (eth1)

192.168.8.151 rac1-priv.aibl.com rac1-priv
192.168.8.154 rac2-priv.aibl.com rac2-priv

# Public Virtual IP (VIP) addresses

192.168.7.152 rac1-vip.aibl.com rac1-vip
192.168.7.155 rac2-vip.aibl.com rac2-vip

#Single Client Access Name (SCAN)

192.168.7.161 rac-scan

# openfiler (eth0)
192.168.7.159 openfiler.aibl.com openfiler
#End of etc/hosts file
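With the hosts file in place, a quick connectivity sanity check can be run from each node. The VIP and SCAN addresses will not respond until Grid Infrastructure is up, so only the public, private, and storage addresses are tested here (swap the names when running from rac2):

[root@rac1 ~]# ping -c 2 rac2           # public network
[root@rac1 ~]# ping -c 2 rac2-priv      # private interconnect
[root@rac1 ~]# ping -c 2 openfiler      # storage server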
Step 4:
Confirm the RAC Node Name is Not Listed in Loopback Address
Ensure that the node names (racnode1 or racnode2) are not included in the loopback address entry of the /etc/hosts file. If the machine name is listed in the loopback entry as below:
127.0.0.1 racnode1 localhost.localdomain localhost
it will need to be removed as shown below:
127.0.0.1 localhost.localdomain localhost
If the RAC node name is listed for the loopback address, you will receive one of the following errors during the RAC installation:
ORA-00603: ORACLE server session terminated by fatal error
or
ORA-29702: error occurred in Cluster Group Service operation
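A simple way to confirm the loopback entry is clean on both nodes:

[root@racnode1 ~]# grep "^127.0.0.1" /etc/hosts
127.0.0.1       localhost.localdomain localhost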

Step 5:
Check that the firewall is turned off. If the firewall is already stopped (as in the example below), you do not need to perform the remaining steps.
[root@racnode1 ~]# /etc/rc.d/init.d/iptables status
Firewall is stopped.

If the firewall is running, you will first need to manually disable UDP ICMP rejections:
[root@racnode1 ~]# /etc/rc.d/init.d/iptables stop
Flushing firewall rules:                                   [  OK  ]
Setting chains to policy ACCEPT: filter                    [  OK  ]
Unloading iptables modules:                                [  OK  ]

Then, to keep UDP ICMP rejections turned off after the next server reboot (it should always remain off):
[root@racnode1 ~]# chkconfig iptables off

Set secure Linux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=permissive
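Editing /etc/selinux/config only takes effect at the next boot. To switch the running system to permissive mode immediately and confirm it:

[root@racnode1 ~]# setenforce 0
[root@racnode1 ~]# getenforce
Permissive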

Step 6:
Configure Cluster Time Synchronization Service - (CTSS)
If you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then de-configure and de-install the Network Time Protocol (NTP).
To deactivate the NTP service, you must stop the existing ntpd service, disable it from the initialization sequences and remove the ntp.conf file. To complete these steps on Oracle Enterprise Linux, run the following commands as the root user on both Oracle RAC nodes:
[root@racnode1 ~]#  /sbin/service ntpd stop
 
[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]#  mv /etc/ntp.conf /etc/ntp.conf.original
 
Also remove the following file:
[root@racnode1 ~]#  rm /var/run/ntpd.pid
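To double-check that NTP is stopped and will not start again at boot on either node:

[root@racnode1 ~]# chkconfig --list ntpd
ntpd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@racnode1 ~]# service ntpd status
ntpd is stopped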
 
Create Job Role Separation Operating System Privileges Groups, Users, and Directories
Step 7:
Create Groups and User for Grid Infrastructure
Let's start this section by creating the recommended OS groups and user for Grid Infrastructure on both Oracle RAC nodes:
[root@racnode1 ~]# groupadd -g 1000 oinstall
[root@racnode1 ~]# groupadd -g 1200 asmadmin
[root@racnode1 ~]# groupadd -g 1201 asmdba
[root@racnode1 ~]# groupadd -g 1202 asmoper
[root@racnode1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
 
Set the password for the grid account:
[root@racnode1 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.
 
Create Login Script for the grid User Account
Log in to both Oracle RAC nodes as the grid user account and create the following login script ( .bash_profile):
Note: When setting the Oracle environment variables for each Oracle RAC node, make certain to assign each RAC node a unique Oracle SID. For this example, I used:
  • racnode1 : ORACLE_SID=+ASM1
  • racnode2 : ORACLE_SID=+ASM2
[root@racnode1 ~]# su - grid
[grid@racnode1 ~]$ vi /home/grid/.bash_profile
 
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"

ORACLE_SID=+ASM1; export ORACLE_SID

JAVA_HOME=/usr/local/java; export JAVA_HOME

ORACLE_BASE=/u01/app/grid; export ORACLE_BASE

ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH

ORACLE_TERM=xterm; export ORACLE_TERM

NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

export TEMP=/tmp
export TMPDIR=/tmp

umask 022
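After saving the file, source it and spot-check a couple of the variables (remember that ORACLE_SID must be set to +ASM2 in the profile on racnode2):

[grid@racnode1 ~]$ . ~/.bash_profile
[grid@racnode1 ~]$ echo $ORACLE_SID $ORACLE_HOME
+ASM1 /u01/app/11.2.0/grid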

 
Step 8:
Create Groups and User for Oracle Database Software
Next, create the recommended OS groups and user for the Oracle database software on both Oracle RAC nodes:
[root@racnode1 ~]# groupadd -g 1300 dba
[root@racnode1 ~]# groupadd -g 1301 oper
[root@racnode1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
 
Set the password for the oracle account:
[root@racnode1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.
 
Create Login Script for the oracle User Account
Log in to both Oracle RAC nodes as the oracle user account and create the following login script ( .bash_profile):
Note: When setting the Oracle environment variables for each Oracle RAC node, make certain to assign each RAC node a unique Oracle SID. For this example, I used:
  • racnode1 : ORACLE_SID=racdb1
  • racnode2 : ORACLE_SID=racdb2
[root@racnode1 ~]# su - oracle
[oracle@racnode1 ~]$ vi /home/oracle/.bash_profile

if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
 
alias ls="ls -FA"
 
ORACLE_SID=racdb1; export ORACLE_SID
 
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME
 
JAVA_HOME=/usr/local/java; export JAVA_HOME
 
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
 
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
 
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
 
ORACLE_TERM=xterm; export ORACLE_TERM
 
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
 
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
 
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
 
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
 
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
 
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
 
THREADS_FLAG=native; export THREADS_FLAG
 
export TEMP=/tmp
export TMPDIR=/tmp
 
umask 022



Step 9:
Verify That the User nobody Exists
Before installing the software, complete the following procedure to verify that the user nobody exists on both Oracle RAC nodes:
  1. To determine if the user exists, enter the following command:

     # id nobody
     uid=99(nobody) gid=99(nobody) groups=99(nobody)

     If this command displays information about the nobody user, then you do not have to create that user.

  2. If the user nobody does not exist, then enter the following command to create it:

     # /usr/sbin/useradd nobody

  3. Repeat this procedure on all the other Oracle RAC nodes in the cluster.
Step 10:
Create the Oracle Base Directory Path
[root@racnode1 ~]#  mkdir -p /u01/app/grid
[root@racnode1 ~]#  mkdir -p /u01/app/11.2.0/grid
[root@racnode1 ~]#  chown -R grid:oinstall /u01
[root@racnode1 ~]#  mkdir -p /u01/app/oracle
[root@racnode1 ~]#  chown oracle:oinstall /u01/app/oracle
[root@racnode1 ~]#  chmod -R 775 /u01
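A quick check that the ownership and permissions came out as intended (everything under /u01 should be owned by grid:oinstall with mode 775, except /u01/app/oracle which should be owned by oracle:oinstall):

[root@racnode1 ~]# ls -ld /u01 /u01/app/grid /u01/app/11.2.0/grid /u01/app/oracle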
 
 Step 11:
On each Oracle RAC node, add the following lines to the /etc/security/limits.conf file (the following example shows the software account owners oracle and grid):    
 
[root@racnode1 ~]#  vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Step 12:
On each Oracle RAC node, add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
[root@racnode1 ~]#   vi  /etc/pam.d/login
session    required     pam_limits.so
Step 13:
Depending on your shell environment, make the following changes to the default shell startup file to change the ulimit settings for all Oracle installation owners (note that these examples show the users oracle and grid):
For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by running the following command:
[root@racnode1 ~]#  vi  /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
Step 14:
Add or amend the following lines in the "/etc/sysctl.conf" file.
 
[root@racnode1 ~]# vi /etc/sysctl.conf
 
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
 
net.ipv4.ip_forward = 0
 
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
 
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
 
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
 
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
 
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
 
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
 
# Controls the default maximum size of a message queue
kernel.msgmax = 65536
 
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
 
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
 
#kernel.shmmax = 4294967295
#kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576

To apply the changes to the running system, run the following command (the settings are persistent across reboots because they are stored in /etc/sysctl.conf):
[root@racnode1 ~]# sysctl -p
 
Step 15:
Preventing Installation Errors Caused by stty Commands
During an Oracle grid infrastructure or Oracle RAC software installation, OUI uses SSH to run commands and copy files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause makefile and other installation errors if they contain stty commands.
To avoid this problem, you must modify these files in each Oracle installation owner user home directory to suppress all output on STDERR, as in the following examples:
 
[oracle@rac1 ~]$ cat /home/oracle/.bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
# User specific aliases and functions
 
if [ -t 0 ]; then
 
stty intr ^C
 
fi
 
Step 16:
 
Install ASMLib 2.0 Packages and cvuqdisk Package for Linux
 
[root@racnode1 rpm]#  rpm -Uvh oracleasmlib-2.0.4-1.el6.x86_64.rpm
The cvuqdisk RPM can be found on the Oracle grid infrastructure installation media in the rpm directory. For the purpose of this article, the Oracle grid infrastructure media was extracted to the /home/grid/software/oracle/grid directory on racnode1 as the grid user.
To install the cvuqdisk RPM, complete the following procedures:
In the directory where you have saved the cvuqdisk RPM, use the following command to install the cvuqdisk package on both Oracle RAC nodes:

[root@racnode1 ~]# cd /home/grid/software/oracle/grid/rpm
[root@racnode1 rpm]#  rpm -Uvh cvuqdisk-1.0.9-1.rpm
 [root@racnode2 rpm]#  rpm -Uvh cvuqdisk-1.0.9-1.rpm
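Confirm the package landed on both nodes:

[root@racnode1 ~]# rpm -q cvuqdisk
cvuqdisk-1.0.9-1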

 
Step 17: Configure ASMLib
The oracleasm command by default is in the path /usr/sbin. The /etc/init.d path, which was used in previous releases, is not deprecated, but the oracleasm binary in that path is now used typically for internal commands. If you enter the command oracleasm configure without the -i flag, then you are shown the current configuration. For example,
[root@racnode1 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""

Enter the following command to run the oracleasm initialization script with the configure option:
[root@racnode1 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done


Enter the following command to load the oracleasm kernel module:

[root@racnode1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
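The driver state can be confirmed at any time with the status sub-command; output similar to the following indicates the kernel module is loaded and the ASMLib filesystem is mounted:

[root@racnode1 ~]# /usr/sbin/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes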








Step by step Openfiler installation and configuration

 
Perform the following installation on the network storage server (192.168.7.159).
Step 1: 
Configure iSCSI Volumes using Openfiler
Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool accessed over an https connection on port 446. For example:
https://openfiler.aibl.com:446/   (or https://192.168.7.159:446/)
From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:
  • Username: openfiler
  • Password: password
Step 2:
To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to ' Enabled '.
The ietd program implements the user level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:
[root@openfiler1 ~]# service iscsi-target status
ietd (pid 14243) is running...
 
 
Step 3:                      
Network Access Configuration
Click on the [System] / [Network Setup] link. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance.
When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.


Step 4:
 
To start the process of creating our iSCSI volumes, navigate to [Volumes] / [Block Devices] from the Openfiler Storage Control Center:
Partitioning the Physical Disk
The first step we will perform is to create a single primary partition on the /dev/sdb internal hard disk. By clicking on the /dev/sdb link, we are presented with the options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left to their default setting where the only modification would be to change the ' Partition Type' from 'Extended partition' to ' Physical volume'. Here are the values I specified to create the primary partition on /dev/sdb:
Mode: Primary
Partition Type: Physical volume
Starting Cylinder: 1 ( enter the starting cylinder given +60)
Ending Cylinder: 8924

To accept that, click on the "Create" button. This results in a new partition ( /dev/sdb1) on our internal hard disk:
Step 5:
Volume Group Management
The next step is to create a Volume Group. We will be creating a single volume group named racdbvg that contains the newly created primary partition.
From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Groups]. There we would see any existing volume groups, or none as in our case. Using the Volume Group Management screen, enter the name of the new volume group ( racdbvg), click on the checkbox in front of /dev/sdb1 to select that partition, and finally click on the 'Add volume group' button. After that we are presented with the list that now shows our newly created volume group named " racdbvg":
Step 6:
Logical Volumes
We can now create the three logical volumes in the newly created volume group ( racdbvg).
From the Openfiler Storage Control Center, navigate to [Volumes] / [Add Volume]. There we will see the newly created volume group ( racdbvg) along with its block storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group - (Create a volume in "racdbvg"). Use this screen to create the following three logical (iSCSI) volumes. After creating each logical volume, the application will point you to the "Manage Volumes" screen. You will then need to click back to the "Add Volume" tab to create the next logical volume until all three iSCSI volumes are created:
iSCSI / Logical Volumes

Volume Name   Volume Description          Required Space (MB)   Filesystem Type
racdb-crs1    racdb - ASM CRS Volume 1    2,208                 iSCSI
racdb-data1   racdb - ASM Data Volume 1   33,888                iSCSI
racdb-fra1    racdb - ASM FRA Volume 1    33,888                iSCSI
Step 7:
Create New Target IQN
From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Verify the grey sub-tab "Target Configuration" is selected. This page allows you to create a new iSCSI target. A default value is automatically generated for the name of the new iSCSI target (better known as the "Target IQN"). An example Target IQN is " iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":
I prefer to replace the last segment of the default Target IQN with something more meaningful. For the first iSCSI target (Oracle Clusterware / racdb-crs1), I will modify the default Target IQN by replacing the string " tsn.ae4683b67fd3" with " racdb.crs1".
Once you are satisfied with the new Target IQN, click the "Add" button. This will create a new iSCSI target and then bring up a page that allows you to modify a number of settings for the new iSCSI target. For the purpose of this article, none of the settings for the new iSCSI target need to be changed.
LUN Mapping
Next, click on the grey sub-tab named "LUN Mapping" (next to "Target Configuration" sub-tab). Locate the appropriate iSCSI logical volume ( /dev/racdbvg/racdb-crs1 in this case) and click the "Map" button. You do not need to change any settings on this page.
Network ACL
Click on the grey sub-tab named "Network ACL" (next to "LUN Mapping" sub-tab). For the current iSCSI target, change the "Access" for both hosts from 'Deny' to 'Allow' and click the 'Update' button:
Go back to the Create New Target IQN section and perform these three tasks for the remaining two iSCSI logical volumes while substituting the values found in the " iSCSI Target / Logical Volume Mappings" table .
Step 8:
Configure iSCSI Volumes on Oracle RAC Nodes
Configure the iSCSI (initiator) service
After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the iscsid service and enable it to automatically start when the system boots. We will also configure the iscsi service to start automatically, which logs in to the iSCSI targets needed at system startup.
[root@racnode1 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
 
[root@racnode1 ~]#  chkconfig iscsid on
[root@racnode1 ~]#  chkconfig iscsi on
                      
Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed on both Oracle RAC nodes to verify the configuration is functioning properly:
[root@racnode1 ~]#  iscsiadm -m discovery -t sendtargets -p 192.168.7.159
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1
                      
Step 9:
Manually Log In to iSCSI Targets
At this point the iSCSI initiator service has been started and each of the Oracle RAC nodes were able to discover the available targets from the network storage server. The next step is to manually log in to each of the available targets which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes.
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.7.159 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.7.159 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.7.159 -l

Step 10:
Configure Automatic Log In
The next step is to ensure the client will automatically log in to each of the targets listed above when the machine is booted (or the iSCSI initiator service is started/restarted). As with the manual log in process described above, perform the following on both Oracle RAC nodes:
[root@racnode1 ~]#  iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.7.159 --op update -n node.startup -v automatic
[root@racnode1 ~]#  iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.7.159 --op update -n node.startup -v automatic
[root@racnode1 ~]#  iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.7.159 --op update -n node.startup -v automatic
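To confirm the startup mode was actually updated, the node record for each target can be displayed and filtered (shown here for the crs1 target; repeat for data1 and fra1):

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.7.159 | grep node.startup
node.startup = automatic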
                      
Step 11:
Create Persistent Local SCSI Device Names
We will now go through the steps to create persistent local SCSI device names for each of the iSCSI target names. This will be done using udev.
When either of the Oracle RAC nodes boot and the iSCSI initiator service is started, it will automatically log in to each of the targets configured in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:racdb.crs1 may get mapped to /dev/sdb. I can actually determine the current mappings for all targets by looking at the /dev/disk/by-path directory:
[root@racnode1 ~]#  (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc
                      
Using the output from the above listing, we can establish the following current mappings:
Current iSCSI Target Name to Local SCSI Device Name Mappings

iSCSI Target Name                       SCSI Device Name
iqn.2006-01.com.openfiler:racdb.crs1    /dev/sdb
iqn.2006-01.com.openfiler:racdb.data1   /dev/sdd
iqn.2006-01.com.openfiler:racdb.fra1    /dev/sdc
This mapping, however, may change every time the Oracle RAC node is rebooted. For example, after a reboot it may be determined that the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1 gets mapped to the local SCSI device /dev/sdc. It is therefore impractical to rely on using the local SCSI device name given there is no way to predict the iSCSI target mappings after a reboot.
What we need is a consistent device name we can reference (i.e. /dev/iscsi/crs1) that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in.

The first step is to create a new rules file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs used to receive events we are interested in. It will also define a call-out SHELL script ( /etc/udev/scripts/iscsidev.sh) to handle the event.
Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:
..............................................
# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"
..............................................
We now need to create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on both Oracle RAC nodes where udev scripts can be stored:
[root@racnode1 ~]# mkdir -p /etc/udev/scripts
                     
Next, create the UNIX shell script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:
..............................................
#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
   exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"
..............................................
After creating the UNIX SHELL script, change it to executable:
[root@racnode1 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
                     
Now that udev is configured, restart the iSCSI service on both Oracle RAC nodes:
[root@racnode1 ~]# service iscsi stop
Logging out of session [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 7, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging out of session [sid: 8, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logout of [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 7, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 8, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode1 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]
Let's see if our hard work paid off:
[root@racnode1 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdc

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sde

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdd
The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/crs1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. We now have a consistent iSCSI target name to local device name mapping, described in the following table:
iSCSI Target Name to Local Device Name Mappings

iSCSI Target Name                       Local Device Name
iqn.2006-01.com.openfiler:racdb.crs1    /dev/iscsi/crs1/part
iqn.2006-01.com.openfiler:racdb.data1   /dev/iscsi/data1/part
iqn.2006-01.com.openfiler:racdb.fra1    /dev/iscsi/fra1/part

Step 12:
Create Partitions on iSCSI Volumes
We now need to create a single primary partition on each of the iSCSI volumes that spans the entire size of the volume.
The following table lists the three ASM disk groups that will be created and which iSCSI volume they will contain:
Oracle Shared Drive Configuration

File Types                  ASM Diskgroup Name   iSCSI Target (short) Name   ASM Redundancy   Size   ASMLib Volume Name
OCR and Voting Disk         +CRS                 crs1                        External         2GB    ORCL:CRSVOL1
Oracle Database Files       +RACDB_DATA          data1                       External         32GB   ORCL:DATAVOL1
Oracle Fast Recovery Area   +FRA                 fra1                        External         32GB   ORCL:FRAVOL1

As shown in the table above, we will need to create a single Linux primary partition on each of the three iSCSI volumes. The fdisk command is used in Linux for creating (and removing) partitions. For each of the three iSCSI volumes, you can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).
In this example, I will be running the fdisk command from racnode1 to create a single primary partition on each iSCSI target using the local device names created by udev:
  • /dev/iscsi/crs1/part
  • /dev/iscsi/data1/part
  • /dev/iscsi/fra1/part
Note: Creating the single partition on each of the iSCSI volumes must only be run from one of the nodes in the Oracle RAC cluster! (i.e. racnode1)
# ---------------------------------------
[root@racnode1 ~]#  fdisk /dev/iscsi/crs1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1012, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1012, default 1012): 1012

Command (m for help): p

Disk /dev/iscsi/crs1/part: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

               Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/crs1/part1               1        1012     2258753   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
 
# ---------------------------------------
 
[root@racnode1 ~]#  fdisk /dev/iscsi/data1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/data1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/data1/part1               1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
 
 
# ---------------------------------------
 
[root@racnode1 ~]#  fdisk /dev/iscsi/fra1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/fra1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

               Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/fra1/part1               1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
   
Step 13:
Verify New Partitions
After creating all required partitions from racnode1, you should now inform the kernel of the partition changes using the following command as the "root" user account from all remaining nodes in the Oracle RAC cluster (racnode2). Note that the mapping of iSCSI target names to local SCSI device names may be different on each Oracle RAC node. This is not a concern and will not cause any problems since we will not be using the local SCSI device names.
From racnode2, run the following commands:
[root@racnode2 ~]#  partprobe
 
[root@racnode2 ~]#  fdisk -l
 
Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       19452   156143767+  8e  Linux LVM
 
Disk /dev/sdb: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       33888    34701296   83  Linux
 
Disk /dev/sdc: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       33888    34701296   83  Linux
 
Disk /dev/sdd: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1012     2258753   83  Linux
                      
As a final step you should run the following command on both Oracle RAC nodes to verify that udev created the new symbolic links for each new partition:
[root@racnode2 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sdb1
                      
The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for ASMlib. 
  • /dev/iscsi/crs1/part1
  • /dev/iscsi/data1/part1
  • /dev/iscsi/fra1/part1
Step 14:
Create ASM Disks for Oracle
Creating the ASM disks only needs to be performed from one node in the RAC cluster as the root user account. I will be running these commands on racnode1. On the other Oracle RAC node(s), you will need to perform a scandisk to recognize the new volumes. When that is complete, you should then run the oracleasm listdisks command on both Oracle RAC nodes to verify that all ASM disks were created and available.
To create the ASM disks using the iSCSI target names to local device name mappings, type the following:
[root@racnode1 ~]#  /usr/sbin/oracleasm createdisk CRSVOL1 /dev/iscsi/crs1/part1
Writing disk header: done
Instantiating disk: done
 
[root@racnode1 ~]#   /usr/sbin/oracleasm createdisk DATAVOL1 /dev/iscsi/data1/part1
Writing disk header: done
Instantiating disk: done
 
[root@racnode1 ~]#   /usr/sbin/oracleasm createdisk FRAVOL1 /dev/iscsi/fra1/part1
Writing disk header: done
Instantiating disk: done
                              
To make the disks available on the other nodes in the cluster (racnode2), enter the following command as root on each node:
[root@racnode2 ~]#   /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "FRAVOL1"
Instantiating disk "DATAVOL1"
Instantiating disk "CRSVOL1"
                              
We can now test that the ASM disks were successfully created by using the following command on both nodes in the RAC cluster as the root user account. This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks:
[root@racnode1 ~]#  /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
 
[root@racnode2 ~]#   /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
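If a disk appears to be missing on either node, oracleasm can also be queried for an individual label or device. A quick sketch using the labels and device names from this guide:

[root@racnode1 ~]# /usr/sbin/oracleasm querydisk CRSVOL1
[root@racnode1 ~]# /usr/sbin/oracleasm querydisk /dev/iscsi/crs1/part1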
                              
Step 15:
Install Oracle Grid Infrastructure for a Cluster
Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (racnode1).
The Oracle grid infrastructure software (Oracle Clusterware and Automatic Storage Management) will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer.
Typical and Advanced Installation
Select the Advanced type

The following lists each OUI screen name followed by the response used for this install.
Select Installation Option
Select " Install and Configure Grid Infrastructure for a Cluster"
Select Installation Type
Select " Advanced Installation"
Select Product Languages
Make the appropriate selection(s) for your environment.
Grid Plug and Play Information
Instructions on how to configure Grid Naming Service (GNS) are beyond the scope of this article. Un-check the option to "Configure GNS".
   Cluster Name: racnode-cluster
   SCAN Name:    racnode-cluster-scan
   SCAN Port:    1521
After clicking [Next], the OUI will attempt to validate the SCAN information.
Cluster Node Information
Use this screen to add the node racnode2 to the cluster and to configure SSH connectivity.
Click the "Add" button to add " racnode2" and its virtual IP address " racnode2-vip" according to the table below:
   Public Node Name   Virtual Host Name
   racnode1           racnode1-vip
   racnode2           racnode2-vip

Next, click the [SSH Connectivity] button. Enter the "OS Password" for the grid user and click the [Setup] button. This will start the "SSH Connectivity" configuration process:
After the SSH configuration process successfully completes, acknowledge the dialog box.
Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.
Specify Network Interface Usage
Identify the network interface to be used for the "Public" and "Private" network. Make any changes necessary to match the values in the table below:
   Interface Name   Subnet        Interface Type
   eth0             192.168.7.0   Public
   eth1             192.168.8.0   Private
Storage Option Information
Select " Automatic Storage Management (ASM)".
Create ASM Disk Group
Create an ASM Disk Group that will be used to store the Oracle Clusterware files according to the values in the table below:
   Disk Group Name: CRS
   Redundancy:      External
   Disk Path:       ORCL:CRSVOL1
Specify ASM Password
For the purpose of this article, I choose to " Use same passwords for these accounts".
Failure Isolation Support
Configuring Intelligent Platform Management Interface (IPMI) is beyond the scope of this article. Select " Do not use Intelligent Platform Management Interface (IPMI)".
Privileged Operating System Groups
This article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles using a Job Role Separation configuration.
Make any changes necessary to match the values in the table below:
   OSDBA for ASM:  asmdba
   OSOPER for ASM: asmoper
   OSASM:          asmadmin
Specify Installation Location
Set the "Oracle Base" ( $ORACLE_BASE) and "Software Location" ( $ORACLE_HOME) for the Oracle grid infrastructure installation:
   Oracle Base: /u01/app/grid
   Software Location: /u01/app/11.2.0/grid
Create Inventory
Since this is the first install on the host, you will need to create the Oracle Inventory. Use the default values provided by the OUI:
   Inventory Directory: /u01/app/oraInventory
   oraInventory Group Name: oinstall
Prerequisite Checks
The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Clusterware and Automatic Storage Management software.
Starting with Oracle Clusterware 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.
The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.
If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.
Summary
Click [Finish] to start the installation.
Setup
The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.
Execute Configuration scripts
After the installation completes, you will be prompted to run the /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster, (starting with the node you are performing the install from), as the root user account.
Run the orainstRoot.sh script on both nodes in the RAC cluster:
[root@racnode1 ~]#   /u01/app/oraInventory/orainstRoot.sh
 
[root@racnode2 ~]#   /u01/app/oraInventory/orainstRoot.sh
                                      

Within the same new console window on both Oracle RAC nodes in the cluster, (starting with the node you are performing the install from), stay logged in as the root user account. Run the root.sh script on both nodes in the RAC cluster one at a time starting with the node you are performing the install from:
[root@racnode1 ~]#  /u01/app/11.2.0/grid/root.sh
 
[root@racnode2 ~]#   /u01/app/11.2.0/grid/root.sh
                                      
The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following which signifies a successful install:
 
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.
Configure Oracle Grid Infrastructure for a Cluster
The installer will run configuration assistants for Oracle Net Services (NETCA), Automatic Storage Management (ASMCA), and Oracle Private Interconnect (VIPCA). The final step performed by OUI is to run the Cluster Verification Utility (CVU). If the configuration assistants and CVU run successfully, you can exit OUI by clicking [Next] and then [Close].
As described earlier in this section, if you configured SCAN "only" in your hosts file ( /etc/hosts) and not in either Grid Naming Service (GNS) or manually using DNS, this is considered an invalid configuration and will cause the Cluster Verification Utility to fail.
Provided this is the only error reported by the CVU, it would be safe to ignore this check and continue by clicking [Next] and then the [Close] button to exit the OUI.
If on the other hand you want the CVU to complete successfully while still only defining the SCAN in the hosts file, do not click the [Next] button in OUI to bypass the error. Instead, follow the instructions in the section Configuring SCAN without DNS to modify the nslookup utility. After completing the steps documented in that section, return to the OUI and click the [Retry] button. The CVU should now finish with no errors. Click [Next] and then [Close] to exit the OUI.
Finish
At the end of the installation, click the [Close] button to exit the OUI.

Postinstallation Tasks for Oracle Grid Infrastructure for a Cluster
Perform the following postinstallation procedures on both Oracle RAC nodes in the cluster.
Verify Oracle Clusterware Installation
After the installation of Oracle grid infrastructure, you should run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster as the grid user.
Check CRS Status
[grid@racnode1 ~]$  crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
                               
Check Clusterware Resources
Note: The crs_stat command is deprecated in Oracle Clusterware 11g release 2 (11.2).
[grid@racnode1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host       
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1   
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1   
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1   
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1   
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    racnode1   
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE              
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1   
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE              
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1   
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    racnode1   
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode1   
ora....de1.gsd application    0/5    0/0    OFFLINE   OFFLINE              
ora....de1.ons application    0/3    0/0    ONLINE    ONLINE    racnode1   
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode1   
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    racnode2   
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode2   
ora....de2.gsd application    0/5    0/0    OFFLINE   OFFLINE              
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    racnode2   
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode2   
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    racnode1   
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1
                               
Check Cluster Nodes
[grid@racnode1 ~]$  olsnodes -n
racnode1        1
racnode2        2
                       
Check Oracle TNS Listener Process on Both Nodes
[grid@racnode1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

[grid@racnode2 ~]$  ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER

Confirming Oracle ASM Function for Oracle Clusterware Files
If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:
[grid@racnode1 ~]$ srvctl status asm -a
ASM is running on racnode1,racnode2
ASM is enabled.
                              
Check Oracle Cluster Registry (OCR)
[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2404
         Available space (kbytes) :     259716
         ID                       : 1259866904
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user
                               
Check Voting Disk
[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).
                              

Step 16:
Create ASM Disk Groups for Data and Fast Recovery Area
Run the ASM Configuration Assistant (asmca) as the grid user from only one node in the cluster (racnode1) to create the additional ASM disk groups which will be used to create the clustered database.
During the installation of Oracle grid infrastructure, we configured one ASM disk group named +CRS which was used to store the Oracle clusterware files (OCR and voting disk).
In this section, we will create two additional ASM disk groups using the ASM Configuration Assistant ( asmca). These new ASM disk groups will be used when creating the clustered database.
The first ASM disk group will be named +RACDB_DATA and will be used to store all Oracle physical database files (data, online redo logs, control files, archived redo logs). A second ASM disk group will be created for the Fast Recovery Area named +FRA.
Create Additional ASM Disk Groups using ASMCA
Perform the following tasks as the grid user to create two additional ASM disk groups:
[grid@racnode1 ~]$ asmca

The following lists each ASMCA screen followed by the response used to create the disk groups.
Disk Groups
From the "Disk Groups" tab, click the " Create" button.
Create Disk Group
The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide.
If the ASMLib volumes we created earlier in this article do not show up in the "Select Member Disks" window as eligible ( ORCL:DATAVOL1 and ORCL:FRAVOL1) then click on the "Change Disk Discovery Path" button and input " ORCL:*".
When creating the "Data" ASM disk group, use " RACDB_DATA" for the "Disk Group Name". In the "Redundancy" section, choose " External (none)". Finally, check the ASMLib volume " ORCL:DATAVOL1" in the "Select Member Disks" section.
After verifying all values in this dialog are correct, click the " [OK]" button.
Disk Groups
After creating the first ASM disk group, you will be returned to the initial dialog. Click the " Create" button again to create the second ASM disk group.
Create Disk Group
The "Create Disk Group" dialog should now show the final remaining ASMLib volume.
When creating the "Fast Recovery Area" disk group, use " FRA" for the "Disk Group Name". In the "Redundancy" section, choose " External (none)". Finally, check the ASMLib volume " ORCL:FRAVOL1" in the "Select Member Disks" section.
After verifying all values in this dialog are correct, click the " [OK]" button.
Disk Groups
Exit the ASM Configuration Assistant by clicking the [Exit] button.

Step 17:
Install Oracle Database 11g with Oracle Real Application Clusters
Perform the Oracle Database software installation from only one of the Oracle RAC nodes in the cluster (racnode1)! The Oracle Universal Installer will install the Oracle Database software to both Oracle RAC nodes in the cluster using SSH.
Now that the grid infrastructure software is functional, you can install the Oracle Database software on one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to all other nodes in the cluster during the installation process.
For the purpose of this guide, we will forgo the "Create Database" option when installing the Oracle Database software. The clustered database will be created using the Database Configuration Assistant (DBCA) after all installs have been completed.
[oracle@racnode1 ~]$  cd /home/oracle/software/oracle/database
[oracle@racnode1 database]$  ./runInstaller
                             
 Screen Name
Response
Configure Security Updates
For the purpose of this article, un-check the security updates checkbox and click the [Next] button to continue. Acknowledge the warning dialog indicating you have not provided an email address by clicking the [Yes] button.
Installation Option
Select " Install database software only".
Grid Options
Select the " Real Application Clusters database installation" radio button (default) and verify that both Oracle RAC nodes are checked in the "Node Name" window.
Next, click the [SSH Connectivity] button. Enter the "OS Password" for the oracle user and click the [Setup] button. This will start the "SSH Connectivity" configuration process:
After the SSH configuration process successfully completes, acknowledge the dialog box.
Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.
Product Languages
Make the appropriate selection(s) for your environment.
Database Edition
Select " Enterprise Edition".
Installation Location
Specify the Oracle base and Software location (Oracle_home) as follows:
   Oracle Base:
/u01/app/oracle
   Software Location:
/u01/app/oracle/product/11.2.0/dbhome_1
Operating System Groups
Select the OS groups to be used for the SYSDBA and SYSOPER privileges:
   Database Administrator (OSDBA) Group:
dba
   Database Operator (OSOPER) Group:
oper
Prerequisite Checks
The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database software.
Starting with 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.
The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.
If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.
Summary
Click [Finish] to start the installation.
Install Product
The installer performs the Oracle Database software installation process on both Oracle RAC nodes.
Execute Configuration scripts
After the installation completes, you will be prompted to run the /u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on both Oracle RAC nodes. Open a new console window on each Oracle RAC node in the cluster (starting with the node you are performing the install from) as the root user account.
Run the root.sh script on all nodes in the RAC cluster:
[root@racnode1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

[root@racnode2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.
Finish
At the end of the installation, click the [Close] button to exit the OUI.
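
As a quick optional sanity check that OUI pushed the database software to both nodes, verify that the oracle executable exists under the new Oracle home on each node; a simple sketch:

[oracle@racnode1 ~]$ ls -l /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle
[oracle@racnode1 ~]$ ssh racnode2 ls -l /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle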

Step 18:
Create the Clustered Database
To start the database creation process, run the following as the oracle user:
[oracle@racnode1 ~]$ dbca
                             
 Screen Name
Response
Screen Shot
Welcome Screen
Select Oracle Real Application Clusters database.
Operations
Select Create a Database.
Database Templates
Select Custom Database.
Database Identification
Cluster database configuration.
   Configuration Type: Admin-Managed
Database naming.
   Global Database Name: racdb.idevelopment.info
   SID Prefix: racdb
Note: I used idevelopment.info for the database domain. You may use any database domain. Keep in mind that this domain does not have to be a valid DNS domain.
Node Selection.
Click the [Select All] button to select all servers: racnode1 and racnode2.
Management Options
Leave the default options here, which are to Configure Enterprise Manager and Configure Database Control for local management.
Database Credentials
I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure it does not start with a digit.
Database File Locations
Specify storage type and locations for database files.
   Storage Type: Automatic Storage Management (ASM)
   Storage Locations: Use Oracle-Managed Files
     Database Area: +RACDB_DATA
Specify ASMSNMP Password
Specify the ASMSNMP password for the ASM instance.
Recovery Configuration
Check the option for Specify Fast Recovery Area.
For the Fast Recovery Area, click the [Browse] button and select the disk group name +FRA.
My disk group has a size of about 33GB. When defining the Fast Recovery Area size, use the entire volume minus roughly 10% for overhead (33 GB less 10% is about 30 GB). I used a Fast Recovery Area Size of 30 GB (30413 MB).
Database Content
I left all of the Database Components (and destination tablespaces) set to their default values, although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.
Initialization Parameters
Change any parameters for your environment. I left them all at their default settings.
Database Storage
Change any parameters for your environment. I left them all at their default settings.
Creation Options
Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start.
Click OK on the "Summary" screen.
End of Database Creation
At the end of the database creation, exit from the DBCA.

When the DBCA has completed, you will have a fully functional Oracle RAC cluster running!
Verify Clustered Database is Open
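One quick way to confirm this from SQL*Plus is to query gv$instance and check that both instances report OPEN; a minimal sketch (the fuller set of verification checks follows in Steps 19 and 20):

[oracle@racnode1 ~]$ sqlplus / as sysdba

SQL> SELECT instance_name, status FROM gv$instance;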
Step 19:
Verify Oracle Grid Infrastructure and Database Configuration
The following Oracle Clusterware and Oracle RAC verification checks can be performed on any of the Oracle RAC nodes in the cluster. For the purpose of this article, I will only be performing checks from racnode1 as the oracle OS user.
Most of the checks described in this section use the Server Control Utility (SRVCTL) and can be run as either the oracle or grid OS user. There are five node-level tasks defined for SRVCTL:

  • Adding and deleting node-level applications
  • Setting and un-setting the environment for node-level applications
  • Administering node applications
  • Administering ASM instances
  • Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes).
Oracle also provides the Oracle Clusterware Control (CRSCTL) utility. CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle Clusterware APIs for Oracle Clusterware objects.
Oracle Clusterware 11g release 2 (11.2) introduces cluster-aware commands with which you can perform check, start, and stop operations on the cluster. You can run these commands from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.
You can use CRSCTL commands to perform several operations on Oracle Clusterware, such as:
  • Starting and stopping Oracle Clusterware resources
  • Enabling and disabling Oracle Clusterware daemons
  • Checking the health of the cluster
  • Managing resources that represent third-party applications
  • Integrating Intelligent Platform Management Interface (IPMI) with Oracle Clusterware to provide failure isolation support and to ensure cluster integrity
  • Debugging Oracle Clusterware components
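
For example, a commonly used CRSCTL command displays the current state of every registered resource in the cluster in tabular form. A brief sketch, using the grid home path from this guide:

[grid@racnode1 ~]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t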

Step 20:
RAC and DB Verification
Check the Health of the Cluster - (Clusterized Command)
Run as the grid user.
[grid@racnode1 ~]$  crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
                               
All Oracle Instances - (Database Status)
[oracle@racnode1 ~]$   srvctl status database -d racdb
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2
                               
Single Oracle Instance - (Status of Specific Instance)
[oracle@racnode1 ~]$   srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node racnode1
                               
Node Applications - (Status)
[oracle@racnode1 ~]$  srvctl status nodeapps
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
eONS is enabled
eONS daemon is running on node: racnode1
eONS daemon is running on node: racnode2
                               
Node Applications - (Configuration)
[oracle@racnode1 ~]$  srvctl config nodeapps
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016
                               
List all Configured Databases
[oracle@racnode1 ~]$ srvctl config database
racdb
Database - (Configuration)
[oracle@racnode1 ~]$   srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services:
Database is enabled
Database is administrator managed
                               
ASM - (Status)
[oracle@racnode1 ~]$  srvctl status asm
ASM is running on racnode1,racnode2
                               
ASM - (Configuration)
[oracle@racnode1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
                  
TNS listener - (Status)
[oracle@racnode1 ~]$   srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2
                               
TNS listener - (Configuration)
[oracle@racnode1 ~]$   srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <crs>
  /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521
                               
SCAN - (Status)
[oracle@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode1             
SCAN - (Configuration)
[oracle@racnode1 ~]$  srvctl config scan
SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan/192.168.1.187
                               
VIP - (Status of Specific Node)
[oracle@racnode1 ~]$  srvctl status vip -n racnode1
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1

[oracle@racnode1 ~]$  srvctl status vip -n racnode2
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2       
VIP - (Configuration of Specific Node)
[oracle@racnode1 ~]$   srvctl config vip -n racnode1
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

[oracle@racnode1 ~]$  srvctl config vip -n racnode2
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0                     
Configuration for Node Applications - (VIP, GSD, ONS, Listener)
[oracle@racnode1 ~]$   srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: <crs>
  /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521
                               
Verifying Clock Synchronization across the Cluster Nodes
[oracle@racnode1 ~]$  cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode1                              passed
Result: CTSS resource check passed


Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State                  
  ------------------------------------  ------------------------
  racnode1                              Active                 
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  racnode1      0.0                       passed

Time offset is within the specified limits on the following set of nodes:
"[racnode1]"
Result: Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.
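
A related quick check of the Cluster Time Synchronization Service itself can be run with CRSCTL; a brief sketch:

[grid@racnode1 ~]$ crsctl check ctss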
                               
All running instances in the cluster - (SQL)
SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- -------
       1        1 racdb1     YES OPEN    ACTIVE       NORMAL    racnode1
       2        2 racdb2     YES OPEN    ACTIVE       NORMAL    racnode2
                               
All database files and the ASM disk group they reside in - (SQL)
select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+FRA/racdb/controlfile/current.256.703530389
+FRA/racdb/onlinelog/group_1.257.703530391
+FRA/racdb/onlinelog/group_2.258.703530393
+FRA/racdb/onlinelog/group_3.259.703533497
+FRA/racdb/onlinelog/group_4.260.703533499
+RACDB_DATA/racdb/controlfile/current.256.703530389
+RACDB_DATA/racdb/datafile/example.263.703530435
+RACDB_DATA/racdb/datafile/indx.270.703542993
+RACDB_DATA/racdb/datafile/sysaux.260.703530411
+RACDB_DATA/racdb/datafile/system.259.703530397
+RACDB_DATA/racdb/datafile/undotbs1.261.703530423
+RACDB_DATA/racdb/datafile/undotbs2.264.703530441
+RACDB_DATA/racdb/datafile/users.265.703530447
+RACDB_DATA/racdb/datafile/users.269.703542943
+RACDB_DATA/racdb/onlinelog/group_1.257.703530391
+RACDB_DATA/racdb/onlinelog/group_2.258.703530393
+RACDB_DATA/racdb/onlinelog/group_3.266.703533497
+RACDB_DATA/racdb/onlinelog/group_4.267.703533499
+RACDB_DATA/racdb/tempfile/temp.262.703530429

19 rows selected.
                               
ASM Disk Volumes - (SQL)
SELECT path FROM v$asm_disk;

PATH
----------------------------------
ORCL:CRSVOL1
ORCL:DATAVOL1
ORCL:FRAVOL1
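
As a related optional check, you can also query the size and free space of each ASM disk group; a brief sketch (output will vary):

SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;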
                               
Step 21:
Starting / Stopping the Cluster
At this point, everything has been installed and configured for Oracle RAC 11g release 2. Oracle grid infrastructure was installed by the grid user while the Oracle RAC software was installed by oracle. We also have a fully functional clustered database running named racdb.
After all of that hard work, you may ask, "OK, so how do I start and stop services?" If you have followed the instructions in this guide, all services, including Oracle Clusterware, ASM, network, SCAN, VIP, and the Oracle Database, should start automatically on each reboot of the Linux nodes.
There are times, however, when you might want to take down the Oracle services on a node for maintenance purposes and restart the Oracle Clusterware stack at a later time. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands necessary to stop and start the Oracle Clusterware stack on a local server ( racnode1).
The following stop/start actions need to be performed as root.
Stopping the Oracle Clusterware Stack on the Local Server
Use the " crsctl stop cluster" command on racnode1 to stop the Oracle Clusterware stack:
[root@racnode1 ~]#   /u01/app/11.2.0/grid/bin/crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.scan1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded           <-- Notice racnode1 VIP moved to racnode2
CRS-2676: Start of 'ora.scan1.vip' on 'racnode2' succeeded              <-- Notice SCAN moved to racnode2
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racnode2' succeeded    <-- Notice LISTENER_SCAN1 moved to racnode2
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded
                              

Note: If any resources that Oracle Clusterware manages are still running after you run the "crsctl stop cluster" command, then the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.
Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying -all. The following will bring down the Oracle Clusterware stack on both racnode1 and racnode2:
[root@racnode1 ~]#  /u01/app/11.2.0/grid/bin/crsctl stop cluster -all   
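After the stack has been stopped you can confirm its state across the cluster; a sketch, assuming the same grid home (expect the output to report the stopped services as offline):

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl check cluster -all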
Starting the Oracle Clusterware Stack on the Local Server
Use the " crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:
[root@racnode1 ~]#   /u01/app/11.2.0/grid/bin/crsctl start cluster
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded
                              
Note: You can choose to start the Oracle Clusterware stack on all servers in the cluster by specifying -all:
[root@racnode1 ~]#  /u01/app/11.2.0/grid/bin/crsctl start cluster -all
You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:
[root@racnode1 ~]#  /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2
                             
Start/Stop All Instances with SRVCTL
Finally, you can start/stop all instances and associated services using the following:
[oracle@racnode1 ~]$  srvctl stop database -d racdb

[oracle@racnode1 ~]$  srvctl start database -d racdb
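
Similarly, a single instance (and its dependent services) can be stopped or started on its own with SRVCTL; a brief sketch:

[oracle@racnode1 ~]$ srvctl stop instance -d racdb -i racdb1

[oracle@racnode1 ~]$ srvctl start instance -d racdb -i racdb1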
