Installing a Two-Node Oracle 10g RAC on Solaris 10 (64-bit)

Contents

Introduction

  1. Network Configuration (Hostname and IP address)
  2. Create Oracle groups and Oracle user
  3. Prepare disk for Oracle binaries (Local disk)
  4. iSCSI Configuration
  5. Prepare disk for OCR, Voting and ASM
  6. Setting Kernel Parameters
  7. Check and install required package
  8. Installing Oracle Clusterware
  9. Installing Oracle Database 10g Software
  10. Create ASM instance and ASM diskgroup

Reference:

www.oracle.com
www.idevelopment.info

Introduction

This article is intended for readers who have a basic knowledge of Oracle RAC. It does not cover everything that needs to be understood in order to configure a RAC database; please refer to the Oracle documentation for detailed explanations.

This article, instead, focuses on putting together your own Oracle RAC 10g environment for development and testing using Solaris servers and a low-cost shared disk solution: iSCSI served by Openfiler (Openfiler installation and disk management are not covered in this article).

The two Oracle RAC nodes will be configured as follows:

Oracle Database Files

RAC Node Name   Instance Name   Database Name   $ORACLE_BASE   File System for DB Files
soladb1         sola1           sola            /oracle        ASM
soladb2         sola2           sola            /oracle        ASM

Oracle Clusterware Shared Files

File Type                 File Name            iSCSI Volume Name   Mount Point   File System
Oracle Cluster Registry   /dev/rdsk/c2t3d0s2   ocr                 -             RAW
CRS Voting Disk           /dev/rdsk/c2t4d0s2   vot                 -             RAW

The Oracle Clusterware software will be installed to /oracle/product/10.2.0/crs_1 on both of the nodes that make up the RAC cluster. All of the Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed to shared volumes managed by Automatic Storage Management (ASM).

1. Network Configuration (Hostname and IP address)

Perform the following network configuration on both Oracle RAC nodes in the cluster

Both of the Oracle RAC nodes should have one static IP address for the public network and one static IP address for the private cluster interconnect. The private interconnect should be used only by Oracle to transfer Cluster Manager and Cache Fusion related data, along with the traffic to the network storage server (Openfiler). Although it is possible to use the public network for the interconnect, this is not recommended as it may degrade database performance (reducing the bandwidth available for Cache Fusion and Cluster Manager traffic). For a production RAC implementation, the interconnect should be at least Gigabit Ethernet, should be used only by Oracle, and the network storage server should sit on a separate gigabit network.

The following example is from soladb1:

i. Update the entries in /etc/hosts

# cat /etc/hosts

127.0.0.1       localhost
# Public Network (e1000g0)
192.168.2.100     soladb1         loghost
192.168.2.101     soladb2

# Public Virtual IP (VIP) addresses
192.168.2.104     soladb1-vip
192.168.2.105     soladb2-vip

# Private Interconnect (e1000g1)
10.0.0.100        soladb1-priv
10.0.0.101        soladb2-priv

ii. Set the server hostname by updating the /etc/nodename file
# cat /etc/nodename
soladb1

iii. Update/add the /etc/hostname.<interface name> files
# cat hostname.e1000g0
soladb1

# cat hostname.e1000g1
soladb1-priv

Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from soladb1:

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.100 netmask ffffff00 broadcast 192.168.2.255
ether 0:50:56:99:45:20
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.0.0.100 netmask ff000000 broadcast 10.255.255.255
ether 0:50:56:99:4f:a1
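
Before moving on, it is worth confirming that each node can reach the other over both the public network and the private interconnect. A quick check from soladb1, using the hostnames defined in /etc/hosts above (Solaris ping simply reports that the host is alive):

# ping soladb2
soladb2 is alive
# ping soladb2-priv
soladb2-priv is alive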

Adjusting Network Settings
The UDP (User Datagram Protocol) settings affect cluster interconnect transmissions. If the buffers set by these parameters are too small, then incoming UDP datagrams can be dropped due to insufficient space, which requires send-side retransmission. This can result in poor cluster performance.

On Solaris, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. The default values for these parameters on Solaris 10 are 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.

To see what these parameters are currently set to, enter the following commands:
# ndd /dev/udp udp_xmit_hiwat
# ndd /dev/udp udp_recv_hiwat

To set the values of these parameters to 65536 bytes in current memory, enter the following commands:
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536

We need to write a startup script, udp_rac, in /etc/init.d with the following contents so that these values are set when the system boots.

#!/sbin/sh
case "$1" in
'start')
        ndd -set /dev/udp udp_xmit_hiwat 65536
        ndd -set /dev/udp udp_recv_hiwat 65536
        ;;
'state')
        ndd /dev/udp udp_xmit_hiwat
        ndd /dev/udp udp_recv_hiwat
        ;;
*)
        echo "Usage: $0 { start | state }"
        exit 1
        ;;
esac

We now need to create a link to this script in the /etc/rc3.d directory.

# ln -s /etc/init.d/udp_rac /etc/rc3.d/S86udp_rac
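
Because we created /etc/init.d/udp_rac by hand, make sure it is executable before it is called at boot; the ownership shown below follows the usual convention for Solaris init scripts and is only a suggestion:

# chown root:sys /etc/init.d/udp_rac
# chmod 744 /etc/init.d/udp_rac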

2. Create Oracle groups and Oracle user
Perform the following task on all Oracle RAC nodes in the cluster
We will create the oinstall and dba groups and the oracle user account, along with the appropriate directories.

# mkdir -p /oracle
# groupadd -g 501 oinstall
# groupadd -g 502 dba

# useradd -u 500 -g 501 -G 502 -d /oracle -s /usr/bin/bash -c "Oracle Software Owner" oracle
# chown -R oracle:dba /oracle
# passwd oracle

Modify the oracle user environment variables
Perform the following task on all Oracle RAC nodes in the cluster

After creating the oracle user account on both nodes, ensure that the environment is set up correctly by using the following .bash_profile (note that .bash_profile will not exist by default on Solaris; you will have to create it).

The following example is from soladb1:

# su - oracle
$ cat .bash_profile
PATH=/usr/sbin:/usr/bin
export ORACLE_SID=sola1
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin
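
The .bash_profile on soladb2 is identical except for the instance name; per the configuration table at the beginning of this article, it would set:

export ORACLE_SID=sola2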

3. Prepare disk for Oracle binaries (Local disk)

Perform the following task on all Oracle RAC nodes in the cluster

1. Format the disk

# format
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number):  1

format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y

format> p
PARTITION MENU:
0      - change `0' partition
1      - change `1' partition
2      - change `2' partition
3      - change `3' partition
4      - change `4' partition
5      - change `5' partition
6      - change `6' partition
7      - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit

partition> p (print - display the current table)
Current partition table (original):
Total disk cylinders available: 2607 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
0 unassigned    wm       0               0         (0/0/0)           0
1 unassigned    wm       0               0         (0/0/0)           0
2     backup    wu       0 - 2606       19.97GB    (2607/0/0) 41881455
3 unassigned    wm       0               0         (0/0/0)           0
4 unassigned    wm       0               0         (0/0/0)           0
5 unassigned    wm       0               0         (0/0/0)           0
6 unassigned    wm       0               0         (0/0/0)           0
7 unassigned    wm       0               0         (0/0/0)           0
8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
9 unassigned    wm       0               0         (0/0/0)           0

partition> label
Ready to label disk, continue? y

2. Create the Solaris file system
# newfs /dev/dsk/c1t1d0s2

3. Add an entry to /etc/vfstab
# cat /etc/vfstab
/dev/dsk/c1t1d0s2       /dev/rdsk/c1t1d0s2      /oracle   ufs   -  yes  -

4. Mount the file system
# mkdir /oracle
# mount /oracle

5. Change the owner of /oracle
# chown -R oracle:oinstall /oracle
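
To confirm that the new file system is mounted and that the oracle user owns it, a quick check such as the following can be used:

# df -h /oracle
# ls -ld /oracle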

4. iSCSI Configuration

Perform the following task on all Oracle RAC nodes in the cluster

In this article, we will be using the Static Config method. We first need to verify that the iSCSI software packages are installed on our servers before we can proceed further.

# pkginfo SUNWiscsiu SUNWiscsir
system      SUNWiscsir Sun iSCSI Device Driver (root)
system      SUNWiscsiu Sun iSCSI Management Utilities (usr)

After verifying that the iSCSI software packages are installed on the client machines (soladb1, soladb2) and that the iSCSI target (Openfiler) is configured, run the following from each client machine to discover the available iSCSI LUNs. Note that the Openfiler network storage server is accessed through the private network at the address 10.0.0.108.

Configure the iSCSI target device to be discovered statically by specifying its IQN, IP address and port number:

# iscsiadm add static-config iqn.2006-01.com.openfiler:tsn.2fc90b6b9c73,10.0.0.108:3260

Listing Current Discovery Settings
# iscsiadm list discovery
Discovery:
Static: disabled
Send Targets: disabled
iSNS: disabled

The iSCSI connection is not initiated until the discovery method is enabled. This is enabled using the following command:

# iscsiadm modify discovery --static enable
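
At this point the initiator should log in to the target. Before creating the device links, the session and the LUNs presented by Openfiler can be confirmed with the following (the exact output depends on your Openfiler IQN and volumes):

# iscsiadm list target
# iscsiadm list target -S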

Create the iSCSI device links for the local system. The following command can be used to do this:

# devfsadm -i iscsi

To verify that the iSCSI devices are available on the node, we will use the format command. The output of the format command should look like the following:

# format
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c2t3d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,0
3. c2t4d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,1
4. c2t5d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,2
5. c2t6d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,3
6. c2t7d0 <DEFAULT cyl 28 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,4
Specify disk (enter its number):

5. Prepare disk for OCR, Voting and ASM

Perform the following task on one(1) of the Oracle RAC nodes in the cluster

Now, we need to create partitions on the iSCSI volumes. The main point is that when formatting the devices to be used for the OCR and the Voting Disk files, the disk slices to be used must skip the first cylinder (cylinder 0) to avoid overwriting the disk VTOC (Volume Table of Contents). The VTOC is a special area of the disk set aside for storing information about the disk's controller, geometry and slices.

Oracle Shared Drive Configuration

File System Type   iSCSI Target (short) Name   Size     Device Name          ASM Dg Name   File Types
RAW                ocr                         300 MB   /dev/rdsk/c2t3d0s2   -             Oracle Cluster Registry (OCR) File
RAW                vot                         300 MB   /dev/rdsk/c2t4d0s2   -             Voting Disk
RAW                asmspfile                   30 MB    /dev/rdsk/c2t7d0s2   -             ASM SPFILE
ASM                asm1                        14 GB    /dev/rdsk/c2t5d0s2   DATA          Oracle Database Files
ASM                asm2                        14 GB    /dev/rdsk/c2t6d0s2   ARCH          Oracle Database Files

Perform the following operation on all of the iSCSI disks from the soladb1 node only, using the format command.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c2t3d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,0
3. c2t4d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,1
4. c2t5d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,2
5. c2t6d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,3
6. c2t7d0 <DEFAULT cyl 28 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,4
Specify disk (enter its number): 2
selecting c2t3d0
[disk formatted]

FORMAT MENU:
disk       - select a disk
type       - select (define) a disk type
partition  - select (define) a partition table
current    - describe the current disk
format     - format and analyze the disk
fdisk      - run the fdisk program
repair     - repair a defective sector
label      - write label to the disk
analyze    - surface analysis
defect     - defect list management
backup     - search for backup labels
verify     - read and display labels
save       - save new disk/partition definitions
inquiry    - show vendor, product and revision
volname    - set 8-character volume name
!<cmd>     - execute <cmd>, then return
quit

format> partition
Please run fdisk first
format> fdisk
No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS system" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> partition
PARTITION MENU:
0      - change `0' partition
1      - change `1' partition
2      - change `2' partition
3      - change `3' partition
4      - change `4' partition
5      - change `5' partition
6      - change `6' partition
7      - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit

partition> print
Current partition table (unnamed):
Total disk cylinders available: 508 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders       Size            Blocks
0 unassigned    wm       0              0         (0/0/0)         0
1 unassigned    wm       0              0         (0/0/0)         0
2     backup    wu       0 - 507      508.00MB    (508/0/0) 1040384
3 unassigned    wm       0              0         (0/0/0)         0
4 unassigned    wm       0              0         (0/0/0)         0
5 unassigned    wm       0              0         (0/0/0)         0
6 unassigned    wm       0              0         (0/0/0)         0
7 unassigned    wm       0              0         (0/0/0)         0
8       boot    wu       0 -   0        1.00MB    (1/0/0)      2048
9 unassigned    wm       0              0         (0/0/0)         0

partition> 2
Part      Tag    Flag     Cylinders       Size            Blocks
2 unassigned    wm       0 - 507      508.00MB    (508/0/0) 1040384

Enter partition id tag[backup]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 5
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: $
partition> label
Ready to label disk, continue? y

partition> quit

Repeat this operation for all of the iSCSI disks.

Setting Device Permissions

The devices we will be using for the various components of this article (e.g. the OCR and the voting disk) must have the appropriate ownership and permissions set on them before we can proceed to the installation stage. We will set the ownership and permissions using the chown and chmod commands as follows (this must be done as the root user):

# chown root:oinstall /dev/rdsk/c2t3d0s2
# chmod 660 /dev/rdsk/c2t3d0s2
# chown oracle:oinstall /dev/rdsk/c2t4d0s2
# chmod 660 /dev/rdsk/c2t4d0s2
# chown oracle:oinstall /dev/rdsk/c2t7d0s2
# chown oracle:oinstall /dev/rdsk/c2t5d0s2
# chown oracle:oinstall /dev/rdsk/c2t6d0s2

These permissions will be persistent across reboots. No further configuration needs to be performed with the permissions.
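
Since the /dev/rdsk entries are symbolic links, ls -lL can be used to confirm the ownership and permissions that were actually applied to the underlying device nodes:

# ls -lL /dev/rdsk/c2t3d0s2 /dev/rdsk/c2t4d0s2 /dev/rdsk/c2t5d0s2 /dev/rdsk/c2t6d0s2 /dev/rdsk/c2t7d0s2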

6. Setting Kernel Parameters

In Solaris 10, there is a new way of setting kernel parameters. The old Solaris 8 and 9 method of editing the /etc/system file is deprecated. Solaris 10 instead uses the resource control facility, and this method does not require the system to be rebooted for the changes to take effect.

Create a default project for the oracle user.
# projadd -U oracle -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle

Modify the max-shm-memory Parameter
# projmod -s -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle

Modify the max-sem-ids Parameter
# projmod -s -K "project.max-sem-ids=(priv,256,deny)" user.oracle

Check the parameters as the oracle user
$ prctl -i project user.oracle
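
To confirm that the oracle user actually picks up the new project and its resource controls, the following checks can also be used (the second command assumes the user.oracle project created above):

$ id -p oracle
$ prctl -n project.max-shm-memory -i project user.oracle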

Configure RAC Nodes for Remote Access
Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

Before you can install and use Oracle RAC, you must configure either secure shell (SSH) or remote shell (RSH) for the oracle user account on both of the Oracle RAC nodes in the cluster. The goal here is to set up user equivalence for the oracle user account. User equivalence enables the oracle user account to access all other nodes in the cluster without the need for a password. This can be configured using either SSH or RSH, where SSH is the preferred method.
Perform the following operation as the oracle user to set up RSH between all nodes (an SSH-based alternative is sketched below).

# su - oracle
$ cd
$ vi .rhosts
+
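
To test user equivalence, each node should be able to run a command on the other without being prompted for a password (for RSH, the rsh/rlogin services must also be enabled on Solaris 10 if they are not already). The SSH variant below is only a minimal sketch using the ssh-keygen defaults:

$ rsh soladb2 hostname        (from soladb1; should print soladb2 with no password prompt)

$ ssh-keygen -t rsa           (on each node as oracle; accept the defaults and an empty passphrase)
$ cat ~/.ssh/id_rsa.pub       (append each node's public key to ~/.ssh/authorized_keys on every node)
$ ssh soladb2 date            (should return the date with no password prompt)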

7. Check and install required package

Perform the following checks on all Oracle RAC nodes in the cluster

The following packages must be installed on each server before you can continue. To check whether any of these required packages are installed on your system, use the pkginfo -i command as follows:
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibmr SUNWlibm SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWxwplt SUNWmfrun SUNWxwplr SUNWxwdv SUNWbinutils  SUNWgcc SUNWuiu8

If you need to install any of the above packages, use the pkgadd -d command, e.g.:
# pkgadd -d /cdrom/sol_10_1009_x86/Solaris_10/Product -s /var/spool/pkg SUNWi15cs
# pkgadd SUNWi15cs

8. Installing Oracle Clusterware
Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (soladb1). The Oracle Clusterware software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

Using xstart or any xterm client, log in as the oracle user and start the installation.

$ ./runInstaller.sh

Screen Name: Response

Welcome Screen: Click Next.

Specify Inventory directory and credentials: Accept the default values:
   Inventory directory: /oracle/oraInventory
   Operating System group name: oinstall

Specify Home Details: Set the Name and Path for the ORACLE_HOME (actually the $ORA_CRS_HOME that I will be using in this article) as follows:
   Name: OraCrs10g_home
   Path: /oracle/product/10.2.0/crs_1

Product-Specific Prerequisite Checks: The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Clusterware software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems. Click Next to continue.

Specify Cluster Configuration: Cluster Name: crs

   Public Node Name   Private Node Name   Virtual Node Name
   soladb1            soladb1-priv        soladb1-vip
   soladb2            soladb2-priv        soladb2-vip

Specify Network Interface Usage:
   Interface Name   Subnet        Interface Type
   e1000g0          192.168.2.0   Public
   e1000g1          10.0.0.0      Private

Specify OCR Location: Starting with Oracle Database 10g Release 2 (10.2) with RAC, Oracle Clusterware provides for the creation of a mirrored OCR file, enhancing cluster reliability. For the purpose of this example, I chose not to mirror the OCR file by using the "External Redundancy" option:
   Specify OCR Location: /dev/rdsk/c2t3d0s2

Specify Voting Disk Location: For the purpose of this example, I chose not to mirror the voting disk by using the "External Redundancy" option:
   Voting Disk Location: /dev/rdsk/c2t4d0s2

Summary: Click Install to start the installation!

Execute Configuration Scripts: After the installation has completed, you will be prompted to run the orainstRoot.sh and root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the "root" user account. Navigate to the /oracle/oraInventory directory and run orainstRoot.sh ON ALL NODES in the RAC cluster.


Within the same new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from), stay logged in as the "root" user account. Navigate to the /oracle/product/10.2.0/crs_1 directory and locate the root.sh file on each node in the cluster, starting with the node you are performing the install from. Run the root.sh file ON ALL NODES in the RAC cluster ONE AT A TIME.

You will receive several warnings while running the root.sh script on all nodes. These warnings can be safely ignored.

The root.sh script may take a while to run.

Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window after running the root.sh script on both nodes.

End of installation: At the end of the installation, exit from the OUI.

After successfully installing Oracle 10g Clusterware (10.2.0.1), start the OUI again to patch the clusterware with the latest patch set available (10.2.0.5). The steps above can be reused for the patching activity.

Verify Oracle Clusterware Installation

After the installation of Oracle Clusterware, we can run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster.

$ /oracle/product/10.2.0/crs_1/bin/olsnodes
soladb1
soladb2

$ /oracle/product/10.2.0/crs_1/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....db1.gsd application    ONLINE    ONLINE    soladb1
ora....db1.ons application    ONLINE    ONLINE    soladb1
ora....db1.vip application    ONLINE    ONLINE    soladb1
ora....db2.gsd application    ONLINE    ONLINE    soladb2
ora....db2.ons application    ONLINE    ONLINE    soladb2
ora....db2.vip application    ONLINE    ONLINE    soladb2
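
The Cluster Verification Utility shipped with Oracle Clusterware 10g Release 2 can also be used for a post-installation check; a sketch, assuming the Clusterware home used in this article:

$ /oracle/product/10.2.0/crs_1/bin/cluvfy stage -post crsinst -n soladb1,soladb2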

9. Installing Oracle Database 10g Software
Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (soladb1). The Oracle Database software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

Using xstart or any xterm client, log in as the oracle user and start the installation.

$ ./runInstaller.sh

Screen Name: Response

Welcome Screen: Click Next.

Select Installation Type: Select the Enterprise Edition option.

Specify Home Details: Set the Name and Path for the ORACLE_HOME as follows:
   Name: OraDb10g_home1
   Path: /oracle/product/10.2.0/db_1

Specify Hardware Cluster Installation Mode: Select the Cluster Installation option, then select all available nodes. Click Select All to select all servers: soladb1 and soladb2. If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

  • Ensure Oracle Clusterware is running on the node in question (crs_stat -t).
  • Ensure you are able to reach the node in question from the node you are performing the installation from.

Product-Specific Prerequisite Checks: The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle database software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. If you did not run the OUI with the -ignoreSysPrereqs option, the kernel parameters prerequisite check will fail. This is because the OUI looks at the /etc/system file to check the kernel parameters; as we discussed earlier, this file is not used by default in Solaris 10. This is documented in Metalink Note 363436.1. Click Next to continue.

Select Database Configuration: Select the option to "Install database software only." Remember that we will create the clustered database as a separate step using DBCA.

Summary: Click Install to start the installation!

Root Script Window - Run root.sh: After the installation has completed, you will be prompted to run the root.sh script. It is important to keep in mind that the root.sh script will need to be run on all nodes in the RAC cluster one at a time, starting with the node you are running the database installation from. First, open a new console window, as the root user, on the node you are installing the Oracle 10g database software from. For me, this was soladb1. Navigate to the /oracle/product/10.2.0/db_1 directory and run root.sh. After running the root.sh script on all nodes in the cluster, go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

End of installation: At the end of the installation, exit from the OUI.

After successfully installing Oracle Database 10g (10.2.0.1), start the OUI again to patch the database software with the latest patch set available (10.2.0.5). The steps above can be reused for the patching activity.

Run the Network Configuration Assistant
To start NETCA, run the following:
$ netca

The following table walks you through the process of creating a new Oracle listener for our RAC environment.

Screen Name: Response

Select the Type of Oracle Net Services Configuration: Select Cluster Configuration.

Select the nodes to configure: Select all of the nodes: soladb1 and soladb2.

Type of Configuration: Select Listener configuration.

Listener Configuration - Next 6 Screens: The following screens are now like any other normal listener configuration. You can simply accept the default parameters for the next six screens:
   What do you want to do: Add
   Listener name: LISTENER
   Selected protocols: TCP
   Port number: 1521
   Configure another listener: No
   Listener configuration complete! [ Next ]
   You will be returned to the Welcome (Type of Configuration) screen.

Type of Configuration: Select Naming Methods configuration.

Naming Methods Configuration: The following screens are:
   Selected Naming Methods: Local Naming
   Naming Methods configuration complete! [ Next ]
   You will be returned to the Welcome (Type of Configuration) screen.

Type of Configuration: Click Finish to exit the NETCA.

The Oracle TNS listener process should now be running on all nodes in the RAC cluster.

$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.lsnr application    ONLINE    ONLINE    soladb1
ora....db1.gsd application    ONLINE    ONLINE    soladb1
ora....db1.ons application    ONLINE    ONLINE    soladb1
ora....db1.vip application    ONLINE    ONLINE    soladb1
ora....B2.lsnr application    ONLINE    ONLINE    soladb2
ora....db2.gsd application    ONLINE    ONLINE    soladb2
ora....db2.ons application    ONLINE    ONLINE    soladb2
ora....db2.vip application    ONLINE    ONLINE    soladb2
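
The listener processes can also be confirmed at the operating system level on each node; in a RAC install, NETCA normally creates one listener per node named LISTENER_<NODENAME>:

$ ps -ef | grep tnslsnr | grep -v grep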

10. Create ASM instance and ASM diskgroup

To start the ASM instance creation process, run the following command as the oracle user on any one node of the Oracle 10g RAC cluster.

$ dbca

Screen Name: Response

Welcome Screen: Select "Oracle Real Application Clusters database."

Operations: Select Configure Automatic Storage Management.

Node Selection: Click the Select All button to select all servers: soladb1 and soladb2.

Create ASM Instance: Supply the SYS password to use for the new ASM instance. Also, starting with Oracle 10g Release 2, the ASM instance server parameter file (SPFILE) needs to be on a shared disk. You will need to modify the default entry for "Create server parameter file (SPFILE)" to reside on the RAW partition as follows: /dev/rdsk/c2t7d0s2. All other options can stay at their defaults. You will then be prompted with a dialog box asking if you want to create and start the ASM instance. Click the OK button to acknowledge this dialog. The OUI will now create and start the ASM instance on all nodes in the RAC cluster.

ASM Disk Groups: To start, click the Create New button. This will bring up the "Create Disk Group" window with three of the partitions we created earlier. If you do not see any disks, click the Change Disk Discovery Path button and enter /dev/rdsk/*. For the first "Disk Group Name", I used the string "DATA". Select the first RAW partition (in my case /dev/rdsk/c2t5d0s2) in the "Select Member Disks" window. Keep the "Redundancy" setting at "External". After verifying all values in this window are correct, click the [OK] button. This will present the "ASM Disk Group Creation" dialog. When the ASM disk group creation process is finished, you will be returned to the "ASM Disk Groups" window. Click the Create New button again. For the second "Disk Group Name", I used the string "ARCH". Select the last RAW partition (/dev/rdsk/c2t6d0s2) in the "Select Member Disks" window. Keep the "Redundancy" setting at "External".

After verifying all values in this window are correct, click the [OK] button. This will present the "ASM Disk Group Creation" dialog.

When the ASM disk group creation process is finished, you will be returned to the "ASM Disk Groups" window with two disk groups created and selected.

End of ASM Instance creation: Click the Finish button to complete the ASM instance creation.

The Oracle ASM instance process should now be running on all nodes in the RAC cluster.

$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    soladb1
ora....B1.lsnr application    ONLINE    ONLINE    soladb1
ora....db1.gsd application    ONLINE    ONLINE    soladb1
ora....db1.ons application    ONLINE    ONLINE    soladb1
ora....db1.vip application    ONLINE    ONLINE    soladb1
ora....SM2.asm application    ONLINE    ONLINE    soladb2
ora....B2.lsnr application    ONLINE    ONLINE    soladb2
ora....db2.gsd application    ONLINE    ONLINE    soladb2
ora....db2.ons application    ONLINE    ONLINE    soladb2
ora....db2.vip application    ONLINE    ONLINE    soladb2

The last step is to create the Oracle 10g RAC database using DBCA.
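
As a rough outline only (the database name sola and the instance names sola1 and sola2 come from the configuration table at the beginning of this article): run DBCA from one node as the oracle user, select "Oracle Real Application Clusters database" and then "Create a Database", select both nodes, and place the database files in the DATA disk group (with the ARCH disk group available, for example, for the archived redo logs). Afterwards, the new database resources can be checked along these lines:

$ dbca
$ srvctl status database -d sola
$ crs_stat -t
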
Good luck.