Implementing Oracle Database 11gR2 RAC on VirtualBox on Linux with iSCSI – Part 1

This article demonstrates how to create and configure the virtual machine, install Oracle Linux, and configure Linux for the Oracle RAC installation.

Let's start by creating and configuring the VirtualBox virtual machine for one of the nodes.

After creating and configuring the virtual machine for the first node, we will install Oracle Enterprise Linux.
You can follow the article Instalando Oracle Enterprise Linux 5.6 para Banco de Dados Oracle 11gR2, or use the steps below.

Disk partitioning for each node, on a 30 GB disk:

/boot    128MB
/u01     12228MB
/backup  8192MB
SWAP     4096MB
/        Remaining space
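
After the installation finishes, a quick sanity check of the resulting layout can be done with standard tools (a minimal sketch; the exact device names and sizes depend on your VM):

# Verify the partition layout, swap and memory after the OEL install
df -h /boot /u01 /backup /
swapon -s
free -m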

Let's start configuring OEL for the Oracle RAC 11gR2 implementation.
We will configure /etc/hosts with the IP addresses of the nodes.
Note: all entries are left commented out because they are already available through DNS.

[root@imrac11g1 ~]# vi /etc/hosts
[root@imrac11g1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.

127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

# Public

#192.168.10.50  imrac11g1.tk.local      imrac11g1
#192.168.10.51  imrac11g2.tk.local      imrac11g2
#192.168.10.70  imrac11g3.tk.local      imrac11g3

# Virtual IP

#192.168.10.60  imrac11g1-vip.tk.local  imrac11g1-vip
#192.168.10.61  imrac11g2-vip.tk.local  imrac11g2-vip
#192.168.10.62  imrac11g3-vip.tk.local  imrac11g3-vip

# Interconnect (Private)

#10.0.0.1               imrac11g1-priv.tk.local imrac11g1-priv
#10.0.0.2               imrac11g2-priv.tk.local imrac11g2-priv
#10.0.0.3               imrac11g3-priv.tk.local imrac11g3-priv

# Storage communication

#192.168.10.103 rstorage.tk.local       rstorage

# RacScan

#192.168.10.54  imrac11g-scan.tk.local  imrac11g-scan
#192.168.10.56  imrac11g-scan.tk.local  imrac11g-scan
[root@imrac11g1 ~]#

Let's test name resolution of the public IP, as shown below:

[root@imrac11g1 ~]# nslookup imrac11g1.tk.local
Server:         192.168.10.30
Address:        192.168.10.30#53

Name:   imrac11g1.tk.local
Address: 192.168.10.50

[root@imrac11g1 ~]#

Let's check the DNS server configuration on OEL, shown below:

[root@imrac11g1 ~]# cat /etc/resolv.conf
search tk.local
nameserver 192.168.10.30
[root@imrac11g1 ~]#
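
Optionally, every name used by the cluster can be checked against DNS in one pass. The sketch below assumes the same tk.local zone and hostnames used in /etc/hosts above; adjust the list to your environment:

# Resolve the public, VIP, SCAN and storage names used by the cluster
for h in imrac11g1 imrac11g2 imrac11g1-vip imrac11g2-vip imrac11g-scan rstorage; do
    nslookup ${h}.tk.local
done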

With the OEL media mounted in the virtual machine, let's install the packages required for the Oracle RAC 11gR2 installation, as follows:

[root@imrac11g1 ~]# mount /dev/cdrom /media/
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@imrac11g1 ~]# cd /media/Server/
[root@imrac11g1 Server]#

Let's install the packages below.
Here is the package list for copying and pasting.

rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh compat-libstdc++-33*.i386.rpm
rpm -Uvh elfutils-libelf*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgomp-4.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-lib*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh numactl-devel-*

[root@imrac11g1 Server]# rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh compat-libstdc++-33*.i386.rpm
rpm -Uvh elfutils-libelf*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgomp-4.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-lib*
rpm -Uvh unixODBC-2.*
warning: binutils-2.17.50.0.6-20.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh numactl-devel-*Preparing...
########################################### [100%]
        package binutils-2.17.50.0.6-20.el5.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh compat-libstdc++-33*
warning: compat-libstdc++-33-3.2.3-61.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package compat-libstdc++-33-3.2.3-61.x86_64 is already installed
        package compat-libstdc++-33-3.2.3-61.i386 is already installed
[root@imrac11g1 Server]# rpm -Uvh compat-libstdc++-33*.i386.rpm
warning: compat-libstdc++-33-3.2.3-61.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package compat-libstdc++-33-3.2.3-61.i386 is already installed
[root@imrac11g1 Server]# rpm -Uvh elfutils-libelf*
warning: elfutils-libelf-0.137-3.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package elfutils-libelf-0.137-3.el5.x86_64 is already installed
        package elfutils-libelf-0.137-3.el5.i386 is already installed
        package elfutils-libelf-devel-static-0.137-3.el5.x86_64 is already installed
        package elfutils-libelf-devel-0.137-3.el5.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh gcc-4.*
warning: gcc-4.1.2-52.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package gcc-4.1.2-52.el5.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh gcc-c++-4.*
warning: gcc-c++-4.1.2-52.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package gcc-c++-4.1.2-52.el5.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh glibc-2.*
warning: glibc-2.5-81.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package glibc-2.5-81.x86_64 is already installed
        package glibc-2.5-81.i686 is already installed
[root@imrac11g1 Server]# rpm -Uvh glibc-common-2.*
warning: glibc-common-2.5-81.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package glibc-common-2.5-81.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh glibc-devel-2.*
warning: glibc-devel-2.5-81.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package glibc-devel-2.5-81.x86_64 is already installed
        package glibc-devel-2.5-81.i386 is already installed
[root@imrac11g1 Server]# rpm -Uvh glibc-headers-2.*
warning: glibc-headers-2.5-81.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package glibc-headers-2.5-81.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh ksh*
warning: ksh-20100621-5.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package ksh-20100621-5.el5.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh libaio-0.*
warning: libaio-0.3.106-5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package libaio-0.3.106-5.x86_64 is already installed
        package libaio-0.3.106-5.i386 is already installed
[root@imrac11g1 Server]# rpm -Uvh libaio-devel-0.*
warning: libaio-devel-0.3.106-5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libaio-devel           ########################################### [ 50%]
   2:libaio-devel           ########################################### [100%]
[root@imrac11g1 Server]# rpm -Uvh libgomp-4.*
warning: libgomp-4.4.6-3.el5.1.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package libgomp-4.4.6-3.el5.1.x86_64 is already installed
        package libgomp-4.4.6-3.el5.1.i386 is already installed
[root@imrac11g1 Server]# rpm -Uvh libgcc-4.*
warning: libgcc-4.1.2-52.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package libgcc-4.1.2-52.el5.x86_64 is already installed
        package libgcc-4.1.2-52.el5.i386 is already installed
[root@imrac11g1 Server]# rpm -Uvh libstdc++-4.*
warning: libstdc++-4.1.2-52.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package libstdc++-4.1.2-52.el5.x86_64 is already installed
        package libstdc++-4.1.2-52.el5.i386 is already installed
[root@imrac11g1 Server]# rpm -Uvh libstdc++-devel-4.*
warning: libstdc++-devel-4.1.2-52.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package libstdc++-devel-4.1.2-52.el5.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh make-3.*
warning: make-3.81-3.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package make-3.81-3.el5.x86_64 is already installed
[root@imrac11g1 Server]# rpm -Uvh sysstat-7.*
warning: sysstat-7.0.2-11.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:sysstat                ########################################### [100%]
[root@imrac11g1 Server]# rpm -Uvh unixODBC-lib*
warning: unixODBC-libs-2.2.11-10.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:unixODBC-libs          ########################################### [ 50%]
   2:unixODBC-libs          ########################################### [100%]
[root@imrac11g1 Server]# rpm -Uvh unixODBC-2.*
warning: unixODBC-2.2.11-10.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:unixODBC               ########################################### [ 50%]
   2:unixODBC               ########################################### [100%]
[root@imrac11g1 Server]# rpm -Uvh unixODBC-devel-2.*
warning: unixODBC-devel-2.2.11-10.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:unixODBC-devel         ########################################### [ 50%]
   2:unixODBC-devel         ########################################### [100%]
[root@imrac11g1 Server]# rpm -Uvh numactl-devel-*
warning: numactl-devel-0.9.8-12.0.1.el5_6.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:numactl-devel          ########################################### [ 50%]
   2:numactl-devel          ########################################### [100%]
[root@imrac11g1 Server]#
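
After the installation run above, a simple loop can confirm that none of the required packages is missing (a minimal sketch; the package names follow the list used above):

# Report any required package that rpm does not know about
for p in binutils compat-libstdc++-33 elfutils-libelf gcc gcc-c++ glibc glibc-common \
         glibc-devel glibc-headers ksh libaio libaio-devel libgomp libgcc libstdc++ \
         libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel; do
    rpm -q $p > /dev/null 2>&1 || echo "MISSING: $p"
done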

Let's configure the kernel parameters for the Oracle Database. The values below should be adjusted to your database environment; for more details, check the Oracle documentation.

# Oracle Settings
#kernel.shmall = physical RAM size / pagesize For most systems, this will be the value 2097152. See Note 301830.1 for more information.
#kernel.shmmax = 1/2 of physical RAM. This would be the value 2147483648 for a system with 4GB of physical RAM. See Note:107506.1 for more information.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
#fs.file-max = 512 x processes (for example 6815744 for 13312 processes)
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

[root@imrac11g1 Server]# vi /etc/sysctl.conf
[root@imrac11g1 Server]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Oracle Enterprise Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
# See /usr/share/doc/kernel-doc-*/Documentation/networking/ip-sysctl.txt
net.ipv4.conf.default.rp_filter = 2

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# Oracle Settings
#kernel.shmall = physical RAM size / pagesize For most systems, this will be the value 2097152. See Note 301830.1 for more information.
#kernel.shmmax = 1/2 of physical RAM. This would be the value 2147483648 for a system with 4GB of physical RAM. See Note:107506.1 for more information.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
#fs.file-max = 512 x processes (for example 6815744 for 13312 processes)
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

[root@imrac11g1 Server]#
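
The new parameters can be applied immediately, without a reboot, by re-reading /etc/sysctl.conf:

# Apply the kernel parameters right away
sysctl -p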

Let's configure the limits, pam_limits, and the profile for the oracle user. The values below should be adjusted to your database environment; for more details, check the Oracle documentation.

Limits
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240

Pam.d/login
session required pam_limits.so

Profile
if [ $USER = "oracle" ];
then
if [ $SHELL = "/bin/ksh" ];
then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

[root@imrac11g1 Server]# vi /etc/security/limits.conf
[root@imrac11g1 Server]# cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240

# End of file
[root@imrac11g1 Server]# vi /etc/pam.d/login
[root@imrac11g1 Server]# cat /etc/pam.d/login
#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    optional     pam_keyinit.so force revoke
session    required     pam_loginuid.so
session    include      system-auth
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_limits.so
[root@imrac11g1 Server]# vi /etc/profile
[root@imrac11g1 Server]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

pathmunge () {
        if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
           if [ "$2" = "after" ] ; then
              PATH=$PATH:$1
           else
              PATH=$1:$PATH
           fi
        fi
}

# ksh workaround
if [ -z "$EUID" -a -x /usr/bin/id ]; then
        EUID=`id -u`
        UID=`id -ru`
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
        pathmunge /sbin
        pathmunge /usr/sbin
        pathmunge /usr/local/sbin
fi

# No core files by default
ulimit -S -c 0 > /dev/null 2>&1

if [ -x /usr/bin/id ]; then
        USER="`id -un`"
        LOGNAME=$USER
        MAIL="/var/spool/mail/$USER"
fi

HOSTNAME=`/bin/hostname`
HISTSIZE=1000

if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then
    INPUTRC=/etc/inputrc
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 99 ] && [ "`id -gn`" = "`id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then
            . $i
        else
            . $i >/dev/null 2>&1
        fi
    fi
done

if [ $USER = "oracle" ];
then
        if [ $SHELL = "/bin/ksh" ];
        then
                ulimit -u 16384
                ulimit -n 65536
        else
                ulimit -u 16384 -n 65536
        fi
fi

unset i
unset pathmunge
[root@imrac11g1 Server]#

Let's create the groups for the oracle user, create the oracle user, and set a password for it.
Note: in a RAC environment you can configure a dedicated user (such as grid) for Grid Infrastructure, but in this implementation the entire Oracle RAC stack will run under the oracle user.

groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
groupadd asmoper
groupadd asmdba
useradd -g oinstall -G dba,oper,asmdba,asmadmin,asmoper oracle
passwd oracle

[root@imrac11g1 Server]# groupadd oinstall
[root@imrac11g1 Server]# groupadd dba
[root@imrac11g1 Server]# groupadd oper
[root@imrac11g1 Server]# groupadd asmadmin
[root@imrac11g1 Server]# groupadd asmoper
[root@imrac11g1 Server]# groupadd asmdba
[root@imrac11g1 Server]# useradd -g oinstall -G dba,oper,asmdba,asmadmin,asmoper oracle
[root@imrac11g1 Server]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@imrac11g1 Server]#
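
With the user created, group membership and the limits configured earlier can be verified in one shot (a quick check; pam_limits and the /etc/profile block should yield the values set above):

# Confirm group membership and the per-user limits for oracle
id oracle
su - oracle -c "ulimit -u; ulimit -n; ulimit -s"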

Let's create the directories for Grid Infrastructure and for the Oracle Database product.
Note: the Grid home and the Oracle Database home must be in separate directories.

mkdir -p /u01/app/oracle/product/11.2.0/db_01
mkdir -p /u01/app/product/11.2.0/grid_01
chown -R oracle.dba /u01/

[root@imrac11g1 Server]# mkdir -p /u01/app/oracle/product/11.2.0/db_01
[root@imrac11g1 Server]# mkdir -p /u01/app/product/11.2.0/grid_01
[root@imrac11g1 Server]# chown -R oracle.dba /u01/
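
It is worth confirming the ownership of both homes; some guides also relax the directory mode to 775, which is an optional extra step and not part of the original sequence:

# Check ownership of the Grid and Database homes
ls -ld /u01/app/oracle/product/11.2.0/db_01 /u01/app/product/11.2.0/grid_01
# Optional: open group permissions on the whole tree
chmod -R 775 /u01/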

Let's configure iSCSI to bring up the LUNs exported by Openfiler.
Note: you can use Openfiler, FreeNAS, or another storage solution; since this is a test environment, I am using Openfiler.
Starting the services on OEL:

service iscsi start
service iscsid start
chkconfig iscsid on
chkconfig iscsi on

[root@imrac11g1 Server]# service iscsi start
iscsid (pid  1996) is running...
Setting up iSCSI targets: iscsiadm: No records found
                                                           [  OK  ]
[root@imrac11g1 Server]# service iscsid start
Starting iSCSI daemon:
                                                           [  OK  ]
[root@imrac11g1 Server]# chkconfig iscsid on
[root@imrac11g1 Server]# chkconfig iscsi on

Let's discover the exported LUNs and verify that the iSCSI initiator package is installed (it must be installed).

[root@imrac11g1 Server]# iscsiadm -m discovery -t sendtargets -p rstorage
192.168.10.103:3260,1 iqn.2006-01.com.openfiler:imrac11g12_asm3
192.168.10.103:3260,1 iqn.2006-01.com.openfiler:imrac11g12_asm2
192.168.10.103:3260,1 iqn.2006-01.com.openfiler:imrac11g12_asm1
[root@imrac11g1 Server]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.872-13.0.1.el5 (x86_64)
[root@imrac11g1 Server]#

Let's log in to each specific LUN. On service startup, the LUNs will be logged in automatically.

iscsiadm -m node -T <target_iqn> -p <storage_ip_or_hostname> -l

[root@imrac11g1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:imrac11g12_asm1 -p rstorage -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm1, portal: 192.168.10.103,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm1, portal: 192.168.10.103,3260] successful.
[root@imrac11g1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:imrac11g12_asm2 -p rstorage -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm2, portal: 192.168.10.103,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm2, portal: 192.168.10.103,3260] successful.
[root@imrac11g1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:imrac11g12_asm3 -p rstorage -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm3, portal: 192.168.10.103,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm3, portal: 192.168.10.103,3260] successful.
[root@imrac11g1 Server]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.10.103:3260-iscsi-iqn.2006-01.com.openfiler:imrac11g12_asm1-lun-0 -> ../../sdb
ip-192.168.10.103:3260-iscsi-iqn.2006-01.com.openfiler:imrac11g12_asm2-lun-0 -> ../../sdc
ip-192.168.10.103:3260-iscsi-iqn.2006-01.com.openfiler:imrac11g12_asm3-lun-0 -> ../../sdd
[root@imrac11g1 Server]#
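
If the node records were not created with automatic startup, they can be updated explicitly so the sessions come back when the iscsi service starts (a sketch using open-iscsi's node database and the target names discovered above; verify the setting in your environment):

# Mark each target to log in automatically when the iscsi service starts
iscsiadm -m node -T iqn.2006-01.com.openfiler:imrac11g12_asm1 -p rstorage --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:imrac11g12_asm2 -p rstorage --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2006-01.com.openfiler:imrac11g12_asm3 -p rstorage --op update -n node.startup -v automatic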

With the script below, each LUN will be made visible under the /dev/iscsi directory.
We also grant the required permissions on the script and create the /dev/iscsi directory.

vi /etc/udev/scripts/iscsidev.sh
#!/bin/bash
{
BUS=${1}
HOST=${BUS%%:*}
LID=`echo ${BUS}|awk -F":" '{print $NF}'`

[ -e /sys/class/iscsi_host ] || exit 1

if [ -f /sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname ]
then
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
else
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session/session*/targetname"
fi

target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

LUN=`echo $target_name|awk -F":" '{print $NF}'`

echo `date` $0 $* ${LUN}_${LID}
} >>/tmp/udev_getlun.log
echo ${LUN}_${LID}

chmod 755 /etc/udev/scripts/iscsidev.sh
mkdir -p /dev/iscsi

[root@imrac11g1 Server]# vi /etc/udev/scripts/iscsidev.sh
[root@imrac11g1 Server]# cat /etc/udev/scripts/iscsidev.sh
#!/bin/bash
{
BUS=${1}
HOST=${BUS%%:*}
LID=`echo ${BUS}|awk -F":" '{print $NF}'`

[ -e /sys/class/iscsi_host ] || exit 1

if [ -f /sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname ]
then
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
else
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session/session*/targetname"
fi

target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

LUN=`echo $target_name|awk -F":" '{print $NF}'`

echo `date` $0 $* ${LUN}_${LID}
} >>/tmp/udev_getlun.log
echo ${LUN}_${LID}
[root@imrac11g1 Server]# chmod 755 /etc/udev/scripts/iscsidev.sh
[root@imrac11g1 Server]# mkdir -p /dev/iscsi

Let's create the udev rule that, based on the script above, identifies each LUN exported to our node, and then restart the iscsi service.

vi /etc/udev/rules.d/55-openiscsi.rules
#/etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"
service iscsi restart

[root@imrac11g1 Server]# vi /etc/udev/rules.d/55-openiscsi.rules
[root@imrac11g1 Server]# cat /etc/udev/rules.d/55-openiscsi.rules
#/etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"
[root@imrac11g1 Server]# service iscsi restart
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:imrac11g12_asm1, portal: 192.168.10.103,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:imrac11g12_asm2, portal: 192.168.10.103,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:imrac11g12_asm3, portal: 192.168.10.103,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:imrac11g12_asm1, portal: 192.168.10.103,3260] successful.
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:imrac11g12_asm2, portal: 192.168.10.103,3260] successful.
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:imrac11g12_asm3, portal: 192.168.10.103,3260] successful.
Stopping iSCSI daemon:
iscsid is stopped                                          [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm3, portal: 192.168.10.103,3260] (multiple)
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm1, portal: 192.168.10.103,3260] (multiple)
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm2, portal: 192.168.10.103,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm3, portal: 192.168.10.103,3260] successful.
Login to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm1, portal: 192.168.10.103,3260] successful.
Login to [iface: default, target: iqn.2006-01.com.openfiler:imrac11g12_asm2, portal: 192.168.10.103,3260] successful.
                                                           [  OK  ]
[root@imrac11g1 Server]#

Viewing the LUN directories under /dev/iscsi:

[root@imrac11g1 Server]# ls -l /dev/iscsi/
total 0
drwxr-xr-x 2 root root 60 Feb  5 21:51 imrac11g12_asm1_0
drwxr-xr-x 2 root root 60 Feb  5 21:51 imrac11g12_asm2_0
drwxr-xr-x 2 root root 60 Feb  5 21:51 imrac11g12_asm3_0
[root@imrac11g1 Server]#
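
Each of these directories should contain a part symlink created by the udev rule, pointing back to the underlying /dev/sdX device; a recursive listing makes that easy to confirm:

# Show the device each LUN symlink resolves to
ls -lR /dev/iscsi/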

Now that we have created the directories that identify the LUNs on OEL, let's partition each one of them.

fdisk /dev/iscsi/<lun_directory>/part

[root@imrac11g1 Server]# fdisk /dev/iscsi/imrac11g12_asm1_0/part
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1019, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1019, default 1019):
Using default value 1019

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@imrac11g1 Server]# fdisk /dev/iscsi/imrac11g12_asm2_0/part
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1009, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):
Using default value 1009

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@imrac11g1 Server]# fdisk /dev/iscsi/imrac11g12_asm3_0/part
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@imrac11g1 Server]#
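
fdisk warns that the kernel kept the old partition table. Instead of rebooting, the tables can usually be re-read with partprobe (assuming the parted package is installed, as it is by default on OEL 5), after which the new part1 symlinks should appear:

# Force the kernel to re-read the partition tables of the iSCSI LUNs
partprobe /dev/iscsi/imrac11g12_asm1_0/part
partprobe /dev/iscsi/imrac11g12_asm2_0/part
partprobe /dev/iscsi/imrac11g12_asm3_0/part
ls -l /dev/iscsi/*/part1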

After partitioning the LUNs, let's grant the required permissions on the /backup directory and, using WinSCP, copy the ASMLib packages to the node. Since we are using version 5 Update 8, here is the download link.
Download Oracle ASMLib
Note: on Oracle Linux 6, these packages must be obtained through Oracle Unbreakable Linux.
We will also download the Oracle Database 11g (11.2.0.3) product (the version used in this article, although 11.2.0.4 can be used instead).
The download links for the Oracle Database 11g product follow below.
Note: you must have access to Oracle Metalink.

p10404530_112030_Linux-x86-64_1of7.zip
p10404530_112030_Linux-x86-64_2of7.zip
p10404530_112030_Linux-x86-64_3of7.zip

After downloading and copying all files to the server, let's install Oracle ASMLib, configure ASMLib, and unzip the Oracle product files.

rpm -Uvh /backup/oracleasmlib-2.0.4-1.el5.x86_64.rpm
rpm -Uvh /backup/oracleasm-support-2.1.8-1.el5.x86_64.rpm
rpm -Uvh /backup/oracleasm-2.6.18-371.3.1.el5-2.0.5-1.el5.x86_64.rpm

[root@imrac11g1 Server]# rpm -Uvh /backup/oracleasmlib-2.0.4-1.el5.x86_64.rpm
warning: /backup/oracleasmlib-2.0.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]
[root@imrac11g1 Server]# rpm -Uvh /backup/oracleasm-support-2.1.8-1.el5.x86_64.rpm
warning: /backup/oracleasm-support-2.1.8-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
[root@imrac11g1 Server]# rpm -Uvh /backup/oracleasm-2.6.18-371.3.1.el5-2.0.5-1.el5.x86_64.rpm
warning: /backup/oracleasm-2.6.18-371.3.1.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-2.6.18-371.3.########################################### [100%]
[root@imrac11g1 Server]#

Command to configure ASMLib:
/etc/init.d/oracleasm configure
Owner user:
oracle
Owner group:
asmadmin
Start the driver on OEL boot:
y
Scan for disks on OEL boot:
y

[root@imrac11g1 Server]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@imrac11g1 Server]#

Creating the ASMLib disks from the partitioned LUNs.

/etc/init.d/oracleasm createdisk "ASM1" "/dev/iscsi/<lun_directory>/part1"

[root@imrac11g1 Server]# /etc/init.d/oracleasm createdisk "ASM1" "/dev/iscsi/imrac11g12_asm1_0/part1"
Marking disk "ASM1" as an ASM disk:                        [  OK  ]
[root@imrac11g1 Server]# /etc/init.d/oracleasm createdisk "ASM2" "/dev/iscsi/imrac11g12_asm2_0/part1"
Marking disk "ASM2" as an ASM disk:                        [  OK  ]
[root@imrac11g1 Server]# /etc/init.d/oracleasm createdisk "ASM3" "/dev/iscsi/imrac11g12_asm3_0/part1"
Marking disk "ASM3" as an ASM disk:                        [  OK  ]
[root@imrac11g1 Server]# /etc/init.d/oracleasm listdisks
ASM1
ASM2
ASM3
[root@imrac11g1 Server]#
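
The disk labels can be cross-checked against the backing devices with querydisk. Also, after the VM is cloned, the second node normally only needs to rescan the disks rather than re-create them (an expectation based on standard ASMLib behavior):

# Verify which device carries each ASMLib label
/etc/init.d/oracleasm querydisk ASM1
/etc/init.d/oracleasm querydisk ASM2
/etc/init.d/oracleasm querydisk ASM3
# On the cloned node, only rescan instead of re-creating:
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks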

Now, as the oracle user, let's unzip the Oracle product files.

su - oracle
cd /backup
unzip p10404530_112030_Linux-x86-64_1of7.zip
unzip p10404530_112030_Linux-x86-64_2of7.zip
unzip p10404530_112030_Linux-x86-64_3of7.zip

Note: after each extraction I am removing the corresponding zip file.

[root@imrac11g1 Server]# su - oracle
[oracle@imrac11g1 ~]$ cd /backup/
[oracle@imrac11g1 backup]$ unzip p10404530_112030_Linux-x86-64_1of7.zip
   creating: database/
   creating: database/install/
  inflating: database/install/lsnodes
  .
  .
  inflating: database/welcome.html
  inflating: database/readme.html
[oracle@imrac11g1 backup]$ unzip p10404530_112030_Linux-x86-64_2of7.zip
Archive:  p10404530_112030_Linux-x86-64_2of7.zip
   creating: database/stage/Components/oracle.ctx/
   creating: database/stage/Components/oracle.ctx/11.2.0.3.0/
   creating: database/stage/Components/oracle.ctx/11.2.0.3.0/1/
   creating: database/stage/Components/oracle.ctx/11.2.0.3.0/1/DataFiles/
  inflating: database/stage/Components/oracle.ctx/11.2.0.3.0/1/DataFiles/filegroup15.15.1.jar
  .
  .
  inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.3.0/1/DataFiles/filegroup13.jar
  inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.3.0/1/DataFiles/filegroup2.jar
[oracle@imrac11g1 backup]$ rm p10404530_112030_Linux-x86-64_1of7.zip p10404530_112030_Linux-x86-64_2of7.zip
[oracle@imrac11g1 backup]$ unzip p10404530_112030_Linux-x86-64_3of7.zip
Archive:  p10404530_112030_Linux-x86-64_3of7.zip
   creating: grid/
   creating: grid/doc/
   creating: grid/doc/dcommon/
   creating: grid/doc/dcommon/css/
  inflating: grid/doc/dcommon/css/blafdoc.css
  .
  .
  inflating: grid/stage/properties/oracle.crs_Complete.properties
  inflating: grid/stage/properties/userPaths.properties
[oracle@imrac11g1 backup]$ rm p10404530_112030_Linux-x86-64_3of7.zip
[oracle@imrac11g1 backup]$ rm oracleasm*
[oracle@imrac11g1 backup]$ ls -l
total 24
drwxr-xr-x 8 oracle oinstall  4096 Sep 22  2011 database
drwxr-xr-x 8 oracle oinstall  4096 Sep 22  2011 grid
drwx------ 2 oracle dba      16384 Feb  5 21:17 lost+found
[oracle@imrac11g1 backup]$

As the oracle user, let's create the environment variable files as shown below:

vi .var_oracle_grid.sh
#!/bin/ksh
umask 022
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/product/11.2.0/grid_01
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ctx/lib:$ORACLE_HOME/rdbms/lib:/usr/dt/lib:/usr/lib:/usr/openwin/lib:/lib
export NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252"
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/plsql/jlib:$ORACLE_HOME/ord/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ord/ts/jlib
export ORACLE_HOSTNAME=imrac11g1.tk.local
export PS1="oracle=$ORACLE_SID-> "

vi .var_oracle_db.sh
#!/bin/ksh
umask 022
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_01
export ORACLE_SID=orcl1
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ctx/lib:$ORACLE_HOME/rdbms/lib:/usr/dt/lib:/usr/lib:/usr/openwin/lib:/lib
export NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252"
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/plsql/jlib:$ORACLE_HOME/ord/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ord/ts/jlib
export ORACLE_HOSTNAME=imrac11g1.tk.local
export PS1="oracle=$ORACLE_SID-> "

[oracle@imrac11g1 backup]$ cd
[oracle@imrac11g1 ~]$ pwd
/home/oracle
[oracle@imrac11g1 ~]$ vi .var_oracle_grid.sh
[oracle@imrac11g1 ~]$ cat .var_oracle_grid.sh
#!/bin/ksh
umask 022
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/product/11.2.0/grid_01
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ctx/lib:$ORACLE_HOME/rdbms/lib:/usr/dt/lib:/usr/lib:/usr/openwin/lib:/lib
export NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252"
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/plsql/jlib:$ORACLE_HOME/ord/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ord/ts/jlib
export ORACLE_HOSTNAME=imrac11g1.tk.local
export PS1="oracle=$ORACLE_SID-> "
[oracle@imrac11g1 ~]$ vi .var_oracle_db.sh
[oracle@imrac11g1 ~]$ cat .var_oracle_db.sh
#!/bin/ksh
umask 022
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_01
export ORACLE_SID=orcl1
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ctx/lib:$ORACLE_HOME/rdbms/lib:/usr/dt/lib:/usr/lib:/usr/openwin/lib:/lib
export NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252"
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/plsql/jlib:$ORACLE_HOME/ord/jlib:$ORACLE_HOME/network/jlib:$ORACLE_HOME/owm/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/ord/ts/jlib
export ORACLE_HOSTNAME=imrac11g1.tk.local
export PS1="oracle=$ORACLE_SID-> "
[oracle@imrac11g1 ~]$ exit
logout
[root@imrac11g1 Server]#
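
These files are meant to be sourced before working with each product; a quick usage example as the oracle user (the Grid file shown here, the DB file works the same way):

# Load the Grid environment and confirm the key variables
su - oracle
. ~/.var_oracle_grid.sh
echo $ORACLE_HOME $ORACLE_SID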

As the root user, let's install the cvuqdisk package for the cluster.
Note: without cvuqdisk, the CVU (Cluster Verification Utility) cannot discover shared disks, and you will receive an error when running the cluvfy script.
For more information, see Installing the cvuqdisk Package for Linux.

rpm -Uvh /backup/grid/rpm/cvuqdisk-1.0.9-1.rpm

[root@imrac11g1 Server]# rpm -Uvh /backup/grid/rpm/cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@imrac11g1 Server]#
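
Once the second node is available (after the clone in Part 2), the same grid media also ships runcluvfy.sh, which can be run as the oracle user to validate all of these prerequisites before installing Grid Infrastructure (a sketch using the node names from this series):

# Pre-installation check for Grid Infrastructure on both nodes (run as oracle)
cd /backup/grid
./runcluvfy.sh stage -pre crsinst -n imrac11g1,imrac11g2 -verbose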

Let's configure the NTP servers (ntpd), since correct time synchronization is essential for the Grid and the entire Oracle RAC environment to work properly.

[root@imrac11g1 Server]# vi /etc/sysconfig/ntpd
[root@imrac11g1 Server]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""
[root@imrac11g1 Server]# vi /etc/ntp.conf
[root@imrac11g1 Server]# cat /etc/ntp.conf
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 3.pool.ntp.org
server 4.pool.ntp.org
server 5.pool.ntp.org
server 6.pool.ntp.org

#broadcast 192.168.1.255 key 42         # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 key 42             # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 key 42  # manycast client

# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

# Drift file.  Put this in a directory which the daemon can write to.
# No symbolic links allowed, either, since the daemon updates the file
# by creating a temporary in the same directory and then rename()'ing
# it to the file.
driftfile /var/lib/ntp/drift

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8
[root@imrac11g1 Server]# chkconfig ntpd on
[root@imrac11g1 Server]# service ntpd start
ntpd: Synchronizing with time server:                      [  OK  ]
Starting ntpd:                                             [  OK  ]
[root@imrac11g1 Server]#
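
A quick way to confirm that ntpd is actually synchronizing is to query its peers; after a few minutes one of them should be marked with an asterisk:

# List the NTP peers and their sync state
ntpq -p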

Let's shut down our node and make a clone of the virtual machine.

[root@imrac11g1 Server]# shutdown -h 0

Broadcast message from root (pts/0) (Wed Feb  5 22:22:56 2014):

The system is going down for system halt NOW!
[root@imrac11g1 Server]#

Continued in the article Implementing Oracle Database 11gR2 RAC on VirtualBox on Linux with iSCSI – Part 2.


Author: Maycon Tomiasi

Holds a degree in Information Technology from FIPP (Faculdade de Informática de Presidente Prudente) and works as an Oracle DBA Analyst at Teiko Soluções em Tecnologia da Informação, based in Blumenau/SC. Certified OCP 10g/11g/12c, OCS 11g Implementation, OCE 11g Performance Tuning, OCE 11g RAC & Grid, and OPN Specialist, with knowledge of PHP.