Jul 24, 2009

Real Application Cluster - just some personal experience.. and sharing..

Hi....
In this post I'm just documenting my own experience..... comments are of course still welcome.

so let's begin.....

-- first --

I won't explain Oracle RAC itself here... you can look it up with a little help from Google, or on Wikipedia, or at Oracle... okay, or go straight to http://en.wikipedia.org/wiki/Oracle_RAC

-- Topologi --

The topology follows the one at http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html , with 2 node machines and 1 machine as shared storage. In practice we used two sharing methods: the first uses NFS and the other uses iSCSI. The NFS setup is easy to find on Google; for iSCSI you can look at my earlier post, or check Google as well :D ( Google all the way......... ).



-- installation --

suggestions:
1. For hardware and software, follow Oracle's recommendations
2. From experience, use good LAN cards, because they really make a difference... ( no need for expensive ones, as long as they work properly.. :D )

For the operating system it is recommended to use Oracle Enterprise Linux 4 or 5.
On the Oracle side, prepare Clusterware and the database (we use Oracle 10g).

-- hosts --

NB: do this on both nodes, rac1 and rac2

Make the /etc/hosts file identical on node1 and node2

[root@rac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost


# public network (eth0)
192.168.1.190 rac1.localdomain rac1
192.168.1.191 rac2.localdomain rac2

# private interconnect (eth1)
192.168.2.190 rac1-priv.localdomain rac1-priv
192.168.2.191 rac2-priv.localdomain rac2-priv

# public virtual IP (eth0)
192.168.1.200 rac1-vip.localdomain rac1-vip
192.168.1.204 rac2-vip.localdomain rac2-vip

# storage network for Openfiler
192.168.1.195 openfiler1
192.168.2.195 openfiler2
#-----------end of file-------------------
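A typo in /etc/hosts on either node is a common cause of installer failures, so a quick duplicate check before copying the file to both nodes can save time. The `check_hosts` helper below is my own sketch, not part of the original guide:

```shell
# Hypothetical helper: flag duplicate IPs or hostnames in a hosts file
# before it is copied to both nodes.
check_hosts() {
  # first field = IP, remaining fields = names; comment lines are skipped
  dup_ips=$(grep -v '^#' "$1" | awk 'NF {print $1}' | sort | uniq -d)
  dup_names=$(grep -v '^#' "$1" | awk 'NF {for (i = 2; i <= NF; i++) print $i}' | sort | uniq -d)
  if [ -z "$dup_ips" ] && [ -z "$dup_names" ]; then
    echo "OK"
  else
    echo "DUPLICATES: $dup_ips $dup_names"
  fi
}
```

Running `check_hosts /etc/hosts` on each node should print OK before you continue.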



-- user --
Create the users and groups as follows:

#groupadd oinstall
#groupadd dba
#groupadd oper
#useradd -g oinstall -G dba oracle
#passwd oracle
continue by logging in as the oracle user:

#su - oracle

and edit the .bash_profile file
(taken from http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnLinuxUsingNFS.php)

add the following:

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
#=================== attention ===========

ORACLE_SID=RAC1; export ORACLE_SID

#======= change this to ORACLE_SID=RAC2 on the second node =========

ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
#---------------end of file--------------------------
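The only line that differs between the two nodes is ORACLE_SID. One way to keep .bash_profile byte-identical on both nodes (my own variation, not from the guide) is to derive the SID from the hostname:

```shell
# Hypothetical variant: derive ORACLE_SID from the node name so the same
# .bash_profile can be copied to rac1 and rac2 unchanged.
node_num=$(hostname -s | sed 's/^rac//')   # rac1 -> 1, rac2 -> 2
ORACLE_SID=RAC${node_num}; export ORACLE_SID
```

This only works because both node names follow the `racN` pattern; with different hostnames, stick to editing the file by hand on each node.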

run these commands as root

cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
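To double-check that the entries actually landed in limits.conf (for example after a copy-paste), a small awk lookup can read a limit back out; the `get_limit` helper name is mine, not part of the guide:

```shell
# Hypothetical helper: read one limit back out of a limits.conf-style file.
# Usage: get_limit <file> <user> <soft|hard> <item>
get_limit() {
  awk -v u="$2" -v t="$3" -v i="$4" '$1 == u && $2 == t && $3 == i { print $4 }' "$1"
}
```

For example, `get_limit /etc/security/limits.conf oracle hard nofile` should print 65536 after the block above has been appended.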


cat >> /etc/pam.d/login <<EOF
session required /lib/security/pam_limits.so
EOF


cat >> /etc/modprobe.conf <<EOF
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF

cat >> /etc/hosts.equiv <<EOF
+rac1 oracle
+rac2 oracle
+rac1-priv oracle
+rac2-priv oracle

EOF

Make sure SELinux is disabled in the file /etc/selinux/config:
SELINUX=disabled

Add the following to /etc/sysctl.conf

#======================
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
#fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
#net.core.rmem_default=262144
#net.core.rmem_max=262144
#net.core.wmem_default=262144
#net.core.wmem_max=262144

# Additional and amended parameters suggested by Kevin Closson
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ipfrag_high_thresh=524288
net.ipv4.ipfrag_low_thresh=393216
net.ipv4.tcp_rmem=4096 524288 16777216
net.ipv4.tcp_wmem=4096 524288 16777216
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_window_scaling=1
net.core.optmem_max=524287
net.core.netdev_max_backlog=2500
sunrpc.tcp_slot_table_entries=128
sunrpc.udp_slot_table_entries=128
net.ipv4.tcp_mem=16384 16384 16384
#========end of file============

then run

#/sbin/sysctl -p
#chkconfig rsh on
#chkconfig rlogin on
#service xinetd reload
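After `sysctl -p` it's worth reading a couple of values back to confirm they actually took effect (a typo in sysctl.conf just produces a warning and moves on). A small sketch of my own, not from the guide:

```shell
# Sketch: verify that a kernel parameter matches what sysctl.conf set.
# Usage: check_sysctl <key> <expected-value>
check_sysctl() {
  actual=$(sysctl -n "$1")
  if [ "$actual" = "$2" ]; then
    echo "$1 OK"
  else
    echo "$1 MISMATCH: got $actual, expected $2"
  fi
}
```

For example, `check_sysctl kernel.shmmax 2147483648` and `check_sysctl kernel.shmmni 4096` should both report OK on each node.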


-- setting up ssh user equivalence --

First, generate keys with the commands below ( on each node ). The keys are there so that rac1 and rac2 do not need to enter a password when making ssh connections to each other.

[oracle@rac2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
06:39:6f:de:e8:37:0d:8b:93:2f:55:9a:57:68:8c:01 oracle@rac2

[oracle@rac2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
10:f4:73:e0:79:13:8f:74:85:c5:3a:85:28:c8:18:50 oracle@rac2

Then run the following commands on one node only

$ ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac1 (192.168.1.190)' can't be established.
RSA key fingerprint is 31:8a:f8:9e:28:c2:b7:d3:90:8d:dc:76:ca:d9:44:76.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.1.190' (RSA) to the list of known hosts.
oracle@rac1's password: xxxxx

$ ssh rac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Enter passphrase for key '/home/oracle/.ssh/id_rsa': xxxxx

$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac2 (192.168.1.191)' can't be established.
RSA key fingerprint is 85:2d:cd:eb:17:2c:32:21:52:c2:ee:89:d2:11:2a:e6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.1.191' (RSA) to the list of known hosts.
oracle@rac2's password: xxxxx

$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password: xxxxx
$ scp ~/.ssh/authorized_keys rac2:.ssh/authorized_keys
oracle@rac2's password: xxxxx
authorized_keys 100% 1652 1.6KB/s 00:00

Change the permissions on authorized_keys on each node as follows

$ chmod 600 ~/.ssh/authorized_keys

To test that the configuration works, run these commands:

[oracle@rac1 ~]$ ssh rac1 "date;hostname"
Tue Jul 28 09:50:39 WIT 2009
rac1
[oracle@rac1 ~]$ ssh rac2 "date;hostname"
Tue Jul 28 09:50:49 WIT 2009
rac2
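The clusterware installer checks user equivalence against every host alias, not just the two names tested above, so it's worth looping over all of them, with BatchMode on so that a hidden password prompt shows up as an error instead of a hang. A sketch of my own:

```shell
# Sketch: confirm passwordless ssh works for every alias the installer uses.
check_ssh_equiv() {
  for h in rac1 rac2 rac1-priv rac2-priv; do
    if ssh -o BatchMode=yes "$h" hostname </dev/null >/dev/null 2>&1; then
      echo "$h OK"
    else
      echo "$h FAILED"
    fi
  done
}
```

Run `check_ssh_equiv` as oracle on both nodes; all four lines should say OK.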


-- files, installation directories and mount points --

Server side:
*iSCSI :
see how to set it up at http://underdarkonsole.blogspot.com/2009/07/iscsi-configure-with-openfiler-access.html

*NFS :
Create the directory that will be shared by the server
mkdir /share1

Add the following to /etc/exports

/share1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
#==================end of file=======================================

Then restart the nfs service ( assuming nfs is already installed )

chkconfig nfs on
service nfs restart    ( or: /etc/init.d/nfs restart )
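The mount will misbehave later if the export options are wrong, no_root_squash in particular, since root on the RAC nodes needs to create files owned by oracle on the share. A small check of the exports entry (the `check_export_opts` helper is hypothetical, my own sketch):

```shell
# Hypothetical helper: make sure an /etc/exports entry carries the options
# this setup relies on (rw, sync, no_root_squash).
check_export_opts() {
  line="$1"
  for opt in rw sync no_root_squash; do
    case "$line" in
      *"$opt"*) ;;
      *) echo "missing: $opt"; return 1 ;;
    esac
  done
  echo "OK"
}
```

For example: `check_export_opts "$(grep '^/share1' /etc/exports)"` should print OK with the entry above.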

Client side:
run on both nodes, rac1 and rac2

# mkdir /u01

*iSCSI
Edit the file /etc/iscsi.conf

.................
DiscoveryAddress=192.168.2.195
.................


then restart the iscsi service

# service iscsi restart

edit the file /etc/ocfs2/cluster.conf and add

node:
	ip_port = 7777
	ip_address = 192.168.1.190
	number = 0
	name = rac1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.191
	number = 1
	name = rac2
	cluster = ocfs2

cluster:
	node_count = 2
	name = ocfs2
#============end of file============
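cluster.conf is picky about layout: each attribute line under a `node:` or `cluster:` heading must be indented (the stock file uses a leading tab), and the node `name` must match the machine's hostname. Since that indentation is easy to lose when copy-pasting, here is a small generator of my own (not from the guide) that keeps it right:

```shell
# Sketch: emit one correctly tab-indented node stanza for cluster.conf.
# Usage: ocfs2_node <number> <name> <ip>
ocfs2_node() {
  printf 'node:\n\tip_port = 7777\n\tip_address = %s\n\tnumber = %s\n\tname = %s\n\tcluster = ocfs2\n\n' \
    "$3" "$1" "$2"
}
```

For this topology: `ocfs2_node 0 rac1 192.168.1.190` followed by `ocfs2_node 1 rac2 192.168.1.191`, then append the `cluster:` stanza.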

it is recommended to use the GUI, with these commands

$su -
#ocfs2console &
-- additional notes on the ocfs2 file system --

/etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 7
Network idle timeout: 10000
Network keepalive delay: 5000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

/etc/init.d/o2cb online ocfs2
Starting O2CB cluster ocfs2: OK

# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [7]: 61
Specify network idle timeout in ms (>=5000) [10000]: 10000
Specify network keepalive delay in ms (>=1000) [5000]: 5000
Specify network reconnect delay in ms (>=2000) [2000]: 2000
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK


Check the iSCSI volume

[root@rac1 ~]# iscsi-ls
*******************************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-4(15-Jan-2007)
*******************************************************************************
TARGET NAME : iqn.2006-01.com.openfiler:tsn.asm
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 192.168.2.195:3260,1
SESSION STATUS : ESTABLISHED AT Fri Jul 24 08:28:45 WIT 2009
SESSION ID : ISID 00023d000001 TSIH e00
*******************************************************************************


on machine rac1 the volume is detected as sda1
now run this command from one node only, e.g. just rac1:

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oracle /dev/sda1

Add to /etc/fstab

LABEL=oracle /u01 ocfs2 _netdev,datavolume,nointr 0 0
#=========end of file======================

and mount the volume

#mount -t ocfs2 -o datavolume,nointr -L "oracle" /u01

*NFS
edit the file /etc/fstab and add

openfiler2:/share1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
#==================end of file====================

run the command

# mount /u01
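Oracle is strict about NFS mount options, actimeo=0 in particular, because attribute caching can hide one node's datafile changes from the other. After mounting, check the options actually in effect rather than the ones written in fstab. A sketch with a hypothetical helper that inspects one line of `mount` output:

```shell
# Hypothetical helper: check that a mount line (as printed by `mount`)
# contains the options Oracle requires for NFS datafile volumes.
check_nfs_opts() {
  line="$1"
  for opt in hard nointr actimeo=0; do
    case "$line" in
      *"$opt"*) ;;
      *) echo "missing: $opt"; return 1 ;;
    esac
  done
  echo "OK"
}
```

For example: `check_nfs_opts "$(mount | grep ' /u01 ')"` should print OK on both nodes.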


-- installation folders and files --

NB: run from one node only

touch /u01/crs_configuration
touch /u01/voting_disk
mkdir -p /u01/crs/oracle/product/10.2.0/crs
mkdir -p /u01/app/oracle/product/10.2.0/db_1
mkdir -p /u01/oradata

chown -R oracle:oinstall /u01


-- install clusterware --
Go into the clusterware source directory and, as the oracle user, run

$./runInstaller



Run the commands shown in the dialog box in order, following the nodes column.
If, after running root.sh on node rac2, you get

Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to
configure virtual IPs.

then run:

# cd /u01/crs/oracle/product/10.2.0/crs/bin
# ./vipca

and use the same IP configuration as in /etc/hosts


# public virtual IP (eth0)
192.168.1.200 rac1-vip.localdomain rac1-vip
192.168.1.204 rac2-vip.localdomain rac2-vip

When vipca finishes, click OK in the clusterware installation box.

clusterware is done; all that's left is installing the database >> next... this page is getting way too long.........
to be continued
