Jul 27, 2009

Testing Oracle cluster and RAC Transparent Application Failover (TAF)

--based on--

http://underdarkonsole.blogspot.com/2009/07/real-aplication-cluster-sekedar.html
http://underdarkonsole.blogspot.com/2009/07/install-database-for-cluster-aplication.html

Following on from the articles above, the next step is to verify that the RAC setup is actually working. Below are a few commands we can use... :D
so... let's begin.......

--commands--

NB: in this example the database (unique) name used is "db"

Check the status of the instances and services on all nodes:

[oracle@rac1 ~]$ srvctl status database -d db
Instance db1 is running on node rac1
Instance db2 is running on node rac2


To check just one of the nodes:

[oracle@rac1 ~]$ srvctl status instance -d db -i db1
Instance db1 is running on node rac1


To stop or start an instance, just replace status with stop or start; for example (same database and instance names as above):
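
[oracle@rac1 ~]$ srvctl stop instance -d db -i db1
[oracle@rac1 ~]$ srvctl start instance -d db -i db1
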
To see which node applications are running on a node:

[oracle@rac1 ~]$ srvctl status nodeapps -n rac1
VIP is running on node: rac1
GSD is running on node: rac1
Listener is running on node: rac1
ONS daemon is running on node: rac1


To view the database configuration:

[oracle@rac1 ~]$ srvctl config database -d db
rac1 db1 /u01/app/oracle/product/10.2.0/db_1
rac2 db2 /u01/app/oracle/product/10.2.0/db_1

[oracle@rac1 ~]$ srvctl config nodeapps -n rac1 -a -g -s -l
VIP exists.: /rac1-vip.localdomain/192.168.1.200/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.

[oracle@rac1 ~]$ srvctl config nodeapps -n rac2 -a -g -s -l
VIP exists.: /rac2-vip.localdomain/192.168.1.204/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.


And to start the Oracle Enterprise Manager service:


$emctl start dbconsole


to stop it, replace start with stop:
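
$emctl stop dbconsole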


--creating a service with srvctl--

[oracle@rac1 ~]$ srvctl add service -d db -s jajal -r db1 -a db2
[oracle@rac1 ~]$ srvctl config service -d db
db_service PREF: db1 AVAIL: db2
jajal PREF: db1 AVAIL: db2


Here -r sets the preferred instance and -a the available (failover) instance. To start and stop the service:


srvctl start service -d db -s jajal
srvctl stop service -d db -s jajal
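
To check whether the service is up afterwards, the matching status command can be used:

srvctl status service -d db -s jajal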

--another problem--

This is probably rare, but it happened to me: the address picked for the virtual IP (VIP) clashed with another IP on the network and had to be changed. That can be fixed by modifying the VIP as follows:



$ su -
# srvctl modify nodeapps -n rac2 -A rac2-vip.localdomain/192.168.1.204/255.255.255.0/eth0
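
NB: in my experience the node applications usually have to be stopped before the VIP can be modified, and started again afterwards; roughly like this (a sketch, check your own version's documentation):

# srvctl stop nodeapps -n rac2
# srvctl modify nodeapps -n rac2 -A rac2-vip.localdomain/192.168.1.204/255.255.255.0/eth0
# srvctl start nodeapps -n rac2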




--RAC TAF Failover--

Next we get to the core part: TAF. The setup is very simple; the configuration file is $ORACLE_HOME/network/admin/tnsnames.ora.

The available failover methods and configuration options can be seen here.
I use the following configuration:

DB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db.simplimobile.com)
      (INSTANCE_NAME = db2)
      (FAILOVER_MODE =
        (BACKUP = DB1)
        (TYPE = select)
        (METHOD = preconnect)
      )
    )
  )

DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.localdomain)(PORT = 1521))
    (FAILOVER = yes)
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db.simplimobile.com)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (BACKUP = DB2)
        (METHOD = preconnect)
        (RETRIES = 180)
        (DELAY = 5)
      )
      (INSTANCE_NAME = db1)
    )
  )


NB: the green highlighting in the original post marked the additions (the FAILOVER_MODE settings)......
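
Before testing failover itself, a quick sanity check of the new entries (assuming the Oracle client tools are on the PATH) is to ping both aliases:

[oracle@rac1 ~]$ tnsping DB1
[oracle@rac1 ~]$ tnsping DB2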

And the next question is: how do we know the failover actually works...?


--Let's find out--

[oracle@rac1 ~]$ sqlplus scott/tiger@db1


Check which instances are active:

SQL> select inst_id, instance_name from gv$instance;

   INST_ID INSTANCE_NAME
---------- ----------------
         1 db1
         2 db2


SQL> select instance_name from v$instance ;

INSTANCE_NAME
----------------
db1



SQL> select machine, failover_type, failover_method, failed_over, count(*)
from v$session group by machine, failover_type, failover_method, failed_over;

MACHINE              FAILOVER_TYPE FAILOVER_M FAI   COUNT(*)
-------------------- ------------- ---------- --- ----------
rac1.localdomain     NONE          NONE       NO          24
rac1.localdomain     SELECT        PRECONNECT NO           1


Then, without exiting sqlplus, run the following from another terminal, and afterwards repeat the query above.
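
To make the TYPE=select failover easier to observe, you can optionally start a long-running query in the sqlplus session first; a simple (hypothetical) example is a large cartesian join:

SQL> select count(*) from all_objects a, all_objects b;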


[oracle@rac2 ~]$ srvctl stop instance -d db -i db1
[oracle@rac2 ~]$ srvctl status database -d db
Instance db1 is not running on node rac1
Instance db2 is running on node rac2



SQL> select machine, failover_type, failover_method, failed_over, count(*)
from v$session group by machine, failover_type, failover_method, failed_over;

MACHINE              FAILOVER_TYPE FAILOVER_M FAI   COUNT(*)
-------------------- ------------- ---------- --- ----------
rac2.localdomain     NONE          NONE       NO          20
rac2.localdomain     SELECT        PRECONNECT YES          1


To see it even more clearly, try powering off the rac1 machine and watch the IP configuration on rac2 change (ifconfig).
NB: rac1's VIP will be relocated to rac2.

# before the RAC1 machine is shut down
[root@rac2 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:7B:00:DC
inet addr:192.168.1.191 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe7b:dc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:620286 errors:0 dropped:0 overruns:0 frame:0
TX packets:475415 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:62015968 (59.1 MiB) TX bytes:46630473 (44.4 MiB)
Interrupt:185 Base address:0x1080

eth0:1 Link encap:Ethernet HWaddr 00:0C:29:7B:00:DC
inet addr:192.168.1.204 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:185 Base address:0x1080

eth1 Link encap:Ethernet HWaddr 00:0C:29:7B:00:E6
inet addr:192.168.2.191 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe7b:e6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:62834294 errors:3 dropped:11 overruns:0 frame:0
TX packets:30604774 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3673394008 (3.4 GiB) TX bytes:753379051 (718.4 MiB)
Interrupt:169 Base address:0x1400

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:881272 errors:0 dropped:0 overruns:0 frame:0
TX packets:881272 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:79229853 (75.5 MiB) TX bytes:79229853 (75.5 MiB)



After the machine is shut down, a few moments later the IP configuration changes to:

[root@rac2 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:7B:00:DC
inet addr:192.168.1.191 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe7b:dc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:621431 errors:0 dropped:0 overruns:0 frame:0
TX packets:480049 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:62168391 (59.2 MiB) TX bytes:47238196 (45.0 MiB)
Interrupt:185 Base address:0x1080

eth0:1 Link encap:Ethernet HWaddr 00:0C:29:7B:00:DC
inet addr:192.168.1.204 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:185 Base address:0x1080

eth0:2 Link encap:Ethernet HWaddr 00:0C:29:7B:00:DC
inet addr:192.168.1.200 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:185 Base address:0x1080

eth1 Link encap:Ethernet HWaddr 00:0C:29:7B:00:E6
inet addr:192.168.2.191 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe7b:e6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:62865093 errors:3 dropped:11 overruns:0 frame:0
TX packets:30619912 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3714022080 (3.4 GiB) TX bytes:756831651 (721.7 MiB)
Interrupt:169 Base address:0x1400

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:881867 errors:0 dropped:0 overruns:0 frame:0
TX packets:881867 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:79248382 (75.5 MiB) TX bytes:79248382 (75.5 MiB)


NB: the red lines in the original post (the new eth0:2 alias carrying 192.168.1.200, rac1's VIP) show the change; this virtual IP is handed back to RAC1 once the RAC1 machine is ready again.


That's all.
Thank you.
And apologies for any remaining shortcomings.. I'm still learning.....

keep learning, keep smart, and never give up...

cayoooo

Install database for cluster application with NFS or iSCSI

--based on--

The previous post: Real Application Cluster - just some experience to share..
The Clusterware is already installed.... next up is the database installation.

--preparation--

All preparation follows the posts referenced above, so the hardware and other prerequisites are assumed to be done.

--installation--

The machine used for the installation is rac1. Prepare the database installation media
and extract it:

unzip 10201_database_linux32.zip

Change into the extracted directory and run the installer:

$cd database
$./runInstaller

and the screen captures are as follows:

Run root.sh on each machine in the order given in the nodes column.. and not simultaneously.. OK..

At this point... all that's left is to create the database by running:

dbca

Happy exploring on your own.....
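
Once dbca finishes, the new database can be checked with srvctl, as in the testing post above (assuming the database was also named db):

$ srvctl status database -d db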

sources:
http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi_2.html
http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnLinuxUsingNFS.php

Jul 24, 2009

Real Application Cluster - just some experience to share..

Hi....
This post is just documentation of my own experience..... comments are still welcome.

so let's begin.....

-- first --

I won't explain what Oracle RAC is here... you can look it up with Google, on Wikipedia, or from Oracle itself... OK, or at http://en.wikipedia.org/wiki/Oracle_RAC

-- Topology --

The topology follows the one at http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html , with two node machines and one machine for shared storage. In practice we used two sharing methods: NFS and iSCSI. The NFS setup can be found on Google; for iSCSI, see the earlier post, or Google as well :D ( Google all the way......... ).



-- installation --

Suggestions:
1. For hardware and software, follow Oracle's recommendations.
2. From experience, use good LAN cards, because they matter a lot... (they don't have to be expensive, as long as they work properly.. :D)

For the operating system, Oracle Enterprise Linux 4 or 5 is recommended.
For the Oracle software, prepare the Clusterware and database installers (this uses Oracle 10g).

-- host --

NB: do this on both nodes, rac1 and rac2

Make the /etc/hosts file identical on node1 and node2:

[root@rac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost


# public network (eth0)
192.168.1.190 rac1.localdomain rac1
192.168.1.191 rac2.localdomain rac2

# private interconnect (eth1)
192.168.2.190 rac1-priv.localdomain rac1-priv
192.168.2.191 rac2-priv.localdomain rac2-priv

# public virtual IP (eth0)
192.168.1.200 rac1-vip.localdomain rac1-vip
192.168.1.204 rac2-vip.localdomain rac2-vip

# private storage network for open filer (eth1)
192.168.1.195 openfiler1
192.168.2.195 openfiler2
#-----------end of file-------------------



-- user --
Create the user and groups as follows:

#groupadd oinstall
#groupadd dba
#groupadd oper
#useradd -g oinstall -G dba oracle
#passwd oracle
then log in as the oracle user:

#su - oracle

and edit the .bash_profile file
(taken from http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnLinuxUsingNFS.php)

adding the following:

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
#=================== note ===========

ORACLE_SID=RAC1; export ORACLE_SID

#======= change this to ORACLE_SID=RAC2 on the second node =========

ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
#---------------end of file--------------------------

run these commands as root:

cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF


cat >> /etc/pam.d/login <<EOF
session required /lib/security/pam_limits.so
EOF


cat >> /etc/modprobe.conf <<EOF
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF

cat >> /etc/hosts.equiv <<EOF
+rac1 oracle
+rac2 oracle
+rac1-priv oracle
+rac2-priv oracle

EOF

Make sure SELinux is disabled in /etc/selinux/config:
SELINUX=disabled
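
To verify the current state (assuming the standard SELinux tools are installed):

# getenforce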

Add the following to /etc/sysctl.conf:

#======================
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
#fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
#net.core.rmem_default=262144
#net.core.rmem_max=262144
#net.core.wmem_default=262144
#net.core.wmem_max=262144

# Additional and amended parameters suggested by Kevin Closson
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ipfrag_high_thresh=524288
net.ipv4.ipfrag_low_thresh=393216
net.ipv4.tcp_rmem=4096 524288 16777216
net.ipv4.tcp_wmem=4096 524288 16777216
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_window_scaling=1
net.core.optmem_max=524287
net.core.netdev_max_backlog=2500
sunrpc.tcp_slot_table_entries=128
sunrpc.udp_slot_table_entries=128
net.ipv4.tcp_mem=16384 16384 16384
#========end of file============

then run:

#/sbin/sysctl -p
#chkconfig rsh on
#chkconfig rlogin on
#service xinetd reload


--setting up ssh privileges--

First, generate keys with the commands below (on each node). The keys let rac1 and rac2 open ssh connections to each other without entering a password.

[oracle@rac2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
06:39:6f:de:e8:37:0d:8b:93:2f:55:9a:57:68:8c:01 oracle@rac2

[oracle@rac2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
10:f4:73:e0:79:13:8f:74:85:c5:3a:85:28:c8:18:50 oracle@rac2

Then run the following commands on one node only:

$ ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac1 (192.168.1.100)' can't be established.
RSA key fingerprint is 31:8a:f8:9e:28:c2:b7:d3:90:8d:dc:76:ca:d9:44:76.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.1.100' (RSA) to the list of known hosts.
oracle@rac1's password: xxxxx

$ ssh rac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Enter passphrase for key '/home/oracle/.ssh/id_rsa': xxxxx

$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac2 (192.168.1.101)' can't be established.
RSA key fingerprint is 85:2d:cd:eb:17:2c:32:21:52:c2:ee:89:d2:11:2a:e6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.1.101' (RSA) to the list of known hosts.
oracle@rac2's password: xxxxx

$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password: xxxxx
$ scp ~/.ssh/authorized_keys rac2:.ssh/authorized_keys
oracle@rac2's password: xxxxx
authorized_keys 100% 1652 1.6KB/s 00:00

Change the permissions of the authorized_keys file on each node as follows:

$ chmod 600 ~/.ssh/authorized_keys

To test that the configuration works, run:

[oracle@rac1 ~]$ ssh rac1 "date;hostname"
Tue Jul 28 09:50:39 WIT 2009
rac1
[oracle@rac1 ~]$ ssh rac2 "date;hostname"
Tue Jul 28 09:50:49 WIT 2009
rac2


--files, installation directories, and mount points--

Server side:

*iSCSI:
see how to set it up at http://underdarkonsole.blogspot.com/2009/07/iscsi-configure-with-openfiler-access.html

*NFS:
Create the directory that will be shared by the server:

mkdir /share1

Add the following to /etc/exports:

/share1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
#==================end of file=======================================

Then restart the NFS service (assuming NFS is installed):

chkconfig nfs on
service nfs restart (or: /etc/init.d/nfs restart)
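
A quick way to confirm the export is visible (assuming nfs-utils is installed):

# showmount -e localhost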

On the client side:
run on both nodes, rac1 and rac2:

# mkdir /u01

*iSCSI
Edit the file /etc/iscsi.conf:

.................
DiscoveryAddress=192.168.2.195
.................


then run:

#/etc/init.d/iscsi restart

edit the file /etc/ocfs2/cluster.conf and add:

node:
    ip_port = 7777
    ip_address = 192.168.1.190
    number = 0
    name = rac1
    cluster = ocfs2

node:
    ip_port = 7777
    ip_address = 192.168.1.191
    number = 1
    name = rac2
    cluster = ocfs2

cluster:
    node_count = 2
    name = ocfs2
#============end of file============

NB: the node name entries must match each machine's hostname (rac1 and rac2 here).

it is recommended to use the GUI instead, with:

$su -
#ocfs2console &
--additional notes for the OCFS2 file system--

/etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 7
Network idle timeout: 10000
Network keepalive delay: 5000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

/etc/init.d/o2cb online ocfs2
Starting O2CB cluster ocfs2: OK

# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [7]: 61
Specify network idle timeout in ms (>=5000) [10000]: 10000
Specify network keepalive delay in ms (>=1000) [5000]: 5000
Specify network reconnect delay in ms (>=2000) [2000]: 2000
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK


Check the iSCSI volume:

[root@rac1 ~]# iscsi-ls
*******************************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-4(15-Jan-2007)
*******************************************************************************
TARGET NAME : iqn.2006-01.com.openfiler:tsn.asm
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 192.168.2.195:3260,1
SESSION STATUS : ESTABLISHED AT Fri Jul 24 08:28:45 WIT 2009
SESSION ID : ISID 00023d000001 TSIH e00
*******************************************************************************


On the rac1 machine the volume is detected as sda1.
Run this command from one node only, e.g. just rac1 (-b sets the block size, -C the cluster size, -N the number of node slots, and -L the volume label):

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oracle /dev/sda1

Add the following to /etc/fstab:

LABEL=oracle /u01 ocfs2 _netdev,datavolume,nointr 0 0
#=========end of file======================

and mount the volume:

#mount -t ocfs2 -o datavolume,nointr -L "oracle" /u01
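
A quick check that the volume is mounted where expected:

# df -h /u01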

*NFS:
edit /etc/fstab and add:

openfiler2:/share1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
#==================end of file====================

then run:

# mount /u01


--folders and files for the installation--

NB: run from one node only

touch /u01/crs_configuration
touch /u01/voting_disk
mkdir -p /u01/crs/oracle/product/10.2.0/crs
mkdir -p /u01/app/oracle/product/10.2.0/db_1
mkdir -p /u01/oradata

chown -R oracle:oinstall /u01


--install clusterware--
Change into the Clusterware installer directory and run, as the oracle user:

$./runInstaller



Run the commands in the dialog box in order, following the nodes column.
If, after running root.sh on node rac2, you see:

Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to
configure virtual IPs.

then run:

# cd /u01/crs/oracle/product/10.2.0/crs/bin
# ./vipca

and use the IP configuration from /etc/hosts:


# public virtual IP (eth0)
192.168.1.200 rac1-vip.localdomain rac1-vip
192.168.1.204 rac2-vip.localdomain rac2-vip

When vipca finishes, click OK in the Clusterware installation dialog.

Clusterware is done; all that's left is to install the database >> next... this page is getting too long.........
to be continued

Jul 14, 2009

iSCSI configuration with Openfiler... accessed from XP and Linux

Hi.... and let's go....

-- first --

Install Openfiler... (you can download it: here)
Access Openfiler via its web interface.

Log in with the user and password that you set during installation.

-- step 2 --

Create the volume group.

Select the partition to be used. In my case I use /dev/hda.

Now create the partition.

Once the partition is created, go back to the Volume Group menu and create a group with the partition we just made.


-- step 3 --

Create a volume by clicking the Add Volume menu,

then go to the iSCSI Target menu and add a new iSCSI target,

then go to the LUN Mapping menu and map the partition.

At this point the iSCSI target is ready to use...

-- step 4 --

Access the iSCSI target.
To access the target, we need an iSCSI initiator:
- XP: Microsoft iSCSI Software Initiator Version 2.08
- Ubuntu: open-iscsi (you can install it with the package manager)
- Other Linux: use the distribution's iSCSI initiator.

* Windows
- Download & install the iSCSI Initiator, then run it from the program menu.

Add the target:

Go to the Targets menu, click the Refresh button, and then log on to the iSCSI target.

Now go to Control Panel > Administrative Tools > Computer Management,
where you can format the disk and use it..... :D

* Ubuntu
- Install open-iscsi via the package manager, or in text mode:
# sudo apt-get install open-iscsi
- The default settings live in /etc/iscsi/iscsid.conf
- Discover the iSCSI targets with:
# iscsiadm -m discovery -t sendtargets -p 192.168.1.195 # the Openfiler IP
- Log on to the iSCSI targets (or see the sketch below this list):
# /etc/init.d/open-iscsi restart
- To see the new disk, run fdisk -l
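
Alternatively, a single target can be logged into explicitly (a sketch, reusing the target name discovered earlier in this blog):

# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.asm -p 192.168.1.195 --login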

have a nice day.. :D