A few rac2-related setting changes


Change ORACLE_SID in .bash_profile to ORCL2.
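The edit can be scripted with sed; a minimal sketch, run here against a throwaway copy of the profile (the original value ORCL1 is an assumption):

```shell
# Demo on a temporary copy; on rac2 you would edit ~oracle/.bash_profile.
profile=/tmp/bash_profile_demo
printf 'export ORACLE_SID=ORCL1\n' > "$profile"
# Rewrite the ORACLE_SID line in place.
sed -i 's/^export ORACLE_SID=.*/export ORACLE_SID=ORCL2/' "$profile"
cat "$profile"
```

After editing the real file, source it with . ~/.bash_profile so the new SID takes effect in the current shell.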

In /etc/hosts,
change the hostname on the 127.0.0.1 line to rac2.
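For reference, a two-node /etc/hosts for this setup might look like the sketch below. The public and VIP addresses match the ifconfig output later in this post; the private interconnect addresses are assumptions. Written to a temporary file here purely for illustration:

```shell
# Illustrative /etc/hosts for a two-node RAC; adjust addresses to your network.
cat > /tmp/hosts_demo <<'EOF'
127.0.0.1       rac2 localhost.localdomain localhost
# public
192.168.89.101  rac1
192.168.89.102  rac2
# private interconnect (addresses assumed)
192.168.206.101 rac1-priv
192.168.206.102 rac2-priv
# virtual IPs
192.168.89.111  rac1-vip
192.168.89.112  rac2-vip
EOF
cat /tmp/hosts_demo
```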


Preparing the clusterware installation


Upload the clusterware archive to /u01 on rac1 via FTP.
10201_clusterware_linux32.zip - download it from the Oracle site.

Change the ownership so the oracle user can use the file.
chown oracle:dba *.zip

Work as the oracle user.
su - oracle

In /u01:
unzip *.zip


Setting up X

- As root, run:
xhost +
- Switch to the oracle user and try xclock:
su - oracle
xclock
If the clock does not appear,
delete the IP from the DISPLAY setting in .bash_profile in the oracle home directory and try again.
Source the file with . ./.bash_profile before running xclock so the changed variable takes effect.
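The DISPLAY fix can also be scripted; a sketch against a temporary copy of the profile (the original value 192.168.89.1:0.0 is an assumed example):

```shell
# Demo on a temporary copy; on the real host edit ~oracle/.bash_profile.
profile=/tmp/display_demo
printf 'export DISPLAY=192.168.89.1:0.0\n' > "$profile"
# Drop the IP so X clients connect over the local socket instead of TCP.
sed -i 's/^export DISPLAY=.*/export DISPLAY=:0.0/' "$profile"
cat "$profile"
```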

Installing clusterware


su - oracle
cd /u01/clusterware
./runInstaller









Go with the defaults.



Change the directory name to crs_1.




If the prerequisite checks pass, click Next. If memory is insufficient, an error occurs and you cannot proceed.
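You can sanity-check memory yourself before launching the installer; a minimal sketch (the exact thresholds the 10gR2 installer enforces are not reproduced here):

```shell
# Show total RAM and swap, the values the prerequisite check looks at.
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo
```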




Click Add and register the second node (rac2).
On rac1:

[root@rac1 ~]# cd /oracle/app/oracle/oraInventory/
[root@rac1 oraInventory]# ./orainstRoot.sh
Changing permissions of /oracle/app/oracle/oraInventory to 770.
Changing groupname of /oracle/app/oracle/oraInventory to dba.
The execution of the script is complete
[root@rac1 oraInventory]# cd ..
[root@rac1 oracle]# cd product/10.2.0/crs_1/
[root@rac1 crs_1]# ./root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 oraInventory]#








On rac2:

[root@rac2 ~]# cd /oracle/app/oracle/
[root@rac2 oracle]# ls
ORA_CRS_HOME  oraInventory  product
[root@rac2 oracle]# cd oraInventory/
[root@rac2 oraInventory]# ./orainstRoot.sh
Changing permissions of /oracle/app/oracle/oraInventory to 770.
Changing groupname of /oracle/app/oracle/oraInventory to dba.
The execution of the script is complete
[root@rac2 oraInventory]# cd ..
[root@rac2 oracle]# cd product/10.2.0/crs_1/
[root@rac2 crs_1]# ./root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/oracle/app/oracle/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@rac2 crs_1]#




Fixing the Java error above
See http://blog.naver.com/s5b8s4?Redirect=Log&logNo=140058030425 for reference.


Around line 120 of /oracle/app/oracle/product/10.2.0/crs_1/bin/vipca, add the unset LD_ASSUME_KERNEL line to the block below (the rest of the workaround block is already in the script).

       #Remove this workaround when the bug 3937317 is fixed
       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi
       unset LD_ASSUME_KERNEL   # this is the line to add
       #End workaround


[root@rac2 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

An error occurs again.

[root@rac2 bin]# ./oifcfg iflist
eth0  192.168.89.0
eth1  192.168.206.0
[root@rac2 bin]# ./oifcfg setif -global eth0/192.168.89.0:public
[root@rac2 bin]# ./oifcfg setif -global eth1/192.168.206.0:cluster_interconnect
[root@rac2 bin]# ./oifcfg getif
eth0  192.168.89.0  global  public
eth1  192.168.206.0  global  cluster_interconnect


[root@rac2 bin]# ./vipca
If Korean text appears garbled, copy the environment variables below into the oracle user's profile and run it again.

# User specific environment and startup programs
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export ORACLE_PATH=$ORACLE_BASE/common/oracle/sql:.:$ORACLE_HOME/rdbms/admin

export ORA_NLS10=$ORACLE_HOME/nls/data
export NLS_LANG=AMERICAN_AMERICA.KO16KSC5601
export LANG=C

With these environment variables set, Korean displays correctly.
















An error occurs while running vipca.
When this happens, set the gateway for rac1 and rac2 in the network configuration tool, check that the hostnames are correct, and then bounce the network interface.
#ifconfig eth0 down
#ifconfig eth0 up
#route -n
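If route -n shows no default route, one can be added temporarily from the command line; a sketch (192.168.89.2 is an assumed gateway address, and a route added this way does not survive a reboot):

```shell
# Add a default route through the assumed gateway, then verify.
route add default gw 192.168.89.2 eth0
route -n
```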


rac1-> ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:6E:EA:16 
          inet addr:192.168.89.101  Bcast:192.168.89.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe6e:ea16/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1059 errors:0 dropped:0 overruns:0 frame:0
          TX packets:905 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:97059 (94.7 KiB)  TX bytes:127858 (124.8 KiB)
          Interrupt:185 Base address:0x1480

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:6E:EA:16 
          inet addr:192.168.89.111  Bcast:192.168.89.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:185 Base address:0x1480





 

[root@rac2 bin]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:3F:BF:D2 
          inet addr:192.168.89.102  Bcast:192.168.89.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe3f:bfd2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:28313 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32759 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12501293 (11.9 MiB)  TX bytes:14325393 (13.6 MiB)
          Interrupt:185 Base address:0x1480

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:3F:BF:D2 
          inet addr:192.168.89.112  Bcast:192.168.89.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:185 Base address:0x1480




Once eth0:1 comes up on both rac1 and rac2, vipca can complete successfully.
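After vipca finishes, the nodeapps status can be checked from either node; a sketch, assuming ORA_CRS_HOME is set as in the profile above:

```shell
# Tabular status of all CRS resources, including the VIP, GSD, and ONS nodeapps.
$ORA_CRS_HOME/bin/crs_stat -t
```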





Installing Oracle


An error occurred when the Oracle installation was nearly complete.
Running this on a laptop makes the machine too hot, so it will be hard to continue for a while.
I will have to buy a PC, or try again on a spare server if one is available.
Ah, installing RAC really is complicated and difficult...

Also, I hear that even if a failed clusterware install is fixed and then succeeds, it can still cause problems later. The advice is to take a backup before installing.

Reference
http://cafe.naver.com/prodba.cafe?iframe_url=/ArticleRead.nhn%3Farticleid=10754
