Adding/Removing a Node
Test Environment
RUNNING NODE : rac1, rac2, rac3
ADDING NODE : rac4
GRID INFRASTRUCTURE OWNER : oracle
GRID INFRASTRUCTURE GROUP : oinstall
RDBMS OWNER : oracle
RDBMS GROUP : dba
Checking the Prerequisites
**CHECK LIST
1. os : oel 5.9
2. network : /etc/hosts, /etc/resolv.conf, ntpd service
3. package : libaio-devel, numactl-devel, unixODBC, asmlib, cvuqdisk(group : oinstall)
4. standard uid/groups : usermod -u 54321 oracle : same UID as the grid user on the running nodes
5. correct directory : mkdir -p /u01/app/11.2.0.3.0/grid on the new node(rac4)
6. permissions : chown -R oracle:oinstall /u01 (on rac4 only), then chmod -R 755 /u01
7. shared storage : asmlib (must be loaded and the disks discovered on the new node(rac4))
8. ssh check
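Most of this checklist can be verified by hand on the new node before cluvfy is run; a minimal sketch (the package names and UID are the ones listed above):
(rac4-root)
]# rpm -q libaio-devel numactl-devel unixODBC cvuqdisk   # packages from item 3
]# id oracle                                             # UID must match the running nodes (54321)
]# ls -ld /u01/app/11.2.0.3.0/grid                       # owner oracle:oinstall, mode 755
]# oracleasm status                                      # asmlib driver loaded?
]# oracleasm listdisks                                   # shared disks visible on rac4?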
**CLUSTER VERIFICATION UTILITY
KEEP AN EYE ON THE OUTPUT
1.ssh user equivalency : as oracle user on running node(rac1)
(rac1-oracle)
]$ cd /u01/media/11.2.0.3.0/grid/sshsetup/
]$ ./sshUserSetup.sh -user oracle -hosts "rac1 rac2 rac3 rac4" -noPromptPassphrase -advanced
]$ ./sshUserSetup.sh -user root -hosts "rac1 rac2 rac3 rac4" -noPromptPassphrase -advanced
check on each node
(rac1,rac2,rac3,rac4-oracle)
]$ ssh rac1 date
]$ ssh rac2 date
]$ ssh rac3 date
]$ ssh rac4 date
]$ ssh rac1-priv date
]$ ssh rac2-priv date
]$ ssh rac3-priv date
]$ ssh rac4-priv date
]$ ssh rac1-vip date
]$ ssh rac2-vip date
]$ ssh rac3-vip date
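The same checks can be run as one loop so any failure stands out; a sketch (rac4-vip is left out because the new node's VIP does not exist yet, and BatchMode makes ssh fail instead of prompting for a password):
(rac1,rac2,rac3,rac4-oracle)
]$ for h in rac{1..4} rac{1..4}-priv rac{1..3}-vip; do
>     echo -n "$h : "; ssh -o BatchMode=yes $h date || echo "FAILED"
> done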
2.cluvfy util : as oracle user on running node(rac1)
(rac1-oracle)
]$ cluvfy stage -post hwos -n rac4 -verbose
]$ cluvfy stage -pre nodeadd -n rac4 -fixup -fixupdir /tmp
3.fixup scripts if cluvfy reports errors : fix the kernel parameters required for GI to work
as the root user on each node(rac1, rac2, rac3, rac4), in order
(rac1,rac2,rac3,rac4-root)
]# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh
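runfixup.sh rewrites the kernel parameters in /etc/sysctl.conf; the result can be spot-checked afterwards, for example (a sketch; the parameter names are the usual 11.2 cluvfy candidates):
(rac1,rac2,rac3,rac4-root)
]# sysctl fs.aio-max-nr fs.file-max kernel.sem kernel.shmmax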
4.optional - compare the new node(rac4) against the running node(rac1)
(rac1-oracle)
]$ cluvfy comp peer -refnode rac1 -n rac4 -orainv oinstall -osdba dba -verbose
**node addition to the cluster (not supported in 11.2.0.1.0, Bug 8865943)
1.addNode.sh script : from $GRID_HOME as oracle user on running node(rac1)
copies the cluster software from the active node(rac1) to the new node(rac4); the new node's VIP is given on the command line
(rac1-oracle)
]$ cd $GRID_HOME/oui/bin
]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac4-vip}"
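If addNode.sh aborts on its own built-in pre-nodeadd cluvfy run even though the manual checks above passed, the 11.2 installer honors the IGNORE_PREADDNODE_CHECKS environment variable; treat this as a workaround and confirm it against your exact version:
(rac1-oracle)
]$ export IGNORE_PREADDNODE_CHECKS=Y
]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac4-vip}"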
2.scripts prompted by addNode.sh : run as root on the new node(rac4)
(rac4-root)
orainstRoot.sh only sets permissions on the central inventory;
root.sh performs the actual node addition : it backs up the OLR (not the OCR), updates the central inventory,
and adds an entry for the n-th ASM instance to /etc/oratab
]# /u01/app/oraInventory/orainstRoot.sh
]# /u01/app/11.2.0.3.0/grid/root.sh
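Before moving on to cluvfy, a quick sanity check that the stack really came up on rac4 (a sketch):
(rac4-root)
]# /u01/app/11.2.0.3.0/grid/bin/crsctl check cluster -all   # CRS, CSS, EVM online on all four nodes
]# /u01/app/11.2.0.3.0/grid/bin/olsnodes -n -s -t           # rac4 listed as Active with its own node number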
3.check the cluster integrity after scripts : as oracle user on running node(rac1)
(rac1-oracle)
]$ cluvfy stage -post nodeadd -n rac4
ERROR : error occurred while retrieving node numbers of the existing node
CAUSE : configuration problem in inventory
ACTION : update the inventory as the oracle user on rac1
]$ cd $GRID_HOME/oui/bin
]$ ./detachHome.sh
]$ ./attachHome.sh
]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac1,rac2,rac3}" -local
]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac4-vip}"
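Whether the node list is now correct can be read straight out of the central inventory (the path matches orainstRoot.sh above); a sketch:
(rac1-oracle)
]$ grep -A6 'CRS="true"' /u01/app/oraInventory/ContentsXML/inventory.xml   # rac4 should appear in the grid home's NODE_LIST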
**add RDBMS software
1.addNode.sh script : from $ORACLE_HOME as oracle user on running node(rac1)
copy ORACLE_HOME software to new node
(rac1-oracle)
]$ cd $ORACLE_HOME/oui/bin
]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac4}"
2.scripts prompted during addNode.sh : as the root user on the new node(rac4)
(rac4-root)
]# /u01/app/11.2.0.3.0/oracle/db/root.sh
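A quick check that the RDBMS home actually landed on the new node (the path matches root.sh above); a sketch:
(rac1-oracle)
]$ ssh rac4 ls -l /u01/app/11.2.0.3.0/oracle/db/bin/oracle   # the relinked oracle binary should exist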
**add the database instance to the new node (admin-managed db on ASM)
1. dbca interactive mode : as oracle user on running node(rac1)
oracle real application clusters database
-> instance management
-> add instance
-> fill out username, password, instance name
-> dbca performs the instance addition
2. dbca silent mode : as oracle user on a running node (rac1)
]$ dbca -silent -addInstance -nodeList rac4 -gdbName RACDB -sysDBAUserName sys -sysDBAPassword <sys_password>
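Either way, confirm the new instance from any node with srvctl; a sketch (RACDB follows the silent-mode example above):
(rac1-oracle)
]$ srvctl status database -d RACDB   # the rac4 instance should be running
]$ srvctl config database -d RACDB   # and listed in the configuration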
Removing a Node
- remove service using srvctl
]$ srvctl status service -d test
]$ srvctl stop service -d test -s test_srv -i test2
]$ srvctl config service -d test
]$ srvctl modify service -d test -s test_srv -n -i test1
]$ srvctl config service -d test
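After the modify, confirm that the service now runs only on the remaining instance; a sketch:
]$ srvctl status service -d test -s test_srv   # should report test_srv running on instance test1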
- remove db using dbca
-> instance management
-> delete an instance
]$ srvctl config database -d test
]$ crs_relocate ora.test.db (if required)
]$ crs_stat -t
- remove asm using srvctl
]$ srvctl stop asm -n node2-pub
]$ srvctl remove asm -n node2-pub
]$ srvctl config asm -n node2-pub
]$ srvctl config asm -n node1-pub
]$ crs_stat -t
- remove listener using netca
-> cluster configuration
-> listener configuration
- at the end
]$ srvctl stop nodeapps -n node2-pub
- check running node
]$ olsnodes -n
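Before deconfiguring the node itself, its resources should all be gone; a sketch (resource names follow the node2-pub examples above):
]$ crs_stat | grep node2-pub   # should print nothing once every node2-pub resource is removed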