Oracle Database 11g Release 2 RAC On Linux Using NFS

This article describes the installation of Oracle Database 11g Release 2 (11.2 64-bit) RAC on Linux (Oracle Enterprise Linux 5.4 64-bit) using NFS to provide the shared storage.

Introduction
Download Software
Operating System Installation
Oracle Installation Prerequisites
Create Shared Disks
Install the Grid Infrastructure
Install the Database
Check the Status of the RAC
Direct NFS Client

Introduction

NFS is an abbreviation of Network File System, a platform-independent technology created by Sun Microsystems that allows shared access to files stored on computers via an interface called the Virtual File System (VFS) that runs on top of TCP/IP. Computers that share files are considered NFS servers, while those that access shared files are considered NFS clients. An individual computer can be either an NFS server, an NFS client or both.
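As a quick way to see these roles in practice, an NFS client can list what a given server exports. The commands below are an optional sketch, not part of the original procedure; they assume an NFS server named nas1 (the name used later in this article) is already exporting directories.

# Run from any machine acting as an NFS client.
# List the directories exported by the server.
showmount -e nas1
# Confirm the nfs and mountd services are registered with the portmapper.
rpcinfo -p nas1 | grep -E 'nfs|mountd'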
We can use NFS to provide shared storage for a RAC installation. In a production environment we would expect the NFS server to be a NAS, but for testing it can just as easily be another server, or even one of the RAC nodes itself.

To cut costs, this article uses one of the RAC nodes as the source of the shared storage. Obviously, this means if that node goes down the whole database is lost, so it's not a sensible idea to do this if you are testing high availability. If you have access to a NAS or a third server you can easily use that for the shared storage, making the whole solution much more resilient. Whichever route you take, the fundamentals of the installation are the same.

The Single Client Access Name (SCAN) should really be defined in the DNS or GNS and round-robin between 3 addresses, which are on the same subnet as the public and virtual IPs. In this article I've defined it as a single IP address in the /etc/hosts file, which is wrong and will cause the cluster verification to fail, but it allows me to complete the install without the presence of a DNS.

This article was inspired by the blog postings of Kevin Closson.

Download Software

Download the following software.
Oracle Enterprise Linux 5.4
Oracle 11g Release 2 (11.2) Clusterware and Database software

Operating System Installation

This article uses Oracle Enterprise Linux 5.4. A general pictorial guide to the operating system installation can be found here. More specifically, it should be a server installation with a minimum of 2G swap (preferably 3-4G), firewall and secure Linux disabled. Oracle recommend a default server installation, but if you perform a custom installation include the following package groups:

GNOME Desktop Environment
Editors
Graphical Internet
Text-based Internet
Development Libraries
Development Tools
Server Configuration Tools
Administration Tools
Base
System Tools
X Window System

To be consistent with the rest of the article, the following information should be set during the installation.

RAC1:
hostname: rac1.localdomain
IP Address eth0: 192.168.2.101 (public address)
Default Gateway eth0: 192.168.2.1 (public address)
IP Address eth1: 192.168.0.101 (private address)
Default Gateway eth1: none

RAC2:
hostname: rac2.localdomain
IP Address eth0: 192.168.2.102 (public address)
Default Gateway eth0: 192.168.2.1 (public address)
IP Address eth1: 192.168.0.102 (private address)
Default Gateway eth1: none

You are free to change the IP addresses to suit your network, but remember to stay consistent with those adjustments throughout the rest of the article.

Once the basic installation is complete, install the following packages whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages.
# From Enterprise Linux 5 DVD
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject
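Once the packages are installed, a quick query confirms nothing was missed. This check is an optional addition to the original steps; adjust the list if your installation differs.

# Any package reported as "not installed" must be added before continuing.
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    gcc gcc-c++ glibc glibc-devel glibc-headers ksh libaio libaio-devel \
    libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel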
Oracle Installation Prerequisites

Perform the following steps whilst logged into the RAC1 virtual machine as the root user.

Make sure the shared memory filesystem is big enough for Automatic Memory Management to work.

# umount tmpfs
# mount -t tmpfs shmfs -o size=1500m /dev/shm

Make the setting permanent by amending the tmpfs setting of the /etc/fstab file to look like this.

tmpfs /dev/shm tmpfs size=1500m 0 0

If you are not using DNS, the /etc/hosts file must contain the following information.

127.0.0.1 localhost.localdomain localhost
# Public
192.168.2.101 rac1.localdomain rac1
192.168.2.102 rac2.localdomain rac2
# Private
192.168.0.101 rac1-priv.localdomain rac1-priv
192.168.0.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
# SCAN
192.168.2.201 rac-scan.localdomain rac-scan
# NAS
192.168.2.101 nas1.localdomain nas1

Note. The SCAN address should not really be defined in the hosts file. Instead it should be defined on the DNS to round-robin between 3 addresses on the same subnet as the public IPs. For this installation, we will compromise and use the hosts file. If you are using DNS, then only the first line should be present in the /etc/hosts file. The other entries are defined in the DNS, as described here. Also, the NAS1 entry is actually pointing to the RAC1 node. If you are using a real NAS or a third server to provide your shared storage put the correct IP address into the file.
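For reference, when the SCAN is defined properly in DNS it resolves to three addresses in round-robin fashion. The lookup below is purely illustrative and is not part of this installation; the second and third addresses are hypothetical examples on the public subnet used in this article.

# Example only: a DNS-defined SCAN would resolve to three addresses, for instance:
nslookup rac-scan.localdomain
# Name:    rac-scan.localdomain
# Address: 192.168.2.201
# Name:    rac-scan.localdomain
# Address: 192.168.2.202
# Name:    rac-scan.localdomain
# Address: 192.168.2.203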
Add or amend the following lines to the /etc/sysctl.conf file.

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586

Run the following command to change the current kernel parameters.

/sbin/sysctl -p
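If you want to confirm the values are now active, sysctl can query individual parameters. This is an optional check added here, not part of the original steps.

# Each parameter should report the value set in /etc/sysctl.conf.
/sbin/sysctl kernel.shmmax kernel.sem fs.file-max net.ipv4.ip_local_port_range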
Add the following lines to the /etc/security/limits.conf file.

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

Add the following line to the /etc/pam.d/login file, if it does not already exist.

session required pam_limits.so
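These limits are applied by pam_limits at login time, so they only show up in a fresh session for the oracle user. The check below is an optional sketch; depending on which PAM service files include pam_limits.so on your system, you may need a console or ssh login rather than su.

# In a fresh login session as the oracle user:
ulimit -S -u -n   # soft limits: expect 2047 and 1024
ulimit -H -u -n   # hard limits: expect 16384 and 65536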
Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.

SELINUX=disabled

Alternatively, this alteration can be done using the GUI tool (System > Administration > Security Level and Firewall). Click on the SELinux tab and disable the feature.
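A reboot is needed for the change in /etc/selinux/config to take full effect. In the meantime the current state can be checked as follows; this is an optional addition to the original steps.

# Reports the current SELinux mode (Enforcing, Permissive or Disabled).
getenforce
# More detail, including the mode configured in /etc/selinux/config.
sestatus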
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. In this case we will deconfigure NTP.

# service ntpd stop
Shutting down ntpd: [  OK  ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.org
# rm /var/run/ntpd.pid

If you are using NTP, you must add the "-x" option into the following line in the /etc/sysconfig/ntpd file.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

Then restart NTP.

# service ntpd restart
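With NTP deconfigured, ctssd should take over time synchronization in active mode. As an aside, this can only be confirmed once the grid infrastructure has been installed later in this article; the optional check below uses the grid home chosen in this article.

# Run after the grid infrastructure installation is complete.
# Expect a message indicating the service is in active mode
# (observer mode means a running NTP configuration was detected).
/u01/app/11.2.0/grid/bin/crsctl check ctss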
Start the Name Service Cache Daemon (nscd).

chkconfig --level 35 nscd on
service nscd start

Create the new groups and users.

groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
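A quick check confirms the oracle user has the intended UID and group memberships. This is an optional addition, not part of the original text.

# Should report uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1200(dba)
id oracle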
Login as the oracle user and add the following lines at the end of the .bash_profile file.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_UNQNAME=rac; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

Remember to amend the ORACLE_SID and ORACLE_HOSTNAME on each server.
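To confirm the profile is picked up, start a fresh session as the oracle user and echo a couple of the variables. This is an optional check added here, not part of the original article.

# In a new session as the oracle user:
echo $ORACLE_HOME
# /u01/app/oracle/product/11.2.0/db_1
echo $ORACLE_SID
# rac1 (rac2 on the second node)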
Create Shared Disks

First we need to set up some NFS shares. In this case we will do this on the RAC1 node, but you can do this on a NAS or a third server if you have one available. On the RAC1 node create the following directories.

mkdir /shared_config
mkdir /shared_grid
mkdir /shared_home
mkdir /shared_data

Add the following lines to the /etc/exports file.

/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_grid *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_home *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

Run the following commands to export the NFS shares.

chkconfig nfs on
service nfs restart
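On the node providing the storage you can verify the shares are exported with the options you configured. This is an optional check, not part of the original procedure.

# Lists the active exports and their options.
exportfs -v
# The same information as seen by a client.
showmount -e nas1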
On both RAC1 and RAC2 create the directories in which the Oracle software will be installed.

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/oradata
mkdir -p /u01/shared_config
chown -R oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config

Add the following lines to the /etc/fstab file.

nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_grid /u01/app/11.2.0/grid nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_home /u01/app/oracle/product/11.2.0/db_1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

Mount the NFS shares on both servers.

mount /u01/shared_config
mount /u01/app/11.2.0/grid
mount /u01/app/oracle/product/11.2.0/db_1
mount /u01/oradata

Make sure the permissions on the shared directories are correct.

chown -R oracle:oinstall /u01/shared_config
chown -R oracle:oinstall /u01/app/11.2.0/grid
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/oradata
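Before starting the installation it is worth confirming, on both nodes, that the shares are mounted with the intended NFS options. This check is an optional addition.

# Each /u01 mount should appear as type nfs with the options from /etc/fstab.
mount | grep /u01
df -h /u01/shared_config /u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/db_1 /u01/oradata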
Install the Grid Infrastructure

Start both RAC nodes, login to RAC1 as the oracle user and start the Oracle installer.

./runInstaller

Select the Install and Configure Grid Infrastructure for a Cluster option, then click the Next button.

Select the Advanced Installation option, then click the Next button.

Select the required language support, then click the Next button.

Enter the cluster information and uncheck the Configure GNS option, then click the Next button.

On the Specify Node Information screen, click the Add button.

Enter the details of the second node in the cluster, then click the OK button.

Click the SSH Connectivity button and enter the password for the oracle user. Click the Setup button to configure SSH connectivity, and the Test button to test it once it is complete. Click the Next button.

Check the public and private networks are specified correctly, then click the Next button.

Select the Shared File System option, then click the Next button.

Select the required level of redundancy and enter the OCR File Location(s), then click the Next button.

Select the required level of redundancy and enter the Voting Disk File Location(s), then click the Next button.

Accept the default failure isolation support by clicking the Next button.

Select the preferred OS groups for each option, then click the Next button. Click the Yes button on the subsequent message dialog.

Enter /u01/app/oracle as the Oracle Base and /u01/app/11.2.0/grid as the software location, then click the Next button.

Accept the default inventory directory by clicking the Next button.

Wait while the prerequisite checks complete. If you have any issues, either fix them or check the Ignore All checkbox and click the Next button. If there are no issues, you will move directly to the summary screen. If you are happy with the summary information, click the Finish button.

Wait while the setup takes place.

When prompted, run the configuration scripts on each node.

The output from the orainstRoot.sh file should look something like that listed below.

# cd /u01/app/oraInventory
# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
#

The output of the root.sh will vary a little depending on the node it is run on. Example output can be seen here (Node1, Node2).
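If you want to confirm the clusterware stack came up on both nodes once the root scripts have finished, crsctl can be queried from the grid home used in this article. This is an optional check; the full status checks are covered in the later section on checking the status of the RAC.

# Run from either node after root.sh has completed on both.
# Expect the CRS, CSS and EVM services to be reported online for each node.
/u01/app/11.2.0/grid/bin/crsctl check cluster -all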
Once the scripts have completed, return to the Execute Configuration Scripts screen on RAC1 and click the OK button.

Wait for the configuration assistants to complete.

We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS.

INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.2.201) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error, it is safe to ignore this and continue by clicking the Next button.
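If you later move the SCAN definition into DNS, the same check can be re-run outside the installer using the cluster verification utility. This is an optional suggestion, not a step from the original article.

# Re-runs the SCAN component verification from the grid home.
/u01/app/11.2.0/grid/bin/cluvfy comp scan -verbose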
Click the Close button to exit the installer.

The grid infrastructure installation is now complete.

Install the Database

Start all the RAC nodes, login to RAC1 as the oracle user and start the Oracle installer.

./runInstaller

Uncheck the security updates checkbox and click the Next button.

Accept the Create and configure a database option by clicking the Next button.

Accept the Server Class option by clicking the Next button.

Make sure both nodes are selected, then click the Next button.