Building an Oracle 11gR2 RAC Cluster on Red Hat Enterprise Linux 6.3, with Basic Maintenance


1. System environment

1.1 Hardware environment
Servers: VMware Workstation 10. Create two virtual machines named rac1 and rac2. Allocate 40 GB of disk space to each VM and add two network adapters; set the second adapter's network connection to Custom, identical on both nodes. Windows host IP: 192.168.6.1.

1.2 Software environment
Database: Oracle 11.2.0.4 database x86-64
Grid: Oracle 11.2.0.4 grid x86-64
Operating system: rhel-server-6.3-x86_64, minimal installation

1.3 Network environment
Planned IP addresses:

Name           Netmask          IP address
rac1-public    255.255.255.0    192.168.6.11
rac2-public    255.255.255.0    192.168.6.12
rac1-private   255.255.255.0    2.2.2.2
rac2-private   255.255.255.0    2.2.2.3
rac1-vip       255.255.255.0    192.168.6.13
rac2-vip       255.255.255.0    192.168.6.14
SCAN           255.255.255.0    192.168.6.15

1.4 Shared disk partitioning
Plan four shared disks (sdb, sdc, sdd, sde); each disk will be split into three partitions.

2. Environment preparation

2.1 Configure static IP addresses
Edit /etc/sysconfig/network-scripts/ifcfg-eth0 on each VM and set its IP address; eth0 is the public interface. (The HWADDR and UUID values below are from one VM; use each adapter's own values.)

DEVICE=eth0
BOOTPROTO=static
HWADDR=00:0C:29:D1:4E:A6
NM_CONTROLLED=yes
ONBOOT=yes
TYPE=Ethernet
UUID=e59cb6a0-deb0-4164-a2b0-8b4dcc0cb027
IPADDR=192.168.6.11
NETMASK=255.255.255.0
GATEWAY=192.168.6.1

Edit /etc/sysconfig/network-scripts/ifcfg-eth1 the same way; eth1 is the private interface on each VM:

DEVICE=eth1
BOOTPROTO=static
HWADDR=00:0C:29:D1:4E:A6
NM_CONTROLLED=yes
ONBOOT=yes
TYPE=Ethernet
UUID=e59cb6a0-deb0-4164-a2b0-8b4dcc0cb027
IPADDR=2.2.2.2
NETMASK=255.255.255.0

After editing, run: service network restart

2.2 Disable the firewall on rac1 and rac2
service iptables stop    # stop the firewall
chkconfig iptables off   # disable it at boot

2.3 Set the hostname on rac1 and rac2
Edit /etc/sysconfig/network (takes effect after reboot); set HOSTNAME=rac1 on one node and HOSTNAME=rac2 on the other:
HOSTNAME=rac1

2.4 Edit /etc/hosts on rac1 and rac2
Add the IP entries from the plan in 1.3:

#public
192.168.6.11 rac1
192.168.6.12 rac2
#private
2.2.2.2 rac1-priv
2.2.2.3 rac2-priv
#virtual
192.168.6.13 rac1-vip
192.168.6.14 rac2-vip
#scan
192.168.6.15 cluster-scan
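To keep the two nodes' /etc/hosts files identical, the block above can be generated from the IP plan in section 1.3 instead of being typed twice. A minimal sketch — gen_rac_hosts is a hypothetical helper name, not part of any Oracle tooling; the IPs are the ones planned above:

```shell
#!/bin/sh
# Sketch: emit the RAC /etc/hosts block from the section 1.3 IP plan.
# gen_rac_hosts is a hypothetical helper, not part of any Oracle tooling.
gen_rac_hosts() {
    i=1
    echo "#public"
    for ip in 192.168.6.11 192.168.6.12; do
        echo "$ip rac$i"; i=$((i + 1))
    done
    i=1
    echo "#private"
    for ip in 2.2.2.2 2.2.2.3; do
        echo "$ip rac$i-priv"; i=$((i + 1))
    done
    i=1
    echo "#virtual"
    for ip in 192.168.6.13 192.168.6.14; do
        echo "$ip rac$i-vip"; i=$((i + 1))
    done
    echo "#scan"
    echo "192.168.6.15 cluster-scan"
}

gen_rac_hosts
```

Running `gen_rac_hosts >> /etc/hosts` on each node keeps both copies in sync.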

2.5 Configure kernel parameters on rac1 and rac2
Edit /etc/sysctl.conf and append:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2147483648
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586

Apply the changes immediately: sysctl -p

2.6 Configure resource limits on rac1 and rac2
Edit /etc/security/limits.conf and append:

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
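The kernel.shmmax value in section 2.5 (68719476736 bytes = 64 GB) follows the common guideline of sizing shmmax at roughly half the physical RAM, with kernel.shmall expressed in 4 KB pages. A sketch of that rule of thumb — shm_params is a hypothetical helper and the half-of-RAM rule is a simplification, not Oracle's exact sizing formula:

```shell
#!/bin/sh
# Sketch: derive shmmax/shmall suggestions from RAM size (in kB), using the
# common "half of RAM" rule of thumb. shm_params is a hypothetical helper.
shm_params() {
    mem_kb=$1
    page=4096                              # page size in bytes
    shmmax=$((mem_kb / 2 * 1024))          # half of RAM, in bytes
    shmall=$((shmmax / page))              # same amount, in pages
    echo "kernel.shmmax = $shmmax"
    echo "kernel.shmall = $shmall"
}

# Example: a host with 8 GB of RAM (8388608 kB)
shm_params 8388608
```

On a live node it could be fed `$(awk '/MemTotal/{print $2}' /proc/meminfo)`.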

2.7 Edit /etc/pam.d/login on rac1 and rac2
Add:

session required /lib/security/pam_limits.so
session required pam_limits.so

2.8 Edit /etc/profile on rac1 and rac2
Add:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi

2.9 Disable SELinux on rac1 and rac2
Edit /etc/selinux/config and set:
SELINUX=disabled

2.10 Stop the ntp service on rac1 and rac2
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak

2.11 Make sure /dev/shm is large enough on rac1 and rac2
Check with df -h whether the tmpfs mount is larger than 1 GB; enlarge it if it is too small. Edit /etc/fstab:

Change the default entry:
tmpfs /dev/shm tmpfs defaults 0 0
to:
tmpfs /dev/shm tmpfs defaults,size=1024m 0 0

Then remount: mount -o remount /dev/shm

2.12 Check that the required packages are installed on rac1 and rac2

rpm -qa | grep binutils
rpm -qa | grep compat-libstdc++
rpm -qa | grep elfutils-libelf
rpm -qa | grep elfutils-libelf-devel
rpm -qa | grep glibc
rpm -qa | grep glibc-common
rpm -qa | grep glibc-devel
rpm -qa | grep gcc
rpm -qa | grep gcc-c++
rpm -qa | grep libaio
rpm -qa | grep libaio-devel
rpm -qa | grep libgcc
rpm -qa | grep libstdc++
rpm -qa | grep libstdc++-devel
rpm -qa | grep make
rpm -qa | grep sysstat
rpm -qa | grep unixODBC
rpm -qa | grep unixODBC-devel
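Rather than eyeballing eighteen grep commands, the package check can be scripted: capture `rpm -qa` once and report any required package that is absent. A sketch — check_missing_rpms is a hypothetical helper, and the demo feeds it canned `rpm -qa` output instead of querying a real system:

```shell
#!/bin/sh
# Sketch: report required packages missing from an `rpm -qa` listing.
# check_missing_rpms is a hypothetical helper; $1 is a file holding the
# listing, the remaining arguments are required package name prefixes.
check_missing_rpms() {
    list=$1; shift
    for pkg in "$@"; do
        grep -q "^$pkg" "$list" || echo "MISSING: $pkg"
    done
}

# Demo with canned output (on a real node: rpm -qa > /tmp/rpmqa.txt)
cat > /tmp/rpmqa.txt <<'EOF'
binutils-2.20.51.0.2-5.34.el6.x86_64
glibc-2.12-1.80.el6.x86_64
make-3.81-20.el6.x86_64
EOF
check_missing_rpms /tmp/rpmqa.txt binutils glibc make sysstat unixODBC
# prints: MISSING: sysstat
#         MISSING: unixODBC
```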

2.13 Install any missing packages via yum on rac1 and rac2

mkdir /yum
mount /dev/cdrom /yum
vi /etc/yum.repos.d/chenbin.repo

Add the following:

[rhel-chenbin]
name=Red Hat Enterprise Linux $releasever - $basearch - Debug
baseurl=file:///yum/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

yum list    # list available packages
yum -y install binutils* compat-* elfutils-libelf* gcc-* kernel-* ksh-* libaio-* libgcc-* libgomp-* libstdc++-* make-* numactl-devel-* sysstat-* unixODBC-* pdksh*

2.14 Disable unneeded services on rac1 and rac2

chkconfig autofs off
chkconfig acpid off
chkconfig sendmail off
chkconfig cups-config-daemon off
chkconfig cups off
chkconfig xfs off
chkconfig lm_sensors off

chkconfig gpm off
chkconfig openibd off
chkconfig pcmcia off
chkconfig cpuspeed off
chkconfig nfslock off
chkconfig ip6tables off
chkconfig rpcidmapd off
chkconfig apmd off
chkconfig arptables_jf off
chkconfig microcode_ctl off
chkconfig rpcgssd off
chkconfig ntpd off

3. Create groups and users

3.1 Create the oracle and grid users and groups on rac1 and rac2

groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmoper
groupadd -g 506 asmdba
useradd -g oinstall -G dba,asmdba,oper oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
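After creating the users, it is worth confirming that the secondary groups took effect — `id grid` should list asmadmin, asmdba and asmoper. A small sketch that checks an `id` line for a group; has_group is a hypothetical helper, and the sample string mimics the `id grid` output expected from the commands above:

```shell
#!/bin/sh
# Sketch: check whether the output of `id <user>` mentions a given group.
# has_group is a hypothetical helper; on a real node you would call
# has_group "$(id grid)" asmadmin instead of using the canned sample below.
has_group() {
    printf '%s\n' "$1" | grep -q "($2)"
}

sample='uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),504(asmadmin),505(asmoper),506(asmdba)'

has_group "$sample" asmadmin && echo "grid is in asmadmin"
has_group "$sample" wheel    || echo "grid is not in wheel"
```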

3.2 Set passwords for the oracle and grid users on rac1 and rac2
passwd oracle
passwd grid

3.3 Create the grid and oracle user directories on rac1 and rac2

mkdir -p /u01/app/oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory

3.4 Edit the oracle user's .bash_profile on rac1 and rac2
vi /home/oracle/.bash_profile

export ORACLE_SID=rac1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export NLS_DATE_FORMAT='yyyy-mm-dd HH24:MI:SS'
export TMP=/tmp
export TMPDIR=$TMP
export PATH=$PATH:$ORACLE_HOME/bin

3.5 Edit the grid user's .bash_profile on rac1 and rac2
vi /home/grid/.bash_profile

export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export NLS_DATE_FORMAT='yyyy-mm-dd HH24:MI:SS'
export PATH=$ORACLE_HOME/bin:$PATH

Note: on the second node the instance names must be changed accordingly — oracle: export ORACLE_SID=rac2; grid: export ORACLE_SID=+ASM2.

3.6 Set up SSH user equivalence (optional)
Equivalence for root is optional and only makes administering the servers more convenient; the installer can configure SSH equivalence for the grid and oracle users itself. To set it up manually:

mkdir ~/.ssh
chmod 755 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

On rac1:
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

On rac2:
ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Verify from both servers that the other node is reachable; if the commands run without prompting for a password, SSH equivalence is working:

ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

4. Add the shared disks (VMware)

4.1 Create the shared disks on the host machine
(In production you would use shared storage and skip this step.) In a Windows cmd window, change to the VMware Workstation install directory and create the disks. Note the target path and file sizes; here VMware is installed on C: and the disks go to F:\shdisk:

cd "C:\Program Files (x86)\VMware\VMware Workstation"
vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 F:\shdisk\vot.vmdk
vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 F:\shdisk\fra.vmdk
vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 F:\shdisk\data.vmdk
vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 F:\shdisk\data1.vmdk

4.2 Attach the shared disks to both virtual machines
Shut down both VMs and exit VMware Workstation, then open the <vm name>.vmx file in each VM's folder with a text editor.

Add the following lines:

disk.EnableUUID = "TRUE"
disk.locking = "FALSE"
scsi1.shared = "TRUE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "F:\shdisk\vot.vmdk"
scsi1:0.deviceType = "disk"
scsi1:0.redo = ""
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "F:\shdisk\fra.vmdk"
scsi1:1.deviceType = "disk"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "F:\shdisk\data.vmdk"
scsi1:2.deviceType = "disk"
scsi1:2.redo = ""
scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.fileName = "F:\shdisk\data1.vmdk"
scsi1:3.deviceType = "disk"
scsi1:3.redo = ""

4.3 Power on both VMs and verify the shared disks
On rac1, run fdisk -l and confirm that sdb, sdc, sdd and sde are all visible. Create the partitions according to the plan in 1.4; when done, run fdisk -l on rac2 and check that the partitions appear there as well.

4.4 Bind the partition devices to raw devices on rac1 and rac2
vi /etc/udev/rules.d/60-raw.rules — adjust the entries below to match your disks and add them to the file:

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdc2", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdc3", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdd2", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="sdd3", RUN+="/bin/raw /dev/raw/raw9 %N"
KERNEL=="raw[1-9]", OWNER="grid", GROUP="asmadmin", MODE="0660"

Notes: the KERNEL=="raw[1-9]" pattern only covers raw1 through raw9 — raw10 and above must be listed in separate rules or they will not match. Using MODE="0666" with GROUP="oinstall" produces warnings in the later cluster verification check; use MODE="0660" with GROUP="asmadmin" instead.

Run start_udev (rebooting the VM is recommended at this point). Check the bindings with raw -qa and make sure both nodes see the same output, then check the raw devices' ownership with ls -l /dev/raw/.
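The nine ACTION rules differ only in the partition name and raw device number, so they can be generated to avoid numbering mistakes. A sketch — gen_raw_rules is a hypothetical helper that assumes three partitions per disk, as planned in section 1.4:

```shell
#!/bin/sh
# Sketch: emit 60-raw.rules entries for a list of disks, three partitions
# each, plus the ownership rule. gen_raw_rules is a hypothetical helper.
gen_raw_rules() {
    n=1
    for disk in "$@"; do
        for part in 1 2 3; do
            printf 'ACTION=="add", KERNEL=="%s%s", RUN+="/bin/raw /dev/raw/raw%s %%N"\n' \
                "$disk" "$part" "$n"
            n=$((n + 1))
        done
    done
    # raw10 and above would need their own rules; raw[1-9] stops at raw9
    printf 'KERNEL=="raw[1-9]", OWNER="grid", GROUP="asmadmin", MODE="0660"\n'
}

gen_raw_rules sdb sdc sdd
```

Redirect the output into /etc/udev/rules.d/60-raw.rules on both nodes.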

5. Install Grid Infrastructure

5.1 Install the cvuqdisk package on rac1 and rac2
Install the cvuqdisk OS package on both Oracle RAC nodes; the RPM is in the rpm directory of the Oracle Grid Infrastructure installation media:
rpm -ivh cvuqdisk-1.0.7-1.rpm

5.2 Check the CRS installation environment
Run this as the grid user on one node only; it requires the grid and oracle SSH equivalence to be configured in advance (this check was skipped in this walkthrough):
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Items that do not pass are shown as "failed"; fix them as appropriate. The DNS failure caused by inconsistent resolv.conf files can be ignored.

5.3 Install the Grid Infrastructure software

5.3.1 Start the installer
export DISPLAY=192.168.6.18:0.0
./runInstaller

5.3.2 Language support: add Chinese.
5.3.3 Configure the SCAN name to match the hosts file — the SCAN_NAME entered here must match the scan address in /etc/hosts.
5.3.4 Grid user equivalence: click Setup; when it reports success, click Next.
5.3.5 Create the OCR disk group and set the ASM passwords.
5.3.6 Choose the grid installation locations. On the prerequisite checks page, the Package checks only require that ksh is installed.

31、,可忽略5.3.7 开始安装5.3.8 执行脚本在每个服务器上以root身份执行图中的2个脚本,两个节点都执行完成后点OK。/u01/app/oraInventory/orainstRoot.sh/u01/app/grid/11.2.0/root.sh出现以下字样表示在该节点安装成功Configure Oracle Grid Infrastructure for a Cluster . succeeded5.4 确认Grid安装5.4.1 CRS状态crs_stat -t -v除GSD外其他全部为onlinecrsctl stat res -t5.4.2 voting disk状态crsctl

32、 query css votedisk# STATE File Universal Id File Name Disk group- - - - -1. ONLINE 7b8903f49cc84fa8bf06d199bdf5dfe3 (ORCL:DISK01) CRSDG5.4.3 检查Oracle集群注册表(OCR)ocrcheckStatus of Oracle Cluster Registry is as follows :Version : 3Total space (kbytes) : 262120Used space (kbytes) : 2264Available space (

33、kbytes) : 259856ID : 1510360228Device/File Name : +CRSDGDevice/File integrity check succeededDevice/File not configuredDevice/File not configuredDevice/File not configuredDevice/File not configuredCluster registry integrity check succeededLogical corruption check bypassed due to non-privileged user5
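For day-to-day monitoring, the interesting number in the ocrcheck output is the available space. A sketch that pulls it out of captured output — ocr_free_kb is a hypothetical helper, and the sample mimics the output shown above:

```shell
#!/bin/sh
# Sketch: extract "Available space (kbytes)" from captured `ocrcheck` output.
# ocr_free_kb is a hypothetical helper; on a live node you would run
# ocr_free_kb "$(ocrcheck)" instead of using the canned sample below.
ocr_free_kb() {
    printf '%s\n' "$1" | awk -F: '/Available space/ { gsub(/ /, "", $2); print $2 }'
}

sample='Total space (kbytes)     :     262120
Used space (kbytes)      :       2264
Available space (kbytes) :     259856'

ocr_free_kb "$sample"    # prints 259856
```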

5.4.4 Check CRS status
crsctl check crs

CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

6. Configure the ASM disk groups

6.1 Check the listener status
grid@rac01 $ lsnrctl status
...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.211)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.213)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
The command completed successfully

6.2 Create the DATA disk group

Run the asmca command as the grid user and create the DATADG and FRADG disk groups.

6.3 Create the FRA disk group
Once the FLASHDG group has been created successfully, exit ASMCA. Verify with:
crs_stat -t -v
or
crsctl stat res -t

7. Install the database software

7.1 Switch to the oracle user and start the installer
export DISPLAY=192.168.6.18:0.0
./runInstaller

7.2 Select installation on both nodes.
7.3 Oracle user equivalence: enter the oracle user's password and click Setup.
7.4 Choose the installation paths.
7.5 Run the root scripts: on both nodes, run the scripts shown as root, then click OK. When the database software installation completes, click Close to exit.

7.6 Test the database software installation

oracle@rac1 $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 1 22:42:56 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to an idle instance.
SQL>

8. Create the database with DBCA

8.1 Log in as the oracle user and run dbca.
8.2 Enable Enterprise Manager.
8.3 Select the ASM disk group.
8.4 Select the FRA disk group: decide whether to enable the fast recovery area and archived logging, and set the maximum number of connections.

8.5 Choose an appropriate character set, then click Exit to finish.

9. Cluster database maintenance

9.1 Status of all Oracle instances
srvctl status database -d oradb
Instance rac1 is running on node node1
Instance rac2 is running on node node2

9.2 Status of a single Oracle instance
srvctl status instance -d oradb -i rac1
Instance rac1 is running on node node1

9.3 Node application status
srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2

9.4 Node application configuration
srvctl config nodeapps
Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static
VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1
VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

9.5 Database configuration
srvctl config database -d oradb -a
Database unique name: oradb
Database name: oradb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATADG/oradb/spfileoradb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: oradb
Database instances: rac1,rac2
Disk Groups: DATADG,FRADG
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

9.6 ASM status
srvctl status asm
ASM is running on node2,node1

9.7 ASM configuration
srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

9.8 TNS listener status
srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node2,node1

9.9 TNS listener configuration
srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521

9.10 SCAN status
srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
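The srvctl status commands above lend themselves to a simple health check: capture the output and flag anything reported as not running. A sketch — all_running is a hypothetical helper, and the samples mimic the output shown above:

```shell
#!/bin/sh
# Sketch: return success iff captured `srvctl status ...` output reports
# nothing as "not running". all_running is a hypothetical helper; on a live
# cluster you would call all_running "$(srvctl status database -d oradb)".
all_running() {
    printf '%s\n' "$1" | grep -q 'is not running' && return 1
    return 0
}

ok='Instance rac1 is running on node node1
Instance rac2 is running on node node2'

bad='Instance rac1 is running on node node1
Instance rac2 is not running on node node2'

all_running "$ok"  && echo "database healthy"
all_running "$bad" || echo "some instance is down"
```

Note that `srvctl status nodeapps` legitimately reports GSD as not running on 11gR2, so GSD output should be filtered out before applying this check there.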
