Smart Fire Protection Project Deployment and Operations Manual
Prepared by XX Technology Co., Ltd.

Table of Contents
I. Purpose of This Document
II. Prerequisites
  2.1 Configure the network IP
  2.2 Write the supporting operation scripts
    1. Batch command execution script
    2. Batch rename script
    3. Batch copy script
    4. cm_migrate.sh
    5. format2.sh
    6. mountDisk.sh
    7. network.sh
    8. node.list
    9. node.txt
  2.3 hostname and hosts configuration
    1. Configure the hostname of each node
    2. Configure the IP-to-hostname mapping of the nodes
  2.4 Disable SELinux
  2.5 Disable the firewall
  2.6 Set swappiness
  2.7 Disable transparent huge pages
  2.8 Configure the local OS yum repository
  2.9 Install the HTTP service

2.1 Configure the network IP

Note: because the static IP address here is set to 192.168.139.101, the default gateway and DNS addresses must share the same leading octets (192.168.139); otherwise the node will fail to ping.

Edit /etc/sysconfig/network-scripts/ifcfg-eth0 and configure a static address:

DEVICE=eth0
HWADDR=00:0C:29:14:8B:FA
TYPE=Ethernet
UUID=63af865f-878d-4d93-8284-85d60af589bb
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=<this node's static IP address>
GATEWAY=192.168.1.2
DNS1=114.114.114.114
DNS2=8.8.8.8
Restart the network service so that the IP settings take effect:

service network restart

If this reports an error, reboot the virtual machine.

The three nodes are configured with the IPs 192.168.1.131, 192.168.1.132 and 192.168.1.133.

Note: after cloning, delete the /etc/udev/rules.d/70-persistent-net.rules file on each node to clear the recorded MAC address, then reboot the node. The MAC address can also be changed manually after cloning.
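The note above warns that a mismatched gateway/DNS prefix shows up as ping failures, so it is worth checking reachability right after the restart. A minimal sketch over the three node addresses listed above (adjust the list if your addressing differs):

# Quick reachability check across the cluster; run from any node.
for ip in 192.168.1.131 192.168.1.132 192.168.1.133; do
    if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip NOT reachable - recheck IPADDR/GATEWAY/DNS in ifcfg-eth0"
    fi
done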
After MariaDB is installed, secure it with mysql_secure_installation. The interactive session looks like this:

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):    (press Enter)
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

...

By default, MariaDB comes with a database named 'test' that anyone
can access. This is also intended only for testing, and should be
removed before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
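To confirm the hardening took effect, log in with the new root password and list the databases; the test database should no longer appear. A minimal check, assuming the client runs on the database host:

# Log in with the new root password; 'test' should be absent from the list.
mysql -u root -p -e "SHOW DATABASES;"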
Create the databases and accounts required by Cloudera Manager and the CDH services:

create database cm default character set utf8;
CREATE USER 'cm'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON cm.* TO 'cm'@'%';
FLUSH PRIVILEGES;

create database am default character set utf8;
CREATE USER 'am'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON am.* TO 'am'@'%';
FLUSH PRIVILEGES;

create database rm default character set utf8;
CREATE USER 'rm'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON rm.* TO 'rm'@'%';
FLUSH PRIVILEGES;

create database hue default character set utf8;
CREATE USER 'hue'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON hue.* TO 'hue'@'%';
FLUSH PRIVILEGES;

create database oozie default character set utf8;
CREATE USER 'oozie'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'%';
FLUSH PRIVILEGES;

create database sentry default character set utf8;
CREATE USER 'sentry'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON sentry.* TO 'sentry'@'%';
FLUSH PRIVILEGES;

create database nav_ms default character set utf8;
CREATE USER 'nav_ms'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON nav_ms.* TO 'nav_ms'@'%';
FLUSH PRIVILEGES;

create database nav_as default character set utf8;
CREATE USER 'nav_as'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON nav_as.* TO 'nav_as'@'%';
FLUSH PRIVILEGES;

Install the JDBC driver:

mkdir -p /usr/share/java
mv mysql-connector-java-5.1.34.jar /usr/share/java/
cd /usr/share/java
ln -s mysql-connector-java-5.1.34.jar mysql-connector-java.jar

[root@node1 java]# ll
total 940
-rwxrwxr-x. 1 root root 960372 Feb  1 08:31 mysql-connector-java-5.1.34.jar
lrwxrwxrwx  1 root root     31 Feb  2 00:52 mysql-connector-java.jar -> mysql-connector-java-5.1.34.jar
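With the databases created and the connector in place, it is worth confirming that the new accounts can actually log in before Cloudera Manager is pointed at them. A minimal sketch, assuming the MariaDB server is reachable as node1 and the literal password from the statements above:

# Verify that each service account can authenticate and sees its own database.
for db in cm am rm hue oozie sentry nav_ms nav_as; do
    mysql -h node1 -u "$db" -ppassword -e "SHOW DATABASES LIKE '$db';" \
        && echo "$db OK" || echo "$db FAILED"
done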
III. Cloudera Manager Installation

3.1 Configure the local CM repository

Download the CM 5.16.1 installation packages from the Cloudera archive; they are all under:

http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.16.1/RPMS/x86_64/

Download the CDH 5.16.1 parcel files:

http://archive.cloudera.com/cdh5/parcels/5.16.1/CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
http://archive.cloudera.com/cdh5/parcels/5.16.1/CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1
http://archive.cloudera.com/cdh5/parcels/5.16.1/manifest.json

Put the seven rpm packages needed by the Cloudera Manager installation into one local directory and run createrepo there to generate the rpm metadata:

[root@node1 cm5.16.1]# ll
total 1019160
-rw-r--r-- 1 root root   9864584 Nov 27 14:40 cloudera-manager-agent-5.16.1-1.cm5161.p0.1.el7.x86_64.rpm
-rw-r--r-- 1 root root 789872988 Nov 27 14:40 cloudera-manager-daemons-5.16.1-1.cm5161.p0.1.el7.x86_64.rpm
-rw-r--r-- 1 root root      8704 Nov 27 14:40 cloudera-manager-server-5.16.1-1.cm5161.p0.1.el7.x86_64.rpm
-rw-r--r-- 1 root root     10612 Nov 27 14:40 cloudera-manager-server-db-2-5.16.1-1.cm5161.p0.1.el7.x86_64.rpm
-rw-r--r-- 1 root root  30604172 Nov 27 14:40 enterprise-debuginfo-5.16.1-1.cm5161.p0.1.el7.x86_64.rpm
-rw-r--r-- 1 root root  71204325 Nov 27 14:40 jdk-6u31-linux-amd64.rpm
-rw-r--r-- 1 root root 142039186 Nov 27 14:40 oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm

[root@node1 cm5.16.1]# createrepo .
Spawning worker 0 with 2 pkgs
Spawning worker 1 with 2 pkgs
Spawning worker 2 with 2 pkgs
Spawning worker 3 with 1 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

If the createrepo command is not available, install it with yum (yum -y install createrepo).

Configure the web server: move the cdh5.16 and cm5.16 directories above into /var/www/html so that the rpm packages can be reached over HTTP:

[root@node1 ~]# mv cm5.16 cdh5.16 /var/www/html/

Verify that the directories can be opened from a browser.

Create the Cloudera Manager repo file:

[root@node1 yum.repos.d]# vi /etc/yum.repos.d/cm.repo
[cm_repo]
name=cm_repo
baseurl=http://192.168.139.101/cm5.16.2
enabled=true
gpgcheck=false
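Before running yum against the new repo, it helps to confirm that httpd is actually serving the createrepo metadata. A minimal check, assuming the baseurl from cm.repo above (adjust the path if your directory name differs):

# The repo is only usable if repodata/repomd.xml is reachable over HTTP.
curl -fsI http://192.168.139.101/cm5.16.2/repodata/repomd.xml \
    && echo "CM repo reachable" || echo "CM repo NOT reachable - check httpd and the directory name"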
[root@node1 yum.repos.d]# yum repolist
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id                                             repo name                                              status
cm_repo                                             cm_repo
rhui-REGION-client-config-server-7/x86_64           Red Hat Update Infrastructure 2.0 Client C...
rhui-REGION-rhel-server-releases/7Server/x86_64     Red Hat Enterprise Linux Server 7 (RPMs)               20,668
rhui-REGION-rhel-server-rh-common/7Server/x86_64    Red Hat Enterprise Linux Server 7 RH Common               233
repolist: 20,909

Verify the repo by installing the JDK:

yum -y install oracle-j2sdk1.7-1.7.0+update67-1

3.2 Install Cloudera Manager Server

Install Cloudera Manager Server through yum.
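A typical invocation, assuming the server and daemon packages are pulled from the cm_repo configured above:

# Install Cloudera Manager Server and its daemons package from the local repo.
yum -y install cloudera-manager-daemons cloudera-manager-server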
6. Click "Continue". Note that the CDH versions listed here are the initial system defaults, pulled from Cloudera's public download repository, so the CDH installation source needs to be changed first.

7. In the parcel selection, click "More Options", click "-" to delete all the other addresses, enter http://192.168.139.101/cdh5.16.2, and click "Save Changes".

After saving the changes and returning to the previous page, the HTTP-hosted CDH 5.16.1 source prepared earlier should be listed. If it does not show up, the HTTP source is probably misconfigured; go back over the previous steps and check carefully.

8. Select "Custom Repository" and enter the HTTP address of the CM repo.

9. Click "Continue" to go to the next step and install the JDK.
4. hbase-env.sh configuration: set JAVA_HOME and ZooKeeper handling

[ambow@hadoopNode1 conf]$ vi $HBASE_HOME/conf/hbase-env.sh

export JAVA_HOME=/home/ambow/app/jdk1.8.0_121
export HADOOP_HOME=/home/ambow/app/hadoop-2.7.3
export HBASE_MANAGES_ZK=false    # do not let HBase manage its built-in ZooKeeper
export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

To configure the second HMaster node for HA, create a new $HBASE_HOME/conf/backup-masters file:
vi $HBASE_HOME/conf/backup-masters

Add the standby HMaster node:

hadoopNode2

5. hbase-site.xml configuration parameters:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cluster1/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoopNode1,hadoopNode2,hadoopNode3,hadoopNode4,hadoopNode5</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/ambow/zkdata/hdata</value>
  </property>
</configuration>
6. Configure regionservers (list every worker node's hostname; do not list the master nodes). Create a regionservers file under hbase/conf/ and add the following content:

hadoopNode3
hadoopNode4
hadoopNode5

7. scp the hbase directory to the other nodes:

[ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode5:~/app
[ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode4:~/app
[ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode3:~/app
[ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode2:~/app
[ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode5:~
[ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode4:~
[ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode3:~
[ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode2:~

Reload the profile on each node:

source ~/.bash_profile

Start HDFS:

start-dfs.sh
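The same copy-then-reload pattern is repeated for each component (and again for Kafka below), so it can be scripted. A small sketch that pushes the updated ~/.bash_profile to the other nodes in one pass, assuming passwordless SSH for the ambow user:

# Copy the updated profile to every other node.
for node in hadoopNode2 hadoopNode3 hadoopNode4 hadoopNode5; do
    scp ~/.bash_profile ambow@"$node":~/
done
# The new variables take effect in each node's next login shell
# (or run 'source ~/.bash_profile' there manually, as above).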
VI. Flume Installation

6.1 Installation

1. Unpack the tarball:

[ambow@hadoopNode3 flume-1.6.0]$ tar -zxvf apache-flume-1.6.0-bin.tar.gz -C ~/app

2. Go into the flume directory and edit flume-env.sh under conf to set JAVA_HOME:

export JAVA_HOME=/home/ambow/app/jdk1.8.0_121

3. Configure the .bash_profile file:

export FLUME_HOME=/home/ambow/app/flume-1.6.0
export PATH=$PATH:$FLUME_HOME/bin
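A quick way to confirm the Flume install works end to end is a throwaway agent with a netcat source and a logger sink. A minimal sketch using the paths above; the agent name a1 and the file name netcat-logger.conf are illustrative:

# $FLUME_HOME/conf/netcat-logger.conf  (illustrative file name)
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

# Start the agent and watch events arrive on the console:
flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/netcat-logger.conf \
    --name a1 -Dflume.root.logger=INFO,console

From another terminal, connect with "nc localhost 44444" and type a line; it should show up in the agent's console log.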
VII. Kafka Installation

7.1 Installation

1. Download Apache Kafka from the official site: http://kafka.apache.org/downloads.html

Scala 2.11 - kafka_2.11-0.10.2.0.tgz (asc, md5)

Note: when Scala 2.11 is used, choose kafka_2.11-0.10.2.0; 2.11 is the Scala version and 0.10.2 is the Kafka version.

Kafka cluster installation:

1. Install the JDK and configure JAVA_HOME.
2. Install ZooKeeper: following the official ZooKeeper site, set up a ZK cluster and start it.
3. Unpack the Kafka package:

[ambow@hadoopNode1 ambow]$ tar -zxvf kafka_2.11-0.10.2.1.tgz -C ~/app/

4. Configure the environment variables:

export KAFKA_HOME=/home/ambow/app/kafka_2.11-0.10.2.1
export PATH=$PATH:$KAFKA_HOME/bin

5. Edit the configuration file config/server.properties:

vi server.properties

# unique id of this node in the cluster; assign incrementally: 0, 1, 2, 3, 4
broker.id=0
# topic deletion switch; the default is false, and it should be false in production
delete.topic.enable=true
# listener host and port; on each node change the hostname to that node's own
listeners=PLAINTEXT://hadoopNode1:9092
# path where Kafka stores message data
log.dirs=/home/ambow/kafkaData/logs
# default number of partitions when a topic is created
num.partitions=3
# ZooKeeper cluster list, comma separated
zookeeper.connect=hadoopNode1:2181,hadoopNode2:2181,hadoopNode3:2181,hadoopNode4:2181,hadoopNode5:2181
6. Distribute to the other nodes:

[ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode5:~/app
[ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode4:~/app
[ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode3:~/app
[ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode2:~/app
[ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode5:~
[ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode4:~
[ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode3:~
[ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode2:~

Reload the profile on each node:

source ~/.bash_profile

7. Adjust the configuration file on each node:

# unique id of this node in the cluster; assign incrementally: 0, 1, 2, 3, 4
broker.id=0
# listener host and port; change the hostname to that node's own
listeners=PLAINTEXT://hadoopNode1:9092

8. Start the Kafka service on every node:

[ambow@hadoopNode1 app]$ kafka-server-start.sh $KAFKA_HOME/config/server.properties &

Note: start ZooKeeper on every node first:

zkServer.sh start

To stop Kafka:

[ambow@hadoopNode1 app]$ kafka-server-stop.sh
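Once ZooKeeper and the brokers are up on all five nodes, a round-trip test confirms the cluster is usable. A minimal sketch using the stock Kafka 0.10.2 command-line tools; the topic name smoke_test is illustrative:

# Create a test topic across the cluster and list it back.
kafka-topics.sh --create --zookeeper hadoopNode1:2181 \
    --replication-factor 3 --partitions 3 --topic smoke_test
kafka-topics.sh --list --zookeeper hadoopNode1:2181

# Produce a message, then read it back (Ctrl-C stops the console consumer).
echo "hello" | kafka-console-producer.sh --broker-list hadoopNode1:9092 --topic smoke_test
kafka-console-consumer.sh --bootstrap-server hadoopNode1:9092 --topic smoke_test --from-beginning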