
Installing Hadoop, Zookeeper, Storm, and HBase — Part II: Zookeeper Installation

All of the following is based on this system:

[root@master ~]# cat /etc/redhat-release

CentOS Linux release 7.5.1804 (Core)

Software packages and versions to install:

jdk-8u172-linux-x64.tar.gz

Hadoop-2.8.4

Zookeeper-3.4.5

apache-storm-1.0.6.tar.gz

hbase-0.98.6-hadoop2-bin.tar.gz

I. Installing Hadoop

http://archive.apache.org/dist/hadoop/common/

http://download.oracle.com/otn-pub/java/jdk/8u172-b11/a58eab1ec242421181065cdc37240b08/jdk-8u172-linux-x64.tar.gz

1. Lab environment

The IPs and hostnames of the three virtual machines are as follows:

IP address           Hostname      Role

192.168.43.205   master        NameNode

192.168.43.79     slave1        DataNode1

192.168.43.32     slave2        DataNode2

2. FQDN configuration

[root@master ~]# cat /etc/hosts

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4

::1        localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.43.205   master

192.168.43.79     slave1

192.168.43.32     slave2

3. Passwordless login

[root@master ~]# ssh-keygen

[root@master ~]# ssh-copy-id 192.168.43.205

[root@master ~]# ssh-copy-id 192.168.43.79

[root@master ~]# ssh-copy-id 192.168.43.32

Verify that it works:

[root@master ~]# ssh 192.168.43.79

[root@master ~]# ssh 192.168.43.32
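To check all three nodes from a script, a quick non-interactive loop such as the one below can be used (a minimal sketch; BatchMode makes ssh fail instead of prompting if key-based login is not working):

for h in 192.168.43.205 192.168.43.79 192.168.43.32; do ssh -o BatchMode=yes root@$h hostname; done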

Copy /etc/hosts to the other two machines:

[root@master ~]# scp /etc/hosts root@192.168.43.79:/etc/

[root@master ~]# scp /etc/hosts root@192.168.43.32:/etc/

Reboot for the change to take effect (in my case it was already configured and in effect).

4. Installing the Java environment

Install it on all three machines.

Upload the package: jdk-8u172-linux-x64.tar.gz

[root@master ~]# mkdir /usr/java/

[root@master ~]# tar -xvf jdk-8u172-linux-x64.tar.gz  -C /usr/java/

Environment variables:

[root@master ~]# vim /etc/profile          # append the following at the end of the file

####################JDK

export  JAVA_HOME=/usr/java/jdk1.8.0_172

export  CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

export  PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin

[root@master ~]# source /etc/profile     # make the profile take effect

Verify that Java runs successfully:

[root@master ~]# java -version

java version "1.8.0_172"                

Java(TM) SE Runtime Environment (build 1.8.0_172-b11)

Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

If the corresponding version is printed, the Java runtime environment was installed successfully.

If a Java environment already exists, an upgrade is installed the same way; the newer version is found first on the PATH and takes precedence.

Deploy the JDK environment to the other two machines:

[root@master ~]# scp -r /usr/java/  slave1:/usr/

[root@master ~]# scp -r /usr/java/  slave2:/usr/

[root@master ~]# scp -r /etc/profile slave1:/etc/

[root@master ~]# scp -r /etc/profile slave2:/etc/

Verify on the other two servers:

[root@slave1 ~]# source /etc/profile

[root@slave1 ~]# java -version 

[root@slave2 ~]# source /etc/profile

[root@slave2 ~]# java -version

5. Installing Hadoop

5.1 Install Hadoop

Upload the package: hadoop-2.8.4.tar.gz

[root@master ~]# tar -zxf hadoop-2.8.4.tar.gz -C /usr/local/

[root@master ~]# ls /usr/local/hadoop-2.8.4/

bin etc      lib      LICENSE.txt  NOTICE.txt sbin  

include libexec       README.txt  share

5.2 Create the temporary and data directories

[root@master ~]# mkdir /usr/local/hadoop-2.8.4/tmp/

[root@master ~]# mkdir -p /usr/local/hadoop-2.8.4/dfs/name

[root@master ~]# mkdir -p /usr/local/hadoop-2.8.4/dfs/data

5.3 Configuration files

The main configuration files live in /usr/local/hadoop-2.8.4/etc/hadoop.

Files: hadoop-env.sh, yarn-env.sh, slaves, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml

hadoop-env.sh       // Java environment for Hadoop

yarn-env.sh            // Java runtime environment for the YARN framework; YARN separates resource management from the processing components, so the architecture is no longer tied to MapReduce

slaves                    // lists the DataNode servers

core-site.xml          // core settings, including the path used to access the Hadoop web interface

hdfs-site.xml          // file system (HDFS) configuration

mapred-site.xml     // MapReduce job configuration

yarn-site.xml          // YARN framework configuration, mainly where its services run

5.4 Edit the configuration files

Back them up first to be safe:

[root@master etc]# cp -r hadoop/ hadoop.bak

(1) hadoop-env.sh specifies the Java runtime environment for Hadoop. It configures Hadoop's basic runtime environment; the only thing that needs changing is the location of the JVM.

[root@master hadoop]# vim hadoop-env.sh

Change line 25: export JAVA_HOME=${JAVA_HOME}

To: export JAVA_HOME=/usr/java/jdk1.8.0_172

Note: this sets the Java runtime environment variable.

(2) yarn-env.sh specifies the Java runtime environment for the YARN framework.

This file configures the YARN runtime environment; again, only the JVM location needs to change.

YARN is Hadoop's new MapReduce framework, introduced in Hadoop 0.23.0; it separates resource management from job processing.

[root@master hadoop]# vim yarn-env.sh

Change line 26: JAVA_HOME=$JAVA_HOME

To: JAVA_HOME=/usr/java/jdk1.8.0_172

(3) slaves lists the DataNode servers.

Write the hostnames of all DataNodes into this file, one per line:

[root@master hadoop]# vim slaves

slave1

slave2

(4) core-site.xml sets the path used to access the Hadoop web interface. This is Hadoop's core configuration file and only two properties are needed here: fs.defaultFS names the HDFS file system, located on port 9000 of the master, and hadoop.tmp.dir sets the root of Hadoop's tmp directory. Since that location does not exist in the file system by default, it was created with mkdir earlier.

[root@master hadoop]# vim core-site.xml

<configuration>

     <property>

        <name>fs.defaultFS</name>

        <value>hdfs://192.168.43.205:9000</value>

     </property>

     <property>

        <name>hadoop.tmp.dir</name>

        <value>file:/usr/local/hadoop-2.8.4/tmp</value>

     </property>

</configuration>

(5) hdfs-site.xml

This is the HDFS configuration file. dfs.namenode.secondary.http-address sets the HTTP address used to view HDFS status through the web; dfs.replication sets the number of replicas per block, which should generally not exceed the number of slave nodes.

[root@master hadoop]# vim hdfs-site.xml

<configuration>

     <property>

        <name>dfs.namenode.secondary.http-address</name>

        <value>master:9001</value>   <!-- used to view HDFS status through the web UI -->

     </property>

     <property>

        <name>dfs.namenode.name.dir</name>

        <value>file:/usr/local/hadoop-2.8.4/dfs/name</value>

     </property>

     <property>

        <name>dfs.datanode.data.dir</name>

        <value>file:/usr/local/hadoop-2.8.4/dfs/data</value>

     </property>

     <property>

        <name>dfs.replication</name>

        <value>2</value>         <!-- each block has 2 replicas -->

     </property>

</configuration>

(6) mapred-site.xml

This configures MapReduce jobs. Because Hadoop 2.x runs MapReduce on the YARN framework, mapreduce.framework.name must be set to yarn for a distributed deployment; the number of map and reduce tasks and Hadoop's history server (historyserver) can also be configured here.

Generate mapred-site.xml from the template:

[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml

[root@master hadoop]# vim mapred-site.xml

<configuration>

 <property>

          <name>mapreduce.framework.name</name>

          <value>yarn</value>

     </property>

</configuration>
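Optionally, the history server addresses can be pinned to the master explicitly; these two extra properties would go inside the <configuration> element above (a sketch: mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address otherwise default to 0.0.0.0:10020 and 0.0.0.0:19888, and 19888 is the web UI port used later in this guide):

     <property>

          <name>mapreduce.jobhistory.address</name>

          <value>master:10020</value>

     </property>

     <property>

          <name>mapreduce.jobhistory.webapp.address</name>

          <value>master:19888</value>

     </property>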

(7) yarn-site.xml

This file configures the YARN framework, mainly the addresses where its services run.

[root@master hadoop]# vim yarn-site.xml

<configuration>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

<property>

<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>

<value>org.apache.hadoop.mapred.ShuffleHandler</value>

</property>

<property>

<name>yarn.resourcemanager.address</name>

<value>master:8032</value>

</property>

<property>

<name>yarn.resourcemanager.scheduler.address</name>

<value>master:8030</value>

</property>

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>master:8035</value>

</property>

<property>

<name>yarn.resourcemanager.admin.address</name>

<value>master:8033</value>

</property>

<property>

<name>yarn.resourcemanager.webapp.address</name>

<value>master:8088</value>

</property>

</configuration>

All seven configuration files that needed editing are done at this point!
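Before copying the configuration to the other nodes, it can be worth checking that the edited XML files are still well-formed. A minimal sketch, assuming xmllint (from the libxml2 package) is available:

for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do xmllint --noout /usr/local/hadoop-2.8.4/etc/hadoop/$f && echo "$f OK"; done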

Copy Hadoop to the other DataNode nodes:

[root@master ~]# scp -r /usr/local/hadoop-2.8.4/ slave1:/usr/local/

[root@master ~]# scp -r /usr/local/hadoop-2.8.4/ slave2:/usr/local/

5.5 Configure environment variables

[root@master ~]# vim  /etc/profile    # append the following at the end of the file

####################HADOOP

export HADOOP_HOME=/usr/local/hadoop-2.8.4/

export PATH=$PATH:$HADOOP_HOME/bin

[root@master bin]# source /etc/profile

[root@master bin]# echo $HADOOP_HOME

/usr/local/hadoop-2.8.4/

Do the same on the other two machines; /etc/profile can simply be copied over with scp.
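As a quick sanity check that the new PATH is picked up, the hadoop command itself can be run (the exact output depends on the build):

[root@master ~]# hadoop version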

5.6 Manage the cluster from the master node

(1) Format the NameNode

The Hadoop NameNode only needs to be initialized the very first time; after that it is not needed again.

Note: format it only once. Formatting changes the cluster ID, so formatting repeatedly leaves the NameNode and the DataNodes with mismatching IDs; in the end the NameNode starts normally but the DataNodes on the slaves fail to start.

[root@master ~]# /usr/local/hadoop-2.8.4/bin/hdfs namenode  -format

15/08/03 22:35:21 INFO common.Storage: Storage directory /usr/local/hadoop-2.8.4/dfs/name has been successfully formatted.

。。。。。。

15/08/03 22:35:21 INFO util.ExitUtil: Exiting with status 0

15/08/03 22:35:21 INFO namenode.NameNode: SHUTDOWN_MSG:

[root@master ~]# echo $?

# checks whether the previous command succeeded: 0 means success, non-zero means failure

Look at the files generated by the format:

[root@master ~]# yum -y install tree       # a yum repository must be configured first

[root@master ~]# tree /usr/local/hadoop-2.8.4/dfs

/usr/local/hadoop-2.8.4/dfs
├── data
└── name
    └── current
        ├── fsimage_0000000000000000000
        ├── fsimage_0000000000000000000.md5
        ├── seen_txid
        └── VERSION

3 directories, 4 files
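If the NameNode ever does have to be re-formatted (see the caution above), one common recovery path is to stop HDFS and clear the data directories on every node first, so that consistent IDs are regenerated. A sketch, assuming the directory layout used in this guide; note that this wipes all HDFS data:

/usr/local/hadoop-2.8.4/sbin/stop-dfs.sh

rm -rf /usr/local/hadoop-2.8.4/dfs/data/* /usr/local/hadoop-2.8.4/dfs/name/* /usr/local/hadoop-2.8.4/tmp/*    # run on master and both slaves

/usr/local/hadoop-2.8.4/bin/hdfs namenode -format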

(2) Start HDFS with ./sbin/start-dfs.sh, i.e. start the HDFS distributed storage layer.

[root@master ~]# /usr/local/hadoop-2.8.4/sbin/start-dfs.sh

Starting namenodes on [master]

master: starting namenode, logging to /usr/local/hadoop-2.8.4/logs/hadoop-root-namenode-master.out

slave2: starting datanode, logging to /usr/local/hadoop-2.8.4/logs/hadoop-root-datanode-slave2.out

slave1: starting datanode, logging to /usr/local/hadoop-2.8.4/logs/hadoop-root-datanode-slave1.out

Starting secondary namenodes [master]

master: starting secondarynamenode, logging to /usr/local/hadoop-2.8.4/logs/hadoop-root-secondarynamenode-master.out

Note: if there is an error such as:

Slave1: Host key verification failed.

Fix:

[root@master ~]# ssh slave1  # confirm you can connect to slave1 (accept the host key / enter the password)

Then stop and start HDFS again:

[root@master ~]# /usr/local/hadoop-2.8.4/sbin/stop-dfs.sh

[root@master ~]# /usr/local/hadoop-2.8.4/sbin/start-dfs.sh

(3) Check the processes. At this point the master has the NameNode and SecondaryNameNode processes:

[root@master ~]# ps -aux | grep namenode --color

Warning: bad syntax, perhaps a bogus '-'?See /usr/share/doc/procps-3.2.8/FAQ

root      4404 12.2 13.6 2761684 136916 ?     Sl   01:17   0:26 /usr/java/jdk1.8.0_144/bin/java -Dproc_namenode -Xmx1000m -Djava.library.path=/lib/native -Dhadoop.log.dir=/usr/local/hadoop-2.8.4/logs

。。。。。。

root      4565  8.5 11.4 2722404 114812 ?      Sl   01:18  0:15 /usr/java/jdk1.8.0_144/bin/java -Dproc_secondarynamenode -Xmx1000m -Djava.library.path=/lib/native -Dhadoop.log.dir=/usr/local/hadoop-2.8.4/logs

。。。。。。

slave1 and slave2 have the DataNode process:

[root@slave1 ~]# ps -aux | grep datanode --color

Warning: bad syntax,perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ

root      2246  7.3 10.6 2760144 107336 ?     Sl   01:17   0:28 /usr/java/jdk1.8.0_144/bin/java   -Dproc_datanode   -Xmx1000m

。。。。。

org.apache.hadoop.hdfs.server.datanode.DataNode

[root@slave2 ~]# ps -aux | grep datanode --color

Warning: bad syntax,perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ

root      1977 6.2 11.1 2756488 111608 ?     Sl   07:26   0:30 /usr/java/jdk1.8.0_144/bin/java   -Dproc_datanode   -Xmx1000m

。。。。。

org.apache.hadoop.hdfs.server.datanode.DataNode

(4) On the master, start YARN with ./sbin/start-yarn.sh, i.e. start the distributed computation layer.

[root@master ~]# /usr/local/hadoop-2.8.4/sbin/start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /usr/local/hadoop-2.8.4/logs/yarn-root-resourcemanager-master.out

slave1: starting nodemanager, logging to /usr/local/hadoop-2.8.4/logs/yarn-root-nodemanager-slave1.out

slave2: starting nodemanager, logging to /usr/local/hadoop-2.8.4/logs/yarn-root-nodemanager-slave2.out

(5) Check the processes:

The master should now have the ResourceManager process, and slave1 and slave2 the NodeManager process.

[root@master ~]# ps -aux | grep resourcemanager --color

Warning: bad syntax, perhaps a bogus '-'? See/usr/share/doc/procps-3.2.8/FAQ

root      4749 53.2 16.0 2919812 160832 pts/0  Sl   01:28  0:47 /usr/java/jdk1.8.0_144/bin/java -Dproc_resourcemanager -Xmx1000m

。。。。。。

[root@slave1 ~]# ps -aux | grep nodemanager --color

Warning: bad syntax, perhaps a bogus '-'? See/usr/share/doc/procps-3.2.8/FAQ

root     2354 17.7 12.9 2785400 129776 ?      Sl   01:27  0:45 /usr/java/jdk1.8.0_144/bin/java -Dproc_nodemanager -Xmx1000m

。。。。。。

[root@slave2 ~]# ps -aux | grep nodemanager --color

Warning: bad syntax, perhaps a bogus '-'? See/usr/share/doc/procps-3.2.8/FAQ

root      2085 15.1 13.8 2785400 139544 ?      Sl   07:36  0:46 /usr/java/jdk1.8.0_144/bin/java -Dproc_nodemanager -Xmx1000m

Note: the two scripts start-dfs.sh and start-yarn.sh can be replaced by start-all.sh:

[root@master ~]# /usr/local/hadoop-2.8.4/sbin/start-all.sh 

[root@master ~]# /usr/local/hadoop-2.8.4/sbin/stop-all.sh

Hadoop ships with a history server (historyserver). It lets you look at the records of MapReduce jobs that have finished running, such as how many maps and reduces a job used, when it was submitted, when it started, and when it completed. The history server is not started by default; it can be started with the following command:

/usr/local/hadoop-2.8.4/sbin/mr-jobhistory-daemon.sh start historyserver

The history server's web UI is then available on port 19888 of the corresponding machine, where the completed jobs can be inspected.

Start the jobhistory service to view MapReduce job status:

[root@master ~]# /usr/local/hadoop-2.8.4/sbin/mr-jobhistory-daemon.sh start historyserver

starting historyserver, logging to /usr/local/hadoop-2.8.4/logs/mapred-root-historyserver-master.out

Check with jps:

[root@master ~]# jps

4565 SecondaryNameNode     # assists the NameNode

4749 ResourceManager          # manages application resources

4404 NameNode                     # the management node

3277 JobHistoryServer             # history server

2527 Jps

[root@slave1 ~]# jps

1744 Jps

2354 NodeManager          # slave-node resource management

2246 DataNode                # data node

[root@slave2 ~]# jps

1644 Jps

2085 NodeManager         # slave-node resource management

1977 DataNode                # data node

(6) Check the status of the HDFS distributed file system:

[root@master ~]# /usr/local/hadoop-2.8.4/bin/hdfs dfsadmin -report

Configured Capacity: 38205915136 (35.58 GB)

Present Capacity: 32456699904 (30.23 GB)

DFS Remaining: 32456691712 (30.23 GB)

DFS Used: 8192 (8 KB)

DFS Used%: 0.00%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

Missing blocks (with replication factor 1):0

Pending deletion blocks: 0

-------------------------------------------------

Live datanodes (2):

Name: 192.168.43.32:50010 (slave2)

Hostname: slave2

Decommission Status : Normal

Configured Capacity: 19102957568 (17.79 GB)

DFS Used: 4096 (4 KB)

Non DFS Used: 2874605568 (2.68 GB)

DFS Remaining: 16228347904 (15.11 GB)

DFS Used%: 0.00%

DFS Remaining%: 84.95%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Mon Jun 25 10:00:18 CST 2018

Name: 192.168.43.79:50010 (slave1)

Hostname: slave1

Decommission Status : Normal

Configured Capacity: 19102957568 (17.79 GB)

DFS Used: 4096 (4 KB)

Non DFS Used: 2874609664 (2.68 GB)

DFS Remaining: 16228343808 (15.11 GB)

DFS Used%: 0.00%

DFS Remaining%: 84.95%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Mon Jun 25 10:00:18 CST 2018

(7) Check file block composition, i.e. which blocks a file consists of:

[root@master ~]# /usr/local/hadoop-2.8.4/bin/hdfs fsck / -files -blocks

(8) View HDFS in a browser: http://192.168.43.205:50070

(9) View the Hadoop cluster status in a browser: http://192.168.43.205:8088

HDFS

HDFS comes with the Hadoop installation.

HDFS commands

(1) List files

[root@master ~]# hadoop fs -ls / 

Found 1 items

drwxrwx---   - root supergroup       0 2018-07-01 11:28 /tmp 

# created by the configuration done earlier

(2) Upload a file

[root@master ~]# hadoop fs -put anaconda-ks.cfg  /

[root@master ~]# hadoop fs -ls /

Found 2 items

-rw-r--r--   3 root supergroup       1551 2018-07-01 11:45 /anaconda-ks.cfg

drwxrwx---   - root supergroup          0 2018-07-01 11:28 /tmp

(3) Create a file

[root@master ~]# hadoop fs -touchz /hdfs.txt

[root@master ~]# hadoop fs -ls /

Found 3 items

-rw-r--r--   3 root supergroup       1551 2018-07-01 11:45 /anaconda-ks.cfg

-rw-r--r--   3 root supergroup          0 2018-07-01 11:47 /hdfs.txt

drwxrwx---   - root supergroup          0 2018-07-01 11:28 /tmp

(4) Download a file

[root@master ~]# hadoop fs -get /hdfs.txt /root 

[root@master ~]# ls 

anaconda-ks.cfg  hdfs.txt

(5) For other commands and usage, see the built-in help:

[root@master ~]# hadoop  fs
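A few more everyday commands, for reference (the paths here are only examples):

hadoop fs -mkdir -p /input       # create a directory, including parents

hadoop fs -cat /hdfs.txt         # print a file's contents

hadoop fs -du -h /               # show space used per entry

hadoop fs -rm /hdfs.txt          # delete a file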

II. Installing Zookeeper

http://archive.apache.org/dist/zookeeper/zookeeper-3.4.5/zookeeper-3.4.5.tar.gz

1. Cluster environment

192.168.43.205   master

192.168.43.79     slave1

192.168.43.32     slave2

JDK and Hadoop are already installed.

2. Install Zookeeper

[root@master src]# tar -xvf zookeeper-3.4.5.tar.gz  -C /usr/local/

It is ready to use once unpacked.

3. Configure environment variables

[root@master ~]# vim /etc/profile

export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.5

export PATH=$ZOOKEEPER_HOME/bin:$PATH

[root@master ~]# source /etc/profile

[root@master ~]# echo $ZOOKEEPER_HOME

/usr/local/zookeeper-3.4.5

4. Configuration

[root@master ~]# cd /usr/local/zookeeper-3.4.5/

Create the data and log directories:

[root@master zookeeper-3.4.5]# mkdir data

[root@master zookeeper-3.4.5]# mkdir log

[root@master zookeeper-3.4.5]# cd conf

[root@master conf]# cp zoo_sample.cfg zoo.cfg

[root@master conf]# vim zoo.cfg

dataDir=/usr/local/zookeeper-3.4.5/data

dataLogDir=/usr/local/zookeeper-3.4.5/log

server.1=master:2888:3888

server.2=slave1:2888:3888

server.3=slave2:2888:3888
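For reference, the resulting zoo.cfg, keeping the defaults that zoo_sample.cfg already ships with (the tickTime, initLimit, syncLimit and clientPort values below are those stock defaults), looks roughly like this:

tickTime=2000

initLimit=10

syncLimit=5

clientPort=2181

dataDir=/usr/local/zookeeper-3.4.5/data

dataLogDir=/usr/local/zookeeper-3.4.5/log

server.1=master:2888:3888

server.2=slave1:2888:3888

server.3=slave2:2888:3888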

5. Copy to the other two servers

[root@master ~]# scp -r /usr/local/zookeeper-3.4.5/ slave1:/usr/local/

[root@master ~]# scp -r /usr/local/zookeeper-3.4.5/ slave2:/usr/local/

[root@master ~]# scp /etc/profile slave1:/etc/

[root@master ~]# scp /etc/profile slave2:/etc/

[root@slave1 ~]# source /etc/profile

[root@slave2 ~]# source /etc/profile

6. Add the myid file

The value in myid must match the X of the server.X entry for that host in zoo.cfg, and the file must live under the dataDir configured above.

[root@master ~]# echo 1 >/usr/local/zookeeper-3.4.5/data/myid

[root@slave1 ~]# echo 2 >/usr/local/zookeeper-3.4.5/data/myid

[root@slave2 ~]# echo 3 >/usr/local/zookeeper-3.4.5/data/myid
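A quick way to double-check the three values from the master, using the passwordless SSH set up earlier (a small sketch):

for h in master slave1 slave2; do echo -n "$h: "; ssh root@$h cat /usr/local/zookeeper-3.4.5/data/myid; done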

7. Start the service

[root@master bin]# pwd

/usr/local/zookeeper-3.4.5/bin

[root@master bin]# zkServer.sh start

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@master bin]# jps

4404 NameNode

4565 SecondaryNameNode

6524 Jps

3277 JobHistoryServer

4749 ResourceManager

6493 QuorumPeerMain        # the zookeeper process

[root@slave1 ~]# zkServer.sh start

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@slave2 bin]# zkServer.sh start

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

8. Check status

[root@master bin]# zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Mode: leader

[root@slave1 ~]# zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Mode: follower

[root@slave2 bin]# zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Mode: follower

9. Test failover after the leader goes down

[root@slave2 bin]# jps

2085 NodeManager

2824 QuorumPeerMain

1977 DataNode

2894 Jps

[root@slave2 bin]# ps -aux | grep zookeeper

Warning: bad syntax, perhaps a bogus '-'?See /usr/share/doc/procps-3.2.8/FAQ

root      2824  0.4  5.2 2262348 52608 pts/0   Sl  11:18   0:13 /usr/java/jdk1.8.0_144/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE

[root@slave2 bin]# kill  -9 2824

[root@slave2 bin]# ps -aux | grep zookeeper

Warning: bad syntax, perhaps a bogus '-'?See /usr/share/doc/procps-3.2.8/FAQ

root      9305  0.0  0.0 103328  852 pts/0    S+   12:10  0:00 grep zookeeper

[root@slave2 bin]# jps

7520 NodeManager

7415 DataNode

9307 Jps

Now check the status on the other two servers:

[root@master bin]# zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Mode: follower

[root@slave1 ~]# zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg

Mode: leader

slave1 is now the leader.

10. Zookeeper four-letter commands

conf   configuration details

cons   connection details

dump   outstanding sessions and ephemeral nodes

envi   environment details

reqs   outstanding requests

stat   server statistics

wchs   summary information about watches on the server

wchp   watches on the server, listed by znode path

[root@master bin]# echo conf | nc 192.168.43.205 2181

clientPort=2181

dataDir=/usr/local/zookeeper-3.4.5/data/version-2

dataLogDir=/usr/local/zookeeper-3.4.5/log/version-2

。。。。。。

[root@master bin]# echo envi | nc 192.168.43.205 2181

Environment:

zookeeper.version=3.4.5-1392090, built on09/30/2012 17:52 GMT

host.name=master

。。。。。。

[root@master ~]# echo stat | nc 192.168.43.205 2181

Zookeeper version: 3.4.5-1392090, built on09/30/2012 17:52 GMT

Clients:

 /192.168.43.205:49880[0](queued=0,recved=1,sent=0)

。。。。。。
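Another handy four-letter word is ruok ("are you OK"); a server that is up and healthy replies imok:

echo ruok | nc 192.168.43.205 2181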

11. Zookeeper client commands

Connect to a server with zkCli.sh:

zkCli.sh -server master:2181     (slave1:2181 or slave2:2181 work just as well)

[root@master bin]# ./zkCli.sh

Connecting to localhost:2181

。。。。。。

WATCHER::

WatchedEvent state:SyncConnected type:Nonepath:null

[zk: localhost:2181(CONNECTED) 0] h           ## help

ZooKeeper -server host:port cmd args

      stat path [watch]

      set path data [version]

       。。。。。。

[zk: localhost:2181(CONNECTED) 1] ls /     # list the children of a znode

[zookeeper]

[zk: localhost:2181(CONNECTED) 2] ls /zookeeper

[quota]

[zk: localhost:2181(CONNECTED) 3] ls /zookeeper/quota

[]

[zk: localhost:2181(CONNECTED) 5] create /root test # create a znode with data

Created /root

[zk: localhost:2181(CONNECTED) 7] ls /

[zookeeper, root]

[zk: localhost:2181(CONNECTED) 8] get /root  # get its data

test

cZxid = 0x500000006

ctime = Sat Feb 24 06:16:16 CST 2018

mZxid = 0x500000006

mtime = Sat Feb 24 06:16:16 CST 2018

pZxid = 0x500000006

。。。。。。

Note that ZooKeeper cannot create a multi-level node in one step:

[zk: localhost:2181(CONNECTED) 9] create /root/s1/s1-1  # fails with an error

[zk: localhost:2181(CONNECTED) 10] ls /root          # still empty

[]

[zk: localhost:2181(CONNECTED) 11] create /root/s1 s1-date

Created /root/s1

[zk: localhost:2181(CONNECTED) 12] create /root/s2 s2-data

Created /root/s2

[zk: localhost:2181(CONNECTED) 13] create /root/s3 s3-data

Created /root/s3

[zk: localhost:2181(CONNECTED) 14] ls /root

[s3, s1, s2]

Check a znode's status:

[zk: localhost:2181(CONNECTED) 15] stat /root

cZxid = 0x500000006

ctime = Sat Feb 24 06:16:16 CST 2018

。。。。。。

Delete a znode:

[zk: localhost:2181(CONNECTED) 16] delete /root/s3

[zk: localhost:2181(CONNECTED) 17] ls /root

[s1, s2]

Disconnect:

[zk: localhost:2181(CONNECTED) 18] close

2018-02-24 06:27:08,915 [myid:] - INFO  - EventThread shut down

2018-02-24 06:27:08,917 [myid:] - INFO  - Session: 0x161c4b008a20002 closed

Reconnect:

Because the data is kept consistent across the ZooKeeper servers, this time I connect to a different server:

[zk: localhost:2181(CLOSED) 19] connect master:2181

[zk: master:2181(CONNECTED) 20] get /root

test

cZxid = 0x500000006

。。。。。。

[zk: master:2181(CONNECTED) 21] get /root/s1

s1-date

。。。。。。

III. Installing Storm

http://mirror.bit.edu.cn/apache/storm/

1. Cluster environment

192.168.43.205 master

192.168.43.79   slave1

192.168.43.32   slave2

JDK, Hadoop and Zookeeper are already installed.

Note: start Hadoop and Zookeeper first, then Storm.

2. Install

Upload the package: apache-storm-1.0.6.tar.gz

[root@master src]# tar -xvf apache-storm-1.0.6.tar.gz -C /usr/local/

[root@master src]# cd /usr/local/

[root@master local]# mv apache-storm-1.0.6/ storm-1.0.6

[root@master storm-1.0.6]# ls

bin       external       lib      NOTICE           RELEASE

conf      extlib         LICENSE  public           SECURITY.md

examples  extlib-daemon  log4j2   README.markdown

[root@master storm-1.0.6]# cd conf/

[root@master conf]# ls            # the main configuration files; back them up first

storm_env.ini  storm-env.sh  storm.yaml

3. Edit the configuration file

[root@master conf]# cp storm.yaml storm.yaml.bak

[root@master conf]# vim storm.yaml

storm.zookeeper.servers:

    - "master"

    - "slave1"

    - "slave2"

nimbus.host: "master"

supervisor.slots.ports:

    - 6700

    - 6701

    - 6702

    - 6703

    - 6704

    - 6705
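Note that in Storm 1.x, nimbus.host is deprecated in favour of the nimbus.seeds list (which also enables Nimbus HA); for this single-Nimbus setup the equivalent newer setting would be roughly:

nimbus.seeds: ["master"]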

4. Configure environment variables

[root@master conf]# vim /etc/profile

#####################storm

export STORM_HOME=/usr/local/storm-1.0.6/

export PATH=$PATH:$STORM_HOME/bin

[root@master conf]# source /etc/profile

5. Copy to slave1 and slave2

#Master

[root@master ~]# scp -r /usr/local/storm-1.0.6/ slave1:/usr/local/

[root@master ~]# scp -r /usr/local/storm-1.0.6/ slave2:/usr/local/

[root@master ~]# scp /etc/profile slave1:/etc/

[root@master ~]# scp /etc/profile slave2:/etc/

[root@slave1 ~]# source /etc/profile

[root@slave2 ~]# source /etc/profile

[root@slave2 ~]# echo $STORM_HOME

6. Start the cluster

[root@master ~]# vim /usr/local/storm-1.0.6/bin/start-storm-master.sh 

nohup /usr/local/storm-1.0.6/bin/storm nimbus >/dev/null 2>&1 &

nohup /usr/local/storm-1.0.6/bin/storm ui >/dev/null 2>&1 &

nohup /usr/local/storm-1.0.6/bin/storm logviewer  >/dev/null 2>&1 &

[root@master ~]# chmod +x /usr/local/storm-1.0.6/bin/start-storm-master.sh

[root@master ~]# /usr/local/storm-1.0.6/bin/start-storm-master.sh

#Slave1 and Slave2

[root@slave1 ~]# vim /usr/local/storm-1.0.6/bin/start-storm-slave.sh

nohup /usr/local/storm-1.0.6/bin/storm supervisor  >/dev/null 2>&1 &

nohup /usr/local/storm-1.0.6/bin/storm logviewer  >/dev/null 2>&1 &

[root@slave1 ~]# chmod +x /usr/local/storm-1.0.6/bin/start-storm-slave.sh

[root@slave1 ~]# /usr/local/storm-1.0.6/bin/start-storm-slave.sh

Do the same on slave2; it is not repeated here.

7. Cluster status

Check with jps.

#Master

[root@master ~]# jps 

6977 nimbus       # master daemon: resource allocation and task scheduling

6978 core            # the UI process; must run on the same server as nimbus        

6979 logviewer    # lets logs be viewed from the Storm UI

4404 NameNode

4565 SecondaryNameNode

7322 Jps

3277 JobHistoryServer

4749 ResourceManager

6493 QuorumPeerMain

#Slave1

[root@slave1 ~]# jps 

2354 NodeManager

2246 DataNode

2969 QuorumPeerMain

5629 Supervisor        # slave daemon: manages the workers

5630 logviewer

6046 Jps

#Slave2

[root@slave2 ~]# jps 

2085 NodeManager

5591 Supervisor        # slave daemon: manages the workers

2824 QuorumPeerMain

5592 logviewer

1977 DataNode

5994 Jps

8. Monitoring page

http://master:8080/index.html
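To exercise the whole cluster end to end, the bundled storm-starter WordCount topology can be submitted (a sketch: the jar path and file name below are assumed from the 1.0.6 binary distribution and may differ slightly on your machine):

storm jar /usr/local/storm-1.0.6/examples/storm-starter/storm-starter-topologies-1.0.6.jar org.apache.storm.starter.WordCountTopology wordcount

storm list              # the topology should show up as ACTIVE

storm kill wordcount    # remove it again when finished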

9. Stop the cluster

#Master, Slave1 and Slave2

[root@master ~]# vim /usr/local/storm-1.0.6/bin/stop-storm.sh

kill `ps aux| grep storm | grep -v 'grep' | awk '{print $2}'`

[root@master ~]# chmod +x /usr/local/storm-1.0.6/bin/stop-storm.sh

[root@master ~]# /usr/local/storm-1.0.6/bin/stop-storm.sh

IV. Installing HBase

http://mirror.bit.edu.cn/apache/hbase/

http://archive.apache.org/dist/hbase/hbase-0.98.6/hbase-0.98.6-hadoop2-bin.tar.gz

Hadoop must be started before HBase.

HBase has a bundled Zookeeper; if Zookeeper was installed separately, it must also be started before HBase.

Start order: Hadoop -> Zookeeper -> HBase

1. Cluster environment

192.168.43.205  master

192.168.43.79   slave1

192.168.43.32   slave2

Hadoop, Zookeeper and Storm are already installed.

2. Install HBase

#Master

Upload the package: hbase-0.98.6-hadoop2-bin.tar.gz

[root@master src]# tar -xvf hbase-0.98.6-hadoop2-bin.tar.gz -C /usr/local/

[root@master src]# cd /usr/local

[root@master local]# mv hbase-0.98.6-hadoop2/ hbase-0.98.6

3. Edit the HBase configuration files

#Master

[root@master local]# cd hbase-0.98.6/conf/

[root@master conf]# ls

hadoop-metrics2-hbase.properties  hbase-policy.xml  regionservers

hbase-env.cmd                     hbase-site.xml

hbase-env.sh                      log4j.properties

3.1 Edit regionservers

#Master

[root@master conf]# vim regionservers

master

slave1

slave2

3.2 Edit hbase-env.sh

#Master

[root@master conf]# vim hbase-env.sh

# add the following two lines

export JAVA_HOME=/usr/java/jdk1.8.0_172

export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib

124 # export HBASE_MANAGES_ZK=true

This line is commented out by default; uncomment it. true means use the bundled Zookeeper, false means use a separately installed Zookeeper.

Change it to: export  HBASE_MANAGES_ZK=false

3.3 Edit hbase-site.xml

#Master

[root@master conf]# vim hbase-site.xml

<configuration>

       <property>

               <name>hbase.tmp.dir</name>

<value>/var/hbase</value>              <!-- temporary file location -->

        </property>

       <property>

               <name>hbase.rootdir</name>

               <value>hdfs://master:9000/hbase</value>

       </property>

       <property>

               <name>hbase.cluster.distributed</name>

               <value>true</value>

       </property>

       <property>

               <name>hbase.zookeeper.quorum</name>

               <value>master,slave1,slave2</value>

       </property>

       <property>

               <name>hbase.zookeeper.property.dataDir</name>

               <value>/usr/local/hbase-0.98.6/zookeeper</value>

       </property>

       <property>

               <name>hbase.master.info.port</name>

               <value>60010</value>

       </property>

</configuration>
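Since hbase.rootdir points at HDFS, it must match the fs.defaultFS address configured in core-site.xml (hdfs://192.168.43.205:9000, i.e. master:9000 here). A quick check that HDFS answers on that address before starting HBase:

[root@master ~]# hdfs dfs -ls hdfs://master:9000/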

4. Add environment variables

[root@master ~]# vim /etc/profile

####################hbase

export HBASE_HOME=/usr/local/hbase-0.98.6

export HBASE_CLASSPATH=$HBASE_HOME/conf

export HBASE_LOG_DIR=$HBASE_HOME/logs

export PATH=$PATH:$HBASE_HOME/bin

[root@master ~]# source /etc/profile

# reload the environment variables

5. Copy the installation

[root@master ~]# scp -r /usr/local/hbase-0.98.6/ slave1:/usr/local/ 

[root@master ~]# scp -r /usr/local/hbase-0.98.6/ slave2:/usr/local/ 

[root@master ~]# scp /etc/profile slave1:/etc/ 

[root@master ~]# scp /etc/profile slave2:/etc/ 

[root@slave1 ~]# source /etc/profile

[root@slave2 ~]# source /etc/profile

6. Start the cluster

Start order:

hadoop -> hbase                          (bundled Zookeeper)

hadoop -> zookeeper -> hbase     (separately installed Zookeeper)

Shut down in the reverse order.

Start HBase:

Before starting, confirm that Hadoop and Zookeeper are already running, and mind the start order.

[root@master ~]# /usr/local/hbase-0.98.6/bin/start-hbase.sh 

starting master, logging to /usr/local/hbase-0.98.6/bin/../logs/hbase-root-master-master.out

slave2: starting regionserver, logging to /usr/local/hbase-0.98.6/bin/../logs/hbase-root-regionserver-slave2.out

slave1: starting regionserver, logging to /usr/local/hbase-0.98.6/bin/../logs/hbase-root-regionserver-slave1.out

master: starting regionserver, logging to /usr/local/hbase-0.98.6/bin/../logs/hbase-root-regionserver-master.out

7. Process status

[root@master ~]# jps 

4404 NameNode

4565 SecondaryNameNode

9749 Jps

9547 HRegionServer

3277 JobHistoryServer

4749 ResourceManager

6493 QuorumPeerMain

9438 HMaster

8. Monitoring page

http://master:60010/master-status
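As a functional smoke test, a table can be created and dropped again from the HBase shell (a minimal sketch; 't1' and 'cf' are arbitrary example names):

[root@master ~]# hbase shell

hbase(main):001:0> status

hbase(main):002:0> create 't1', 'cf'

hbase(main):003:0> list

hbase(main):004:0> disable 't1'

hbase(main):005:0> drop 't1'

hbase(main):006:0> exit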

9. Stop the cluster

#master

[root@master ~]# /usr/local/hbase-0.98.6/bin/stop-hbase.sh

Configuration complete!
