Hadoop 2.7.3 + ZooKeeper 3.4.8 + HBase 1.2.2 Environment Setup
Installing Hadoop 2.7.3 on CentOS 6.7 and Getting Started
I. Preparation
- Three virtual machines with CentOS 6.7 installed
- The nodes:

IP              Hostname            User
192.168.1.151   hadoop-master-001   hadoop
192.168.1.152   hadoop-slave-001    hadoop
192.168.1.153   hadoop-slave-002    hadoop

- Create the user and set its password:
useradd hadoop
passwd hadoop
Disable Transparent Hugepage
1. Check the Transparent Hugepage status
cat /sys/kernel/mm/transparent_hugepage/enabled
2. The result returned:
[always] madvise never
3. Disable it permanently
vim /etc/rc.local

Add the following code:

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
4. Reboot the machine
5. Check the status again
cat /sys/kernel/mm/transparent_hugepage/enabled
The result returned:
always madvise [never]
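As a side note, the same values can also be applied immediately without a reboot, assuming the user has sudo rights (a plain "sudo echo never > ..." would fail because the redirection runs as the unprivileged user); the rc.local entry above still makes the change persistent:

$ sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
$ sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'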
II. System Environment Installation
1. JDK. This guide uses:
jdk-8u101-linux-x64.tar.gz
The package can be downloaded from the official site: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
2. Configure the JDK
Extract the archive to the target directory:
$ tar xvf jdk-8u101-linux-x64.tar.gz -C /opt/   # extract into /opt
$ mv /opt/jdk1.8.0_101 /opt/jdk                 # rename the directory
3. Configure the environment variables (JDK); configure this the same way on every node.
Open the global environment file profile:
$ sudo vim /etc/profile   # non-root users need sudo; see your Linux user-management notes for granting sudo rights

Append the following:

JAVA_HOME=/opt/jdk
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH

Apply the file:
$ source /etc/profile
$ echo $JAVA_HOME   # verify the variable took effect; seeing the install directory means success
/opt/jdk
$ java -version     # check the Java version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
Map the hostnames on every node:
192.168.1.151 hadoop-master-001
192.168.1.152 hadoop-slave-001
192.168.1.153 hadoop-slave-002
$ vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.151 hadoop-master-001
192.168.1.152 hadoop-slave-001
192.168.1.153 hadoop-slave-002
Set up passwordless access between all nodes:
1. Install all the SSH packages, including the client:
$ sudo yum install openssh* -y   # openssh, openssh-clients, openssh-server
2. Generate a key pair as the hadoop user
$ ssh-keygen -t rsa      # press Enter through the prompts; do not set a passphrase
$ ls ~/.ssh/id_rsa*      # view the key pair
/home/hadoop/.ssh/id_rsa /home/hadoop/.ssh/id_rsa.pub
3. Copy the public key to the remote machines; do this the same way on all three machines
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-master-001 # the first time, follow the prompt and enter the password
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-slave-001
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-slave-002
4. Verify that login no longer requires a password
$ ssh localhost
$ ssh hadoop@hadoop-master-001
$ ssh hadoop@hadoop-slave-001
$ ssh hadoop@hadoop-slave-002
Here only 001 is the master. If you have multiple NameNodes or ResourceManagers, every master needs passwordless login to all the remaining nodes.
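A quick loop to confirm all three logins in one pass (a sketch using the hostnames above; -o BatchMode=yes makes ssh fail instead of prompting when key authentication is broken):

for h in hadoop-master-001 hadoop-slave-001 hadoop-slave-002; do
  ssh -o BatchMode=yes hadoop@$h hostname   # prints each hostname if key login works
done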
III. Once the checks above pass, install Hadoop:
1. This guide uses hadoop-2.7.3.tar.gz
Download Hadoop from http://hadoop.apache.org/releases.html
2. Extract to the target location and rename:
$ tar xvf hadoop-2.7.3.tar.gz -C /opt
$ mv /opt/hadoop-2.7.3 /opt/hadoop
3. Check the Hadoop version:
$ /opt/hadoop/bin/hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /opt/hadoop/share/hadoop/common/hadoop-common-2.7.3.jar
4. Configure the environment variables:
$ sudo vim /etc/profile
Insert the following:
export PATH=$PATH:/opt/hadoop/bin:/opt/hadoop/sbin
Apply the environment variables:
$ source /etc/profile   # note: "sudo source" does not work, since source is a shell builtin
IV. Configure Hadoop
1. Create the following directories on all nodes (a loop to run this on every node follows the list):
Command: mkdir /home/hadoop/xxxx
/home/hadoop/name
/home/hadoop/data
/home/hadoop/temp
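The loop mentioned above, run from the master (a sketch assuming the passwordless SSH configured earlier):

for h in hadoop-master-001 hadoop-slave-001 hadoop-slave-002; do
  ssh hadoop@$h 'mkdir -p /home/hadoop/name /home/hadoop/data /home/hadoop/temp'
done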
2. Edit the required configuration files:
Seven configuration files are involved here.
Modify the configuration (even if the system already sets JAVA_HOME, the env.sh files must still be configured):
$ vim /opt/hadoop/etc/hadoop/hadoop-env.sh
....
# The java implementation to use.
export JAVA_HOME=/opt/jdk    # change this line
........
$ vim /opt/hadoop/etc/hadoop/yarn-env.sh
......
# User for YARN daemons
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}

# resolve links - $0 may be a softlink
export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"

# some Java parameters
export JAVA_HOME=/opt/jdk    # change this value
if [ "$JAVA_HOME" != "" ]; then
........
$ vim /opt/hadoop/etc/hadoop/slaves
hadoop-master-001   # omit this machine if the master should not also act as a storage node
hadoop-slave-001
hadoop-slave-002
$ vim /opt/hadoop/etc/hadoop/core-site.xml
hdfs:// — the address of the master node
file: — the local path where files are stored
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-master-001:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/temp</value>
<description>Abase for other temporary directories.</description>
</property>
</configuration>
$ vim /opt/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-master-001:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
$ vim /opt/hadoop/etc/hadoop/mapred-site.xml
If this file does not exist (by default it does not), just make a copy of mapred-site.xml.template:
$ cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop-master-001:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop-master-001:19888</value>
</property>
</configuration>
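A note on the two jobhistory addresses above: they only answer once the JobHistory daemon is running, and the start-dfs/start-yarn scripts do not launch it. After the cluster is up (section V), it can be started with the stock script:

$ /opt/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver   # serves the web UI on port 19888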
$ vim /opt/hadoop/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-master-001:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-master-001:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-master-001:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-master-001:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-master-001:8088</value>
</property>
</configuration>
3. Copy hadoop to the other nodes:
$ scp -r /opt/hadoop hadoop-slave-001:/opt/hadoop
$ scp -r /opt/hadoop hadoop-slave-002:/opt/hadoop
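If a configuration file changes later, only etc/hadoop needs to be pushed out again; a sketch, assuming rsync is installed on every node:

for h in hadoop-slave-001 hadoop-slave-002; do
  rsync -av /opt/hadoop/etc/hadoop/ $h:/opt/hadoop/etc/hadoop/
done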
V. Start the Services
1. Enter the installation directory: cd /opt/hadoop/
2. Format the NameNode:
(Note there is only a single dash before format.) On success you will see "successfully formatted" and "Exitting with status 0"; "Exitting with status 1" indicates an error.
./bin/hdfs namenode -format
3. Start the HDFS nodes: running this on the master node starts all the other nodes automatically; they do not need to be started one by one.
$ ./sbin/start-dfs.sh
If everything comes up without errors, the nodes started successfully.
4. After startup, check each node with the jps command.
Master node:
$ jps
1684 DataNode
1853 SecondaryNameNode
1550 NameNode
1967 Jps
Slave nodes (both look the same):
$ jps
1536 Jps
1450 DataNode
5. Start YARN:
$ ./sbin/start-yarn.sh
Master node:
$ jps
1684 DataNode
2136 NodeManager
2027 ResourceManager
1853 SecondaryNameNode
1550 NameNode
2447 Jps
Slave nodes (both look the same):
$ jps
1450 DataNode
1583 NodeManager
1711 Jps
6. Run a check:
Check the cluster status: ./bin/hdfs dfsadmin -report
Seeing "Live datanodes (3):" means success.
$ ./bin/hdfs dfsadmin -report
Configured Capacity: 50002649088 (46.57 GB)
Present Capacity: 39556759552 (36.84 GB)
DFS Remaining: 39556624384 (36.84 GB)
DFS Used: 135168 (132 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 10.10.10.2:50010 (test2.dadoop.com)
Hostname: test2.dadoop.com
Decommission Status : Normal
Configured Capacity: 16667549696 (15.52 GB)
DFS Used: 45056 (44 KB)
Non DFS Used: 3274051584 (3.05 GB)
DFS Remaining: 13393453056 (12.47 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.36%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 05 01:54:47 CST 2016

Name: 10.10.10.3:50010 (test1.dadoop.com)
Hostname: test1.dadoop.com
Decommission Status : Normal
Configured Capacity: 16667549696 (15.52 GB)
DFS Used: 45056 (44 KB)
Non DFS Used: 3221540864 (3.00 GB)
DFS Remaining: 13445963776 (12.52 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.67%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 05 01:54:47 CST 2016

Name: 10.10.10.1:50010 (test3.dadoop.com)
Hostname: test3.dadoop.com
Decommission Status : Normal
Configured Capacity: 16667549696 (15.52 GB)
DFS Used: 45056 (44 KB)
Non DFS Used: 3950297088 (3.68 GB)
DFS Remaining: 12717207552 (11.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 76.30%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 05 01:54:48 CST 2016
Notes:
The report above is the output that confirms the cluster was built successfully.
After a successful start, you can visit the web UI at http://192.168.1.151:50070 to view NameNode and DataNode information and browse files in HDFS online.
With YARN started, job status can be viewed in its web UI: http://192.168.1.151:8088/cluster
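As an end-to-end smoke test of HDFS plus YARN, you can run the example job shipped with the distribution (the jar path below matches the stock 2.7.3 layout):

$ cd /opt/hadoop
$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10   # estimates pi with 2 maps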

VI. Common Commands:
Running hadoop fs with no arguments lists the help for all HDFS subcommands; the syntax largely mirrors Linux file operations.
$ hadoop fs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] <path> ...]
        [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-d] [-h] [-R] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|resourcemanager:port>                specify a ResourceManager
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
For example, to copy local files into HDFS:
hadoop fs -copyFromLocal *.log hdfs://192.168.1.151:9000/data/weblogs
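The destination directory must exist before the copy; a minimal sketch around the example above (the /data/weblogs path is only illustrative):

$ hadoop fs -mkdir -p hdfs://192.168.1.151:9000/data/weblogs   # create the target directory first
$ hadoop fs -ls /data/weblogs                                  # verify the uploaded files afterwards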
Installing ZooKeeper on the Hadoop Cluster on CentOS 6.7
I. Environment Preparation
1. Basics
This guide uses zookeeper-3.4.8; for background on what ZooKeeper is used for, see:
http://baike.baidu.com/link?url=u20_tyl26COPpanL-vvVvjgD-O12T5FvL50MIcrTj_X_mdW8svq6pH5_dE6DzELp3IUCK3dDTEOnNswAWPZ0wK
Extract the package on the master node:
$ tar xvf zookeeper-3.4.8.tar.gz -C /opt/    # extract to the target directory
$ mv /opt/zookeeper-3.4.8 /opt/zookeeper     # rename; the paths below assume /opt/zookeeper
2. Configure
Enter the configuration directory:
$ cd /opt/zookeeper/conf/
$ cp zoo_sample.cfg zoo.cfg
$ vim zoo.cfg
3. Add the following:
dataDir=/home/hadoop/tmp/zookeeper/data/
server.1=hadoop-master-001:7000:7001
server.2=hadoop-slave-001:7000:7001
server.3=hadoop-slave-002:7000:7001
The complete file then looks like this:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# dataDir changed to the chosen directory:
dataDir=/home/hadoop/tmp/zookeeper/data/
#logDir=/home/hadoop/tmp/zookeeper/log/zk.log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=hadoop-master-001:7000:7001
server.2=hadoop-slave-001:7000:7001
server.3=hadoop-slave-002:7000:7001
4. Copy zookeeper to the two slave machines:
$ cd /opt/
$ scp -r zookeeper hadoop-slave-001:/opt/
$ scp -r zookeeper hadoop-slave-002:/opt/
5. Create the required directory on each node; run the following command on all three machines:
$ mkdir /home/hadoop/tmp/zookeeper/data -p
Create the node ID file; on the first machine, write the number 1 into it:
$ vim /home/hadoop/tmp/zookeeper/data/myid
Or use the following method (a one-loop version follows below):
$ echo "1" > /home/hadoop/tmp/zookeeper/data/myid   # first machine
$ echo "2" > /home/hadoop/tmp/zookeeper/data/myid   # second machine
$ echo "3" > /home/hadoop/tmp/zookeeper/data/myid   # third machine
6. Start the zookeeper service; run this on all three machines. Note that the log (zookeeper.out) is written to the directory you start the service from, so launch it from the directory where you want the log.
$ /opt/zookeeper/bin/zkServer.sh start
If you see the following error, check the zookeeper.out log file in the current directory. If it is a connection problem, you can ignore it for now: the other two nodes have not been started yet.
$ /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
zookeeper.out
$ cat zookeeper.out
2016-09-06 03:01:59,753 [myid:] - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /opt/zookeeper/bin/../conf/zoo.cfg
2016-09-06 03:01:59,763 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: test3.dadoop.com to address: test3.dadoop.com/10.10.10.1
2016-09-06 03:01:59,764 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: test1.dadoop.com to address: test1.dadoop.com/10.10.10.3
2016-09-06 03:01:59,764 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: test2.dadoop.com to address: test2.dadoop.com/10.10.10.2
2016-09-06 03:01:59,764 [myid:] - INFO [main:QuorumPeerConfig@331] - Defaulting to majority quorums
2016-09-06 03:01:59,768 [myid:1] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2016-09-06 03:01:59,768 [myid:1] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2016-09-06 03:01:59,768 [myid:1] - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2016-09-06 03:01:59,775 [myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
2016-09-06 03:01:59,782 [myid:1] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
2016-09-06 03:01:59,787 [myid:1] - INFO [main:QuorumPeer@1019] - tickTime set to 2000
2016-09-06 03:01:59,788 [myid:1] - INFO [main:QuorumPeer@1039] - minSessionTimeout set to -1
2016-09-06 03:01:59,788 [myid:1] - INFO [main:QuorumPeer@1050] - maxSessionTimeout set to -1
2016-09-06 03:01:59,788 [myid:1] - INFO [main:QuorumPeer@1065] - initLimit set to 10
2016-09-06 03:01:59,795 [myid:1] - INFO [main:FileSnap@83] - Reading snapshot /home/hadoop/tmp/zookeeper/data/version-2/snapshot.300000006
2016-09-06 03:01:59,803 [myid:1] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: test3.dadoop.com/10.10.10.1:7001
2016-09-06 03:01:59,810 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@774] - LOOKING
2016-09-06 03:01:59,811 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@818] - New election. My id = 1, proposed zxid=0x300000006
2016-09-06 03:01:59,812 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x300000006 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x6 (n.peerEpoch) LOOKING (my state)
2016-09-06 03:01:59,814 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 2 at election address test2.dadoop.com/10.10.10.2:7001
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
        at java.lang.Thread.run(Thread.java:745)
2016-09-06 03:01:59,815 [myid:1] - INFO [WorkerSender[myid=1]:QuorumPeer$QuorumServer@149] - Resolved hostname: test2.dadoop.com to address: test2.dadoop.com/10.10.10.2
2016-09-06 03:01:59,815 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 3 at election address test1.dadoop.com/10.10.10.3:7001
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
        at java.lang.Thread.run(Thread.java:745)
2016-09-06 03:01:59,816 [myid:1] - INFO [WorkerSender[myid=1]:QuorumPeer$QuorumServer@149] - Resolved hostname: test1.dadoop.com to address: test1.dadoop.com/10.10.10.3
2016-09-06 03:02:00,017 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 2 at election address test2.dadoop.com/10.10.10.2:7001
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2016-09-06 03:02:00,019 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: test2.dadoop.com to address: test2.dadoop.com/10.10.10.2
2016-09-06 03:02:00,021 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 3 at election address test1.dadoop.com/10.10.10.3:7001
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2016-09-06 03:02:00,022 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: test1.dadoop.com to address: test1.dadoop.com/10.10.10.3
2016-09-06 03:02:00,023 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@852] - Notification time out: 400
2016-09-06 03:02:00,425 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 2 at election address test2.dadoop.com/10.10.10.2:7001
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2016-09-06 03:02:00,427 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: test2.dadoop.com to address: test2.dadoop.com/10.10.10.2
2016-09-06 03:02:00,428 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 3 at election address test1.dadoop.com/10.10.10.3:7001
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2016-09-06 03:02:00,429 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: test1.dadoop.com to address: test1.dadoop.com/10.10.10.3
2016-09-06 03:02:00,429 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@852] - Notification time out: 800
2016-09-06 03:02:01,232 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 2 at election address test2.dadoop.com/10.10.10.2:7001
java.net.ConnectException: Connection refused
After startup completes, check the status on each machine: one leader and two followers.
First machine:
$ /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
Second machine:
$ /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
Third machine:
$ /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
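Each server can also be probed with ZooKeeper's four-letter commands over the client port (a sketch assuming nc is installed):

$ echo ruok | nc hadoop-master-001 2181   # a healthy server replies "imok"
$ echo stat | nc hadoop-master-001 2181   # prints version, connection count, and Mode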
7. Run some test operations:
$ /opt/zookeeper/bin/zkCli.sh -server hadoop-master-001:2181
Once connected, you can run the following operations:
List the root directory: ls /
Create a znode with initial content: create /shenjian hello
Get the znode content: get /shenjian
Update the znode content: set /shenjian world
Delete the znode: delete /shenjian
Quit the client: quit
[zk: 127.0.0.1:2181(CONNECTED) 1] ls /
[zookeeper]
[zk: 127.0.0.1:2181(CONNECTED) 2] create /wangdelogn hello
Created /wangdelogn
[zk: 127.0.0.1:2181(CONNECTED) 3] get /wangdelogn
hello        <<<<--- the znode content
cZxid = 0x700000002
ctime = Tue Sep 06 03:42:19 CST 2016
mZxid = 0x700000002
mtime = Tue Sep 06 03:42:19 CST 2016
pZxid = 0x700000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 127.0.0.1:2181(CONNECTED) 4] ls /
[zookeeper, wangdelogn]
[zk: 127.0.0.1:2181(CONNECTED) 5] set /wangdelogn world
cZxid = 0x700000002
ctime = Tue Sep 06 03:42:19 CST 2016
mZxid = 0x700000003
mtime = Tue Sep 06 03:44:14 CST 2016
pZxid = 0x700000002
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 127.0.0.1:2181(CONNECTED) 6] get /wangdelogn
world
cZxid = 0x700000002
ctime = Tue Sep 06 03:42:19 CST 2016
mZxid = 0x700000003
mtime = Tue Sep 06 03:44:14 CST 2016
pZxid = 0x700000002
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
View the content from another node:
[zk: 127.0.0.1:2181(CONNECTED) 1] get /wangdelogn
world
cZxid = 0x700000002
ctime = Tue Sep 06 03:42:19 CST 2016
mZxid = 0x700000003
mtime = Tue Sep 06 03:44:14 CST 2016
pZxid = 0x700000002
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
Delete the znode created above and quit:
[zk: 127.0.0.1:2181(CONNECTED) 2] delete /wangdelogn
[zk: 127.0.0.1:2181(CONNECTED) 4] quit
Quitting...
2016-09-06 03:47:35,888 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x356fbbb5ad10000 closed
2016-09-06 03:47:35,892 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x356fbbb5ad10000
ZooKeeper setup complete.
Installing HBase 1.2.2 on the Hadoop + ZooKeeper Cluster on CentOS 6.7
I. Preparation
hbase-1.2.2-bin.tar.gz
Download address:
http://apache.fayea.com/hbase/1.2.2/
1. Deploy HBase
Extract the package to the target location:
$ tar xvf hbase-1.2.2-bin.tar.gz -C /opt/
$ cd /opt/
$ mv hbase-1.2.2 hbase
2. Configuration files:
$ cd /opt/hbase/conf/
$ vim hbase-env.sh
Insert, or directly modify, the following:
export JAVA_HOME=/opt/jdk
export HBASE_MANAGES_ZK=false
When done, the effective settings look like this; nothing else needs changing for now:
$ grep -v "#" hbase-env.sh | grep -v '^$'
export JAVA_HOME=/opt/jdk
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_MANAGES_ZK=false
3. Edit hbase-site.xml and add the following configuration:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-master-001:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop-master-001,hadoop-slave-001,hadoop-slave-002</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/tmp/zookeeper/data</value>
</property>
</configuration>
4. Edit the regionservers file in the configuration directory:
$ vim regionservers
hadoop-slave-001
hadoop-slave-002
5. Copy HBase to the other machines with the following commands:
$ cd /opt/
$ scp -r hbase hadoop-slave-001:/opt/
$ scp -r hbase hadoop-slave-002:/opt/
6. Start the HBase service:
$ /opt/hbase/bin/start-hbase.sh
If the configuration is correct, everything is now running:
First machine:
$ jps
2064 NodeManager
1586 NameNode
6231 HMaster
1960 ResourceManager
5291 QuorumPeerMain
1804 SecondaryNameNode
6365 HRegionServer
7197 Jps

Second machine:
$ jps
2016 DataNode
1880 NodeManager
3768 Jps
3371 HRegionServer
3037 QuorumPeerMain

Third machine:
$ jps
2016 DataNode
1880 NodeManager
3768 Jps
3371 HRegionServer
3037 QuorumPeerMain
7. Verify the environment (note the typo "sataus" on the first machine below; the correct command is status):
First machine:
$ /opt/hbase/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.2, r3f671c1ead70d249ea4598f1bbcc5151322b3a13, Fri Jul 1 08:28:55 CDT 2016

hbase(main):001:0> sataus
NameError: undefined local variable or method `sataus' for #<Object:0xde81be1>

hbase(main):002:0>

Second machine:
$ /opt/hbase/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.2, r3f671c1ead70d249ea4598f1bbcc5151322b3a13, Fri Jul 1 08:28:55 CDT 2016

hbase(main):001:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load

hbase(main):002:0>

Third machine:
$ /opt/hbase/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.2, r3f671c1ead70d249ea4598f1bbcc5151322b3a13, Fri Jul 1 08:28:55 CDT 2016

hbase(main):001:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load
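With status reporting three servers, a minimal read/write round trip through the shell confirms the cluster works end to end (the table name 'test' and column family 'cf' here are arbitrary examples):

hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> get 'test', 'row1'
hbase(main):005:0> disable 'test'
hbase(main):006:0> drop 'test'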
The environment setup is now complete!
