Hadoop Distributed Cluster Setup

1. Lab environment:
    192.168.1.1 node1.china.com
    192.168.1.2 node2.china.com
    192.168.1.3 node3.china.com
    192.168.1.4 node4.china.com
    192.168.1.5 node5.china.com
    192.168.1.6 node6.china.com
Roles on node1 and node2: NameNode, DFSZKFailoverController; software required: JDK, Hadoop 2.7.1
Role on node3: ResourceManager; software required: JDK, Hadoop 2.7.1
Roles on node4, node5 and node6: JournalNode, DataNode, NodeManager, QuorumPeerMain; software required: JDK, Hadoop 2.7.1, ZooKeeper 3.4.6

2. Configure local name resolution:

 [root@node1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 node1.china.com
192.168.1.2 node2.china.com
192.168.1.3 node3.china.com
192.168.1.4 node4.china.com
192.168.1.5 node5.china.com
192.168.1.6 node6.china.com

 [root@node1 ~]# for ((x=1;x<=6;x++));do scp /etc/hosts node$x.china.com:/etc/ ; done
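Before moving on, it helps to confirm that every node resolves and answers. A minimal hedged sketch (it assumes ICMP/ping is permitted on the network):

 [root@node1 ~]# for ((x=1;x<=6;x++));do getent hosts node$x.china.com && ping -c1 -W1 node$x.china.com >/dev/null && echo "node$x reachable"; done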

3. Install the JDK

 [root@node1 ~]# for ((x=1;x<=6;x++));do scp jdk-7u45-linux-x64.rpm node$x.china.com:/root/ ; done
    On every node, install it with: rpm -ivh jdk-7u45-linux-x64.rpm
    
Edit /etc/profile as follows:

 [root@node1 ~]# tail -3 /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HADOOP_HOME=/opt/hadoop
export PATH=$JAVA_HOME/jre/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

 [root@node1 ~]# for((x=1;x<=6;x++));do scp /etc/profile node$x.china.com:/etc/ ; done
    On every node, run source /etc/profile, then use java -version to confirm the new JDK is in effect.
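A hedged way to verify the JDK on all six nodes at once (it assumes root can ssh to each node; otherwise run java -version on each host by hand — java -version prints to stderr, hence the redirect):

 [root@node1 ~]# for((x=1;x<=6;x++));do ssh node$x.china.com 'java -version 2>&1 | head -1' ; done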

4. Create the hadoop user and set up passwordless SSH trust

On every node, create the hadoop user:
    useradd hadoop
    echo "redhat" | passwd --stdin hadoop
    Then, on node1:
    
[root@node1 ~]# su - hadoop
[hadoop@node1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
41:1a:d7:b1:df:bc:1d:bc:8e:ec:f9:ef:c5:48:f8:90 hadoop@node1.china.com
The key's randomart image is:
+--[ RSA 2048]----+
|        . o...   |
|         =  ..   |
|        . . .    |
|       . . =.    |
|        S E +o   |
|           + =o  |
|            +.+  |
|           . + . |
|            .=.++|
+-----------------+
[hadoop@node1 ~]$ ssh-copy-id -i node1.china.com
The authenticity of host 'china (192.168.1.1)' can't be established.
RSA key fingerprint is 44:69:99:88:ac:45:67:7c:fe:95:b0:93:7e:af:38:4d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'china,192.168.1.1' (RSA) to the list of known hosts.
hadoop@china's password:
Now try logging into the machine, with "ssh 'china'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
[hadoop@node1 ~]$ for((x=2;x<=6;x++));do scp -r .ssh node$x.china.com:~ ; done
The authenticity of host 'node2 (192.168.1.2)' can't be established.
RSA key fingerprint is a7:24:ed:2e:56:5f:5c:f7:f4:fe:c0:ee:ef:51:a1:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.1.2' (RSA) to the list of known hosts.
hadoop@node2's password:
id_rsa.pub                                100%  407     0.4KB/s   00:00
known_hosts                               100%  799     0.8KB/s   00:00
authorized_keys                           100%  407     0.4KB/s   00:00
id_rsa                                    100% 1675     1.6KB/s   00:00
(the same host-key prompt and file transfer repeat for node3 through node6)
Finally, ssh from each node to every other node to confirm that passwordless login works in both directions.
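The loop below is a minimal scripted version of that test; with BatchMode=yes, ssh fails instead of prompting when key authentication is broken:

 [hadoop@node1 ~]$ for((x=1;x<=6;x++));do ssh -o BatchMode=yes node$x.china.com hostname || echo "node$x: key auth failed"; done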

 5. Configure the ZooKeeper cluster

 [root@node1 ~]# for((x=4;x<=6;x++));do scp zookeeper-3.4.6.tar.gz node$x.china.com:/tmp ;done
    root@node4's password: 
    zookeeper-3.4.6.tar.gz                        100%   17MB  16.9MB/s   00:01    
    root@node5's password: 
    zookeeper-3.4.6.tar.gz                        100%   17MB  16.9MB/s   00:00    
    root@node6's password: 
    zookeeper-3.4.6.tar.gz                        100%   17MB  16.9MB/s   00:01    

    
[root@node5 ~]# chown hadoop.hadoop /opt
[root@node5 ~]# su - hadoop
[hadoop@node5 ~]$ tar xfz /tmp/zookeeper-3.4.6.tar.gz -C /opt/
[hadoop@node5 ~]$ cd /opt/
[hadoop@node5 opt]$ ls
rh  zookeeper-3.4.6
[hadoop@node5 opt]$ mv zookeeper{-3.4.6,}
[hadoop@node5 opt]$ ls
rh  zookeeper
[hadoop@node5 opt]$ ls zookeeper/conf/
configuration.xsl  log4j.properties  zoo_sample.cfg
[hadoop@node5 opt]$ cp zookeeper/conf/zoo{_sample,}.cfg
[hadoop@node5 opt]$ vim zookeeper/conf/zoo.cfg
[hadoop@node5 opt]$ grep -P -v "^($|#)" zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=node4:2888:3888
server.2=node5:2888:3888
server.3=node6:2888:3888

Parameter notes:
    tickTime: the basic heartbeat interval in milliseconds, used between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.
    initLimit: how many tickTime intervals a Follower (a member of the ensemble connecting to the Leader, not an end-user client) may take to complete its initial connection. If the Leader has heard nothing after 10 heartbeats, the connection is treated as failed; the total allowance here is 10 * 2000 ms = 20 seconds.
    syncLimit: the maximum number of tickTime intervals allowed for a request/response round trip between Leader and Follower; here 5 * 2000 ms = 10 seconds.
    dataDir: where ZooKeeper keeps its data; by default the transaction log is written here as well.
    clientPort: the port ZooKeeper listens on for client connections.
    server.A=B:C:D: A is the server number, B the server's address, C the port used to exchange data with the Leader, and D the port used for leader election should the current Leader fail. In a pseudo-cluster where B is identical for every instance, each instance must be given distinct C and D ports.
[hadoop@node5 opt]$ mkdir /opt/zookeeper/data
[hadoop@node5 opt]$ echo 2 > /opt/zookeeper/data/myid    # write this server's ID (matches server.2)
[root@node4 ~]# chown hadoop.hadoop /opt
[root@node6 ~]# chown hadoop.hadoop /opt
[hadoop@node5 opt]$ scp -r /opt/zookeeper node4.china.com:/opt/
[hadoop@node5 opt]$ scp -r /opt/zookeeper node6.china.com:/opt/
[root@node4 ~]# su - hadoop
[hadoop@node4 ~]$ echo 1 > /opt/zookeeper/data/myid
[root@node6 ~]# su - hadoop
[hadoop@node6 ~]$ echo 3 > /opt/zookeeper/data/myid
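Each myid must match that host's server.N line in zoo.cfg (node4=1, node5=2, node6=3 here); a quick hedged cross-check over the existing SSH trust:

 [hadoop@node5 ~]$ for n in 4 5 6; do echo -n "node$n myid: "; ssh node$n.china.com cat /opt/zookeeper/data/myid; done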
[hadoop@node4 ~]$ /opt/zookeeper/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@node4 ~]$ /opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[hadoop@node5 opt]$ /opt/zookeeper/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@node5 opt]$ /opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[hadoop@node6 ~]$ /opt/zookeeper/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@node6 ~]$ /opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
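Beyond zkServer.sh status, the ensemble can be probed with ZooKeeper's four-letter commands; a hedged sketch (it assumes nc is installed):

 [hadoop@node4 ~]$ for n in 4 5 6; do echo ruok | nc node$n.china.com 2181; echo; done    # each server should answer "imok"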

 6. Configure Hadoop

1. First, unpack and configure Hadoop on node1:
    [root@node1 ~]# chown hadoop.hadoop /opt
    [root@node1 ~]# su - hadoop
    [hadoop@node1 ~]$ tar xfz /tmp/hadoop-2.7.1.tar.gz -C /opt
    [hadoop@node1 ~]$ mv /opt/hadoop{-2.7.1,}
    [hadoop@node1 ~]$ ls /opt/hadoop/
    bin  include  libexec      NOTICE.txt  sbin
    etc  lib      LICENSE.txt  README.txt  share
    [hadoop@node1 ~]$ cd /opt/hadoop/etc/hadoop/   <the config files that need editing are listed below>
    [hadoop@node1 hadoop]$ ls
    core-site.xml  hdfs-site.xml             slaves
    hadoop-env.sh  mapred-site.xml.template  yarn-site.xml
    [hadoop@node1 hadoop]$
    [hadoop@node1 hadoop]$ vim hadoop-env.sh   <set JAVA_HOME in the Hadoop runtime environment file>
    [hadoop@node1 hadoop]$ grep "^export JAVA_HOME" hadoop-env.sh
    export JAVA_HOME=/usr/java/jdk1.7.0_45
    
    [hadoop@node1 hadoop]$ vim core-site.xml   <define the cluster-wide nameservice and point at the ZooKeeper ensemble>
    [hadoop@node1 hadoop]$ tail -17 core-site.xml
<configuration>
  <!-- Set the HDFS nameservice to ns1 -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
  <!-- ZooKeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node4.china.com:2181,node5.china.com:2181,node6.china.com:2181</value>
  </property>
</configuration>
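Because fs.defaultFS names the nameservice rather than a single host, clients never hard-code a NameNode; the failover proxy configured in hdfs-site.xml below resolves ns1 to whichever NameNode is active. Once the cluster is up, for example:

    [hadoop@node1 ~]$ hdfs dfs -ls hdfs://ns1/    # explicit nameservice URI
    [hadoop@node1 ~]$ hdfs dfs -ls /              # equivalent, via the default FS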

    [hadoop@node1 hadoop]$ vim hdfs-site.xml       <HDFS-related properties>
    [hadoop@node1 hadoop]$ tail -62 hdfs-site.xml
<configuration>
  <!-- The HDFS nameservice, ns1; must match the value in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>node1.china.com:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>node1.china.com:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>node2.china.com:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>node2.china.com:50070</value>
  </property>
  <!-- Where the NameNode shared edit log lives on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node4.china.com:8485;node5.china.com:8485;node6.china.com:8485/ns1</value>
  </property>
  <!-- Where each JournalNode stores data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/journal</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Proxy provider clients use to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing method -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- sshfence needs passwordless SSH; path to the private key -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
</configuration>
    [hadoop@node1 hadoop]$ 
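Note that sshfence only works if the hadoop user on each NameNode can reach the other NameNode with that key; a hedged pre-check from node1:

    [hadoop@node1 hadoop]$ ssh -i /home/hadoop/.ssh/id_rsa -o BatchMode=yes node2.china.com hostname    # must succeed without a password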
    
    [hadoop@node1 hadoop]$ vim slaves      <list the DataNode hosts>
    [hadoop@node1 hadoop]$ cat slaves
    node4.china.com
    node5.china.com
    node6.china.com
    [hadoop@node1 hadoop]$ 
    
    [hadoop@node1 hadoop]$ cp mapred-site.xml{.template,}
    [hadoop@node1 hadoop]$ vim mapred-site.xml         <run MapReduce on the YARN framework>
    [hadoop@node1 hadoop]$ tail -7 mapred-site.xml
<configuration>
  <!-- Run the MapReduce framework on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
    [hadoop@node1 hadoop]$

    [hadoop@node1 hadoop]$ vim yarn-site.xml           <designate the YARN ResourceManager node>
    [hadoop@node1 hadoop]$ tail -13 yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
  <!-- ResourceManager address -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node3.china.com</value>
  </property>
  <!-- Auxiliary service the NodeManagers load: the MapReduce shuffle server -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
    [hadoop@node1 hadoop]$
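With all four files edited, a hedged sanity check that Hadoop parses the HA configuration as intended:

    [hadoop@node1 hadoop]$ hdfs getconf -confKey fs.defaultFS    # should print hdfs://ns1
    [hadoop@node1 hadoop]$ hdfs getconf -namenodes               # should list node1.china.com node2.china.com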

    Make sure the hadoop user has write permission on /opt on every node:
    [root@node2 ~]# chown hadoop.hadoop /opt/
    [root@node2 ~]# ll /opt/ -d
    drwxr-xr-x. 3 hadoop hadoop 4096 Sep 18 04:09 /opt/
    [root@node3 ~]# chown hadoop.hadoop /opt/
    [root@node3 ~]# ll /opt/ -d
    drwxr-xr-x. 3 hadoop hadoop 4096 Sep 18 04:22 /opt/
    
    

2. Copy the fully configured /opt/hadoop directory to every other node:
    [hadoop@node1 ~]$ scp -r /opt/hadoop node2.china.com:/opt
    [hadoop@node1 ~]$ scp -r /opt/hadoop node3.china.com:/opt
    [hadoop@node1 ~]$ scp -r /opt/hadoop node4.china.com:/opt
    [hadoop@node1 ~]$ scp -r /opt/hadoop node5.china.com:/opt
    [hadoop@node1 ~]$ scp -r /opt/hadoop node6.china.com:/opt
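To confirm the copies landed intact, a hedged spot check compares a config checksum across all nodes:

    [hadoop@node1 ~]$ for((x=1;x<=6;x++));do ssh node$x.china.com md5sum /opt/hadoop/etc/hadoop/hdfs-site.xml; done    # all six digests should match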

3. Make sure ZooKeeper is running on node4, node5 and node6, with one leader and two followers among them.

4. Start the JournalNodes. Run hadoop-daemons.sh start journalnode on any one of node4, node5 or node6 (the plural daemons script starts the daemon on every host listed in slaves):
[hadoop@node4 ~]$ /opt/hadoop/sbin/hadoop-daemons.sh start journalnode
node5: starting journalnode, logging to /opt/hadoop/logs/hadoop-hadoop-journalnode-node5.example.com.out
node6: starting journalnode, logging to /opt/hadoop/logs/hadoop-hadoop-journalnode-node6.example.com.out
node4: starting journalnode, logging to /opt/hadoop/logs/hadoop-hadoop-journalnode-node4.example.com.out
[hadoop@node4 ~]$ jps
2677 Jps
2637 JournalNode
2240 QuorumPeerMain
[hadoop@node5 ~]$ jps
26259 JournalNode
26309 Jps
2666 QuorumPeerMain
[hadoop@node6 ~]$ jps
22717 JournalNode
22766 Jps
2199 QuorumPeerMain
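A hedged check that each JournalNode is also listening on its RPC port (8485, per dfs.namenode.shared.edits.dir), assuming nc is available:

    [hadoop@node4 ~]$ for n in 4 5 6; do nc -z node$n.china.com 8485 && echo "node$n journalnode up"; done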

5. Format HDFS. On node1, run hadoop namenode -format to initialize the NameNode metadata (the output notes that the script form is deprecated in favor of the hdfs command):
[hadoop@node1 ~]$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
16/03/02 21:26:24 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node1.china.com/192.168.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = /opt/hadoop/etc/hadoop:... (several hundred jar entries elided)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_11
************************************************************/
16/03/02 21:26:24 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/03/02 21:26:24 INFO namenode.NameNode: createNameNode [-format]
16/03/02 21:26:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-c09e7fc5-dffd-4bd6-aca8-69b5281757bb
... (routine FSNamesystem/BlockManager/GSet INFO lines elided) ...
16/03/02 21:26:30 INFO namenode.FSNamesystem: Determined nameservice ID: ns1
16/03/02 21:26:30 INFO namenode.FSNamesystem: HA Enabled: true
16/03/02 21:26:37 INFO namenode.FSImage: Allocated new BlockPoolId: BP-282112416-192.168.1.1-1456925197769
16/03/02 21:26:37 INFO common.Storage: Storage directory /opt/hadoop/tmp/dfs/name has been successfully formatted.
16/03/02 21:26:38 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/03/02 21:26:38 INFO util.ExitUtil: Exiting with status 0
16/03/02 21:26:38 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1.china.com/192.168.1.1
************************************************************/
[hadoop@node1 ~]$ ls /opt/hadoop/tmp/dfs/name/current/
fsimage_0000000000000000000      seen_txid
fsimage_0000000000000000000.md5  VERSION
Note: node2 is also a NameNode and needs the same metadata under /opt/hadoop/tmp. Rather than re-running -format on node2 (which would generate a mismatched clusterID), copy /opt/hadoop/tmp from node1 to node2, or bootstrap it as shown below.
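A hedged sketch of the standby bootstrap (it assumes nn1 has already been started, e.g. with hadoop-daemon.sh start namenode on node1; copying /opt/hadoop/tmp over scp works equally well):

    [hadoop@node2 ~]$ hdfs namenode -bootstrapStandby    # pulls the formatted namespace from the running nn1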
6. On node1, format the HA state znode in ZooKeeper:
[hadoop@node1 ~]$ hdfs zkfc -formatZK
16/03/02 21:33:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/02 21:33:59 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at china/192.168.1.1:9000
16/03/02 21:34:00 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
16/03/02 21:34:00 INFO zookeeper.ZooKeeper: Client environment:host.name=node1.china.com
16/03/02 21:34:00 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_11
... (remaining Client environment lines, including the full classpath, elided) ...
16/03/02 21:34:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=node4:2181,node5:2181,node6:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@34c945ea
16/03/02 21:34:00 INFO zookeeper.ClientCnxn: Opening socket connection to server node4.example.com/192.168.1.4:2181. Will not attempt to authenticate using SASL (unknown error)
16/03/02 21:34:00 INFO zookeeper.ClientCnxn: Socket connection established to node4.example.com/192.168.1.4:2181, initiating session
16/03/02 21:34:01 INFO zookeeper.ClientCnxn: Session establishment complete on server node4.example.com/192.168.1.4:2181, sessionid = 0x153374495a90000, negotiated timeout = 5000
16/03/02 21:34:01 INFO ha.ActiveStandbyElector: Session connected.
16/03/02 21:34:01 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns1 in ZK.
16/03/02 21:34:01 INFO zookeeper.ZooKeeper: Session: 0x153374495a90000 closed
16/03/02 21:34:01 INFO zookeeper.ClientCnxn: EventThread shut down
[hadoop@china ~]$
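To confirm the base znode was created, a hedged check with the ZooKeeper CLI from one of the ZooKeeper hosts:

    [hadoop@node4 ~]$ /opt/zookeeper/bin/zkCli.sh -server node4.china.com:2181 ls /hadoop-ha    # should print [ns1]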
7. On node1, start HDFS; then start YARN on node3:
[hadoop@node1 ~]$ /opt/hadoop/sbin/start-dfs.sh
16/03/02 22:11:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [node1.china.com node2.china.com]
node2: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-node2.example.com.out
china: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-node1.china.com.out
node4: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-node4.example.com.out
node6: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-node6.example.com.out
node5: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-node5.example.com.out
Starting journal nodes [node4.china.com node5.china.com node6.china.com]
node4: journalnode running as process 3451. Stop it first.
node6: journalnode running as process 23439. Stop it first.
node5: journalnode running as process 27089. Stop it first.
16/03/02 22:12:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [node1.china.com node2.china.com]
china: starting zkfc, logging to /opt/hadoop/logs/hadoop-hadoop-zkfc-node1.china.com.out
node2: starting zkfc, logging to /opt/hadoop/logs/hadoop-hadoop-zkfc-node2.example.com.out
[hadoop@node1 hadoop]$ jps
32497 Jps
32424 DFSZKFailoverController
32115 NameNode

Then start YARN on node3, the ResourceManager host:
[hadoop@node3 ~]$ /opt/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-hadoop-resourcemanager-node1.china.com.out
node4: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-node4.example.com.out
node5: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-node5.example.com.out
node6: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-node6.example.com.out

Verify the listening ports and daemons:
[hadoop@node1 ~]$ lsof -i:9000
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    32115 hadoop  197u  IPv4  74524      0t0  TCP node1.china.com:cslistener->node1.china.com:46551 (ESTABLISHED)
java    32115 hadoop  209u  IPv4  73214      0t0  TCP node1.china.com:cslistener (LISTEN)
java    32424 hadoop  202u  IPv4  74523      0t0  TCP node1.china.com:46551->node1.china.com:cslistener (ESTABLISHED)
[hadoop@node1 ~]$ lsof -i:50070
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    32115 hadoop  183u  IPv4  73074      0t0  TCP node1.china.com:50070 (LISTEN)
[hadoop@node2 ~]$ jps
9291 DFSZKFailoverController
9181 NameNode
9339 Jps

[hadoop@node2 ~]$ lsof -i:9000
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    9181 hadoop  197u  IPv4  31284      0t0  TCP node2.china.com:cslistener->node2.china.com:49727 (ESTABLISHED)
java    9181 hadoop  209u  IPv4  30469      0t0  TCP node2.china.com:cslistener (LISTEN)
java    9181 hadoop  219u  IPv4  30697      0t0  TCP node2.china.com:cslistener->node4.china.com:36087 (ESTABLISHED)
java    9181 hadoop  220u  IPv4  30932      0t0  TCP node2.china.com:cslistener->node5.china.com:46354 (ESTABLISHED)
java    9181 hadoop  221u  IPv4  30934      0t0  TCP node2.china.com:cslistener->node6.china.com:50555 (ESTABLISHED)
java    9291 hadoop  202u  IPv4  31283      0t0  TCP node2.china.com:49727->node2.china.com:cslistener (ESTABLISHED)
[hadoop@node2 ~]$ lsof -i:50070
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    9181 hadoop  183u  IPv4  30428      0t0  TCP node2.china.com:50070 (LISTEN)
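With everything running, a hedged final check of the HA roles plus a small smoke test against the ns1 nameservice:

    [hadoop@node1 ~]$ hdfs haadmin -getServiceState nn1    # one NameNode reports "active"
    [hadoop@node1 ~]$ hdfs haadmin -getServiceState nn2    # the other reports "standby"
    [hadoop@node1 ~]$ hdfs dfs -mkdir /test
    [hadoop@node1 ~]$ hdfs dfs -put /etc/hosts /test/
    [hadoop@node1 ~]$ hdfs dfs -ls /test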

 
