Official site information
# Official site
https://hadoop.apache.org/
# Documentation for a specific release
https://hadoop.apache.org/docs/r3.3.6/
# Single-node setup guide
https://hadoop.apache.org/docs/r3.3.6/hadoop-project-dist/hadoop-common/SingleCluster.html
# Cluster setup guide
https://hadoop.apache.org/docs/r3.3.6/hadoop-project-dist/hadoop-common/ClusterSetup.html
# Download the hadoop-3.1.0 tarball from the Apache archive
https://archive.apache.org/dist/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz
# hadoop-3.1.0 documentation
https://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation
Deployment plan
// NodeManager hosts are not listed separately: a NodeManager runs on every datanode host
node1: namenode, resourceManager (rm1)
node2: secondaryNameNode, resourceManager (rm2)
node3: datanode, nodeManager
node4: datanode, nodeManager
node5: datanode, nodeManager
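Every node must be able to resolve all five hostnames. A minimal /etc/hosts sketch for each node (the IP addresses are placeholders; substitute the real ones for your network):
# /etc/hosts on every cluster node
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3
192.168.1.104 node4
192.168.1.105 node5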
This guide uses hadoop-3.1.0.
Create an operating user on every cluster node, and configure passwordless SSH access between the nodes
# Add the hadoop user
useradd hadoop
# Switch to the hadoop user, then set up passwordless login for it
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
# Verify that the hadoop user can log in without a password
# (run as the hadoop user)
ssh localhost   # should log in without prompting for a password
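The steps above only authorize logins to the local host. For the start/stop scripts on node1 to reach the other nodes, the hadoop user's public key must be authorized on all of them as well; a minimal sketch using ssh-copy-id (assumes the hadoop user and its password already exist on each node):
# Run on node1 as the hadoop user
for host in node2 node3 node4 node5; do
  ssh-copy-id hadoop@$host
done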
1. Configure environment variables in /etc/profile on every cluster node
export HADOOP_HOME=/opt/hadoop/hadoop-3.1.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
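A quick check that the variables took effect:
# Should print "Hadoop 3.1.0" along with build information
hadoop version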
2. Edit etc/hadoop/hadoop-env.sh
# set to the root of your Java installation
export JAVA_HOME=<path to your JDK installation>
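For example, with a distribution-packaged OpenJDK 8 (the path below is an assumption; point it at wherever your JDK actually lives):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk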
3. Edit etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>path to the data storage directory</value>
  </property>
  <!-- The following entries are optional: they allow the root superuser to proxy requests from any host, group, and user -->
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.users</name>
    <value>*</value>
  </property>
  <!-- dfs.permissions.enabled is an HDFS setting and conventionally lives in hdfs-site.xml -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
    <description>If "true", permission checking in HDFS is enabled; if "false", it is turned off. The default is "true".</description>
  </property>
</configuration>
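Whatever directory hadoop.tmp.dir points at must exist and be writable by the hadoop user on every node; a minimal sketch, assuming /opt/hadoop/data as the chosen directory:
# Run as root on every cluster node
mkdir -p /opt/hadoop/data
chown -R hadoop:hadoop /opt/hadoop/data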
4. Edit etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node2:50090</value>
  </property>
</configuration>
5. Edit etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
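On Hadoop 3.x the MapReduce libraries are not on the daemons' default classpath, so jobs submitted to YARN can fail with missing-class errors; the upstream Hadoop 3 setup guide adds the following property alongside mapreduce.framework.name:
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>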
6. Edit etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>resourcemanager_name</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node2</value>
  </property>
</configuration>
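ResourceManager HA presupposes a ZooKeeper ensemble already running on node1, node2, and node3 (port 2181), which this guide does not cover. Once YARN is up, the active/standby state of each ResourceManager can be checked with:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2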
7. Edit etc/hadoop/workers (Hadoop 3 renamed the former etc/hadoop/slaves file to workers)
node3
node4
node5
8. Sync the configuration to every node in the cluster
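A minimal sketch using rsync (assumes the same /opt/hadoop/hadoop-3.1.0 install path on every node and the passwordless SSH configured earlier):
# Run on node1 as the hadoop user
for host in node2 node3 node4 node5; do
  rsync -a /opt/hadoop/hadoop-3.1.0/ hadoop@$host:/opt/hadoop/hadoop-3.1.0/
done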
9. Format the namenode (run once, on node1 only; reformatting destroys existing HDFS metadata)
bin/hdfs namenode -format
10. Start / stop HDFS
sbin/start-dfs.sh
sbin/stop-dfs.sh
11. Start / stop YARN
sbin/start-yarn.sh
sbin/stop-yarn.sh
12. Start / stop HDFS and YARN together
sbin/start-all.sh
sbin/stop-all.sh
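To confirm the daemons came up as planned, run jps on each node; the output should match the deployment plan (NameNode and ResourceManager on node1, SecondaryNameNode and ResourceManager on node2, DataNode and NodeManager on node3 through node5):
# Run on each node as the hadoop user
jps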
13. HDFS web UI (NameNode)
http://node1:9870
14. YARN web UI (ResourceManager)
http://node1:8088
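The registered datanodes can also be checked from the command line; this should report three live datanodes (node3, node4, node5):
hdfs dfsadmin -report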