1. Hadoop cluster plan

HDFS: NN (NameNode) + DN (DataNode)
YARN: RM (ResourceManager) + NM (NodeManager)

192.168.107.216
	NN RM
	DN NM
192.168.107.215
	DN NM
192.168.107.214
	DN NM
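
Optionally, map these IPs to hostnames in /etc/hosts on every node so logs and web UIs show readable names. A minimal sketch (the hostnames here are hypothetical):

192.168.107.216 hadoop001   # hypothetical hostname
192.168.107.215 hadoop002
192.168.107.214 hadoop003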

2. Passwordless SSH login (set up on every node) https://www.cnblogs.com/yoyo1216/p/12668942.html

ssh-keygen -t rsa   # run on every node; press Enter at each prompt to accept the defaults
Then, on 192.168.107.216, push the public key to every node (including itself):
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.107.216
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.107.215
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.107.214
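
To verify, logging in to each node should now succeed without a password prompt:

ssh 192.168.107.215 hostname
ssh 192.168.107.214 hostname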


3. JDK installation https://www.cnblogs.com/yoyo1216/p/12668926.html

1) First install the JDK on 192.168.107.216.
2) Add the JDK bin directory to the system environment variables (/etc/profile).
3) Copy the JDK to the other nodes (run from 192.168.107.216):
scp -r jdk1.8.0_241 root@192.168.107.215:/usr/local/java1.8
scp -r jdk1.8.0_241 root@192.168.107.214:/usr/local/java1.8

Distribute the environment variables as well:
scp /etc/profile root@192.168.107.215:/etc
scp /etc/profile root@192.168.107.214:/etc
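
To confirm each node picks up the JDK, a quick check (this assumes /etc/profile exports JAVA_HOME and adds its bin directory to PATH):

ssh root@192.168.107.215 'source /etc/profile && java -version'
ssh root@192.168.107.214 'source /etc/profile && java -version'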

4. Hadoop deployment

cd /usr/local/hadoop2.6/hadoop-2.6.0-cdh5.16.2/etc/hadoop
1) hadoop-env.sh and yarn-env.sh: set JAVA_HOME explicitly in both files
	export JAVA_HOME=/usr/local/java1.8/jdk1.8.0_241

2) core-site.xml
<property>
  <name>fs.defaultFS</name>  <!-- fs.default.name is the deprecated alias for this key -->
  <value>hdfs://192.168.107.216:8020</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop2.6/data</value>
  <description>A base for other temporary directories.</description>
</property>
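
With the Hadoop bin directory on the PATH, the resolved value can be checked on any node:

hdfs getconf -confKey fs.defaultFS   # should print hdfs://192.168.107.216:8020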

3) hdfs-site.xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>${hadoop.tmp.dir}/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>${hadoop.tmp.dir}/data</value>
</property>
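
dfs.replication is not set here, so it stays at the Hadoop default of 3, which matches the three DataNodes. The resolved value of any HDFS key can be checked the same way:

hdfs getconf -confKey dfs.replication   # prints 3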

4) yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>192.168.107.216</value>
</property>
5) mapred-site.xml
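If your distribution ships only the template (Apache Hadoop 2.x tarballs do; CDH layouts may vary), create the file from it first:

cp mapred-site.xml.template mapred-site.xml
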
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>

6) slaves   # list the Hadoop worker nodes (DataNode / NodeManager hosts), one per line
192.168.107.216
192.168.107.215
192.168.107.214

7) Distribute Hadoop to the other nodes and add the environment variables there as well:
scp -r hadoop-2.6.0-cdh5.16.2 root@192.168.107.215:/usr/local/hadoop2.6
scp -r hadoop-2.6.0-cdh5.16.2 root@192.168.107.214:/usr/local/hadoop2.6

scp /etc/profile root@192.168.107.215:/etc
scp /etc/profile root@192.168.107.214:/etc
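
A quick sanity check that both the binaries and the environment variables landed (assumes /etc/profile puts the Hadoop bin directory on the PATH):

ssh root@192.168.107.215 'source /etc/profile && hadoop version'
ssh root@192.168.107.214 'source /etc/profile && hadoop version'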

8) Format the NameNode (run once, on 192.168.107.216 only):
hdfs namenode -format   # the older "hadoop namenode -format" form still works but is deprecated
9) Start HDFS
bash /usr/local/hadoop2.6/hadoop-2.6.0-cdh5.16.2/sbin/start-dfs.sh
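
After a successful start, jps should show NameNode, DataNode, and (by default) SecondaryNameNode on 192.168.107.216, and a DataNode on each of the other nodes (assuming the JDK bin directory is on each node's PATH):

jps
ssh 192.168.107.215 'source /etc/profile && jps'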
10) Start YARN
bash /usr/local/hadoop2.6/hadoop-2.6.0-cdh5.16.2/sbin/start-yarn.sh
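
Once both layers are up, these standard checks confirm cluster health (the ports are the Hadoop 2.x defaults):

hdfs dfsadmin -report   # all three DataNodes should be listed as live
yarn node -list         # all three NodeManagers should show as RUNNING
Web UIs: http://192.168.107.216:50070 (HDFS NameNode) and http://192.168.107.216:8088 (YARN ResourceManager)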