Fully Distributed Cluster Setup
Modify the IP Addresses and Hostnames
Change the IP address and hostname on each of the three virtual machines:
vim /etc/sysconfig/network-scripts/ifcfg-ens33
Set IPADDR (each node gets its own address; node1 is shown here):
IPADDR=192.168.200.81
Restart the network service for the change to take effect:
systemctl restart network
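For reference, a full static configuration for node1 might look like the sketch below; the GATEWAY/DNS values are assumptions about a typical CentOS 7 / VMware NAT setup for the 192.168.200.0/24 subnet, so adjust them to your environment:
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.200.81
NETMASK=255.255.255.0
GATEWAY=192.168.200.2    # assumed gateway, check your VM network settings
DNS1=192.168.200.2       # assumed DNS
Set the hostname on each machine to match the cluster plan (node1 shown; repeat with node2/node3 on the other machines):
hostnamectl set-hostname node1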
Cluster Plan
| Component | node1 | node2 | node3 |
|---|---|---|---|
| HDFS | NameNode,DataNode | DataNode | SecondaryNameNode,DataNode |
| YARN | NodeManager | ResourceManager | NodeManager |
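The plan above refers to the machines by hostname, so every node must be able to resolve the others. A minimal /etc/hosts sketch, assuming node2 and node3 follow the same 192.168.200.x pattern (their actual addresses are not specified above), would be:
192.168.200.81 node1
192.168.200.82 node2   # assumed address
192.168.200.83 node3   # assumed address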
All of the following changes are made on node1 only.
Modify the Configuration Files
hadoop-env.sh
Modify the following settings:
export JAVA_HOME=/opt/app/jdk1.8.0_321
export HADOOP_CONF_DIR=/opt/app/hadoop-2.8.5/etc/hadoop
core-site.xml
<configuration>
    <!-- Add the following properties inside the configuration tag -->
    <!-- Address of the HDFS NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9000</value>
    </property>
    <!-- Base directory for files Hadoop generates at runtime (HDFS data and metadata) -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/app/hadoop-2.8.5/metaData</value>
    </property>
</configuration>
hdfs-site.xml
<configuration>
    <!-- Number of block replicas -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- SecondaryNameNode address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node3:50090</value>
    </property>
    <!-- Disable HDFS permission checking -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
slaves
node1
node2
node3
yarn-env.sh
export JAVA_HOME=/opt/app/jdk1.8.0_321
yarn-site.xml
<configuration>
    <!-- Shuffle service that lets reducers fetch map output -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Host that runs the YARN ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node2</value>
    </property>
    <!-- Classpath for YARN applications -->
    <property>
        <name>yarn.application.classpath</name>
        <value>
            /opt/app/hadoop-2.8.5/etc/hadoop,
            /opt/app/hadoop-2.8.5/share/hadoop/common/*,
            /opt/app/hadoop-2.8.5/share/hadoop/common/lib/*,
            /opt/app/hadoop-2.8.5/share/hadoop/hdfs/*,
            /opt/app/hadoop-2.8.5/share/hadoop/hdfs/lib/*,
            /opt/app/hadoop-2.8.5/share/hadoop/mapreduce/*,
            /opt/app/hadoop-2.8.5/share/hadoop/mapreduce/lib/*,
            /opt/app/hadoop-2.8.5/share/hadoop/yarn/*,
            /opt/app/hadoop-2.8.5/share/hadoop/yarn/lib/*
        </value>
    </property>
</configuration>
mapred-env.sh
Modify the following setting:
export JAVA_HOME=/opt/app/jdk1.8.0_321
mapred-site.xml
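In a stock Hadoop 2.x distribution this file usually exists only as mapred-site.xml.template; if that is the case on your machine, create mapred-site.xml from it first (a minimal sketch):
cp /opt/app/hadoop-2.8.5/etc/hadoop/mapred-site.xml.template /opt/app/hadoop-2.8.5/etc/hadoop/mapred-site.xml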
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Copy to the Other Nodes
General form: scp -r <local directory> root@<target node>:/opt/app
scp -r /opt/app/hadoop-2.8.5 root@node2:/opt/app/
scp -r /opt/app/hadoop-2.8.5 root@node3:/opt/app/
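If the JDK and the Hadoop environment variables are not already present on node2 and node3, distribute them as well. The commands below are a sketch that assumes the JDK lives under /opt/app (as in hadoop-env.sh above) and that the environment variables are set in /etc/profile; adjust to your setup:
scp -r /opt/app/jdk1.8.0_321 root@node2:/opt/app/
scp -r /opt/app/jdk1.8.0_321 root@node3:/opt/app/
scp /etc/profile root@node2:/etc/profile
scp /etc/profile root@node3:/etc/profile
# then run `source /etc/profile` on node2 and node3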
Format the NameNode on node1 only:
hdfs namenode -format
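Formatting creates the HDFS metadata under hadoop.tmp.dir (/opt/app/hadoop-2.8.5/metaData). If you ever need to reformat, delete that directory on every node first, otherwise the DataNodes' clusterID will no longer match the NameNode. A cleanup sketch (not part of the normal first-time setup):
# run on node1, node2 and node3 before re-running hdfs namenode -format
rm -rf /opt/app/hadoop-2.8.5/metaData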
Note: YARN is best started on the node where the ResourceManager runs (node2 in this plan); see the startup sketch below.
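A typical startup sequence, assuming passwordless SSH between the nodes has already been configured and that the scripts are run from /opt/app/hadoop-2.8.5/sbin (or that directory is on the PATH):
# on node1 (NameNode)
start-dfs.sh
# on node2 (ResourceManager)
start-yarn.sh
# on every node, check that the running daemons match the cluster plan
jps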
This post is from cnblogs (博客园), author: jsqup. Please cite the original link when reposting: https://www.cnblogs.com/jsqup/p/16498259.html
