I. Hadoop Platform Installation

Uninstall the bundled OpenJDK (not needed on a minimal install):
[root@master ~]# rpm -qa | grep java
[root@master ~]# rpm -e --nodeps <each java package listed by the query above>
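A one-liner variant (a sketch, assuming every package matched by the query should be removed) handles them all in one pass:
[root@master ~]# rpm -qa | grep java | xargs -r rpm -e --nodeps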
Configure passwordless SSH login
Generate a key pair: [root@master ~]# ssh-keygen -t rsa (press Enter four times)
Copy the public key: [root@master ~]# ssh-copy-id 192.168.100.10
ssh-copy-id 192.168.100.20
ssh-copy-id 192.168.100.30
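A quick check that passwordless login works — each command should print the remote hostname without asking for a password:
[root@master ~]# ssh 192.168.100.20 hostname
[root@master ~]# ssh 192.168.100.30 hostname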
Disable the firewall (on all three nodes):
systemctl stop firewalld
systemctl disable firewalld
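To confirm the firewall is down (should print inactive):
systemctl is-active firewalld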
Set the hostnames (run the matching command on its own node):
hostnamectl set-hostname master
hostnamectl set-hostname slave1
hostnamectl set-hostname slave2
Map hostnames to IPs (on all three nodes):
vi /etc/hosts
192.168.100.10 master
192.168.100.20 slave1
192.168.100.30 slave2
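A quick resolution check from master — each should reach the mapped IP:
ping -c 1 slave1
ping -c 1 slave2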
Configure the Java and Hadoop environment variables
[root@localhost src]# vi /etc/profile
export JAVA_HOME=/usr/local/src/jdk1.8.0_181/
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/usr/local/src/hadoop-2.7.6/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Copy the profile to the slaves:
scp -r /etc/profile slave1:/etc/
scp -r /etc/profile slave2:/etc/
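Apply the variables in the current shell on every node and sanity-check both installs:
source /etc/profile
java -version
hadoop version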
Edit the configuration files (in $HADOOP_HOME/etc/hadoop):
1、vi core-site.xml
<configuration>
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://master:9000</value>
   </property>

   <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
   </property>

   <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/tmp/hadoop</value>
   </property>
</configuration>
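Hadoop normally creates hadoop.tmp.dir on first use, but making it ahead of time on all three nodes (a small precaution, assuming the path above) avoids permission surprises:
mkdir -p /usr/tmp/hadoop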
2、vi hadoop-env.sh
export JAVA_HOME=/usr/local/src/jdk1.8.0_181/   (replace the existing JAVA_HOME line, around line 25)
3、vi hdfs-site.xml
<configuration>
   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>

   <property>
     <name>dfs.namenode.name.dir</name>
     <value>file:///hdfs/namenode</value>
   </property>

   <property>
     <name>dfs.datanode.data.dir</name>
     <value>file:///hdfs/datanode</value>
   </property>

   <property>
     <name>dfs.blocksize</name>
     <value>134217728</value>
   </property>

   <property>
     <name>dfs.namenode.http-address</name>
     <value>master:50070</value>
   </property>

   <property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>master:9001</value>
   </property>

   <property>
     <name>dfs.webhdfs.enabled</name>
     <value>true</value>
   </property>

   <property>
     <name>dfs.permissions.enabled</name>
     <value>false</value>
   </property>
</configuration>
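The format step later creates the NameNode directory, and the DataNode will try to create its own, but pre-creating both on every node (a sketch, assuming the paths above) rules out permission problems:
mkdir -p /hdfs/namenode /hdfs/datanode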
4、cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml 
<configuration>
   <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
   </property>

   <property>
     <name>mapreduce.jobhistory.address</name>
     <value>master:10020</value>
   </property>

   <property>
     <name>mapreduce.jobhistory.webapp.address</name>
     <value>master:19888</value>
   </property>
</configuration>
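Note that start-all.sh does not launch the JobHistory server configured above; once the cluster is up, it can be started on master with the stock Hadoop 2.x script:
[root@master hadoop]# mr-jobhistory-daemon.sh start historyserver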
5、vi yarn-site.xml
<configuration>
   <property>
     <name>yarn.resourcemanager.hostname</name>
     <value>master</value>
   </property>

   <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce_shuffle</value>
   </property>

   <property>
     <name>yarn.resourcemanager.address</name>
     <value>master:8032</value>
   </property>

   <property>
     <name>yarn.resourcemanager.scheduler.address</name>
     <value>master:8030</value>
   </property>

   <property>
     <name>yarn.resourcemanager.resource-tracker.address</name>
     <value>master:8031</value>
   </property>

   <property>
     <name>yarn.resourcemanager.admin.address</name>
     <value>master:8033</value>
   </property>

   <property>
     <name>yarn.resourcemanager.webapp.address</name>
     <value>master:8088</value>
   </property>
</configuration> 
6、vi slaves   (listing master here means it also runs a DataNode and NodeManager)
master
slave1
slave2
Distribute Hadoop:
scp -r /usr/local/src/hadoop-2.7.6  slave1:/usr/local/src/
scp -r /usr/local/src/hadoop-2.7.6  slave2:/usr/local/src/
Distribute the JDK:
scp -r /usr/local/src/jdk1.8.0_181/  slave1:/usr/local/src/
scp -r /usr/local/src/jdk1.8.0_181/  slave2:/usr/local/src/
Format the NameNode (run once, on master only):
[root@master hadoop]# hdfs namenode -format
Start the services:
[root@master hadoop]# start-all.sh
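After startup, running jps on each node should list the daemons. Given the config above (master is also in slaves), master should show NameNode, SecondaryNameNode, ResourceManager, DataNode, and NodeManager; slave1 and slave2 should show DataNode and NodeManager:
[root@master hadoop]# jps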
Configure Windows name resolution (edit the hosts file):
C:\Windows\System32\drivers\etc\hosts
192.168.100.10 master
192.168.100.20 slave1
192.168.100.30 slave2
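With this mapping in place, the web UIs configured earlier can be opened from the Windows browser:
http://master:50070   (HDFS NameNode UI)
http://master:8088    (YARN ResourceManager UI)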