Hadoop Pseudo-Distributed Setup: A First Try

Install the JDK

mkdir -p /usr/java
tar -xzvf jdk-8u45-linux-x64.gz -C /usr/java
Configure the environment variables (append to /etc/profile, then reload):
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile
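The two export lines can also be appended non-interactively and idempotently instead of pasted by hand. A minimal sketch, using a scratch file in place of /etc/profile:

```shell
#!/bin/sh
# Sketch: append the JAVA_HOME exports to a profile file only once.
# PROFILE is a scratch stand-in for /etc/profile.
PROFILE=./profile.test
touch "$PROFILE"
if ! grep -q 'JAVA_HOME=/usr/java/jdk1.8.0_45' "$PROFILE"; then
cat >> "$PROFILE" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH
EOF
fi
cat "$PROFILE"
```

Running the script twice leaves only one copy of the exports, which keeps /etc/profile clean across repeated setup attempts.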

Install Hadoop

Fix the certificate problem first:
yum install -y ca-certificates
Download and unpack the tarball:
wget https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
tar -xzvf hadoop-3.3.1.tar.gz -C /home/hadoop/app/
cd /home/hadoop/app/hadoop-3.3.1
vim /home/hadoop/app/hadoop-3.3.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_45
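The hadoop-env.sh edit can be scripted with sed instead of vim. A sketch, using a scratch copy of the file (the shipped hadoop-env.sh has the line commented out):

```shell
#!/bin/sh
# Sketch: set JAVA_HOME in hadoop-env.sh without opening vim.
# ENV_FILE is a scratch stand-in for etc/hadoop/hadoop-env.sh.
ENV_FILE=./hadoop-env.sh.test
printf '# export JAVA_HOME=\n' > "$ENV_FILE"   # simulate the commented-out default
sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_45|' "$ENV_FILE"
cat "$ENV_FILE"
```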

Run an example program as a smoke test:

mkdir input
cp etc/hadoop/*.xml input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'
[hadoop@hadoop01 hadoop-3.3.1]$ ll output/*
-rw-r--r-- 1 hadoop hadoop 11 Nov 25 15:43 output/part-r-00000
-rw-r--r-- 1 hadoop hadoop 0 Nov 25 15:43 output/_SUCCESS
[hadoop@hadoop01 hadoop-3.3.1]$ cat output/*
1 dfsadmin
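What the example job computes can be approximated locally with plain grep: extract every string matching the regex from the input files, then count occurrences. A sketch, with sample.txt standing in for the copied *.xml files:

```shell
#!/bin/sh
# Sketch: the grep example job, approximated with plain grep + uniq -c.
# sample.txt stands in for the etc/hadoop/*.xml input files.
printf 'the dfsadmin tool\nno match here\n' > sample.txt
grep -ohE 'dfs[a-z.]+' sample.txt | sort | uniq -c
```

This mirrors the one-line result above: a count followed by the matched string.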

Write the configuration files

[hadoop@hadoop01 hadoop-3.3.1]$ vim etc/hadoop/core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
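These XML files can also be written with a heredoc instead of vim, which is handier for repeatable setups. A sketch, writing to a scratch path rather than the real etc/hadoop/core-site.xml:

```shell
#!/bin/sh
# Sketch: write core-site.xml non-interactively.
# CONF is a scratch stand-in for etc/hadoop/core-site.xml.
CONF=./core-site.xml.test
cat > "$CONF" <<'EOF'
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
EOF
grep -c '<property>' "$CONF"
```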

[hadoop@hadoop01 hadoop-3.3.1]$ vim etc/hadoop/hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:78088</value>
<!-- This never took effect: YARN properties belong in yarn-site.xml, not hdfs-site.xml, and 78088 is outside the valid TCP port range (1-65535). -->
</property>
</configuration>
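One concrete reason the webapp address could never work, independent of which file it lives in: 78088 is not a legal TCP port. A quick shell sanity check:

```shell
#!/bin/sh
# Sketch: TCP ports run 1-65535, so 78088 can never be bound.
PORT=78088
if [ "$PORT" -gt 65535 ] || [ "$PORT" -lt 1 ]; then
  echo "invalid port: $PORT"
else
  echo "port ok: $PORT"
fi
```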

Set up passwordless SSH

[hadoop@hadoop01 hadoop-3.3.1]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:sEBZJ0nzmD54Cmghx8LGW374BJOhHAczG0DYsdLxHP8 hadoop@hadoop01
The key's randomart image is:
+---[RSA 2048]----+
|=O++o+=.. |
|=oX*+o.B |
|+O==+ = . |
|o== ++ + |
|.o.o.o= E |
|. .+o . |
| .. |
| |
| |
+----[SHA256]-----+
[hadoop@hadoop01 hadoop-3.3.1]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop01 hadoop-3.3.1]$ chmod 0600 ~/.ssh/authorized_keys
[hadoop@hadoop01 hadoop-3.3.1]$ ssh localhost
Last login: Thu Nov 25 22:48:42 2021

Welcome to Alibaba Cloud Elastic Compute Service !
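Passwordless login only works when the key files have strict permissions, which is why the chmod 0600 step above matters: sshd ignores an authorized_keys file that is group- or world-readable. A sketch of the permission setup and check, using a scratch directory in place of ~/.ssh:

```shell
#!/bin/sh
# Sketch: sshd refuses keys whose files are group/world-readable.
# KEYDIR is a scratch stand-in for ~/.ssh.
KEYDIR=./ssh.test
mkdir -p "$KEYDIR" && chmod 0700 "$KEYDIR"
touch "$KEYDIR/authorized_keys"
chmod 0600 "$KEYDIR/authorized_keys"
stat -c '%a' "$KEYDIR" "$KEYDIR/authorized_keys"   # GNU stat: print octal modes
```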

Format the NameNode, then start HDFS:

[hadoop@hadoop01 hadoop-3.3.1]$ bin/hdfs namenode -format

[hadoop@hadoop01 hadoop-3.3.1]$ sbin/start-dfs.sh

[hadoop@hadoop01 hadoop-3.3.1]$ jps
28481 SecondaryNameNode
28630 Jps
28247 DataNode
28123 NameNode

Test HDFS

[hadoop@hadoop01 hadoop-3.3.1]$ bin/hdfs dfs -mkdir /user
[hadoop@hadoop01 hadoop-3.3.1]$ bin/hdfs dfs -mkdir /user/hadoop

[hadoop@hadoop01 hadoop-3.3.1]$ bin/hdfs dfs -mkdir input
[hadoop@hadoop01 hadoop-3.3.1]$ bin/hdfs dfs -put etc/hadoop/*.xml input

Relative paths like input resolve under /user/hadoop, and the -put copies the job input into HDFS before running the example.

[hadoop@hadoop01 hadoop-3.3.1]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'

[hadoop@hadoop01 hadoop-3.3.1]$ bin/hdfs dfs -get output output
[hadoop@hadoop01 hadoop-3.3.1]$ ls
bin include lib LICENSE-binary LICENSE.txt NOTICE-binary output sbin
etc input libexec licenses-binary logs NOTICE.txt README.txt share

[hadoop@hadoop01 hadoop-3.3.1]$ cat output/*
cat: output/output: Is a directory
1 dfsadmin

The local output directory left over from the earlier standalone run already existed, so -get copied the HDFS results into the nested output/output; the "1 dfsadmin" shown here is the old local part-r-00000.
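The nesting happens for the same reason cp -r nests a source directory inside an existing destination. A local sketch of the behavior:

```shell
#!/bin/sh
# Sketch: copying a directory into an existing directory nests it,
# which is what hdfs dfs -get did when local ./output already existed.
mkdir -p src existing
echo 'data' > src/part-r-00000
cp -r src existing/       # existing already exists, so src lands inside it
ls existing
```

Deleting (or renaming) the stale local output directory before the -get avoids the confusion.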

Stop the HDFS service

[hadoop@hadoop01 hadoop-3.3.1]$ sbin/stop-dfs.sh
Stopping namenodes on [localhost]
Stopping datanodes
Stopping secondary namenodes [hadoop01]

 

posted @ 2021-11-25 23:27 unknowspeople