hadoop

Transfer the Hadoop package from the physical host to the VM nn01

Four virtual machines
2 GB RAM, 2 CPUs, 20 GB disk (expanded)

192.168.3.90 nn01
192.168.3.91 node1
192.168.3.92 node2
192.168.3.93 node3

http://hadoop.apache.org/docs/

Configuration file entry format:
<property>
<name></name>
<value></value>
</property>
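
For example, a filled-in property looks like this (an illustrative sketch; the actual values used in this setup appear in core-site.xml below):
<property>
<name>fs.defaultFS</name>
<value>hdfs://nn01:9000/</value>
</property>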
#################################################################################
nn01
# hostnamectl set-hostname nn01
# hostname nn01
# vim /etc/hosts
... ...
192.168.3.90 nn01     append these lines; the same four entries are needed in /etc/hosts on every VM so that all hostnames resolve everywhere
192.168.3.91 node1
192.168.3.92 node2
192.168.3.93 node3

# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
IPV6INIT="no"
IPV4_FAILURE_FATAL="no"
NM_CONTROLLED="no"
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR="192.168.3.90"
PREFIX=24
GATEWAY=192.168.3.254

# systemctl restart network
# LANG=en_US.UTF-8
# growpart /dev/vda 1
# xfs_growfs /
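
A quick check that the partition and filesystem really grew (not part of the original steps):
# lsblk /dev/vda     vda1 should now span the enlarged disk
# df -h /            the root filesystem should report the new size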


VM nn01
[root@nn01 hadoop]# yum list | grep openjdk
[root@nn01 hadoop]# yum -y install java-1.8.0-openjdk-devel
[root@nn01 hadoop]# tar -zxf hadoop-2.7.6.tar.gz 	extracts into a directory of the same name, hadoop-2.7.6
[root@nn01 hadoop]# mv hadoop-2.7.6 /usr/local/hadoop 	move the directory and rename it
[root@nn01 hadoop]# cd /usr/local/hadoop/etc/hadoop/
[root@nn01 hadoop]# vim hadoop-env.sh 	edit the configuration file
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64/jre" 	change line 25
... ...
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"} 	change line 33
... ...
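
If the OpenJDK directory name on your system differs, one way to find the right JRE path (a sketch, assuming java-1.8.0-openjdk-devel is installed) is:
[root@nn01 hadoop]# readlink -f $(which java) 	strip the trailing /bin/java from the result to get JAVA_HOME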

[root@nn01 hadoop]# cd /usr/local/hadoop/ 	switch to this directory first
[root@nn01 hadoop]# ./bin/hadoop version 	check the version
Hadoop 2.7.6
Subversion https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 085099c66cf28be31604560c376fa282e69282b8
Compiled by kshvachk on 2018-04-18T01:33Z
Compiled with protoc 2.5.0
From source with checksum 71e2695531cb3360ab74598755d036
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.6.jar

[root@nn01 hadoop]# mkdir aa
[root@nn01 hadoop]# cp /usr/local/hadoop/etc/hadoop/hadoop-env.sh /usr/local/hadoop/aa/a.txt

[root@nn01 hadoop]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount aa bb
Output:
19/01/02 15:04:41 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/01/02 15:04:41 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/01/02 15:04:42 INFO input.FileInputFormat: Total input paths to process : 1
19/01/02 15:04:42 INFO mapreduce.JobSubmitter: number of splits:1
19/01/02 15:04:42 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1883843081_0001
19/01/02 15:04:42 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/01/02 15:04:42 INFO mapreduce.Job: Running job: job_local1883843081_0001
19/01/02 15:04:42 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/01/02 15:04:42 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/02 15:04:42 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/01/02 15:04:42 INFO mapred.LocalJobRunner: Waiting for map tasks
19/01/02 15:04:42 INFO mapred.LocalJobRunner: Starting task: attempt_local1883843081_0001_m_000000_0
19/01/02 15:04:42 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/02 15:04:42 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
19/01/02 15:04:42 INFO mapred.MapTask: Processing split: file:/usr/local/hadoop/aa/a.txt:0+4294
19/01/02 15:04:42 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/01/02 15:04:42 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/01/02 15:04:42 INFO mapred.MapTask: soft limit at 83886080
19/01/02 15:04:42 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/01/02 15:04:42 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/01/02 15:04:42 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/01/02 15:04:42 INFO mapred.LocalJobRunner:
19/01/02 15:04:42 INFO mapred.MapTask: Starting flush of map output
19/01/02 15:04:42 INFO mapred.MapTask: Spilling map output
19/01/02 15:04:42 INFO mapred.MapTask: bufstart = 0; bufend = 6309; bufvoid = 104857600
19/01/02 15:04:42 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26212324(104849296); length = 2073/6553600
19/01/02 15:04:42 INFO mapred.MapTask: Finished spill 0
19/01/02 15:04:42 INFO mapred.Task: Task:attempt_local1883843081_0001_m_000000_0 is done. And is in the process of committing
19/01/02 15:04:42 INFO mapred.LocalJobRunner: map
19/01/02 15:04:42 INFO mapred.Task: Task 'attempt_local1883843081_0001_m_000000_0' done.
19/01/02 15:04:42 INFO mapred.LocalJobRunner: Finishing task: attempt_local1883843081_0001_m_000000_0
19/01/02 15:04:42 INFO mapred.LocalJobRunner: map task executor complete.
19/01/02 15:04:42 INFO mapred.LocalJobRunner: Waiting for reduce tasks
19/01/02 15:04:42 INFO mapred.LocalJobRunner: Starting task: attempt_local1883843081_0001_r_000000_0
19/01/02 15:04:42 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/02 15:04:42 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
19/01/02 15:04:42 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@748f44d0
19/01/02 15:04:42 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/01/02 15:04:42 INFO reduce.EventFetcher: attempt_local1883843081_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
19/01/02 15:04:42 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1883843081_0001_m_000000_0 decomp: 4584 len: 4588 to MEMORY
19/01/02 15:04:42 INFO reduce.InMemoryMapOutput: Read 4584 bytes from map-output for attempt_local1883843081_0001_m_000000_0
19/01/02 15:04:42 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 4584, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->4584
19/01/02 15:04:42 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/01/02 15:04:42 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/01/02 15:04:42 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
19/01/02 15:04:42 INFO mapred.Merger: Merging 1 sorted segments
19/01/02 15:04:42 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 4562 bytes
19/01/02 15:04:42 INFO reduce.MergeManagerImpl: Merged 1 segments, 4584 bytes to disk to satisfy reduce memory limit
19/01/02 15:04:42 INFO reduce.MergeManagerImpl: Merging 1 files, 4588 bytes from disk
19/01/02 15:04:42 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/01/02 15:04:42 INFO mapred.Merger: Merging 1 sorted segments
19/01/02 15:04:42 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 4562 bytes
19/01/02 15:04:42 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/01/02 15:04:42 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
19/01/02 15:04:42 INFO mapred.Task: Task:attempt_local1883843081_0001_r_000000_0 is done. And is in the process of committing
19/01/02 15:04:42 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/01/02 15:04:42 INFO mapred.Task: Task attempt_local1883843081_0001_r_000000_0 is allowed to commit now
19/01/02 15:04:42 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1883843081_0001_r_000000_0' to file:/usr/local/hadoop/bb/_temporary/0/task_local1883843081_0001_r_000000
19/01/02 15:04:42 INFO mapred.LocalJobRunner: reduce > reduce
19/01/02 15:04:42 INFO mapred.Task: Task 'attempt_local1883843081_0001_r_000000_0' done.
19/01/02 15:04:42 INFO mapred.LocalJobRunner: Finishing task: attempt_local1883843081_0001_r_000000_0
19/01/02 15:04:42 INFO mapred.LocalJobRunner: reduce task executor complete.
19/01/02 15:04:43 INFO mapreduce.Job: Job job_local1883843081_0001 running in uber mode : false
19/01/02 15:04:43 INFO mapreduce.Job: map 100% reduce 100%
19/01/02 15:04:43 INFO mapreduce.Job: Job job_local1883843081_0001 completed successfully
19/01/02 15:04:43 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=609724
FILE: Number of bytes written=1175947
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Map input records=98
Map output records=519
Map output bytes=6309
Map output materialized bytes=4588
Input split bytes=96
Combine input records=519
Combine output records=268
Reduce input groups=268
Reduce shuffle bytes=4588
Reduce input records=268
Reduce output records=268
Spilled Records=536
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=5
Total committed heap usage (bytes)=401604608
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=4294
File Output Format Counters
Bytes Written=3551


[root@nn01 hadoop]# cd bb
[root@nn01 bb]# ls
part-r-00000 _SUCCESS

[root@nn01 bb]# cat part-r-00000
Output:
"$HADOOP_CLASSPATH" 1
"AS 1
"License"); 1
# 49
### 4
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData 1
#export 4
$HADOOP_CLIENT_OPTS" 1
$HADOOP_DATANODE_OPTS" 1
$HADOOP_HOME/contrib/capacity-scheduler/*.jar; 1
$HADOOP_HOME/logs 1
$HADOOP_JAVA_PLATFORM_OPTS" 1
$HADOOP_NAMENODE_OPTS" 1
$HADOOP_PORTMAP_OPTS" 1
$HADOOP_SECONDARYNAMENODE_OPTS" 1
$USER 1
(ASF) 1
(fs, 1
(the 1
**MUST 1
**MUST** 1
-Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} 2
-Djava.net.preferIPv4Stack=true" 1
/tmp 1
1000. 1
2.0 1
A 1
ANY 1
ASF 1
Advanced 1
All 1
Apache 2
Automatically 1
BASIS, 1
CLASSPATH 1
CONDITIONS 1
Command 1
Default 1
Empty 1
Extra 2
Foundation 1
HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f 1
HADOOP_CLASSPATH=$f 1
HADOOP_CLIENT_OPTS="-Xmx512m 1
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"} 1
HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS 1
HADOOP_HEAPSIZE= 1
HADOOP_IDENT_STRING=$USER 1
HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER 1
HADOOP_MOVER_OPTS="" 1
HADOOP_NAMENODE_INIT_HEAPSIZE="" 1
HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} 1
HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS" 1
HADOOP_OPTS 3
HADOOP_OPTS="$HADOOP_OPTS 1
HADOOP_PID_DIR=${HADOOP_PID_DIR} 1
HADOOP_PORTMAP_OPTS="-Xmx512m 1
HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS}1
HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER} 1
HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR} 1
HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER} 1
HDFS 3
Hadoop-specific 1
IS" 1
JAVA_HOME 1
JAVA_HOME. 1
JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64/jre" 1
JSVC_HOME=${JSVC_HOME} 1
JVM 1
Java 2
Jsvc 2
KIND, 1
License 3
License, 1
License. 2
Licensed 1
MB. 1
Mover 1
Mover. 1
NOT** 1
NOTE: 1
NOTICE 1
OF 1
OR 1
On 1
Only! 1
Otherwise 1
SASL 2
See 2
Set 1
Software 1
Specify 1
The 7
These 1
This 2
Unless 1
Users 1
Version 1
WARRANTIES 1
WITHOUT 1
When 1
Where 2
You 1
[ 1
]; 1
a 4
additional 1
after 1
agreed 1
agreements. 1
amount 1
an 1
and 2
any 1
appended 2
applicable 1
applies 1
are 4
as 2
at 1
attack. 1
authentication 4
be 6
best 1
bind 1
by 6
can 1
capacity-scheduler. 1
commands 1
compliance 1
configuration 1
configured 2
contributor 1
copy 1
copyright 1
correctly 1
daemons. 1
data 5
datanode 1
datanodes 1
datanodes, 1
default. 4
defined 2
dfs, 1
directory 2
distcp 1
distributed 4
do 1
done 1
dropping 1
either 1
elements. 1
else 1
enable 1
environment 2
environment. 1
etc) 1
except 1
export 17
express 1
f 1
fi 1
file 3
file, 1
files 3
flags 1
following 1
for 6
fsck, 1
governing 1
hadoop 1
hadoop. 1
heap 1
here. 1
http://www.apache.org/licenses/LICENSE-2.0 1
if 4
implementation 2
implied. 1
in 7
information 1
insert 1
instance 1
is 10
it 2
java 1
jsvc 1
language 1
law 1
license 1
licenses 1
limitations 1
log 2
maximum 1
may 3
more 1
multiple 1
nodes. 1
non-privileged 2
not 2
obtain 1
of 7
on 2
one 1
only 2
optional. 1
options 4
options. 1
or 3
others 1
override 1
ownership. 1
parameters 1
permissions 1
pid 1
ports 2
ports. 2
potential 1
privileged 2
privileges. 1
protocol 2
protocol. 2
provide 2
regarding 1
remote 1
representing 1
required 4
run 3
running 1
runtime 1
secure 4
set 3
should 1
similar 1
so 1
software 1
specific 3
specified 2
starting 1
stored 1
stored. 2
string 1
symlink 1
that 4
the 17
then 1
there 1
therefore 1
this 6
to 19
transfer 4
uncommented 1
under 4
use 1
use, 1
use. 2
used 1
user 2
using 3
variable 1
variables 1
when 2
where 1
will 2
with 2
work 1
writing, 1
written 1
you 2
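
Note: the wordcount output directory must not already exist. To re-run the example, remove bb first (a hedged reminder; this was not part of the original run):
[root@nn01 hadoop]# rm -rf bb
[root@nn01 hadoop]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount aa bb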
####################################################################################################################
http://hadoop.apache.org/docs/


[root@nn01 ~]# cd /root/.ssh/
[root@nn01 .ssh]# ssh-keygen -b 2018 -t rsa -N '' 	(2048 is the conventional key size; 2018 works but was almost certainly a typo)
Output:
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:lZJOUuSJ1+Hy9ObxpKqQzxMVYBWe9+6DGx1nnKAEzPU root@nn01
The key's randomart image is:
+---[RSA 2018]----+
| .B+=o |
| = Oo+. |
| o O Oo.E |
| = *oo.....|
| S...+ +.+|
| .. o B + |
| o . +.+ |
| +. ..o. |
| +o. .... |
+----[SHA256]-----+

[root@nn01 .ssh]# ls
id_rsa id_rsa.pub known_hosts

[root@nn01 .ssh]# for i in 9{0..3};do ssh-copy-id 192.168.3.$i ;done 	copy the key to all four hosts, including nn01 itself (see the note below)

[root@nn01 .ssh]# vim /etc/ssh/ssh_config
60 StrictHostKeyChecking no 	append at line 60 so ssh does not ask for "yes" on the first connection

For example, without this option the first connection to each host prompts like this:
[root@nn01 .ssh]# ssh node1
The authenticity of host 'node1 (192.168.3.91)' can't be established.
ECDSA key fingerprint is SHA256:dvCFL1xgwXGayjkWAYYrMZGY0SN1B0ZC7QFBKDtRdmA.
ECDSA key fingerprint is MD5:2d:0c:eb:17:64:37:af:15:f0:d7:e6:64:9c:b2:01:2b.
Are you sure you want to continue connecting (yes/no)? 	you have to type yes before you can log in


[root@nn01 .ssh]# ssh node1
[root@node1 ~]# exit

[root@nn01 .ssh]# ssh node2
[root@node2 ~]# exit


[root@nn01 .ssh]# ssh node3
[root@node3 ~]# exit


[root@nn01 .ssh]# ssh nn01 	passwordless login to nn01 itself is also required; this is an important point
[root@nn01 ~]# exit
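
All four logins can also be checked at once with a loop (a sketch equivalent to the manual tests above):
[root@nn01 .ssh]# for i in nn01 node{1..3}; do ssh $i hostname; done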

Open http://hadoop.apache.org/docs/, find the matching version, open its documentation, and under Configuration at the bottom left click core-default.xml for reference.

[root@nn01 hadoop]# cd /usr/local/hadoop/etc/hadoop/
[root@nn01 hadoop]# vim core-site.xml
19 <configuration>
20 <property> 	append lines 20-27
21 <name>fs.defaultFS</name> 	specify the file system
22 <value>hdfs://nn01:9000/</value> 	specify the NameNode host and port
23 </property>
24 <property>
25 <name>hadoop.tmp.dir</name>
26 <value>/var/hadoop</value>
27 </property>
28 </configuration>
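
To confirm the value is picked up, hdfs getconf can print the effective setting (an optional check):
[root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS 	should print hdfs://nn01:9000/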

 

Open http://hadoop.apache.org/docs/, find the matching version, open its documentation, and under Configuration at the bottom left click hdfs-default.xml for reference.

[root@nn01 hadoop]# vim hdfs-site.xml
19 <configuration>
20 <property> 	append lines 20-31
21 <name>dfs.namenode.http-address</name>
22 <value>nn01:50070</value>
23 </property>
24 <property>
25 <name>dfs.namenode.secondary.http-address</name>
26 <value>nn01:50090</value>
27 </property>
28 <property>
29 <name>dfs.replication</name>
30 <value>2</value>
31 </property>
32 </configuration>


[root@nn01 hadoop]# vim slaves
node1 	delete everything else and keep only these three lines: the hosts that store the cluster's data (the DataNodes)
node2
node3


Copy /usr/local/hadoop/ to the other machines
[root@nn01 hadoop]# for i in node{1..3};do rsync -aSH /usr/local/hadoop/ $i:/usr/local/hadoop/ ;done
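
A quick spot check that the copies match (an optional sketch; compares one config file's checksum on every host):
[root@nn01 hadoop]# for i in nn01 node{1..3}; do ssh $i md5sum /usr/local/hadoop/etc/hadoop/core-site.xml; done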


Create /var/hadoop on all four machines
[root@nn01 hadoop]# cd
[root@nn01 ~]# for i in nn01 node{1..3}; do ssh ${i} mkdir /var/hadoop; done


Format the NameNode
[root@nn01 hadoop]# ./bin/hdfs namenode -format
Output:
19/01/02 17:52:14 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = nn01/192.168.3.90
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.6
... ...
STARTUP_MSG: build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 085099c66cf28be31604560c376fa282e69282b8; compiled by 'kshvachk' on 2018-04-18T01:33Z
STARTUP_MSG: java = 1.8.0_131
************************************************************/
19/01/02 17:52:14 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/01/02 17:52:14 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-a9df71cc-c094-419c-8e3a-8f13a2ebbfae
19/01/02 17:52:15 INFO namenode.FSNamesystem: No KeyProvider found.
19/01/02 17:52:15 INFO namenode.FSNamesystem: fsLock is fair: true
19/01/02 17:52:15 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19/01/02 17:52:15 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
19/01/02 17:52:15 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/01/02 17:52:15 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/01/02 17:52:15 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Jan 02 17:52:15
19/01/02 17:52:15 INFO util.GSet: Computing capacity for map BlocksMap
19/01/02 17:52:15 INFO util.GSet: VM type = 64-bit
19/01/02 17:52:15 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
19/01/02 17:52:15 INFO util.GSet: capacity = 2^21 = 2097152 entries
19/01/02 17:52:15 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/01/02 17:52:15 INFO blockmanagement.BlockManager: defaultReplication = 2
19/01/02 17:52:15 INFO blockmanagement.BlockManager: maxReplication = 512
19/01/02 17:52:15 INFO blockmanagement.BlockManager: minReplication = 1
19/01/02 17:52:15 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
19/01/02 17:52:15 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/01/02 17:52:15 INFO blockmanagement.BlockManager: encryptDataTransfer = false
19/01/02 17:52:15 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
19/01/02 17:52:15 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
19/01/02 17:52:15 INFO namenode.FSNamesystem: supergroup = supergroup
19/01/02 17:52:15 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/01/02 17:52:15 INFO namenode.FSNamesystem: HA Enabled: false
19/01/02 17:52:15 INFO namenode.FSNamesystem: Append Enabled: true
19/01/02 17:52:15 INFO util.GSet: Computing capacity for map INodeMap
19/01/02 17:52:15 INFO util.GSet: VM type = 64-bit
19/01/02 17:52:15 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
19/01/02 17:52:15 INFO util.GSet: capacity = 2^20 = 1048576 entries
19/01/02 17:52:15 INFO namenode.FSDirectory: ACLs enabled? false
19/01/02 17:52:15 INFO namenode.FSDirectory: XAttrs enabled? true
19/01/02 17:52:15 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
19/01/02 17:52:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
19/01/02 17:52:15 INFO util.GSet: Computing capacity for map cachedBlocks
19/01/02 17:52:15 INFO util.GSet: VM type = 64-bit
19/01/02 17:52:15 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
19/01/02 17:52:15 INFO util.GSet: capacity = 2^18 = 262144 entries
19/01/02 17:52:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/01/02 17:52:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
19/01/02 17:52:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
19/01/02 17:52:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
19/01/02 17:52:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
19/01/02 17:52:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19/01/02 17:52:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/01/02 17:52:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/01/02 17:52:15 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/01/02 17:52:15 INFO util.GSet: VM type = 64-bit
19/01/02 17:52:15 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
19/01/02 17:52:15 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /var/hadoop/dfs/name ? (Y or N) Y
19/01/02 17:52:19 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1511524144-192.168.3.90-1546422739153
19/01/02 17:52:20 INFO common.Storage: Storage directory /var/hadoop/dfs/name has been successfully formatted.
19/01/02 17:52:20 INFO namenode.FSImageFormatProtobuf: Saving image file /var/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
19/01/02 17:52:20 INFO namenode.FSImageFormatProtobuf: Image file /var/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
19/01/02 17:52:21 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/01/02 17:52:21 INFO util.ExitUtil: Exiting with status 0
19/01/02 17:52:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at nn01/192.168.3.90
************************************************************/
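
If the NameNode ever has to be re-formatted later, the new cluster ID will no longer match the data already stored on the DataNodes, so their data directories must be cleared first (a cautionary sketch for the /var/hadoop layout used here; it deletes all HDFS data):
[root@nn01 ~]# for i in node{1..3}; do ssh $i rm -rf /var/hadoop/dfs; done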


Start the cluster
[root@nn01 hadoop]# ./sbin/start-dfs.sh
Starting namenodes on [nn01]
nn01: namenode running as process 11952. Stop it first.
node1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node1.out
node2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node2.out
node3: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node3.out
Starting secondary namenodes [nn01]
nn01: secondarynamenode running as process 12140. Stop it first.
(The "running as process ... Stop it first" lines mean the NameNode and SecondaryNameNode were already running from an earlier start; this run only started the DataNodes.)


Verify the configuration
[root@nn01 hadoop]# jps
Output:
11952 NameNode
12760 Jps
12140 SecondaryNameNode


[root@nn01 hadoop]# ./bin/hdfs dfsadmin -report
Output:
Configured Capacity: 64389844992 (59.97 GB)
Present Capacity: 58078208000 (54.09 GB)
DFS Remaining: 58078195712 (54.09 GB)
DFS Used: 12288 (12 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3): 	three DataNodes in total

Name: 192.168.3.92:50010 (node2)
Hostname: node2
Decommission Status : Normal
Configured Capacity: 21463281664 (19.99 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2103873536 (1.96 GB)
DFS Remaining: 19359404032 (18.03 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.20%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 02 17:56:07 CST 2019


Name: 192.168.3.91:50010 (node1)
Hostname: node1
Decommission Status : Normal
Configured Capacity: 21463281664 (19.99 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2103914496 (1.96 GB)
DFS Remaining: 19359363072 (18.03 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.20%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 02 17:56:07 CST 2019


Name: 192.168.3.93:50010 (node3)
Hostname: node3
Decommission Status : Normal
Configured Capacity: 21463281664 (19.99 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2103848960 (1.96 GB)
DFS Remaining: 19359428608 (18.03 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.20%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jan 02 17:56:07 CST 2019

################################################################################################
On the other VMs (node1, node2, node3), jps shows that each is running a DataNode
[root@node1 ~]# jps
11558 Jps
11471 DataNode

 
