Setting Up a Single-Node Hadoop Cluster (Pseudo-Distributed)

1. Install Hadoop's prerequisites:

  Install JDK 1.8 or later and configure the corresponding environment variables.

  Extract the Hadoop tarball.
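The two steps above can be sketched as follows. The tarball name, JDK path, and install directory are assumptions for illustration; substitute your own versions and locations:

```shell
# Hypothetical version and paths -- substitute your own.
tar -xzf hadoop-3.2.1.tar.gz -C /opt

# Append to ~/.bashrc (or /etc/profile) so the variables persist:
export JAVA_HOME=/usr/java/jdk1.8.0_221   # wherever your JDK 1.8+ lives
export HADOOP_HOME=/opt/hadoop-3.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

With `PATH` set this way, the `hdfs` and `start-dfs.sh` commands used in the later steps can be run from anywhere.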

       Configure the hostname (I used node1). Note: the name must not contain underscores or other special characters, or things will break (I ran into this myself). Set it with: [hostnamectl --static set-hostname <name>]
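Because the config files below refer to the node by hostname, the name should also resolve to the machine's IP. A minimal sketch (node1 and the IP address are examples from this post; use your own):

```shell
# Set the static hostname (no underscores or special characters).
hostnamectl --static set-hostname node1

# Map the hostname to this machine's IP so "node1" resolves locally
# (192.168.25.251 is this post's example IP; substitute your own):
echo "192.168.25.251 node1" >> /etc/hosts

hostname    # verify -- should print node1
```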

       Set up passwordless SSH login:

     If this is not configured, start-dfs.sh will fail with SSH authentication errors.

   1.1 Create the SSH key pair:

 ssh-keygen -t rsa

 1.2 In your home directory, do the following. The file must be named authorized_keys; any other name will not work:

[root@node_single ~]# cd ~/.ssh/
[root@node_single .ssh]# touch authorized_keys
[root@node_single .ssh]# chmod 600 authorized_keys

1.3 Append the public key to authorized_keys:

 cat id_rsa.pub >> authorized_keys
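The whole key-setup sequence (steps 1.1 through 1.3) can be rehearsed safely in a throwaway directory; on the real machine the same files live in ~/.ssh instead:

```shell
# Rehearsal in a temp dir; on the real machine use ~/.ssh in place of "$dir".
dir=$(mktemp -d)

# -N "": empty passphrase, -q: quiet, non-interactive (step 1.1)
ssh-keygen -t rsa -N "" -f "$dir/id_rsa" -q

# Append the public key to authorized_keys (step 1.3)
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"

# authorized_keys must be readable/writable by the owner only (step 1.2)
chmod 600 "$dir/authorized_keys"
```

Afterwards `ssh node1` should log you in without a password prompt.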

2. After the tarball is extracted, change into the Hadoop installation directory.

 3. From the installation directory, change into the configuration directory: cd etc/hadoop/

4. Edit core-site.xml in this directory.

Its contents should be:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9820</value>
    </property>
   <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/sxt/hadoop/pseudo</value>
  </property>
</configuration>

5. Edit the hdfs-site.xml file, also under etc/hadoop/:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <!-- single node, so replication is set to 1 -->
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node1:9868</value>
    </property>
</configuration>

 Also edit etc/hadoop/workers so that it contains just this node's hostname (node1 here).

6. Run this command to format the NameNode's metadata:

./bin/hdfs namenode -format

  7. Run this command to start HDFS:

./sbin/start-dfs.sh

 If SSH "permission denied" errors show up here, go back and set up passwordless SSH login as described in step 1.

If the command completes without errors, HDFS is up.
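One way to confirm the daemons are actually running is `jps`, which ships with the JDK and lists Java processes (process IDs will differ on your machine):

```shell
jps
# Besides Jps itself, a healthy pseudo-distributed node shows:
#   NameNode
#   DataNode
#   SecondaryNameNode
```

If any of the three is missing, check the corresponding log file under the Hadoop installation's logs/ directory.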

 8. Turn off the firewall, then open the HDFS web UI in a browser:
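On CentOS 7-style systems (an assumption; this post already uses hostnamectl, so systemd is present), the firewall is firewalld and can be stopped like this:

```shell
systemctl stop firewalld       # stop it for this session
systemctl disable firewalld    # keep it off after reboot
```

For anything beyond a throwaway test box, opening only port 9870 in firewalld is the safer alternative to disabling the firewall entirely.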

http://192.168.25.251:9870/dfshealth.html#tab-overview
posted @ 2020-01-07 09:09  MrSans