Hadoop's two kinds of configuration files and three ways to start/stop the cluster

    Hadoop configuration files

    Default configuration files: packaged inside the jars of the four corresponding modules, under $HADOOP_HOME/share/hadoop:
        *core-default.xml
        *hdfs-default.xml
        *yarn-default.xml
        *mapred-default.xml
    User-defined (site) configuration files: $HADOOP_HOME/etc/hadoop/
        *core-site.xml
        *hdfs-site.xml
        *yarn-site.xml
        *mapred-site.xml
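Any property set in a site file overrides the value of the same property in the matching default file. A minimal sketch of a core-site.xml override; the hostname, port, and tmp directory below are assumptions for illustration:

```xml
<!-- $HADOOP_HOME/etc/hadoop/core-site.xml (illustrative values) -->
<configuration>
  <!-- Overrides fs.defaultFS from core-default.xml -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
  <!-- Overrides hadoop.tmp.dir from core-default.xml -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
  </property>
</configuration>
```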

    *Startup method 1: start each daemon on each server one by one (commonly used; easy to wrap in a shell script)
        hdfs:
            sbin/hadoop-daemon.sh start|stop namenode
            sbin/hadoop-daemon.sh start|stop datanode
            sbin/hadoop-daemon.sh start|stop secondarynamenode
        yarn:
            sbin/yarn-daemon.sh start|stop resourcemanager
            sbin/yarn-daemon.sh start|stop nodemanager
        mapreduce:
            sbin/mr-jobhistory-daemon.sh start|stop historyserver
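The per-daemon commands above can be wrapped in a small shell script. A minimal sketch; it prints the command sequence rather than running it (replace `echo` with a direct invocation on a real cluster), and the default HADOOP_HOME path is an assumption:

```shell
#!/usr/bin/env bash
# Print the start/stop commands for every daemon, in order (method 1).
daemon_commands() {
  local action="$1"                      # start | stop
  local home="${HADOOP_HOME:-/opt/hadoop}"  # assumed install path
  # HDFS daemons first, then YARN, then the MapReduce history server.
  for d in namenode datanode secondarynamenode; do
    echo "$home/sbin/hadoop-daemon.sh $action $d"
  done
  for d in resourcemanager nodemanager; do
    echo "$home/sbin/yarn-daemon.sh $action $d"
  done
  echo "$home/sbin/mr-jobhistory-daemon.sh $action historyserver"
}

daemon_commands start
```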
           
    *Startup method 2: start each module separately; requires passwordless ssh (ssh equivalence) and must be run on the NameNode
        hdfs:
            sbin/start-dfs.sh
            sbin/stop-dfs.sh
        yarn:
            sbin/start-yarn.sh
            sbin/stop-yarn.sh
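The ssh-equivalence prerequisite means the NameNode must be able to reach every worker without a password. A hedged sketch of the setup; the worker hostnames and the `hadoop` user are assumptions, and the helper prints the commands rather than running them:

```shell
#!/usr/bin/env bash
# Print the commands that establish passwordless ssh from the NameNode
# to each worker (prerequisite for method 2).
ssh_setup_commands() {
  local workers="node1 node2 node3"   # hypothetical worker hostnames
  # Generate a key pair once on the NameNode (no passphrase)...
  echo "ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
  # ...then copy the public key to every worker.
  for h in $workers; do
    echo "ssh-copy-id hadoop@$h"      # 'hadoop' user is an assumption
  done
}

ssh_setup_commands
```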
    *Startup method 3: start everything at once; not recommended. This command must be run on the NameNode, and it also starts the SecondaryNameNode on the NameNode's host
            sbin/start-all.sh
            sbin/stop-all.sh
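Whichever method is used, `jps` on each node should afterwards list the expected daemon processes. A small sketch of a checker; it reads a jps-style listing passed as an argument, and the daemon list below assumes a single-node setup:

```shell
#!/usr/bin/env bash
# Check a jps-style process listing for the expected Hadoop daemons.
# On a real node, call it as: check_daemons "$(jps)"
check_daemons() {
  local listing="$1"
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    if ! printf '%s\n' "$listing" | grep -qw "$d"; then
      echo "missing: $d"
      return 1
    fi
  done
  echo "all daemons running"
}
```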

posted @ 2017-04-19 00:42  ChavinKing