cluster management (Hadoop, to be continued...)
installation: create user, mkdir, download, edit the configuration files (nano), edit the system config, passwordless ssh masternode -> all nodes, start-all.sh
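A minimal end-to-end sketch of that sequence (the version, URL path, and install directory are placeholders, not from these notes):

# single-machine skeleton for the steps above
groupadd hadoop && useradd -g hadoop hadoop
mkdir -p /opt/hadoop
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.x.y/hadoop-2.x.y.tar.gz
tar -xzf hadoop-2.x.y.tar.gz -C /opt/hadoop --strip-components=1
# edit /opt/hadoop/etc/hadoop/*.xml and the system config, set up ssh, then:
/opt/hadoop/sbin/start-all.sh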
- where the program source files are kept
- deploy script: remote copy + local configuration (install order and dependencies)
- check directory structure and permissions
- remote scp
- set configuration dynamically (for the hadoop/yarn and spark clusters, syncing the config directory is enough and the files need no per-node edits; ZooKeeper is the exception, each server's id (the myid file) must be set individually)
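A deploy sketch along those lines; hostnames, paths, the node count, and the ZooKeeper dataDir are assumptions:

# push the tarball and a shared config dir to every node;
# only ZooKeeper's per-node id differs
i=1
for h in hadoop1 hadoop2 hadoop3; do
  scp hadoop-2.x.y.tar.gz "$h":/opt/
  scp -r etc/hadoop/ "$h":/opt/hadoop/etc/
  ssh "$h" "echo $i > /var/lib/zookeeper/myid"   # unique id per ZK server
  i=$((i+1))
done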
check installation:
- hadoop-2.x.y.tar.gz.mds: this file holds checksums for verifying the integrity of hadoop-2.x.y.tar.gz; if the archive is corrupted or the download incomplete, Hadoop will not run properly.
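A quick integrity check against the .mds file (keeping the 2.x.y placeholder):

cat hadoop-2.x.y.tar.gz.mds     # the published checksums
md5sum hadoop-2.x.y.tar.gz      # compute locally; the MD5 values must match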
- ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.x.y.jar
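Run with no arguments, the jar lists the bundled example programs; the pi job makes a quick smoke test (the map and sample counts here are arbitrary):

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.x.y.jar pi 2 10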
-
17/09/26 17:42:45 FATAL namenode.SecondaryNameNode: Failed to start secondary namenode
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
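"file:/// has no authority" means fs.defaultFS was never set in core-site.xml, so HDFS fell back to the local filesystem; a minimal fix, with an assumed NameNode host:port:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop1:9000</value>   <!-- assumed host:port -->
</property>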
-
17:44:06 ERROR datanode.DataNode: Exception in secureMain
java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
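The DataNode locates the NameNode through fs.defaultFS, or through an explicit RPC address in hdfs-site.xml; either setting cures this error (host:port again assumed):

<property>
  <name>dfs.namenode.rpc-address</name>
  <value>hadoop1:9000</value>
</property>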
-
2017-09-26 17:57:35,332 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
(SIGNAL 15 means the process was killed from outside, e.g. by stop-dfs.sh, a manual kill, or the OS, so look for whatever sent the signal rather than for a NameNode bug.)
--------------------------------
1 machines + roles: add every node to /etc/hosts on every host, e.g. echo "192.168.1.* hadoop1" >> /etc/hosts (IP, hostname, alias; use >> so the file is appended, not overwritten)
192.168.1.*  ZK  RM  NN1
192.168.1.   ZK  RM  NN2  JOBHIS
192.168.1.   ZK  DN  ND
192.168.1.   DN  QJM1  ND
192.168.1.   DN  QJM2  ND
(ZK ZooKeeper, RM ResourceManager, NN NameNode, DN DataNode, QJM JournalNode, JOBHIS job history server; ND as written, presumably NodeManager)
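/etc/hosts has to be identical cluster-wide; one way to push it out after editing (the aliases are placeholders):

for h in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do
  scp /etc/hosts root@"$h":/etc/hosts
done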
2 users and groups: useradd, one call per user
hdfs yarn zookeeper hive hbase, all in group hadoop (see the loop below)
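useradd creates one account per call, so the list above becomes a loop:

groupadd hadoop
for u in hdfs yarn zookeeper hive hbase; do
  useradd -g hadoop "$u"
done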
3 ssh without passwords
su (to the user whose key is being distributed)
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.1.*   (generate the key with ssh-keygen first; loop below)
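Generating the key once, then copying it to each node (the IPs are placeholders):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 192.168.1.11 192.168.1.12; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$ip"
done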
4 raise ulimit (open-file and process limits for the service users; see the limits.conf entries below)
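Typical entries in /etc/security/limits.conf; the values are conventional defaults, not from these notes:

hdfs  -  nofile  65536
hdfs  -  nproc   32768
yarn  -  nofile  65536
yarn  -  nproc   32768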
5 stop the firewall: service iptables stop
6 disable SELinux
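To make both changes survive a reboot (RHEL/CentOS 6 style commands):

service iptables stop && chkconfig iptables off
setenforce 0                                                  # SELinux off now
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # and after reboot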
--------------
software: 1 jdk + the hadoop tarball
2 ntp server -- edit the conf, then start it (sketch below)
ntp client -- on all other nodes
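A sketch of the server/client split; the server IP is a placeholder:

# on the ntp server: open /etc/ntp.conf to the cluster subnet, then
service ntpd start && chkconfig ntpd on
# on each client: one-shot sync against the server, then run ntpd too
ntpdate 192.168.1.1
service ntpd start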
3 mysql
4 hdfs + yarn (detailed configuration ...)