hadoop-spark cluster installation --- 1. Preparation
The cluster consists of three CentOS 6.8 virtual machines (node01, node02, node03).
Versions: jdk1.8.0_101, scala-2.11.8, zookeeper-3.4.9, hadoop-2.7.3, spark-2.0.2, hive-1.2.1, sqoop-1.4.6
Role assignment:
node01: zookeeper, namenode, datanode, resourcemanager, nodemanager, journalnode, master, worker
node02: zookeeper, namenode, datanode, nodemanager, journalnode, worker
node03: zookeeper, namenode, datanode, nodemanager, journalnode, worker
1. Install Java and Scala
rpm -ivh jdk1.8.0_101.rpm
mkdir -p /usr/scala    # tar -C fails if the target directory does not exist
tar -zxvf scala-2.11.8.tgz -C /usr/scala
Add both to the global environment: vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_101
export SCALA_HOME=/usr/scala/scala-2.11.8
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin
After saving, reload the file: source /etc/profile
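After reloading the profile, a quick sanity check confirms both tools resolve on the new PATH (a minimal sketch; it only inspects PATH, so run `java -version` and `scala -version` yourself afterwards to confirm 1.8.0_101 and 2.11.8):

```shell
# Check that java and scala are reachable after 'source /etc/profile'
for tool in java scala; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool found: $(command -v "$tool")"
  else
    echo "$tool NOT on PATH - re-check /etc/profile and source it again"
  fi
done
```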
2. Configure host mappings
vi /etc/hosts
192.168.153.171 node01
192.168.153.172 node02
192.168.153.173 node03
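The same three mappings must land in /etc/hosts on every node. The sketch below appends them idempotently, so it is safe to re-run; HOSTS_FILE defaults to a scratch file here for a dry run, point it at /etc/hosts on the real machines:

```shell
# Append each mapping only if the hostname is not already present.
# HOSTS_FILE defaults to a scratch file; set HOSTS_FILE=/etc/hosts on the nodes.
HOSTS_FILE=${HOSTS_FILE:-/tmp/hosts.demo}
touch "$HOSTS_FILE"
while read -r ip name; do
  grep -qE "[[:space:]]${name}([[:space:]]|\$)" "$HOSTS_FILE" \
    || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.153.171 node01
192.168.153.172 node02
192.168.153.173 node03
EOF
cat "$HOSTS_FILE"
```

Running it a second time adds nothing, so it can go into any provisioning script without duplicating entries.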
3. Time synchronization
yum -y install ntpdate
ntpdate ntp.api.bz
clock -w    # write the synced time back to the hardware clock
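ntpdate is a one-shot sync, so the clocks will drift apart again over time. One common fix is a root cron entry that re-syncs periodically; the line below is a sketch (every 30 minutes, reusing the ntp.api.bz server from above) to add via `crontab -e`:

```shell
# Re-sync every 30 minutes and push the result to the hardware clock
*/30 * * * * /usr/sbin/ntpdate ntp.api.bz >/dev/null 2>&1 && /sbin/clock -w
```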
4. Passwordless SSH access
Generate a public key on each node and authorize it for the node itself:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
For example, to let node01 log in to node02 without a password:
node01: scp ~/.ssh/id_dsa.pub root@node02:/opt
node02: cat /opt/id_dsa.pub >> ~/.ssh/authorized_keys
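On the receiving node, the append is worth doing idempotently, and the permissions matter: sshd refuses key login if ~/.ssh or authorized_keys is too permissive. A sketch of that step, parameterized with scratch-path defaults for a dry run (on the real node02, set KEY_FILE=/opt/id_dsa.pub and AUTH_FILE=~/.ssh/authorized_keys):

```shell
# Idempotent key append with the permissions sshd expects.
KEY_FILE=${KEY_FILE:-/tmp/keydemo/id_dsa.pub}
AUTH_FILE=${AUTH_FILE:-/tmp/keydemo/authorized_keys}
mkdir -p "$(dirname "$AUTH_FILE")"
# Demo placeholder key so the sketch is self-contained; the real file
# arrives via the scp step above.
[ -f "$KEY_FILE" ] || echo "ssh-dss AAAAB3demo root@node01" > "$KEY_FILE"
touch "$AUTH_FILE"
# Append only if this exact key line is not already present
grep -qxF "$(cat "$KEY_FILE")" "$AUTH_FILE" || cat "$KEY_FILE" >> "$AUTH_FILE"
chmod 700 "$(dirname "$AUTH_FILE")"
chmod 600 "$AUTH_FILE"
wc -l < "$AUTH_FILE"   # stays at 1 no matter how often this runs
```

Afterwards, `ssh root@node02 hostname` from node01 should print the remote hostname without prompting for a password.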
