Hadoop Enterprise Cluster Architecture - NFS Installation

Server address: 192.168.1.230

 

Install the NFS software
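A minimal install sketch, assuming a CentOS/RHEL 7 server (consistent with the yum and systemctl commands used later); the NFS server needs the nfs-utils and rpcbind packages:

yum -y install nfs-utils rpcbind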


Check whether NFS is installed

rpm -qa | grep nfs

 

 

Check the rpcbind and nfs services

 

systemctl list-unit-files | grep "nfs"

 

 

systemctl list-unit-files | grep "rpcbind"

 

 

systemctl enable nfs-server.service

 

 

systemctl enable rpcbind.service

 

 

systemctl list-unit-files | grep -E "rpcbind.service|nfs-server.service"
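systemctl enable only configures the units to start at boot; to bring them up immediately, assuming the same systemd-based system:

systemctl start rpcbind
systemctl start nfs-server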


Check NFS and RPC status

service rpcbind status

 

service nfs status

 

 

Create the runtime account:

 

groupadd grid

useradd -m -s /bin/bash -g grid grid

passwd grid

<grid123>    (password entered at the passwd prompt)
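An optional sanity check to confirm the account and the home directory that will be exported:

id grid                # the grid user and its primary group grid
ls -ld /home/grid      # home directory created by useradd -m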

 

Edit /etc/exports and add:

/home/grid *(rw,sync,no_root_squash)
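Here rw allows read-write access, sync commits writes to disk before replying, and no_root_squash lets remote root keep root privileges on the share. As an alternative to a full service restart, the export table can be reloaded and inspected with exportfs (shipped with nfs-utils):

exportfs -ra      # re-read /etc/exports and re-export everything
exportfs -v       # list the active exports with their options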

 

 

Restart rpcbind and nfs

systemctl restart rpcbind

systemctl restart nfs

 

 

Verify:

showmount -e 192.168.1.230
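If the export is visible, the output should look roughly like this, following the configuration above:

Export list for 192.168.1.230:
/home/grid *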

 

 

 

Mount the shared directory on the nodes

 

NFS mount error: "wrong fs type, bad option"

Fix:

yum -y install nfs-utils

 

 

mount -t nfs 192.168.1.230:/home/grid /nfs_share/

 

Check the mounts

 

mount
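Since plain mount lists every mounted filesystem, the NFS share can be singled out with a quick filter, for example:

mount | grep nfs
df -hT /nfs_share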

 

 

Configure automatic mounting at boot

vi /etc/fstab

Add the following line:

192.168.1.230:/home/grid /nfs_share nfs defaults 0 0
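To verify the fstab entry without rebooting, assuming the share from the manual mount above is unmounted first:

umount /nfs_share      # drop the manual mount from the previous step
mount -a               # mount everything listed in /etc/fstab
df -h /nfs_share       # confirm the share is mounted again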

 

 

Configure SSH on the NFS server and all nodes

 

Append each node's public key to the NFS server
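The commands below assume the grid user on each node (h1/h2/h3) already has an RSA key pair; if not, one can be generated first, for example:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # run as grid on every node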

ssh h1.hadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh h2.hadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh h3.hadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys


 

 

Create a symlink on each node to the shared authorized_keys file

 

Install on each node:

yum -y install nfs-utils

 

mkdir /nfs_share

mount -t nfs 192.168.1.230:/home/grid /nfs_share/

ln -s /nfs_share/.ssh/authorized_keys ~/.ssh/authorized_keys
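OpenSSH is strict about permissions on ~/.ssh and authorized_keys, so if passwordless login still fails after creating the symlink, tighten them on the shared directory and on each node's ~/.ssh:

chmod 700 ~/.ssh                            # on the server and on every node
chmod 600 /nfs_share/.ssh/authorized_keys   # the shared key file (/home/grid/.ssh/authorized_keys on the server)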
