June 2013 Archive
05 Building a 4-node Hadoop cluster
Summary: 1. Prepare 4 Linux PCs and make sure the machines can ping each other /* VMware Bridged */
(1) Edit /etc/hosts on every machine as follows:
49.123.90.186 redhnamenode
49.123.90.181 redhdatanode1
49.123.90.182 redhdatanode2
49.123.90.184 redhdatanode3
(2) Turn off the firewall /* requires root */: service iptables stop
(3) Create the same user, hadoop-user, on all machines
(4) Install the JDK under /home
2. ssh configuration /* as hadoop-user */
(1) On every redhdatanode, create .ss…
Read full post
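The host-mapping step from this post can be sketched as a small script. This is a local sketch only: it writes to a scratch file rather than the real /etc/hosts (which needs root), and the IPs and hostnames are the ones listed in the post.

```shell
# Sketch of step 1(1): map each node's IP to its hostname.
# Writing to a scratch file here; on a real cluster this would append to
# /etc/hosts on every machine, as root.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
49.123.90.186 redhnamenode
49.123.90.181 redhdatanode1
49.123.90.182 redhdatanode2
49.123.90.184 redhdatanode3
EOF

# Sanity-check that every node is mapped before moving on
# (the post's ping check would be the real-cluster equivalent):
for host in redhnamenode redhdatanode1 redhdatanode2 redhdatanode3; do
    grep -q "$host" "$HOSTS_FILE" && echo "$host mapped"
done
```

On the actual machines the same heredoc would target /etc/hosts directly, after which `ping redhdatanode1` etc. confirms name resolution.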
04 wordcount
Summary: 1. cd /home/chenyong/paper/hadoop-1.1.2
2. mkdir input
cd input
echo "hello world !" > test1.txt
echo "hello hadoop" > test2.txt
echo "hello redhat" > test3.txt
3. ./bin/hadoop dfs -put input /in /* copies the input directory into the HDFS root, renamed to in; the out directory must be empty before running */
./bin/hadoop jar hadoop-examples-1…
Read full post
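The input files above and the result WordCount produces can be reproduced locally without a cluster. The pipeline below is a stand-in for the post's `hadoop jar hadoop-examples-*.jar wordcount` job, not the job itself: it counts the same words with standard tools.

```shell
# Recreate the post's three input files.
mkdir -p input
echo "hello world !" > input/test1.txt
echo "hello hadoop"  > input/test2.txt
echo "hello redhat"  > input/test3.txt

# Local stand-in for the WordCount MapReduce job:
# split into words, then count occurrences of each.
cat input/*.txt | tr -s ' ' '\n' | sort | uniq -c | sort -rn
# "hello" appears 3 times; every other token appears once.
```

On the cluster, the equivalent result would come from running the examples jar against /in and reading the part files out of the output directory.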
03 hdfs format
Summary: 1. hadoop fs -rmr /tmp
2. stop-all.sh
3. rm -rf /tmp/hadoop* /*** on all PCs ***/
4. hadoop namenode -format
5. start-all.sh
Read full post
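The five steps above can be collected into one reviewable script. This is a sketch under the post's assumptions (Hadoop 1.x commands on the PATH, default /tmp data directories); the script is only written to a file here, not executed, since step 3 must be run on every node by hand.

```shell
# Hedged sketch: the post's HDFS reformat sequence as a single script.
# The filename reformat-hdfs.sh is illustrative, not from the post.
cat > reformat-hdfs.sh <<'EOF'
#!/bin/sh
set -e
hadoop fs -rmr /tmp        # 1. clear HDFS /tmp
stop-all.sh                # 2. stop all Hadoop daemons
rm -rf /tmp/hadoop*        # 3. wipe local tmp dirs -- run on EVERY PC
hadoop namenode -format    # 4. reformat the namenode
start-all.sh               # 5. restart the cluster
EOF
chmod +x reformat-hdfs.sh
```

Note the ordering matters: the daemons must be stopped and the local /tmp/hadoop* directories removed on all nodes before the namenode is reformatted, or datanodes will reject the new namespace ID.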
02 Pseudo-distributed mode
Summary: 1. Hadoop configuration: edit core-site.xml, hdfs-site.xml, and mapred-site.xml
2. Passwordless ssh setup:
[root@localhost /]# ssh-keygen -t rsa
[root@localhost ~]# cd /root/.ssh
[root@localhost ~]# chmod 700 .ssh
[root@localhost .ssh]# cat id_rsa.pub >> authorized_keys
[root@localhost .ssh]# chmod 600 authorized_keys
3. Running Hadoop: [root@loca…
Read full post
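The passwordless-ssh steps in the summary can be sketched non-interactively. This version works in a scratch directory rather than the real /root/.ssh, and passes `-N ""` to skip the passphrase prompt; both are assumptions for demonstration, not part of the post.

```shell
# Sketch of step 2, done in a temporary directory so the real ~/.ssh
# is untouched.
SSH_DIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$SSH_DIR/id_rsa" -q   # empty passphrase for passwordless login
chmod 700 "$SSH_DIR"
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

The permission bits are not optional: sshd refuses key authentication when .ssh is group/world accessible or authorized_keys is writable by others, which is why the post runs chmod 700 and chmod 600.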
01 hadoop
Summary: 1. Download and set up jdk-7u25-linux-i586.tar.gz && hadoop-1.1.2.tar.gz
2. Edit the hadoop-env.sh file:
[root@localhost ~]# cd /home/chenyong/paper/hadoop-1.1.2
[root@localhost hadoop-1.1.2]# vi conf/hadoop-env.sh
Set: export JAVA_HOME=/home/jdk1.7.0_25
3. Start the ssh service
Note: Hadoop file permissions depend on the login user; here it is root
Read full post
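Step 2 above can be done without opening vi. The sketch below appends the JAVA_HOME line to a scratch file standing in for conf/hadoop-env.sh; the JDK path is the one from the post, the scratch file is an assumption.

```shell
# Sketch: set JAVA_HOME in hadoop-env.sh non-interactively.
# ENV_FILE stands in for /home/chenyong/paper/hadoop-1.1.2/conf/hadoop-env.sh.
ENV_FILE=$(mktemp)
echo 'export JAVA_HOME=/home/jdk1.7.0_25' >> "$ENV_FILE"
grep JAVA_HOME "$ENV_FILE"
```

hadoop-env.sh is sourced by the Hadoop start scripts, so the variable must be exported there even if JAVA_HOME is already set in the login shell.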