Abstract:
core-site.xml: fs.default.name: hdfs://hadoop:9000; fs.tmp.dir: /usr/local/hadoop/tmp. hdfs-site.xml: dfs.name.dir; dfs.name.edits.dir: edits; dfs.replication Read more
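Laid out as the two configuration files the flattened excerpt came from, a sketch (values exactly as in the excerpt; note the usual Hadoop property for the temp directory is hadoop.tmp.dir, so fs.tmp.dir here may be a typo, and the hdfs-site.xml values are elided in the excerpt):

```xml
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop:9000</value>
  </property>
  <property>
    <name>fs.tmp.dir</name> <!-- name as in the excerpt; usually hadoop.tmp.dir -->
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>

<!-- hdfs-site.xml: dfs.name.dir, dfs.name.edits.dir, dfs.replication
     are listed in the excerpt but their values are cut off. -->
```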
Abstract:
Step 0: Set up a static IP. service network restart; ifconfig shows 192.168.56.88. Step 1: Disable the firewall. service iptables stop [status]; service iptables status; chkconf... Read more
This post is password-protected. Read more
Abstract:
Step 1: Configure the hosts file (/etc/hosts) (practical consideration: use DNS instead?). Use awk to automate the copy: cat slaves.txt | awk '{print "scp -rp ./hadoop.1.2 hadoop@" $1 ":/home/hadoop"}' Step 2: Set up the Hadoop run... Read more
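The awk one-liner in the excerpt has its braces reversed (`'print{...'`) and an unbalanced quote; a corrected sketch, assuming slaves.txt holds one worker hostname per line (sample hostnames below are hypothetical):

```shell
# Hypothetical slaves file: one worker hostname per line.
printf 'node1\nnode2\n' > slaves.txt

# Generate one scp command per worker to push the Hadoop tree
# (path ./hadoop.1.2 taken from the excerpt).
cat slaves.txt | awk '{print "scp -rp ./hadoop.1.2 hadoop@" $1 ":/home/hadoop"}'
# → scp -rp ./hadoop.1.2 hadoop@node1:/home/hadoop
# → scp -rp ./hadoop.1.2 hadoop@node2:/home/hadoop
```

Piping the generated lines into `sh` would execute the copies; printing them first makes the command list easy to review.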
Abstract:
HBasics. HBase is a distributed, column-oriented database built on top of HDFS. HBase is the Hadoop application to use when you require real-time read/wri... Read more
Abstract:
1: What is MapReduce? (How does MapReduce work?) MapReduce is a programming model for data processing. MapReduce works by breaking the processing into two phases: the map phase and the reduce phase. Each phase has key-value pairs as input and output, the types of which can be chosen by the programmer. (... Read more
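The two phases described above can be illustrated with plain Java collections rather than the real Hadoop API; this is only a sketch of the model (word count as the example job), with the class and method names invented here, not Hadoop's:

```java
import java.util.*;

// Toy illustration of the MapReduce model: map emits key-value pairs,
// the framework groups them by key (the "shuffle"), and reduce folds
// each group of values into one result per key.
class MapReduceSketch {

    // Map phase: one input line -> a list of (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String w : line.split("\\s+"))
            if (!w.isEmpty()) out.add(new AbstractMap.SimpleEntry<>(w, 1));
        return out;
    }

    // Reduce phase: (word, [counts]) -> total count for that word.
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    static Map<String, Integer> run(List<String> lines) {
        // Map + shuffle: group every emitted value under its key.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String line : lines)
            for (Map.Entry<String, Integer> kv : map(line))
                grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                       .add(kv.getValue());
        // Reduce: one output pair per key.
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet())
            result.put(e.getKey(), reduce(e.getKey(), e.getValue()));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(run(Arrays.asList("hadoop map reduce", "map reduce")));
        // prints {hadoop=1, map=2, reduce=2}
    }
}
```

In real Hadoop the map and reduce functions run on different nodes and the shuffle moves data across the network; the in-memory grouping here only stands in for that step.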
Abstract:
1. The Writable interface. Hadoop does not use Java's serialization; it introduces its own serialization system. The package org.apache.hadoop.io defines a large number of serializable objects, all of which implement the Writable interface, the common interface for serializable objects. Here is the definition of the Writable interface:

public interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

The Writable interface abstracts the two serialization methods, write and... Read more
Abstract:
Since both Hadoop's MapReduce and HDFS need to communicate, the objects they exchange must be serialized. Hadoop does not use Java serialization (Java serialization is relatively heavyweight and does not allow fine-grained control); instead it introduces its own system. org.apache.hadoop.io defines a large number of serializable objects, all of which implement the Writable interface. A typical example of implementing Writable:

public class MyWritable implements Writable {
    // Some data
    private int counter;
    private long Read more
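The MyWritable excerpt is cut off after its second field; a minimal self-contained completion, assuming the counter field shown plus a second long field (its name, timestamp, is a guess, since the excerpt truncates it). The Writable interface is redeclared locally so the sketch compiles without hadoop-common on the classpath; the real one lives in org.apache.hadoop.io:

```java
import java.io.*;

class MyWritableDemo {
    // Redeclared locally for a self-contained sketch; in real code
    // this is org.apache.hadoop.io.Writable.
    interface Writable {
        void write(DataOutput out) throws IOException;
        void readFields(DataInput in) throws IOException;
    }

    static class MyWritable implements Writable {
        int counter;    // some data, as in the excerpt
        long timestamp; // hypothetical name for the truncated long field

        public void write(DataOutput out) throws IOException {
            out.writeInt(counter);
            out.writeLong(timestamp);
        }

        public void readFields(DataInput in) throws IOException {
            counter = in.readInt();
            timestamp = in.readLong();
        }
    }

    // Serialize to a byte buffer, deserialize into a fresh object,
    // and return the recovered (counter, timestamp) values.
    static long[] roundTripValues(int counter, long timestamp) {
        try {
            MyWritable w = new MyWritable();
            w.counter = counter;
            w.timestamp = timestamp;
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            w.write(new DataOutputStream(bytes));
            MyWritable copy = new MyWritable();
            copy.readFields(new DataInputStream(
                    new ByteArrayInputStream(bytes.toByteArray())));
            return new long[] { copy.counter, copy.timestamp };
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for in-memory streams
        }
    }

    public static void main(String[] args) {
        long[] r = roundTripValues(42, 1L);
        System.out.println(r[0] + " " + r[1]); // prints 42 1
    }
}
```

The round trip through DataOutput/DataInput is exactly the contract Hadoop relies on: write and readFields must serialize and deserialize the fields in the same order.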