Hadoop in Action: Word Count (WordCount)
Environment: VMware 8.0 and Ubuntu 11.04
Step 1: Create a project named HadoopTest. The directory structure is shown in the figure below:
Step 2: Create a start.sh script under /home/tanglg1987. Because hadoop.tmp.dir defaults to a directory under /tmp (the startup log below shows /tmp/hadoop-tanglg1987), HDFS metadata is lost on every reboot, so the script wipes /tmp and reformats the NameNode each time the VM starts. The script is as follows:
# wipe old HDFS data (hadoop.tmp.dir defaults to /tmp) and stale logs
sudo rm -rf /tmp/*
rm -rf /home/tanglg1987/hadoop-0.20.2/logs
# reformat HDFS; only the NameNode is formatted (DataNodes have no -format option)
hadoop namenode -format
# start all daemons, leave safe mode, then create the HDFS input directory
start-all.sh
hadoop dfsadmin -safemode leave
hadoop fs -mkdir input
Step 3: Make start.sh executable and launch the pseudo-distributed Hadoop cluster:
chmod +x /home/tanglg1987/start.sh
./start.sh
The output looks like this:
12/10/15 23:05:38 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = tanglg1987/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
12/10/15 23:05:39 INFO namenode.FSNamesystem: fsOwner=tanglg1987,tanglg1987,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
12/10/15 23:05:39 INFO namenode.FSNamesystem: supergroup=supergroup
12/10/15 23:05:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/10/15 23:05:39 INFO common.Storage: Image file of size 100 saved in 0 seconds.
12/10/15 23:05:39 INFO common.Storage: Storage directory /tmp/hadoop-tanglg1987/dfs/name has been successfully formatted.
12/10/15 23:05:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at tanglg1987/127.0.1.1
************************************************************/
starting namenode, logging to /home/tanglg1987/hadoop-0.20.2/bin/../logs/hadoop-tanglg1987-namenode-tanglg1987.out
localhost: starting datanode, logging to /home/tanglg1987/hadoop-0.20.2/bin/../logs/hadoop-tanglg1987-datanode-tanglg1987.out
localhost: starting secondarynamenode, logging to /home/tanglg1987/hadoop-0.20.2/bin/../logs/hadoop-tanglg1987-secondarynamenode-tanglg1987.out
starting jobtracker, logging to /home/tanglg1987/hadoop-0.20.2/bin/../logs/hadoop-tanglg1987-jobtracker-tanglg1987.out
localhost: starting tasktracker, logging to /home/tanglg1987/hadoop-0.20.2/bin/../logs/hadoop-tanglg1987-tasktracker-tanglg1987.out
Safe mode is OFF
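At this point it is worth confirming that all five daemons came up. jps ships with the JDK and lists running Java processes:

jps
# expect NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker
# (each with its own PID), plus Jps itself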
Step 4: Upload local files to HDFS.
Create two files, file01.txt and file02.txt, in the /home/tanglg1987/input directory.
file01.txt contains:
hello hadoop
file02.txt contains:
hello world
Upload the local files to HDFS:
hadoop fs -put /home/tanglg1987/input/file01.txt input
hadoop fs -put /home/tanglg1987/input/file02.txt input
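To confirm the upload, list the HDFS directory and print one of the files back (standard hadoop fs commands; the paths follow this walkthrough):

hadoop fs -ls input
hadoop fs -cat input/file01.txt
# should print: hello hadoop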
Step 5: Create WordCount.java with the following code:
package com.baison.action;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: splits each input line into whitespace-separated tokens
    // and emits (word, 1) for every token.
    public static class TokenizerMapper extends
            Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts for each word.
    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        // Input and output paths are hard-coded to HDFS locations.
        String[] arg = { "hdfs://localhost:9100/user/tanglg1987/input",
                "hdfs://localhost:9100/user/tanglg1987/output" };
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, arg)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
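The next step runs the job from Eclipse, but the same class can also be compiled and submitted from the shell. A minimal sketch, assuming a default hadoop-0.20.2 install under /home/tanglg1987 (the jar name and paths are assumptions):

mkdir -p classes
# compile against the Hadoop core jar that ships with 0.20.2
javac -classpath /home/tanglg1987/hadoop-0.20.2/hadoop-0.20.2-core.jar -d classes WordCount.java
jar cf wordcount.jar -C classes .
hadoop jar wordcount.jar com.baison.action.WordCount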
Step 6: Choose Run On Hadoop in Eclipse. Because the job is launched without a jar, it executes through the LocalJobRunner, hence the job_local_0001 ID and the "No job jar file set" warning in the log below:
12/10/15 20:58:47 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/15 20:58:48 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/15 20:58:48 INFO input.FileInputFormat: Total input paths to process : 2
12/10/15 20:58:48 INFO mapred.JobClient: Running job: job_local_0001
12/10/15 20:58:48 INFO input.FileInputFormat: Total input paths to process : 2
12/10/15 20:58:48 INFO mapred.MapTask: io.sort.mb = 100
12/10/15 20:58:48 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/15 20:58:48 INFO mapred.MapTask: record buffer = 262144/327680
12/10/15 20:58:48 INFO mapred.MapTask: Starting flush of map output
12/10/15 20:58:48 INFO mapred.MapTask: Finished spill 0
12/10/15 20:58:48 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/15 20:58:48 INFO mapred.LocalJobRunner:
12/10/15 20:58:48 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/15 20:58:48 INFO mapred.MapTask: io.sort.mb = 100
12/10/15 20:58:48 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/15 20:58:48 INFO mapred.MapTask: record buffer = 262144/327680
12/10/15 20:58:48 INFO mapred.MapTask: Starting flush of map output
12/10/15 20:58:48 INFO mapred.MapTask: Finished spill 0
12/10/15 20:58:48 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
12/10/15 20:58:48 INFO mapred.LocalJobRunner:
12/10/15 20:58:48 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000001_0' done.
12/10/15 20:58:48 INFO mapred.LocalJobRunner:
12/10/15 20:58:48 INFO mapred.Merger: Merging 2 sorted segments
12/10/15 20:58:48 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 53 bytes
12/10/15 20:58:48 INFO mapred.LocalJobRunner:
12/10/15 20:58:48 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/10/15 20:58:48 INFO mapred.LocalJobRunner:
12/10/15 20:58:48 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/10/15 20:58:48 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/15 20:58:48 INFO mapred.LocalJobRunner: reduce > reduce
12/10/15 20:58:48 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.
12/10/15 20:58:49 INFO mapred.JobClient: map 100% reduce 100%
12/10/15 20:58:49 INFO mapred.JobClient: Job complete: job_local_0001
12/10/15 20:58:49 INFO mapred.JobClient: FileSystemCounters
12/10/15 20:58:49 INFO mapred.JobClient: FILE_BYTES_READ=50524
12/10/15 20:58:49 INFO mapred.JobClient: HDFS_BYTES_READ=62
12/10/15 20:58:49 INFO mapred.JobClient: FILE_BYTES_WRITTEN=102822
12/10/15 20:58:49 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=25
12/10/15 20:58:49 INFO mapred.JobClient: Map-Reduce Framework
12/10/15 20:58:49 INFO mapred.JobClient: Reduce input groups=3
12/10/15 20:58:49 INFO mapred.JobClient: Combine output records=4
12/10/15 20:58:49 INFO mapred.JobClient: Map input records=2
12/10/15 20:58:49 INFO mapred.JobClient: Reduce shuffle bytes=0
12/10/15 20:58:49 INFO mapred.JobClient: Reduce output records=3
12/10/15 20:58:49 INFO mapred.JobClient: Spilled Records=8
12/10/15 20:58:49 INFO mapred.JobClient: Map output bytes=41
12/10/15 20:58:49 INFO mapred.JobClient: Combine input records=4
12/10/15 20:58:49 INFO mapred.JobClient: Map output records=4
12/10/15 20:58:49 INFO mapred.JobClient: Reduce input records=4
Step 7: View the result set.
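With the new MapReduce API the reducer writes its output to part-r-00000. Given the two input files above, the expected result (whose size matches the 25 HDFS bytes written in the counters) is:

hadoop fs -cat output/part-r-00000

hadoop  1
hello   2
world   1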