Hadoop in Action: Preprocessing with Chained MapReduce Jobs
Environment: VMware 8.0 and Ubuntu 11.04
Step 1: Create a project named HadoopTest. The directory structure is shown in the figure below:
Step 2: Create a start.sh script under /home/tanglg1987. Every time the virtual machine starts, the script removes everything under /tmp and reformats the NameNode. The script is as follows:
- #!/bin/bash
- sudo rm -rf /tmp/*
- rm -rf /home/tanglg1987/hadoop-0.20.2/logs
- hadoop namenode -format
- start-all.sh
- hadoop dfsadmin -safemode leave
- hadoop fs -mkdir input
Step 3: Give start.sh execute permission and start the pseudo-distributed Hadoop cluster:
- chmod 777 /home/tanglg1987/start.sh
- ./start.sh
The execution output is as follows:
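Once the script finishes, a quick optional check (not part of the original steps) is to confirm that all five Hadoop daemons came up:
- jps
In a healthy pseudo-distributed setup the listing should include NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker.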
Step 4: Upload a local file to HDFS
Create ChainMapper.txt under /home/tanglg1987 with the following content:
- 100 tom 90
- 101 mary 85
- 102 kate 60
Upload the local file to HDFS:
- hadoop fs -put /home/tanglg1987/ChainMapper.txt input
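To confirm the upload succeeded (an optional check, not in the original post), read the file back from HDFS:
- hadoop fs -cat input/ChainMapper.txt
It should print the three records exactly as they appear above.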
Step 5: Create ChainMapperDemo.java with the following code:
- package com.baison.action;
- import java.io.IOException;
- import java.util.Iterator;
- import org.apache.hadoop.fs.Path;
- import org.apache.hadoop.conf.*;
- import org.apache.hadoop.io.*;
- import org.apache.hadoop.mapred.*;
- import org.apache.hadoop.util.*;
- import org.apache.hadoop.mapred.lib.*;
- public class ChainMapperDemo {
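- // First mapper in the chain: drops the record whose key is "100"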
- public static class Map00 extends MapReduceBase implements
- Mapper<Text, Text, Text, Text> {
- public void map(Text key, Text value, OutputCollector<Text, Text> output,
- Reporter reporter) throws IOException {
- Text ft = new Text("100");
- if (!key.equals(ft)) {
- output.collect(key, value);
- }
- }
- }
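- // Second mapper in the chain: drops the record whose key is "101"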
- public static class Map01 extends MapReduceBase implements
- Mapper<Text, Text, Text, Text> {
- public void map(Text key, Text value, OutputCollector<Text, Text> output,
- Reporter reporter) throws IOException {
- Text ft = new Text("101");
- if (!key.equals(ft)) {
- output.collect(key, value);
- }
- }
- }
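- // Pass-through reducer: emits every record that survived both mappers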
- public static class Reduce extends MapReduceBase implements
- Reducer<Text, Text, Text, Text> {
- public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output,
- Reporter reporter) throws IOException {
- while (values.hasNext()) {
- output.collect(key, values.next());
- }
- }
- }
- public static void main(String[] args) throws Exception {
- String[] arg = { "hdfs://localhost:9100/user/tanglg1987/input/ChainMapper.txt",
- "hdfs://localhost:9100/user/tanglg1987/output" };
- JobConf conf = new JobConf(ChainMapperDemo.class);
- conf.setJobName("ChainMapperDemo");
- conf.setInputFormat(KeyValueTextInputFormat.class);
- conf.setOutputFormat(TextOutputFormat.class);
- JobConf mapAConf = new JobConf(false);
- ChainMapper.addMapper(conf, Map00.class, Text.class, Text.class, Text.class,
- Text.class, true, mapAConf);
- JobConf mapBConf = new JobConf(false);
- ChainMapper.addMapper(conf, Map01.class, Text.class, Text.class, Text.class,
- Text.class, true, mapBConf);
- conf.setReducerClass(Reduce.class);
- conf.setOutputKeyClass(Text.class);
- conf.setOutputValueClass(Text.class);
- FileInputFormat.setInputPaths(conf, new Path(arg[0]));
- FileOutputFormat.setOutputPath(conf, new Path(arg[1]));
- JobClient.runJob(conf);
- }
- }
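With the two mappers chained, each record flows through Map00 and then Map01 before reaching the reducer, so only rows whose key is neither 100 nor 101 (here, the record with key 102) make it into the output. One detail worth checking: KeyValueTextInputFormat splits each line into key and value at the first tab character by default. If your copy of ChainMapper.txt is space-separated as in the listing above (an assumption about the file, not stated in the original), the separator can be set explicitly before the job is submitted:
- conf.set("key.value.separator.in.input.line", " ");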
Step 6: Run On Hadoop. The run log is as follows:
12/10/17 21:05:53 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/17 21:05:53 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/10/17 21:05:53 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/17 21:05:54 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/17 21:05:54 INFO mapred.JobClient: Running job: job_local_0001
12/10/17 21:05:54 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/17 21:05:54 INFO mapred.MapTask: numReduceTasks: 1
12/10/17 21:05:54 INFO mapred.MapTask: io.sort.mb = 100
12/10/17 21:05:54 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/17 21:05:54 INFO mapred.MapTask: record buffer = 262144/327680
12/10/17 21:05:54 INFO mapred.MapTask: Starting flush of map output
12/10/17 21:05:54 INFO mapred.MapTask: Finished spill 0
12/10/17 21:05:54 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/17 21:05:54 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/ChainMapper.txt:0+35
12/10/17 21:05:54 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.Merger: Merging 1 sorted segments
12/10/17 21:05:54 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 16 bytes
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/10/17 21:05:54 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/17 21:05:54 INFO mapred.LocalJobRunner: reduce > reduce
12/10/17 21:05:54 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.
12/10/17 21:05:55 INFO mapred.JobClient: map 100% reduce 100%
12/10/17 21:05:55 INFO mapred.JobClient: Job complete: job_local_0001
12/10/17 21:05:55 INFO mapred.JobClient: Counters: 15
12/10/17 21:05:55 INFO mapred.JobClient: FileSystemCounters
12/10/17 21:05:55 INFO mapred.JobClient: FILE_BYTES_READ=36152
12/10/17 21:05:55 INFO mapred.JobClient: HDFS_BYTES_READ=70
12/10/17 21:05:55 INFO mapred.JobClient: FILE_BYTES_WRITTEN=73202
12/10/17 21:05:55 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=12
12/10/17 21:05:55 INFO mapred.JobClient: Map-Reduce Framework
12/10/17 21:05:55 INFO mapred.JobClient: Reduce input groups=1
12/10/17 21:05:55 INFO mapred.JobClient: Combine output records=0
12/10/17 21:05:55 INFO mapred.JobClient: Map input records=3
12/10/17 21:05:55 INFO mapred.JobClient: Reduce shuffle bytes=0
12/10/17 21:05:55 INFO mapred.JobClient: Reduce output records=1
12/10/17 21:05:55 INFO mapred.JobClient: Spilled Records=2
12/10/17 21:05:55 INFO mapred.JobClient: Map output bytes=12
12/10/17 21:05:55 INFO mapred.JobClient: Map input bytes=35
12/10/17 21:05:55 INFO mapred.JobClient: Combine input records=0
12/10/17 21:05:55 INFO mapred.JobClient: Map output records=1
12/10/17 21:05:55 INFO mapred.JobClient: Reduce input records=1
Step 7: View the result set. The output is as follows:
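The result can be read back from the output directory; given the input and the two chained filters, only the record with key 102 should remain (an expectation derived from the code above rather than a captured run):
- hadoop fs -cat output/part-00000
- 102	kate 60
The key and value in the reducer's output are separated by a tab.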