The case with only map tasks and no reduce task:

 

Test preparation:

First synchronize the clocks across the nodes, then start the HDFS cluster and the YARN cluster; create a file named user in the local "/home/hadoop/test/" directory.

user is a file containing the test data; its contents are as follows:

MapReduce testing on a YARN cluster (Part 2)

(Import the hadoop-2.7.3-All.jar package.)

 

Test goal:

Move groupId to the first column of the data.
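The reordering the mapper will perform can be sketched in plain Java first: split each line on whitespace and move the third field (groupId) to the front, tab-separated. The sample records in main are hypothetical, since the actual contents of the user file are not reproduced here.

```java
public class ReorderSketch {

    // Move the third field (groupId) to the front, joining fields with tabs
    static String reorder(String line) {
        String[] fields = line.trim().split("\\s+");
        StringBuilder builder = new StringBuilder(fields[2]);
        builder.append("\t").append(fields[0]).append("\t").append(fields[1]);
        return builder.toString();
    }

    public static void main(String[] args) {
        // Hypothetical record: name  age  groupId
        System.out.println(reorder("tom 20 g1")); // g1, tom, 20 (tab-separated)
    }
}
```

This is exactly the string-building logic the mapper below applies to every input line.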

 

Test code:

package com.mmzs.bigdata.yarn.mapreduce;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class OnlyMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private Text outKey;
    private NullWritable outValue;

    @Override
    protected void setup(Context context)
            throws IOException, InterruptedException {
        outKey = new Text();
        // NullWritable is a singleton; its definition only allows initialization via get()
        outValue = NullWritable.get();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the line on whitespace and move the third field (groupId) to the front
        String[] fields = value.toString().split("\\s+");
        StringBuilder builder = new StringBuilder(fields[2])
                .append("\t").append(fields[0]).append("\t").append(fields[1]);
        outKey.set(builder.toString());
        context.write(outKey, outValue);
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        outKey = null;
    }

}
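For a map-only job, the driver must set the number of reduce tasks to 0 so that the mapper's output is written directly to HDFS, with no shuffle or sort phase. A minimal driver sketch follows; the class name OnlyDriver and the use of command-line arguments for the input and output paths are assumptions, not taken from the original post.

```java
package com.mmzs.bigdata.yarn.mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OnlyDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "only-map");
        job.setJarByClass(OnlyDriver.class);

        job.setMapperClass(OnlyMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);

        // Key point: zero reduce tasks makes this a map-only job,
        // so map output goes straight into the output files
        job.setNumReduceTasks(0);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With zero reducers, the number of output files equals the number of map tasks, and each part-m-xxxxx file holds that mapper's records in input order.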
