MapReduce Programming: Data Sorting

2023-05-16

    Sorting data with MapReduce does not take a complicated implementation: by default the framework already sorts the intermediate data by key during the shuffle phase, so neither the map nor the reduce stage needs any extra sorting logic.

    Besides sorting, the shuffle phase by default also merges records with the same key into one group. In the reducer we therefore have to pay attention: because equal keys have been merged, we must iterate over the values of every key in order to output every record, and if a values collection holds more than one element the input contained duplicates. To record the position of each record, the program defines a static variable lineNum that accumulates the total length of all value collections and at the same time serves as the running line number of the output.
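    For example (the values here are made up for illustration), suppose the map tasks emit the pairs (32, 1), (2, 1) and (32, 1). After the shuffle the reducer receives the keys in ascending order with their values grouped together:

2  -> [1]
32 -> [1, 1]

    The loop over the values then writes one numbered line per element, so the duplicated 32 appears twice in the result and lineNum ends at 3, the total number of records.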

The MapReduce program:

package com.xxx.hadoop.mapred;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Data sorting
 *
 */
public class DataSortApp {
	
	public static class Map extends Mapper<Object, Text, IntWritable, IntWritable>{
		@Override
		protected void map(Object key, Text value, 
				Context context) throws IOException, InterruptedException {
			// Each input value is one line of text holding a single integer.
			StringTokenizer tokenizer = new StringTokenizer(value.toString(), "\n");
			while(tokenizer.hasMoreElements()) {
				// Emit the number itself as the key so the shuffle sorts by it;
				// the value 1 is only a placeholder.
				IntWritable data = new IntWritable(Integer.parseInt(tokenizer.nextToken()));
				context.write(data, new IntWritable(1));
			}
		}
	}
	
	public static class Reduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable>{
		// Running line number of the sorted output; it ends up equal to the total record count.
		private static IntWritable lineNum = new IntWritable(1);
		@SuppressWarnings("unused")
		@Override
		protected void reduce(IntWritable key, Iterable<IntWritable> values, 
				Context context) throws IOException, InterruptedException {
			// Keys arrive in ascending order; write one line per occurrence so that
			// duplicate input values are preserved in the output.
			for(IntWritable val:values) {
				context.write(lineNum, key);
				lineNum.set(lineNum.get()+1);
			}
		}
	}

	public static void main(String[] args) throws Exception{
		String input="/user/root/datasort/input",
		       output="/user/root/datasort/output";
		// Run as HDFS user "root" and point the job at the remote NameNode.
		System.setProperty("HADOOP_USER_NAME", "root");
		Configuration conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.56.202:9000");
		
		// Delete a leftover output directory, otherwise the job fails at startup.
		FileSystem fs = FileSystem.get(conf);
		boolean exists = fs.exists(new Path(output));
		if(exists) {
			fs.delete(new Path(output), true);
		}
		
		Job job = Job.getInstance(conf);
		job.setJarByClass(DataSortApp.class);
		job.setMapperClass(Map.class);
		job.setReducerClass(Reduce.class);
		job.setMapOutputKeyClass(IntWritable.class);
		job.setMapOutputValueClass(IntWritable.class);
		job.setOutputKeyClass(IntWritable.class);
		job.setOutputValueClass(IntWritable.class);
		
		FileInputFormat.addInputPath(job, new Path(input));
		FileOutputFormat.setOutputPath(job, new Path(output));
		
		System.exit(job.waitForCompletion(true)?0:1);
	}

}

Prepare the data:
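    The original post showed the sample data as a screenshot, which is not reproduced here. As a stand-in, the input could look like the listing below; the concrete numbers are made up, the job log only tells us that there are three small files under /user/root/datasort/input holding ten integers in total, one integer per line:

1.txt: 2, 32, 654, 32
2.txt: 5956, 22, 650
3.txt: 26, 54, 6

Uploaded to HDFS with:

hadoop fs -mkdir -p /user/root/datasort/input
hadoop fs -put 1.txt 2.txt 3.txt /user/root/datasort/input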

Run the program; the console prints the following:

2019-08-31 22:22:35 [INFO ]  [main]  [org.apache.hadoop.conf.Configuration.deprecation] session.id is deprecated. Instead, use dfs.metrics.session-id
2019-08-31 22:22:35 [INFO ]  [main]  [org.apache.hadoop.metrics.jvm.JvmMetrics] Initializing JVM Metrics with processName=JobTracker, sessionId=
2019-08-31 22:22:35 [WARN ]  [main]  [org.apache.hadoop.mapreduce.JobResourceUploader] Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-08-31 22:22:35 [WARN ]  [main]  [org.apache.hadoop.mapreduce.JobResourceUploader] No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2019-08-31 22:22:36 [INFO ]  [main]  [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] Total input paths to process : 3
2019-08-31 22:22:36 [INFO ]  [main]  [org.apache.hadoop.mapreduce.JobSubmitter] number of splits:3
2019-08-31 22:22:36 [INFO ]  [main]  [org.apache.hadoop.mapreduce.JobSubmitter] Submitting tokens for job: job_local1111900714_0001
2019-08-31 22:22:36 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] The url to track the job: http://localhost:8080/
2019-08-31 22:22:36 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Running job: job_local1111900714_0001
2019-08-31 22:22:36 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] OutputCommitter set in config null
2019-08-31 22:22:36 [INFO ]  [Thread-3]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 22:22:36 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2019-08-31 22:22:36 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] Waiting for map tasks
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1111900714_0001_m_000000_0
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@42324622
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/datasort/input/1.txt:0+14
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-08-31 22:22:36 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 32; bufvoid = 104857600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1111900714_0001_m_000000_0 is done. And is in the process of committing
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1111900714_0001_m_000000_0' done.
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1111900714_0001_m_000000_0
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1111900714_0001_m_000001_0
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1e445343
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/datasort/input/2.txt:0+13
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 32; bufvoid = 104857600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1111900714_0001_m_000001_0 is done. And is in the process of committing
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1111900714_0001_m_000001_0' done.
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1111900714_0001_m_000001_0
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1111900714_0001_m_000002_0
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@3bd4bc73
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Processing split: hdfs://192.168.56.202:9000/user/root/datasort/input/3.txt:0+7
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] (EQUATOR) 0 kvi 26214396(104857584)
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] mapreduce.task.io.sort.mb: 100
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] soft limit at 83886080
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufvoid = 104857600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396; length = 6553600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] 
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Starting flush of map output
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Spilling map output
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] bufstart = 0; bufend = 16; bufvoid = 104857600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.MapTask] Finished spill 0
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task:attempt_local1111900714_0001_m_000002_0 is done. And is in the process of committing
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] map
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1111900714_0001_m_000002_0' done.
2019-08-31 22:22:37 [INFO ]  [LocalJobRunner Map Task Executor #0]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1111900714_0001_m_000002_0
2019-08-31 22:22:37 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] map task executor complete.
2019-08-31 22:22:37 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] Waiting for reduce tasks
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] Starting task: attempt_local1111900714_0001_r_000000_0
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] File Output Committer Algorithm version is 1
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] ProcfsBasedProcessTree currently is supported only on Linux.
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task]  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@5cb84ffa
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.ReduceTask] Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@37f7c67f
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] MergerManager: memoryLimit=1265788544, maxSingleShuffleLimit=316447136, mergeThreshold=835420480, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2019-08-31 22:22:37 [INFO ]  [EventFetcher for fetching Map Completion Events]  [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] attempt_local1111900714_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2019-08-31 22:22:37 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Job job_local1111900714_0001 running in uber mode : false
2019-08-31 22:22:37 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job]  map 100% reduce 0%
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1111900714_0001_m_000000_0 decomp: 42 len: 46 to MEMORY
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 42 bytes from map-output for attempt_local1111900714_0001_m_000000_0
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 42, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->42
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1111900714_0001_m_000001_0 decomp: 42 len: 46 to MEMORY
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 42 bytes from map-output for attempt_local1111900714_0001_m_000001_0
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 42, inMemoryMapOutputs.size() -> 2, commitMemory -> 42, usedMemory ->84
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] localfetcher#1 about to shuffle output of map attempt_local1111900714_0001_m_000002_0 decomp: 22 len: 26 to MEMORY
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] Read 22 bytes from map-output for attempt_local1111900714_0001_m_000002_0
2019-08-31 22:22:37 [INFO ]  [localfetcher#1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] closeInMemoryFile -> map-output of size: 22, inMemoryMapOutputs.size() -> 3, commitMemory -> 84, usedMemory ->106
2019-08-31 22:22:37 [INFO ]  [EventFetcher for fetching Map Completion Events]  [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] EventFetcher is interrupted.. Returning
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Merging 3 sorted segments
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Down to the last merge-pass, with 3 segments left of total size: 88 bytes
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merged 3 segments, 106 bytes to disk to satisfy reduce memory limit
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merging 1 files, 106 bytes from disk
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] Merging 0 segments, 0 bytes from memory into reduce
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Merging 1 sorted segments
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Merger] Down to the last merge-pass, with 1 segments left of total size: 96 bytes
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.conf.Configuration.deprecation] mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task:attempt_local1111900714_0001_r_000000_0 is done. And is in the process of committing
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] 3 / 3 copied.
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task attempt_local1111900714_0001_r_000000_0 is allowed to commit now
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] Saved output of task 'attempt_local1111900714_0001_r_000000_0' to hdfs://192.168.56.202:9000/user/root/datasort/output/_temporary/0/task_local1111900714_0001_r_000000
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] reduce > reduce
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.Task] Task 'attempt_local1111900714_0001_r_000000_0' done.
2019-08-31 22:22:37 [INFO ]  [pool-6-thread-1]  [org.apache.hadoop.mapred.LocalJobRunner] Finishing task: attempt_local1111900714_0001_r_000000_0
2019-08-31 22:22:37 [INFO ]  [Thread-3]  [org.apache.hadoop.mapred.LocalJobRunner] reduce task executor complete.
2019-08-31 22:22:38 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job]  map 100% reduce 100%
2019-08-31 22:22:38 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Job job_local1111900714_0001 completed successfully
2019-08-31 22:22:38 [INFO ]  [main]  [org.apache.hadoop.mapreduce.Job] Counters: 35
	File System Counters
		FILE: Number of bytes read=4001
		FILE: Number of bytes written=1098948
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=109
		HDFS: Number of bytes written=55
		HDFS: Number of read operations=37
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=10
	Map-Reduce Framework
		Map input records=10
		Map output records=10
		Map output bytes=80
		Map output materialized bytes=118
		Input split bytes=366
		Combine input records=0
		Combine output records=0
		Reduce input groups=10
		Reduce shuffle bytes=118
		Reduce input records=10
		Reduce output records=10
		Spilled Records=20
		Shuffled Maps =3
		Failed Shuffles=0
		Merged Map outputs=3
		GC time elapsed (ms)=8
		Total committed heap usage (bytes)=1486880768
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=34
	File Output Format Counters 
		Bytes Written=55

View the result:
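    The result screenshot from the original post is likewise not reproduced. With the hypothetical input above, the output file would look roughly like this: each line is the running line number, a tab, and the next value in ascending order, with duplicates kept.

hadoop fs -cat /user/root/datasort/output/part-r-00000
1	2
2	6
3	22
4	26
5	32
6	32
7	54
8	650
9	654
10	5956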

    Data sorting has a bit of the flavor of word counting; the difference is that we do not care what the exact count of each value is, but we do use that count to write out the duplicated records.
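    For contrast, a WordCount-style reducer would collapse every group into a single line by summing the ones, while the reducer above re-emits one numbered line per grouped value. A minimal sketch of the counting variant (not part of this job, shown only for comparison):

public static class CountReduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable>{
	protected void reduce(IntWritable key, Iterable<IntWritable> values,
			Context context) throws IOException, InterruptedException {
		int sum = 0;
		for(IntWritable val : values) {
			sum += val.get();                         // how many times this number appeared
		}
		context.write(key, new IntWritable(sum));     // one line per distinct number
	}
}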
