Map Reduce
What is MapReduce?
MapReduce is a programming model and framework within the Hadoop ecosystem
that enables efficient processing of big data by automatically distributing and
parallelizing the computation. It consists of two fundamental tasks: Map and Reduce.
In the Map phase, the input data is divided into smaller chunks and processed
independently in parallel across multiple nodes in a distributed computing
environment. Each chunk is transformed or “mapped” into key-value pairs by a user-defined map function.
The Reduce phase follows the Map phase. It gathers the intermediate key-value
pairs generated by the Map tasks, performs data shuffling to group together pairs
with the same key, and then applies a user-defined reduction function to aggregate
and process the data. The output of the Reduce phase is the final result of the
computation.
MapReduce lets you define the mapper and reducer in several different languages
using Hadoop streaming.
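With Hadoop streaming, the mapper and reducer are ordinary programs that read lines from stdin and write tab-separated key-value pairs to stdout. The sketch below shows what a word-count pair might look like in Python; the function names are illustrative, not part of any Hadoop API:

```python
def streaming_mapper(lines):
    # Emit one "word<TAB>1" pair per word, as a streaming mapper
    # would print to stdout.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def streaming_reducer(pairs):
    # Hadoop sorts the mapper output by key before the reducer runs,
    # so identical words arrive consecutively and can be summed in a
    # single pass over the input.
    current, total = None, 0
    for pair in pairs:
        word, count = pair.rsplit("\t", 1)
        if word != current and current is not None:
            yield f"{current}\t{total}"
            total = 0
        current = word
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

# In a real streaming job each function would read sys.stdin, and the
# scripts would be passed to hadoop-streaming.jar via the -mapper and
# -reducer options; the sort between the two stages is what Hadoop's
# shuffle phase provides.
```
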
Using HDFS and HBase security, MapReduce ensures data security by allowing
only approved users to operate on data stored in the system. MapReduce is
flexible and works with several Hadoop-supported languages to handle and
store data.
MapReduce Architecture
https://www.analyticsvidhya.com/blog/2022/05/an-introduction-to-mapreduce-with-a-word-count-example/#:~:text=Here's an Map Reduce exampl… 2/7
4/10/24, 9:52 AM An Introduction to MapReduce with Map Reduce Example
A MapReduce job passes through five phases:

1. Input Splits
2. Mapping
3. Shuffling
4. Sorting
5. Reducing
Input Splits
MapReduce splits the input into smaller chunks called input splits, each
representing a block of work handled by a single mapper task.
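The number of splits can be pictured as a simple calculation over the input size; this is a simplified model, since real Hadoop splits also respect HDFS block and record boundaries:

```python
import math

def count_input_splits(input_size_bytes, split_size_bytes=128 * 1024 * 1024):
    # One mapper task is launched per split; by default the split size
    # matches the HDFS block size (128 MB in recent Hadoop versions).
    return max(1, math.ceil(input_size_bytes / split_size_bytes))
```
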
Mapping
In the mapper phase, the input data is processed and divided into smaller segments,
where the number of mappers equals the number of input splits. RecordReader
produces key-value pairs from each input split using TextInputFormat, which the
mapper receives as input. The mapper then processes these key-value pairs with
user-defined logic and emits intermediate key-value pairs.
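TextInputFormat's RecordReader turns a split into (offset, line) pairs: the key is where each line starts and the value is the line text. A simplified imitation (using character offsets rather than true byte offsets):

```python
def text_input_records(split_text):
    # Imitate TextInputFormat's RecordReader: the key is the offset
    # where each line starts, the value is the line without its
    # trailing newline.
    offset = 0
    for raw in split_text.splitlines(keepends=True):
        yield offset, raw.rstrip("\n")
        offset += len(raw)
```
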
Shuffling
In the shuffling phase, the output of the mapper phase is passed to the reducer
phase, with the values that share a key grouped together. The output is still in
the form of key-value pairs. Since shuffling can begin even before the mapper
phase has completed, it saves time.
Sorting
Sorting happens simultaneously with shuffling and involves merging and sorting
the output generated by the mapper. The intermediate key-value pairs are sorted
by key before the reducer phase starts, and the values can be of any type.
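The shuffle-and-sort step can be pictured as merging every mapper's pairs, sorting them by key, and grouping equal keys together; this is a sketch of what the framework does automatically:

```python
from itertools import groupby
from operator import itemgetter

def shuffle_and_sort(mapper_outputs):
    # Merge every mapper's pairs, sort them by key, then group the
    # values of each key so the reducer sees them all at once.
    merged = [pair for output in mapper_outputs for pair in output]
    merged.sort(key=itemgetter(0))
    for key, group in groupby(merged, key=itemgetter(0)):
        yield key, [value for _, value in group]
```
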
Reducing
In the reducer phase, the intermediate values from the shuffling phase are reduced
to produce a single output value that summarizes the entire dataset. The final
output is then stored on HDFS.
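For word count, the reduction is just a sum over each key's grouped values; a minimal sketch:

```python
def reduce_word_count(key, values):
    # For word count, "reducing" means summing the grouped ones;
    # other jobs would plug a different aggregation in here.
    return key, sum(values)
```
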
MapReduce Word Count Example

Here’s a MapReduce example that counts the frequency of each word in an input text.
The input data is divided into multiple segments, then processed in parallel to
reduce processing time. In this case, the input data will be divided into two input
splits so that work can be distributed over all the map nodes.
The Mapper counts the number of times each word occurs from input splits in the
form of key-value pairs where the key is the word, and the value is the
frequency.
For the first input split, it generates 4 key-value pairs: This, 1; is, 1; an, 1; apple,
1; and for the second, it generates 5 key-value pairs: Apple, 1; is, 1; red, 1; in, 1;
color, 1.
It is followed by the shuffle phase, in which the values are grouped by key across
the outputs of all the mappers. The same reducer is used for all key-value pairs
with the same key.
All the words present in the data are combined into a single output in the reducer
phase. Here in the example, we get the final output of key-value pairs as This, 1;
is, 2; an, 1; apple, 1; Apple, 1; red, 1; in, 1; color, 1.
The record writer writes the output key-value pairs from the reducer into the
output files, and the final output data is by default stored on HDFS.
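The whole walk-through can be simulated end to end on the two splits from the example; note that "Apple" and "apple" stay distinct keys because no normalization is applied, just as in the output above:

```python
from collections import defaultdict

def word_count(splits):
    # Map: each split independently emits (word, 1) pairs.
    mapped = [[(word, 1) for word in split.split()] for split in splits]
    # Shuffle and sort: group the ones by word across all mappers.
    grouped = defaultdict(list)
    for pairs in mapped:
        for word, one in pairs:
            grouped[word].append(one)
    # Reduce: sum each word's grouped values.
    return {word: sum(ones) for word, ones in grouped.items()}

counts = word_count(["This is an apple", "Apple is red in color"])
# "is" appears in both splits, so its count is 2; every other word
# appears once.
```
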
Limitations of MapReduce
MapReduce also has some limitations, and they are as follows:

- It cannot be used for real-time processing.
- It does not support data pipelining or overlapping of Map and Reduce phases.
- It cannot cache intermediate data in memory for later use, which diminishes
Hadoop’s performance.
Conclusion
In this article, we discussed the importance of MapReduce for the Hadoop
framework. The MapReduce framework has helped us deal with huge amounts of
data and find solutions to previously considered impossible problems. In addition to
analyzing large data sets in data warehouses, the MapReduce framework can also
be used for a broad range of large-scale batch-processing tasks.
MapReduce is fast, scalable, cost-effective, and can handle large volumes and
complex data.
Key Takeaways

The number of mappers in the mapper phase equals the number of input splits.
It is recommended that you take 1/4th of the number of mappers as the number
of reducers.
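That rule of thumb is easy to compute; the 1/4 ratio is the guideline quoted above, not a hard Hadoop requirement:

```python
def suggested_reducer_count(num_input_splits):
    # Mappers equal input splits; the rule of thumb suggests a
    # quarter as many reducers, with at least one.
    return max(1, num_input_splits // 4)
```
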