Aegis Softtech's big data analytics team presents a tutorial on how to get the top N word frequency counts using the MapReduce paradigm. You can try your hands on the code shared in this post and share your experience afterwards.
Top N word count using MapReduce
We will show how to get the top N word counts from different articles and sort them accordingly using the Hadoop MapReduce paradigm. Whether you are analyzing blog posts, reports, or other documents, understanding word frequencies is essential when working with large volumes of written content.
MapReduce Problem Statement:
We have N articles in text format and we are interested in finding word frequencies. We also want to sort the words by frequency so that we can find which words occur most often across all the files.
I have tested the code in the following environment:
- Java: 1.7.0_75
- Hadoop: 1.0.4
- Sample Input:
We have N files in text format. I used 20 large text files for this test.
- Data Preparation:
Once we have collected all the input files, we have to upload them to HDFS.
I created an /input/articles directory and put all the files in that directory, for example:
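Using the hadoop fs commands (the local source path below is a placeholder for wherever your article files live):

```
hadoop fs -mkdir /input/articles
hadoop fs -put /path/to/local/articles/*.txt /input/articles
```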
- Solution :
We will perform this task in two steps.
- 1. Using core MapReduce
We will use one mapper to parse the files and count each occurrence of a particular word.
We will use one reducer to compute the total frequency of each word.
Once the mapper and reducer tasks are completed, we will have a partition file in our HDFS.
- 2. We will sort the data using the sort utility, based on the frequency counts.
I will give a detailed explanation of this program and how to run it at the end of this document.
My code looks like the following.
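The full listing is summarized in the walkthrough below; as a reference point, here is a minimal driver sketch. The driver class name TopNWordCount and the use of the new org.apache.hadoop.mapreduce API are my assumptions; the mapper and reducer class names come from the walkthrough.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver class; wires the mapper and reducer into one job.
public class TopNWordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "top-n-word-count"); // Job(conf, name) is the Hadoop 1.x-era API
        job.setJarByClass(TopNWordCount.class);
        job.setMapperClass(TopNWordCountMapper.class);
        job.setReducerClass(TopNWordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /input/articles
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /output/articles
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```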
- Code Walk Through:
Most of the code is self-explanatory, so you can easily read it and get a line-by-line understanding.
We extract words from the text files using the mapper class TopNWordCountMapper.java.
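A minimal sketch of such a mapper, assuming simple lower-casing and punctuation stripping (the original tokenization may differ):

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TopNWordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Normalize to lower case and strip punctuation so that "Word" and "word," match (assumed normalization).
        String line = value.toString().toLowerCase().replaceAll("[^a-z0-9\\s]", " ");
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, ONE); // emit (word, 1) for every occurrence
        }
    }
}
```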
We count the total occurrences of each word using the reducer class TopNWordCountReducer.java.
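A minimal sketch of such a reducer, which sums the per-occurrence counts emitted by the mappers:

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TopNWordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get(); // add up all the 1s emitted by the mappers for this word
        }
        total.set(sum);
        context.write(word, total); // one line per word in part-r-*****: "word<TAB>count"
    }
}
```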
Now we know how to count the words with Hadoop MapReduce; next, let us discuss how to run the program and sort the results effectively.
- How to run this program:
- Prepare Data:
Copy your data files into HDFS. I have put all my files in the /input/articles folder in HDFS, as shown in the Data Preparation section above.
Now create a jar file from this project using Eclipse's export-jar facility.
Run the jar file using the hadoop jar command. I used the following command on my local configuration:
hadoop jar TopNArticle.jar /input/articles /output/articles
Please note that when specifying the output path, the directory named "articles" must not already exist; it will be created automatically. Once the big data job is completed, the data will be ready and your output directory (/output/articles) will have a file starting with part-r-*****, which is our intermediate data. Now, to retrieve the words and their frequency counts from those files, use a command along the lines of the following.
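For example, this pipes the counts through the standard Unix sort utility, sorting numerically on the second (count) column in descending order; taking the top 20 here is just an illustrative cutoff:

```
hadoop fs -cat /output/articles/part-r-* | sort -k2 -n -r | head -20
```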