PrepAway - Latest Free Exam Questions & Answers

How many Mappers will run?

Your cluster's HDFS block size is 64 MB. You have a directory containing 100 plain text files, each
of which is 100 MB in size. The InputFormat for your job is TextInputFormat. How many Mappers
will run?


A. 64

B. 100

C. 200

D. 640

Explanation:
The correct answer is C. Each 100 MB file is split into two input splits, because the block size
(64 MB) is smaller than the file size (100 MB) and splits never span file boundaries, so 200
mappers will run.
Note:
If you're not compressing the files, then Hadoop will process your large files (say 10 GB) with a
number of mappers determined by the block size of the file.
Say your block size is 64 MB; then you will have ~160 mappers processing this 10 GB file (160 × 64 MB ≈
10 GB). Depending on how CPU-intensive your mapper logic is, this might be an
acceptable block size, but if you find that your mappers are executing in sub-minute times, then
you might want to increase the work done by each mapper (by increasing the block size to 128,
256, or 512 MB; the actual size depends on how you intend to process the data).
Reference: http://stackoverflow.com/questions/11014493/hadoop-mapreduce-appropriate-inputfiles-size (first answer, second paragraph)
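The per-file split logic above can be sketched in a few lines of Python. This is a simplified model of how Hadoop's `FileInputFormat.getSplits` carves one uncompressed file into splits (the 1.1 slop factor, which lets the last split be up to 10% larger than the split size, is taken from Hadoop's source; sizes are in MB for readability):

```python
SPLIT_SLOP = 1.1  # Hadoop allows the final split to run up to 10% over the split size

def num_splits(file_size_mb, split_size_mb):
    """Count input splits for a single uncompressed, splittable file."""
    splits = 0
    remaining = file_size_mb
    # Carve off full-sized splits while the remainder is clearly too big
    while remaining / split_size_mb > SPLIT_SLOP:
        splits += 1
        remaining -= split_size_mb
    # Whatever is left (here, 36 MB) becomes one final split
    if remaining > 0:
        splits += 1
    return splits

per_file = num_splits(100, 64)   # 2 splits per file: 64 MB + 36 MB
total_mappers = 100 * per_file   # 200 mappers for 100 such files
print(per_file, total_mappers)   # → 2 200
```

Because each split is handled by one map task, 100 files × 2 splits each gives the 200 mappers in the answer.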

3 Comments on “How many Mappers will run?”

  1. sumit says:

    I disagree.
    Assuming the below here:

    1. Min split size = 1 byte
    2. Max split size = Integer.MAX_VALUE

    We will have split size = block size = 64MB

    Then no. of mappers = (100 × 100) / 64 = 156.25
    ==> 157 mappers
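The 157 figure in this comment comes from dividing the total data volume by the split size, as if the 100 files were one contiguous stream. Since FileInputFormat computes splits per file and a split never crosses a file boundary, the two calculations diverge; a quick sketch of both (ignoring Hadoop's 10% slop factor, which does not change these particular numbers):

```python
import math

files, file_mb, split_mb = 100, 100, 64

# Pooled calculation (the comment's assumption: files treated as one stream)
pooled = math.ceil(files * file_mb / split_mb)   # ceil(10000 / 64) = 157

# Per-file calculation (splits do not span files, as Hadoop actually behaves)
per_file = math.ceil(file_mb / split_mb)         # ceil(100 / 64) = 2
total = files * per_file                         # 100 × 2 = 200

print(pooled, total)  # → 157 200
```

The gap exists because each file's 36 MB tail becomes its own (underfilled) split rather than being packed together with data from the next file.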

