PrepAway - Latest Free Exam Questions & Answers

What should you do?

You are upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running
HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block size of
128MB for all new files written to the cluster after the upgrade. What should you do?

A.
Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the
NameNode, and set the parameter to final.

B.
Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the
NameNode, and set the parameter to final.

C.
Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the
parameter to final. You do not need to set this value on the NameNode.

D.
Set dfs.block.size to 128M on all the worker nodes and client machines, and set the parameter
to final. You do not need to set this value on the NameNode.

E.
You cannot enforce this, since client code can always override this value.

3 Comments on "What should you do?"

  1. Pathik Paul says:

    C – The question is confusing.
    You do NOT need to set the parameter on the NameNode, and dfs.block.size is specified in bytes, so only B and C are possible options.
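
    For reference, a minimal hdfs-site.xml sketch matching option C (value in bytes, marked final so client jobs cannot override it; the property name follows the question's wording, though newer Hadoop releases call it dfs.blocksize):

    ```xml
    <!-- hdfs-site.xml on worker nodes and client machines -->
    <!-- 134217728 bytes = 128 * 1024 * 1024 = 128 MB -->
    <property>
      <name>dfs.block.size</name>
      <value>134217728</value>
      <!-- "final" prevents client-side configuration from overriding this value -->
      <final>true</final>
    </property>
    ```

    After restarting the affected daemons, the effective value can be checked with `hdfs getconf -confKey dfs.blocksize`.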




