PrepAway - Latest Free Exam Questions & Answers

What should you do?

You’re upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one
running HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce
a block size of 128 MB for all new files written to the cluster after the upgrade. What
should you do?

A.
Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the
NameNode, and set the parameter to final

B.
Set dfs.block.size to 128M on all the worker nodes and client machines, and set the
parameter to final. You do not need to set this value on the NameNode

C.
Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on
the NameNode, and set the parameter to final

D.
Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the
parameter to final. You do not need to set this value on the NameNode

E.
You cannot enforce this, since client code can always override this value

8 Comments on “What should you do?”

  1. paco says:

    Correct answer is D

    Property: dfs.block.size
    Value: 134217728
    Description: Block size

    hdfs-site.xml is used to configure HDFS. Changing the dfs.block.size property in hdfs-site.xml changes the default block size for all files subsequently written to HDFS. Here we set dfs.block.size to 134217728 bytes, which is 128 MB (128 × 1024 × 1024). Changing this setting does not affect the block size of any files already in HDFS; it only applies to files written after the setting takes effect. Marking the parameter as final prevents client-side configuration from overriding it, and the NameNode does not need the setting because the block size is chosen on the client side when a file is written.
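    As a sketch, a minimal hdfs-site.xml fragment matching answer D could look like the following (exact surrounding configuration will vary by cluster; note that Hadoop 2.x renames this property to dfs.blocksize, keeping dfs.block.size as a deprecated alias):

    ```xml
    <!-- hdfs-site.xml on worker nodes and client machines -->
    <configuration>
      <property>
        <name>dfs.block.size</name>
        <!-- 128 MB = 128 * 1024 * 1024 = 134217728 bytes -->
        <value>134217728</value>
        <!-- final prevents client job configurations from overriding this value -->
        <final>true</final>
      </property>
    </configuration>
    ```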




