A slave node in your cluster has four 2 TB hard drives installed (4 x 2 TB = 8 TB total). The DataNode is configured to store HDFS blocks on all disks. You set the value of the dfs.datanode.du.reserved parameter to 100 GB. How does this alter HDFS block storage?

A.
25 GB on each hard drive may not be used to store HDFS blocks
B.
100 GB on each hard drive may not be used to store HDFS blocks
C.
All hard drives may be used to store HDFS blocks as long as at least 100 GB in total is available on the node
D.
A maximum of 100 GB on each hard drive may be used to store HDFS blocks
B is the right one: the reservation applies per volume, so each of the four 2 TB drives keeps 100 GB free for non-HDFS use, leaving roughly 1.9 TB per disk for HDFS blocks (4 x 100 GB = 400 GB reserved node-wide).
100 GB on each disk will be reserved for non-HDFS use, such as MapReduce intermediate data, so HDFS can't use it.
Hadoop Operations: The value of dfs.datanode.du.reserved specifies the amount of space, in bytes, to be reserved “on each disk” in dfs.data.dir.
So B is correct.
http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
Reserved space in bytes per volume. Always leave this much space free for non dfs use.
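For concreteness, here is a minimal hdfs-site.xml sketch that applies this setting. The property name and its bytes-only unit come from the docs linked above; everything else (file placement, cluster specifics) is assumed for illustration:

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- 100 GB reserved per volume, expressed in bytes: 100 * 1024^3 -->
  <value>107374182400</value>
</property>

After a DataNode restart, each of the four disks keeps 100 GB free for non-DFS use.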
B is correct
https://books.google.co.in/books?id=H3mvcxPeUfwC&pg=PA114&lpg=PA114&dq=dfs.datanode.du.reserved+hadoop&source=bl&ots=pYuOyb2Iqa&sig=8aMM4ZLpxFvAC5_3tPI8pvmkYqg&hl=en&sa=X&ved=0ahUKEwil9rSu7b_LAhUBkSwKHef8D24Q6AEIMzAE#v=onepage&q=dfs.datanode.du.reserved%20hadoop&f=false
“B” is the correct answer.
B is right
I agree with the answer, B.