PrepAway - Latest Free Exam Questions & Answers

Which best describes a Hadoop cluster's block size storage parameters once you set the HDFS default block size to 64MB?

Which of the following best describes a Hadoop cluster's block size storage parameters once you set the HDFS default block size to 64MB?


A. The block size of files in the cluster can be determined as the block is written.

B. The block size of files in the cluster will all be multiples of 64MB.

C. The block size of files in the cluster will all be at least 64MB.

D. The block size of files in the cluster will all be exactly 64MB.

Explanation:
Note: What is the HDFS block size, and how does it differ from a traditional file system's block size?

In HDFS, data is split into blocks and distributed across multiple nodes in the cluster. Each block is typically 64MB or 128MB in size, and each block is replicated multiple times; the default is to replicate each block three times, with the replicas stored on different nodes. HDFS uses the local file system to store each HDFS block as a separate file, so a block holding less data than the configured size consumes only as much local disk as the data it actually contains. For these reasons, the HDFS block size cannot be compared directly with a traditional file system's block size.
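
Answer A reflects how HDFS actually behaves: the configured value is only a default, and a client may choose a different block size for each file at create time. Below is a minimal sketch of both cases, assuming a reachable HDFS and the standard Hadoop Java client on the classpath; the paths and the 128MB override are illustrative, not part of the original question.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Cluster-wide default, normally set in hdfs-site.xml via
        // dfs.blocksize (dfs.block.size in older releases).
        // 64MB = 67,108,864 bytes.
        conf.setLong("dfs.blocksize", 64L * 1024 * 1024);

        FileSystem fs = FileSystem.get(conf);

        // A plain create() picks up the configured default block size.
        FSDataOutputStream usesDefault =
                fs.create(new Path("/tmp/uses-default.txt"));
        usesDefault.close();

        // This overload takes an explicit block size as its last argument,
        // overriding the 64MB default for this one file (128MB here).
        FSDataOutputStream usesOverride = fs.create(
                new Path("/tmp/uses-128mb.txt"),
                true,                 // overwrite if it exists
                4096,                 // I/O buffer size in bytes
                (short) 3,            // replication factor
                128L * 1024 * 1024);  // per-file block size
        usesOverride.close();
    }
}
```

Because the block size is fixed per file when it is written, changing the cluster default later does not rewrite existing files; they keep the block size they were created with.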

2 Comments on "Which best describes a Hadoop cluster's block size storage parameters once you set the HDFS default block size to 64MB?"

  1. Marcelo says:

    Shouldn't the correct answer be that the block size of files will be at most 64MB? Or A? What if you have a file that is smaller than 64MB? If you look at a previous question in this test, the answer was that such a block would be less than 64MB.




