PrepAway - Latest Free Exam Questions & Answers


Which best describes a Hadoop cluster's block size storage parameters once you set the
HDFS default block size to 64 MB?


A. The block size of files in the cluster can be determined as the block is written.

B. The block size of files in the cluster will all be multiples of 64 MB.

C. The block size of files in the cluster will all be at least 64 MB.

D. The block size of files in the cluster will all be exactly 64 MB.

Explanation:
Note: What is the HDFS block size, and how does it differ from a traditional file system's
block size?
In HDFS, data is split into blocks and distributed across multiple nodes in the cluster. Each block is
typically 64 MB or 128 MB in size, and each block is replicated multiple times; the default is to
replicate each block three times, with replicas stored on different nodes. HDFS uses the local file
system to store each HDFS block as a separate file. The HDFS block size cannot be directly
compared with a traditional file system's block size.
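
The per-file behavior in option A is visible in the HDFS Java API: FileSystem.create accepts a
block size argument for the file being written. The following is a minimal sketch, assuming a
reachable HDFS cluster configured via fs.defaultFS; the path, buffer size, and replication factor
are arbitrary values chosen for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PerFileBlockSize {
        public static void main(String[] args) throws Exception {
            // Connects to whatever fs.defaultFS points at (assumed to be HDFS here).
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/tmp/custom-block-file.txt"); // hypothetical path
            int bufferSize = 4096;                // write buffer size
            short replication = 3;                // replicas per block
            long blockSize = 128L * 1024 * 1024;  // 128 MB for this file, despite a 64 MB default

            // FileSystem.create takes the block size per file, so the writer
            // decides it as the file is written: this is what option A describes.
            try (FSDataOutputStream out =
                     fs.create(file, true, bufferSize, replication, blockSize)) {
                out.writeUTF("written with a per-file block size");
            }
            fs.close();
        }
    }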

3 Comments on "Which best describes a Hadoop cluster's block size storage parameters once you set the HDFS default block size to 64 MB?"

  1. red says:

    In case a record falls across two blocks, the first block accepts it and the next block starts from the next whole record. In that case the block size of the first block will be slightly greater than 64 MB. Hence I feel the answer is 'C'.




    1. red says:

      Contradicting my earlier answer C: first, while writing a file we can specify what the block size for that file should be. Second, if the blocksize parameter on a client is 128 MB and the client is writing a file, it will override the default NameNode blocksize parameter. Hence I believe the answer should be A.
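
      As a minimal sketch of that client-side override (assuming a reachable cluster, with hypothetical local and HDFS paths), setting dfs.blocksize in the client's Configuration before writing takes precedence over the NameNode's default for files this client creates:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class ClientBlockSizeOverride {
              public static void main(String[] args) throws Exception {
                  Configuration conf = new Configuration();
                  // dfs.blocksize is read on the client: the writer sends this value
                  // to the NameNode, so it wins over the cluster-wide default.
                  conf.setLong("dfs.blocksize", 128L * 1024 * 1024); // 128 MB

                  FileSystem fs = FileSystem.get(conf);
                  // Hypothetical paths, purely for illustration.
                  fs.copyFromLocalFile(new Path("localfile.txt"),
                                       new Path("/user/red/localfile.txt"));
                  fs.close();
              }
          }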




