
Which best describes a Hadoop cluster's block size storage parameters once you set the
HDFS default block size to 64 MB?

A.
The block size of files in the cluster can be determined as the block is written.

B.
The block size of files in the cluster will all be multiples of 64 MB.

C.
The block size of files in the cluster will all be at least 64 MB.

D.
The block size of files in the cluster will all be exactly 64 MB.

Answer: A

Explanation:
HDFS block size is a per-file property: a client can specify a block size when a file is written, overriding the configured default. Setting the default to 64 MB therefore does not force every file in the cluster to use 64 MB blocks.
Note: What is HDFS block size, and how is it different from a traditional file system's block size?

In HDFS, data is split into blocks and distributed across multiple nodes in the cluster. Each block is typically 64 MB or 128 MB in size. Each block is replicated multiple times; the default is to replicate each block three times, with the replicas stored on different nodes. HDFS uses the local file system to store each HDFS block as a separate file. HDFS block size is not directly comparable to a traditional file system's block size: HDFS blocks are orders of magnitude larger (64 MB versus a typical 4 KB), and a file smaller than one block does not consume a full block's worth of underlying storage.
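
To illustrate why answer A is correct, the sketch below uses Hadoop's FileSystem API to write two files: one with the cluster's configured default block size and one with an explicit 128 MB block size chosen at write time. It is a minimal example, assuming a reachable HDFS configured via the usual core-site.xml/hdfs-site.xml; the paths /tmp/example-default.txt and /tmp/example-128mb.txt are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS and dfs.blocksize from the cluster configuration
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // File 1: written with whatever default block size the cluster is configured with
        Path defaultPath = new Path("/tmp/example-default.txt");  // illustrative path
        try (FSDataOutputStream out = fs.create(defaultPath)) {
            out.writeUTF("uses the cluster's default block size");
        }

        // File 2: the client overrides the block size for this file at write time;
        // the configured default is only a default, hence answer A.
        Path customPath = new Path("/tmp/example-128mb.txt");     // illustrative path
        long blockSize = 128L * 1024 * 1024;                      // 128 MB
        short replication = 3;                                    // default replication factor
        int bufferSize = 4096;
        try (FSDataOutputStream out =
                 fs.create(customPath, true, bufferSize, replication, blockSize)) {
            out.writeUTF("uses a 128 MB block size regardless of the default");
        }

        // Print the block size the NameNode recorded for each file
        System.out.println(fs.getFileStatus(defaultPath).getBlockSize());
        System.out.println(fs.getFileStatus(customPath).getBlockSize());

        fs.close();
    }
}

Running this against a cluster whose default block size is 64 MB would print 67108864 for the first file and 134217728 for the second, showing that per-file block sizes are fixed when the file is written.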