You use the hadoop fs -put command to add a file "sales.txt" to HDFS. This file is small enough
that it fits into a single block, which is replicated to three nodes in your cluster (with a replication
factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the
replication of the file in this situation?

A.
The file will remain under-replicated until the administrator brings that node back online
B.
The cluster will re-replicate the file the next time the system administrator reboots the
NameNode daemon (as long as the file’s replication factor doesn’t fall below)
C.
This will be immediately re-replicated and all other HDFS operations on the cluster will halt until
the cluster's replication values are restored
D.
The file will be re-replicated automatically after the NameNode determines it is under-replicated
based on the block reports it receives from the NameNodes
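A quick way to see the setup in practice (the /data path below is just an example, not part of the question):

# Put the file into HDFS, then list it; the second column of the -ls
# output is the file's current replication factor (3 here)
hadoop fs -put sales.txt /data/sales.txt
hadoop fs -ls /data/sales.txt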
D is the right one
I think D is correct. The NameNode keeps track of replication. A replication factor of 3 has to be fulfilled before the transaction ends. Agree?
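One nuance worth noting (the property names below are the stock Hadoop ones, shown only for illustration): a write can actually succeed once the minimum replica count is met, and the NameNode then raises the file to the full replication factor asynchronously. Both settings can be checked on a live cluster:

# Target replica count for new files (the question assumes 3)
hdfs getconf -confKey dfs.replication
# Minimum replicas needed for a write to be considered successful
hdfs getconf -confKey dfs.namenode.replication.min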
D is the correct answer.
If the put command only copies the file to two locations (where the replication factor is 3), then through the heartbeats and block reports the DataNodes send to the NameNode, the NameNode will detect the under-replicated blocks and re-replicate them.
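A minimal way to watch this mechanism, assuming the same hypothetical /data/sales.txt path:

# Report the file's blocks, replica locations, and replication health
hdfs fsck /data/sales.txt -files -blocks -locations

If a DataNode holding a replica goes down, fsck reports the block as under-replicated until the NameNode schedules a new replica on another DataNode.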
D is right, not B
D
D (correction: the block reports are received from the DataNodes, not the NameNodes)
Many questions have typos and other misleading errors. I hope the site admin is aware of these.
D
The correct answer is B