Briefing Cloudera Knowledge


You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN.
You have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a
new worker node by setting fs.default.name in its configuration files to point to the
NameNode on your cluster, and you start the DataNode daemon on that worker node. What
do you have to do on the cluster to allow the worker node to join and start storing HDFS
blocks?

A.
Without creating a dfs.hosts file or making any entries, run the command hadoop
dfsadmin -refreshHadoop on the NameNode

B.
Create a dfs.hosts file on the NameNode, add the worker node's name to it, then issue
the command hadoop dfsadmin -refreshNodes on the NameNode

C.
Restart the NameNode

D.
Nothing; the worker node will automatically join the cluster when the DataNode daemon
is started.
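For context, the include-list mechanism the options refer to can be sketched as follows. This is an illustrative outline only: the hostname and file path are examples, not part of the question, and the commands assume administrative access on the NameNode host.

```shell
# Sketch of the HDFS include-list (dfs.hosts) mechanism.
# Hostname and path below are illustrative examples.

# 1. In hdfs-site.xml on the NameNode, dfs.hosts can point at an include file:
#    <property>
#      <name>dfs.hosts</name>
#      <value>/etc/hadoop/conf/dfs.hosts</value>
#    </property>

# 2. Add the new worker's hostname to that file:
echo "worker-node-05.example.com" >> /etc/hadoop/conf/dfs.hosts

# 3. Tell the NameNode to re-read the include/exclude lists without a restart:
hadoop dfsadmin -refreshNodes    # newer releases: hdfs dfsadmin -refreshNodes
```

Note that when dfs.hosts is not set at all, the include list is considered empty, and the NameNode accepts registration from any DataNode that can reach it — which is the situation the question describes.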