PrepAway - Latest Free Exam Questions & Answers

What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?

You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN.
You have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a
new worker node by setting fs.default.name in its configuration files to point to the
NameNode on your cluster, and you start the DataNode daemon on that worker node. What
do you have to do on the cluster to allow the worker node to join, and start storing HDFS
blocks?


A.
Without creating a dfs.hosts file or making any entries, run the command hadoop
dfsadmin -refreshHadoop on the NameNode

B.
Create a dfs.hosts file on the NameNode, add the worker node's name to it, then issue
the command hadoop dfsadmin -refreshNodes on the NameNode

C.
Restart the NameNode

D.
Nothing; the worker node will automatically join the cluster when the DataNode daemon
is started.
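Option B describes HDFS's include-file workflow. As a hedged sketch (the hostname and file path are illustrative assumptions, and the `dfs.hosts` property in hdfs-site.xml would have to point at the include file for it to take effect), the procedure looks like:

```shell
# Sketch of option B's procedure; the hostname and relative path are
# illustrative assumptions, not values from the question.

# 1) Add the worker node's hostname to the include file the NameNode reads:
echo "worker01.example.com" >> dfs.hosts

# 2) Ask the NameNode to re-read its include/exclude files
#    (commented out here because it requires a running cluster):
# hadoop dfsadmin -refreshNodes

# Show the resulting include file:
cat dfs.hosts
```

Note that the include list is only enforced when hdfs-site.xml sets the `dfs.hosts` property to the file's path; when that property is unset, as the question states, the NameNode accepts any DataNode that registers with it.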

2 Comments on “What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?”

  1. abc says:

    B.
    Create a dfs.hosts file on the NameNode, add the worker node's name to it, then issue
    the command hadoop dfsadmin -refreshNodes on the NameNode

  2. Manoj Sekharan says:

    D.

    The question clearly says as below:

    >> You have NO dfs.hosts entries in your hdfs-site.xml configuration file.

    Hence the answer is D.

