Your Hadoop cluster contains nodes in three racks. You have not configured the dfs.hosts
property in the NameNode’s configuration file. What results?

A.
The NameNode will update the dfs.hosts property to include machines running the DataNode
daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes
B.
No new nodes can be added to the cluster until you specify them in the dfs.hosts file
C.
Any machine running the DataNode daemon can immediately join the cluster
D.
Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in
mapred.hosts to join the cluster
C is the correct answer. If the value is empty, all hosts are permitted.
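For reference, dfs.hosts is set in hdfs-site.xml. The snippet below is a sketch with a hypothetical include-file path; the key point from the docs is that an absent or empty value means every host is permitted:

```xml
<!-- hdfs-site.xml: illustrative sketch; the include-file path is hypothetical. -->
<!-- If this property is omitted or its value is empty, all hosts are permitted. -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.hosts.include</value>
  <description>Full pathname of a file listing hosts permitted to connect
  to the NameNode; an empty value allows all hosts.</description>
</property>
```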
But you have to run the refreshNodes command, don't you?
Hi,
What’s the correct answer here? Is it C or A?
Any help is greatly appreciated.
-refreshNodes has to be run only if there is a dfs.hosts file.
Correct answer is B.
The dfs.hosts and mapred.hosts properties allow an administrator to supply a file
containing an approved list of hostnames. If a machine is not in this list, it will be denied access to
the cluster. This can be used to enforce policies regarding which teams of developers have access
to which MapReduce sub-clusters. These are configured in exactly the same way as the excludes
file.
Reference: Apache Hadoop, Module 7: Managing a Hadoop Cluster
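The admission rule being debated here can be sketched as a tiny shell model. This is purely illustrative — the real check happens inside the NameNode, not in a script, and `is_permitted` is a made-up function name — but it captures the documented behavior: an empty or missing include file admits everyone, otherwise only listed hosts may join.

```shell
#!/bin/sh
# Simplified model of the dfs.hosts admission rule (illustrative only):
# - empty or missing include file -> every host is permitted
# - otherwise, only hosts listed in the file may join the cluster
is_permitted() {
  host="$1"
  include_file="$2"
  if [ -z "$include_file" ] || [ ! -s "$include_file" ]; then
    echo "permitted"   # dfs.hosts not configured: all hosts allowed
  elif grep -qx "$host" "$include_file"; then
    echo "permitted"   # host appears in the include list
  else
    echo "denied"      # include list exists but host is not in it
  fi
}
```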
B seems correct
Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted.
C seems correct >>> If the value is empty, all hosts are permitted.
Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted.
The dfs.hosts file is just an “extra security layer”; if it doesn’t exist, all machines running a DataNode daemon can join the cluster (provided name resolution works).
Answer: C
C is correct:
http://gbif.blogspot.com/2011/01/setting-up-hadoop-cluster-part-1-manual.html
“If the value is empty, all hosts are permitted.”
http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
C is the correct answer.
No need to run refreshNodes, as the DataNode will heartbeat into the NameNode. To test it, you can just add a DataNode and keep running dfsadmin -report; it will start showing the newly added DataNode in a bit.
refreshNodes is only needed when you update the include/exclude file and want the NameNode to re-read it.
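When you do maintain an include file, the workflow this commenter describes looks roughly like the following. The include-file path and hostname are hypothetical, and the commands assume a running cluster, so this is an ops sketch rather than a standalone script:

```shell
# Add the new DataNode's hostname to the include file (hypothetical path)
echo "dn4.example.com" >> /etc/hadoop/conf/dfs.hosts.include

# Ask the NameNode to re-read the include/exclude files
hdfs dfsadmin -refreshNodes

# Watch for the new node to appear as it heartbeats in
hdfs dfsadmin -report
```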
From Cloudera:
Both files (dfs.hosts and mapred.hosts) are optional.
– If omitted, any host may connect and act as a DataNode/TaskTracker
– This is a possible security/data integrity issue
I have the same idea: A.