You’re upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running
HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block size of
128MB for all new files written to the cluster after the upgrade. What should you do?

A. You cannot enforce this, since client code can always override this value
B. Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final
C. Set dfs.block.size to 128 M on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode
D. Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final
E. Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode
Explanation:
—
“E” is the correct answer.
dfs.block.size is specified in bytes, so 128 MB = 128 * 1024 * 1024 = 134217728 bytes.
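For reference, a minimal sketch of what answer E implies in hdfs-site.xml on every worker node and client machine (dfs.block.size is the older MRv1-era name; in Hadoop 2.x the equivalent key is dfs.blocksize):

<property>
  <name>dfs.blocksize</name>
  <!-- 128 MB expressed in bytes -->
  <value>134217728</value>
  <!-- final: subsequently loaded configuration resources cannot override this value -->
  <final>true</final>
</property>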
Answer: E
The NameNode doesn’t store blocks; dfs.block.size is to be configured on the DataNodes.
The difference between options C and E is that the value should be 128M or 134217728, not “128 M” (with a space).
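A quick way to sanity-check the byte arithmetic from any shell:

$ echo $((128 * 1024 * 1024))
134217728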
I referenced this link to verify the answer: https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml says the following for dfs.blocksize:
“The default block size for new files, in bytes. You can use the following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in bytes (such as 134217728 for 128 MB)”
So the answer is ‘E’, since the syntax in option ‘C’ is incorrect (as Arun mentioned).
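Per that documentation, these two hdfs-site.xml entries (sketched here for illustration) should be equivalent:

<property>
  <name>dfs.blocksize</name>
  <!-- suffix form, case insensitive -->
  <value>128m</value>
</property>

<property>
  <name>dfs.blocksize</name>
  <!-- complete size in bytes -->
  <value>134217728</value>
</property>

“128 M” with an embedded space matches neither form, which is what makes option C’s syntax wrong.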
>> You want to set and enforce a block size of 128MB for all new files written to the cluster after upgrade.
You can only enforce it by marking the dfs.block.size parameter as final on the NameNode. Marking it final everywhere else will not have any impact; it is the Administrator who should mark this parameter final on the NameNode.
I’ve just changed hdfs-site.xml:

<property>
  <name>dfs.block.size</name>
  <!-- was 134217728 -->
  <value>67108864</value>
  <description>The default block size for new files (in bytes).</description>
</property>
Then I restarted the DataNode:
root@slave01:~# /etc/init.d/hadoop-0.20-datanode restart
Restarting Hadoop datanode daemon: stopping datanode
Starting Hadoop datanode daemon: starting datanode, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-datanode-slave01.out
hadoop-0.20-datanode.
and we have:
root@slave01:~# mv foo.txt foo1.txt
root@slave01:~# hadoop fs -put foo1.txt input
root@slave01:~# hadoop fs -stat %o /user/root/input/foo1.txt
67108864
root@slave01:~# hadoop fs -stat %o /user/root/input/foo.txt
134217728
The new file (foo1.txt) has the new block size; no need to touch the NameNode.
So the correct answer is E.
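If you want to double-check the blocks behind each file, fsck can list them (a sketch using the paths from Yuriy’s test above; hadoop fsck is the command in this 0.20 setup):

root@slave01:~# hadoop fsck /user/root/input/foo1.txt -files -blocks

Each block should be reported with its length, so a file written after the change would show blocks of at most 67108864 bytes.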
Agree, Yuriy. Even if I set dfs.blocksize to 256MB on the NameNode and create a file with a 64MB block size from a DataNode, it works:
Step #1 – block size on the client, coming from the hdfs-default.xml default.
——————————————————————
[hduser@cdh-data1 ~]$ hdfs getconf -confKey dfs.blocksize
134217728
[hduser@cdh-data1 ~]$
Step #2 – block size on the NN, coming from hdfs-site.xml (manually updated in the XML):
—————————————————————————–
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
  <final>true</final>
</property>
[hduser@cdh-master hadoop]$ hdfs getconf -confKey dfs.blocksize
268435456
[hduser@cdh-master hadoop]$
Step #3 – loading data from the slave with a smaller block size:
——————————————————————————-
[hduser@cdh-data1 ~]$ hadoop fs -Ddfs.blocksize=67108864 -copyFromLocal LLL.txt /LLL_Yurie_block3.txt
[hduser@cdh-data1 ~]$
Step #4 – actual block size of the file loaded into HDFS:
——————————————————————————-
[hduser@cdh-data1 ~]$ hadoop fs -stat %o /LLL_Yurie_block3.txt
67108864
[hduser@cdh-data1 ~]$
So changing dfs.blocksize on the NN has no effect on the block size of files written by clients.
E
Thanks so much, Yuriy and Dev, for your tests.
C and E are both correct, but in Cloudera we give the size as 128M instead of 134217728, so for the Cloudera exam E is the correct answer.
E is the correct answer
Correct answer is C