You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2)
on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What
should you do?

A.
Configure yarn.applicationmaster.resource.memory-mb and
yarn.applicationmaster.resource.cpu-vcores so that ApplicationMaster container allocations match
the capacity you require.
B.
You don’t need to configure or balance these properties in YARN, as YARN dynamically
balances resource management capabilities on your cluster.
C.
Configure mapred.tasktracker.map.tasks.maximum and
mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster’s capacity set
by the yarn-scheduler.minimum-allocation.
D.
Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager.
D is correct:
In MRv1, the mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum properties dictated how many map and reduce slots each TaskTracker had. These properties no longer exist in YARN. Instead, YARN uses yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, which control the amount of memory and CPU available on each host, both of which are shared by map and reduce tasks.
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_mapreduce_to_yarn_migrate.html#concept_e4c_3my_xl_unique_1
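For illustration only, a minimal yarn-site.xml sketch is shown below. The values are assumptions (a hypothetical worker node that previously ran 12 MRv1 slots at roughly 1 GB each), not figures given in the question:

<!-- yarn-site.xml on each NodeManager host (illustrative values only) -->
<property>
  <!-- Total memory (in MB) this NodeManager can hand out to containers;
       assumed here to match 12 former slots x 1024 MB -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>12288</value>
</property>
<property>
  <!-- Total virtual cores this NodeManager can hand out to containers;
       assumed here to match the former 12 slots -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>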
I have the same idea. D
D.
One of the larger changes in MRv2 is the way that resources are managed. In MRv1, each host was configured with a fixed number of map slots and a fixed number of reduce slots. Under YARN, there is no distinction between resources available for maps and resources available for reduces – all resources are available for both. In addition, the notion of slots has been discarded; resources are now configured in terms of amounts of memory (in megabytes) and CPU (in “virtual cores”).
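As a rough worked example (the slot counts and per-slot heap size here are assumptions, not given in the question): a TaskTracker that previously ran 8 map slots and 4 reduce slots at about 1 GB per task would translate to a NodeManager capacity of roughly 12 x 1024 MB = 12288 MB via yarn.nodemanager.resource.memory-mb and 12 virtual cores via yarn.nodemanager.resource.cpu-vcores, matching the values used in the yarn-site.xml sketch above.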