Identify the number of failed task attempts you can expect when you run the job with mapred.max.map.attempts
You wrote a map function that throws a runtime exception when it encounters a control character
in input data. The input supplied to your mapper contains twelve such characters in total, spread
across five file splits. The first four file splits each have two control characters and the last split has
four control characters.
Identify the number of failed task attempts you can expect when you run the job with
mapred.max.map.attempts set to 4.
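For reference, a minimal sketch of the kind of mapper the question describes; the class name and
output types are assumptions, not part of the question:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper matching the scenario: it throws a runtime
    // exception as soon as a control character appears in an input line.
    public class ControlCharMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (char c : value.toString().toCharArray()) {
                if (Character.isISOControl(c)) {
                    // An uncaught runtime exception fails this task attempt; the
                    // framework then reschedules the task, up to the limit set by
                    // mapred.max.map.attempts, before failing the job.
                    throw new RuntimeException("Control character in input: " + (int) c);
                }
            }
            context.write(value, key);
        }
    }

Because the exception is deterministic, every attempt of a task whose split contains a control
character fails, so each affected task is retried up to the configured maximum of four attempts.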
which method in the Mapper you should use to implement code for reading the file and populating the associative array?
You want to populate an associative array in order to perform a map-side join. You’ve decided to
put this information in a text file, place that file into the DistributedCache and read it in your
Mapper before any records are processed.
Identify which method in the Mapper you should use to implement code for reading the file and
populating the associative array.
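As background, here is a minimal sketch of the pattern the question describes: a lookup file
distributed via the DistributedCache is read into a HashMap once per task, before any records are
processed. The file name lookup.txt and the field layout are placeholders:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MapSideJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Map<String, String> lookup = new HashMap<>();

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // Runs once per task before any map() calls. A file added to the
            // DistributedCache with a "#lookup.txt" fragment is symlinked into
            // the task's working directory under that name.
            try (BufferedReader reader = new BufferedReader(new FileReader("lookup.txt"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split(",", 2);
                    if (parts.length == 2) {
                        lookup.put(parts[0], parts[1]);
                    }
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",", 2);
            String matched = fields.length == 2 ? lookup.get(fields[0]) : null;
            if (matched != null) {
                context.write(new Text(fields[0]), new Text(fields[1] + "," + matched));
            }
        }
    }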
which interface is most likely to reduce the amount of intermediate data transferred across the network?
You’ve written a MapReduce job that will process 500 million input records and generate 500
million key-value pairs. The data is not uniformly distributed. Your MapReduce job will create a
significant amount of intermediate data that it needs to transfer between mappers and reducers,
which is a potential bottleneck. A custom implementation of which interface is most likely to reduce
the amount of intermediate data transferred across the network?
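One common pattern in this situation is a combiner, which pre-aggregates map output on each
mapper's node so fewer key-value pairs cross the network during the shuffle. A minimal sketch,
assuming the values are counts that can be summed; the class name is a placeholder:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // A combiner is written against the same contract as a Reducer; it must be
    // commutative and associative because the framework may apply it zero or
    // more times to partial map output.
    public class SumCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

The driver would register it with job.setCombinerClass(SumCombiner.class).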
Can you use MapReduce to perform a relational join on two large tables sharing a key?
Can you use MapReduce to perform a relational join on two large tables sharing a key? Assume
that the two tables are formatted as comma-separated files in HDFS.
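For context, the usual approach to this is a reduce-side (repartition) join: mappers tag each
record with the table it came from, keyed by the join column, and reducers join records that share
a key. A compact sketch, assuming the join key is the first comma-separated column and the source
table can be told from the input file name; the "orders" prefix and output layout are placeholders:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    public class RepartitionJoin {
        // Tags each record with its source table, keyed by the join column.
        public static class TagMapper extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String file = ((FileSplit) context.getInputSplit()).getPath().getName();
                String tag = file.startsWith("orders") ? "A" : "B"; // placeholder names
                String[] fields = value.toString().split(",", 2);
                if (fields.length == 2) {
                    context.write(new Text(fields[0]), new Text(tag + "," + fields[1]));
                }
            }
        }

        // Separates the two tables' rows per key and emits their cross product.
        public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
            @Override
            protected void reduce(Text key, Iterable<Text> values, Context context)
                    throws IOException, InterruptedException {
                List<String> left = new ArrayList<>();
                List<String> right = new ArrayList<>();
                for (Text v : values) {
                    String s = v.toString();
                    if (s.startsWith("A,")) {
                        left.add(s.substring(2));
                    } else {
                        right.add(s.substring(2));
                    }
                }
                for (String l : left) {
                    for (String r : right) {
                        context.write(key, new Text(l + "," + r));
                    }
                }
            }
        }
    }

Buffering one side in memory per key is a simplification; production-grade joins typically add a
secondary sort so that only one table's rows need to be held at a time.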
Where is intermediate data written to after being emitted from the Mapper’s map method?
You have just executed a MapReduce job. Where is intermediate data written to after being
emitted from the Mapper’s map method?
How will you gather this data for your analysis?
You want to understand more about how users browse your public website, such as which pages
they visit prior to placing an order. You have a farm of 200 web servers hosting your website. How
will you gather this data for your analysis?
which two issues?
MapReduce v2 (MRv2/YARN) is designed to address which two issues?
which invocation correctly passes mapred.job.name with a value of Example to Hadoop?
You need to run the same job many times with minor variations. Rather than hardcoding all job
configuration options in your driver code, you’ve decided to have your Driver subclass
org.apache.hadoop.conf.Configured and implement the org.apache.hadoop.util.Tool interface.
Identify which invocation correctly passes mapred.job.name with a value of Example to Hadoop.
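For context, a sketch of the Configured/Tool pattern the question refers to; ToolRunner's generic
option parsing consumes options such as -D before run() is invoked. The class name is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class ExampleDriver extends Configured implements Tool {
        @Override
        public int run(String[] args) throws Exception {
            // Properties passed as -D name=value are already present in getConf().
            Job job = Job.getInstance(getConf());
            // ... set mapper, reducer, formats, and input/output paths here ...
            return job.waitForCompletion(true) ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new Configuration(), new ExampleDriver(), args));
        }
    }

With this in place, a property can be supplied on the command line as -D mapred.job.name=Example,
placed before any job-specific arguments, for example:
hadoop jar myjob.jar ExampleDriver -D mapred.job.name=Example <input> <output>
(the jar name is a placeholder).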
Identify what determines the data types used by the Mapper for a given job.
You are developing a MapReduce job for sales reporting. The mapper will process input keys
representing the year (IntWritable) and input values representing product identifiers (Text).
Identify what determines the data types used by the Mapper for a given job.
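For orientation, a sketch of a mapper declared with the input types from the scenario. Mapper's
generic parameters are, in order, input key, input value, output key, and output value, and the
input pair must agree with what the job's configured InputFormat produces. The output types and
class name below are assumptions:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class SalesMapper extends Mapper<IntWritable, Text, Text, IntWritable> {
        @Override
        protected void map(IntWritable year, Text productId, Context context)
                throws IOException, InterruptedException {
            // Placeholder logic: emit one count per product identifier.
            context.write(productId, new IntWritable(1));
        }
    }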
Identify the MapReduce v2 (MRv2 / YARN) daemon responsible for launching application containers and monitoring application resource usage
Identify the MapReduce v2 (MRv2 / YARN) daemon responsible for launching application
containers and monitoring application resource usage.