Which of the following is not an ODBC connector property?
You have a job that reads from a Sequential File stage followed by a Transformer stage. When you run this
job, which partitioning method will be used by default?
A job reads from a sequential file using a Sequential File stage with the “number of readers” option set
to 2. This data goes to a Transformer stage and is then written to a data set using the Data Set
stage. The default configuration file has three nodes. The environment variable
$APT_DISABLE_COMBINATION is set to “True” and partitioning is set to “Auto”. How many
processes will be created?
Which two properties can be set to read a fixed width sequential file in parallel? (Choose two.)
Which two partitioning methods require keys? (Choose two.)
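A minimal Python sketch of why key-based partitioning methods (such as Hash) require key columns: the hash of the key decides the partition, so rows sharing a key value always land on the same partition. The function and field names here are illustrative assumptions, not DataStage APIs.

```python
def hash_partition(rows, key, num_partitions):
    """Assign each row to a partition based on the hash of its key column."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        p = hash(row[key]) % num_partitions
        partitions[p].append(row)
    return partitions

rows = [
    {"CustID": 1, "OrderID": 10},
    {"CustID": 2, "OrderID": 11},
    {"CustID": 1, "OrderID": 12},
]
parts = hash_partition(rows, "CustID", 3)
# All rows with CustID == 1 end up in the same partition, whichever
# partition that turns out to be.
```

Methods without keys (e.g. Round Robin) spread rows without regard to column values, so they cannot guarantee this co-location.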
These three Sequential Files shown in the exhibit need to be joined. Join_1 is on Columns CustID
and OrderID. Join_2 is on CustID and LocationID. What is the most efficient hash partitioning
strategy for each link?
A job design consists of an input Sequential File stage, a Modify stage, a Filter stage, and an
output Sequential File stage. The job is run on an SMP machine with a configuration file defined
with three nodes. No environment variables are set for the job. How many osh processes will this
job create?
Which statement is true about the Web Services Pack?
A customer wants to select the complete order details for the largest transaction for each of 2 million
customers from a 20-million-row DB2 source table containing order history. Which parallel job
design would satisfy this functional requirement?
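To make the functional requirement concrete, here is a plain Python sketch of "largest transaction per customer". This illustrates the requirement's semantics only, not any particular parallel job design, and the field names (CustID, Amount) are assumptions.

```python
def largest_order_per_customer(orders):
    """For each customer, keep the order row with the largest transaction amount."""
    best = {}
    for row in orders:
        cust = row["CustID"]
        if cust not in best or row["Amount"] > best[cust]["Amount"]:
            best[cust] = row
    return list(best.values())

orders = [
    {"CustID": 1, "Amount": 100},
    {"CustID": 1, "Amount": 250},
    {"CustID": 2, "Amount": 75},
]
result = largest_order_per_customer(orders)
# One row per customer, each carrying that customer's largest Amount.
```

In a parallel design, the analogous requirement is that all rows for a given customer must be processed together, which constrains the partitioning choices.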
Using a DB2 for z/OS source database, a 200 million row source table with 30 million distinct
values must be aggregated to calculate the average value of two column attributes. What would
provide optimal performance while satisfying the business requirements?