HOTSPOT
You plan to implement a predictive analytics solution in Azure Machine Learning Studio (ML Studio). You intend
to train the solution by using existing data that resides on-premises. The on-premises data is a collection of
delimited text files that total 5 GB in size.
You need to identify the process of adding the existing data to the solution.
What should you identify? To answer, select the appropriate options in the answer area.
Hot Area:

This is incorrect. The Enter Data module is used to manually type data in. https://msdn.microsoft.com/en-us/library/azure/dn905948.aspx
In my opinion the correct answer is as follows:
* an Azure SQL Database
* a DataSet
* Reader Module
Based on following MS materials:
“The size limit for uploading local datasets directly to Azure ML is 1.98 GB.” – https://social.msdn.microsoft.com/Forums/en-US/30876ca1-0675-4637-85e3-e6e8ba2f70c9/how-to-bring-large-dataset-from-local-computer-to-azure-machine-learning-studio?forum=MachineLearning
“The output of Import Data is a dataset that can be used with any experiment.”
“This module was previously named Reader. If you previously used the Reader module in an experiment, it will be renamed to Import Data when you refresh the experiment.” – https://msdn.microsoft.com/en-us/library/azure/dn905997.aspx
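Since the 1.98 GB direct-upload limit rules out uploading the 5 GB of files from the local machine, the usual workaround is to stage the files in Azure Blob Storage and point Import Data (Reader) at them. A minimal staging sketch in Python, assuming the azure-storage-blob package; the environment variable, container name, and local folder are hypothetical placeholders:

    # Sketch: stage local delimited files in Azure Blob Storage so that
    # Import Data (Reader) can read them from the experiment.
    import os
    from azure.storage.blob import BlobServiceClient

    conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # hypothetical env var
    service = BlobServiceClient.from_connection_string(conn_str)
    container = service.get_container_client("training-data")  # hypothetical container

    local_dir = "C:/data/onprem"  # hypothetical folder holding the 5 GB of files
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        with open(path, "rb") as f:
            # upload_blob streams from the file handle, so individual files
            # larger than the 1.98 GB Studio upload limit are fine here
            container.upload_blob(name=name, data=f, overwrite=True)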
Looking at the link you provided, it says the direct upload limit is 1.98 GB, but the maximum is 10 GB. Staging is not mentioned at all in the question, but it is the way large datasets are handled.
Could the answer be:
*ML Studio
*Experiment
*Reader Module
I agree with:
*ML Studio
*Experiment
*Reader Module
agree
The Reader module doesn’t accept local CSV files directly; you would import the CSV files into an Azure SQL database, then create an experiment and use the Reader module to ingest them (a rough sketch of that load step follows below).
I think the answer is:
SQL
Experiment
Reader
Tricky question either way
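If you go the Azure SQL route suggested above, the load step might look like the following Python sketch using pandas and SQLAlchemy; the server, credentials, table name, and file path are all hypothetical placeholders, and chunked reads keep the 5 GB of data from ever sitting in memory at once:

    # Sketch: load local delimited text files into an Azure SQL table that
    # the Reader module can then query from the experiment.
    # Assumes pandas, sqlalchemy, and the pyodbc ODBC driver are installed.
    import glob
    import pandas as pd
    from sqlalchemy import create_engine

    # hypothetical connection details
    engine = create_engine(
        "mssql+pyodbc://user:password@myserver.database.windows.net/mydb"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    for path in glob.glob("C:/data/onprem/*.txt"):  # hypothetical folder
        # read each file in chunks so memory use stays bounded
        for chunk in pd.read_csv(path, sep=",", chunksize=100_000):
            chunk.to_sql("training_data", engine, if_exists="append", index=False)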