PrepAway - Latest Free Exam Questions & Answers

What should you identify?

HOTSPOT
You plan to implement a predictive analytics solution in Azure Machine Learning Studio (ML Studio). You intend
to train the solution by using existing data that resides on-premises. The on-premises data is a collection of
delimited text files that total 5 GB in size.
You need to identify the process of adding the existing data to the solution.
What should you identify? To answer, select the appropriate options in the answer area.
Hot Area:


Answer:

7 Comments on “What should you identify?”

  1. Zenon says:

    In my opinion the correct answer is:
    * an Azure SQL Database
    * a DataSet
    * Reader Module
    Based on the following MS materials:
    “The size limit for uploading local datasets directly to Azure ML is 1.98 GB.” – https://social.msdn.microsoft.com/Forums/en-US/30876ca1-0675-4637-85e3-e6e8ba2f70c9/how-to-bring-large-dataset-from-local-computer-to-azure-machine-learning-studio?forum=MachineLearning
    “The output of Import Data is a dataset that can be used with any experiment.”
    “This module was previously named Reader. If you previously used the Reader module in an experiment, it will be renamed to Import Data when you refresh the experiment.” – https://msdn.microsoft.com/en-us/library/azure/dn905997.aspx
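The size rule Zenon quotes can be sketched as a small helper (hypothetical function and route names; the 1.98 GB direct-upload limit and the staging route for larger data are taken from the linked MSDN thread):

```python
# Hypothetical sketch of the rule quoted above: local datasets up to
# 1.98 GB can be uploaded to ML Studio directly; larger collections (like
# the 5 GB of text files in this question) must be staged in Azure storage
# such as an Azure SQL Database and read with the Import Data
# (formerly Reader) module.

DIRECT_UPLOAD_LIMIT_GB = 1.98  # limit cited in the MSDN forum thread

def ingestion_route(total_size_gb: float) -> str:
    """Return which ingestion route applies for a dataset of the given size."""
    if total_size_gb <= DIRECT_UPLOAD_LIMIT_GB:
        return "direct upload to ML Studio"
    return "stage in an Azure SQL Database, then use the Import Data module"

print(ingestion_route(5.0))  # the 5 GB scenario in this question
```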




    1. CarlT says:

      Looking at the link you provided, it says the direct upload limit is 1.98 GB, but the maximum is 10 GB. Staging is not mentioned in the question, but that is how it is done for large datasets.

      Could the answer be?
      *ML Studio
      *Experiment
      *Reader Module




  2. demisa says:

    The Reader module doesn’t accept CSV files directly; you would import the CSV files into an Azure SQL Database, then create an experiment and use the Reader module to ingest them.

    I think the answer is:

    SQL
    Experiment
    Reader

    Tricky question either way.




