What two approaches will meet these requirements?

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling.

What two approaches will meet these requirements? (Choose two.)

A.
Install the Amazon CloudWatch Logs agent on every web server during the bootstrap process.
Create a CloudWatch log group and define metric filters to create custom metrics that track unique visitors from the streaming web server logs.
Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics. (A sketch of this setup follows the options.)

B.
On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier.
Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated.
Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour.

C.
On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket.
Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated.
Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour. (A sketch of the rotate-and-ship script follows the options.)

D.
Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process.
Create a log group object in AWS Data Pipeline, and define metric filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.
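
For option A, a minimal sketch of the log-group and metric-filter setup, assuming Python with boto3. The log group name, region, filter pattern, and metric namespace are illustrative placeholders, not values from the question. Note that a metric filter only counts matching log lines; deduplicating hits into truly unique visitors would happen in the hourly report step.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # region is an assumption

# Hypothetical log group that the CloudWatch Logs agent on each web
# server streams into.
LOG_GROUP = "/webapp/access-logs"

try:
    logs.create_log_group(logGroupName=LOG_GROUP)
except logs.exceptions.ResourceAlreadyExistsException:
    pass  # already created by another instance's bootstrap

# Count every log line that parses as a space-delimited access-log
# entry; the exact pattern depends on your actual log format.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="VisitorHits",
    filterPattern="[ip, identity, user, timestamp, request, status, bytes]",
    metricTransformations=[
        {
            "metricName": "VisitorHits",
            "metricNamespace": "WebApp",  # hypothetical namespace
            "metricValue": "1",           # emit 1 per matching line
        }
    ],
)
```

The hourly report task on the separate EC2 instance could then read these custom metrics back through the CloudWatch GetMetricStatistics API.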
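For option C, the rotate-and-ship script might look like the sketch below, again assuming Python with boto3; the bucket name and log path are assumptions. The same script would run hourly from cron and once more from the OS shutdown procedure (for example, a systemd unit ordered to run before the instance stops), which is what protects the final partial hour of logs on stop/terminate.

```python
import gzip
import shutil
import socket
import time

import boto3

BUCKET = "my-weblog-archive"              # hypothetical bucket
ACCESS_LOG = "/var/log/nginx/access.log"  # hypothetical log path

def rotate_and_ship():
    """Compress the current access log and upload it to Amazon S3."""
    stamp = time.strftime("%Y-%m-%dT%H-%M-%S")
    archive = f"/tmp/access-{stamp}.log.gz"

    # Copy-then-truncate rotation: simple for a sketch, though a real
    # setup would use logrotate to avoid losing lines written between
    # the copy and the truncate.
    with open(ACCESS_LOG, "rb") as src, gzip.open(archive, "wb") as dst:
        shutil.copyfileobj(src, dst)
    open(ACCESS_LOG, "w").close()  # truncate the live log

    # Key by hostname and timestamp so concurrently running instances
    # never overwrite each other's uploads.
    key = f"raw/{socket.gethostname()}/{stamp}.log.gz"
    boto3.client("s3").upload_file(archive, BUCKET, key)

if __name__ == "__main__":
    rotate_and_ship()
```

AWS Data Pipeline would then pick up the objects under raw/ and load them into Amazon Redshift for the hourly reports.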

4 Comments on “What two approaches will meet these requirements?”

    1. Poppin says:

      Can’t be A because if you schedule it every hour you’d lose whatever information was logged between hours when the instance was terminated.

      1. Sean says:

        “Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.”

        They do not mean an EC2 instance that is a web server; this one would be for running the reports. And since the CloudWatch Logs agent streams data to CloudWatch Logs, no data loss takes place.

  1. JoeS says:

    By process of elimination, B and D cannot be right:
    B because of Glacier.
    D because there is no such thing as a “Data Pipeline Logs Agent.”

    A and C.
