
How should you implement this for each deployment?

Your application is currently running on Amazon EC2 instances behind a load balancer. Your management has decided
to use a Blue/Green deployment strategy. How should you implement this for each deployment?


A.
Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.

B.
Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code to each production Amazon EC2
instance.

C.
Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load
balancer using Amazon Route 53 after testing.

D.
Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer,
upgrade it, and test it, and then register it again with the load balancer.

20 Comments on “How should you implement this for each deployment?”

  1. Haynes says:

    Hello everyone, is anyone preparing for the AWS Certified DevOps Engineer – Professional exam? I got some new questions (Aug 17, 2017); sharing them here:

    QUESTION
    You are responsible for your company’s large multi-tiered Windows-based web application running on Amazon EC2 instances situated behind a load balancer.
    While reviewing metrics, you’ve started noticing an upward trend in customer page load times.
    Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second.
    Which technique would you use to solve this issue?

    A. Re-deploy your infrastructure using an AWS CloudFormation template.
    Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed.
    B. Re-deploy your infrastructure using an AWS CloudFormation template.
    Spin up a second AWS CloudFormation stack.
    Configure Elastic Load Balancing SpillOver functionality to spill over any slow connections to the second AWS CloudFormation stack.
    C. Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling.
    Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time.
    D. Re-deploy your application using an Auto Scaling template.
    Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when the customer load time surpasses your threshold.

    Answer: C
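
    As an aside, the modern way to express “scale on requests per second” is a target-tracking policy on the ALB request count per target, roughly what option C describes. A minimal boto3 sketch; the group name, resource label, and target value are all illustrative:

    ```python
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target-tracking policy: keep requests-per-target near the target value.
    # All names and IDs here are placeholders.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-on-request-rate",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ALBRequestCountPerTarget",
                # Format: app/<lb-name>/<lb-id>/targetgroup/<tg-name>/<tg-id>
                "ResourceLabel": "app/web-alb/1234567890abcdef/targetgroup/web-tg/fedcba0987654321",
            },
            "TargetValue": 1000.0,  # requests per target before scaling out
        },
    )
    ```

    Target tracking adds and removes instances to hold the metric near the target, so bursts in requests per second turn into capacity instead of slow page loads.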

    QUESTION
    Your company releases new features with high frequency while demanding high application availability.
    As part of the application’s A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real time, to ensure that the application is working flawlessly after each deployment. If the logs show any anomalous behavior, then the instance’s application version is changed to a more stable one.
    Which of the following methods should you use for shipping and analyzing the logs in a highly available manner?

    A. Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour.
    B. Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour.
    C. Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner.
    D. Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner.
    E. Store the logs locally on each instance and then have an Amazon Kinesis stream pull the logs for live analysis.

    Answer: C
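
    To sketch answer C: each instance pushes log lines into a Kinesis stream and a consumer tails the stream in near real time. The stream name, record format, and anomaly check below are assumptions for illustration:

    ```python
    import json
    import time

    import boto3

    kinesis = boto3.client("kinesis")
    STREAM = "app-logs"  # hypothetical stream name


    def ship_log_line(instance_id: str, line: str) -> None:
        """Producer side: each instance ships its log lines to the stream."""
        kinesis.put_record(
            StreamName=STREAM,
            Data=json.dumps({"instance": instance_id, "line": line}).encode(),
            PartitionKey=instance_id,  # keeps one instance's lines in order
        )


    def tail_stream(shard_id: str = "shardId-000000000000") -> None:
        """Consumer side: poll records and flag anomalies in near real time."""
        iterator = kinesis.get_shard_iterator(
            StreamName=STREAM, ShardId=shard_id, ShardIteratorType="LATEST"
        )["ShardIterator"]
        while True:
            batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
            for record in batch["Records"]:
                event = json.loads(record["Data"])
                if "ERROR" in event["line"]:  # placeholder anomaly check
                    print("anomalous instance:", event["instance"])
            iterator = batch["NextShardIterator"]
            time.sleep(1)
    ```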

    QUESTION
    You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the data in the Amazon S3 bucket. Another concern was raised around securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud.
    What are some measures that you can implement to mitigate these concerns? Choose 2 answers.

    A. Add an Amazon S3 bucket policy with a condition statement to allow access only from Amazon EC2 instances with RFC 1918 IP addresses and enable bucket versioning.
    B. Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects and enable bucket versioning.
    C. Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances.
    Use these credentials to securely access the Amazon S3 bucket when deploying code.
    D. Create an AWS Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application’s Amazon EC2 instances with this role.
    E. Use AWS Data Pipeline to lifecycle the data in your Amazon S3 bucket to Amazon Glacier on a weekly basis.
    F. Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.

    Answer: BD
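
    As a sketch of the bucket-side controls from answer B: enable versioning, and deny deletes that are not MFA-authenticated via a bucket policy (a policy-based approximation; strict MFA Delete is a versioning setting only the root account can enable). The bucket name is hypothetical:

    ```python
    import json

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "code-repo-bucket"  # hypothetical bucket name

    # Versioning keeps prior object versions, protecting data integrity.
    s3.put_bucket_versioning(
        Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"}
    )

    # Deny DeleteObject unless the caller authenticated with MFA.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }],
    }
    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
    ```

    Answer D then needs no code at deploy time: launch the instances with an instance profile whose role grants read access to the bucket, so no long-term credentials ever touch the instances.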

    QUESTION
    You have an application consisting of a stateless web server tier running on Amazon EC2 instances behind a load balancer, and are using Amazon RDS with read replicas.
    Which of the following methods should you use to implement a self-healing and cost-effective architecture? Choose 2 answers.

    A. Set up a third-party monitoring solution on a cluster of Amazon EC2 instances in order to emit custom CloudWatch metrics to trigger the termination of unhealthy Amazon EC2 instances.
    B. Set up scripts on each Amazon EC2 instance to frequently send ICMP pings to the load balancer in order to determine which instance is unhealthy and replace it.
    C. Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon RDS DB CPU utilization CloudWatch metric to scale the instances.
    D. Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon EC2 CPU utilization CloudWatch metric to scale the instances.
    E. Use a larger Amazon EC2 instance type for the web server tier and a larger DB instance type for the data storage layer to ensure that they don’t become unhealthy.
    F. Set up an Auto Scaling group for the database tier along with an Auto Scaling policy that uses the Amazon RDS read replica lag CloudWatch metric to scale out the Amazon RDS read replicas.
    G. Use an Amazon RDS Multi-AZ deployment.

    Answer: AD
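
    Whichever pair you favor, the self-healing half of the web tier comes from an Auto Scaling group that trusts the load balancer’s health checks: unhealthy instances are terminated and replaced automatically. A minimal boto3 sketch with a hypothetical group name:

    ```python
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Switch the group to ELB health checks so instances failing the load
    # balancer's check are replaced automatically. Values are illustrative.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        HealthCheckType="ELB",
        HealthCheckGracePeriod=300,  # seconds to wait after launch
        MinSize=2,
        MaxSize=10,
    )
    ```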

    QUESTION
    Your application is currently running on Amazon EC2 instances behind a load balancer.
    Your management has decided to use a Blue/Green deployment strategy.
    How should you implement this for each deployment?

    A. Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.
    B. Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code to each production Amazon EC2 instance.
    C. Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing.
    D. Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer, upgrade it, and test it, and then register it again with the load balancer.

    Answer: C




    1. Sadeel Anjum says:

      Explanation:
      Using a third-party solution is rarely a good idea when we have AWS tools to do the job, so A can’t be the answer. Deploying RDS in Multi-AZ is a self-healing solution, so G is the best answer here (and D too).
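
      For what it’s worth, turning on Multi-AZ for an existing instance is a single call; a sketch with a hypothetical instance identifier:

      ```python
      import boto3

      rds = boto3.client("rds")

      # Multi-AZ keeps a synchronous standby in another AZ and fails over
      # automatically, which is the self-healing property option G relies on.
      rds.modify_db_instance(
          DBInstanceIdentifier="app-db",  # hypothetical identifier
          MultiAZ=True,
          ApplyImmediately=True,
      )
      ```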




  2. Wiseman says:

    2017 new AWS Certified DevOps Engineer – Professional questions share!

    QUESTION
    Your development team wants account-level access to production instances in order to do live debugging of a highly secure environment.
    Which of the following should you do?

    A. Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled.
    Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file.
    B. Place an internally created private key into a secure S3 bucket with server-side encryption using customer keys and configuration management, create a service account on all the instances using this private key, and assign IAM users to each developer so they can download the file.
    C. Place each developer’s own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the user’s public keys into the appropriate account.
    D. Place the credentials provided by Amazon EC2 onto an MFA encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.

    Answer: C
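
    A rough sketch of the per-instance step in answer C, run as root from user data or a configuration-management hook. The instance profile grants read access to the key bucket; the layout (one public-key object per developer) is an assumption:

    ```python
    import os
    import pwd
    import subprocess

    import boto3

    s3 = boto3.client("s3")  # credentials come from the instance profile
    BUCKET = "dev-public-keys"  # hypothetical bucket: one object per developer

    for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
        user = os.path.splitext(os.path.basename(obj["Key"]))[0]
        try:
            pwd.getpwnam(user)  # account already exists?
        except KeyError:
            subprocess.run(["useradd", "-m", user], check=True)
        ssh_dir = f"/home/{user}/.ssh"
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        key = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        auth_file = os.path.join(ssh_dir, "authorized_keys")
        with open(auth_file, "wb") as f:
            f.write(key)
        os.chmod(auth_file, 0o600)
        subprocess.run(["chown", "-R", f"{user}:{user}", ssh_dir], check=True)
    ```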

    QUESTION
    As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs.
    The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance.
    Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

    A. Ensure that the I/O block sizes for the test are randomly selected.
    B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
    C. Ensure that snapshots of the Amazon EBS volumes are created as a backup.
    D. Ensure that the Amazon EBS volume is encrypted.
    E. Ensure that the Amazon EBS volume has been pre-warmed by creating a snapshot of the volume before the test.

    Answer: B
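
    For reference, pre-warming (now called initialization) just means touching every block once, because blocks restored from a snapshot incur a first-read penalty that would skew the benchmark. A minimal sketch with a hypothetical device name; in practice dd or fio does the same job:

    ```python
    # Read every block of the volume once before the I/O load test.
    DEVICE = "/dev/xvdf"   # hypothetical EBS device name
    CHUNK = 1024 * 1024    # 1 MiB reads

    with open(DEVICE, "rb") as dev:
        while dev.read(CHUNK):
            pass  # discard the data; the read itself does the warming
    ```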

    QUESTION
    After reviewing the last quarter’s monthly bills, management has noticed an increase in the overall bill from Amazon.
    After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application’s bucket.
    Your boss has asked you to come up with a new cost-effective way to help reduce the number of these GET Bucket API calls.
    What process should you use to help mitigate the cost?

    A. Update your Amazon S3 buckets’ lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application’s bucket.
    B. Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3.
    Any time a new object is uploaded, update the application’s internal Amazon S3 object metadata cache from DynamoDB.
    C. Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object.
    Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
    D. Upload all images to Amazon SQS, set up SQS lifecycles to move all images to Amazon S3, and initiate an Amazon SNS notification to your application to update the application’s internal Amazon S3 object metadata cache.
    E. Upload all images to an ElastiCache filecache server. Update your application to now read all file metadata from the ElastiCache filecache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.

    Answer: C
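
    A sketch of the event-driven cache in answer C. The bucket, topic ARN, and table name are hypothetical, and the handler is assumed to receive one record from the parsed S3 event (e.g. via a Lambda function subscribed to the topic):

    ```python
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "app-assets"  # hypothetical bucket
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-objects"  # hypothetical

    # Publish an SNS message for every new object instead of polling GET Bucket.
    s3.put_bucket_notification_configuration(
        Bucket=BUCKET,
        NotificationConfiguration={
            "TopicConfigurations": [
                {"TopicArn": TOPIC_ARN, "Events": ["s3:ObjectCreated:*"]}
            ]
        },
    )


    def handle_s3_record(s3_record: dict) -> None:
        """One entry of the S3 event's Records list (shape assumed)."""
        obj = s3_record["s3"]["object"]
        table = boto3.resource("dynamodb").Table("s3-object-metadata")
        table.put_item(Item={"key": obj["key"], "size": obj.get("size", 0)})
    ```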

    QUESTION
    Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application.
    You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

    A. Publish your data to CloudWatch Logs, and configure your application to autoscale to handle the load on demand.
    B. Publish your log data to an Amazon S3 bucket.
    Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
    C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
    D. Configure an Auto Scaling group to increase the size of your Amazon EMR cluster.
    E. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

    Answer: C
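
    The consumer side of answer C can be as small as a running counter over stream records (same tailing pattern as the Kinesis sketch in the earlier comment); the "user" field name is assumed:

    ```python
    from collections import Counter

    user_hits: Counter = Counter()


    def process_record(event: dict) -> None:
        """Called for each record pulled from the Kinesis stream."""
        user_hits[event["user"]] += 1


    def top_users(n: int = 10):
        """Always up to date: recomputed from the live counter on demand."""
        return user_hits.most_common(n)
    ```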

    QUESTION
    You are using Elastic Beanstalk to manage your e-commerce store. The store is based on an open source e-commerce platform and is deployed across multiple instances in an Auto Scaling group. Your development team often creates new “extensions” for the e-commerce store.
    These extensions include PHP source code as well as an SQL upgrade script used to make any necessary updates to the database schema.
    You have noticed that some extension deployments fail due to an error when running the SQL upgrade script. After further investigation, you realize that this is because the SQL script is being executed on all of your Amazon EC2 instances.
    How would you ensure that the SQL script is only executed once per deployment regardless of how many Amazon EC2 instances are running at the time?

    A. Use a “Container command” within an Elastic Beanstalk configuration file to execute the script, ensuring that the “leader only” flag is set to true.
    B. Make use of the Amazon EC2 metadata service to query whether the instance is marked as the “leader” in the Auto Scaling group.
    Only execute the script if “true” is returned.
    C. Use a “Solo Command” within an Elastic Beanstalk configuration file to execute the script.
    The Elastic Beanstalk service will ensure that the command is only executed once.
    D. Update the Amazon RDS security group to only allow write access from a single instance in the Auto Scaling group; that way, only one instance will successfully execute the script on the database.

    Answer: A




  3. Sadeel Anjum says:

    C is the right answer.

    In a Blue/Green deployment, we deploy a new environment (parallel to the old one) and then use DNS routing to migrate to the new environment after testing.
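
    A minimal boto3 sketch of that final DNS switch; the hosted zone ID, record name, and the green load balancer’s alias values are all placeholders:

    ```python
    import boto3

    route53 = boto3.client("route53")

    # Repoint the production record at the tested green load balancer.
    route53.change_resource_record_sets(
        HostedZoneId="Z111111QQQQQQQ",  # placeholder hosted zone
        ChangeBatch={
            "Comment": "cut over from blue to green",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's zone ID
                        "DNSName": "green-lb-123456789.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }],
        },
    )
    ```

    Using weighted record sets instead of a straight UPSERT lets you shift a small share of traffic to green first and roll back instantly by re-weighting.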




    1. Monika says:

      Deepak is useless. I bought dumps from him and he gave me trash questions. This thief is just making money here. Guys, be careful with Deepak.




