PrepAway - Latest Free Exam Questions & Answers

Which setup will meet the requirements?

You have recently joined a startup company building sensors to measure street noise and air quality in urban
areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor
uploads 1KB of sensor data every minute to a backend hosted on AWS.
During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of
sensor data per month in the database.
The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a
PostgreSQL RDS database with 500GB standard storage.
The pilot is considered a success and your CEO has managed to get the attention of some potential investors.
The business plan requires a deployment of at least 100K sensors, which needs to be supported by the
backend. You also need to store sensor data for at least two years to be able to compare year-over-year
improvements.
To secure funding, you have to make sure that the platform meets these requirements and leaves room for
further scaling. Which setup will meet the requirements?

A.
Add an SQS queue to the ingestion layer to buffer writes to the RDS instance

B.
Ingest data into a DynamoDB table and move old data to a Redshift cluster

C.
Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage

D.
Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

6 Comments on "Which setup will meet the requirements?"

    1. Don says:

      You just need the question, not the answers. We are supposed to find the answers ourselves as part of the learning process. Even if you pay and get official question banks from examcollection.com, the answers are still not accurate and, worse, not updated.




  1. DK says:

    On this one I think he is wrong.

    The scaling factor is 1000 (from 100 sensors to 100K sensors).

    You’re therefore writing 3000GB of data per month (3TB) – not a problem for Dynamo or Redshift (but expensive in Dynamo).

    More importantly, your IOPS have scaled from 10 to 10,000. That’s the Dynamo limit in all bar one of the AWS regions. And the question explicitly talks about leaving room to scale further.

    He claims that C doesn't solve the ingestion issue, but this is constantly streaming data (not bursts), so putting a buffer in between doesn't reduce the load on the downstream Redshift database for any appreciable length of time; the point is moot.
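    The scaling arithmetic in this comment can be sketched as a quick calculation (using only the figures given in the question: 100 → 100K sensors, 10 IOPS, 3GB of storage per month, two years of retention):

    ```python
    # Scaling check using the figures from the question.
    pilot_sensors = 100
    target_sensors = 100_000
    scale = target_sensors // pilot_sensors  # 1000x growth

    pilot_peak_iops = 10
    pilot_storage_gb_per_month = 3

    # Linear extrapolation from pilot measurements.
    target_peak_iops = pilot_peak_iops * scale            # 10,000 IOPS
    monthly_storage_gb = pilot_storage_gb_per_month * scale  # 3,000 GB = 3 TB/month
    two_year_storage_tb = monthly_storage_gb * 24 / 1000     # 72 TB over 2 years

    print(target_peak_iops)    # 10000
    print(monthly_storage_gb)  # 3000
    print(two_year_storage_tb) # 72.0
    ```

    The 72 TB of two-year retention fits inside the 96 TB Redshift cluster in option C, with headroom left over, which is the basis of the argument above.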




