You have a proprietary data store on-premises that must be backed up daily by dumping the data store
contents to a single compressed 50 GB file and sending the file to AWS. Your SLAs state that any dump file
backed up within the past 7 days can be retrieved within 2 hours. Your compliance department has stated that
all data must be held indefinitely. The time required to restore the data store from a backup is approximately 1
hour. Your on-premises network connection is capable of sustaining 1 Gbps to AWS.
Which backup method to AWS would be most cost-effective while still meeting all of your requirements?

A.
Send the daily backup files to Glacier immediately after being generated
B.
Transfer the daily backup files to an EBS volume in AWS and take daily snapshots of the volume
C.
Transfer the daily backup files to S3 and use appropriate bucket lifecycle policies to send to Glacier
D.
Host the backup files on a Storage Gateway with Gateway-Cached Volumes and take daily snapshots
Explanation:
http://aws.amazon.com/storagegateway/faqs/
Not sure why D is the answer here and not C.
“The time required to restore the data store from a backup is approximately 1 hour.” This rules out Glacier.
Host the backup files on a Storage Gateway with Gateway-Cached Volumes and take daily snapshots
A Storage Gateway can only store data in its local cache or S3, making this the correct answer.
I think C should be the correct answer. D is not cost-effective if you want to store data indefinitely; Glacier is the more cost-effective solution.
To add, I believe the statement “the time required to restore the data store from a backup is approximately 1 hour” is a red herring, because it only states how long it takes to rebuild the data store from a backup file and has nothing to do with where the file is stored.
The actual requirements to me are:
“Your SLAs state that any dump file backed up within the past 7 days can be retrieved within 2 hours. Your compliance department has stated that all data must be held indefinitely”
In this case, C will meet the criteria, since you can create a lifecycle policy to move data to Glacier after 7 days for cheaper storage.
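For what it's worth, here is roughly what that lifecycle rule could look like with boto3; the bucket name and prefix are placeholders, not anything from the question:

```python
import boto3

s3 = boto3.client("s3")

# Sketch of answer C's lifecycle rule: keep new dumps in S3 Standard for the
# 7-day fast-restore window, then transition them to Glacier for cheap,
# indefinite retention. Bucket name and prefix are hypothetical.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-dumps-after-7-days",
                "Filter": {"Prefix": "dumps/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```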
What about the restoration time of 1 hour? Glacier doesn't meet that requirement.
Yes Kelvin, you're right. I misread the 1-hour restoration time. They can back up the data into Glacier after a certain time period.
http://jayendrapatil.com/aws-storage-gateway/
Remember that a data retrieval job from Glacier can take a minimum of 4-5 hours, hence A and C cannot be correct. The answer is D.
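For context, once a lifecycle rule has archived an object to Glacier, a restore has to be initiated before the object can be read again; a rough boto3 sketch (bucket and key are placeholder names):

```python
import boto3

s3 = boto3.client("s3")

# Initiate a retrieval job for an object that has been transitioned to Glacier.
# The object only becomes readable once the job completes, which for the
# Standard tier has historically taken several hours.
s3.restore_object(
    Bucket="example-backup-bucket",       # placeholder
    Key="dumps/datastore-2016-01-01.gz",  # placeholder
    RestoreRequest={
        "Days": 1,  # how long the restored copy stays available in S3
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```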
D is not cost-effective.
It's C because the “appropriate bucket lifecycle policies” will only move data to Glacier after 7 days.
The 2-hour/1-hour restores are coming from S3.
I too feel like C is the right answer. At 1 Gbps, a 50 GB file should only take 50 seconds. Even with another hour to restore the data, we are well below the 2-hour requirement. Simple S3 with a scheduled move to Glacier after the 7-day period seems not only inexpensive but also sufficient for the needs mentioned here.
50 GB is bytes, not bits, so the transfer takes roughly eight times longer than that.
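A quick back-of-the-envelope check of that correction, assuming the full 1 Gbps link rate with no protocol overhead:

```python
# 50 GB pushed over a sustained 1 Gbps link (decimal units, no overhead assumed).
file_size_bits = 50 * 10**9 * 8   # 50 GB expressed in bits
link_rate_bps = 1 * 10**9         # 1 Gbps

seconds = file_size_bits / link_rate_bps
print(f"~{seconds:.0f} s (~{seconds / 60:.1f} minutes)")  # ~400 s, not 50 s
```

Either way, the upload is still a small fraction of the 2-hour SLA.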
C is the Amazon way; they love their concept of a data lifecycle and moving between S3 and Glacier.
I also go for C.
C and D are candidates, but overall C looks like the better choice considering:
1 Gbps transfer speed.
Quick download only required for 7 days.
Indefinite storage, pointing to the Glacier service.
My answer is C.
We can move data from S3 to S3-IA as well, not only Glacier. That will save cost as well as keep the restore time within 1 hour.
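If the S3-IA idea were used, the earlier rule could be extended with an extra transition; a sketch with placeholder names again (note that S3 lifecycle transitions to STANDARD_IA generally require objects to be at least 30 days old, so the IA step cannot happen at day 7):

```python
import boto3

s3 = boto3.client("s3")

# Variant rule: move dumps to Standard-IA first, then on to Glacier.
# STANDARD_IA transitions need objects to be at least 30 days old, so the
# day counts below differ from the plain 7-day-to-Glacier rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ia-then-glacier",
                "Filter": {"Prefix": "dumps/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```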
C.
C is the answer. The requirement only covers the last 7 days, so we can send data to Glacier after 7 days.
I am going with C, because in stored volume mode you are storing data locally, the binary compressed format is already available, and the bandwidth of your AWS connection meets the 7-day/2-hour SLA.