PrepAway - Latest Free Exam Questions & Answers

What AWS architecture would you recommend?

A web design company currently runs several FTP servers that their 250 customers use to upload and
download large graphic files. They wish to move this system to AWS to make it more scalable, but they also
wish to maintain customer privacy and keep costs to a minimum.
What AWS architecture would you recommend?


A.
Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM
user for each customer. Put the IAM users in a group that has an IAM policy that permits access to sub-directories within the bucket via use of the ‘username’ policy variable.

B.
Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3
client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only
to that one customer.

C.
Create an Auto Scaling group of FTP servers with a scaling policy to automatically scale in when network
traffic on the Auto Scaling group is below a given threshold. Load a central list of FTP users from S3 as
part of the user data startup script on each instance.

D.
Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead
of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one
customer.
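Option A's ‘username’ policy variable deserves a concrete illustration. Below is a minimal sketch in Python that builds the kind of single group policy option A describes. The `${aws:username}` placeholder is a real IAM policy variable that AWS resolves at request time; the bucket name `example-uploads` and statement IDs are hypothetical, chosen only for illustration:

```python
import json

BUCKET = "example-uploads"  # hypothetical bucket name

# One identity-based policy, attached once to the customer IAM group.
# Each user is confined to the prefix matching their own IAM username.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOnlyOwnPrefix",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            # Restrict listing to the caller's own directory
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            "Sid": "ReadWriteOwnObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            # ${aws:username} is substituted by AWS per request
            "Resource": f"arn:aws:s3:::{BUCKET}/${{aws:username}}/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Because the username is a variable, this one document serves every customer; onboarding a new customer only requires creating an IAM user and adding it to the group.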

36 Comments on “What AWS architecture would you recommend?”

  1. James Mortenson says:

    Answer A makes sense; however, in order to maintain customer privacy the credentials should not change, and even though the cost of running the FTP servers is higher, they should not compromise the users’ access methods. The hassle of changing the process, the post-implementation support, and pushing a particular S3 client would be a nightmare. The customers have already standardized on FTP, so they would all have to change their code.

    Therefore Answer C is correct in my opinion.




  2. JK says:

    I agree that A is the best solution. The question specifically mentions keeping costs to a minimum and makes no mention of reducing the impact on customers.

    Work to what is specifically stated or requested in the question, not to what is omitted. Assumptions are always incorrect.




  3. muthu says:

    The two important points in the question are privacy and keeping costs to a minimum. They are not concerned about rewriting code; they point out that the customer “wishes to move”, so they are ready to re-architect the environment. They are also uploading and downloading large files and need to do so in a scalable manner. With S3, privacy can be achieved with an IAM group policy, scalability comes built in, and costs stay minimal, so S3 fits all of these requirements. If you use FTP servers, what is the plan for storage? There is no clue.

    So I would prefer option A.




  4. Senator says:

    Just checking: why can’t B work? The question asks for the cheapest solution, and RRS is cheaper than standard S3 (standard S3 is more durable than RRS), but the question also emphasizes scalability.

    Would appreciate some clarity from the gurus in here 🙂




    1. kirrim says:

      @Senator: You’re right that the question lists scalability as a key requirement. The customer is moving to AWS with 250 users today, but if they are moving specifically for the scalability AWS can give them, then they anticipate many more.

      Re: option B: managing many users (250 is already bad enough) as separate IAM accounts, separate buckets, and separate bucket policies is going to become a headache if this company anticipates growing to “web scale”.

      So while B is definitely less expensive using RRS than standard S3, it fails the key scalability requirement that is the reason the customer is moving to AWS in the first place.

      Option D fails the scalability requirement for the same reason: you have to write a policy per customer.

      Re: option A, and the excellent explanation in the write-up michjojo linked to above: you only have one policy to manage, because each user’s files live under a directory named for the IAM user, and the policy simply substitutes the username attribute as a variable. So one policy fits all, and option A definitely scales better. Storing in S3 is also probably cheaper than storing on EBS volumes attached to FTP servers as in option C.
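      The “one policy fits all” point can be sketched by mimicking the substitution AWS performs at request time. This is purely illustrative (AWS does this internally; the bucket name `example-uploads` is hypothetical):

```python
# The single resource ARN from the group policy, using the IAM
# policy variable ${aws:username}.
POLICY_RESOURCE = "arn:aws:s3:::example-uploads/${aws:username}/*"

def resource_for(username: str) -> str:
    """Return the resource ARN the policy effectively grants this user,
    mimicking the per-request substitution AWS performs."""
    return POLICY_RESOURCE.replace("${aws:username}", username)

print(resource_for("alice"))  # arn:aws:s3:::example-uploads/alice/*
print(resource_for("bob"))    # arn:aws:s3:::example-uploads/bob/*
```

      The same policy document thus confines every customer, present or future, to their own prefix with no per-customer policy work.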




      1. DC says:

        It cannot be B because you cannot “turn on or off” S3 Reduced Redundancy: that’s a storage class, not a feature to be switched on or off. The answer doesn’t make sense.




  5. Charles says:

    The answer must be C; any mention of an S3 client should be taken with caution, as it’s meant to be used as a developer’s tool and not a client interface.




  6. ab star says:

    I’ll go with C. Reason: the question has the keyword ‘scalable’ and the company wants to ‘move systems’ to AWS, which is best suited to an Auto Scaling group.




  7. Joe says:

    A is wrong IMO, because you need to preserve privacy, but the suggested solution gives all users an IAM policy that “permits access to subdirectories”, which means that user A can access user B’s sub-directory, and so on.

    So C is better for me.




      1. mutiger91 says:

        The term “IAM user” doesn’t mean someone who has permissions to make changes in IAM. It means someone defined in IAM as a user of AWS. You can create a user account in IAM with one and only one permission: to write into one specific prefix in a bucket.




  8. Blah says:

    C is not correct because of this part:
    “Load a central list of FTP users from S3 as part of the user data.”

    User data is executed only when the instance launches, so any change to the FTP user list would cause a headache.
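    To see why this bites, here is a hypothetical user-data sketch for option C (a config fragment, not runnable as-is; the bucket and file names are made up for illustration):

```shell
#!/bin/bash
# Runs ONCE, at instance launch, as EC2 user data.
# Pull the central FTP user list from S3 (hypothetical bucket/key).
aws s3 cp s3://example-config/ftpusers.txt /etc/vsftpd/user_list
systemctl restart vsftpd
# Any later change to ftpusers.txt is NOT picked up until the
# instance is replaced or the script is re-run by hand.
```

    Every edit to the user list would require replacing or manually refreshing each instance in the Auto Scaling group.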




