Practice Free SAA-C03 Exam Online Questions
A company maintains about 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each typically around 50 GB in size and are frequently replaced through multipart uploads by the company's global application. The number and size of the S3 objects remain constant, but the company's S3 storage costs are increasing each month.
How should a solutions architect reduce costs in this situation?
- A . Switch from multipart uploads to Amazon S3 Transfer Acceleration.
- B . Enable an S3 Lifecycle policy that deletes incomplete multipart uploads.
- C . Configure S3 inventory to prevent objects from being archived too quickly.
- D . Configure Amazon CloudFront to reduce the number of objects stored in Amazon S3.
B
Explanation:
This option is the most cost-effective way to reduce the S3 storage costs in this situation. Incomplete multipart uploads are parts of uploads that the application never completed or aborted. They consume storage space and incur charges until they are deleted. By enabling an S3 Lifecycle policy that deletes incomplete multipart uploads, you can automatically remove them after a specified period of time (such as one day) and free up the storage space. This reduces the S3 storage costs and can also improve the performance of the application by avoiding unnecessary retries or errors.
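As an illustration, a minimal boto3 sketch of such a lifecycle rule (the bucket name and the one-day window are hypothetical choices):

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule that aborts incomplete multipart uploads one day after they
# are initiated, freeing the storage consumed by the orphaned parts.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)
```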
Option A is not correct because switching from multipart uploads to Amazon S3 Transfer Acceleration will not reduce the S3 storage costs. Amazon S3 Transfer Acceleration is a feature that enables faster data transfers to and from S3 by using the AWS edge network. It is useful for improving the upload speed of large objects over long distances, but it does not affect the storage space or charges. In fact, it may increase the costs by adding a data transfer fee for using the feature.
Option C is not correct because configuring S3 inventory to prevent objects from being archived too quickly will not reduce the S3 storage costs. Amazon S3 Inventory is a feature that provides a report of the objects and their metadata in an S3 bucket. It is useful for managing and auditing the S3 objects, but it does not affect the storage space or charges. In fact, it may increase the costs by generating additional S3 objects for the inventory reports.
Option D is not correct because configuring Amazon CloudFront to reduce the number of objects stored in Amazon S3 will not reduce the S3 storage costs. Amazon CloudFront is a content delivery network (CDN) that distributes the S3 objects to edge locations for faster and lower latency access. It is useful for improving the download speed and availability of the S3 objects, but it does not affect the storage space or charges. In fact, it may increase the costs by adding a data transfer fee for using the service.
Reference: Managing your storage lifecycle
Using multipart upload
Amazon S3 Transfer Acceleration
Amazon S3 Inventory
What Is Amazon CloudFront?
A company stores data in PDF format in an Amazon S3 bucket. The company must follow a legal requirement to retain all new and existing data in Amazon S3 for 7 years.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data after 7 years. Configure multi-factor authentication (MFA) delete for all S3 objects.
- B . Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
- C . Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
- D . Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance.
C
Explanation:
S3 Object Lock enables a write-once-read-many (WORM) model for objects stored in Amazon S3. It can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely1. S3 Object Lock has two retention modes: governance mode and compliance mode. Compliance mode provides the highest level of protection and prevents any user, including the root user, from deleting or modifying an object version until the retention period expires. To use S3 Object Lock, a new bucket with Object Lock enabled must be created, and a default retention period can be optionally configured for objects placed in the bucket2. To bring existing objects into compliance, they must be recopied into the bucket with a retention period specified.
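A minimal boto3 sketch of the compliance-mode configuration (the bucket name, key, and retention date are hypothetical, and the bucket is assumed to have been created with Object Lock enabled):

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Default retention: every new object version is locked in compliance mode
# for 7 years. This only works on a bucket created with Object Lock enabled.
s3.put_object_lock_configuration(
    Bucket="example-legal-archive",  # hypothetical bucket name
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)

# Existing objects are not covered by the default rule; after recopying them,
# a compliance-mode retention date can be set explicitly per object.
s3.put_object_retention(
    Bucket="example-legal-archive",
    Key="documents/contract.pdf",  # hypothetical key
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2032, 1, 1, tzinfo=timezone.utc),
    },
)
```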
Option A is incorrect because S3 Versioning and S3 Lifecycle do not provide WORM protection for objects. Moreover, MFA delete only applies to deleting object versions, not modifying them.
Option B is incorrect because governance mode allows users with special permissions to override or remove the retention settings or delete the object if necessary. This does not meet the legal requirement of retaining all data for 7 years.
Option D is incorrect because S3 Batch Operations cannot be used to apply compliance mode retention periods to existing objects. S3 Batch Operations can only apply governance mode retention periods or legal holds.
Reference:
1: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
2: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-console.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-dynamic-data-access
https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires the company to store all application logs in Amazon OpenSearch Service (Amazon Elasticsearch Service) in near-real time.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
- B . Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
- C . Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream’s source. Configure Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream’s destination.
- D . Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service)
B
Explanation:
https://computingforgeeks.com/stream-logs-in-aws-from-cloudwatch-to-elasticsearch/
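For reference, a CloudWatch Logs subscription filter is what lets the log group invoke a delivery target in near-real time. A minimal boto3 sketch (the log group name and Lambda ARN are hypothetical, and the function is assumed to already allow invocation by logs.amazonaws.com and to write the events to Amazon OpenSearch Service):

```python
import boto3

logs = boto3.client("logs")

# Subscription filter that forwards every log event from the log group to a
# Lambda function, which in turn indexes the events into OpenSearch Service.
logs.put_subscription_filter(
    logGroupName="/app/production",  # hypothetical log group
    filterName="to-opensearch",
    filterPattern="",  # an empty pattern matches all events
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:LogsToOpenSearch",
)
```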
An entertainment company is using Amazon DynamoDB to store media metadata. The application is read-intensive and is experiencing delays. The company does not have staff to handle additional operational overhead and needs to improve the performance efficiency of DynamoDB without reconfiguring the application.
What should a solutions architect recommend to meet this requirement?
- A . Use Amazon ElastiCache for Redis.
- B . Use Amazon DynamoDB Accelerator (DAX).
- C . Replicate data by using DynamoDB global tables.
- D . Use Amazon ElastiCache for Memcached with Auto Discovery enabled.
B
Explanation:
https://aws.amazon.com/dynamodb/dax/
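Because DAX is API-compatible with DynamoDB, the application only swaps the client endpoint and keeps its read/write code unchanged. A sketch assuming the amazondax client library is used (the cluster endpoint, table name, and key are hypothetical):

```python
from amazondax import AmazonDaxClient  # DAX drop-in client for boto3-style access

# Hypothetical DAX cluster endpoint; the table calls below are unchanged DynamoDB code.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://media-cache.abc123.dax-clusters.us-east-1.amazonaws.com"
)

table = dax.Table("MediaMetadata")  # hypothetical table name
response = table.get_item(Key={"MediaId": "12345"})  # served from the DAX cache when warm
```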
A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. The AMIs contain critical data and configurations that are necessary for the company’s operations. The company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a separate AWS account.
- B . Copy all AMIs to another AWS account periodically.
- C . Create a retention rule in Recycle Bin.
- D . Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.
C
Explanation:
Recycle Bin is a data recovery feature that enables you to restore accidentally deleted Amazon EBS snapshots and EBS-backed AMIs. When using Recycle Bin, if your resources are deleted, they are retained in the Recycle Bin for a time period that you specify before being permanently deleted. You can restore a resource from the Recycle Bin at any time before its retention period expires. This solution has the least operational overhead, as you do not need to create, copy, or upload any additional resources. You can also manage tags and permissions for AMIs in the Recycle Bin. AMIs in the Recycle Bin do not incur any additional charges.
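A minimal boto3 sketch of such a retention rule using the Recycle Bin (rbin) API (the 30-day retention period is a hypothetical choice):

```python
import boto3

rbin = boto3.client("rbin")  # Recycle Bin API

# Region-level rule: deleted (deregistered) AMIs are retained for 30 days
# and can be restored at any time before the retention period expires.
rbin.create_rule(
    Description="Retain deleted AMIs for 30 days",
    ResourceType="EC2_IMAGE",
    RetentionPeriod={"RetentionPeriodValue": 30, "RetentionPeriodUnit": "DAYS"},
)
```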
Reference: Recover AMIs from the Recycle Bin
Recover an accidentally deleted Linux AMI
A company has an application that places hundreds of .csv files into an Amazon S3 bucket every hour. The files are 1 GB in size. Each time a file is uploaded, the company needs to convert the file to Apache Parquet format and place the output file into an S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS Lambda function to download the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Invoke the Lambda function for each S3 PUT event.
- B . Create an Apache Spark job to read the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Create an AWS Lambda function for each S3 PUT event to invoke the Spark job.
- C . Create an AWS Glue table and an AWS Glue crawler for the S3 bucket where the application places the .csv files. Schedule an AWS Lambda function to periodically use Amazon Athena to query the AWS Glue table, convert the query results into Parquet format, and place the output files into an S3 bucket.
- D . Create an AWS Glue extract, transform, and load (ETL) job to convert the .csv files to Parquet format and place the output files into an S3 bucket. Create an AWS Lambda function for each S3 PUT event to invoke the ETL job.
D
Explanation:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
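A minimal sketch of the Lambda handler that the S3 PUT event would invoke (the Glue job name, output bucket, and job arguments are hypothetical; the Glue ETL job itself performs the CSV-to-Parquet conversion):

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Invoked by the S3 PUT event; starts the Glue ETL job for each new .csv object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="csv-to-parquet",  # hypothetical Glue ETL job
            Arguments={
                "--source_path": f"s3://{bucket}/{key}",
                "--target_path": "s3://example-parquet-output/",  # hypothetical output bucket
            },
        )
```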
A company has an application that uses Docker containers in its local data center. The application runs on a container host that stores persistent data in a volume on the host. The container instances use the stored persistent data.
The company wants to move the application to a fully managed service because the company does not want to manage any servers or storage infrastructure.
Which solution will meet these requirements?
- A . Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a persistent volume mounted in the containers.
- B . Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the containers.
- C . Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon S3 bucket. Map the S3 bucket as a persistent storage volume mounted in the containers.
- D . Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the containers.
B
Explanation:
This solution meets the requirements because it moves the application to a fully managed service without the company managing any servers or storage infrastructure. AWS Fargate is a serverless compute engine for containers that runs Amazon ECS tasks. With Fargate, the company does not need to provision, configure, or scale clusters of virtual machines to run containers. Amazon EFS is a fully managed file system that can be accessed by multiple containers concurrently. With EFS, the company does not need to provision and manage storage capacity, and it gets a simple interface for creating and configuring file systems quickly. The company can use the EFS volume as a persistent storage volume mounted in the containers to store the persistent data, and it can use the EFS mount helper to simplify the mounting process.
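A sketch of the corresponding Fargate task definition with an EFS volume (the image URI, file system ID, and paths are hypothetical; IAM roles are omitted for brevity):

```python
import boto3

ecs = boto3.client("ecs")

# Fargate task definition that mounts an EFS file system into the container
# at the path where the application expects its persistent data.
ecs.register_task_definition(
    family="app-with-efs",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
            "mountPoints": [
                {"sourceVolume": "app-data", "containerPath": "/var/app/data"}
            ],
        }
    ],
    volumes=[
        {
            "name": "app-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",  # hypothetical EFS file system
                "transitEncryption": "ENABLED",
            },
        }
    ],
)
```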
Reference: Amazon ECS on AWS Fargate, Using Amazon EFS file systems with Amazon ECS, Amazon EFS mount helper.
A company needs to retain its AWS CloudTrail logs for 3 years. The company is enforcing CloudTrail across a set of AWS accounts by using AWS Organizations from the parent account. The CloudTrail target S3 bucket is configured with S3 Versioning enabled. An S3 Lifecycle policy is in place to delete current objects after 3 years.
After the fourth year of use of the S3 bucket, the S3 bucket metrics show that the number of objects has continued to rise. However, the number of new CloudTrail logs that are delivered to the S3 bucket has remained consistent.
Which solution will delete objects that are older than 3 years in the MOST cost-effective manner?
- A . Configure the organization’s centralized CloudTrail trail to expire objects after 3 years.
- B . Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
- C . Create an AWS Lambda function to enumerate and delete objects from Amazon S3 that are older than 3 years.
- D . Configure the parent account as the owner of all objects that are delivered to the S3 bucket.
B
Explanation:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/best-practices-security.html#:~:text=The%20CloudTrail%20trail,time%20has%20passed.
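On a versioned bucket, an Expiration action only adds a delete marker to the current version; the noncurrent versions remain and keep accruing storage charges, which is why the object count kept rising. A minimal boto3 sketch of a lifecycle rule that also removes noncurrent versions (the bucket name and day values are hypothetical; 1,095 days is roughly 3 years):

```python
import boto3

s3 = boto3.client("s3")

# Expire current versions after ~3 years and permanently delete noncurrent
# versions shortly after they become noncurrent, so the versioned bucket stops growing.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-cloudtrail-logs",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-cloudtrail-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 1095},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            }
        ]
    },
)
```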
A company is developing an application in the AWS Cloud. The application’s HTTP API contains critical information that is published in Amazon API Gateway. The critical information must be accessible from only a limited set of trusted IP addresses that belong to the company’s internal network.
Which solution will meet these requirements?
- A . Set up an API Gateway private integration to restrict access to a predefined set of IP addresses.
- B . Create a resource policy for the API that denies access to any IP address that is not specifically allowed.
- C . Directly deploy the API in a private subnet. Create a network ACL. Set up rules to allow the traffic from specific IP addresses.
- D . Modify the security group that is attached to API Gateway to allow inbound traffic from only the trusted IP addresses.
B
Explanation:
Amazon API Gateway supports resource policies, which allow you to control access to your API by specifying the IP addresses or ranges that can access the API. By creating a resource policy that explicitly denies access to any IP address outside the allowed set, you can ensure that only trusted IP addresses (such as those from your internal network) can access the critical information in your API. This approach provides fine-grained access control without the need for additional infrastructure or complex configurations.
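A sketch of such a resource policy attached at API creation time (this example assumes a REST API, since resource policies are configured on REST APIs; the API name and CIDR range are hypothetical):

```python
import json
import boto3

apigw = boto3.client("apigateway")

# Allow invocation in general, then deny any caller whose source IP is outside
# the trusted corporate range.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        },
    ],
}

apigw.create_rest_api(name="critical-info-api", policy=json.dumps(resource_policy))
```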
Option A (Private integration): API Gateway private integrations connect the API to resources inside a VPC through a VPC link; they do not restrict which client IP addresses can invoke the API, which is what this scenario requires.
Option C (Private subnet and ACLs): Deploying the API in a private subnet and using network ACLs adds unnecessary complexity and isn’t the best fit for HTTP APIs.
Option D (Security group): API Gateway doesn’t have a security group because it isn’t a resource inside a VPC. Instead, resource policies are the correct mechanism for controlling IP-based access.
Reference: Controlling Access to API Gateway with Resource Policies (AWS documentation)
A company is developing a new mobile app. The company must implement proper traffic filtering to protect its Application Load Balancer (ALB) against common application-level attacks, such as cross-site scripting or SQL injection. The company has minimal infrastructure and operational staff. The company needs to reduce its share of the responsibility in managing, updating, and securing servers for its AWS environment.
What should a solutions architect recommend to meet these requirements?
- A . Configure AWS WAF rules and associate them with the ALB.
- B . Deploy the application using Amazon S3 with public hosting enabled.
- C . Deploy AWS Shield Advanced and add the ALB as a protected resource.
- D . Create a new ALB that directs traffic to an Amazon EC2 instance running a third-party firewall, which then passes the traffic to the current ALB.
A
Explanation:
A solutions architect should recommend option A, which is to configure AWS WAF rules and associate them with the ALB. This will allow the company to apply traffic filtering at the application layer, which is necessary for protecting the ALB against common application-level attacks such as cross-site scripting or SQL injection. AWS WAF is a managed service that makes it easy to protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. The company can easily manage and update the rules to ensure the security of its application.
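A sketch of a regional web ACL built from AWS managed rule groups and associated with the ALB (the ALB ARN is hypothetical; the chosen rule groups cover common exploits such as cross-site scripting and SQL injection):

```python
import boto3

wafv2 = boto3.client("wafv2")

# Regional web ACL using AWS managed rule groups; REGIONAL scope is required for ALBs.
acl = wafv2.create_web_acl(
    Name="app-protection",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "common-rules",
            },
        },
        {
            "Name": "sqli-rules",
            "Priority": 1,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli-rules",
            },
        },
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "app-protection",
    },
)

# Associating the web ACL with the ALB is what actually puts the rules in front of traffic.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```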