Practice Free SAA-C03 Exam Online Questions
A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB instances to build and run a payment processing application. The company will run the application in its on-premises data center for compliance purposes.
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the company’s operational team to build the application.
Which activities are the responsibility of the company’s operational team? (Select THREE.)
- A . Providing resilient power and network connectivity to the Outposts racks
- B . Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts
- C . Physical security and access controls of the data center environment
- D . Availability of the Outposts infrastructure including the power supplies, servers, and networking equipment within the Outposts racks
- E . Physical maintenance of Outposts components
- F . Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events
A, C, F
Explanation:
These answers are correct because they reflect the customer’s responsibilities for using AWS Outposts as part of the solution. According to the AWS shared responsibility model, the customer is responsible for providing resilient power and network connectivity to the Outposts racks, ensuring physical security and access controls of the data center environment, and providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events. AWS is responsible for managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts, as well as the availability of the Outposts infrastructure including the power supplies, servers, and networking equipment within the Outposts racks, and the physical maintenance of Outposts components.
Reference:
https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html
https://www.contino.io/insights/the-sandwich-responsibility-model-aws-outposts/
A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 128 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?
- A . Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
- B . Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
- C . Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
- D . Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D
Explanation:
This solution meets the requirements of saving money on storage while keeping the most accessed files readily available for the users. S3 Lifecycle policy can automatically move objects from one storage class to another based on predefined rules. S3 Standard-IA is a lower-cost storage class for data that is accessed less frequently, but requires rapid access when needed. It is suitable for ringtones older than 90 days that are downloaded infrequently.
Option A is incorrect because using S3 Standard-IA as the initial storage tier incurs retrieval fees and higher costs for the newer, frequently accessed files.
Option B is incorrect because moving the files to S3 Intelligent-Tiering can incur additional monitoring and automation fees that may not be necessary for ringtones older than 90 days.
Option C is incorrect because using S3 inventory to manage objects and move them to S3 Standard-IA can be complex and time-consuming, and it does not provide automatic cost savings.
Reference:
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/s3/cloud-storage-cost-optimization-ebook/
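For illustration, a minimal boto3 sketch of the lifecycle rule that option D describes; the bucket name is hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Standard-IA 90 days after creation.
# Lifecycle transitions to Standard-IA apply a 128 KB minimum object
# size, which the ringtone files in this scenario already meet.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-ringtones-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-old-ringtones-to-standard-ia",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```

Once the rule is in place, S3 applies it automatically every day; no further scripting or manual moves are needed.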
A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata to determine which photos to display to each user.
Which solution provides the appropriate user access MOST cost-effectively?
- A . Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
- B . Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
- C . Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata.
- D . Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.
B
Explanation:
This solution provides the appropriate user access most cost-effectively because it uses the Amazon S3 Intelligent-Tiering storage class, which automatically optimizes storage costs by moving data to the most cost-effective access tier when access patterns change, without performance impact or operational overhead. This storage class is ideal for data with unknown, changing, or unpredictable access patterns, such as photos that are heavily viewed for months or viewed for less than a week. By storing the photo metadata and its S3 location in DynamoDB, the application can quickly query and retrieve the relevant photos for each user. DynamoDB is a fast, scalable, and fully managed NoSQL database service that supports key-value and document data models.
Reference: Amazon S3 Intelligent-Tiering Storage Class | AWS, Overview section; Amazon DynamoDB – NoSQL Cloud Database Service, Overview section.
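A minimal sketch of option B's write path, assuming a hypothetical PhotoMetadata table and bucket name: the photo goes straight into Intelligent-Tiering, and its location plus metadata go into DynamoDB.

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PhotoMetadata")  # hypothetical table name

photo_id = "photo-12345"
bucket = "example-photo-bucket"  # hypothetical bucket name
key = f"photos/{photo_id}.jpg"

# Upload directly into Intelligent-Tiering; S3 then moves the object
# between access tiers automatically as its access pattern changes.
s3.upload_file(
    "local-photo.jpg",
    bucket,
    key,
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)

# Record the metadata and S3 location so the app can decide which
# photos to display per user without scanning S3.
table.put_item(
    Item={
        "PhotoId": photo_id,
        "Bucket": bucket,
        "Key": key,
        "UploadedBy": "user-42",
        "Country": "US",
    }
)
```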
A large international university has deployed all of its compute services in the AWS Cloud. These services include Amazon EC2, Amazon RDS, and Amazon DynamoDB. The university currently relies on many custom scripts to back up its infrastructure. However, the university wants to centralize management and automate data backups as much as possible by using AWS native options.
Which solution will meet these requirements?
- A . Use third-party backup software with an AWS Storage Gateway tape gateway virtual tape library.
- B . Use AWS Backup to configure and monitor all backups for the services in use.
- C . Use AWS Config to set lifecycle management to take snapshots of all data sources on a schedule.
- D . Use AWS Systems Manager State Manager to manage the configuration and monitoring of backup tasks.
B
Explanation:
Understanding the Requirement: The university wants to centralize management and automate backups for its AWS services (EC2, RDS, and DynamoDB), reducing reliance on custom scripts.
Analysis of Options:
Third-party backup software with AWS Storage Gateway: This solution introduces external dependencies and adds complexity compared to using native AWS services.
AWS Backup: Provides a centralized, fully managed service to automate and manage backups across various AWS services, including EC2, RDS, and DynamoDB.
AWS Config: Primarily used for compliance and configuration monitoring, not for backup management.
AWS Systems Manager State Manager: Useful for configuration management but not specifically designed for managing backups.
Best Solution:
AWS Backup: This service offers the necessary functionality to centralize and automate backups, providing a streamlined and integrated solution with minimal effort.
Reference: AWS Backup
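As a sketch of what option B looks like in practice, the boto3 calls below create a nightly backup plan and assign resources to it by tag; the plan name, vault, tag key, and IAM role ARN are all hypothetical.

```python
import boto3

backup = boto3.client("backup")

# One backup plan can cover EC2, RDS, and DynamoDB resources, replacing
# the university's custom scripts with a single managed schedule.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "nightly-university-backups",  # hypothetical
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",  # 03:00 UTC daily
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Assign resources by tag, so anything tagged backup=nightly is
# picked up automatically as the fleet grows.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",  # hypothetical
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "nightly",
            }
        ],
    },
)
```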
A company has a mobile app for customers. The app's data is sensitive and must be encrypted at rest. The company uses AWS Key Management Service (AWS KMS).
The company needs a solution that prevents the accidental deletion of KMS keys. The solution must use Amazon Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts to delete a KMS key.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an Amazon EventBridge rule that reacts when a user tries to delete a KMS key. Configure an AWS Config rule that cancels any deletion of a KMS key. Add the AWS Config rule as a target of the EventBridge rule. Create an SNS topic that notifies the administrators.
- B . Create an AWS Lambda function that has custom logic to prevent KMS key deletion. Create an Amazon CloudWatch alarm that is activated when a user tries to delete a KMS key. Create an Amazon EventBridge rule that invokes the Lambda function when the DeleteKey operation is performed. Create an SNS topic. Configure the EventBridge rule to publish an SNS message that notifies the administrators.
- C . Create an Amazon EventBridge rule that reacts when the KMS DeleteKey operation is performed. Configure the rule to initiate an AWS Systems Manager Automation runbook. Configure the runbook to cancel the deletion of the KMS key. Create an SNS topic. Configure the EventBridge rule to publish an SNS message that notifies the administrators.
- D . Create an AWS CloudTrail trail. Configure the trail to deliver logs to a new Amazon CloudWatch log group. Create a CloudWatch alarm based on a metric filter for the CloudWatch log group. Configure the alarm to use Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.
C
Explanation:
This solution meets the requirements with the least operational overhead because it uses AWS services that are fully managed and scalable. The EventBridge rule can detect the DeleteKey operation from the AWS KMS API and trigger the Systems Manager Automation runbook, which can execute a predefined workflow to cancel the key deletion. The EventBridge rule can also publish an SNS message to the topic that sends an email notification to the administrators. This way, the company can prevent the accidental deletion of KMS keys and notify the administrators of any attempts to delete them.
Option A is not a valid solution because AWS Config rules are used to evaluate the configuration of AWS resources, not to cancel the deletion of KMS keys.
Option B is not a valid solution because it requires creating and maintaining a custom Lambda function that has logic to prevent KMS key deletion, which adds operational overhead.
Option D is not a valid solution because it only notifies the administrators of the DeleteKey operation, but does not cancel it.
Reference: Using Amazon EventBridge rules to trigger Systems Manager Automation workflows – AWS Systems Manager
Using Amazon SNS for system-to-administrator communications – Amazon Simple Notification Service
Deleting AWS KMS keys – AWS Key Management Service
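A partial sketch of option C's detection half (the Automation runbook that calls kms:CancelKeyDeletion is omitted). Note that KMS actually exposes key deletion as the ScheduleKeyDeletion API call, which is what the question informally calls "DeleteKey"; the rule name and topic ARN are hypothetical.

```python
import json
import boto3

events = boto3.client("events")

# KMS key deletion surfaces in EventBridge as the CloudTrail record of
# the ScheduleKeyDeletion API call.
rule_pattern = {
    "source": ["aws.kms"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["kms.amazonaws.com"],
        "eventName": ["ScheduleKeyDeletion"],
    },
}

events.put_rule(
    Name="kms-key-deletion-attempt",  # hypothetical rule name
    EventPattern=json.dumps(rule_pattern),
    State="ENABLED",
)

# Add the SNS topic as a target for the email notification. The topic's
# resource policy must allow events.amazonaws.com to publish to it.
events.put_targets(
    Rule="kms-key-deletion-attempt",
    Targets=[
        {
            "Id": "notify-admins",
            "Arn": "arn:aws:sns:us-east-1:123456789012:kms-deletion-alerts",  # hypothetical
        }
    ],
)
```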
An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A solutions architect wants to increase the disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
- A . Enable storage autoscaling in RDS.
- B . Increase the RDS database instance size.
- C . Change the RDS database instance storage type to Provisioned IOPS.
- D . Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance.
A
Explanation:
https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/
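A one-call sketch of option A: setting MaxAllocatedStorage on an existing instance turns on storage autoscaling; the instance identifier and ceiling are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables storage autoscaling: RDS grows
# the volume automatically (up to this ceiling) when free space runs
# low, with no downtime and no manual resizing.
rds.modify_db_instance(
    DBInstanceIdentifier="example-mysql-db",  # hypothetical identifier
    MaxAllocatedStorage=1000,  # ceiling in GiB
    ApplyImmediately=True,
)
```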
A company needs to run a critical data processing workload that uses a Python script every night. The workload takes 1 hour to finish.
Which solution will meet these requirements MOST cost-effectively?
- A . Deploy an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type. Use the Fargate Spot capacity provider. Schedule the job to run once every night.
- B . Deploy an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch type. Schedule the job to run once every night.
- C . Create an AWS Lambda function that uses the existing Python code. Configure Amazon EventBridge to invoke the function once every night.
- D . Create an Amazon EC2 On-Demand Instance that runs Amazon Linux. Migrate the Python script to the instance. Use a cron job to schedule the script. Create an AWS Lambda function to start and stop the instance once every night.
A
Explanation:
AWS Lambda cannot run this job because its maximum execution time is 15 minutes, well short of the 1-hour workload. Of the remaining options, Fargate Spot offers the deepest discount (up to 70% off Fargate On-Demand pricing) with no servers to manage, and a nightly scheduled task can simply be retried if Spot capacity is interrupted. The EC2 launch type and a dedicated On-Demand instance both cost more and add management overhead.
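A sketch of option A's scheduling half, assuming the Python script is already packaged as a container image and registered as an ECS task definition; the rule name, ARNs, and subnet ID are all hypothetical.

```python
import boto3

events = boto3.client("events")

# Nightly schedule at 01:00 UTC. AWS cron uses six fields:
# minute hour day-of-month month day-of-week year.
events.put_rule(
    Name="nightly-data-processing",  # hypothetical rule name
    ScheduleExpression="cron(0 1 * * ? *)",
    State="ENABLED",
)

# Run the containerized script as a Fargate Spot task each night.
events.put_targets(
    Rule="nightly-data-processing",
    Targets=[
        {
            "Id": "run-processing-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/processing",  # hypothetical
            "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",       # hypothetical
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/nightly-job",
                "TaskCount": 1,
                "CapacityProviderStrategy": [
                    {"capacityProvider": "FARGATE_SPOT", "weight": 1}
                ],
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],  # hypothetical
                        "AssignPublicIp": "ENABLED",
                    }
                },
            },
        }
    ],
)
```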
A company has migrated several applications to AWS in the past 3 months. The company wants to know the breakdown of costs for each of these applications. The company wants to receive a regular report that includes this information.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS Budgets to download data for the past 3 months into a csv file. Look up the desired information.
- B . Load AWS Cost and Usage Reports into an Amazon RDS DB instance. Run SQL queries to get the desired information.
- C . Tag all the AWS resources with a key for cost and a value of the application's name. Activate cost allocation tags. Use Cost Explorer to get the desired information.
- D . Tag all the AWS resources with a key for cost and a value of the application’s name. Use the AWS Billing and Cost Management console to download bills for the past 3 months. Look up the desired information.
C
Explanation:
This solution is the most cost-effective and efficient way to break down costs per application.
Tagging Resources: By tagging all AWS resources with a specific key (e.g., "cost") and a value representing the application's name, you can easily identify and categorize costs associated with each application. This tagging strategy allows for granular tracking of costs within AWS.
Activating Cost Allocation Tags: Once tags are applied to resources, you need to activate cost allocation tags in the AWS Billing and Cost Management console. This ensures that the costs associated with each tag are included in your billing reports and can be used for cost analysis.
AWS Cost Explorer: Cost Explorer is a powerful tool that allows you to visualize, understand, and manage your AWS costs and usage over time. You can filter and group your cost data by the tags you've applied to resources, enabling you to easily see the cost breakdown for each application. Cost Explorer also supports generating regular reports, which can be scheduled and emailed to stakeholders.
Why Not Other Options?
Option A (AWS Budgets): AWS Budgets is more focused on setting cost and usage thresholds and monitoring them, rather than providing detailed cost breakdowns by application.
Option B (Load Cost and Usage Reports into RDS): This approach is less cost-effective and involves more operational overhead, as it requires setting up and maintaining an RDS instance and running SQL queries.
Option D (AWS Billing and Cost Management Console): While you can download bills, this method is more manual and less dynamic compared to using Cost Explorer with activated tags.
Reference: AWS Tagging Strategies – Overview of how to use tagging to organize and track AWS resources.
AWS Cost Explorer – Details on how to use Cost Explorer to analyze costs.
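A short sketch of pulling the per-application breakdown once the tag is activated; the tag key matches the one in option C, and the date range is illustrative.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Group monthly spend by the activated "cost" cost allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # example dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "cost"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        app = group["Keys"][0]  # returned as "cost$<application-name>"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(period["TimePeriod"]["Start"], app, amount)
```

Wiring this into a scheduled job (or using Cost Explorer's built-in report scheduling) satisfies the regular-report requirement.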