Practice Free SAA-C03 Exam Online Questions
A company plans to use an Amazon S3 bucket to archive backup data. Regulations require the company to retain the backup data for 7 years.
During the retention period, the company must prevent users, including administrators, from deleting the data. The company can delete the data after 7 years.
Which solution will meet these requirements?
- A . Create an S3 bucket policy that denies delete operations for 7 years. Create an S3 Lifecycle policy to delete the data after 7 years.
- B . Create an S3 Object Lock default retention policy that retains data for 7 years in governance mode. Create an S3 Lifecycle policy to delete the data after 7 years.
- C . Create an S3 Object Lock default retention policy that retains data for 7 years in compliance mode. Create an S3 Lifecycle policy to delete the data after 7 years.
- D . Create an S3 Batch Operations job to set a legal hold on each object for 7 years. Create an S3 Lifecycle policy to delete the data after 7 years.
C
Explanation:
Comprehensive and Detailed Step-by-Step
The requirement is to prevent data deletion by any user, including administrators, for 7 years while allowing automatic deletion afterward.
S3 Object Lock in Compliance Mode (Correct Choice – C)
Compliance mode ensures that even the root user cannot delete or modify the objects during the retention period.
After 7 years, the S3 Lifecycle policy automatically deletes the objects. This meets both the immutability and the automatic-deletion requirements.
Governance Mode (Option B – Incorrect)
Governance mode prevents deletion, but administrators with the s3:BypassGovernanceRetention permission can override it.
The requirement explicitly states that even administrators must not be able to delete the data.
S3 Bucket Policy (Option A – Incorrect)
An S3 bucket policy can deny deletes, but policies can be modified at any time by administrators.
It does not enforce strict retention like Object Lock.
S3 Batch Operations Job (Option D – Incorrect)
A legal hold does not have an automatic expiration.
Legal holds must be manually removed, which is not efficient.
Why Option C is Correct:
S3 Object Lock in Compliance Mode prevents deletion by all users, including administrators.
The S3 Lifecycle policy deletes the data automatically after 7 years, reducing operational overhead.
Reference: S3 Object Lock Compliance Mode
S3 Lifecycle Policies
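A minimal sketch in Python (boto3) of how this configuration could look. The bucket name and the 2,557-day approximation of 7 years are illustrative assumptions, not part of the original answer; note that Object Lock can only be enabled when the bucket is created.

```python
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket creation time
# (region configuration omitted for brevity).
s3.create_bucket(
    Bucket="backup-archive-example",  # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)

# Default retention in COMPLIANCE mode: no user, including the
# root user, can delete or overwrite object versions for 7 years.
s3.put_object_lock_configuration(
    Bucket="backup-archive-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)

# Lifecycle rule to expire objects after the retention period
# (approximately 7 years, expressed in days).
s3.put_bucket_lifecycle_configuration(
    Bucket="backup-archive-example",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "delete-after-7-years",
                "Status": "Enabled",
                "Filter": {},
                "Expiration": {"Days": 2557},  # ~7 years
            }
        ]
    },
)
```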
A company hosts its applications in multiple private and public subnets in a VPC. The applications in the private subnets need to access an API. The API is available on the internet and is hosted in the company’s on-premises data center. A solutions architect needs to establish connectivity for applications in the private subnets.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a transit gateway to connect the VPC to the on-premises network. Use the transit gateway to route API calls from the private subnets to the on-premises data center.
- B . Create a NAT gateway in the public subnet of the VPC. Use the NAT gateway to allow the private subnets to access the API over the internet.
- C . Establish an AWS PrivateLink connection to connect the VPC to the on-premises network. Use PrivateLink to make API calls from the private subnets to the on-premises data center.
- D . Implement an AWS Site-to-Site VPN connection between the VPC and the on-premises data center. Use the VPN connection to make API calls from the private subnets to the on-premises data center.
D
Explanation:
AWS Site-to-Site VPN is a cost-effective way to securely connect your on-premises data center with AWS resources. In this scenario:
Applications in private subnets require access to the API hosted in the on-premises data center. A Site-to-Site VPN connection is a secure and cost-efficient option to route traffic between the VPC and on-premises resources.
Transit Gateway adds attachment and data-processing charges that are unnecessary for a single VPC, and PrivateLink is designed to expose services privately within AWS rather than to reach an on-premises API, so neither is cost-effective for this use case.
NAT Gateway only provides internet access for private subnets, which is not suitable for reaching an on-premises resource.
AWS Documentation
Reference: AWS Site-to-Site VPN
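As a rough sketch of the pieces involved, the boto3 calls below create the customer gateway, virtual private gateway, and VPN connection. All IDs, the ASN, and the public IP are placeholder values assumed for illustration.

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway: represents the on-premises VPN device.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,
    PublicIp="203.0.113.10",  # on-premises device IP (example)
    Type="ipsec.1",
)

# Virtual private gateway, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)

# The Site-to-Site VPN connection itself.
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)

# Propagate on-premises routes into the private subnets' route table.
ec2.enable_vgw_route_propagation(
    GatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    RouteTableId="rtb-0123456789abcdef0",  # private subnets' route table
)
```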
A company recently migrated a large amount of research data to an Amazon S3 bucket. The company needs an automated solution to identify sensitive data in the bucket. A security team also needs to monitor access patterns for the data 24 hours a day, 7 days a week to identify suspicious activities or evidence of tampering with security controls.
Which solution will meet these requirements?
- A . Set up AWS CloudTrail reporting, and grant the security team read-only access to the CloudTrail reports. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.
- B . Enable Amazon Macie and Amazon GuardDuty on the account. Grant the security team access to Macie and GuardDuty. Review the findings with the security team.
- C . Set up an Amazon S3 Inventory report. Use Amazon Athena and Amazon QuickSight to identify sensitive data. Create a dashboard for the security team to review findings.
- D . Use AWS Identity and Access Management (IAM) Access Advisor to monitor for suspicious activity and tampering. Create a dashboard for the security team. Set up an Amazon S3 Inventory report to identify sensitive data. Review the findings with the security team.
B
Explanation:
Amazon Macie uses machine learning to automatically discover and classify sensitive data stored in Amazon S3, and Amazon GuardDuty continuously monitors AWS accounts and workloads for suspicious activity, including attempts to tamper with security controls. Together they satisfy both the automated sensitive-data discovery requirement and the 24/7 monitoring requirement. S3 Inventory reports, Athena dashboards, and IAM Access Advisor do not classify sensitive data or provide continuous threat detection.
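A minimal sketch of enabling both services with boto3; the account ID, bucket name, and job name are hypothetical.

```python
import boto3

# Enable Macie for automated sensitive-data discovery in S3.
macie = boto3.client("macie2")
macie.enable_macie(status="ENABLED")

# A one-time classification job over the research bucket.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="research-bucket-sensitive-data-scan",  # example name
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["research-data-bucket"]}
        ]
    },
)

# Enable GuardDuty for continuous (24/7) threat monitoring.
guardduty = boto3.client("guardduty")
guardduty.create_detector(Enable=True)
```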
A company is deploying an application in three AWS Regions using an Application Load Balancer Amazon Route 53 will be used to distribute traffic between these Regions.
Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
- A . Create an A record with a latency policy.
- B . Create an A record with a geolocation policy.
- C . Create a CNAME record with a failover policy.
- D . Create a CNAME record with a geoproximity policy.
A
Explanation:
To provide the most high-performing experience for the users of the application, a solutions architect should use a latency routing policy for the Route 53 A record. This policy allows Route 53 to route traffic to the AWS Region that provides the lowest possible latency for the users1. A latency routing policy can also improve the availability of the application, as Route 53 can automatically route traffic to another Region if the primary Region becomes unavailable2.
Reference: 1: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency
2: https://aws.amazon.com/route53/faqs/#Latency_Based_Routing
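A sketch of what the latency records could look like with boto3: one alias A record per Region, all sharing the same name, distinguished by SetIdentifier and Region. The hosted zone ID, domain name, and ALB DNS names are assumed example values.

```python
import boto3

route53 = boto3.client("route53")

# (region, ALB DNS name, ALB hosted zone ID) — example values.
records = [
    ("us-east-1", "alb-use1.example.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "alb-euw1.example.elb.amazonaws.com", "Z32O12XQLNTSW2"),
    ("ap-southeast-1", "alb-apse1.example.elb.amazonaws.com", "Z1LMS91P8CMLE5"),
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": f"app-{region}",
            "Region": region,  # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,  # the ALB's hosted zone
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }
    for region, alb_dns, alb_zone_id in records
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # the domain's hosted zone
    ChangeBatch={"Changes": changes},
)
```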
A company wants to implement new security compliance requirements for its development team to limit the use of approved Amazon Machine Images (AMIs).
The company wants to provide access to only the approved operating system and software for all its Amazon EC2 instances. The company wants the solution to have the least amount of lead time for launching EC2 instances.
Which solution will meet these requirements?
- A . Create a portfolio by using AWS Service Catalog that includes only EC2 instances launched with approved AMIs. Ensure that all required software is preinstalled on the AMIs. Create the necessary permissions for developers to use the portfolio.
- B . Create an AMI that contains the approved operating system and software by using EC2 Image Builder. Give developers access to that AMI to launch the EC2 instances.
- C . Create an AMI that contains the approved operating system. Tell the developers to use the approved AMI. Create an Amazon EventBridge rule to run an AWS Systems Manager script when a new EC2 instance is launched. Configure the script to install the required software from a repository.
- D . Create an AWS Config rule to detect the launch of EC2 instances with an AMI that is not approved. Associate a remediation rule to terminate those instances and launch the instances again with the approved AMI. Use AWS Systems Manager to automatically install the approved software on the launch of an EC2 instance.
A
Explanation:
AWS Service Catalog is designed to allow organizations to manage a catalog of approved products (including AMIs) that users can deploy. By creating a portfolio that contains only EC2 instances launched with preapproved AMIs, the company can enforce compliance with the approved operating system and software for all EC2 instances. Service Catalog also streamlines the process of launching EC2 instances, reducing the lead time while ensuring that developers use only the approved configurations.
Option B (EC2 Image Builder): While EC2 Image Builder helps in creating and managing AMIs, it doesn’t provide the enforcement mechanism that Service Catalog does.
Option C (EventBridge rule and Systems Manager script): This solution is reactive and involves more operational complexity compared to Service Catalog.
Option D (AWS Config rule): This option is reactive (it terminates non-compliant instances after launch) and introduces additional operational overhead.
Reference: AWS Service Catalog
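A sketch of how such a portfolio could be wired up with boto3. The portfolio, product, template URL, and IAM role ARN are all hypothetical; in practice the referenced CloudFormation template would pin the approved AMI ID.

```python
import boto3

sc = boto3.client("servicecatalog")

# Portfolio to hold the approved EC2 product.
portfolio = sc.create_portfolio(
    DisplayName="approved-ec2-instances",
    ProviderName="cloud-team",
)
portfolio_id = portfolio["PortfolioDetail"]["Id"]

# Product backed by a CloudFormation template that launches EC2
# instances only from the approved, preconfigured AMI.
product = sc.create_product(
    Name="approved-ec2-instance",
    Owner="cloud-team",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {
            # Hypothetical template URL.
            "LoadTemplateFromURL": "https://s3.amazonaws.com/templates/approved-ec2.yaml"
        },
    },
)

sc.associate_product_with_portfolio(
    ProductId=product["ProductViewDetail"]["ProductViewSummary"]["ProductId"],
    PortfolioId=portfolio_id,
)

# Grant the developers' IAM principal access to the portfolio.
sc.associate_principal_with_portfolio(
    PortfolioId=portfolio_id,
    PrincipalARN="arn:aws:iam::111122223333:role/Developers",
    PrincipalType="IAM",
)
```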
A company is migrating applications from an on-premises Microsoft Active Directory that the company manages to AWS. The company deploys the applications in multiple AWS accounts. The company uses AWS Organizations to manage the accounts centrally.
The company’s security team needs a single sign-on solution across all the company’s AWS accounts. The company must continue to manage users and groups that are in the on-premises Active Directory.
Which solution will meet these requirements?
- A . Create an Enterprise Edition Active Directory in AWS Directory Service for Microsoft Active Directory. Configure the Active Directory to be the identity source for AWS IAM Identity Center
- B . Enable AWS IAM Identity Center. Configure a two-way forest trust relationship to connect the company’s self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft Active Directory.
- C . Use AWS Directory Service and create a two-way trust relationship with the company’s self-managed Active Directory.
- D . Deploy an identity provider (IdP) on Amazon EC2. Link the IdP as an identity source within AWS IAM Identity Center.
B
Explanation:
The company is looking for a solution that provides single sign-on (SSO) across multiple AWS accounts while continuing to manage users and groups in their on-premises Active Directory (AD). AWS IAM Identity Center (formerly AWS SSO) is the recommended solution for this type of requirement.
AWS IAM Identity Centerprovides a centralized identity management solution, enabling single sign-on across multiple AWS accounts and other cloud applications. It can integrate with on-premises Active Directory to leverage existing users and groups.
By configuring a two-way forest trust relationship between AWS Directory Service for Microsoft
Active Directory and the company’s on-premises Active Directory, users can be authenticated by their on-premises AD and still access AWS resources through IAM Identity Center. This solution allows centralized management of AWS accounts within AWS Organizations.
The two-way trust allows mutual access between the on-premises AD and the AWS Directory Service. This means that users and groups in the on-premises AD can be used for authentication in AWS IAM Identity Center while maintaining the existing identity management system.
Reference: AWS IAM Identity Center Documentation
AWS Directory Service for Microsoft Active Directory Trust Relationships
AWS Directory Service Integration with IAM Identity Center
Why the other options are incorrect:
Option A creates a new managed Active Directory as the identity source instead of continuing to manage users and groups in the existing on-premises AD.
Option C establishes a trust but does not, by itself, provide single sign-on across the AWS accounts; IAM Identity Center is still required.
Option D introduces the operational overhead of deploying and maintaining a self-managed identity provider on EC2.
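A sketch of creating the two-way forest trust from the AWS Managed Microsoft AD side with boto3; the directory ID, domain name, forwarder IPs, and trust password are placeholders, and the matching trust must also be configured on the on-premises domain. IAM Identity Center is then pointed at this directory as its identity source (a console/Identity Center configuration step).

```python
import boto3

ds = boto3.client("ds")

# Two-way forest trust between AWS Managed Microsoft AD and the
# on-premises domain. All values below are placeholders.
ds.create_trust(
    DirectoryId="d-0123456789",
    RemoteDomainName="corp.example.com",
    TrustPassword="example-trust-password",  # must match the on-prem side
    TrustDirection="Two-Way",
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.0.0.10", "10.0.0.11"],  # on-prem DNS
)
```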
A company has an application that is running on Amazon EC2 instances. A solutions architect has standardized the company on a particular instance family and various instance sizes based on the current needs of the company.
The company wants to maximize cost savings for the application over the next 3 years. The company needs to be able to change the instance family and sizes in the next 6 months based on application popularity and usage.
Which solution will meet these requirements MOST cost-effectively?
- A . Compute Savings Plan
- B . EC2 Instance Savings Plan
- C . Zonal Reserved Instances
- D . Standard Reserved Instances
A
Explanation:
Understanding the Requirement: The company wants to maximize cost savings for their application over the next three years, with the flexibility to change the instance family and sizes within the next six months based on application popularity and usage.
Analysis of Options:
Compute Savings Plan: This plan offers the most flexibility, allowing the company to change instance families, sizes, and regions. It applies to EC2, AWS Fargate, and AWS Lambda, offering significant cost savings with this flexibility.
EC2 Instance Savings Plan: This plan is less flexible than the Compute Savings Plan, as it only applies to EC2 instances and allows changes within a specific instance family.
Zonal Reserved Instances: These provide a discount on EC2 instances but are tied to a specific availability zone and instance type, offering the least flexibility.
Standard Reserved Instances: These offer discounts on EC2 instances but with more restrictions compared to Savings Plans, particularly when changing instance types and families.
Best Option for Flexibility and Savings:
The Compute Savings Plan is the most cost-effective solution because it allows the company to maintain flexibility while still achieving significant cost savings. This is critical for adapting to changing application demands without being locked into specific instance types or families.
Reference: AWS Savings Plans
EC2 Instance Types
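For sizing such a commitment, Cost Explorer can recommend a Compute Savings Plan from recent usage. The sketch below assumes Cost Explorer is enabled in the account; the term and payment option mirror the scenario's 3-year horizon.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Recommendation for a 3-year Compute Savings Plan based on the
# last 30 days of usage.
resp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="THREE_YEARS",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

for rec in resp["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationDetails", []
):
    print(rec["HourlyCommitmentToPurchase"], rec["EstimatedSavingsAmount"])
```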
A company has stored millions of objects across multiple prefixes in an Amazon S3 bucket by using the Amazon S3 Glacier Deep Archive storage class. The company needs to delete all data older than 3 years except for a subset of data that must be retained. The company has identified the data that must be retained and wants to implement a serverless solution.
Which solution will meet these requirements?
- A . Use S3 Inventory to list all objects. Use the AWS CLI to create a script that runs on an Amazon EC2 instance that deletes objects from the inventory list.
- B . Use AWS Batch to delete objects older than 3 years except for the data that must be retained.
- C . Provision an AWS Glue crawler to query objects older than 3 years. Save the manifest file of old objects. Create a script to delete objects in the manifest.
- D . Enable S3 Inventory. Create an AWS Lambda function to filter and delete objects. Invoke the Lambda function with S3 Batch Operations to delete objects by using the inventory reports.
D
Explanation:
To meet the requirement of deleting objects older than 3 years while retaining certain data, this solution leverages serverless technologies to minimize operational overhead.
S3 Inventory: S3 Inventory provides a flat file that lists all the objects in an S3 bucket and their metadata, which can be configured to include data such as the last modified date. This inventory can be generated daily or weekly.
AWS Lambda Function: A Lambda function can be created to process the S3 Inventory report, filtering out the objects that need to be retained and identifying those that should be deleted.
S3 Batch Operations: S3 Batch Operations can execute tasks such as object deletion at scale. By invoking the Lambda function through S3 Batch Operations, you can automate the process of deleting the identified objects, ensuring that the solution is serverless and requires minimal operational management.
Why Not Other Options?
Option A (AWS CLI script on EC2): Running a script on an EC2 instance adds unnecessary operational overhead and is not serverless.
Option B (AWS Batch): AWS Batch is designed for running large-scale batch computing workloads, which is overkill for this scenario.
Option C (AWS Glue + script): AWS Glue is more suited for ETL tasks, and this approach would add unnecessary complexity compared to the serverless Lambda solution.
Reference: Amazon S3 Inventory - Information on how to set up and use S3 Inventory.
S3 Batch Operations - Documentation on how to perform bulk operations on S3 objects using S3 Batch Operations.
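A sketch of the Lambda function that an S3 Batch Operations job would invoke for each object listed in the inventory report. The retained-keys set and the 3-year cutoff are illustrative assumptions; the event and response shapes follow the S3 Batch Operations Lambda invocation schema.

```python
import datetime
import urllib.parse

import boto3

s3 = boto3.client("s3")

RETAINED_KEYS = {"prefix/important-object.dat"}  # data that must be kept (example)
CUTOFF = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=3 * 365)


def handler(event, context):
    task = event["tasks"][0]
    bucket = task["s3BucketArn"].split(":::")[-1]
    # Keys in the Batch Operations event arrive URL-encoded.
    key = urllib.parse.unquote_plus(task["s3Key"])

    result_code, result_string = "Succeeded", "skipped"
    try:
        head = s3.head_object(Bucket=bucket, Key=key)
        if key not in RETAINED_KEYS and head["LastModified"] < CUTOFF:
            s3.delete_object(Bucket=bucket, Key=key)
            result_string = "deleted"
    except Exception as exc:  # report per-object failures back to the job
        result_code, result_string = "PermanentFailure", str(exc)

    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": [
            {
                "taskId": task["taskId"],
                "resultCode": result_code,
                "resultString": result_string,
            }
        ],
    }
```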
A company has a nightly batch processing routine that analyzes report files that an on-premises file system receives daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be highly available and resilient. The solution also must minimize operational effort.
Which solution meets these requirements?
- A . Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
- B . Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block Store (Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
- C . Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
- D . Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
D
Explanation:
AWS Transfer for SFTP (part of AWS Transfer Family) is a fully managed, highly available SFTP service, so the company does not have to operate or patch its own SFTP servers. Storing the incoming report files in Amazon S3 provides durable, highly available storage, and an EC2 instance in an Auto Scaling group with a scheduled scaling policy runs the nightly batch job only when it is needed, minimizing operational effort.
Options B and C rely on a self-managed SFTP service on a single EC2 instance, which is neither highly available nor low-maintenance. Option A pairs AWS Transfer with Amazon EFS and a scheduled batch instance, which works but adds file system management compared with the simpler S3-based approach.
Reference:
AWS Transfer Family
Amazon S3
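A minimal sketch of the two managed pieces with boto3: the Transfer Family SFTP endpoint backed by S3, and the scheduled scaling actions for the nightly batch instance. The Auto Scaling group name and cron schedules are assumed example values.

```python
import boto3

# Managed SFTP endpoint backed by Amazon S3; options beyond the
# protocol, identity provider, and domain are left at their defaults.
transfer = boto3.client("transfer")
transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    Domain="S3",
)

# Scheduled scaling: bring up one instance nightly for the batch
# run and scale back to zero afterward.
autoscaling = boto3.client("autoscaling")
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="nightly-batch-asg",  # example name
    ScheduledActionName="start-nightly-batch",
    Recurrence="0 2 * * *",  # 02:00 UTC daily
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="nightly-batch-asg",
    ScheduledActionName="stop-nightly-batch",
    Recurrence="0 6 * * *",  # 06:00 UTC daily
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)
```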
Topic 5, Exam Pool E
A company needs to migrate a legacy application from an on-premises data center to the AWS Cloud because of hardware capacity constraints. The application runs 24 hours a day, 7 days a week. The application database storage continues to grow over time.
What should a solutions architect do to meet these requirements MOST cost-effectively?
- A . Migrate the application layer to Amazon EC2 Spot Instances. Migrate the data storage layer to Amazon S3.
- B . Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon RDS On-Demand Instances.
- C . Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.
- D . Migrate the application layer to Amazon EC2 On-Demand Instances. Migrate the data storage layer to Amazon RDS Reserved Instances.
C
Explanation:
Because the application runs 24 hours a day, 7 days a week, Reserved Instances for the EC2 application layer provide a significant discount over On-Demand pricing. Amazon Aurora storage scales automatically as the database grows, and Aurora Reserved Instances discount the always-on database compute, making option C the most cost-effective combination. Spot Instances (option A) can be interrupted and are unsuitable for a continuously running application, and On-Demand pricing (options B and D) forgoes the available discounts.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraMySQL.html
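A sketch of purchasing an Aurora reservation with boto3. The instance class is an assumed example; the 3-year duration matches the scenario.

```python
import boto3

rds = boto3.client("rds")

# Find a 3-year Aurora MySQL reserved offering for the instance
# class in use; values are illustrative.
offerings = rds.describe_reserved_db_instances_offerings(
    ProductDescription="aurora-mysql",
    DBInstanceClass="db.r6g.large",  # example class
    Duration="94608000",  # 3 years, in seconds
)

offering_id = offerings["ReservedDBInstancesOfferings"][0][
    "ReservedDBInstancesOfferingId"
]
rds.purchase_reserved_db_instances_offering(
    ReservedDBInstancesOfferingId=offering_id,
    DBInstanceCount=1,
)
```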