Practice Free SAA-C03 Exam Online Questions
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective (RPO) and
recovery time objective (RTO) are 24 hours.
Which solution will meet these requirements MOST cost-effectively?
- A . Create a cross-Region read replica and promote the read replica to the primary instance
- B . Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
- C . Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket
- D . Copy automatic snapshots to another Region every 24 hours.
D
Explanation:
This solution is the most cost-effective and meets the RPO and RTO requirements of 24 hours.
Automatic Snapshots: Amazon RDS automatically creates snapshots of your DB instance daily during the configured backup window. By copying these snapshots to another AWS Region every 24 hours, you ensure that a backup is available in a different geographic location, providing disaster recovery capability.
RPO and RTO: Since the company’s RPO and RTO are both 24 hours, copying snapshots daily to another Region is sufficient. In the event of a disaster, you can restore the DB instance from the most recent snapshot in the target Region.
Why Not Other Options?
Option A (Cross-Region Read Replica): This could provide a faster recovery time but is more costly due to the ongoing replication and resource usage in another Region.
Option B (DMS Cross-Region Replication): While effective for continuous replication, it introduces complexity and cost that isn’t necessary given the 24-hour RPO/RTO.
Option C (Cross-Region Native Backup Copy): This involves more manual steps and doesn’t offer as straightforward a solution as automated snapshot copying.
AWS Reference:
Amazon RDS Automated Backups and Snapshots: details on automated backups and snapshots in RDS.
Copying an Amazon RDS DB Snapshot: how to copy DB snapshots to another Region.
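As an illustration of the daily copy step, here is a minimal sketch of the boto3 `copy_db_snapshot` parameters; all identifiers and Regions below are hypothetical examples, not values from the question.

```python
def build_copy_snapshot_params(source_snapshot_arn, target_snapshot_id, kms_key_id=None):
    """Build parameters for an RDS CopyDBSnapshot call.

    The boto3 'rds' client must be created in the *destination* Region;
    the source snapshot is referenced by its full ARN. A KMS key is
    required only when copying an encrypted snapshot across Regions.
    """
    params = {
        "SourceDBSnapshotIdentifier": source_snapshot_arn,
        "TargetDBSnapshotIdentifier": target_snapshot_id,
    }
    if kms_key_id:
        params["KmsKeyId"] = kms_key_id
    return params

# Hypothetical identifiers; the actual daily call would be:
#   boto3.client("rds", region_name="us-west-2").copy_db_snapshot(**params)
params = build_copy_snapshot_params(
    "arn:aws:rds:us-east-1:111122223333:snapshot:rds:mydb-2025-01-01",
    "mydb-dr-copy-2025-01-01",
)
```

Running this copy on a daily schedule (for example, from a scheduled job) keeps the cross-Region copy within the 24-hour RPO.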
A company runs multiple applications in multiple AWS accounts within the same organization in AWS Organizations. A content management system (CMS) runs on Amazon EC2 instances in a VPC. The CMS needs to access shared files from an Amazon Elastic File System (Amazon EFS) file system that is deployed in a separate AWS account. The EFS account is in a separate VPC.
Which solution will meet this requirement?
- A . Mount the EFS file system on the EC2 instances by using the EFS Elastic IP address.
- B . Enable VPC sharing between the two accounts. Use the EFS mount helper to mount the file system on the EC2 instances. Redeploy the EFS file system in a shared subnet.
- C . Configure AWS Systems Manager Run Command to mount the EFS file system on the EC2 instances.
- D . Install the amazon-efs-utils package on the EC2 instances. Add the mount target in the efs-config file. Mount the EFS file system by using the EFS access point.
D
Explanation:
To access an EFS file system across accounts and VPCs, the two VPCs must have network connectivity (for example, VPC peering or AWS Transit Gateway), and the EC2 instances must mount the file system by using the amazon-efs-utils package with the correct mount target or access point.
Using an EFS access point simplifies access management, especially across accounts, by providing a POSIX identity and access policy layer.
VPC sharing doesn’t support EFS directly unless the subnet and resources are shared properly, which requires redeployment.
Therefore, option D is the most complete and correct.
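As a sketch of the access-point side of option D, the parameters for an EFS `create_access_point` call might look like the following; the file system ID, POSIX IDs, and path are hypothetical.

```python
def build_access_point_params(file_system_id, uid, gid, path):
    """Parameters for an EFS CreateAccessPoint call (boto3 'efs' client).

    The access point enforces a POSIX identity and a root directory,
    which is what lets the CMS account mount a scoped view of the
    shared file system. All values below are example placeholders.
    """
    return {
        "FileSystemId": file_system_id,
        "PosixUser": {"Uid": uid, "Gid": gid},
        "RootDirectory": {
            "Path": path,
            "CreationInfo": {
                "OwnerUid": uid,
                "OwnerGid": gid,
                "Permissions": "750",
            },
        },
    }

ap_params = build_access_point_params("fs-12345678", 1000, 1000, "/cms-shared")
```

The CMS instances would then mount through the helper, along the lines of `mount -t efs -o tls,accesspoint=<access-point-id> fs-12345678 /mnt/cms`.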
A company is planning to migrate an on-premises online transaction processing (OLTP) database that uses MySQL to an AWS managed database management system. Several reporting and analytics applications use the on-premises database heavily on weekends and at the end of each month. The cloud-based solution must be able to handle read-heavy surges during weekends and at the end of each month.
Which solution will meet these requirements?
- A . Migrate the database to an Amazon Aurora MySQL cluster. Configure Aurora Auto Scaling to use replicas to handle surges.
- B . Migrate the database to an Amazon EC2 instance that runs MySQL. Use an EC2 instance type that has ephemeral storage. Attach Amazon EBS Provisioned IOPS SSD (io2) volumes to the instance.
- C . Migrate the database to an Amazon RDS for MySQL database. Configure the RDS for MySQL database for a Multi-AZ deployment, and set up auto scaling.
- D . Migrate from the database to Amazon Redshift. Use Amazon Redshift as the database for both OLTP and analytics applications.
A
Explanation:
Amazon Aurora MySQL supports up to 15 low-latency read replicas, and Aurora Auto Scaling automatically adds or removes replicas based on metrics such as average CPU utilization or connection count. This absorbs the predictable read-heavy surges on weekends and at the end of each month, then scales back down to control costs. RDS Multi-AZ (Option C) provides high availability rather than read scaling, ephemeral instance storage (Option B) is unsuitable for a durable OLTP database, and Amazon Redshift (Option D) is a data warehouse that is not designed for OLTP workloads.
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run Amazon Linux in an Auto Scaling group. Each instance stores product manuals on Amazon EBS volumes.
New instances often start with outdated data and may take up to 30 minutes to download updates. The company needs a solution ensuring all instances always have up-to-date product manuals, can scale rapidly, and does not require application code changes.
Which solution will meet these requirements?
- A . Store the product manuals on instance store volumes attached to each EC2 instance.
- B . Store the product manuals in an Amazon S3 bucket. Configure EC2 instances to download updates from the bucket.
- C . Store the product manuals in an Amazon EFS file system. Mount the EFS volume on the EC2 instances.
- D . Store the product manuals in an S3 bucket using S3 Standard-IA. Configure EC2 instances to download updates from S3.
C
Explanation:
Amazon EFS provides a shared, fully managed, POSIX-compliant file system that can be mounted by all EC2 instances. Any update made to the file system is immediately visible to all instances, ensuring every new instance has the latest product manuals without delay.
EFS automatically scales storage and throughput, meeting high-demand conditions with no application changes required.
S3 requires instances to download files locally, causing delay and stale data issues (Options B and D). Instance store volumes (Option A) are ephemeral and not shared, making them unsuitable for consistent data distribution.
A company manages an application that stores data on an Amazon RDS for PostgreSQL Multi-AZ DB instance. High traffic on the application is causing increased latency for many read queries.
A solutions architect must improve the performance of the application.
Which solution will meet this requirement?
- A . Enable Amazon RDS Performance Insights. Configure storage capacity to scale automatically.
- B . Configure the DB instance to use DynamoDB Accelerator (DAX).
- C . Create a read replica of the DB instance. Serve read traffic from the read replica.
- D . Use Amazon Data Firehose between the application and Amazon RDS to increase the concurrency of database requests.
C
Explanation:
For Amazon RDS relational databases experiencing read-heavy workloads, AWS best practice is to create read replicas and offload read traffic to those replicas.
A read replica:
Uses asynchronous replication from the primary.
Can serve read-only queries, thereby reducing load and query latency on the primary.
Can be used behind an application-level read/write splitting mechanism or via endpoints that direct read traffic to replicas.
Performance Insights (Option A) helps diagnose performance problems but does not itself offload traffic.
DAX (Option B) is a cache for DynamoDB, not RDS.
Amazon Data Firehose (Option D) is a streaming delivery service and cannot increase RDS query concurrency.
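The application-level read/write split mentioned above can be sketched as a naive routing function; the endpoints are hypothetical, and a real application would typically rely on driver-level routing or a proxy rather than string inspection.

```python
# Hypothetical endpoints; in practice these come from the RDS console or API.
WRITER_ENDPOINT = "app-postgres.abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "app-postgres-replica-1.abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Naive read/write split: route read-only statements to the replica.

    Statements that begin with SELECT, SHOW, or EXPLAIN go to the read
    replica; everything else goes to the primary. This only illustrates
    the idea of offloading read traffic.
    """
    is_read = sql.lstrip().upper().startswith(("SELECT", "SHOW", "EXPLAIN"))
    return READER_ENDPOINT if is_read else WRITER_ENDPOINT
```

Because replication to the replica is asynchronous, reads routed this way may be slightly stale, which is acceptable for most reporting-style queries.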
A company runs an application that uses Docker containers in an on-premises data center. The application runs on a container host that stores persistent data files in a local volume. Container instances use the stored persistent data.
The company wants to migrate the application to fully managed AWS services.
Which solution will meet these requirements?
- A . Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Attach an Amazon Elastic Block Store (Amazon EBS) volume to an Amazon EC2 instance. Mount the EBS volume on the containers to provide persistent storage.
- B . Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Mount the EFS volume on the containers to provide persistent storage.
- C . Use Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create an Amazon DynamoDB table. Configure the application to use the DynamoDB table for persistent storage.
- D . Use Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Mount the EFS volume on the containers to provide persistent storage.
B
Explanation:
The company wants to move from an on-premises Docker environment to fully managed AWS services with persistent storage.
The best fit is:
Amazon ECS with AWS Fargate launch type: This is a serverless container orchestration solution where AWS manages the underlying infrastructure, removing the need to manage EC2 or Kubernetes nodes.
Amazon EFS (Elastic File System): This is a fully managed, scalable, and shared file system for use with ECS tasks. It supports persistent storage for containers, replacing the local volumes used on-premises.
This combination (ECS + Fargate + EFS) is fully managed and requires no manual server maintenance.
Option A uses EKS with self-managed nodes, which is not fully managed.
Option C (DynamoDB) is for structured key-value storage, not for persistent file storage.
Option D uses ECS with EC2 launch type, which is not serverless and requires managing instances.
Reference: Using Amazon ECS with AWS Fargate
Mounting EFS volumes in ECS tasks
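A minimal sketch of how the EFS volume appears in an ECS task definition (the `volumes` entry passed to `register_task_definition`); the file system and access point IDs are hypothetical.

```python
def build_task_definition_volume(volume_name, file_system_id, access_point_id=None):
    """EFS volume block for an ECS RegisterTaskDefinition call (Fargate).

    Containers then reference volume_name in their mountPoints. Transit
    encryption is enabled, which is required when using an access point.
    All IDs here are example placeholders.
    """
    efs_config = {
        "fileSystemId": file_system_id,
        "transitEncryption": "ENABLED",
    }
    if access_point_id:
        efs_config["authorizationConfig"] = {"accessPointId": access_point_id}
    return {"name": volume_name, "efsVolumeConfiguration": efs_config}

volume = build_task_definition_volume("app-data", "fs-12345678", "fsap-0abc")
# The container definition references the volume by name:
mount_point = {"sourceVolume": "app-data", "containerPath": "/data"}
```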
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the Aurora database by using user names and passwords that the company stores locally in a file.
The company changes the user names and passwords every month. The company wants to minimize the operational overhead of credential management.
Which solution will meet these requirements?
- A . Store the credentials as a secret within AWS Secrets Manager. Assign IAM permissions to the secret. Reconfigure the application to call the secret. Enable rotation on the secret and configure rotation to occur on a monthly schedule.
- B . Use AWS Systems Manager Parameter Store to create a new parameter for the credentials. Use IAM policies to restrict access to the parameter. Reconfigure the application to access the parameter.
- C . Create an Amazon S3 bucket to store objects. Use an AWS Key Management Service (AWS KMS) key to encrypt the objects. Migrate the credentials file to the S3 bucket. Update the application to retrieve the credentials file from the S3 bucket.
- D . Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the encrypted EBS volumes to the EC2 instances. Migrate the credentials file to the new EBS volumes.
A
Explanation:
AWS Secrets Manager is purpose-built to securely store, manage, and rotate sensitive credentials such as database user names and passwords.
Option A is the most operationally efficient solution because it eliminates manual password rotation, reduces human error, and centralizes secret lifecycle management. Secrets Manager integrates natively with Amazon Aurora, enabling automated credential rotation using AWS-managed or custom Lambda rotation logic. Once rotation is enabled, Secrets Manager updates the database credentials and stores the new values securely without requiring administrators to manually update files on EC2 instances.
By assigning IAM permissions to the secret, access can be tightly controlled using least-privilege principles. The application retrieves credentials at runtime, removing the need to store passwords locally on disk, which significantly improves security posture. Secrets Manager also provides auditing capabilities through AWS CloudTrail, allowing visibility into secret access and changes.
Option B (Systems Manager Parameter Store) can securely store secrets, but automated rotation is not natively supported in the same way as Secrets Manager. Implementing rotation with Parameter Store would require additional custom automation, increasing operational complexity.
Option C stores credentials in S3, which is not designed for frequent credential rotation or secure secret access patterns, even when encrypted.
Option D only encrypts credentials at rest on individual instances and does not address rotation, distribution, or centralized management, resulting in high operational overhead.
Therefore, A best meets the requirements by providing secure storage, automated monthly rotation, fine-grained access control, and minimal operational effort, aligning with AWS security and operational excellence best practices.
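As a small sketch of the retrieval side, assuming the standard JSON shape Secrets Manager uses for RDS and Aurora secrets, the application could parse the secret like this:

```python
import json

def parse_db_secret(secret_string: str) -> tuple[str, str]:
    """Extract username and password from a Secrets Manager SecretString.

    The application would fetch the secret with
    boto3.client("secretsmanager").get_secret_value(SecretId=...)
    and pass response["SecretString"] here. The keys below match the
    JSON format Secrets Manager uses for RDS database secrets.
    """
    doc = json.loads(secret_string)
    return doc["username"], doc["password"]

# Example payload in that shape (values are placeholders):
user, pw = parse_db_secret(
    '{"username": "app_user", "password": "s3cret", "host": "aurora.example.com"}'
)
```

Fetching at runtime like this means the instances never hold a local credentials file, so monthly rotation requires no change on the EC2 side.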
A company uses Amazon EC2 instances to host a website. The website uses an Amazon S3 bucket to store media files. The company wants to automate infrastructure creation across multiple Regions and securely grant EC2 access to S3 using IAM.
Which solution will meet these requirements MOST securely?
- A . Store IAM access keys in UserData.
- B . Store access keys in S3 and reference them in CloudFormation.
- C . Use an IAM role and instance profile in CloudFormation.
- D . Retrieve access keys dynamically and store them on EC2.
C
Explanation:
The most secure and AWS-recommended way for EC2 instances to access AWS services is by using IAM roles attached through instance profiles.
Option C correctly implements this pattern.
IAM roles provide temporary credentials via the EC2 metadata service, eliminating the need for long-term access keys. Using CloudFormation ensures consistent, repeatable deployments across Regions while maintaining security best practices.
All other options expose long-term credentials, increasing the risk of compromise and violating AWS security guidance.
Therefore, C is the correct and most secure solution.
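A minimal sketch of the CloudFormation resources for option C, written here as a Python dict in template-JSON form; the logical IDs and bucket ARN are hypothetical.

```python
# Role with an EC2 trust policy plus an instance profile that attaches it.
# An AWS::EC2::Instance resource would reference "S3InstanceProfile" via
# its IamInstanceProfile property.
template_fragment = {
    "S3AccessRole": {
        "Type": "AWS::IAM::Role",
        "Properties": {
            "AssumeRolePolicyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"Service": "ec2.amazonaws.com"},
                    "Action": "sts:AssumeRole",
                }],
            },
            "Policies": [{
                "PolicyName": "MediaBucketAccess",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Action": ["s3:GetObject", "s3:PutObject"],
                        "Resource": "arn:aws:s3:::example-media-bucket/*",
                    }],
                },
            }],
        },
    },
    "S3InstanceProfile": {
        "Type": "AWS::IAM::InstanceProfile",
        "Properties": {"Roles": [{"Ref": "S3AccessRole"}]},
    },
}
```

Deploying this same template through CloudFormation StackSets gives the consistent multi-Region rollout the question asks for.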
A company plans to deploy containerized microservices in the AWS Cloud. The containers must mount a persistent file store that the company can manage by using OS-level permissions. The company requires fully managed services to host the containers and file store.
Which solution will meet these requirements?
- A . Use AWS Lambda functions and an Amazon API Gateway REST API to handle the microservices. Use Amazon S3 buckets for storage.
- B . Use Amazon EC2 instances to host the microservices. Use Amazon Elastic Block Store (Amazon EBS) volumes for storage.
- C . Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to handle the microservices. Use an Amazon Elastic File System (Amazon EFS) file system for storage.
- D . Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to handle the microservices. Use an Amazon EC2 instance that runs a dedicated file store for storage.
C
Explanation:
Amazon ECS on AWS Fargate: AWS Fargate is a serverless compute engine for containers that works with Amazon ECS. It allows you to run containers without managing servers or clusters.
Amazon EFS: Amazon Elastic File System (EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It can be mounted to ECS tasks running on Fargate, allowing containers to access a shared file system with standard file system semantics, including OS-level permissions.
Reference: Using Amazon EFS with Amazon ECS
Amazon EFS: How it works
A company runs an application on premises. The application needs to periodically upload large files to an Amazon S3 bucket. A solutions architect needs a solution to provide the application with short-lived authenticated access to the S3 bucket. The solution must not use long-term credentials. The solution needs to be secure and scalable.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an IAM user that has an access key and a secret key. Store the keys on the on-premises server in an environment variable. Attach a policy to the IAM user that restricts access to only the S3 bucket.
- B . Configure an AWS Site-to-Site VPN connection from the on-premises environment to the company’s VPC. Launch an Amazon EC2 instance with an instance profile. Route all file uploads from the on-premises application through the EC2 instance to the S3 bucket.
- C . Configure an S3 bucket policy to allow access for the on-premises server’s public IP address. Configure the policy to allow PUT operations only from the server’s IP address.
- D . Configure a trust relationship between the on-premises server and AWS Security Token Service (AWS STS). Generate credentials by assuming an IAM role for each upload operation.
D
Explanation:
The best practice to securely provide short-term access to AWS resources, such as Amazon S3, without using long-term credentials, is to use AWS Security Token Service (STS). According to the AWS IAM Best Practices and the Security Pillar of the AWS Well-Architected Framework, temporary credentials should always be used over long-term credentials when possible.
From AWS IAM Documentation:
“Use temporary credentials (IAM roles and AWS STS) instead of long-term access keys. Temporary security credentials are short-term, automatically expire, and are retrieved using AWS STS.”
(Source: IAM Best Practices, IAM User Guide)
Option D outlines the use of a trust relationship and assume-role mechanism via AWS STS. This allows the on-premises application to request temporary, scoped-down credentials to upload files to S3 securely. This approach is:
Secure: uses short-lived credentials with least privilege.
Scalable: no need for EC2 instances or VPN tunnels.
Low operational overhead: no infrastructure to maintain.
AWS-recommended: aligned with security best practices.
In contrast:
Option A uses long-term credentials, which is a security risk.
Option B requires additional infrastructure (EC2, VPN), increasing complexity and cost.
Option C relies on IP-based access, which is insecure and not a form of identity-based authentication.
Reference: AWS IAM Best Practices, "Use temporary credentials"
AWS Security Token Service, "Temporary Security Credentials"
AWS Well-Architected Framework, Security Pillar
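A sketch of the per-upload AssumeRole call from option D; the role ARN and session name are hypothetical placeholders.

```python
def build_assume_role_params(role_arn, session_name, duration_seconds=900):
    """Parameters for an STS AssumeRole call made before each upload.

    The on-premises application would call
    boto3.client("sts").assume_role(**params) and use the returned
    temporary AccessKeyId, SecretAccessKey, and SessionToken for the
    S3 PUT. The credentials expire automatically after the requested
    duration, so no long-term keys are stored on the server.
    """
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "DurationSeconds": duration_seconds,
    }

sts_params = build_assume_role_params(
    "arn:aws:iam::111122223333:role/UploadRole", "onprem-uploader"
)
```

The attached IAM role policy would grant only `s3:PutObject` on the target bucket, keeping each session scoped to least privilege.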
