Free SAA-C03 Practice Exam Questions (Online)
A company runs several applications on Amazon EC2 instances. The company stores configuration files in an Amazon S3 bucket.
A solutions architect must provide the company’s applications with access to the configuration files.
The solutions architect must follow AWS best practices for security.
Which solution will meet these requirements?
- A . Use the AWS account root user access keys.
- B . Use the AWS access key ID and the EC2 secret access key.
- C . Use an IAM role to grant the necessary permissions to the applications.
- D . Activate multi-factor authentication (MFA) and versioning on the S3 bucket.
C
Explanation:
The best security practice when providing EC2 instances access to AWS services (like S3) is to use an IAM role with an instance profile. This avoids hardcoding secrets and enables automatic credential rotation.
“We strongly recommend that you use IAM roles for applications that run on Amazon EC2 instances to securely access AWS services.”
― IAM Roles for Amazon EC2
Benefits:
- No manually managed credentials
- Temporary, automatically rotated keys
- Least-privilege access via IAM policies
Incorrect options:
A: Root user access keys must never be used for programmatic access.
B: Storing long-term access keys on instances is insecure and discouraged.
D: MFA and versioning improve object protection, not access control.
Reference: IAM Best Practices; Using IAM Roles with Amazon EC2
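As a minimal sketch of why the role-based approach is preferred: the application code contains no credentials at all, because boto3 automatically retrieves the role's temporary credentials from the instance metadata service. The bucket and key names below are placeholders, not values from the question.

```python
# Minimal sketch: an application on an EC2 instance reads a configuration
# file from S3. No access keys appear anywhere; boto3 resolves temporary
# credentials from the attached IAM role via the instance metadata service.
import boto3

s3 = boto3.client("s3")  # credentials come from the instance profile

response = s3.get_object(Bucket="example-config-bucket", Key="app/config.json")
config_data = response["Body"].read().decode("utf-8")
print(config_data)
```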
A company runs an application on EC2 instances that need access to RDS credentials stored in AWS Secrets Manager.
Which solution meets this requirement?
- A . Create an IAM role, and attach the role to each EC2 instance profile. Use an identity-based policy to grant the role access to the secret.
- B . Create an IAM user, and attach the user to each EC2 instance profile. Use a resource-based policy to grant the user access to the secret.
- C . Create a resource-based policy for the secret. Use EC2 Instance Connect to access the secret.
- D . Create an identity-based policy for the secret. Grant direct access to the EC2 instances.
A
Explanation:
Option A uses an IAM role attached to the EC2 instance profile, enabling secure, automated access to Secrets Manager. This is the recommended approach.
Option B uses IAM users, which are less secure and harder to manage; an instance profile can hold only a role, not a user.
Option C is not practical for accessing secrets programmatically; EC2 Instance Connect provides SSH access, not API access to secrets.
Option D violates best practices by granting the EC2 instances direct access instead of assuming a role.
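A minimal sketch of what the application code might look like once the role is attached and granted secretsmanager:GetSecretValue on the secret; the secret name is a placeholder.

```python
# Minimal sketch: retrieve RDS credentials from Secrets Manager using the
# instance role's temporary credentials. The secret name is a placeholder.
import json

import boto3

secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="example/rds/credentials")
secret = json.loads(response["SecretString"])  # e.g. {"username": ..., "password": ...}
```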
A company is deploying an application in three AWS Regions using an Application Load Balancer.
Amazon Route 53 will be used to distribute traffic between these Regions.
Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
- A . Create an A record with a latency policy.
- B . Create an A record with a geolocation policy.
- C . Create a CNAME record with a failover policy.
- D . Create a CNAME record with a geoproximity policy.
A
Explanation:
Latency-based routing in Amazon Route 53 is designed to route users to the Region that provides the lowest network latency, based on Amazon’s measurements of latency between AWS Regions and users’ networks. For applications deployed in multiple Regions, this provides the highest performance experience for global users.
Therefore, creating an A record with a latency routing policy is the correct choice.
Geolocation (Option B) routes based on user location, which may not always correspond to the lowest latency.
Failover (Option C) is for active-passive architectures, not performance optimization.
Geoproximity (Option D) is more complex and focused on directing traffic based on geographic bias rather than measured latency.
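A minimal sketch of creating the latency records with boto3; the hosted zone ID, record name, ALB DNS names, and ALB hosted zone IDs are placeholders (each ALB's canonical hosted zone ID comes from the ELB API or console).

```python
# Minimal sketch: latency-based alias A records for ALBs in two Regions.
# Each record needs a unique SetIdentifier and the Region used for latency
# measurement; Route 53 answers with the lowest-latency healthy record.
import boto3

route53 = boto3.client("route53")

def latency_alias(region, alb_dns_name, alb_zone_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": f"app-{region}",  # unique per latency record
            "Region": region,
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,   # the ALB's canonical zone ID
                "DNSName": alb_dns_name,
                "EvaluateTargetHealth": True,
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={"Changes": [
        latency_alias("us-east-1", "alb-use1.example.elb.amazonaws.com", "ZEXAMPLE1"),
        latency_alias("eu-west-1", "alb-euw1.example.elb.amazonaws.com", "ZEXAMPLE2"),
    ]},
)
```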
A company has an application that uses a MySQL database that runs on an Amazon EC2 instance. The instance currently runs in a single Availability Zone. The company requires a fault-tolerant database solution that provides a recovery time objective (RTO) and a recovery point objective (RPO) of 2 minutes or less.
Which solution will meet these requirements?
- A . Migrate the MySQL database to Amazon RDS. Create a read replica in a second Availability Zone. Create a script that detects availability interruptions and promotes the read replica when needed.
- B . Migrate the MySQL database to Amazon RDS for MySQL. Configure the new RDS for MySQL database to use a Multi-AZ deployment.
- C . Create a second MySQL database in a second Availability Zone. Use native MySQL commands to sync the two databases every 2 minutes. Create a script that detects availability interruptions and promotes the second MySQL database when needed.
- D . Create a copy of the EC2 instance that runs the MySQL database. Deploy the copy in a second Availability Zone. Create a Network Load Balancer. Add both instances as targets.
B
Explanation:
Amazon RDS Multi-AZ deployments provide automatic failover for relational databases such as MySQL, ensuring high availability and durability. The feature maintains synchronous replication between a primary DB instance and a standby in a separate Availability Zone. Failover is automatic and typically completes in 60-120 seconds, and because replication is synchronous, no committed data is lost, satisfying the 2-minute RTO and RPO.
Option A relies on asynchronous replication and scripted promotion of a read replica, which risks data loss (RPO) and slower recovery (RTO) than the requirement allows.
Option C depends on custom scripts and manual synchronization, introducing operational risk.
Option D creates active-active EC2-based databases, which do not provide synchronous replication or automated failover.
Therefore, Multi-AZ RDS (B) is the managed, resilient, and operationally efficient solution that meets the business requirements.
Reference:
• Amazon RDS User Guide ― Multi-AZ deployments
• AWS Well-Architected Framework ― Reliability Pillar: High availability and disaster recovery
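A minimal sketch, assuming an existing RDS for MySQL instance under a placeholder identifier, of enabling Multi-AZ with boto3; ApplyImmediately triggers the change outside the maintenance window.

```python
# Minimal sketch: convert an existing RDS for MySQL instance to Multi-AZ.
# RDS provisions a synchronous standby in another AZ; a brief I/O pause
# may occur while the standby is created.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="example-mysql-db",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```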
A machine learning (ML) team is building an application that uses data that is in an Amazon S3 bucket. The ML team needs a storage solution for its model training workflow on AWS. The ML team requires high-performance storage that supports frequent access to training datasets. The storage solution must integrate natively with Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon Elastic Block Store (Amazon EBS) volumes to provide high-performance storage. Use AWS DataSync to migrate data from the S3 bucket to EBS volumes.
- B . Use Amazon EC2 ML instances to provide high-performance storage. Store training data on Amazon EBS volumes. Use the S3 Copy API to copy data from the S3 bucket to EBS volumes.
- C . Use Amazon FSx for Lustre to provide high-performance storage. Store training datasets in Amazon S3 Standard storage.
- D . Use Amazon EMR to provide high-performance storage. Store training datasets in Amazon S3 Glacier Instant Retrieval storage.
C
Explanation:
Amazon FSx for Lustre is a high-performance file system optimized for fast processing of workloads such as machine learning, high-performance computing (HPC), and video processing. It integrates natively with Amazon S3, allowing you to:
- Access S3 data: FSx for Lustre can be linked to an S3 bucket, presenting S3 objects as files in the file system.
- High performance: It provides sub-millisecond latencies, high throughput, and millions of IOPS, which are ideal for ML workloads.
- Minimal operational overhead: As a fully managed service, it removes the complexity of setting up and managing high-performance file systems.
Reference: Amazon FSx for Lustre ― High-Performance File System Integrated with S3
What is Amazon FSx for Lustre?
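A minimal sketch of linking FSx for Lustre to the training bucket; the subnet ID and bucket name are placeholders, and a scratch deployment type is assumed for illustration (persistent deployments can link to S3 through data repository associations instead).

```python
# Minimal sketch: create an FSx for Lustre file system linked to an S3
# bucket. ImportPath presents the bucket's objects as files; ExportPath
# is where changed files can be written back to S3.
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB, the minimum for SCRATCH_2
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-training-data",
        "ExportPath": "s3://example-training-data/results",
    },
)
```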
A company deployed an application in two AWS Regions. If the application fails in one Region, traffic must fail over to the second Region. The failover must avoid stale DNS client caches, and the company requires one endpoint for both Regions.
Which solution meets these requirements?
- A . Use a CloudFront distribution with multiple origins.
- B . Use Route 53 weighted routing with equal weights.
- C . Use AWS Global Accelerator and assign static anycast IPs to the application.
- D . Use Route 53 IP-based routing to switch Regions.
C
Explanation:
AWS Global Accelerator provides static anycast IP addresses that remain constant regardless of Regional failover. AWS directs traffic to the optimal healthy Region without relying on DNS TTL values, eliminating the risk of stale DNS caches.
Route 53 routing (Options B and D) still depends on DNS caching behavior. CloudFront (Option A) is for content delivery, not Regional failover for applications.
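A minimal sketch of the Global Accelerator setup with boto3; the accelerator name and ALB ARNs are placeholders. Global Accelerator returns two static anycast IPs that clients use regardless of which Region is serving traffic.

```python
# Minimal sketch: an accelerator with one TCP/443 listener and an endpoint
# group per Region, each pointing at that Region's ALB. Health checks steer
# traffic away from an unhealthy Region without any DNS change.
import boto3

# Global Accelerator's control-plane API is served from us-west-2
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="example-app", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

for region, alb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/example/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
    )
```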
A company runs a workload in an AWS Region. Users connect to the workload by using an Amazon API Gateway REST API.
The company uses Amazon Route 53 as its DNS provider and has created a Route 53 Hosted Zone.
The company wants to provide unique and secure URLs for all workload users.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Select THREE.)
- A . Create a wildcard custom domain name in the Route 53 hosted zone as an alias for the API Gateway endpoint.
- B . Use AWS Certificate Manager (ACM) to request a wildcard certificate that matches the custom domain in a second Region.
- C . Create a hosted zone for each user in Route 53. Create zone records that point to the API Gateway endpoint.
- D . Use AWS Certificate Manager (ACM) to request a wildcard certificate that matches the custom domain name in the same Region.
- E . Use API Gateway to create multiple API endpoints for each user.
- F . Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).
A,D,F
Explanation:
To provide unique, secure URLs efficiently:
A: Create a wildcard custom domain in Route 53 as an alias for the API Gateway endpoint.
D: Request a wildcard certificate in ACM in the same Region as API Gateway (certificates must be in the same Region as the API).
F: Create a custom domain name in API Gateway and attach the certificate.
“You can configure a custom domain name for your API Gateway API. To use a wildcard certificate, request it from ACM in the same Region as your API.”
― API Gateway Custom Domain Names
This combination provides secure wildcard URLs without creating separate endpoints or hosted zones per user.
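A minimal sketch of steps D and F with boto3, assuming the wildcard certificate has already been issued by ACM in the same Region; the domain name, certificate ARN, and API ID are placeholders.

```python
# Minimal sketch: attach an ACM wildcard certificate to a Regional API
# Gateway custom domain name, then map a deployed stage to it. A Route 53
# alias record for the wildcard domain (step A) completes the setup.
import boto3

apigw = boto3.client("apigateway")

apigw.create_domain_name(
    domainName="*.api.example.com",  # placeholder wildcard domain
    regionalCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/abc-123",
    endpointConfiguration={"types": ["REGIONAL"]},
)

apigw.create_base_path_mapping(
    domainName="*.api.example.com",
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    stage="prod",
)
```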
A company needs to give a globally distributed development team secure access to the company’s AWS resources in a way that complies with security policies.
The company currently uses an on-premises Active Directory for internal authentication. The company uses AWS Organizations to manage multiple AWS accounts that support multiple projects.
The company needs a solution to integrate with the existing infrastructure to provide centralized identity management and access control.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Set up AWS Directory Service to create an AWS managed Microsoft Active Directory on AWS. Establish a trust relationship with the on-premises Active Directory. Use IAM roles that are assigned to Active Directory groups to access AWS resources within the company’s AWS accounts.
- B . Create an IAM user for each developer. Manually manage permissions for each IAM user based on each user’s involvement with each project. Enforce multi-factor authentication (MFA) as an additional layer of security.
- C . Use AD Connector in AWS Directory Service to connect to the on-premises Active Directory. Integrate AD Connector with AWS IAM Identity Center. Configure permissions sets to give each AD group access to specific AWS accounts and resources.
- D . Use Amazon Cognito to deploy an identity federation solution. Integrate the identity federation solution with the on-premises Active Directory. Use Amazon Cognito to provide access tokens for developers to access AWS accounts and resources.
C
Explanation:
Using AD Connector with AWS IAM Identity Center (formerly AWS Single Sign-On) allows the company to leverage its existing on-premises Active Directory for centralized identity management and access control. AD Connector acts as a proxy to the on-premises AD without requiring additional infrastructure or complex setup. This solution integrates seamlessly with AWS, allowing the development team to use their existing AD credentials to access AWS resources across multiple accounts managed by AWS Organizations. Permissions for AWS resources are managed centrally in IAM Identity Center by configuring permission sets.
This solution provides:
- Least operational overhead: AD Connector is fully managed, and IAM Identity Center allows centralized management of permissions across accounts.
- Secure access: The solution complies with security policies by using the existing AD authentication mechanisms.
Option A (AWS Managed AD): Setting up a fully managed AWS AD and establishing a trust is more complex and involves additional operational overhead.
Option B (IAM Users): Manually managing IAM users and permissions is less scalable and increases operational complexity.
Option D (Cognito): Amazon Cognito is more suited for user-facing applications rather than internal identity management for AWS resources.
Reference: AD Connector with IAM Identity Center
AWS IAM Identity Center
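A minimal sketch of the permission-set step in IAM Identity Center with boto3; the Identity Center instance ARN, account ID, and AD group ID are placeholders.

```python
# Minimal sketch: create a permission set and assign it to an AD group
# (visible through AD Connector) for one member account. Repeating the
# assignment per account gives each group scoped access across the org.
import boto3

sso_admin = boto3.client("sso-admin")

permission_set = sso_admin.create_permission_set(
    InstanceArn="arn:aws:sso:::instance/ssoins-example",  # placeholder
    Name="DeveloperAccess",
    SessionDuration="PT8H",
)

sso_admin.create_account_assignment(
    InstanceArn="arn:aws:sso:::instance/ssoins-example",
    TargetId="111122223333",  # placeholder member account
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set["PermissionSet"]["PermissionSetArn"],
    PrincipalType="GROUP",
    PrincipalId="example-ad-group-guid",  # placeholder group ID
)
```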
A company stores a file in an S3 bucket containing IP allow/deny lists. The file must be accessible via an HTTP endpoint. Firewalls outside AWS must read the file. The company wants to restrict access to only the firewall IP addresses.
The S3 Block Public Access feature is enabled on the account.
Which solution meets these requirements?
- A . Host the bucket as a static website and restrict access by IP.
- B . Create a bucket policy that explicitly allows access only from the firewall IP addresses.
- C . Create a CloudFront distribution with the S3 bucket as the origin. Use an origin access control (OAC) that allows access only from the firewall IP addresses.
- D . Create a Lambda function to validate IP addresses and return the lists.
B
Explanation:
S3 Block Public Access blocks policies that grant public (anonymous, unrestricted) access. A bucket policy that restricts access with the aws:SourceIp condition is not considered public, so it is still permitted.
An S3 bucket policy can explicitly permit access only from specific source IP addresses by using the aws:SourceIp condition key. This allows secure, direct HTTP access to the S3 object from the known firewall IP addresses.
Static website hosting (Option A) requires the bucket to be public and is blocked by the enabled S3 Block Public Access setting.
CloudFront with OAC (Option C) is unnecessary and adds cost and complexity.
Lambda (Option D) introduces operational overhead and is not needed since S3 policies can enforce IP restrictions directly.
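A minimal sketch of such a bucket policy applied with boto3; the bucket name and CIDR ranges are placeholders.

```python
# Minimal sketch: allow s3:GetObject only from the firewalls' public IP
# ranges. Because the statement is conditioned on aws:SourceIp, S3 does
# not treat it as a public policy under Block Public Access.
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowFirewallIPsOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-allowlist-bucket/*",
        "Condition": {
            "IpAddress": {"aws:SourceIp": ["203.0.113.0/24", "198.51.100.7/32"]}
        },
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-allowlist-bucket",
    Policy=json.dumps(policy),
)
```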
A company is building a serverless application to process ecommerce orders. The application must handle bursts of traffic and process orders asynchronously in the order received.
Which solution will meet these requirements?
- A . Use Amazon SNS with AWS Lambda.
- B . Use Amazon SQS FIFO with AWS Lambda.
- C . Use Amazon SQS standard with AWS Batch.
- D . Use Amazon SNS with AWS Batch.
B
Explanation:
The key requirements are asynchronous processing, high availability, burst handling, and strict message ordering. Amazon SQS FIFO queues are specifically designed to guarantee exactly-once processing and ordered message delivery, making them ideal for transactional workflows like ecommerce order processing.
Option B meets all requirements. SQS FIFO preserves the order of messages within a message group and scales automatically to absorb traffic spikes. Integrating SQS FIFO with AWS Lambda enables serverless, event-driven processing with automatic scaling and no infrastructure management. Lambda processes messages as they arrive while maintaining order guarantees.
Option A (SNS) does not guarantee ordering and is designed for fan-out messaging.
Option C (SQS standard) scales well but does not preserve order.
Option D introduces unnecessary batch infrastructure and increased latency.
Therefore, B is the best solution because it combines ordered delivery, resilience, elasticity, and serverless compute, ensuring reliable and scalable order processing.
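A minimal sketch of an order producer for the FIFO queue; the queue URL and order payload are placeholders.

```python
# Minimal sketch: enqueue an order on an SQS FIFO queue. MessageGroupId
# preserves ordering within the group; MessageDeduplicationId (or
# content-based deduplication on the queue) prevents duplicate orders.
# A Lambda event source mapping on the queue processes messages in order.
import json

import boto3

sqs = boto3.client("sqs")

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo",
    MessageBody=json.dumps({"orderId": "12345", "items": ["sku-1", "sku-2"]}),
    MessageGroupId="orders",               # ordering is per message group
    MessageDeduplicationId="order-12345",  # placeholder unique order ID
)
```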
