Practice Free SAA-C03 Exam Online Questions
A company needs to archive an on-premises relational database. The company wants to retain the data. The company needs to be able to run SQL queries on the archived data to create annual reports.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Database Migration Service (AWS DMS) to migrate the on-premises database to an Amazon RDS instance. Retire the on-premises database. Maintain the RDS instance in a stopped state until the data is needed for reports.
- B . Set up database replication from the on-premises database to an Amazon EC2 instance. Retire the on-premises database. Make a snapshot of the EC2 instance. Maintain the EC2 instance in a stopped state until the data is needed for reports.
- C . Create a database backup on premises. Use AWS DataSync to transfer the data to Amazon S3. Create an S3 Lifecycle configuration to move the data to S3 Glacier Deep Archive. Restore the backup to Amazon EC2 instances to run reports.
- D . Use AWS Database Migration Service (AWS DMS) to migrate the on-premises databases to Amazon S3 in Apache Parquet format. Store the data in S3 Glacier Flexible Retrieval. Use Amazon Athena to run reports.
D
Explanation:
Amazon S3 is the most cost-effective option for archiving data. Using AWS DMS to migrate the database to S3 in Apache Parquet format produces a columnar format that is optimized for analytics. Storing the data in S3 Glacier Flexible Retrieval minimizes cost while meeting the retention requirement. When queries are needed, Athena can run SQL directly on the archived data in S3 without provisioning any infrastructure. Options A and B rely on maintaining RDS or EC2 instances, which increases cost and operational overhead.
Option C requires full restores to EC2 before running queries, which is slow and inefficient.
Therefore, D provides the lowest operational overhead and direct query capability with Athena.
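For illustration, here is a minimal boto3 sketch of running a report query with Athena. The database name, table name, and results bucket are hypothetical and assume the Parquet files have already been cataloged (for example, with an AWS Glue crawler):

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start an asynchronous SQL query against the archived Parquet data in S3.
# "archive_db" and "customer_orders" are hypothetical Glue Data Catalog names.
response = athena.start_query_execution(
    QueryString="""
        SELECT order_year, COUNT(*) AS order_count, SUM(order_total) AS revenue
        FROM customer_orders
        GROUP BY order_year
    """,
    QueryExecutionContext={"Database": "archive_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/annual-reports/"},
)
print("Query execution ID:", response["QueryExecutionId"])
```

No clusters or instances are provisioned; Athena charges per query, which is what keeps the operational overhead low.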
Reference:
• AWS DMS Documentation ― Migrating databases to Amazon S3 in Parquet format
• Amazon Athena User Guide ― Querying data stored in S3
• AWS Well-Architected Framework ― Cost Optimization Pillar
A company has an application that serves clients that are deployed in more than 20,000 retail storefront locations around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail locations communicate with the web application over the public internet. The company allows each retail location to register the IP address that the retail location has been allocated by its local ISP.
The company’s security team recommends increasing the security of the application endpoint by restricting access to only the IP addresses registered by the retail locations.
What should a solutions architect do to meet these requirements?
- A . Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.
- B . Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.
- C . Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on the ALB to validate that incoming requests are from the registered IP addresses.
- D . Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress rules on the network ACL with entries for each of the registered IP addresses.
A
Explanation:
AWS WAF (Web Application Firewall): AWS WAF allows you to create custom rules to block or allow web requests based on conditions that you specify.
Web ACL (Access Control List):
Create a web ACL and associate it with the ALB.
Use IP rule sets to specify the IP addresses of the retail locations that are allowed to access the application.
Security and Flexibility:
AWS WAF provides a scalable way to manage access control, ensuring that only traffic from registered IP addresses is allowed.
You can dynamically update the IP rule sets to add or remove IP addresses as needed.
Operational Simplicity: Using AWS WAF with a web ACL is straightforward and integrates seamlessly with the ALB, providing an efficient solution for managing access control based on IP addresses.
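As a minimal sketch of how the registered addresses could be managed with boto3 and AWS WAFv2 (the IP set name and CIDR ranges are hypothetical; updating an IP set requires the current lock token):

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Create an IP set holding the registered retail-location addresses.
# For an ALB association the scope must be REGIONAL.
ip_set = wafv2.create_ip_set(
    Name="retail-location-ips",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "198.51.100.24/32"],  # example CIDRs
)["Summary"]

# Later, replace the address list as locations register or change ISPs.
# update_ip_set needs the LockToken from a fresh get_ip_set call.
current = wafv2.get_ip_set(Name="retail-location-ips", Scope="REGIONAL", Id=ip_set["Id"])
wafv2.update_ip_set(
    Name="retail-location-ips",
    Scope="REGIONAL",
    Id=ip_set["Id"],
    Addresses=["203.0.113.10/32", "198.51.100.24/32", "192.0.2.7/32"],
    LockToken=current["LockToken"],
)
```

The web ACL would then allow traffic that matches this IP set, use a default block action, and be associated with the ALB (for example, via wafv2.associate_web_acl).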
Reference: AWS WAF
How AWS WAF Works
A developer is creating a serverless application that performs video encoding. The encoding process runs as background jobs and takes several minutes to encode each video. The process must not send an immediate result to users.
The developer is using Amazon API Gateway to manage an API for the application. The developer needs to run test invocations and request validations. The developer must distribute API keys to control access to the API.
Which solution will meet these requirements?
- A . Create an HTTP API. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the HTTP API. Use the Event invocation type to call the Lambda function.
- B . Create a REST API with the default endpoint type. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the REST API. Use the Event invocation type to call the Lambda function.
- C . Create an HTTP API. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the HTTP API. Use the Request Response invocation type to call the Lambda function.
- D . Create a REST API with the default endpoint type. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the REST API. Use the Request Response invocation type to call the Lambda function.
B
Explanation:
Background Jobs with Event Invocation Type:
The Event invocation type is asynchronous: Lambda queues the request, processes it in the background, and does not return an immediate result to API Gateway. This is ideal for video encoding tasks that take several minutes.
REST API vs. HTTP API:
REST APIs support advanced features like API keys, request validation, and throttling that HTTP APIs do not support fully.
Since the developer needs API keys and request validations, a REST API is the correct choice.
Integration with Lambda:
AWS Lambda integration is seamless with REST APIs, and using the Event invocation type ensures asynchronous processing.
Incorrect Options Analysis:
Option A: HTTP APIs lack full support for API keys and request validation.
Options C and D: The RequestResponse invocation type returns a synchronous response, which is unsuitable for background jobs.
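The difference between the two invocation types shows up directly in the Lambda API. A minimal sketch (the function name and payload are hypothetical):

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Asynchronous invocation: Lambda queues the event and returns HTTP 202
# immediately, without waiting for the encoding job to finish.
response = lambda_client.invoke(
    FunctionName="video-encoder",  # hypothetical function name
    InvocationType="Event",
    Payload=json.dumps({"videoId": "abc-123"}),
)
print(response["StatusCode"])  # 202 for Event, 200 for RequestResponse
```

In API Gateway, a non-proxy AWS integration can achieve the same behavior by setting the X-Amz-Invocation-Type integration request header to 'Event'.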
Reference: AWS Lambda Invocation Types
Amazon API Gateway REST APIs
A company has a serverless web application that consists of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application’s ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.
Which solution will meet these requirements?
- A . Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.
- B . Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.
- C . Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.
- D . Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.
A
Explanation:
Key Requirements:
Handle traffic spikes efficiently and reduce latency caused by cold starts.
Optimize costs during low traffic periods.
Analysis of Options:
Option A:
Provisioned Concurrency: Reduces cold start latency by pre-warming Lambda environments for the required number of concurrent executions.
AWS Application Auto Scaling: Automatically adjusts provisioned concurrency based on demand, ensuring cost optimization by scaling down during low traffic.
Correct Approach: Provides a balance between performance during traffic spikes and cost optimization during idle periods.
Option B:
Using EC2 instances with Auto Scaling introduces unnecessary complexity for a serverless architecture. It requires additional management and does not address the issue of cold starts for Lambda.
Incorrect Approach: Contradicts the serverless design philosophy and increases operational overhead.
Option C:
Setting a fixed concurrency level ensures performance during spikes but does not optimize costs during low traffic. This approach would maintain provisioned instances unnecessarily.
Incorrect Approach: Lacks cost optimization.
Option D:
Using EventBridge Scheduler for periodic invocations may reduce cold starts but does not dynamically scale based on traffic demand. It also leads to unnecessary invocations during idle times.
Incorrect Approach: Suboptimal for high traffic fluctuations and cost control.
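A minimal boto3 sketch of option A follows; the function name, alias, and capacity values are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")
autoscaling = boto3.client("application-autoscaling")

# Pre-warm a baseline of execution environments on a published alias.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="spiky-api-handler",  # hypothetical function name
    Qualifier="live",  # provisioned concurrency requires an alias or version
    ProvisionedConcurrentExecutions=10,
)

# Register the alias as a scalable target so Application Auto Scaling
# can raise and lower provisioned concurrency with demand.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:spiky-api-handler:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=2,
    MaxCapacity=100,
)

# Target tracking keeps provisioned-concurrency utilization near 70%,
# scaling in during quiet periods to control cost.
autoscaling.put_scaling_policy(
    PolicyName="pc-utilization-70",
    ServiceNamespace="lambda",
    ResourceId="function:spiky-api-handler:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)
```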
Reference: AWS Lambda Provisioned Concurrency
AWS Application Auto Scaling with Lambda
A global company runs a data lake application in the us-east-1 Region and the eu-west-1 Region in an active-passive configuration. Application data is stored locally in Amazon S3 buckets in each AWS Region. The bucket in us-east-1 is the primary active bucket that handles all writes. The company needs to ensure that the application has Regional fault tolerance. The company also needs the storage layer to provide a highly available active-active capability for reads across Regions. The storage layer must provide low latency access through a single global endpoint.
Which solution will meet these requirements?
- A . Create an Amazon CloudFront distribution in each Region. Set the S3 bucket within each Region as the origin for the CloudFront distribution in the same Region.
- B . Use S3 Transfer Acceleration for cross-Region data transfers to the S3 buckets.
- C . Configure AWS Backup to replicate S3 buckets across Regions. Set up a disaster recovery environment.
- D . Create an S3 Multi-Region Access Point. Configure cross-Region replication.
D
Explanation:
Amazon S3 Multi-Region Access Points allow applications to access S3 buckets in multiple Regions through a single global endpoint. This provides active-active read access with automatic routing to the closest bucket for low latency. With cross-Region replication, writes in the primary Region are automatically copied to the secondary Region, providing fault tolerance.
Option A (CloudFront) provides caching and distribution, but does not address write replication or active-active bucket access.
Option B (Transfer Acceleration) optimizes uploads across distances but does not enable cross-Region fault tolerance.
Option C (AWS Backup) is designed for backup/restore, not real-time multi-Region reads and writes.
Therefore, D is the correct solution for active-active read access and disaster recovery.
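Creating the access point is a single control-plane call. A minimal boto3 sketch (the account ID and bucket names are hypothetical; both buckets must already exist):

```python
import boto3

# Multi-Region Access Point control-plane requests are served from us-west-2.
s3control = boto3.client("s3control", region_name="us-west-2")

# Create a Multi-Region Access Point spanning the two Regional buckets.
# Requests to the resulting global endpoint are routed to the closest Region.
response = s3control.create_multi_region_access_point(
    AccountId="111122223333",  # hypothetical account ID
    Details={
        "Name": "datalake-global",
        "Regions": [
            {"Bucket": "datalake-us-east-1-example"},
            {"Bucket": "datalake-eu-west-1-example"},
        ],
    },
)
print("Async request token:", response["RequestTokenARN"])
```

Cross-Region replication between the two buckets is configured separately (for example, with the S3 put_bucket_replication API) so that writes to us-east-1 also appear in eu-west-1.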
Reference:
• Amazon S3 Multi-Region Access Points ― Global endpoint access and routing
• AWS Well-Architected Framework ― Reliability Pillar: Multi-Region design
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin.
Which solution meets these requirements MOST cost-effectively?
- A . Deploy an AWS Global Accelerator accelerator in front of the web servers.
- B . Deploy an Amazon CloudFront web distribution in front of the S3 bucket.
- C . Deploy an Amazon ElastiCache (Redis OSS) instance in front of the web servers.
- D . Deploy an Amazon ElastiCache (Memcached) instance in front of the web servers.
B
Explanation:
Amazon CloudFront is a highly cost-effective CDN that caches content like images and videos at edge locations globally. This reduces latency and the load on the origin S3 bucket. It is ideal for static content that is accessed by many users.
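A minimal boto3 sketch of option B follows; the bucket domain name is hypothetical, and the cache policy ID is the published AWS managed CachingOptimized value:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution with the S3 bucket as origin. The managed
# "CachingOptimized" cache policy suits static media files.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Cache game media at edge locations",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "media-s3-origin",
                "DomainName": "example-media-bucket.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "media-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized
        },
    }
)
```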
Reference: AWS Documentation ― Amazon CloudFront with S3 Integration
A company hosts its main public web application in one AWS Region across multiple Availability Zones. The application uses an Amazon EC2 Auto Scaling group and an Application Load Balancer (ALB).
A web development team needs a cost-optimized compute solution to improve the company’s ability to serve dynamic content globally to millions of customers.
Which solution will meet these requirements?
- A . Create an Amazon CloudFront distribution. Configure the existing ALB as the origin.
- B . Use Amazon Route 53 to serve traffic to the ALB and EC2 instances based on the geographic location of each customer.
- C . Create an Amazon S3 bucket with public read access enabled. Migrate the web application to the S3 bucket. Configure the S3 bucket for website hosting.
- D . Use AWS Direct Connect to directly serve content from the web application to the location of each customer.
A
Explanation:
Amazon CloudFront is a global content delivery network (CDN) that caches and distributes content to users with low latency. By setting the existing ALB as the origin, CloudFront can cache dynamic and static content closer to users worldwide, improving performance and reducing the load on the application servers. This is the most cost-optimized and AWS-recommended way to globally serve dynamic content for web applications.
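The distribution setup mirrors the S3 sketch in the previous question; the difference is that an ALB is a custom origin, so the origin entry would look roughly like this (the ALB DNS name is hypothetical):

```python
# Custom origin entry for an ALB; CloudFront fetches dynamic content over HTTPS.
alb_origin = {
    "Id": "app-alb-origin",
    "DomainName": "my-app-alb-1234567890.us-east-1.elb.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",
    },
}
```

Even non-cacheable dynamic responses benefit from CloudFront's persistent connections and optimized network paths back to the origin.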
Reference Extract:
"CloudFront accelerates delivery of both static and dynamic content, reducing latency and offloading origin resources by caching content at edge locations."
Source: AWS Certified Solutions Architect ― Official Study Guide, CloudFront and Global Application section.
A company is planning to migrate customer records to an Amazon S3 bucket. The company needs to ensure that customer records are protected against unauthorized access and are encrypted in transit and at rest. The company must monitor all access to the S3 bucket.
Which solution will meet these requirements?
- A . Use AWS Key Management Service (AWS KMS) to encrypt customer records at rest. Create an S3 bucket policy that includes the aws:SecureTransport condition. Use an IAM policy to control access to the records. Use AWS CloudTrail to monitor access to the records.
- B . Use AWS Nitro Enclaves to encrypt customer records at rest. Use AWS Key Management Service (AWS KMS) to encrypt the records in transit. Use an IAM policy to control access to the records. Use AWS CloudTrail and AWS Security Hub to monitor access to the records.
- C . Use AWS Key Management Service (AWS KMS) to encrypt customer records at rest. Create an Amazon Cognito user pool to control access to the records. Use AWS CloudTrail to monitor access to the records. Use Amazon GuardDuty to detect threats.
- D . Use server-side encryption with Amazon S3 managed keys (SSE-S3) with default settings to encrypt the records at rest. Access the records by using an Amazon CloudFront distribution that uses the S3 bucket as the origin. Use IAM roles to control access to the records. Use Amazon CloudWatch to monitor access to the records.
A
Explanation:
Encryption at Rest: AWS Key Management Service (AWS KMS) provides centralized control over the cryptographic keys used to protect data. By using AWS KMS with Amazon S3, you can manage encryption keys and define policies to control access to them.
Encryption in Transit: By enforcing the aws:SecureTransport condition in the S3 bucket policy, you ensure that all data is transmitted over HTTPS, protecting data in transit.
Access Control: IAM policies allow you to define fine-grained permissions for users and roles, ensuring that only authorized entities can access the customer records.
Monitoring: AWS CloudTrail provides a record of actions taken by a user, role, or AWS service in Amazon S3, enabling you to monitor access to the records.
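A minimal boto3 sketch of the two encryption controls (the bucket name, account ID, and KMS key ARN are hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-customer-records"  # hypothetical bucket name

# Encryption at rest: default all new objects to SSE-KMS with a customer managed key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
            }
        }]
    },
)

# Encryption in transit: deny any request that does not arrive over HTTPS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```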
Reference: Protecting data using encryption
Using IAM policies to control access to Amazon S3 resources
Logging Amazon S3 API calls using AWS CloudTrail
A company has a web application that uses several web servers that run on Amazon EC2 instances.
The instances use a shared Amazon RDS for MySQL database.
The company requires a secure method to store database credentials. The credentials must be automatically rotated every 30 days without affecting application availability.
Which solution will meet these requirements?
- A . Store database credentials in AWS Secrets Manager. Create an AWS Lambda function to automatically rotate the credentials. Use Amazon EventBridge to run the Lambda function on a schedule. Grant the necessary IAM permissions to allow the web servers to access Secrets Manager.
- B . Store database credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to access OpsCenter.
- C . Store database credentials in an Amazon S3 bucket. Create an AWS Lambda function to automatically rotate the credentials. Use Amazon EventBridge to run the Lambda function on a schedule. Grant the necessary IAM permissions to allow the web servers to retrieve credentials from the S3 bucket.
- D . Store the credentials in a local file on each of the web servers. Use an AWS KMS key to encrypt the credentials. Create a cron job on each server to rotate the credentials every 30 days.
A
Explanation:
AWS Secrets Manager is a fully managed service specifically designed to securely store and automatically rotate database credentials, API keys, and other secrets. Secrets Manager provides built-in integration with Amazon RDS for automatic credential rotation on a configurable schedule without requiring downtime. It also manages the secure distribution of the credentials to authorized services, such as your web servers, using IAM policies. Manual solutions (S3, files, cron jobs) do not provide the same level of automation, audit, or security.
Reference Extract from AWS Documentation / Study Guide:
"AWS Secrets Manager enables you to rotate, manage, and retrieve database credentials securely. It supports automatic rotation of secrets for supported AWS databases without requiring application downtime."
Source: AWS Certified Solutions Architect ― Official Study Guide, Security and Secrets Management section.
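A minimal boto3 sketch of enabling 30-day rotation (the secret name and rotation Lambda ARN are hypothetical):

```python
import boto3

secrets = boto3.client("secretsmanager")

# Turn on automatic rotation every 30 days, handled by a rotation Lambda.
secrets.rotate_secret(
    SecretId="prod/mysql/app-credentials",  # hypothetical secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-mysql-rotator",
    RotationRules={"AutomaticallyAfterDays": 30},
)

# Web servers fetch the current credentials at runtime with IAM-authorized calls,
# so a rotation never requires a configuration change on the instances.
value = secrets.get_secret_value(SecretId="prod/mysql/app-credentials")
credentials = value["SecretString"]  # JSON string containing username and password
```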
A company is developing a content sharing platform that currently handles 500 GB of user-generated media files. The company expects the amount of content to grow significantly in the future. The company needs a storage solution that can automatically scale, provide high durability, and allow direct user uploads from web browsers.
Which solution will meet these requirements?
- A . Store the data in an Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled.
- B . Store the data in an Amazon Elastic File System (Amazon EFS) Standard file system.
- C . Store the data in an Amazon S3 Standard bucket.
- D . Store the data in an Amazon S3 Express One Zone bucket.
C
Explanation:
Amazon S3 Standard provides virtually unlimited scalability, high durability (11 nines), and millisecond latency. It is designed for storing large volumes of unstructured content such as media files. S3 also supports pre-signed URLs and direct browser uploads, enabling users to upload files securely without passing through backend servers. EBS volumes (A) are block storage, limited to a single Availability Zone, and not suitable for web-scale storage. EFS (B) is a shared file system for POSIX workloads, not for direct browser uploads. S3 Express One Zone (D) offers higher performance for small objects but does not provide cross-AZ durability, making it unsuitable for growing global content. Therefore, option C is the most scalable, durable, and cost-effective solution.
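The direct-browser-upload point can be illustrated with a pre-signed POST: the backend signs the request, and the browser uploads straight to S3. A minimal boto3 sketch (the bucket name and size limit are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Generate form fields that let a browser POST a file directly to S3,
# valid for one hour, without routing the bytes through backend servers.
post = s3.generate_presigned_post(
    Bucket="example-media-uploads",  # hypothetical bucket name
    Key="uploads/${filename}",
    Conditions=[["content-length-range", 0, 1024 * 1024 * 1024]],  # cap uploads at 1 GB
    ExpiresIn=3600,
)
print(post["url"])     # the form action URL
print(post["fields"])  # hidden form fields to include in the upload
```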
Reference:
• Amazon S3 User Guide ― Direct browser uploads and durability
• AWS Well-Architected Framework ― Performance Efficiency Pillar
