Practice Free SAA-C03 Exam Online Questions
A company is developing a social media application that must scale to meet demand spikes and handle ordered processes.
Which AWS services meet these requirements?
- A . ECS with Fargate, RDS, and SQS for decoupling.
- B . ECS with Fargate, RDS, and SNS for decoupling.
- C . DynamoDB, Lambda, DynamoDB Streams, and Step Functions.
- D . Elastic Beanstalk, RDS, and SNS for decoupling.
A
Explanation:
Option A combines ECS with Fargate for scalability, RDS for relational data, and SQS for decoupling with message ordering (FIFO queues).
Option B uses SNS; standard SNS topics do not maintain message order.
Option C is suitable for serverless workflows but not for relational data.
Option D relies on Elastic Beanstalk, which offers less flexibility for scaling.
A company has AWS Lambda functions that use environment variables. The company does not want its developers to see environment variables in plaintext.
Which solution will meet these requirements?
- A . Deploy code to Amazon EC2 instances instead of using Lambda functions.
- B . Configure SSL encryption on the Lambda functions to use AWS CloudHSM to store and encrypt the environment variables.
- C . Create a certificate in AWS Certificate Manager (ACM). Configure the Lambda functions to use the certificate to encrypt the environment variables.
- D . Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda functions to use the KMS key to store and encrypt the environment variables.
D
Explanation:
AWS Lambda supports encrypting environment variables at rest using AWS KMS. You can use encryption helpers (or Lambda’s built-in support) to encrypt sensitive environment variable values using a KMS key. These encrypted variables are not visible in plaintext to developers, either in the console or when running the code.
AWS Documentation Extract:
"AWS Lambda automatically encrypts environment variables at rest. For additional security, you can use AWS KMS keys and encryption helpers to encrypt environment variables, ensuring they are never exposed in plaintext."
(Source: AWS Lambda documentation, Environment Variables Security)
A: Does not address the issue (and adds more management overhead).
B, C: There is no native support for environment variable encryption via CloudHSM or ACM.
Reference: AWS Certified Solutions Architect Official Study Guide, Lambda Security Best Practices.
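A minimal sketch of how a Lambda handler might work with a KMS-encrypted environment variable. The variable name and helper functions are illustrative, and the actual `kms.decrypt` call (shown in a comment) requires AWS credentials and permission to use the key, so the sketch stops at decoding the ciphertext:

```python
import base64
import os

def decode_ciphertext(encrypted_value):
    """KMS-encrypted environment variables arrive as base64 text; decode
    to raw bytes before passing them to kms.decrypt as CiphertextBlob."""
    return base64.b64decode(encrypted_value)

def get_secret(name):
    ciphertext = decode_ciphertext(os.environ[name])
    # In a real function (requires AWS credentials and key permissions):
    #   import boto3
    #   return boto3.client("kms").decrypt(
    #       CiphertextBlob=ciphertext)["Plaintext"].decode()
    return ciphertext  # the sketch returns only the decoded blob

os.environ["DB_PASSWORD"] = base64.b64encode(b"example-ciphertext").decode()
blob = get_secret("DB_PASSWORD")
```

Decrypting lazily inside the handler (rather than at import time) keeps the plaintext out of deployment artifacts and console views.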
A company is building a web application. The company needs a load balancing solution that supports HTTPS header-based routing. The company’s security team also requires a rules-based method of blocking specific incoming requests to decrease the effects of malicious activity.
Which solution will meet these requirements?
- A . Create an Application Load Balancer (ALB). Configure an HTTPS listener with mutual TLS enabled.
- B . Create an Application Load Balancer (ALB). Integrate the ALB with AWS WAF. Configure the security team’s required rules.
- C . Create an Application Load Balancer (ALB). Integrate the ALB with AWS Config. Apply custom rules to all ALB resources.
- D . Create a Network Load Balancer (NLB). Configure AWS Network Firewall with the security team’s required rules.
B
Explanation:
Application Load Balancer (ALB) supports HTTP/HTTPS layer 7 routing, including header-based routing, path-based routing, and host-based routing.
AWS WAF is designed to provide rules-based filtering (block, allow, count) of HTTP(S) requests to protect applications from common exploits and malicious traffic.
ALB integrates directly with AWS WAF, so you can attach a web ACL with custom rules defined by the security team to block specific patterns while still using header-based routing.
Why the others are not correct:
A: Mutual TLS adds client certificate authentication but does not provide rules-based blocking or WAF-style inspection.
C: AWS Config is for configuration compliance and auditing, not request filtering.
D: NLB operates at layer 4 (TCP/UDP); it does not support HTTP header-based routing.
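As one illustration of a security-team rule, the sketch below builds a WAFv2 rule statement that blocks requests whose named header contains a given value. The rule name, header, and match value are assumptions; the rule would go into a web ACL (`wafv2 create_web_acl`) that is then attached to the ALB with `wafv2 associate_web_acl`:

```python
def build_block_rule(name, priority, header, bad_value):
    """A WAFv2 rule that blocks requests whose named header contains a
    given byte string (one example of a rules-based block)."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": bad_value,
                "FieldToMatch": {"SingleHeader": {"Name": header}},
                "TextTransformations": [
                    # Normalize case before matching.
                    {"Priority": 0, "Type": "LOWERCASE"},
                ],
                "PositionalConstraint": "CONTAINS",
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

rule = build_block_rule("block-bad-scanner", 1, "user-agent", b"badscanner")
```

Because WAF sits in front of the ALB listener, the blocked requests never reach the header-based routing rules or the targets.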
A solutions architect is creating a data reporting application that will send traffic through third-party network firewalls in an AWS security account. The firewalls and application servers must be load balanced.
The application uses TCP connections to generate reports. The reports can run for several hours and can be idle for up to 1 hour. The reports must not time out during an idle period.
Which solution will meet these requirements?
- A . Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout period to 1 hour.
- B . Use a single firewall in the security account. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout and firewall idle timeout periods to 1 hour.
- C . Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the idle timeout periods for the ALB, the GWLB, and the firewalls to 1 hour.
- D . Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Configure the ALB idle timeout period to 1 hour. Increase the application server capacity to finish the report generation faster.
C
Explanation:
Since the application uses long-lived TCP connections and must remain idle for up to 1 hour without timeout, all components involved in the connection path (ALB, GWLB, and firewall) must have their idle timeout values configured to at least 1 hour.
Gateway Load Balancer supports transparent insertion of firewalls, and configuring consistent idle timeouts ensures connections don’t drop mid-session.
Using just one firewall (option B) introduces a single point of failure. Increasing capacity (option D) doesn’t solve idle timeout issues.
Therefore, C provides a resilient and complete configuration.
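The ALB side of this configuration can be sketched as follows; the load balancer ARN is a placeholder, and the real call would be `elbv2_client.modify_load_balancer_attributes(**params)`:

```python
def build_idle_timeout_update(alb_arn, seconds):
    """Parameters for modify_load_balancer_attributes; the ALB idle
    timeout is controlled by the idle_timeout.timeout_seconds attribute."""
    return {
        "LoadBalancerArn": alb_arn,
        "Attributes": [
            {"Key": "idle_timeout.timeout_seconds", "Value": str(seconds)},
        ],
    }

# 3600 seconds = 1 hour, matching the reports' maximum idle period.
params = build_idle_timeout_update(
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
    "loadbalancer/app/reports/0123456789abcdef",
    3600,
)
```

The GWLB and the firewall appliances expose their own timeout settings; the key point of answer C is that every hop must be raised to at least the same value, because the lowest timeout on the path wins.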
A company processes streaming data by using Amazon Kinesis Data Streams and an AWS Lambda function. The streaming data comes from devices that are connected to the internet. The company is experiencing scaling problems and needs to implement shard-level control and custom checkpointing.
Which solution will meet these requirements with the LEAST latency?
- A . Connect Kinesis Data Streams to Amazon Data Firehose to ingest incoming data to an Amazon S3 bucket. Configure S3 Event Notifications to invoke the Lambda function.
- B . Increase the provisioned concurrency settings for the Lambda function. Stream the data from Kinesis Data Streams to an Amazon Simple Queue Service (Amazon SQS) standard queue. Invoke the Lambda function to process the messages.
- C . Run the Lambda function code in an Amazon Elastic Container Service (Amazon ECS) container that runs on AWS Fargate. Change the code to use the Kinesis Client Library (KCL).
- D . Increase the memory and provisioned concurrency settings for the Lambda function. Stream the data from Kinesis Data Streams to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to be invoked by the SQS queue.
C
Explanation:
The requirements “shard-level control” and “custom checkpointing” point directly to the Kinesis Client Library (KCL), which is designed for building consumer applications that coordinate shard processing, perform load balancing across workers, and manage checkpoints in a durable store (commonly DynamoDB) with fine-grained control. While Lambda can consume from Kinesis Data Streams, its event source mapping abstracts shard coordination and checkpoint behavior; it is not the best fit when you explicitly need custom checkpointing logic and shard-level control beyond the managed integration.
Running the existing Lambda processing logic as a long-running consumer in Amazon ECS on AWS Fargate allows the company to operate a scalable KCL-based consumer fleet without managing servers. With Fargate, AWS manages the underlying compute while the application maintains direct control over shard assignment and checkpoint timing and frequency through KCL, which is exactly what the requirement calls for. Latency is minimized because the consumer reads directly from Kinesis Data Streams and processes records continuously, avoiding additional buffering layers or store-and-forward patterns.
Option A adds significant latency by delivering to S3 through Firehose and then triggering processing from S3 object notifications; this is optimized for delivery and batch-style processing, not low-latency streaming with shard-level control.
Options B and D introduce SQS between Kinesis and processing, which adds another hop and does not inherently provide shard-level control or KCL-style checkpointing; FIFO also limits throughput and is not intended for high-scale streaming fan-in. Increasing Lambda provisioned concurrency can reduce cold starts, but it does not solve the need for custom checkpointing at the shard level.
Therefore, C best meets shard-level control and custom checkpointing requirements with the least latency by using a direct KCL consumer on a managed compute platform (Fargate).
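KCL itself is a Java library (with daemons for other languages), so the sketch below only illustrates the two concepts this answer relies on: a worker that owns one shard (shard-level control) and commits its own checkpoint after each record (custom checkpointing). The in-memory store and the fake client stand in for the DynamoDB lease table KCL manages and the real `boto3.client("kinesis")`:

```python
checkpoints = {}  # stands in for the DynamoDB lease table KCL manages

def save_checkpoint(shard_id, sequence_number):
    """Custom checkpointing: record the last processed sequence number."""
    checkpoints[shard_id] = sequence_number

def consume_shard(kinesis, stream, shard_id, process):
    """Shard-level control: this worker owns one shard and decides when
    to checkpoint (here, after every record)."""
    iterator = kinesis.get_shard_iterator(
        StreamName=stream, ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON")["ShardIterator"]
    while iterator:
        response = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in response["Records"]:
            process(record)
            save_checkpoint(shard_id, record["SequenceNumber"])
        iterator = response.get("NextShardIterator")

class _FakeKinesis:
    """Stands in for boto3.client("kinesis") so the sketch runs locally."""
    def get_shard_iterator(self, **kwargs):
        return {"ShardIterator": "iterator-0"}

    def get_records(self, **kwargs):
        return {"Records": [{"SequenceNumber": "49590", "Data": b"event"}],
                "NextShardIterator": None}

consume_shard(_FakeKinesis(), "device-stream", "shardId-000000000000",
              lambda record: None)
```

In a real deployment, one such worker per shard runs as an ECS task on Fargate, with KCL handling lease assignment and rebalancing as shards split or merge.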
A company has an organization in AWS Organizations that has all features enabled. The company has multiple Amazon S3 buckets in multiple AWS Regions around the world. The S3 buckets contain sensitive data.
The company needs to ensure that no personally identifiable information (PII) is stored in the S3 buckets. The company also needs a scalable solution to identify PII.
Which solution will meet these requirements?
- A . In the Organizations management account, configure an Amazon Macie administrator IAM user as the delegated administrator for the global organization. Use the Macie administrator user to configure Macie settings to scan for PII.
- B . For each Region in the Organizations management account, designate a delegated Amazon Macie administrator account. In the Macie administrator account, add all accounts in the organization. Use the Macie administrator account to enable Macie. Configure automated sensitive data discovery for all accounts in the organization.
- C . For each Region in the Organizations management account, configure a service control policy (SCP) to identify PII. Apply the SCP to the organization root.
- D . In the Organizations management account, configure AWS Lambda functions to scan for PII in each Region.
B
Explanation:
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data in AWS. To scale across Regions and accounts in AWS Organizations, Macie supports delegated administration, automated sensitive data discovery, and multi-account aggregation through a centralized admin account.
Reference: AWS documentation, Amazon Macie Multi-Account Configuration
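Because Macie delegation is a per-Region setting, option B repeats the same step in every Region in use. A hedged sketch of that step (Region list and account IDs are placeholders; the real call per Region is `boto3.client("macie2", region_name=r).enable_organization_admin_account(**payload)` from the management account):

```python
# Illustrative Region list; in practice, every Region with S3 buckets.
REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-2"]

def build_admin_requests(admin_account_id, regions):
    """One enable_organization_admin_account payload per Region, to be
    sent from the Organizations management account."""
    return {region: {"adminAccountId": admin_account_id}
            for region in regions}

requests = build_admin_requests("111122223333", REGIONS)
```

Once delegated, the Macie administrator account adds the member accounts and turns on automated sensitive data discovery for the whole organization.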
A company runs its critical storage application in the AWS Cloud. The application uses Amazon S3 in two AWS Regions. The company wants the application to send remote user data to the nearest S3 bucket with no public network congestion. The company also wants the application to fail over with the least amount of management of Amazon S3.
Which solution will meet these requirements?
- A . Implement an active-active design between the two Regions. Configure the application to use the regional S3 endpoints closest to the user.
- B . Use an active-passive configuration with S3 Multi-Region Access Points. Create a global endpoint for each of the Regions.
- C . Send user data to the regional S3 endpoints closest to the user. Configure an S3 cross-account replication rule to keep the S3 buckets synchronized.
- D . Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication.
D
Explanation:
Amazon S3 Multi-Region Access Points enable customers to use a single global endpoint for S3 bucket access across multiple AWS Regions, providing automatic routing to the nearest Region. This reduces public network congestion by directing user data to the closest S3 bucket and supports high availability with an active-active configuration.
Cross-Region Replication ensures data is replicated between buckets in different Regions, meeting the failover and resilience requirements with minimal management overhead.
Option D aligns best with AWS’s recommended approach to resilient, low-latency, and simplified multi-Region S3 access.
Option A lacks the global endpoint and automatic failover.
Option B incorrectly describes Multi-Region Access Points and suggests a global endpoint per Region, which is contradictory.
Option C’s cross-account replication adds complexity and does not provide a single global endpoint.
Reference: AWS Well-Architected Framework, Reliability Pillar
(https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
Amazon S3 Multi-Region Access Points
(https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPoints.html)
S3 Cross-Region Replication
(https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html)
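The single global endpoint in option D corresponds to one Multi-Region Access Point placed in front of the buckets in both Regions. A hedged sketch of the request parameters (account ID, access point name, and bucket names are placeholders; the real call is `s3control_client.create_multi_region_access_point(**params)`, which runs asynchronously):

```python
def build_mrap_request(account_id, name, buckets):
    """Parameters for s3control create_multi_region_access_point: one
    global endpoint fronting buckets that live in different Regions."""
    return {
        "AccountId": account_id,
        "Details": {
            "Name": name,
            # One entry per Region; the Region is inferred from the bucket.
            "Regions": [{"Bucket": bucket} for bucket in buckets],
        },
    }

params = build_mrap_request(
    "111122223333", "reports-mrap",
    ["reports-us-east-1", "reports-eu-west-1"],
)
```

Two-way Cross-Region Replication between the buckets is configured separately, so whichever bucket receives a write propagates it to the other Region for failover.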
A company has a VPC with multiple private subnets that host multiple applications. The applications must not be accessible to the internet. However, the applications need to access multiple AWS services. The applications must not use public IP addresses to access the AWS services.
Which solution will meet these requirements?
- A . Configure interface VPC endpoints for the required AWS services. Route traffic from the private subnets through the interface VPC endpoints.
- B . Deploy a NAT gateway in each private subnet. Route traffic from the private subnets through the NAT gateways.
- C . Deploy internet gateways in each private subnet. Route traffic from the private subnets through the internet gateways.
- D . Set up an AWS Direct Connect connection between the private subnets. Route traffic from the private subnets through the Direct Connect connection.
A
Explanation:
AWS VPC endpoints (interface and gateway) allow private connectivity from VPC resources to AWS services without requiring public IP addresses or internet gateways. This ensures applications remain isolated in private subnets while securely accessing AWS services. NAT gateways (B) would allow internet access, which does not meet the security requirement. Internet gateways (C) directly expose traffic to the internet, which violates the isolation requirement. Direct Connect (D) connects on-premises environments to AWS but does not provide service access from private subnets.
Therefore, option A, using interface VPC endpoints, is the correct solution.
Reference:
• Amazon VPC User Guide: VPC endpoints (interface and gateway)
• AWS Well-Architected Framework, Security Pillar: network isolation and private connectivity
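One interface endpoint is created per required service. A sketch of the request parameters (VPC, subnet, and security group IDs are placeholders; the real call is `ec2_client.create_vpc_endpoint(**params)`):

```python
def build_endpoint_request(vpc_id, service, subnet_ids, sg_ids,
                           region="us-east-1"):
    """Parameters for ec2 create_vpc_endpoint: an interface endpoint
    places ENIs with private IPs in the given subnets."""
    return {
        "VpcId": vpc_id,
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.{service}",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        # Private DNS lets applications keep using the service's default
        # hostname while it resolves to the endpoint's private IPs.
        "PrivateDnsEnabled": True,
    }

params = build_endpoint_request(
    "vpc-0abc1234", "secretsmanager",
    ["subnet-0aaa1111", "subnet-0bbb2222"], ["sg-0ccc3333"],
)
```

With private DNS enabled, application code needs no changes: SDK calls resolve to the endpoint's private IPs instead of public service addresses.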
A healthcare company uses an Amazon EMR cluster to process patient data. The data must be encrypted in transit and at rest. Local volumes in the cluster also need to be encrypted.
Which solution will meet these requirements?
- A . Create Amazon EBS volumes. Enable encryption. Attach the volumes to the existing EMR cluster.
- B . Create an EMR security configuration that encrypts the data and the volumes as required.
- C . Create an EC2 instance profile for the EMR instances. Configure the instance profile to enforce encryption.
- D . Create a runtime role that has a trust policy for the EMR cluster.
B
Explanation:
Amazon EMR allows the creation of security configurations to specify settings for encrypting data at rest, data in transit, or both. These configurations can be applied to clusters to ensure that data stored in Amazon S3, local disks, and data moving between nodes is encrypted.
By creating and applying an EMR security configuration, the company can ensure that all data processing complies with encryption requirements for sensitive patient data.
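The security configuration itself is a JSON document passed to `emr create_security_configuration`. A hedged sketch of one that covers all three requirements: in-transit encryption (PEM certificates), at-rest S3 encryption, and local-disk encryption with a KMS key. The key ARN and certificate location are placeholders:

```python
import json

def build_emr_security_config(name, kms_key_arn, cert_s3_path):
    """Parameters for emr create_security_configuration covering data in
    transit, data at rest in S3, and the cluster's local volumes."""
    config = {
        "EncryptionConfiguration": {
            "EnableInTransitEncryption": True,
            "EnableAtRestEncryption": True,
            "AtRestEncryptionConfiguration": {
                "S3EncryptionConfiguration": {
                    "EncryptionMode": "SSE-KMS",
                    "AwsKmsKey": kms_key_arn,
                },
                # Covers the cluster nodes' local volumes.
                "LocalDiskEncryptionConfiguration": {
                    "EncryptionKeyProviderType": "AwsKms",
                    "AwsKmsKey": kms_key_arn,
                },
            },
            "InTransitEncryptionConfiguration": {
                "TLSCertificateConfiguration": {
                    "CertificateProviderType": "PEM",
                    "S3Object": cert_s3_path,
                }
            },
        }
    }
    # Real call: emr.create_security_configuration(**result)
    return {"Name": name, "SecurityConfiguration": json.dumps(config)}

result = build_emr_security_config(
    "patient-data-encryption",
    "arn:aws:kms:us-east-1:111122223333:key/0000-example",
    "s3://example-bucket/certs/emr-certs.zip",
)
```

The configuration is then referenced by name when the cluster is launched, so every new cluster inherits the same encryption posture.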
A company’s solutions architect is building a static website to be deployed in Amazon S3 for a production environment. The website integrates with an Amazon Aurora PostgreSQL database by using an AWS Lambda function. The website that is deployed to production will use a Lambda alias that points to a specific version of the Lambda function.
The company must rotate the database credentials every 2 weeks. Lambda functions that the company deployed previously must be able to use the most recent credentials.
Which solution will meet these requirements?
- A . Store the database credentials in AWS Secrets Manager. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Secrets Manager.
- B . Include the database credentials as part of the Lambda function code. Update the credentials periodically and deploy the new Lambda function.
- C . Use Lambda environment variables. Update the environment variables when new credentials are available.
- D . Store the database credentials in AWS Systems Manager Parameter Store. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Systems Manager Parameter Store.
A
Explanation:
AWS Secrets Manager is the managed service for securely storing, rotating, and retrieving database credentials. When you store Aurora credentials in Secrets Manager and enable automatic rotation, Secrets Manager updates the credentials in both the database and the stored secret.
Each Lambda function version or alias can call Secrets Manager at runtime to retrieve the current secret value, so even older deployed Lambda versions that use an alias will always obtain the most recent credentials without redeployment.
Why others are not suitable:
B: Embeds credentials in code, requiring redeployment on every rotation and violating security best practices.
C: Environment variables are version-specific; old aliases would continue using outdated values unless you redeploy or change them.
D: Parameter Store can store and rotate secrets but is less integrated for database credential rotation than Secrets Manager; Secrets Manager is the purpose-built minimal-overhead choice here.
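A minimal sketch of the runtime retrieval in option A. The secret ID is a placeholder and the real client is `boto3.client("secretsmanager")`; fetching on each invocation (rather than caching in an environment variable) is what lets every deployed Lambda version see the post-rotation credentials:

```python
import json

def parse_secret(secret_string):
    """Secrets Manager stores rotated Aurora credentials as a JSON
    document containing username and password fields."""
    data = json.loads(secret_string)
    return data["username"], data["password"]

def get_credentials(client, secret_id):
    """Fetch the current value at runtime; after each rotation the same
    secret ID returns the new credentials, so previously deployed Lambda
    versions stay current without redeployment."""
    response = client.get_secret_value(SecretId=secret_id)
    return parse_secret(response["SecretString"])
```

The function's execution role needs `secretsmanager:GetSecretValue` on the secret; no database credentials ever appear in code or environment variables.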
