Practice Free SAA-C03 Exam Online Questions
A healthcare company is developing an AWS Lambda function that publishes notifications to an encrypted Amazon Simple Notification Service (Amazon SNS) topic. The notifications contain protected health information (PHI).
The SNS topic uses AWS Key Management Service (AWS KMS) customer-managed keys for encryption. The company must ensure that the application has the necessary permissions to publish messages securely to the SNS topic.
Which combination of steps will meet these requirements? (Select THREE.)
- A . Create a resource policy for the SNS topic that allows the Lambda function to publish messages to the topic.
- B . Use server-side encryption with AWS KMS keys (SSE-KMS) for the SNS topic instead of customer-managed keys.
- C . Create a resource policy for the encryption key that the SNS topic uses that has the necessary AWS KMS permissions.
- D . Specify the Lambda function’s Amazon Resource Name (ARN) in the SNS topic’s resource policy.
- E . Associate an Amazon API Gateway HTTP API with the SNS topic to control access to the topic by using API Gateway resource policies.
- F . Configure a Lambda execution role that has the necessary IAM permissions to use a customer-managed key in AWS KMS.
A,C,F
Explanation:
To securely publish messages to an encrypted Amazon SNS topic, the following steps are required:
A: Create a resource (access) policy on the SNS topic that allows the Lambda function's role to publish messages to the topic.
C: Update the key policy of the customer-managed AWS KMS key that encrypts the topic so that the publisher can use the key (for example, kms:GenerateDataKey and kms:Decrypt).
F: Configure the Lambda execution role with the IAM permissions needed to call sns:Publish and to use the customer-managed key.
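As a rough sketch of how these three pieces fit together with boto3 (the role name, topic ARN, key ARN, and policy names below are placeholders, not values from the question; the key policy for step C would be edited similarly with kms.put_key_policy):

```python
import json

import boto3

# Hypothetical ARNs and names for illustration only.
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:phi-notifications"
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
LAMBDA_ROLE_ARN = "arn:aws:iam::111122223333:role/phi-publisher-role"

# (F) Inline policy on the Lambda execution role: publish to the topic and
# use the customer-managed key for envelope encryption of the messages.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "sns:Publish", "Resource": TOPIC_ARN},
        {
            "Effect": "Allow",
            "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
            "Resource": KEY_ARN,
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="phi-publisher-role",
    PolicyName="publish-to-encrypted-topic",
    PolicyDocument=json.dumps(execution_role_policy),
)

# (A) Topic access policy statement that allows the function's role to publish.
topic_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": LAMBDA_ROLE_ARN},
            "Action": "sns:Publish",
            "Resource": TOPIC_ARN,
        }
    ],
}

sns = boto3.client("sns")
sns.set_topic_attributes(
    TopicArn=TOPIC_ARN,
    AttributeName="Policy",
    AttributeValue=json.dumps(topic_policy),
)
```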
A company uses an Amazon CloudFront distribution to serve thousands of media files to users. The CloudFront distribution uses a private Amazon S3 bucket as an origin.
A solutions architect must prevent users in specific countries from accessing the company’s files.
Which solution will meet these requirements in the MOST operationally-efficient way?
- A . Require users to access the files by using CloudFront signed URLs.
- B . Configure geographic restrictions in CloudFront.
- C . Require users to access the files by using CloudFront signed cookies.
- D . Configure an origin access control (OAC) between CloudFront and the S3 bucket.
B
Explanation:
CloudFront geographic restrictions (also known as geo-blocking) allow you to allow or deny content delivery to specific countries with minimal configuration.
“You can use geo restriction, also known as geoblocking, to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution.”
― CloudFront Geo Restriction
This is the most operationally efficient approach: no custom code and no signed URL logic are required.
Incorrect A/C: Signed URLs/cookies are for individual access control, not geo-blocking.
D: OAC controls access between CloudFront and S3, not to block specific countries.
Reference: Geographic Restrictions in CloudFront
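For illustration, geo restriction can be applied to an existing distribution with a short boto3 script; the distribution ID and the blocked country codes below are placeholders, not values from the question:

```python
import boto3

# Hypothetical distribution ID for illustration.
DISTRIBUTION_ID = "E1EXAMPLE2345"

cloudfront = boto3.client("cloudfront")

# Fetch the current distribution configuration and its ETag.
response = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config = response["DistributionConfig"]
etag = response["ETag"]

# Deny requests from the listed countries; all other countries stay allowed.
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",
        "Quantity": 2,
        "Items": ["RU", "KP"],  # placeholder ISO 3166-1 alpha-2 country codes
    }
}

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID, DistributionConfig=config, IfMatch=etag
)
```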
A company stores 5 PB of archived data on physical tapes. The company needs to preserve the data for another 10 years. The data center that stores the tapes has a 10 Gbps Direct Connect connection to an AWS Region. The company wants to migrate the data to AWS within the next 6 months.
Which solution will meet these requirements MOST cost-effectively?
- A . Read the data from the tapes on premises. Use local storage to stage the data. Use AWS DataSync to migrate the data to Amazon S3 Glacier Flexible Retrieval storage.
- B . Use an on-premises backup application to read the data from the tapes. Use the backup application to write directly to Amazon S3 Glacier Deep Archive storage.
- C . Order multiple AWS Snowball Edge devices. Copy the physical tapes to virtual tapes on the Snowball Edge devices. Ship the Snowball Edge devices to AWS. Create an S3 Lifecycle policy to move the tapes to Amazon S3 Glacier Instant Retrieval storage.
- D . Configure an on-premises AWS Storage Gateway Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup software to copy the physical tapes to the virtual tapes. Move the virtual tapes to Amazon S3 Glacier Deep Archive storage.
D
Explanation:
Analysis:
The company’s requirements are to migrate 5 PB of data from physical tapes to AWS within 6 months, preserve the data for 10 years, and ensure cost efficiency. AWS Storage Gateway Tape Gateway is purpose-built for such use cases, as it seamlessly integrates with backup applications and provides virtual tape storage in Amazon S3 Glacier Deep Archive.
Why Option D is Correct:
Tape Gateway: Enables the migration of physical tapes to virtual tapes. Virtual tapes are stored in Amazon S3 and can later be archived in Amazon S3 Glacier Deep Archive for long-term storage.
Cost Efficiency: Amazon S3 Glacier Deep Archive is the lowest-cost storage class for long-term data preservation.
Operational Simplicity: Tape Gateway integrates with existing on-premises backup software, reducing the need for additional tools or manual processes.
Scalability: Can handle the migration of large datasets, such as 5 PB, within the required timeframe.
Why Other Options Are Not Ideal:
Option A:
AWS DataSync is not designed for reading data directly from physical tapes. Staging the data on local storage adds unnecessary complexity and cost. Not suitable.
Option B:
Writing directly from the on-premises backup application to S3 Glacier Deep Archive requires the backup software to support S3 as a target and provides no tape-compatible workflow, so existing tape-based processes cannot be reused. Tape Gateway simplifies the workflow significantly. Less efficient.
Option C:
Snowball Edge is ideal for environments without high-bandwidth connectivity. However, the company already has a 10 Gbps Direct Connect, making Tape Gateway a better choice. Not cost-effective.
AWS Reference:
AWS Storage Gateway – Tape Gateway: AWS Documentation – Tape Gateway
Amazon S3 Glacier Deep Archive: AWS Documentation – Glacier Deep Archive
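For illustration only, virtual tapes that archive to S3 Glacier Deep Archive can be created on an existing Tape Gateway with boto3; the gateway ARN, tape size, barcode prefix, and batch size below are assumed placeholders:

```python
import boto3

# Hypothetical Tape Gateway ARN for illustration.
GATEWAY_ARN = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE"

storagegateway = boto3.client("storagegateway")

# Create a batch of virtual tapes that are archived to S3 Glacier Deep Archive
# when the backup software ejects them (PoolId="DEEP_ARCHIVE").
storagegateway.create_tapes(
    GatewayARN=GATEWAY_ARN,
    TapeSizeInBytes=2500 * 1024**3,  # ~2.5 TiB per virtual tape
    ClientToken="tape-batch-001",     # idempotency token
    NumTapesToCreate=10,
    TapeBarcodePrefix="MIG",
    PoolId="DEEP_ARCHIVE",
)
```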
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the Aurora database by using user names and passwords that the company stores locally in a file.
The company changes the user names and passwords every month. The company wants to minimize the operational overhead of credential management.
Which solution will meet these requirements?
- A . Store the credentials as a secret within AWS Secrets Manager. Assign IAM permissions to the secret. Reconfigure the application to call the secret. Enable rotation on the secret and configure rotation to occur on a monthly schedule.
- B . Use AWS Systems Manager Parameter Store to create a new parameter for the credentials. Use IAM policies to restrict access to the parameter. Reconfigure the application to access the parameter.
- C . Create an Amazon S3 bucket to store objects. Use an AWS Key Management Service (AWS KMS) key to encrypt the objects. Migrate the credentials file to the S3 bucket. Update the application to retrieve the credentials file from the S3 bucket.
- D . Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the encrypted EBS volumes to the EC2 instances. Migrate the credentials file to the new EBS volumes.
A
Explanation:
AWS Secrets Manager is purpose-built to securely store, manage, and rotate sensitive credentials such as database user names and passwords.
Option A is the most operationally efficient solution because it eliminates manual password rotation, reduces human error, and centralizes secret lifecycle management. Secrets Manager integrates natively with Amazon Aurora, enabling automated credential rotation using AWS-managed or custom Lambda rotation logic. Once rotation is enabled, Secrets Manager updates the database credentials and stores the new values securely without requiring administrators to manually update files on EC2 instances.
By assigning IAM permissions to the secret, access can be tightly controlled using least-privilege principles. The application retrieves credentials at runtime, removing the need to store passwords locally on disk, which significantly improves security posture. Secrets Manager also provides auditing capabilities through AWS CloudTrail, allowing visibility into secret access and changes.
Option B (Systems Manager Parameter Store) can securely store secrets, but automated rotation is not natively supported in the same way as Secrets Manager. Implementing rotation with Parameter
Store would require additional custom automation, increasing operational complexity.
Option C stores credentials in S3, which is not designed for frequent credential rotation or secure secret access patterns, even when encrypted.
Option D only encrypts credentials at rest on individual instances and does not address rotation, distribution, or centralized management, resulting in high operational overhead.
Therefore, A best meets the requirements by providing secure storage, automated monthly rotation, fine-grained access control, and minimal operational effort, aligning with AWS security and operational excellence best practices.
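A minimal boto3 sketch of this setup follows; the secret name, rotation Lambda ARN, and credential values are placeholders, and the rotation function itself (AWS publishes templates for Aurora) is assumed to already exist and to be invocable by Secrets Manager:

```python
import json

import boto3

secretsmanager = boto3.client("secretsmanager")

# Hypothetical names and ARNs for illustration.
SECRET_NAME = "prod/app/aurora-credentials"
ROTATION_LAMBDA_ARN = (
    "arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation"
)

# Store the Aurora credentials as a secret.
secretsmanager.create_secret(
    Name=SECRET_NAME,
    SecretString=json.dumps({"username": "app_user", "password": "initial-password"}),
)

# Enable automatic rotation on a monthly schedule using the rotation function.
secretsmanager.rotate_secret(
    SecretId=SECRET_NAME,
    RotationLambdaARN=ROTATION_LAMBDA_ARN,
    RotationRules={"ScheduleExpression": "rate(30 days)"},
)

# The application retrieves the current credentials at runtime instead of
# reading them from a local file on the EC2 instances.
secret = secretsmanager.get_secret_value(SecretId=SECRET_NAME)
credentials = json.loads(secret["SecretString"])
```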
A retail company runs its application on AWS. The application uses Amazon EC2 for web servers, Amazon RDS for database services, and Amazon CloudFront for global content distribution.
The company needs a solution to mitigate DDoS attacks.
Which solution will meet this requirement?
- A . Implement AWS WAF custom rules to limit the length of query requests. Configure CloudFront to work with AWS WAF.
- B . Enable AWS Shield Advanced. Configure CloudFront to work with Shield Advanced.
- C . Use Amazon Inspector to scan the EC2 instances. Enable Amazon GuardDuty.
- D . Enable Amazon Macie. Configure CloudFront Origin Shield.
B
Explanation:
AWS Shield Advanced provides advanced DDoS protection for AWS workloads, including EC2, CloudFront, and RDS. When integrated with CloudFront, Shield Advanced offers comprehensive detection and mitigation against large and sophisticated DDoS attacks, along with 24×7 access to the AWS DDoS Response Team (DRT). AWS WAF provides application-level protection, but for complete DDoS mitigation, Shield Advanced is the recommended solution.
Reference Extract from AWS Documentation / Study Guide:
"AWS Shield Advanced provides expanded DDoS attack protection for applications running on AWS. It offers always-on detection and automatic inline mitigations that minimize application downtime and latency."
Source: AWS Certified Solutions Architect Official Study Guide, Security and DDoS Protection section.
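For illustration, Shield Advanced can be enabled and attached to the distribution with boto3; the distribution ARN is a placeholder, and create_subscription is a one-time, account-level action that starts the Shield Advanced subscription:

```python
import boto3

# Shield Advanced is a global service managed from the us-east-1 endpoint.
shield = boto3.client("shield", region_name="us-east-1")

# Subscribe the account to Shield Advanced (raises an error if already subscribed).
shield.create_subscription()

# Protect the CloudFront distribution (hypothetical ARN for illustration).
shield.create_protection(
    Name="cloudfront-distribution-protection",
    ResourceArn="arn:aws:cloudfront::111122223333:distribution/E1EXAMPLE2345",
)
```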
A company recently launched a new product that is highly available in one AWS Region. The product consists of an application that runs on Amazon Elastic Container Service (Amazon ECS), a public Application Load Balancer (ALB), and an Amazon DynamoDB table. The company wants a solution that will make the application highly available across Regions.
Which combination of steps will meet these requirements? (Select THREE.)
- A . In a different Region, deploy the application to a new ECS cluster that is accessible through a new ALB.
- B . Create an Amazon Route 53 failover record.
- C . Modify the DynamoDB table to create a DynamoDB global table.
- D . In the same Region, deploy the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is accessible through a new ALB.
- E . Modify the DynamoDB table to create global secondary indexes (GSIs).
- F . Create an AWS PrivateLink endpoint for the application.
A,B,C
Explanation:
To make the application highly available across regions:
Deploy the application in a different Region using a new ECS cluster and ALB to ensure regional redundancy.
Use Route 53 failover routing to automatically direct traffic to the healthy Region in case of failure.
Use DynamoDB global tables to ensure the database is replicated and available across multiple Regions, supporting read and write operations in each Region.
Option D (EKS cluster in the same region): This does not provide regional redundancy.
Option E (Global Secondary Indexes): GSIs improve query performance but do not provide multi-region availability.
Option F (PrivateLink): PrivateLink is for secure communication, not for cross-region high availability.
AWS Reference:
DynamoDB Global Tables
Amazon ECS with ALB
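A rough boto3 sketch of steps B and C follows; the table name, hosted zone ID, record name, ALB DNS names, and health check ID are placeholders, and the secondary-Region ECS cluster and ALB from step A are assumed to already exist:

```python
import boto3

# (C) Convert the existing table into a global table by adding a replica
# Region (global tables version 2019.11.21); the table must have
# DynamoDB Streams enabled for replication.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
dynamodb.update_table(
    TableName="product-table",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# (B) Route 53 failover records pointing at the ALB in each Region.
route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                    "ResourceRecords": [
                        {"Value": "primary-alb-123.us-east-1.elb.amazonaws.com"}
                    ],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "ResourceRecords": [
                        {"Value": "secondary-alb-456.us-west-2.elb.amazonaws.com"}
                    ],
                },
            },
        ]
    },
)
```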
A company needs to connect its on-premises data center network to a new VPC. The data center network has a 100 Mbps symmetrical internet connection. An application that is running on premises will transfer multiple gigabytes of data each day. The application will use an Amazon Data Firehose delivery stream for processing.
What should a solutions architect recommend for maximum performance?
- A . Create a VPC peering connection between the on-premises network and the VPC. Configure routing for the on-premises network to use the VPC peering connection.
- B . Procure an AWS Snowball Edge Storage Optimized device. After several days’ worth of data has accumulated, copy the data to the device and ship the device to AWS for expedited transfer to Firehose. Repeat as needed.
- C . Create an AWS Site-to-Site VPN connection between the on-premises network and the VPC. Configure BGP routing between the customer gateway and the virtual private gateway. Use the VPN connection to send the data from on premises to Firehose.
- D . Use AWS PrivateLink to create an interface VPC endpoint for Firehose in the VPC. Set up a 1 Gbps AWS Direct Connect connection between the on-premises network and AWS. Use the PrivateLink endpoint to send the data from on premises to Firehose.
D
Explanation:
AWS Direct Connect provides a dedicated network connection from on-premises to AWS, offering greater bandwidth and more consistent performance than internet-based connections or VPN. AWS PrivateLink enables secure, private connectivity to supported AWS services such as Kinesis Data Firehose over Direct Connect, bypassing the public internet and providing the highest throughput and lowest latency possible. This is the recommended solution for consistently transferring large volumes of data with maximum reliability and performance.
Reference Extract from AWS Documentation / Study Guide:
"AWS Direct Connect and AWS PrivateLink provide private, high-throughput connectivity between on-premises and AWS services, bypassing the public internet and ensuring maximum performance for large data transfers."
Source: AWS Certified Solutions Architect Official Study Guide, Hybrid Networking section.
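As a sketch, the interface VPC endpoint for Firehose can be created as shown below; the VPC, subnet, and security group IDs are placeholders, and the Direct Connect link itself is provisioned separately:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface VPC endpoint (powered by AWS PrivateLink) for Firehose
# so data sent over Direct Connect never traverses the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kinesis-firehose",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```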
A company operates a data lake in Amazon S3. The company wants to query and filter data directly in S3 without downloading objects.
Which solution will meet these requirements?
- A . Use Amazon Athena to query and filter the objects in Amazon S3.
- B . Use Amazon EMR to process and filter the objects.
- C . Use Amazon API Gateway to retrieve filtered results.
- D . Use Amazon ElastiCache to cache the objects.
A
Explanation:
Amazon Athena enables serverless SQL queries directly on S3 objects, eliminating the need to download or preprocess data. EMR adds operational overhead, and API Gateway/ElastiCache are not data lake query engines.
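For example, a filter query can be run directly against the data in S3 with boto3; the database, table, and results bucket below are placeholders:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL filter directly against objects in the S3 data lake; results are
# written to the configured output location.
response = athena.start_query_execution(
    QueryString="SELECT order_id, total FROM sales WHERE region = 'EMEA' LIMIT 100",
    QueryExecutionContext={"Database": "datalake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```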
A company is designing an application on AWS that processes sensitive data. The application stores and processes financial data for multiple customers.
To meet compliance requirements, the data for each customer must be encrypted separately at rest by using a secure, centralized key management solution. The company wants to use AWS Key Management Service (AWS KMS) to implement encryption.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Generate a unique encryption key for each customer. Store the keys in an Amazon S3 bucket. Enable server-side encryption.
- B . Deploy a hardware security appliance in the AWS environment that securely stores customer-provided encryption keys. Integrate the security appliance with AWS KMS to encrypt the sensitive data in the application.
- C . Create a single AWS KMS key to encrypt all sensitive data across the application.
- D . Create separate AWS KMS keys for each customer’s data that have granular access control and logging enabled.
D
Explanation:
This solution meets the requirement of encrypting each customer’s data separately with the least operational overhead by leveraging AWS Key Management Service (KMS).
Separate AWS KMS Keys: By creating separate KMS keys for each customer, you can ensure that each customer’s data is encrypted with a unique key. This approach satisfies the compliance requirement for separate encryption and provides fine-grained control over access to the keys.
Granular Access Control: AWS KMS allows you to define key policies and use IAM policies to grant specific permissions to the keys. This ensures that only authorized users or services can access the keys, thereby maintaining the principle of least privilege.
Logging and Monitoring: AWS KMS integrates with AWS CloudTrail, which logs all key usage and management activities. This provides an audit trail that is essential for meeting compliance requirements.
Why Not Other Options?
Option A (Store keys in S3): Storing keys in S3 is not recommended because it does not provide the same level of security, access control, or integration with AWS services as KMS does.
Option B (Hardware security appliance): Deploying a hardware security appliance adds significant operational overhead and complexity, which is unnecessary given that KMS already provides a secure and centralized key management solution.
Option C (Single KMS key for all data): Using a single KMS key does not meet the requirement of encrypting each customer’s data separately.
AWS Reference:
AWS Key Management Service (KMS): Overview of KMS, its features, and best practices for key management.
Using AWS KMS for Multi-Tenant Applications: Guidance on how to design applications using KMS for multi-tenancy.
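A minimal sketch of the per-customer key pattern, assuming the application stores each customer's data in Amazon S3; the customer ID, bucket, and object key are placeholders:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a dedicated customer managed key per tenant; the alias and tag make
# granular access control and CloudTrail auditing straightforward.
customer_id = "customer-1234"  # placeholder tenant identifier
key = kms.create_key(
    Description=f"Data encryption key for {customer_id}",
    Tags=[{"TagKey": "customer", "TagValue": customer_id}],
)
key_id = key["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName=f"alias/{customer_id}-data", TargetKeyId=key_id)

# Encrypt that customer's objects with their own key (SSE-KMS).
s3.put_object(
    Bucket="example-financial-data",
    Key=f"{customer_id}/statements/2024-06.csv",
    Body=b"placeholder data",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)
```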
A company needs to store confidential files on AWS. The company accesses the files every week. The company must encrypt the files by using envelope encryption, and the encryption keys must be rotated automatically. The company must have an audit trail to monitor encryption key usage.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Store the confidential files in Amazon S3.
- B . Store the confidential files in Amazon S3 Glacier Deep Archive.
- C . Use server-side encryption with customer-provided keys (SSE-C).
- D . Use server-side encryption with Amazon S3 managed keys (SSE-S3).
- E . Use server-side encryption with AWS KMS managed keys (SSE-KMS).
A,E
Explanation:
Amazon S3 is suitable for storing data that needs to be accessed weekly and integrates with AWS Key Management Service (KMS) to provide encryption at rest with server-side encryption using KMS-managed keys (SSE-KMS).
SSE-KMS uses envelope encryption and allows automatic key rotation and logging through AWS CloudTrail, satisfying the requirements for audit trails and compliance.
S3 Glacier Deep Archive is unsuitable due to its high retrieval latency. SSE-C requires customer-side management of encryption keys, with no support for automatic rotation or audit. SSE-S3 does not use customer-managed keys and lacks fine-grained control and auditing.
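A brief sketch, assuming a customer managed KMS key is used: automatic rotation is enabled on the key and SSE-KMS is set as the bucket default so every object is envelope-encrypted and key usage appears in CloudTrail; the key ID and bucket name are placeholders:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Hypothetical key ID and bucket name for illustration.
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"
BUCKET = "example-confidential-files"

# Turn on automatic rotation for the customer managed key.
kms.enable_key_rotation(KeyId=KEY_ID)

# Make SSE-KMS the default encryption for the bucket so every upload is
# envelope-encrypted with the key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KEY_ID,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```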
