Practice Free SAA-C03 Exam Online Questions
A solutions architect is storing sensitive data generated by an application in Amazon S3. The solutions architect wants to encrypt the data at rest. A company policy requires an audit trail of when the AWS KMS key was used and by whom.
Which encryption option will meet these requirements?
- A . Server-side encryption with Amazon S3 managed keys (SSE-S3)
- B . Server-side encryption with AWS KMS managed keys (SSE-KMS)
- C . Server-side encryption with customer-provided keys (SSE-C)
- D . Server-side encryption with self-managed keys
B
Explanation:
SSE-KMS (Server-side encryption with AWS Key Management Service) not only encrypts data at rest but also integrates with AWS CloudTrail to provide detailed logs of key usage, meeting the audit requirement.
"SSE-KMS provides the ability to audit key usage to see who used the key and when, via AWS CloudTrail." (Amazon S3 Encryption Documentation)
Benefits:
- Encryption with customer-managed or AWS-managed KMS keys
- Audit trails of key usage events
- Fine-grained access control
Incorrect Options:
A: SSE-S3 does not support auditing of key usage.
C: SSE-C does not integrate with CloudTrail or KMS.
D: Self-managed keys require external key infrastructure and custom audit logging.
Reference: Using SSE-KMS with S3
AWS KMS Logging with CloudTrail
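As an illustrative sketch (the bucket name and KMS key ARN below are hypothetical placeholders), these are the request parameters an SSE-KMS upload would use; every use of the key that S3 makes on the object's behalf is then recorded in CloudTrail with the calling principal and timestamp:

```python
# Request parameters for an S3 PutObject call that uses SSE-KMS.
# With boto3 these would be passed as: s3.put_object(**put_object_params)
put_object_params = {
    "Bucket": "example-sensitive-data-bucket",   # hypothetical bucket name
    "Key": "reports/2024/q1.csv",
    "Body": b"sensitive,data\n",
    "ServerSideEncryption": "aws:kms",           # selects SSE-KMS
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}

# The Encrypt/Decrypt/GenerateDataKey calls S3 makes against this KMS key
# appear in CloudTrail, satisfying the "who used the key and when" audit need.
print(put_object_params["ServerSideEncryption"])
```

Omitting `SSEKMSKeyId` while keeping `"aws:kms"` would fall back to the AWS-managed key for S3, which is still auditable in CloudTrail.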
A company has an on-premises application that uses SFTP to collect financial data from multiple vendors. The company is migrating to the AWS Cloud. The company has created an application that uses Amazon S3 APIs to upload files from vendors.
Some vendors run their systems on legacy applications that do not support S3 APIs. The vendors want to continue to use SFTP-based applications to upload data. The company wants to use managed services for the needs of the vendors that use legacy applications.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS Database Migration Service (AWS DMS) instance to replicate data from the storage of the vendors that use legacy applications to Amazon S3. Provide the vendors with the credentials to access the AWS DMS instance.
- B . Create an AWS Transfer Family endpoint for vendors that use legacy applications.
- C . Configure an Amazon EC2 instance to run an SFTP server. Instruct the vendors that use legacy applications to use the SFTP server to upload data.
- D . Configure an Amazon S3 File Gateway for vendors that use legacy applications to upload files to an SMB file share.
B
Explanation:
AWS Transfer Family is a fully managed service that provides SFTP, FTPS, and FTP access directly to Amazon S3. It allows organizations to support legacy data transfer protocols without managing infrastructure. This approach gives the vendors a familiar SFTP interface, with files landing directly in
S3, and requires minimal operational effort compared to managing EC2 servers or using gateways.
Reference Extract from AWS Documentation / Study Guide:
"AWS Transfer Family enables secure transfer of files directly into and out of Amazon S3 using SFTP, FTPS, and FTP, and is fully managed by AWS, significantly reducing operational overhead."
Source: AWS Certified Solutions Architect - Official Study Guide, Data Migration and Transfer section.
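A minimal sketch of the Transfer Family server creation described above (the tag values are illustrative assumptions); with boto3 these parameters would be passed to `boto3.client("transfer").create_server(**create_server_params)`:

```python
# Parameters for the Transfer Family CreateServer API: a managed SFTP
# endpoint whose files land directly in Amazon S3.
create_server_params = {
    "Protocols": ["SFTP"],                      # legacy vendors keep using SFTP
    "Domain": "S3",                             # uploads are stored as S3 objects
    "IdentityProviderType": "SERVICE_MANAGED",  # AWS manages users and SSH keys
    "EndpointType": "PUBLIC",
    "Tags": [{"Key": "purpose", "Value": "vendor-sftp"}],  # hypothetical tag
}
```

Each vendor would then be created as a Transfer Family user mapped to an S3 home directory, with no servers for the company to patch or scale.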
A company currently stores 5 TB of data in on-premises block storage systems. The company’s current storage solution provides limited space for additional data. The company runs applications on premises that must be able to retrieve frequently accessed data with low latency. The company requires a cloud-based storage solution.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Use Amazon S3 File Gateway. Integrate S3 File Gateway with the on-premises applications to store and directly retrieve files by using the SMB file system.
- B . Use an AWS Storage Gateway Volume Gateway with cached volumes as iSCSI targets.
- C . Use an AWS Storage Gateway Volume Gateway with stored volumes as iSCSI targets.
- D . Use an AWS Storage Gateway Tape Gateway. Integrate Tape Gateway with the on-premises applications to store virtual tapes in Amazon S3.
B
Explanation:
The company needs a cloud-based storage solution for frequently accessed data with low latency, while retaining their current on-premises infrastructure for some data storage. AWS Storage Gateway’s Volume Gateway with cached volumes is the most appropriate solution for this scenario.
AWS Storage Gateway – Volume Gateway (Cached Volumes):
Volume Gateway with cached volumes allows you to store frequently accessed data in the AWS Cloud while keeping the most recently accessed data cached locally on-premises. This ensures low-latency access to active data while providing scalability for the rest of the data in the cloud.
The cached volume option stores the primary data in Amazon S3 but caches frequently accessed data locally, ensuring fast access. This configuration is well-suited for applications that require fast access to frequently used data but can tolerate cloud-based storage for the rest.
Since the company is facing limited on-premises storage, cached volumes provide an ideal solution, as they reduce the need for additional on-premises storage infrastructure.
Why Not the Other Options?
Option A (S3 File Gateway): S3 File Gateway provides a file-based interface (SMB/NFS) for storing data directly in S3. While it is great for file storage, the company’s need for block-level storage with iSCSI targets makes Volume Gateway a better fit.
Option C (Volume Gateway – Stored Volumes): Stored volumes keep all the data on-premises and asynchronously back up to AWS. This would not address the company’s storage limitations since they would still need substantial on-premises storage.
Option D (Tape Gateway): Tape Gateway is designed for archiving and backup, not for frequently accessed low-latency data.
Reference: AWS Storage Gateway – Volume Gateway
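As a sketch of how such a cached volume is provisioned (the gateway ARN, target name, and network interface below are hypothetical placeholders), these are the parameters for the Storage Gateway CreateCachediSCSIVolume API, which with boto3 would be passed to `boto3.client("storagegateway").create_cached_iscsi_volume(**create_cached_volume_params)`:

```python
import uuid

# Parameters for creating a cached volume: primary data lives in S3, and the
# on-premises gateway exposes it to applications as an iSCSI block target.
GIB = 1024 ** 3
create_cached_volume_params = {
    "GatewayARN": "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-example",
    "VolumeSizeInBytes": 5 * 1024 * GIB,  # 5 TiB, matching the company's data set
    "TargetName": "finance-volume",       # becomes part of the iSCSI target IQN
    "NetworkInterfaceId": "10.0.0.25",    # on-premises interface serving iSCSI
    "ClientToken": str(uuid.uuid4()),     # idempotency token for safe retries
}
```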
A company is using AWS DataSync to migrate millions of files from an on-premises system to AWS.
The files are 10 KB in size on average.
The company wants to use Amazon S3 for file storage. For the first year after the migration the files will be accessed once or twice and must be immediately available. After 1 year the files must be archived for at least 7 years.
Which solution will meet these requirements MOST cost-effectively?
- A . Use an archive tool to group the files into large objects. Use DataSync to migrate the objects. Store the objects in S3 Glacier Instant Retrieval for the first year. Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
- B . Use an archive tool to group the files into large objects. Use DataSync to copy the objects to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Instant Retrieval after 1 year with a retention period of 7 years.
- C . Configure the destination storage class for the files as S3 Glacier Instant Retrieval. Use a lifecycle policy to transition the files to S3 Glacier Flexible Retrieval after 1 year with a retention period of 7 years.
- D . Configure a DataSync task to transfer the files to S3 Standard-Infrequent Access (S3 Standard-IA). Use a lifecycle configuration to transition the files to S3 Glacier Deep Archive after 1 year with a retention period of 7 years.
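All four options rely on an S3 Lifecycle configuration to move the objects to an archive tier after one year. A minimal sketch of such a configuration (the archive target shown is Deep Archive; the rule ID is a hypothetical name), which with boto3 would be applied via `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle)`:

```python
# Lifecycle rule: transition every object to S3 Glacier Deep Archive after
# 1 year, then expire it once the 7-year retention that follows has elapsed.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-after-one-year",
            "Filter": {"Prefix": ""},   # empty prefix = apply to all objects
            "Status": "Enabled",
            "Transitions": [
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
            ],
            # 1 year accessible + 7 years archived = 8 years total
            "Expiration": {"Days": 365 + 7 * 365},
        }
    ]
}
```

Note that Glacier storage classes carry per-object metadata overhead, which is why grouping millions of 10 KB files into larger archive objects before migration materially reduces cost.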
A company stores sensitive customer data in an Amazon DynamoDB table. The company frequently updates the data. The company wants to use the data to personalize offers for customers.
The company’s analytics team has its own AWS account. The analytics team runs an application on Amazon EC2 instances that needs to process data from the DynamoDB tables. The company needs to follow security best practices to create a process to regularly share data from DynamoDB to the analytics team.
Which solution will meet these requirements?
- A . Export the required data from the DynamoDB table to an Amazon S3 bucket as multiple JSON files. Provide the analytics team with the necessary IAM permissions to access the S3 bucket.
- B . Allow public access to the DynamoDB table. Create an IAM user that has permission to access DynamoDB. Share the IAM user with the analytics team.
- C . Allow public access to the DynamoDB table. Create an IAM user that has read-only permission for DynamoDB. Share the IAM user with the analytics team.
- D . Create a cross-account IAM role. Create an IAM policy that allows the AWS account ID of the analytics team to access the DynamoDB table. Attach the IAM policy to the IAM role. Establish a trust relationship between accounts.
D
Explanation:
Using cross-account IAM roles is the most secure and scalable way to share data between AWS accounts.
A trust relationship allows the analytics team’s account to assume the role in the main account and access the DynamoDB table directly.
A is feasible but involves data duplication and additional costs for storing the JSON files in S3.
B and C violate security best practices by allowing public access to sensitive data and sharing credentials, which is highly discouraged.
Reference: Cross-Account Access with Roles (AWS Documentation)
Best Practices for Amazon DynamoDB Security
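A sketch of the two policy documents option D requires (all account IDs, table names, and the Region below are hypothetical placeholders):

```python
import json

ANALYTICS_ACCOUNT = "222233334444"   # analytics team's AWS account (placeholder)

# Trust policy attached to the role in the data-owning account: it lets
# principals in the analytics account assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ANALYTICS_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy attached to the same role: read-only DynamoDB access.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/CustomerData",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

The EC2 application in the analytics account then calls STS AssumeRole (with boto3: `sts.assume_role(RoleArn=..., RoleSessionName=...)`) and uses the returned temporary credentials, so no long-term credentials are ever shared.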
A company runs its production workload on an Amazon Aurora MySQL DB cluster that includes six Aurora Replicas. The company wants near-real-time reporting queries from one of its departments to be automatically distributed across three of the Aurora Replicas. Those three replicas have a different compute and memory specification from the rest of the DB cluster.
Which solution meets these requirements?
- A . Create and use a custom endpoint for the workload.
- B . Create a three-node cluster clone and use the reader endpoint.
- C . Use any of the instance endpoints for the selected three nodes.
- D . Use the reader endpoint to automatically distribute the read-only workload.
A
Explanation:
In Amazon Aurora, a custom endpoint is a feature that allows you to create a load-balanced endpoint that directs traffic to a specific set of instances in your Aurora DB cluster. This is particularly useful when you want to route traffic to a subset of instances that have different configurations or when you want to isolate specific workloads (e.g., reporting queries) to certain instances.
Custom Endpoint: The correct solution is to create a custom endpoint that includes the three Aurora Replicas that the department wants to use for near-real-time reporting. This custom endpoint will distribute the reporting queries only across the three selected replicas with the specified compute and memory configurations, ensuring that these queries do not affect the rest of the DB cluster.
Other Options:
Option B (Create a three-node cluster clone): This would create a separate cluster with its own resources, but it is not necessary and could incur additional costs. Also, it doesn’t leverage the existing replicas.
Option C (Use any of the instance endpoints): This would involve manually managing connections to individual instances, which is not scalable or automatic.
Option D (Use the reader endpoint): The reader endpoint would distribute the read queries across all replicas in the cluster, not just the selected three. This would not meet the requirement to limit the reporting queries to only three specific replicas.
Reference: Amazon Aurora Endpoints - Provides detailed information on the different types of endpoints available in Aurora, including custom endpoints.
Custom Endpoints in Amazon Aurora - Specific documentation on how to create and use custom endpoints to direct traffic to selected instances in an Aurora cluster.
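A sketch of the RDS CreateDBClusterEndpoint request for option A (the cluster and instance identifiers are hypothetical); with boto3 this would be passed to `rds.create_db_cluster_endpoint(**custom_endpoint_params)`:

```python
# A custom READER endpoint whose static membership is limited to the three
# replicas provisioned with the reporting workload's compute/memory class.
custom_endpoint_params = {
    "DBClusterIdentifier": "prod-aurora-cluster",
    "DBClusterEndpointIdentifier": "reporting-endpoint",
    "EndpointType": "READER",
    "StaticMembers": ["replica-4", "replica-5", "replica-6"],  # placeholders
}
```

The reporting department connects to the DNS name of this custom endpoint, and Aurora load-balances its queries across only those three members, leaving the cluster's default reader endpoint untouched for other workloads.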
A genomics research company is designing a scalable architecture for a loosely coupled workload. Tasks in the workload are independent and can be processed in parallel. The architecture needs to minimize management overhead and provide automatic scaling based on demand.
Which solution will meet these requirements?
- A . Use a cluster of Amazon EC2 instances. Use AWS Systems Manager to manage the workload.
- B . Implement a serverless architecture that uses AWS Lambda functions.
- C . Use AWS ParallelCluster to deploy a dedicated high-performance cluster.
- D . Implement vertical scaling for each workload task.
B
Explanation:
For workloads where tasks are independent and can be processed in parallel, and where minimizing management overhead is a priority, a serverless architecture using AWS Lambda is ideal.
AWS Lambda allows you to run code without provisioning or managing servers. It automatically scales your application by running code in response to each trigger.
Parallel Processing: Lambda functions can process multiple tasks concurrently, making it suitable for parallel workloads.
Automatic Scaling: Lambda automatically scales by running code in response to each event, scaling precisely with the size of the workload.
Minimal Management Overhead: With Lambda, there’s no need to manage the underlying infrastructure, reducing operational complexity.
Reference: AWS Lambda - Run Code Without Thinking About Servers
Best Practices for Designing and Architecting with AWS Lambda
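A minimal sketch of such a handler (the event shape with `task_id` and `payload` keys is an assumption, not a fixed AWS format): each invocation processes one independent task, and Lambda scales out simply by running many invocations concurrently.

```python
# One Lambda invocation = one independent task from the parallel workload.
def handler(event, context):
    task_id = event["task_id"]
    sequence = event["payload"]   # e.g. a genomic fragment to analyze
    # Placeholder "analysis": count each base present in the fragment.
    counts = {base: sequence.count(base) for base in set(sequence)}
    return {"task_id": task_id, "result": counts}

# Local usage example (the context argument is unused here, so None suffices):
out = handler({"task_id": 1, "payload": "GATTACA"}, None)
print(out)
```

In practice each task could arrive via an Amazon SQS queue or S3 event trigger, so no scheduler or server fleet needs to be managed.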
A company runs a latency-sensitive gaming service in the AWS Cloud. The gaming service runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). An Amazon DynamoDB table stores the gaming data. All the infrastructure is in a single AWS Region. The main user base is in that same Region.
A solutions architect needs to update the architecture to support a global expansion of the gaming service. The gaming service must operate with the least possible latency.
Which solution will meet these requirements?
- A . Create an Amazon CloudFront distribution in front of the ALB.
- B . Deploy an Amazon API Gateway regional API endpoint. Integrate the API endpoint with the ALB.
- C . Create an accelerator in AWS Global Accelerator. Add a listener. Configure the endpoint to point to the ALB.
- D . Deploy the ALB and the fleet of EC2 instances to another Region. Use Amazon Route 53 with geolocation routing.
C
Explanation:
For latency-sensitive, globally distributed applications such as online gaming, minimizing network latency between users and application endpoints is critical. AWS Global Accelerator is designed specifically for this purpose. It uses the AWS global network and Anycast IP addresses to route user traffic to the closest healthy endpoint, reducing latency and improving performance for users worldwide.
Option C is the best solution because Global Accelerator directs traffic over the AWS backbone network instead of the public internet, which significantly reduces jitter and latency. It also provides built-in health checks and automatic failover, improving availability as the service expands globally. Importantly, Global Accelerator works seamlessly with existing Application Load Balancers, allowing the company to enhance performance without redesigning the application stack.
Option A (CloudFront) is optimized for caching static and cacheable content and is not ideal for real-time, stateful gaming traffic.
Option B adds unnecessary API abstraction and additional latency.
Option D introduces regional duplication and DNS-based routing, which is slower to react to network conditions and does not provide the same low-latency routing as Global Accelerator.
Therefore, C meets the requirement for global expansion with the least possible latency while maintaining a simple and highly performant architecture.
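A sketch of the three Global Accelerator API calls option C describes, as parameter payloads (all ARNs and names below are hypothetical placeholders); with boto3 these would go to the `globalaccelerator` client's `create_accelerator`, `create_listener`, and `create_endpoint_group` calls respectively:

```python
# 1. The accelerator itself, which provides two static anycast IP addresses.
create_accelerator_params = {
    "Name": "gaming-accelerator",
    "IpAddressType": "IPV4",
    "Enabled": True,
}

# 2. A listener for the game's client traffic on TCP 443.
create_listener_params = {
    "AcceleratorArn": "arn:aws:globalaccelerator::111122223333:accelerator/example",
    "Protocol": "TCP",
    "PortRanges": [{"FromPort": 443, "ToPort": 443}],
}

# 3. An endpoint group in us-west-2 pointing at the existing ALB.
create_endpoint_group_params = {
    "ListenerArn": "arn:aws:globalaccelerator::111122223333:accelerator/example/listener/abc",
    "EndpointGroupRegion": "us-west-2",
    "EndpointConfigurations": [{
        "EndpointId": "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/game-alb/xyz",
        "Weight": 128,
        "ClientIPPreservationEnabled": True,  # keep player IPs visible to the ALB
    }],
}
```

As the service expands, additional endpoint groups in new Regions can be added behind the same static IPs without any client-side change.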
A company wants to provide a third-party system that runs in a private data center with access to its AWS account. The company wants to call AWS APIs directly from the third-party system. The company has an existing process for managing digital certificates. The company does not want to use SAML or OpenID Connect (OIDC) capabilities and does not want to store long-term AWS credentials.
Which solution will meet these requirements?
- A . Configure mutual TLS to allow authentication of the client and server sides of the communication channel.
- B . Configure AWS Signature Version 4 to authenticate incoming HTTPS requests to AWS APIs.
- C . Configure Kerberos to exchange tickets for assertions that can be validated by AWS APIs.
- D . Configure AWS Identity and Access Management (IAM) Roles Anywhere to exchange X.509 certificates for AWS credentials to interact with AWS APIs.
D
Explanation:
AWS IAM Roles Anywhere lets workloads outside AWS obtain temporary AWS credentials by presenting X.509 certificates issued by a certificate authority the company registers as a trust anchor. This reuses the company's existing digital-certificate process, requires no SAML or OIDC federation, and avoids storing long-term credentials because the credentials issued are short-lived. Option A (mutual TLS) authenticates a connection but does not by itself produce credentials for signing AWS API calls; option B (Signature Version 4) is a signing scheme that still requires credentials to sign with; option C (Kerberos) is not a supported mechanism for authenticating to AWS APIs.
A company is running a web-based game in two Availability Zones in the us-west-2 Region. The web servers use an Application Load Balancer (ALB) in public subnets. The ALB has an SSL certificate from AWS Certificate Manager (ACM) with a custom domain name. The game is written in JavaScript and runs entirely in a user’s web browser.
The game is increasing in popularity in many countries around the world. The company wants to update the application architecture and optimize costs without compromising performance.
What should a solutions architect do to meet these requirements?
- A . Use Amazon CloudFront and create a global distribution that points to the ALB. Reuse the existing certificate from ACM for the CloudFront distribution. Use Amazon Route 53 to update the application alias to point to the distribution.
- B . Use AWS CloudFormation to deploy the application stack to AWS Regions near countries where the game is popular. Use ACM to create a new certificate for each application instance. Use Amazon Route 53 with a geolocation routing policy to direct traffic to the local application instance.
- C . Use Amazon S3 and create an S3 bucket in AWS Regions near countries where the game is popular. Deploy the HTML and JavaScript files to each S3 bucket. Use ACM to create a new certificate for each S3 bucket. Use Amazon Route 53 with a geolocation routing policy to direct traffic to the local S3 bucket.
- D . Use Amazon S3 and create an S3 bucket in us-west-2. Deploy the HTML and JavaScript files to the S3 bucket. Use Amazon CloudFront and create a global distribution with the S3 bucket as the origin. Use ACM to create a new certificate for the distribution. Use Amazon Route 53 to update the application alias to point to the distribution.
D
Explanation:
The correct answer is D because the application is written entirely in JavaScript and runs in users’ web browsers, which means the workload is essentially a static web application. Static assets such as HTML, JavaScript, CSS, and related files are best hosted on Amazon S3, which provides highly durable and low-cost object storage. Putting Amazon CloudFront in front of the S3 bucket allows the application to be delivered globally through edge locations, which reduces latency for users in many countries while also lowering the load on the origin.
This design is more cost-effective than continuing to serve the application from EC2 instances behind an ALB because it eliminates most of the compute and load balancing cost for static content delivery. CloudFront caches the content close to users and can improve performance worldwide without the complexity of deploying full application stacks in multiple Regions.
Option A is less cost-effective because it still depends on the ALB and EC2 instances as the origin for content that is static. Also, CloudFront requires an ACM certificate in us-east-1 for custom domain names, so reusing the existing certificate from the ALB is not the right assumption.
Option B introduces significant multi-Region infrastructure cost and management overhead.
Option C is unnecessarily complex because multiple S3 buckets in multiple Regions are not required when CloudFront can cache globally from a single origin.
AWS best practices for static web applications recommend Amazon S3 for storage and Amazon CloudFront for global distribution. This approach provides strong performance, simplicity, and cost
optimization.
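A small sketch of the upload step for option D (the bucket name is a hypothetical placeholder): deriving a correct Content-Type per file matters because CloudFront forwards the S3 object's header to browsers.

```python
import mimetypes

def put_object_params(bucket, filename, body):
    """Build the PutObject parameters for one static asset.

    With boto3 each file would be uploaded via s3.put_object(**params).
    """
    content_type, _ = mimetypes.guess_type(filename)
    return {
        "Bucket": bucket,
        "Key": filename,
        "Body": body,
        # Fall back to a generic binary type when the extension is unknown.
        "ContentType": content_type or "application/octet-stream",
    }

# Usage example:
params = put_object_params("example-game-assets", "index.html", b"<html></html>")
print(params["ContentType"])   # text/html
```

The CloudFront distribution then uses this bucket as its origin, and the ACM certificate for the custom domain is requested in us-east-1 as CloudFront requires.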
