Practice Free SAA-C03 Exam Online Questions
A company collects 10 GB of telemetry data every day from multiple devices. The company stores the data in an Amazon S3 bucket that is in a source data account.
The company has hired several consulting agencies to analyze the company’s data. Each agency has a unique AWS account. Each agency requires read access to the company’s data. The company needs a secure solution to share the data from the source data account to the consulting agencies.
Which solution will meet these requirements with the LEAST operational effort?
- A . Set up an Amazon CloudFront distribution. Use the S3 bucket as the origin.
- B . Make the S3 bucket public for a limited time. Inform only the agencies that the bucket is publicly accessible.
- C . Configure cross-account access for the S3 bucket to the accounts that the agencies own.
- D . Set up an IAM user for each agency in the source data account. Grant each agency IAM user access to the company’s S3 bucket.
C
Explanation:
The most secure and least operationally intensive method is to configure cross-account access using resource-based policies on the S3 bucket. This allows trusted external AWS accounts (consulting agencies) to securely access the S3 data without the need to manage user credentials or build additional infrastructure.
Option A adds a distribution without restricting access to specific AWS accounts, and option B exposes the data publicly, which is a clear security risk.
Option D increases operational overhead because the company would have to create and manage IAM users and long-term credentials for external parties inside its own account, which goes against IAM best practices.
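As an illustration of option C, the cross-account grant can be expressed as a bucket policy in the source data account. The sketch below uses boto3; the bucket name and agency account IDs are placeholders.

```python
import json

import boto3

# Placeholder values; replace with the real bucket name and agency account IDs.
BUCKET = "company-telemetry-data"
AGENCY_ACCOUNTS = ["111122223333", "444455556666"]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Let each agency account list the bucket contents.
            "Sid": "AgencyListBucket",
            "Effect": "Allow",
            "Principal": {"AWS": [f"arn:aws:iam::{a}:root" for a in AGENCY_ACCOUNTS]},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            # Read-only access to the objects themselves.
            "Sid": "AgencyReadObjects",
            "Effect": "Allow",
            "Principal": {"AWS": [f"arn:aws:iam::{a}:root" for a in AGENCY_ACCOUNTS]},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Each agency must still grant its own IAM principals matching s3:GetObject and s3:ListBucket permissions; cross-account access requires an allow on both sides.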
A company stores sensitive customer data in an Amazon DynamoDB table. The company frequently updates the data. The company wants to use the data to personalize offers for customers.
The company’s analytics team has its own AWS account. The analytics team runs an application on Amazon EC2 instances that needs to process data from the DynamoDB tables. The company needs to follow security best practices to create a process to regularly share data from DynamoDB to the analytics team.
Which solution will meet these requirements?
- A . Export the required data from the DynamoDB table to an Amazon S3 bucket as multiple JSON files. Provide the analytics team with the necessary IAM permissions to access the S3 bucket.
- B . Allow public access to the DynamoDB table. Create an IAM user that has permission to access DynamoDB. Share the IAM user with the analytics team.
- C . Allow public access to the DynamoDB table. Create an IAM user that has read-only permission for DynamoDB. Share the IAM user with the analytics team.
- D . Create a cross-account IAM role. Create an IAM policy that allows the AWS account ID of the analytics team to access the DynamoDB table. Attach the IAM policy to the IAM role. Establish a trust relationship between accounts.
D
Explanation:
Using cross-account IAM roles is the most secure and scalable way to share data between AWS accounts.
A trust relationship allows the analytics team’s account to assume the role in the main account and access the DynamoDB table directly.
A is feasible but involves data duplication, additional S3 storage costs, and a recurring export process, and it does not provide direct access to the frequently updated table.
B and C violate security best practices by allowing public access to sensitive data and by sharing IAM user credentials, both of which are strongly discouraged.
Reference: AWS Documentation – Cross-Account Access with IAM Roles; Best Practices for Amazon DynamoDB Security
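A hedged boto3 sketch of option D follows; the account IDs, role name, and table name are hypothetical.

```python
import json

import boto3

# Hypothetical identifiers.
DATA_ACCOUNT_ROLE_ARN = "arn:aws:iam::111122223333:role/AnalyticsDynamoDBReadRole"
ANALYTICS_ACCOUNT_ID = "444455556666"

# (1) In the data account: the role's trust policy allows the analytics
# account to assume it. It would be passed to iam.create_role as the
# AssumeRolePolicyDocument, alongside a read-only DynamoDB permissions policy.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ANALYTICS_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))

# (2) In the analytics account: the EC2 application assumes the role and
# reads the table with the temporary credentials.
creds = boto3.client("sts").assume_role(
    RoleArn=DATA_ACCOUNT_ROLE_ARN, RoleSessionName="analytics-read"
)["Credentials"]

dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
items = dynamodb.scan(TableName="CustomerOffers", Limit=10)
```

Because the credentials come from STS, they expire automatically, so no long-term secrets are shared between the accounts.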
A large financial services company uses Amazon ElastiCache (Redis OSS) for its new application that has a global user base. A solutions architect must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company’s security team requires the encryption of cross-Region data transfers.
Which solution meets these requirements with the LEAST amount of operational effort?
- A . Enable cluster mode in ElastiCache (Redis OSS). Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.
- B . Create a global data store in ElastiCache (Redis OSS). Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.
- C . Disable cluster mode in ElastiCache (Redis OSS). Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.
- D . Create a snapshot of ElastiCache (Redis OSS) in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster from the failover Region when DR is required.
B
Explanation:
The optimal solution for low-latency global caching with disaster recovery and cross-Region replication is to use Amazon ElastiCache Global Datastore for Redis OSS.
A Global Datastore enables fully managed cross-Region replication and supports automatic failover by promoting read replica clusters in another Region.
ElastiCache Global Datastore encrypts cross-Region replication traffic in transit and supports encryption at rest, meeting the security team's requirement.
It’s a fully managed AWS-native feature, reducing operational effort compared to setting up DMS-based or snapshot-based replication manually.
The other options (A, C, and D) require manual setup and ongoing management, such as custom DMS pipelines or snapshot copies, and none of them provides real-time replication or failover without manual intervention.
Reference: ElastiCache Global Datastore for Redis
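A rough boto3 sketch of option B, assuming an existing primary replication group; the IDs and Regions below are placeholders, and the full global datastore ID that AWS returns includes a service-generated prefix.

```python
import boto3

# Placeholders: an existing Redis OSS replication group in the primary Region.
PRIMARY_REGION = "us-east-1"
SECONDARY_REGION = "eu-west-1"
PRIMARY_REPLICATION_GROUP_ID = "app-cache-primary"

primary = boto3.client("elasticache", region_name=PRIMARY_REGION)

# Promote the existing replication group into a global datastore.
# AWS prepends a generated prefix to the suffix to form the full ID.
global_ds = primary.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="app-cache-global",
    PrimaryReplicationGroupId=PRIMARY_REPLICATION_GROUP_ID,
)
global_id = global_ds["GlobalReplicationGroup"]["GlobalReplicationGroupId"]

# Create a secondary (read replica) cluster in another Region that joins
# the global datastore; the service encrypts the cross-Region traffic.
secondary = boto3.client("elasticache", region_name=SECONDARY_REGION)
secondary.create_replication_group(
    ReplicationGroupId="app-cache-secondary",
    ReplicationGroupDescription="DR replica for app cache",
    GlobalReplicationGroupId=global_id,
)

# When DR is required, failover_global_replication_group() promotes the
# secondary Region's replication group to primary.
```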
A company is running a media store across multiple Amazon EC2 instances distributed across multiple Availability Zones in a single VPC. The company wants a high-performing solution to share data between all the EC2 instances, and prefers to keep the data within the VPC only.
What should a solutions architect recommend?
- A . Create an Amazon S3 bucket and call the service APIs from each instance’s application.
- B . Create an Amazon S3 bucket and configure all instances to access it as a mounted volume.
- C . Configure an Amazon Elastic Block Store (Amazon EBS) volume and mount it across all instances.
- D . Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances.
D
Explanation:
Amazon Elastic File System (EFS) is a managed file storage service that can be mounted across multiple EC2 instances. It provides a scalable and high-performing solution to share data among instances within a VPC.
High Performance: EFS provides scalable performance for workloads that require high throughput and IOPS. It is particularly well-suited for applications that need to share data across multiple instances.
Ease of Use: EFS can be easily mounted on multiple instances across different Availability Zones, providing a shared file system accessible to all the instances within the VPC.
Security: EFS can be configured to ensure that data remains within the VPC, and it supports encryption at rest and in transit.
Why Not Other Options?:
Option A (Amazon S3 bucket with APIs): While S3 is excellent for object storage, it is not a file system and does not provide the low-latency access required for shared data between instances.
Option B (S3 bucket as a mounted volume): S3 is not designed to be mounted as a file system, and this approach would introduce unnecessary complexity and latency.
Option C (EBS volume shared across instances): Standard EBS volumes cannot be attached to multiple instances simultaneously, and even EBS Multi-Attach (io1/io2 only) is limited to a single Availability Zone and requires a cluster-aware file system, so EBS is not designed for sharing data across instances the way EFS is.
Reference: Amazon EFS – Overview of Amazon EFS and its features.
Best Practices for Amazon EFS – Recommendations for using EFS with multiple instances.
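A brief sketch of provisioning a VPC-scoped EFS file system with boto3. The subnet and security group IDs are placeholders, and the mount command assumes the amazon-efs-utils package is installed on the instances.

```python
import boto3

efs = boto3.client("efs")

# Create an encrypted file system; General Purpose mode suits most
# multi-instance shared-data workloads.
fs = efs.create_file_system(
    CreationToken="media-store-shared",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone keeps access inside the VPC.
for subnet_id in ["subnet-0example1", "subnet-0example2", "subnet-0example3"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0example1"],  # must allow NFS (TCP 2049)
    )

# On each EC2 instance (shell, not Python), with amazon-efs-utils installed:
#   sudo mount -t efs -o tls <fs_id>:/ /mnt/media
```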
A company hosts an application on Amazon EC2 instances that are part of a target group behind an Application Load Balancer (ALB). The company has attached a security group to the ALB.
During a recent review of application logs, the company found many unauthorized login attempts from IP addresses that belong to countries outside the company’s normal user base. The company wants to allow traffic only from the United States and Australia.
Which solution will meet these requirements?
- A . Edit the default network ACL to block IP addresses from outside of the allowed countries.
- B . Create a geographic match rule in AWS WAF. Attach the rule to the ALB.
- C . Configure the ALB security group to allow the IP addresses of company employees. Edit the default network ACL to block IP addresses from outside of the allowed countries.
- D . Use a host-based firewall on the EC2 instances to block IP addresses from outside of the allowed countries. Configure the ALB security group to allow the IP addresses of company employees.
B
Explanation:
Why Option B is Correct:
AWS WAF: Provides a simple way to create geographic match rules to block or allow traffic based on country IP ranges.
Least Operational Overhead: Attaching the WAF rule to the ALB ensures centralized control without modifying ACLs or instance firewalls.
Why Other Options Are Not Ideal:
Option A: Network ACLs operate at the subnet level and can become complex to manage for dynamic or evolving IP ranges.
Option C: Managing IP-based rules in security groups and ACLs lacks scalability and does not provide country-based filtering.
Option D: Configuring host-based firewalls increases operational overhead and does not leverage AWS-managed solutions.
Reference: AWS Documentation – AWS WAF Geographic Match Rules
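A hedged boto3 sketch of option B: a regional web ACL that blocks by default, allows only US and Australian traffic, and is then attached to the ALB. The names and the ALB ARN are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="allow-us-au-only",
    Scope="REGIONAL",  # REGIONAL scope is required for ALBs
    DefaultAction={"Block": {}},  # everything not matched below is blocked
    Rules=[
        {
            "Name": "allow-us-au",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US", "AU"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "AllowUsAu",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowUsAuAcl",
    },
)

# Placeholder ALB ARN; the association puts the ACL in front of the traffic.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)
```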
A company has Amazon EC2 instances in multiple AWS Regions. The instances all store and retrieve confidential data from the same Amazon S3 bucket. The company wants to improve the security of its current architecture.
The company wants to ensure that only the Amazon EC2 instances within its VPC can access the S3 bucket. The company must block all other access to the bucket.
Which solution will meet this requirement?
- A . Use IAM policies to restrict access to the S3 bucket.
- B . Use server-side encryption (SSE) to encrypt data in the S3 bucket at rest. Store the encryption key on the EC2 instances.
- C . Create a VPC endpoint for Amazon S3. Configure an S3 bucket policy to allow connections only from the endpoint.
- D . Use AWS Key Management Service (AWS KMS) with customer-managed keys to encrypt the data before sending the data to the S3 bucket.
C
Explanation:
Creating a VPC endpoint for S3 and configuring a bucket policy to allow access only from the endpoint ensures that only EC2 instances within the VPC can access the S3 bucket. This solution improves security by restricting access at the network level without the need for public internet access.
Option A (IAM policies): Identity-based IAM policies apply only to the principals they are attached to; they cannot block every other access path to the bucket.
Option B and D (Encryption): Encryption secures data at rest but does not restrict network access to the bucket.
Reference: AWS Documentation – Amazon S3 VPC Endpoints
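One way option C could look, sketched with boto3: create a gateway endpoint, then deny any S3 request that does not arrive through it. The VPC, route table, and bucket identifiers are placeholders.

```python
import json

import boto3

BUCKET = "confidential-data-bucket"  # placeholder

# Gateway endpoint for S3 inside the VPC (placeholder VPC/route table IDs).
ec2 = boto3.client("ec2", region_name="us-east-1")
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0example1",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0example1"],
)
vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy: deny all access that does not come through the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }
    ],
}
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Note that an explicit deny like this also blocks console and administrative access from outside the VPC, so in practice it is usually scoped with additional condition keys.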
A city’s weather forecast team is using Amazon DynamoDB in the data tier for an application. The application has several components. The analysis component of the application requires repeated reads against a large dataset. The application has started to temporarily consume all the read capacity in the DynamoDB table and is negatively affecting other applications that need to access the same data.
Which solution will resolve this issue with the LEAST development effort?
- A . Use DynamoDB Accelerator (DAX).
- B . Use Amazon CloudFront in front of DynamoDB.
- C . Create a DynamoDB table with a local secondary index (LSI).
- D . Use Amazon ElastiCache in front of DynamoDB.
A
Explanation:
DynamoDB Accelerator (DAX) is a fully managed, in-memory cache specifically for DynamoDB. It reduces read load and latency without requiring code changes (only SDK config). This is the least development effort.
“Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available in-memory cache for DynamoDB that delivers microsecond read performance and requires minimal application changes.” ― Amazon DAX
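A minimal sketch of pointing an application at DAX, assuming the amazon-dax-client package for Python and its boto3-style resource interface; the cluster endpoint and table name are placeholders, and the rest of the DynamoDB code stays unchanged.

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# Placeholder cluster endpoint; reads go to the in-memory cache first.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
)

table = dax.Table("WeatherAnalysis")  # placeholder table name

# Same resource API as DynamoDB, so the repeated analysis reads hit the
# cache instead of consuming the table's read capacity.
item = table.get_item(Key={"station_id": "KSEA", "reading_date": "2024-06-01"})
```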
A company is migrating a large amount of data from on-premises storage to AWS. Windows, Mac, and Linux based Amazon EC2 instances in the same AWS Region will access the data by using SMB and NFS storage protocols. The company will access a portion of the data routinely. The company will access the remaining data infrequently.
The company needs to design a solution to host the data.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an Amazon Elastic File System (Amazon EFS) volume that uses EFS Intelligent-Tiering. Use AWS DataSync to migrate the data to the EFS volume.
- B . Create an Amazon FSx for ONTAP instance. Create an FSx for ONTAP file system with a root volume that uses the auto tiering policy. Migrate the data to the FSx for ONTAP volume.
- C . Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an AWS Storage Gateway Amazon S3 File Gateway.
- D . Create an Amazon FSx for OpenZFS file system. Migrate the data to the new volume.
B
Explanation:
Amazon FSx for ONTAP supports both NFS and SMB protocols and includes automated tiering between SSD and capacity pool storage, optimizing cost and performance. It is ideal for mixed operating systems and varied access patterns with minimal administrative overhead.
Reference: AWS Documentation – Amazon FSx for NetApp ONTAP and Auto Tiering for Multi-Protocol Access
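A hedged boto3 sketch of option B: a Multi-AZ FSx for ONTAP file system, a storage virtual machine serving both SMB and NFS, and a volume with the AUTO tiering policy. All sizes, IDs, and names are placeholders.

```python
import boto3

fsx = boto3.client("fsx")

# Placeholder subnets/security groups; MULTI_AZ_1 needs a preferred subnet.
fs = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB of primary SSD storage
    SubnetIds=["subnet-0example1", "subnet-0example2"],
    SecurityGroupIds=["sg-0example1"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 128,  # MB/s
        "PreferredSubnetId": "subnet-0example1",
    },
)
fs_id = fs["FileSystem"]["FileSystemId"]

# A storage virtual machine exposes the volume over both NFS and SMB.
svm = fsx.create_storage_virtual_machine(FileSystemId=fs_id, Name="media-svm")

# The AUTO tiering policy moves cold blocks to the cheaper capacity pool tier.
fsx.create_volume(
    VolumeType="ONTAP",
    Name="shared_data",
    OntapConfiguration={
        "StorageVirtualMachineId": svm["StorageVirtualMachine"]["StorageVirtualMachineId"],
        "JunctionPath": "/shared",
        "SizeInMegabytes": 512000,
        "TieringPolicy": {"Name": "AUTO"},
    },
)
```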
A solutions architect is creating a data reporting application that will send traffic through third-party network firewalls in an AWS security account. The firewalls and application servers must be load balanced.
The application uses TCP connections to generate reports. The reports can run for several hours and can be idle for up to 1 hour. The reports must not time out during an idle period.
Which solution will meet these requirements?
- A . Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout period to 1 hour.
- B . Use a single firewall in the security account. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout and firewall idle timeout periods to 1 hour.
- C . Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the idle timeout periods for the ALB, the GWLB, and the firewalls to 1 hour.
- D . Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Configure the ALB idle timeout period to 1 hour. Increase the application server capacity to finish the report generation faster.
C
Explanation:
Since the application uses long-lived TCP connections and must remain idle for up to 1 hour without timeout, all components involved in the connection path (ALB, GWLB, and firewall) must have their idle timeout values configured to at least 1 hour.
Gateway Load Balancer supports transparent insertion of firewalls, and configuring consistent idle timeouts ensures connections don’t drop mid-session.
Using just one firewall (option B) introduces a single point of failure. Increasing capacity (option D) doesn’t solve idle timeout issues. Therefore, C provides a resilient and complete configuration.
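For the ALB portion, the idle timeout is a load balancer attribute; a minimal boto3 sketch with a placeholder ARN is below. The GWLB and firewall idle timeouts are configured separately, on the appliances and, where supported, on the load balancer's flow-timeout settings.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ALB ARN. 3600 seconds = 1 hour; the ALB accepts up to 4000.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/report-alb/abc123",
    Attributes=[{"Key": "idle_timeout.timeout_seconds", "Value": "3600"}],
)
```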
A solutions architect needs to implement a solution that can handle up to 5,000 messages per second. The solution must publish messages as events to multiple consumers. The messages are up to 500 KB in size. The message consumers need to have the ability to use multiple programming languages to consume the messages with minimal latency. The solution must retain published messages for more than 3 months. The solution must enforce strict ordering of the messages.
Which solution will meet these requirements?
- A . Publish messages to an Amazon Kinesis Data Streams data stream. Enable enhanced fan-out. Ensure that consumers ingest the data stream by using dedicated throughput.
- B . Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to subscribe to the topic.
- C . Publish messages to Amazon EventBridge. Allow each consumer to create rules to deliver messages to the consumer’s own target.
- D . Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use Amazon Data Firehose to subscribe to the topic.
A
Explanation:
Amazon Kinesis Data Streams supports records of up to 1 MB, so the 500 KB messages fit. Extended retention keeps data for up to 365 days, which covers the 3-month requirement, and ordering is strictly preserved within each shard. Enhanced fan-out gives every registered consumer its own dedicated 2 MB/s of read throughput per shard with push-based, low-latency delivery, and Kinesis client libraries are available for multiple programming languages.
B and D fail on message size and retention: SNS limits messages to 256 KB and does not retain them for months. C fails as well, because EventBridge events are limited to 256 KB, are not retained after delivery, and are not delivered with strict ordering guarantees.
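A minimal boto3 sketch of the pieces option A relies on: a stream, extended retention, and a registered enhanced fan-out consumer. The names and shard count are placeholders, and the shard count would be sized for the aggregate throughput.

```python
import boto3

kinesis = boto3.client("kinesis")

STREAM = "event-messages"  # placeholder

# Placeholder shard count; ordering is guaranteed per shard.
kinesis.create_stream(StreamName=STREAM, ShardCount=64)
kinesis.get_waiter("stream_exists").wait(StreamName=STREAM)

# Extend retention to the maximum of 365 days (8760 hours) to cover
# the 3-month requirement.
kinesis.increase_stream_retention_period(
    StreamName=STREAM, RetentionPeriodHours=8760
)

# Each consumer registers for enhanced fan-out to get its own dedicated
# 2 MB/s per shard; it then reads via the SubscribeToShard API.
stream_arn = kinesis.describe_stream_summary(StreamName=STREAM)[
    "StreamDescriptionSummary"
]["StreamARN"]
kinesis.register_stream_consumer(
    StreamARN=stream_arn, ConsumerName="analytics-consumer"
)
```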