Practice Free SAA-C03 Exam Online Questions
A company recently migrated a data warehouse to AWS. The company has an AWS Direct Connect connection to AWS. Company users query the data warehouse by using a visualization tool. The average size of the queries that the data warehouse returns is 50 MB. The average visualization that the visualization tool produces is 500 KB in size. The result sets that the data warehouse returns are not cached.
The company wants to optimize costs for data transfers between the data warehouse and the company.
Which solution will meet this requirement?
- A . Host the visualization tool on premises. Connect to the data warehouse directly through the internet.
- B . Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the internet.
- C . Host the visualization tool on premises. Connect to the data warehouse through the Direct Connect connection.
- D . Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the Direct Connect connection.
D
Explanation:
Hosting the visualization tool in the same Region as the data warehouse keeps the 50 MB query results inside AWS, so only the roughly 500 KB visualizations travel over the Direct Connect connection to the company's users. Because the result sets are not cached, every query would otherwise move 50 MB across the connection; serving only the rendered visualizations cuts the transferred volume by roughly a factor of 100. Data transfer out over Direct Connect is also billed at a lower rate than transfer out over the internet, so option D minimizes data transfer costs while preserving a private, consistent connection for users.
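As a sanity check, the per-query transfer volumes stated in the question can be compared directly. This is a minimal sketch using only the figures given (50 MB average result set, 500 KB average visualization):

```python
# Rough per-query data-transfer comparison between hosting the
# visualization tool on premises (option C) and in-Region (option D).
# Figures come from the question; no real pricing is used.

MB = 1024  # KB per MB

query_result_kb = 50 * MB   # crosses Direct Connect if the tool is on premises
visualization_kb = 500      # crosses Direct Connect if the tool runs in-Region

reduction = query_result_kb / visualization_kb
print(f"Hosting the tool in-Region moves ~{reduction:.0f}x less data per query")
```

At these sizes, moving the tool next to the warehouse reduces the data crossing the Direct Connect link by roughly two orders of magnitude per query.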
A company runs business applications on AWS. The company uses 50 AWS accounts, thousands of VPCs, and three AWS Regions across the United States and Europe. The company has an existing AWS Direct Connect connection that connects an on-premises data center to a single Region.
A solutions architect needs to establish network connectivity between the on-premises data center and the remaining two Regions. The solutions architect must also establish connectivity between the VPCs. On-premises users and applications must be able to connect to applications that run in the VPCs. The solutions architect creates a transit gateway in each Region and configures the transit gateways as inter-Region peers.
What should the solutions architect do next to meet these requirements?
- A . Create a private virtual interface (VIF) with a gateway type of virtual private gateway. Configure the private VIF to use a virtual private gateway that is associated with one of the VPCs.
- B . Create a private virtual interface (VIF) to a new Direct Connect gateway. Associate the new Direct Connect gateway with a virtual private gateway in each VPC.
- C . Create a transit virtual interface (VIF) with a gateway association to a new Direct Connect gateway. Associate each transit gateway with the new Direct Connect gateway.
- D . Create an AWS Site-to-Site VPN connection that uses a public virtual interface (VIF) for the Direct Connect connection. Attach the Site-to-Site VPN connection to the transit gateways.
C
Explanation:
The design already uses one transit gateway (TGW) per Region and inter-Region TGW peering, which addresses the multi-Region AWS-side routing. The remaining requirement is to extend on-premises connectivity over Direct Connect so that on-premises networks can reach VPCs attached to the TGWs across all three Regions. The most operationally efficient and scalable approach is to use AWS Direct Connect Gateway (DXGW) with a transit virtual interface (transit VIF) and then associate the Regional transit gateways to that DXGW.
Option C is purpose-built for this: a transit VIF is specifically used to connect a Direct Connect connection to a Direct Connect gateway, and a DXGW can then be associated to multiple transit gateways (and can be used across Regions), enabling centralized connectivity from on-premises to TGW-connected VPCs. With TGW attachments and TGW route tables, you can propagate and control routes across many VPCs and accounts, which fits the “50 accounts, thousands of VPCs” scale. Inter-Region TGW peering then allows on-premises routes learned via the DXGW/TGW in one Region to reach workloads in the other Regions through the TGW peering relationships, subject to routing configuration.
Option A is too limited and not scalable because it ties Direct Connect to a virtual private gateway (VGW) associated with a single VPC, which does not meet the multi-VPC, multi-account hub-and-spoke requirement.
Option B incorrectly suggests associating a DXGW with a VGW “in each VPC” (VGW is per VPC and would not scale well here, and it doesn’t integrate with the TGW hub design you’ve already built).
Option D is not the intended pattern: Site-to-Site VPN and public VIF do not replace the DXGW + transit VIF architecture for large-scale TGW-based private routing.
A company uses a set of Amazon EC2 instances to host a website. The website uses an Amazon S3 bucket to store images and media files.
The company wants to automate website infrastructure creation to deploy the website to multiple AWS Regions. The company also wants to provide the EC2 instances access to the S3 bucket so the instances can store and access data by using AWS Identity and Access Management (IAM).
Which solution will meet these requirements MOST securely?
- A . Create an AWS CloudFormation template for the web server EC2 instances. Save an IAM access key in the UserData section of the AWS::EC2::Instance entity in the CloudFormation template.
- B . Create a file that contains an IAM secret access key and access key ID. Store the file in a new S3 bucket. Create an AWS CloudFormation template. In the template, create a parameter to specify the location of the S3 object that contains the access key and access key ID.
- C . Create an IAM role and an IAM access policy that allows the web server EC2 instances to access the S3 bucket. Create an AWS CloudFormation template for the web server EC2 instances that contains an IAM instance profile entity that references the IAM role and the IAM access policy.
- D . Create a script that retrieves an IAM secret access key and access key ID from IAM and stores them on the web server EC2 instances. Include the script in the UserData section of the AWS::EC2::Instance entity in an AWS CloudFormation template.
C
Explanation:
The most secure solution for allowing EC2 instances to access an S3 bucket is to use an IAM role. An IAM role can be created with an access policy that grants the required permissions (for example, to read and write to the S3 bucket). The IAM role is then associated with the EC2 instances through an IAM instance profile.
By associating the role with the instances, the EC2 instances can securely assume the role and receive temporary credentials via the instance metadata service. This avoids the need to store credentials (such as access keys) on the instances or within the application, enhancing security and reducing the risk of credentials being exposed.
AWS CloudFormation can be used to automate the creation of the entire infrastructure, including EC2 instances, IAM roles, and associated policies.
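A minimal CloudFormation sketch of the pattern in option C follows; the resource names, bucket name, and AMI ID are placeholders, not values from the question:

```yaml
Resources:
  WebServerRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: S3MediaAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                Resource: arn:aws:s3:::example-media-bucket/*   # placeholder bucket

  WebServerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref WebServerRole

  WebServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0    # placeholder AMI
      InstanceType: t3.micro
      IamInstanceProfile: !Ref WebServerInstanceProfile
```

No credentials appear anywhere in the template; the instance obtains temporary credentials for the role automatically through the instance metadata service.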
Reference: IAM Roles for EC2 Instances outlines the use of IAM roles for secure access to AWS services.
AWS CloudFormation User Guide details how to create and manage resources using CloudFormation templates.
Why the other options are incorrect:
Options A, B, and D all rely on long-term IAM access keys that are embedded in the template, stored in an S3 bucket, or copied onto the instances. Static credentials can be leaked, are difficult to rotate, and contradict the AWS best practice of using temporary credentials through IAM roles.
A company is developing a new application that will run on Amazon EC2 instances. The application needs to access multiple AWS services.
The company needs to ensure that the application will not use long-term access keys to access AWS services.
Which solution will meet this requirement?
- A . Create an IAM user. Assign the IAM user to the application. Create programmatic access keys for the IAM user. Embed the access keys in the application code.
- B . Create an IAM user that has programmatic access keys. Store the access keys in AWS Secrets Manager. Configure the application to retrieve the keys from Secrets Manager when the application runs.
- C . Create an IAM role that can access AWS Systems Manager Parameter Store. Associate the role with each EC2 instance profile. Create IAM access keys for the AWS services, and store the keys in Parameter Store. Configure the application to retrieve the keys from Parameter Store when the application runs.
- D . Create an IAM role that has permissions to access the required AWS services. Associate the IAM role with each EC2 instance profile.
D
Explanation:
Why Option D is Correct:
IAM Roles with Instance Profiles: Allow applications to access AWS services securely without hardcoding long-term access keys.
Short-Term Credentials: IAM roles issue short-term credentials dynamically managed by AWS.
Why Other Options Are Not Ideal:
Option A and B: Embedding or retrieving long-term access keys introduces security risks and operational overhead.
Option C: Combining IAM roles with Parameter Store adds unnecessary complexity.
Reference: IAM Roles and Instance Profiles (AWS Documentation – IAM Roles)
A solutions architect is creating a website that will be hosted from an Amazon S3 bucket. The website must support secure browser connections (HTTPS).
Which combination of actions must the solutions architect take to meet this requirement? (Select TWO.)
- A . Create an Elastic Load Balancing (ELB) load balancer. Configure the load balancer to direct traffic to the S3 bucket.
- B . Create an Amazon CloudFront distribution. Set the S3 bucket as an origin.
- C . Configure the Elastic Load Balancing (ELB) load balancer with an SSL/TLS certificate.
- D . Configure the Amazon CloudFront distribution with an SSL/TLS certificate.
- E . Configure the S3 bucket with an SSL/TLS certificate.
B, D
Explanation:
To serve a static website hosted in Amazon S3 over HTTPS, you must use Amazon CloudFront because S3 does not natively support HTTPS for static website endpoints.
Steps to meet HTTPS requirement:
B. Create a CloudFront distribution and configure the S3 bucket as the origin. This enables global edge caching and performance optimization.
D. Attach an SSL/TLS certificate (typically from AWS Certificate Manager) to the CloudFront distribution to handle HTTPS connections.
S3 buckets used as static website hosts only support HTTP directly. While S3 supports HTTPS for REST API access, it does not support HTTPS on static website endpoints.
This setup aligns with security best practices and supports the Secure and Operational Excellence pillars of the AWS Well-Architected Framework.
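A CloudFormation sketch of options B and D together is shown below; the domain and bucket names are placeholders, and note that an ACM certificate used by CloudFront must be issued in the us-east-1 Region:

```yaml
Resources:
  SiteCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: www.example.com          # placeholder domain
      ValidationMethod: DNS

  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Aliases:
          - www.example.com
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # AWS managed CachingOptimized policy
        Origins:
          - Id: s3-origin
            DomainName: example-bucket.s3.amazonaws.com        # placeholder bucket
            S3OriginConfig: {}
        ViewerCertificate:
          AcmCertificateArn: !Ref SiteCertificate
          SslSupportMethod: sni-only
          MinimumProtocolVersion: TLSv1.2_2021
```

Setting ViewerProtocolPolicy to redirect-to-https ensures browsers that arrive over HTTP are redirected to the secure endpoint.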
Reference: Hosting a static website using Amazon S3 and CloudFront; CloudFront + HTTPS with ACM
A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to serve a static website. The solution must use AWS WAF to inspect all website traffic.
Which solution will meet these requirements?
- A . Configure an S3 bucket policy to accept only requests that come from the AWS WAF Amazon Resource Name (ARN).
- B . Configure CloudFront to forward all incoming requests to AWS WAF before CloudFront requests content from the S3 origin.
- C . Configure a security group that allows only CloudFront IP addresses to access Amazon S3. Associate AWS WAF to the CloudFront distribution.
- D . Configure CloudFront and Amazon S3 to use an origin access control (OAC) to secure the origin S3 bucket. Associate AWS WAF to the CloudFront distribution.
D
Explanation:
The correct and secure approach is to use Amazon CloudFront with Origin Access Control (OAC) to protect the S3 origin and attach AWS WAF to the CloudFront distribution to inspect and filter traffic at the edge before reaching the origin.
From AWS Documentation:
“AWS WAF is integrated with Amazon CloudFront, allowing inspection of HTTP(S) requests at the edge location before forwarding to your origin. To restrict direct access to the S3 bucket, use Origin Access Control (OAC).”
(Source: Amazon CloudFront Developer Guide – Serving private content)
Why Option D is correct:
CloudFront is the only service that integrates with AWS WAF for full HTTP layer inspection.
Origin Access Control (OAC) ensures that only CloudFront can access the S3 origin; OAC replaces the older Origin Access Identity (OAI) feature.
The S3 bucket policy is configured to trust requests only from CloudFront using OAC signed requests.
Why the other options are incorrect:
Option A: WAF ARN is not a principal in S3 bucket policy. IAM does not support bucket policies based on WAF ARNs.
Option B: Incorrect – CloudFront doesn’t "forward requests to WAF"; rather, WAF is associated with the CloudFront distribution and inspects requests at the edge.
Option C: S3 does not use security groups; they are for EC2/network interfaces. This shows a misunderstanding of how S3 works.
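The bucket-policy side of the OAC setup can be sketched as follows; the bucket name, account ID, and distribution ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipalReadOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```

The CloudFront service principal plus the SourceArn condition means only the one named distribution can read objects, so requests that bypass CloudFront (and therefore WAF) are denied.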
Reference: Amazon CloudFront Developer Guide – "Serving Private Content with OAC"; AWS WAF Developer Guide – "Protecting CloudFront with AWS WAF"; AWS Well-Architected Framework – Security Pillar
A company wants to reduce the cost of its existing three-tier web application. The web servers, application servers, and database servers run on Amazon EC2 On-Demand instances in development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-peak hours.
The production EC2 instances run 24 hours a day all year. The development and test EC2 instances run for at least 8 hours a day all year. The company wants to implement automation to stop the development and test EC2 instances when those EC2 instances are not in use.
Which EC2 instance purchasing solution will meet these requirements MOST cost-effectively?
- A . Use Reserved Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
- B . Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.
- C . Use a Spot Fleet for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
- D . Use On-Demand Instances for the production EC2 instances. Use a Spot Fleet for the development and test EC2 instances.
B
Explanation:
The correct answer is B because the production environment runs continuously 24 hours a day, all year, which makes Reserved Instances the most cost-effective purchasing model for steady-state workloads. Reserved Instances provide a significant discount compared to On-Demand pricing when the usage is predictable and long-term. Since production instances are always running, the company can maximize savings by committing to Reserved Instances for that environment.
For the development and test environments, the company specifically wants to automate stopping the EC2 instances when they are not in use. Because those instances do not run continuously, On-Demand Instances are a better fit. On-Demand pricing allows the company to pay only for the compute time that is actually used. This flexibility works well with scheduled stop and start automation and avoids paying for unused reserved capacity.
Option A is less cost-effective because Reserved Instances for development and test environments would likely result in underutilized commitments due to those instances being stopped outside working hours.
Option C is incorrect because production workloads are business-critical and should not rely on Spot capacity, which can be interrupted.
Option D is also incorrect because keeping production on On-Demand misses substantial savings, and using Spot for development and test may be possible in some cases but is not the most appropriate answer here given the requirement simply to stop instances when not in use.
AWS cost optimization guidance recommends matching pricing models to workload patterns: use Reserved Instances for predictable, always-on workloads and On-Demand Instances for intermittent workloads. That makes option B the most cost-effective solution.
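The trade-off for the dev/test instances can be illustrated with back-of-the-envelope arithmetic. The hourly rates below are illustrative assumptions, not real AWS prices; the usage hours come from the question:

```python
# Back-of-the-envelope comparison of options A and B for the dev/test
# instances. Rates are illustrative assumptions, not real AWS pricing.

on_demand_rate = 0.10           # USD per instance-hour (assumed)
reserved_effective_rate = 0.06  # USD per hour; an RI bills every hour
                                # of the term whether running or not (assumed)

hours_per_year = 24 * 365
devtest_hours_used = 8 * 365    # instances stopped outside an 8-hour window

cost_option_b = devtest_hours_used * on_demand_rate       # On-Demand, stopped when idle
cost_option_a = hours_per_year * reserved_effective_rate  # RI commitment

print(f"On-Demand with scheduled stop: ${cost_option_b:,.2f}/yr")
print(f"Reserved Instance:             ${cost_option_a:,.2f}/yr")
```

Even with a steep assumed RI discount, paying for 8,760 committed hours costs more than paying On-Demand for only the roughly 2,920 hours actually used, which is why option B beats option A for the stoppable environments.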
A company must protect sensitive documents in Amazon S3 from deletion or modification for a fixed retention period to meet regulatory requirements.
Which solution will meet these requirements?
- A . Enable S3 Object Lock in governance mode.
- B . Enable S3 Object Lock in compliance mode.
- C . Enable S3 versioning with lifecycle deletion rules.
- D . Transition objects to S3 Glacier Flexible Retrieval.
B
Explanation:
Regulatory compliance often requires immutability, meaning data cannot be altered or deleted by any user, including administrators, during a retention period. Amazon S3 Object Lock in compliance mode is specifically designed for this requirement.
Option B enforces a write-once-read-many (WORM) model. Once compliance mode is enabled and a retention period is set, no one, including the root user, can delete or overwrite the objects until the retention period expires. This provides the strongest level of protection and satisfies strict regulatory standards.
Option A (governance mode) allows privileged users to override the lock, which may not satisfy regulatory mandates.
Option C does not prevent deletion or modification before retention expiration.
Option D addresses storage cost, not immutability.
Therefore, B is the correct solution for regulatory-grade data protection.
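A default Object Lock configuration for such a bucket might look like the sketch below (the 4-year period mirrors a fixed retention requirement and is illustrative; Object Lock must be enabled at bucket creation, and versioning is required):

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Years": 4
    }
  }
}
```

With this configuration, every new object version automatically receives a 4-year COMPLIANCE-mode retention period that no principal can shorten or remove.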
A company is developing an application using Amazon Aurora MySQL. The team will frequently make schema changes to test new features without affecting production. After testing, changes must be promoted to production with minimal downtime.
Which solution meets these requirements?
- A . Create a staging Aurora cluster based on the existing cluster. Test schema changes on the staging cluster.
- B . Create a read replica, modify its schema, and then promote it to primary.
- C . Create an Aurora MySQL blue/green deployment. Make schema changes in the staging environment and switch traffic after testing.
- D . Replicate the Aurora database to DynamoDB, apply schema changes, and switch the application to DynamoDB.
C
Explanation:
Aurora blue/green deployments are specifically designed for safe schema changes, zero-downtime updates, and production isolation.
The staging (green) environment can receive schema changes without affecting production (blue).
After validation, you perform a fast, minimally disruptive switchover that updates production.
Read replicas (Option B) do not allow schema changes. Creating an independent staging cluster (Option A) does not provide automated, low-downtime cutover. DynamoDB (Option D) is not compatible with MySQL schemas.
A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.
The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an S3 Lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation.
- B . Create an S3 Lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation.
- C . Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation.
- D . Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation.
C
Explanation:
Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still providing high availability and durability.
Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and reduces storage costs significantly.
Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures automatic management of the data lifecycle, moving files to a lower-cost storage class without manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is no longer needed.
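The lifecycle rule described above can be sketched as the following configuration (the rule ID is a placeholder, and 1,460 days approximates the 4-year retention):

```json
{
  "Rules": [
    {
      "ID": "standard-ia-then-expire",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        }
      ],
      "Expiration": {
        "Days": 1460
      }
    }
  ]
}
```

The empty filter applies the rule to every object in the bucket; objects move to S3 Standard-IA after 30 days and are deleted after roughly 4 years, matching the stated access and retention pattern.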
Reference: Amazon S3 Storage Classes
S3 Lifecycle Configuration
