Practice Free SAA-C03 Exam Online Questions
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
- A . Use AWS Config rules to define and detect resources that are not properly tagged.
- B . Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
- C . Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
- D . Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
A
Explanation:
To ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags, a solutions architect should use AWS Config rules to define and detect resources that are not properly tagged. AWS Config rules are a set of customizable rules that AWS Config uses to evaluate AWS resource configurations for compliance with best practices and company policies. Using AWS Config rules can minimize the effort of configuring and operating this check because it automates the process of identifying non-compliant resources and notifying the responsible teams.
Reference: AWS Config Developer Guide: AWS Config Rules (https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html)
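For illustration, the managed rule can also be deployed programmatically. The following is a minimal sketch using boto3; the rule name and the tag keys ("Environment", "Owner") are placeholder assumptions, not values from the question.

```python
import boto3

# Sketch: register the AWS-managed REQUIRED_TAGS rule for EC2, RDS, and Redshift resources.
# The rule name and tag keys are example placeholders.
config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "REQUIRED_TAGS",  # AWS-managed rule for tag compliance
        },
        "InputParameters": '{"tag1Key": "Environment", "tag2Key": "Owner"}',
        "Scope": {
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
    }
)
```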
A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company’s recovery point objective (RPO) is 2 hours.
The backup strategy must maximize scalability and optimize resource utilization for this environment.
Which solution will meet these requirements?
- A . Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.
- B . Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.
- C . Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
- D . Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
C
Explanation:
Because the application keeps no local data on the instances, retaining the latest AMIs of the web and application tiers is sufficient to rebuild those tiers on demand. Combined with automated backups in Amazon RDS and point-in-time recovery for the database, this provides a complete backup strategy for the environment. The options that snapshot EBS volumes every 2 hours are unnecessary given the stateless nature of the instances and waste resources. The chosen approach uses native, automated AWS backup features that require minimal ongoing management: the latest AMIs cover the stateless application tier, and RDS automated backups with point-in-time recovery meet the 2-hour RPO for the database.
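As a rough sketch of the database side of this strategy, the snippet below restores the RDS for PostgreSQL instance to a point in time within its backup retention window using boto3; the instance identifiers are hypothetical.

```python
import boto3

# Sketch: restore the RDS for PostgreSQL instance to a point in time within the
# automated backup retention window. Instance identifiers are hypothetical.
rds = boto3.client("rds")

rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="webapp-postgres",
    TargetDBInstanceIdentifier="webapp-postgres-restored",
    UseLatestRestorableTime=True,  # or pass RestoreTime=<datetime> to target a specific point
)
```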
A company runs a three-tier web application in the AWS Cloud that operates across three Availability Zones. The application architecture has an Application Load Balancer, an Amazon EC2 web server that hosts user session states, and a MySQL database that runs on an EC2 instance. The company expects sudden increases in application traffic. The company wants to be able to scale to meet future application capacity demands and to ensure high availability across all three Availability Zones.
Which solution will meet these requirements?
- A . Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
- B . Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Memcached with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
- C . Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
- D . Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
A
Explanation:
This answer is correct because it meets the requirements of scaling to meet future application capacity demands and ensuring high availability across all three Availability Zones. By migrating the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment, the company can benefit from automatic failover, backup, and patching of the database across multiple Availability Zones. By using Amazon ElastiCache for Redis with high availability, the company can store session data and cache reads in a fast, in-memory data store that can also fail over across Availability Zones. By migrating the web server to an Auto Scaling group that is in three Availability Zones, the company can automatically scale the web server capacity based on the demand and traffic patterns.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
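To illustrate the session-offloading part of this design, the sketch below stores session state in ElastiCache for Redis so the web tier stays stateless; it uses the redis Python client, and the cluster endpoint and 30-minute TTL are illustrative assumptions.

```python
import json
import redis

# Sketch: keep user session state in ElastiCache for Redis so web servers stay stateless.
# The cluster endpoint and TTL are placeholder assumptions.
session_store = redis.Redis(
    host="my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,
)

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    # Store the session as JSON with an expiry so stale sessions clean themselves up.
    session_store.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    raw = session_store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```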
A company has developed an API using an Amazon API Gateway REST API and AWS Lambda functions. The API serves static and dynamic content to users worldwide. The company wants to decrease the latency of transferring content for API requests.
Which solution will meet these requirements?
- A . Deploy the REST API as an edge-optimized API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.
- B . Deploy the REST API as a Regional API endpoint. Enable caching. Enable content encoding in the API definition to compress the application data in transit.
- C . Deploy the REST API as an edge-optimized API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.
- D . Deploy the REST API as a Regional API endpoint. Enable caching. Configure reserved concurrency for the Lambda functions.
A
Explanation:
An edge-optimized API endpoint routes requests through the CloudFront edge locations closest to users, which reduces latency for a worldwide audience. Enabling API caching serves repeated requests from the cache instead of invoking the backend, and enabling content encoding compresses the application data in transit, further reducing transfer time. Configuring reserved concurrency for the Lambda functions limits scaling rather than reducing latency, and a Regional endpoint does not optimize the network path for global users.
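As a minimal sketch of this configuration with boto3 (the API name, compression threshold, and cache size are illustrative assumptions):

```python
import boto3

# Sketch: create an edge-optimized REST API with payload compression enabled,
# then deploy a stage with API caching turned on. Names and sizes are illustrative.
apigw = boto3.client("apigateway")

api = apigw.create_rest_api(
    name="content-api",
    endpointConfiguration={"types": ["EDGE"]},   # edge-optimized endpoint
    minimumCompressionSize=1024,                 # compress responses larger than 1 KB
)

# ... define resources, methods, and integrations here ...

apigw.create_deployment(
    restApiId=api["id"],
    stageName="prod",
    cacheClusterEnabled=True,    # enable API caching for the stage
    cacheClusterSize="0.5",      # cache size in GB
)
```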
A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The company’s online application uses the database to process transactions. The data analysis team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.
- B . Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes.
- C . Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes.
- D . Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.
A
Explanation:
Amazon RDS for Microsoft SQL Server is a fully managed service that offers SQL Server 2014, 2016, 2017, and 2019 editions while offloading database administration tasks such as backups, patching, and scaling. Amazon RDS supports read replicas, which are read-only copies of the primary database that can be used for reporting purposes without affecting the performance of the online application.
This solution will meet the requirements with the least operational overhead, as it does not require any code changes or manual intervention.
Reference:
1 provides an overview of Amazon RDS for Microsoft SQL Server and its benefits.
2 explains how to create and use read replicas with Amazon RDS.
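For illustration, a read replica for the reporting workload can be created with a single boto3 call; the instance identifiers and instance class below are hypothetical.

```python
import boto3

# Sketch: add a read replica to the RDS for SQL Server instance for the analytics team.
# Instance identifiers and instance class are hypothetical.
rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sqlserver-reporting-replica",
    SourceDBInstanceIdentifier="sqlserver-production",
    DBInstanceClass="db.m5.large",
)
```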
A company is developing a real-time multiplayer game that uses UDP for communications between the client and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solutions architect recommend?
- A . Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage
- B . Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage
- C . Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage
- D . Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage
B
Explanation:
A Network Load Balancer is a type of load balancer that operates at the connection level (Layer 4) and can load balance both TCP and UDP traffic1. A Network Load Balancer is suitable for scenarios where high performance and low latency are required, such as real-time multiplayer games1. A Network Load Balancer can also handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone1.
To meet the requirements of the scenario, the solutions architect should use a Network Load Balancer for traffic distribution between the EC2 instances in the Auto Scaling group. The Network Load Balancer can route UDP traffic from the client to the servers on the appropriate port2. The Network Load Balancer can also support TLS offloading for secure communications between the client and servers1.
Amazon DynamoDB is a fully managed NoSQL database service that can store and retrieve any amount of data with consistent performance and low latency3. Amazon DynamoDB on-demand is a flexible billing option that requires no capacity planning and charges only for the read and write requests that are performed on the tables3. Amazon DynamoDB on-demand is ideal for scenarios where the application traffic is unpredictable or sporadic, such as gaming applications3.
To meet the requirements of the scenario, the solutions architect should use Amazon DynamoDB on-demand for data storage. Amazon DynamoDB on-demand can store gamer scores and other non-relational data without intervention from the developers. Amazon DynamoDB on-demand can also scale automatically to handle any level of request traffic without affecting performance or availability3.
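As a brief sketch, an on-demand table for gamer scores only needs the PAY_PER_REQUEST billing mode; the table and attribute names below are illustrative.

```python
import boto3

# Sketch: an on-demand (PAY_PER_REQUEST) DynamoDB table for gamer scores.
# Table and attribute names are illustrative.
dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="GamerScores",
    AttributeDefinitions=[
        {"AttributeName": "GamerId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "GamerId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand: no capacity planning required
)
```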
A media company uses an Amazon CloudFront distribution to deliver content over the internet. The company wants only premium customers to have access to the media streams and file content. The company stores all content in an Amazon S3 bucket. The company also delivers content on demand to customers for a specific purpose, such as movie rentals or music downloads.
Which solution will meet these requirements?
- A . Generate and provide S3 signed cookies to premium customers.
- B . Generate and provide CloudFront signed URLs to premium customers.
- C . Use origin access control (OAC) to limit the access of non-premium customers.
- D . Generate and activate field-level encryption to block non-premium customers.
B
Explanation:
CloudFront Signed URLs: These URLs provide limited access to content that is served through an Amazon CloudFront distribution. Signed URLs can be generated to grant time-limited access to premium customers.
Content Restriction: By using CloudFront signed URLs, you can control access to your media streams and file content stored in S3. These URLs can be customized with an expiration time, ensuring that access is only available for a specific period, which is useful for scenarios like movie rentals or music downloads.
Security and Flexibility: Signed URLs ensure that only authenticated users (premium customers) can access the restricted content. This approach integrates seamlessly with CloudFront and S3, providing an efficient way to manage access controls without additional overhead.
Operational Efficiency: Using CloudFront signed URLs leverages AWS managed services to handle the complexity of access control, reducing the need for custom implementation and maintenance.
Reference: Serving Private Content with Signed URLs and Signed Cookies
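For illustration, a signed URL for a rental can be generated with botocore's CloudFrontSigner; the key pair ID, private key file, distribution domain, and 48-hour window below are hypothetical values.

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Sketch: generate a time-limited CloudFront signed URL for a premium customer.
# The key pair ID, private key file, and distribution domain are hypothetical.
KEY_PAIR_ID = "K2JCJMDEHXQW5F"
PRIVATE_KEY_FILE = "cloudfront_private_key.pem"

def rsa_signer(message: bytes) -> bytes:
    # Sign the CloudFront policy with the private key that matches the public key in CloudFront.
    with open(PRIVATE_KEY_FILE, "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/movies/rental.mp4",
    date_less_than=datetime.utcnow() + timedelta(hours=48),  # rental access window
)
print(signed_url)
```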
A company’s web application consists of multiple Amazon EC2 instances that run behind an Application Load Balancer in a VPC. An Amazon RDS for MySQL DB instance contains the data. The company needs the ability to automatically detect and respond to suspicious or unexpected behavior in its AWS environment. The company already has added AWS WAF to its architecture.
What should a solutions architect do next to protect against threats?
- A . Use Amazon GuardDuty to perform threat detection. Configure Amazon EventBridge to filter for GuardDuty findings and to invoke an AWS Lambda function to adjust the AWS WAF rules.
- B . Use AWS Firewall Manager to perform threat detection. Configure Amazon EventBridge to filter for Firewall Manager findings and to invoke an AWS Lambda function to adjust the AWS WAF web ACL.
- C . Use Amazon Inspector to perform threat detection and to update the AWS WAF rules. Create a VPC network ACL to limit access to the web application.
- D . Use Amazon Macie to perform threat detection and to update the AWS WAF rules. Create a VPC network ACL to limit access to the web application.
A
Explanation:
Understanding the Requirement: The company needs to automatically detect and respond to suspicious or unexpected behavior in its AWS environment, beyond the existing AWS WAF setup.
Analysis of Options:
Amazon GuardDuty: Provides continuous monitoring and threat detection across AWS accounts and resources, including integration with AWS WAF for automated response.
AWS Firewall Manager: Manages firewall rules across multiple accounts but is more focused on central management than threat detection.
Amazon Inspector: Focuses on security assessments and vulnerability management rather than real-time threat detection.
Amazon Macie: Primarily used for data security and privacy, not comprehensive threat detection.
Best Solution:
Amazon GuardDuty with EventBridge and Lambda: This combination ensures continuous threat detection and automated response by updating AWS WAF rules based on GuardDuty findings.
Reference: Amazon GuardDuty
Amazon EventBridge
AWS Lambda
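As a rough sketch of the automated response, the Lambda handler below (invoked by an EventBridge rule that matches GuardDuty findings) adds the offending IP to a WAFv2 IP set that a block rule references. The IP set name and ID, the scope, and the finding field path are assumptions for illustration.

```python
import boto3

# Sketch: Lambda handler triggered by an EventBridge rule for GuardDuty findings.
# It adds the remote IP from the finding to a WAFv2 IP set used by a block rule.
# IP set name/ID, scope, and the finding field path are illustrative assumptions.
wafv2 = boto3.client("wafv2")

IP_SET_NAME = "blocked-ips"
IP_SET_ID = "a1b2c3d4-example"
SCOPE = "REGIONAL"  # use "CLOUDFRONT" for a web ACL associated with CloudFront

def handler(event, context):
    finding = event["detail"]
    remote_ip = (
        finding["service"]["action"]["networkConnectionAction"]["remoteIpDetails"]["ipAddressV4"]
    )

    # Read the current addresses and lock token, then write back the updated set.
    current = wafv2.get_ip_set(Name=IP_SET_NAME, Scope=SCOPE, Id=IP_SET_ID)
    addresses = set(current["IPSet"]["Addresses"])
    addresses.add(f"{remote_ip}/32")

    wafv2.update_ip_set(
        Name=IP_SET_NAME,
        Scope=SCOPE,
        Id=IP_SET_ID,
        Addresses=sorted(addresses),
        LockToken=current["LockToken"],
    )
    return {"blocked": remote_ip}
```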
A company uses on-premises servers to host its applications. The company is running out of storage capacity. The applications use both block storage and NFS storage. The company needs a high-performing solution that supports local caching without re-architecting its existing applications.
Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)
- A . Mount Amazon S3 as a file system to the on-premises servers.
- B . Deploy an AWS Storage Gateway file gateway to replace NFS storage.
- C . Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
- D . Deploy an AWS Storage Gateway volume gateway to replace the block storage.
- E . Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
B, D
Explanation:
https://aws.amazon.com/storagegateway/file/
File Gateway provides a seamless way to connect to the cloud in order to store application data files and backup images as durable objects in Amazon S3 cloud storage. File Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. It can be used for on-premises applications, and for Amazon EC2-based applications that need file protocol access to S3 object storage.
https://aws.amazon.com/storagegateway/volume/
Volume Gateway presents cloud-backed iSCSI block storage volumes to your on-premises applications. Volume Gateway stores and manages on-premises data in Amazon S3 on your behalf and operates in either cache mode or stored mode. In the cached Volume Gateway mode, your primary data is stored in Amazon S3, while retaining your frequently accessed data locally in the cache for low latency access.
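For illustration, once a file gateway is activated, an NFS file share backed by S3 can be created as sketched below; the gateway ARN, IAM role, and bucket ARN are placeholder values.

```python
import uuid

import boto3

# Sketch: create an NFS file share on an existing Storage Gateway file gateway.
# The gateway ARN, IAM role, and S3 bucket ARN are placeholder values.
sgw = boto3.client("storagegateway")

sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::company-nfs-backed-bucket",
    DefaultStorageClass="S3_STANDARD",
)
```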
A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include filtering of anomalies, normalizing of data to standard date and time values, and generation of aggregates for analyses.
The company must store the transformed data in S3 buckets that data analysts access. The company needs a prebuilt solution for data transformation that does not require code. The solution must provide data lineage and data profiling. The company needs to share the data transformation steps with employees throughout the company.
Which solution will meet these requirements?
- A . Configure an AWS Glue Studio visual canvas to transform the data. Share the transformation steps with employees by using AWS Glue jobs.
- B . Configure Amazon EMR Serverless to transform the data. Share the transformation steps with employees by using EMR Serverless jobs.
- C . Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees by using DataBrew recipes.
- D . Create Amazon Athena tables for the data. Write Athena SQL queries to transform the data. Share the Athena SQL queries with employees.
C
Explanation:
The most suitable solution for the company’s requirements is to configure AWS Glue DataBrew to transform the data and share the transformation steps with employees by using DataBrew recipes. This solution will provide a prebuilt solution for data transformation that does not require code, and will also provide data lineage and data profiling. The company can easily share the data transformation steps with employees throughout the company by using DataBrew recipes.
AWS Glue DataBrew is a visual data preparation tool that makes it easy for data analysts and data scientists to clean and normalize data for analytics or machine learning up to 80% faster. Users can upload their data from various sources, such as Amazon S3, Amazon RDS, Amazon Redshift, Amazon Aurora, or the Glue Data Catalog, and use a point-and-click interface to apply over 250 built-in transformations. Users can also preview the results of each transformation step and see how it affects the quality and distribution of the data1.
A DataBrew recipe is a reusable set of transformation steps that can be applied to one or more datasets. Users can create recipes from scratch or use existing ones from the DataBrew recipe library. Users can also export, import, or share recipes with other users or groups within their AWS account or organization2.
DataBrew also provides data lineage and data profiling features that help users understand and improve their data quality. Data lineage shows the source and destination of each dataset and how it is transformed by each recipe step. Data profiling shows various statistics and metrics about each dataset, such as column data types, value distributions, and counts of missing or duplicate values.
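As a rough sketch of sharing transformation steps, a recipe can also be defined and published through boto3 so other teams can reuse it; the recipe name, column names, and the specific operations shown are illustrative assumptions rather than the company's actual steps.

```python
import boto3

# Sketch: define and publish a small DataBrew recipe so it can be shared and reused.
# The recipe name, column names, and operations are illustrative assumptions.
databrew = boto3.client("databrew")

databrew.create_recipe(
    Name="normalize-ingest-data",
    Steps=[
        {
            "Action": {
                "Operation": "REMOVE_MISSING",   # drop rows with missing values (anomaly filtering)
                "Parameters": {"sourceColumn": "event_timestamp"},
            }
        },
        {
            "Action": {
                "Operation": "DATE_FORMAT",      # normalize to a standard date format
                "Parameters": {
                    "sourceColumn": "event_timestamp",
                    "targetDateFormat": "yyyy-mm-dd",
                },
            }
        },
    ],
)

# Publishing a version makes the recipe available for reuse across projects.
databrew.publish_recipe(Name="normalize-ingest-data", Description="Shared ingest cleanup steps")
```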