Practice Free SAA-C03 Exam Online Questions
A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Choose two.)
- A . Create an ongoing replication task.
- B . Create a database backup of the on-premises database.
- C . Create an AWS Database Migration Service (AWS DMS) replication server.
- D . Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
- E . Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.
A, C
Explanation:
AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora. With AWS Database Migration Service, you can also continuously replicate data with low latency from any supported source to any supported target. For example, you can replicate from multiple sources to Amazon Simple Storage Service (Amazon S3) to build a highly available and scalable data lake solution. You can also consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift. Learn more about the supported source and target databases at https://aws.amazon.com/dms/
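Answers A and C map directly onto the DMS API. The following is a minimal boto3 sketch, assuming the source and target endpoints and the replication instance from answer C already exist; all ARNs and identifiers are placeholders:

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Full load plus change data capture (CDC) keeps Aurora synchronized
# with the on-premises source after the initial copy (answer A).
response = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-postgres-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # ongoing replication
    # Include every table in every schema; a real task would scope this.
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)
dms.start_replication_task(
    ReplicationTaskArn=response["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```

The full-load-and-cdc migration type performs the initial copy and then applies ongoing changes, which is what lets the on-premises database stay online while Aurora remains synchronized.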
A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The application is hosted on redundant servers in the company’s on-premises data centers in the United States, Asia, and Europe. The company’s compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application.
What should a solutions architect do to meet these requirements?
- A . Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
- B . Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
- C . Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
- D . Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
A
Explanation:
Only option A supports a UDP-based application. Application Load Balancers and Amazon CloudFront operate at layer 7 and handle only HTTP/HTTPS traffic, which rules out options B, C, and D. Network Load Balancers operate at layer 4, support UDP, and can address the on-premises servers as IP targets over AWS Direct Connect or VPN connections. Registering the three NLBs as endpoints of a standard accelerator in AWS Global Accelerator moves user traffic onto the AWS global network at the nearest edge location, improving both performance and availability while the application itself stays on premises, as compliance requires.
https://aws.amazon.com/global-accelerator/
A company has a web application that has thousands of users. The application uses 8-10 user-uploaded images to generate AI images. Users can download the generated AI images once every 6 hours. The company also has a premium user option that gives users the ability to download the generated AI images anytime.
The company uses the user-uploaded images to run Al model training twice a year. The company needs a storage solution to store the images.
Which storage solution meets these requirements MOST cost-effectively?
- A . Move uploaded images to Amazon S3 Glacier Deep Archive. Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).
- B . Move uploaded images to Amazon S3 Glacier Deep Archive. Move all generated AI images to S3 Glacier Flexible Retrieval.
- C . Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).
- D . Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move all generated AI images to S3 Glacier Flexible Retrieval.
C
Explanation:
S3 One Zone-IA:
Suitable for infrequently accessed data that doesn’t require multiple Availability Zone resilience.
Cost-effective for storing user-uploaded images that are only used for AI model training twice a year.
S3 Standard:
Ideal for frequently accessed data with high durability and availability.
Store premium user-generated AI images here to ensure they are readily available for download at any time.
S3 Standard-IA:
Cost-effective storage for data that is accessed less frequently but still requires rapid retrieval.
Store non-premium user-generated AI images here, as these images are only downloaded once every 6 hours, making it a good balance between cost and accessibility.
Cost-Effectiveness: This solution optimizes storage costs by categorizing data based on access patterns and durability requirements, ensuring that each type of data is stored in the most cost-effective manner.
Reference: Amazon S3 Storage Classes; S3 One Zone-IA
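A minimal boto3 sketch of how answer C maps onto the S3 API; the bucket name, keys, and payloads are placeholders, and lifecycle rules could apply the same storage classes automatically instead:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder payloads; real code would stream the actual image files.
image_bytes = b"<uploaded image data>"
generated_bytes = b"<generated image data>"

s3.put_object(
    Bucket="example-image-bucket",
    Key="uploads/user123/source.png",
    Body=image_bytes,
    StorageClass="ONEZONE_IA",  # training data, read twice a year
)
s3.put_object(
    Bucket="example-image-bucket",
    Key="generated/premium/user123/result.png",
    Body=generated_bytes,
    StorageClass="STANDARD",  # premium users download anytime
)
s3.put_object(
    Bucket="example-image-bucket",
    Key="generated/standard/user123/result.png",
    Body=generated_bytes,
    StorageClass="STANDARD_IA",  # non-premium: at most one download per 6 hours
)
```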
An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLBs) across multiple AWS Regions. The NLBs can route requests to targets over the internet. The company wants to improve the customer playing experience by reducing end-to-end load time for its global customer base.
Which solution will meet these requirements?
- A . Create Application Load Balancers (ALBs) in each Region to replace the existing NLBs. Register the existing EC2 instances as targets for the ALBs in each Region.
- B . Configure Amazon Route 53 to route equally weighted traffic to the NLBs in each Region.
- C . Create additional NLBs and EC2 instances in other Regions where the company has large customer bases.
- D . Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as target endpoints.
D
Explanation:
The company wants to reduce end-to-end load time for its global customer base. AWS Global Accelerator provides a network optimization service that reduces latency by routing traffic to the nearest AWS edge locations, improving the user experience for globally distributed customers.
AWS Global Accelerator:
Global Accelerator improves the performance of your applications by routing traffic through AWS’s global network infrastructure. This reduces the number of hops and latency compared to using the public internet.
By creating a standard accelerator and configuring the existing NLBs as target endpoints, Global Accelerator ensures that traffic from users around the world is routed to the nearest AWS edge location and then through optimized paths to the NLBs in each Region. This significantly improves end-to-end load time for global customers.
Why Not the Other Options?
Option A (ALBs instead of NLBs): ALBs are designed for HTTP/HTTPS traffic and provide layer 7 features, but they wouldn’t solve the latency issue for a global customer base. The key problem here is latency, and Global Accelerator is specifically designed to address that.
Option B (Route 53 weighted routing): Route 53 can route traffic to different regions, but it doesn’t optimize network performance. It simply balances traffic between endpoints without improving latency.
Option C (Additional NLBs in more Regions): This could potentially improve latency but would require setting up infrastructure in multiple Regions. Global Accelerator is a simpler and more efficient solution that leverages AWS’s existing global network.
Reference: AWS Global Accelerator
By using AWS Global Accelerator with the existing NLBs, the company can optimize global traffic routing and improve the customer experience by minimizing latency. Therefore, Option D is the correct answer.
A company uses Amazon S3 to host its static website. The company wants to add a contact form to the webpage. The contact form will have dynamic server-side components for users to input their name, email address, phone number, and user message.
The company expects fewer than 100 site visits each month. The contact form must notify the company by email when a customer fills out the form.
Which solution will meet these requirements MOST cost-effectively?
- A . Host the dynamic contact form in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES) to connect to a third-party email provider.
- B . Create an Amazon API Gateway endpoint that returns the contact form from an AWS Lambda function. Configure another Lambda function on the API Gateway to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
- C . Host the website by using AWS Amplify Hosting for static content and dynamic content. Use server-side scripting to build the contact form. Configure Amazon Simple Queue Service (Amazon SQS) to deliver the message to the company.
- D . Migrate the website from Amazon S3 to Amazon EC2 instances that run Windows Server. Use Internet Information Services (IIS) for Windows Server to host the webpage. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.
B
Explanation:
Using API Gateway and Lambda enables serverless handling of form submissions with minimal cost and infrastructure. When coupled with Amazon SNS, it allows instant email notifications without running servers, making it ideal for low-traffic workloads.
Reference: AWS Documentation: Serverless Contact Form with API Gateway, Lambda, and SNS
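A minimal sketch of the Lambda side of option B, assuming an API Gateway proxy integration and an SNS topic (with an email subscription) whose ARN is supplied via an environment variable; the form field names and variable names are illustrative:

```python
import json
import os

import boto3

sns = boto3.client("sns")

def handler(event, context):
    # API Gateway proxy integration delivers the form as a JSON body.
    form = json.loads(event["body"])
    message = (
        f"Name: {form['name']}\n"
        f"Email: {form['email']}\n"
        f"Phone: {form['phone']}\n"
        f"Message: {form['message']}"
    )
    # The email subscription on this topic notifies the company.
    sns.publish(
        TopicArn=os.environ["CONTACT_TOPIC_ARN"],
        Subject="New contact form submission",
        Message=message,
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

At fewer than 100 submissions a month, this sits comfortably inside the Lambda, API Gateway, and SNS free tiers, which is why it is the most cost-effective option.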
What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?
- A . Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
- B . Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
- C . Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport condition set to true.
- D . Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
D
Explanation:
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
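A minimal sketch of such a bucket policy, applied with boto3; the bucket name is a placeholder. The Null condition denies any PutObject request that omits the x-amz-server-side-encryption header:

```python
import json

import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            # True when the header is absent from the request.
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption": "true"}
            },
        }
    ],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```

A StringNotEquals condition on the same key could additionally restrict uploads to a specific algorithm such as aws:kms.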
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days, users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?
- A . Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
- B . Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
- C . Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
- D . Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
C
Explanation:
Amazon S3 can send event notifications (for example, object created) directly to an SQS queue in the same Region. The SQS queue is configured as the event source for the Lambda function and buffers the event messages, absorbing spikes on high-volume days. The Lambda function polls the queue and processes each file as soon as it arrives, scales automatically with demand, and costs nothing when no files are uploaded, which makes this the option with the least operational overhead.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/subscribe-a-lambda-function-to-event-notifications-from-s3-buckets-in-different-aws-regions.html
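A minimal sketch of the Lambda function in option C, assuming a DynamoDB table named ProcessedFiles and a trivial placeholder transform; with an SQS event source, each SQS record's body carries an S3 event notification:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("ProcessedFiles")  # placeholder name

def handler(event, context):
    for record in event["Records"]:          # SQS records
        s3_event = json.loads(record["body"])
        # S3 test events carry no "Records" key, hence .get().
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            # Stand-in for the real transformation to JSON.
            transformed = {"source_key": key, "length": len(body)}
            table.put_item(Item={"pk": key, "payload": json.dumps(transformed)})
```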
How can DynamoDB data be made available for long-term analytics with minimal operational overhead?
- A . Configure DynamoDB incremental exports to S3.
- B . Configure DynamoDB Streams to write records to S3.
- C . Configure EMR to copy DynamoDB data to S3.
- D . Configure EMR to copy DynamoDB data to HDFS.
A
Explanation:
Option A is the most automated and cost-efficient solution for exporting data to S3 for analytics.
Option B involves manual setup of Streams-to-S3 delivery.
Options C and D introduce complexity with EMR.
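A minimal boto3 sketch of option A, assuming point-in-time recovery (PITR) is enabled on the table; the table ARN, bucket, and export window are placeholders:

```python
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    S3Bucket="example-analytics-exports",
    ExportFormat="DYNAMODB_JSON",
    ExportType="INCREMENTAL_EXPORT",
    # Export only the changes within this window.
    IncrementalExportSpecification={
        "ExportFromTime": datetime(2024, 1, 1, tzinfo=timezone.utc),
        "ExportToTime": datetime(2024, 1, 2, tzinfo=timezone.utc),
        "ExportViewType": "NEW_AND_OLD_IMAGES",
    },
)
```

The export runs fully managed, with no servers or stream consumers to operate, and the S3 output can be queried directly with services such as Athena.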
A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances, and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes.
What should the solutions architect do to meet this requirement?
- A . Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.
- B . Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.
- C . Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.
- D . Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.
B
Explanation:
To analyze the performance of the web application with a granularity of no more than 2 minutes, enabling detailed monitoring on the EC2 instances is the best solution. By default, CloudWatch provides metrics at a 5-minute interval. Enabling detailed monitoring collects metrics at 1-minute intervals, which gives you the level of granularity you need to analyze performance during peak traffic.
Amazon CloudWatch metrics can then be used to analyze CPU utilization, memory usage, disk I/O, and network throughput, among other performance-related metrics, at the desired granularity.
Option A: Sending CloudWatch logs to Redshift for analysis is unnecessary and overcomplicated for simple performance analysis, which can be done using CloudWatch metrics alone.
Option C: Fetching EC2 logs via Lambda adds complexity, and CloudWatch metrics already provide the required data for performance analysis.
Option D: Sending logs to S3 and using Redshift for analysis is also more complex than necessary for simple performance monitoring.
Reference: Monitoring Amazon EC2 with CloudWatch; Amazon CloudWatch Detailed Monitoring
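A minimal boto3 sketch: enable detailed monitoring on an instance and read a metric back at 1-minute granularity; the instance ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

# Switch from 5-minute basic to 1-minute detailed monitoring
# (detailed monitoring is billed per instance).
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])

# Query CPU utilization at 1-minute granularity over the last hour.
end = datetime.now(timezone.utc)
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=60,  # 1-minute datapoints, well within the 2-minute requirement
    Statistics=["Average"],
)
print(stats["Datapoints"])
```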