Practice Free SAA-C03 Exam Online Questions
A company is migrating an online marketplace application from a mainframe system to an Auto Scaling group of Amazon EC2 instances. The EC2 instances access an Amazon Aurora cluster. The application requires a scalable, persistent caching solution to store the results of in-progress transactions and SQL queries.
Which solution will meet these requirements?
- A . Use an Amazon ElastiCache (Redis OSS) cluster to serve transaction and query results.
- B . Use an Amazon CloudFront distribution with an Amazon S3 bucket as the origin to cache the transactions. Add an Amazon EC2 instance store volume to the EC2 instances for query result caching.
- C . Use an Amazon ElastiCache (Memcached) cluster to serve transaction and query results.
- D . Use an Amazon ElastiCache (Redis OSS) cluster to cache the transactions. Add an Amazon EC2 instance store volume to the EC2 instances for query result caching.
A
Explanation:
Why Option A is Correct:
ElastiCache (Redis OSS): Provides persistent, scalable caching for in-progress transactions and SQL queries. Redis supports data durability and advanced features, making it suitable for transactional workloads.
Integration with Aurora: Easily integrates with the Aurora cluster to improve query performance.
Why Other Options Are Not Ideal:
Option B: CloudFront and S3 are unsuitable for transactional caching. EC2 instance store volumes are ephemeral and lack persistence.
Option C: Memcached does not offer persistence or advanced transactional support, unlike Redis.
Option D: Combining Redis with EC2 instance store is unnecessary; Redis alone meets all caching requirements.
Reference: Amazon ElastiCache – AWS Documentation
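As a rough illustration of option A, the sketch below caches Aurora query results in the Redis OSS cluster using the redis-py client. The endpoint, key format, and five-minute TTL are placeholder assumptions, not values from the question.

```python
import hashlib
import json

import redis  # redis-py client

# Connect to the ElastiCache (Redis OSS) cluster; the endpoint is a placeholder.
cache = redis.Redis(
    host="my-cache.xxxxxx.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    decode_responses=True,
)


def get_query_result(sql, run_query):
    """Return a cached result if present; otherwise run the query against Aurora and cache it."""
    key = "query:" + hashlib.sha256(sql.encode()).hexdigest()
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    result = run_query(sql)                     # hypothetical callback that queries the Aurora cluster
    cache.setex(key, 300, json.dumps(result))   # keep the result for 5 minutes
    return result
```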
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?
- A . S3 Standard
- B . S3 Intelligent-Tiering
- C . S3 Standard-Infrequent Access (S3 Standard-IA)
- D . S3 One Zone-Infrequent Access (S3 One Zone-IA)
B
Explanation:
S3 Intelligent-Tiering is the ideal choice when the access frequency is unknown or the access pattern is irregular.
Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. If you have data residency requirements that can’t be met by an existing AWS Region, you can use the S3 Outposts storage class to store your S3 data on-premises. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.
https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/?nc1=h_ls
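To illustrate, objects can be written straight into S3 Intelligent-Tiering at upload time; the bucket name and object key below are placeholders, not values from the question.

```python
import boto3

s3 = boto3.client("s3")

# Store the media file in S3 Intelligent-Tiering so S3 moves it between
# access tiers automatically as its access pattern changes.
with open("episode-01.mp4", "rb") as media_file:
    s3.put_object(
        Bucket="example-media-bucket",   # placeholder bucket name
        Key="videos/episode-01.mp4",     # placeholder object key
        Body=media_file,
        StorageClass="INTELLIGENT_TIERING",
    )
```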
A company wants to create an application to store employee data in a hierarchical structured relationship. The company needs a minimum-latency response to high-traffic queries for the employee data and must protect any sensitive data. The company also needs to receive monthly email messages if any financial information is present in the employee data.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)
- A . Use Amazon Redshift to store the employee data in hierarchies. Unload the data to Amazon S3 every month.
- B . Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.
- C . Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly events to AWS Lambda.
- D . Use Amazon Athena to analyze the employee data in Amazon S3. Integrate Athena with Amazon QuickSight to publish analysis dashboards and share the dashboards with users.
- E . Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon Simple Notification Service (Amazon SNS) subscription.
B, E
Explanation:
Generally, a graph database such as Amazon Neptune is a better choice for modeling hierarchical relationships. In some cases, however, DynamoDB is a better choice for hierarchical data modeling because of its flexibility, security, performance, and scale. Exporting the table to Amazon S3 every month lets Amazon Macie scan the data for sensitive financial information, and Macie findings can be routed through Amazon EventBridge to an Amazon SNS topic to deliver the monthly email notification. https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-hierarchical-data-model/introduction.html
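For context, one common way to model a hierarchy in DynamoDB is an adjacency-list style composite key. The sketch below assumes a hypothetical Employees table with partition key PK and sort key SK; all names and values are illustrative only.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Employees")  # hypothetical table with keys PK (partition) and SK (sort)

# The sort key encodes the path through the hierarchy, so a manager and the
# employees beneath them share a common key prefix.
table.put_item(Item={"PK": "DEPT#FINANCE", "SK": "EMP#1001", "name": "Ana", "title": "Manager"})
table.put_item(Item={"PK": "DEPT#FINANCE", "SK": "EMP#1001#REPORT#2002", "name": "Ben", "title": "Analyst"})

# A single low-latency Query returns the manager together with their subtree.
response = table.query(
    KeyConditionExpression=Key("PK").eq("DEPT#FINANCE") & Key("SK").begins_with("EMP#1001")
)
print(response["Items"])
```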
A company has a multi-tier payment processing application that is based on virtual machines (VMs). The communication between the tiers occurs asynchronously through a third-party middleware solution that guarantees exactly-once delivery.
The company needs a solution that requires the least amount of infrastructure management. The solution must guarantee exactly-once delivery for application messaging.
Which combination of actions will meet these requirements? (Select TWO.)
- A . Use AWS Lambda for the compute layers in the architecture.
- B . Use Amazon EC2 instances for the compute layers in the architecture.
- C . Use Amazon Simple Notification Service (Amazon SNS) as the messaging component between the compute layers.
- D . Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the compute layers.
- E . Use containers that are based on Amazon Elastic Kubernetes Service (Amazon EKS) for the compute layers in the architecture.
A, D
Explanation:
This solution meets the requirements because it requires the least amount of infrastructure management and guarantees exactly-once delivery for application messaging. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You only pay for the compute time you consume. Lambda scales automatically with the size of your workload. Amazon SQS FIFO queues are designed to ensure that messages are processed exactly once, in the exact order that they are sent. FIFO queues have high availability and deliver messages in a strict first-in, first-out order. You can use Amazon SQS to decouple and scale microservices, distributed systems, and serverless applications.
Reference: AWS Lambda, Amazon SQS FIFO queues
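A minimal sketch of the messaging side of option D: sending a payment message to an SQS FIFO queue from Python. The queue URL and message fields are placeholders; the MessageDeduplicationId (or content-based deduplication enabled on the queue) is what prevents duplicate delivery within the deduplication interval.

```python
import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/payments.fifo"  # placeholder queue

# FIFO queues require a MessageGroupId for ordering; the deduplication ID
# makes redelivered copies of the same payment collapse into one message.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"paymentId": "p-123", "amount": 42.50}',
    MessageGroupId="payments",
    MessageDeduplicationId="p-123",
)
```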
A company hosts an application on Amazon EC2 instances that run in a single Availability Zone. The application is accessible by using the transport layer of the Open Systems Interconnection (OSI) model. The company needs the application architecture to have high availability.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
- A . Configure new EC2 instances in a different Availability Zone. Use Amazon Route 53 to route traffic to all instances.
- B . Configure a Network Load Balancer in front of the EC2 instances.
- C . Configure a Network Load Balancer for TCP traffic to the instances. Configure an Application Load Balancer for HTTP and HTTPS traffic to the instances.
- D . Create an Auto Scaling group for the EC2 instances. Configure the Auto Scaling group to use multiple Availability Zones. Configure the Auto Scaling group to run application health checks on the instances.
- E . Create an Amazon CloudWatch alarm. Configure the alarm to restart EC2 instances that transition to a stopped state.
A, D
Explanation:
To achieve high availability, the application should be deployed across multiple Availability Zones. An Auto Scaling group can launch and manage EC2 instances in multiple Availability Zones and perform health checks on them. Because the application is accessed at the transport layer, Amazon Route 53 can route DNS queries directly to the instances in both Availability Zones, which avoids the ongoing cost of a Network Load Balancer and makes options A and D the most cost-effective combination.
Reference: Auto Scaling Groups
What Is a Network Load Balancer?
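As a sketch of option D, the call below creates an Auto Scaling group that spans subnets in two Availability Zones and replaces instances that fail health checks. The group name, launch template, and subnet IDs are placeholder assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",                                          # placeholder name
    LaunchTemplate={"LaunchTemplateName": "app-lt", "Version": "$Latest"},   # placeholder template
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # subnets in two different AZs
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
)
```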
A company is designing a cloud communications platform that is driven by APIs. The application is hosted on Amazon EC2 instances behind a Network Load Balancer (NLB). The company uses Amazon API Gateway to provide external users with access to the application through APIs. The company wants to protect the platform against web exploits like SQL injection and also wants to detect and mitigate large, sophisticated DDoS attacks.
Which combination of solutions provides the MOST protection? (Select TWO.)
- A . Use AWS WAF to protect the NLB.
- B . Use AWS Shield Advanced with the NLB.
- C . Use AWS WAF to protect Amazon API Gateway.
- D . Use Amazon GuardDuty with AWS Shield Standard.
- E . Use AWS Shield Standard with Amazon API Gateway.
B, C
Explanation:
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources.
You can protect the following resource types:
Amazon CloudFront distribution
Amazon API Gateway REST API
Application Load Balancer
AWS AppSync GraphQL API
Amazon Cognito user pool
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
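As an illustration of option C, an existing regional web ACL can be associated with an API Gateway stage through the WAFv2 API; both ARNs below are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Attach the web ACL (with SQL injection rules) to the API Gateway REST API stage.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/api-protection/abcd1234",  # placeholder
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",               # placeholder
)
```

Shield Advanced protection for the NLB (option B) is enabled separately on the account and then attached to the load balancer, for example with the Shield CreateProtection API.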
A company uses Amazon RDS with default backup settings for its database tier. The company needs to make a daily backup of the database to meet regulatory requirements. The company must retain the backups for 30 days.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Write an AWS Lambda function to create an RDS snapshot every day.
- B . Modify the RDS database to have a retention period of 30 days for automated backups.
- C . Use AWS Systems Manager Maintenance Windows to modify the RDS backup retention period.
- D . Create a manual snapshot every day by using the AWS CLI. Modify the RDS backup retention period.
B
Explanation:
Current Backup Settings: By default, Amazon RDS creates automated backups with a retention period of 7 days.
Regulatory Requirements: The requirement is to retain daily backups for 30 days.
Adjusting Retention Period: You can modify the RDS instance settings to increase the automated backup retention period to 30 days.
Operational Overhead: This solution is the simplest as it leverages existing automated backups and requires minimal intervention.
Implementation: The change can be made via the AWS Management Console, AWS CLI, or AWS SDKs.
Reference: Amazon RDS Backups – Amazon RDS Documentation
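A one-call sketch of option B with boto3; the DB instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Extend the automated backup retention period to 30 days. RDS keeps taking
# its daily automated backups, so no additional automation is required.
rds.modify_db_instance(
    DBInstanceIdentifier="payments-db",   # placeholder instance identifier
    BackupRetentionPeriod=30,
    ApplyImmediately=True,
)
```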
A company hosts its web application on AWS using seven Amazon EC2 instances. The company requires that the IP addresses of all healthy EC2 instances be returned in response to DNS queries.
Which policy should be used to meet this requirement?
- A . Simple routing policy
- B . Latency routing policy
- C . Multivalue routing policy
- D . Geolocation routing policy
C
Explanation:
Use a multivalue answer routing policy to help distribute DNS responses across multiple resources. Multivalue answer routing is appropriate when you need to return multiple values for a DNS query and route traffic to multiple IP addresses, and you can associate each record with a Route 53 health check so that only healthy instances are returned. https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
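For illustration, each instance gets its own multivalue answer record tied to a health check, so Route 53 returns only healthy IP addresses (up to eight per response). The hosted zone ID, record name, IP address, and health check ID below are placeholders.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",                   # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "instance-1",           # one record per EC2 instance
                "MultiValueAnswer": True,
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # placeholder
            },
        }]
    },
)
```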
A serverless application uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The Lambda function needs permissions to read and write to the DynamoDB table.
Which solution will give the Lambda function access to the DynamoDB table MOST securely?
- A . Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters as part of the Lambda environment variables. Ensure that other AWS users do not have read and write access to the Lambda function configuration.
- B . Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that allows read and write access to the DynamoDB table. Update the configuration of the Lambda function to use the new role as the execution role.
- C . Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters in AWS Systems Manager Parameter Store as secure string parameters. Update the Lambda function code to retrieve the secure string parameters before connecting to the DynamoDB table.
- D . Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that allows read and write access from the Lambda function. Update the code of the Lambda function to attach to the new role as an execution role.
B
Explanation:
Option B creates an IAM role that trusts the Lambda service, meaning the role is specifically designed to be assumed by Lambda functions. A policy attached to the role grants the required read and write access to the DynamoDB table, and the role is set as the function's execution role. Because the function receives temporary credentials at run time, no long-term access keys have to be stored or rotated, which makes this the most secure option.
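A condensed sketch of option B: create a role that trusts the Lambda service and attach an inline policy scoped to one table. The role name, policy name, and table ARN are placeholders; the returned role ARN would then be set as the function's execution role.

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="orders-function-role",                       # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant read/write access to the single DynamoDB table the function uses.
iam.put_role_policy(
    RoleName="orders-function-role",
    PolicyName="orders-table-access",                      # placeholder policy name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                       "dynamodb:UpdateItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",  # placeholder ARN
        }],
    }),
)

print(role["Role"]["Arn"])  # this ARN becomes the Lambda function's execution role
```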
A company is building a three-tier application on AWS. The presentation tier will serve a static website. The logic tier is a containerized application. This application will store data in a relational database. The company wants to simplify deployment and to reduce operational costs.
Which solution will meet these requirements?
- A . Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
- B . Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.
- C . Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
- D . Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.
A
Explanation:
Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. You can use Amazon S3 to host static content for your website, such as HTML files, images, and videos.
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that allows you to run and scale containerized applications on AWS. AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS. Fargate makes it easy for you to focus on building your applications by removing the need to provision and manage servers. You can use Amazon ECS with AWS Fargate for the containerized logic tier.
Amazon RDS is a managed relational database service that makes it easy to set up, operate, and scale a relational database in the cloud. You can use a managed Amazon RDS cluster for the database tier. This solution simplifies deployment and reduces operational costs for the three-tier application.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
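To make option A concrete, the logic tier could be launched as a Fargate service; the cluster, task definition, and subnet IDs below are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Run the containerized logic tier on Fargate so there are no EC2 hosts to manage.
ecs.create_service(
    cluster="web-app-cluster",            # placeholder cluster
    serviceName="logic-tier",
    taskDefinition="logic-tier:1",        # placeholder task definition revision
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnets
            "assignPublicIp": "DISABLED",
        }
    },
)
```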