Practice Free SAA-C03 Exam Online Questions
A company is developing a social media application that must scale to meet demand spikes and handle ordered processes.
Which AWS services meet these requirements?
- A . ECS with Fargate, RDS, and SQS for decoupling.
- B . ECS with Fargate, RDS, and SNS for decoupling.
- C . DynamoDB, Lambda, DynamoDB Streams, and Step Functions.
- D . Elastic Beanstalk, RDS, and SNS for decoupling.
A
Explanation:
Option A combines ECS with Fargate for scalability, RDS for relational data, and SQS for decoupling with message ordering (FIFO queues).
Option B uses SNS, which does not maintain message order.
Option C is suitable for serverless workflows but not relational data.
Option D relies on Elastic Beanstalk, which offers less flexibility for scaling.
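To make the ordering point concrete, the sketch below sends events to an SQS FIFO queue with boto3. The queue URL, message group ID, and event shape are hypothetical; messages that share a MessageGroupId are delivered in order within that group.

```python
# Minimal sketch: sending ordered messages to an SQS FIFO queue with boto3.
# The queue URL, group ID, and event fields are placeholders.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/social-feed.fifo"  # placeholder

def publish_event(user_id: str, event: dict) -> None:
    """Messages sharing a MessageGroupId are processed in order within that group."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(event),
        MessageGroupId=user_id,                    # preserves per-user ordering
        MessageDeduplicationId=event["event_id"],  # or enable content-based deduplication
    )

publish_event("user-42", {"event_id": "evt-001", "action": "post_created"})
```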
A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances randomly.
Which combination of steps should the company take to meet these requirements? (Choose two.)
- A . Create an Amazon Route 53 failover routing policy.
- B . Create an Amazon Route 53 weighted routing policy.
- C . Create an Amazon Route 53 multivalue answer routing policy.
- D . Launch three EC2 instances: two instances in one Availability Zone and one instance in another Availability Zone.
- E . Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.
C, E
Explanation:
A multivalue answer routing policy lets Route 53 respond to DNS queries with up to eight healthy records selected at random, so traffic reaches all running instances randomly, and placing two instances in each of two Availability Zones keeps the application available if one Availability Zone fails.
https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
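As a rough illustration of the multivalue approach, the boto3 sketch below upserts one multivalue answer A record per instance. The hosted zone ID, domain name, IP addresses, and health check IDs are placeholders.

```python
# Illustrative sketch: multivalue answer records so Route 53 returns healthy
# instance IPs in random order. All identifiers below are placeholders.
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"  # placeholder

def add_multivalue_record(name: str, ip: str, set_id: str, health_check_id: str) -> None:
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "SetIdentifier": set_id,
                    "MultiValueAnswer": True,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                    "HealthCheckId": health_check_id,
                },
            }]
        },
    )

# One record per EC2 instance (two per Availability Zone in this design).
add_multivalue_record("app.example.com", "203.0.113.10", "az-a-1", "hc-placeholder-1")
add_multivalue_record("app.example.com", "203.0.113.11", "az-a-2", "hc-placeholder-2")
```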
A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure.
Which solution will meet these requirements?
- A . Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes
- B . Use AWS Cloud Formation to set up the infrastructure. Use AWS Config to track changes.
- C . Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
- D . Use AWS Cloud Formation to set up the infrastructure. Use AWS Service Catalog to track changes.
B
Explanation:
AWS CloudFormation allows for the automated, repeatable setup of infrastructure, reducing human error and ensuring consistency. AWS Config provides the ability to track changes in the infrastructure, ensuring that all changes are logged and auditable, which satisfies the requirement for tracking incremental changes.
Options A and C (AWS Organizations): AWS Organizations manages multiple accounts, but it is not designed for infrastructure setup or change tracking.
Option D (Service Catalog): AWS Service Catalog is used for deploying products, not for setting up infrastructure or tracking changes.
AWS Reference: AWS Config, AWS CloudFormation
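A minimal sketch of this pattern, assuming a hypothetical template URL, stack name, and resource ID: CloudFormation provisions the stack, and AWS Config's change history is queried for auditing.

```python
# Hedged sketch: provision infrastructure from a CloudFormation template, then
# read the change history AWS Config records for a resource. Values are placeholders.
import boto3

cfn = boto3.client("cloudformation")
config = boto3.client("config")

cfn.create_stack(
    StackName="network-baseline",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/example-bucket/network-baseline.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Audit incremental changes to a resource that the stack created.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",  # hypothetical resource ID
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```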
An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the other two applications process the data.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application requests.
- B . Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the requesting application.
- C . Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom dataset. Point each application to its respective S3 bucket.
- D . Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom dataset. Point each application to its respective DynamoDB table.
B
Explanation:
https://aws.amazon.com/blogs/aws/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-s3/
S3 Object Lambda is a feature of Amazon S3 that enables customers to add their own code to process data retrieved from S3 before returning it to the application. By using S3 Object Lambda, the data can be processed and transformed in real time, without the need to store multiple copies of the data in separate S3 buckets or DynamoDB tables.
In this case, the PII can be removed from the data by the code added to S3 Object Lambda before returning the data to the two applications that do not need to process PII. The one application that requires PII can be pointed to the original S3 bucket where the PII is still stored.
Using S3 Object Lambda is the simplest and most cost-effective solution, as it eliminates the need to maintain multiple copies of the same data in different buckets or tables, which can result in additional storage costs and operational overhead.
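A minimal sketch of an S3 Object Lambda handler follows; the redaction logic (a simple regex over email addresses) is a stand-in for a real PII-removal routine such as Amazon Comprehend detection.

```python
# Illustrative S3 Object Lambda handler that strips PII before returning the object.
import re
import urllib.request
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    ctx = event["getObjectContext"]
    # Fetch the original object through the presigned URL supplied by S3 Object Lambda.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    # Placeholder transformation: redact anything that looks like an email address.
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", original)

    # Return the transformed object to the requesting application.
    s3.write_get_object_response(
        Body=redacted,
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```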
An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B. Both VPCs are in the same AWS account.
Which solution will provide the required access MOST securely?
- A . Create a DB instance security group that allows all traffic from the public IP address of the application server in VPC A.
- B . Configure a VPC peering connection between VPC A and VPC B.
- C . Make the DB instance publicly accessible. Assign a public IP address to the DB instance.
- D . Launch an EC2 instance with an Elastic IP address into VPC B. Proxy all requests through the new EC2 instance.
B
Explanation:
A VPC peering connection is a networking connection between two VPCs that enables users to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. A VPC peering connection can be created between VPCs in the same or different AWS accounts and Regions. By configuring a VPC peering connection between VPC A and VPC B, the solution can provide the required access most securely.
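A rough boto3 sketch of setting up the peering connection and routes; the VPC IDs, route table IDs, and CIDR blocks are placeholders, and the database security group must still allow inbound traffic from the application's private address range.

```python
# Minimal sketch: peer VPC A and VPC B and add routes so the application can
# reach the database over private IPs. All identifiers and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",      # VPC A (application)
    PeerVpcId="vpc-bbbb2222",  # VPC B (database)
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Route each VPC's traffic for the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=peering_id)
ec2.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=peering_id)
```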
A company has multiple Amazon RDS DB instances that run in a development AWS account. All the instances have tags to identify them as development resources. The company needs the development DB instances to run on a schedule only during business hours.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an Amazon CloudWatch alarm to identify RDS instances that need to be stopped. Create an AWS Lambda function to start and stop the RDS instances.
- B . Create an AWS Trusted Advisor report to identify RDS instances to be started and stopped. Create an AWS Lambda function to start and stop the RDS instances.
- C . Create AWS Systems Manager State Manager associations to start and stop the RDS instances.
- D . Create an Amazon EventBridge rule that invokes AWS Lambda functions to start and stop the RDS instances.
D
Explanation:
To run RDS instances only during business hours with the least operational overhead, you can use Amazon EventBridge to schedule events that invoke AWS Lambda functions. The Lambda functions can be configured to start and stop the RDS instances based on the specified schedule (business hours). EventBridge rules allow you to define recurring events easily, and Lambda functions provide a serverless way to manage RDS instance start and stop operations, reducing administrative overhead.
Option A: While CloudWatch alarms could be used, they are more suited for monitoring, and using Lambda with EventBridge is simpler.
Option B (Trusted Advisor): Trusted Advisor is not ideal for scheduling tasks.
Option C (Systems Manager): Systems Manager could also work, but EventBridge and Lambda offer a more streamlined, lower-overhead solution.
AWS Reference: Amazon EventBridge Scheduler, AWS Lambda
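A hedged sketch of the Lambda function that two scheduled EventBridge rules (one at the start of business hours, one at the end) could invoke with a constant input such as {"action": "start"}; the tag key and value are assumptions.

```python
# Sketch of a Lambda handler that starts or stops development-tagged RDS instances.
# The "action" event field and the environment=development tag are assumptions.
import boto3

rds = boto3.client("rds")

def handler(event, context):
    action = event.get("action", "stop")  # the EventBridge rule passes "start" or "stop"
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]
        if {"Key": "environment", "Value": "development"} not in tags:
            continue
        if action == "stop" and db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
        elif action == "start" and db["DBInstanceStatus"] == "stopped":
            rds.start_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
```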
A medical research lab produces data that is related to a new study. The lab wants to make the data available with minimum latency to clinics across the country for their on-premises, file-based applications. The data files are stored in an Amazon S3 bucket that has read-only permissions for each clinic.
What should a solutions architect recommend to meet these requirements?
- A . Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
- B . Migrate the files to each clinic’s on-premises applications by using AWS DataSync for processing.
- C . Deploy an AWS Storage Gateway volume gateway as a virtual machine (VM) on premises at each clinic.
- D . Attach an Amazon Elastic File System (Amazon EFS) file system to each clinic’s on-premises servers.
A
Explanation:
AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and AWS’s storage infrastructure. By deploying a file gateway as a virtual machine on each clinic’s premises, the medical research lab can provide low-latency access to the data stored in the S3 bucket while maintaining read-only permissions for each clinic. This solution allows the clinics to access the data files directly from their on-premises file-based applications without the need for data transfer or migration.
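As a rough illustration, once a file gateway VM has been activated at a clinic, an NFS file share can expose the study bucket to on-premises applications; the gateway ARN, IAM role, and bucket name below are placeholders.

```python
# Illustrative sketch: create an NFS file share on an activated file gateway that
# exposes the read-only S3 bucket to on-premises applications. ARNs are placeholders.
import uuid
import boto3

sgw = boto3.client("storagegateway")

sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::research-study-data",
    ReadOnly=True,                      # matches the bucket's read-only permissions
    DefaultStorageClass="S3_STANDARD",
)
```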
A company is planning to migrate a commercial off-the-shelf application from its on-premises data center to AWS. The software has a licensing model based on sockets and cores, with predictable capacity and uptime requirements. The company wants to use its existing licenses, which were purchased earlier this year.
Which Amazon EC2 pricing option is the MOST cost-effective?
- A . Dedicated Reserved Hosts
- B . Dedicated On-Demand Hosts
- C . Dedicated Reserved Instances
- D . Dedicated On-Demand Instances
A
Explanation:
https://aws.amazon.com/ec2/dedicated-hosts/
Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, so that you get the flexibility and cost effectiveness of using your own licenses, but with the resiliency, simplicity, and elasticity of AWS.
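For illustration only, the boto3 sketch below allocates a Dedicated Host and then covers it with a host reservation for the Reserved pricing; the instance family, Availability Zone, and offering selection are placeholders.

```python
# Hedged sketch: allocate a Dedicated Host (so socket/core-based licenses can apply)
# and purchase a host reservation for the predictable workload. Values are placeholders.
import boto3

ec2 = boto3.client("ec2")

hosts = ec2.allocate_hosts(
    InstanceType="r5.4xlarge",   # placeholder instance type
    AvailabilityZone="us-east-1a",
    Quantity=1,
    AutoPlacement="off",         # pin licensed instances to this specific host
)
host_id = hosts["HostIds"][0]

# Cover the host with a 1- or 3-year reservation.
offerings = ec2.describe_host_reservation_offerings(
    Filter=[{"Name": "instance-family", "Values": ["r5"]}]
)
ec2.purchase_host_reservation(
    HostIdSet=[host_id],
    OfferingId=offerings["OfferingSet"][0]["OfferingId"],
)
```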
A company wants to migrate an application that uses a microservice architecture to AWS. The services currently run on Docker containers on-premises. The application has an event-driven architecture that uses Apache Kafka. The company configured Kafka to use multiple queues to send and receive messages. Some messages must be processed by multiple services.
Which solution will meet these requirements with the LEAST management overhead?
- A . Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Deploy a Kafka cluster on EC2 instances to handle service-to-service communication.
- B . Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create multiple Amazon Simple Queue Service (Amazon SQS) queues to handle service-to-service communication.
- C . Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Deploy an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster to handle service-to-service communication.
- D . Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Use Amazon EventBridge to handle service-to-service communication.
C
Explanation:
Amazon MSK is a fully managed Apache Kafka service, so the existing Kafka-based, event-driven design (including topics that multiple services consume) carries over without rearchitecting, and the AWS Fargate launch type removes the need to manage container instances. Options A and D require managing EC2 instances, and option B replaces Kafka with SQS, which requires rework and does not natively deliver a single message to multiple consumers.
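For illustration, a minimal boto3 sketch of provisioning the MSK cluster that replaces the self-managed brokers; the cluster name, Kafka version, subnets, and security group are placeholders.

```python
# Minimal sketch: create an Amazon MSK cluster for service-to-service messaging.
# Cluster name, version, subnets, security group, and sizing are placeholders.
import boto3

msk = boto3.client("kafka")

msk.create_cluster(
    ClusterName="microservices-events",  # hypothetical cluster name
    KafkaVersion="3.5.1",                # placeholder; pick a supported MSK version
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"],
        "SecurityGroups": ["sg-0123456789abcdef0"],
    },
)
```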
A global company is using Amazon API Gateway to design REST APIs for its loyalty club users in the us-east-1 Region and the ap-southeast-2 Region. A solutions architect must design a solution to protect these API Gateway managed REST APIs across multiple accounts from SQL injection and cross-site scripting attacks.
Which solution will meet these requirements with the LEAST amount of administrative effort?
- A . Set up AWS WAF in both Regions. Associate Regional web ACLs with an API stage.
- B . Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.
- C . Set up AWS Shield in both Regions. Associate Regional web ACLs with an API stage.
- D . Set up AWS Shield in one of the Regions. Associate Regional web ACLs with an API stage.
A
Explanation:
AWS WAF provides additional protection against web attacks using criteria that you specify. You can define criteria using characteristics of web requests such as the presence of SQL code that is likely to be malicious (known as SQL injection) and the presence of a script that is likely to be malicious (known as cross-site scripting). AWS Firewall Manager simplifies administration and maintenance tasks across multiple accounts and resources for a variety of protections.
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
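As an illustrative sketch of the per-Region setup, the boto3 code below creates a regional web ACL with AWS managed rule groups covering SQL injection and cross-site scripting and associates it with an API Gateway stage; the names, priorities, and stage ARN are placeholders (with Firewall Manager, an equivalent WAF policy would be defined once and applied across accounts).

```python
# Illustrative sketch: regional web ACL with AWS managed SQLi/XSS rule groups,
# associated with an API Gateway stage. Names and ARNs are placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

def managed_rule(name: str, priority: int) -> dict:
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {"ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

acl = wafv2.create_web_acl(
    Name="loyalty-api-protection",
    Scope="REGIONAL",  # API Gateway REST APIs are regional resources
    DefaultAction={"Allow": {}},
    Rules=[
        managed_rule("AWSManagedRulesSQLiRuleSet", 0),    # SQL injection protections
        managed_rule("AWSManagedRulesCommonRuleSet", 1),  # includes XSS protections
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "loyalty-api-protection",
    },
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",  # placeholder
)
```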