Practice Free SAA-C03 Exam Online Questions
The following IAM policy is attached to an IAM group. This is the only policy applied to the group.
- A . Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied.
- B . Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA).
- C . Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action.
- D . Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
D
Explanation:
This answer is correct because it reflects the effect of the IAM policy on the group members. The policy has two statements: one with an Allow effect and one with a Deny effect. The Allow statement grants permission to perform any EC2 action on any resource within the us-east-1 Region. The Deny statement overrides the Allow statement and denies permission to perform the ec2:StopInstances and ec2:TerminateInstances actions, unless the group member is logged in with MFA. Therefore, group members can perform any EC2 action within the us-east-1 Region, but they can stop or terminate instances only when they are logged in with MFA.
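The policy exhibit itself is not reproduced in this dump. As a sketch, a policy consistent with the explanation above could look like the following (written as a Python dict; the exact condition keys are assumptions, not the original exhibit):

```python
# Hedged reconstruction of the policy described above -- not the original
# exhibit. The Allow statement is limited to us-east-1; the Deny statement
# blocks stopping/terminating instances whenever MFA is absent.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:RequestedRegion": "us-east-1"}},
        },
        {
            "Effect": "Deny",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "*",
            # With MFA present this condition no longer matches, the Deny
            # falls away, and the Allow above takes effect (answer D).
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
    ],
}
```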
A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed within a few seconds after a request is made.
Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost?
- A . An AWS Glue job
- B . An AWS Lambda function
- C . A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)
- D . A containerized service hosted in Amazon ECS with Amazon EC2
B
Explanation:
AWS Lambda is the lowest-cost choice for this pattern. Lambda bills only per invocation and scales to zero, so the hours without requests cost nothing, and API Gateway can invoke a function asynchronously so processing completes within seconds of each request. The other options keep provisioned capacity running (EKS nodes, ECS on EC2) or, in the case of AWS Glue, are built for batch ETL rather than API-driven requests.
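As a minimal sketch (the handler name, payload shape, and processing step are assumptions, not from the question), a Lambda function behind API Gateway could look like:

```python
import json

def lambda_handler(event, context):
    # With an API Gateway proxy integration, the request body arrives
    # as a JSON string on the event.
    payload = json.loads(event.get("body") or "{}")
    process(payload)
    # 202 Accepted signals that the work completes asynchronously.
    return {"statusCode": 202, "body": json.dumps({"accepted": True})}

def process(payload):
    # Placeholder for the data processing that finishes within seconds.
    pass
```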
A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?
- A . Create an AWS Lambda function to apply the patch to all EC2 instances.
- B . Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
- C . Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
- D . Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
B
Explanation:
AWS Systems Manager Patch Manager automates patch application across large fleets of managed instances and can apply a patch to all 1,000 instances on demand, which makes it the fastest option with the least effort. A maintenance window (option C) schedules patching rather than applying it immediately, and options A and D require building custom patching logic.
https://docs.aws.amazon.com/systems-manager/latest/userguide/about-windows-app-patching.html
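For illustration, Patch Manager runs through the AWS-RunPatchBaseline Systems Manager document; a minimal boto3 sketch, assuming the instances are SSM managed, tagged (tag key assumed), and covered by a patch baseline that includes the third-party packages:

```python
import boto3

ssm = boto3.client("ssm")

# Targeting by tag reaches all 1,000 instances without enumerating
# instance IDs; MaxConcurrency patches the fleet in waves.
response = ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
    Parameters={"Operation": ["Install"]},
    MaxConcurrency="25%",
    MaxErrors="5%",
)
print(response["Command"]["CommandId"])
```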
A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game’s developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores.
What should a solutions architect do to meet these requirements?
- A . Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
- B . Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
- C . Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
- D . Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
B
Explanation:
This answer is correct because it meets the requirements of displaying a top-10 scoreboard in near-real time and offering the ability to stop and restore the game while preserving the current scores.
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. You can use Amazon ElastiCache for Redis to set up an ElastiCache for Redis cluster to compute and cache the scores for the web application to display. You can use Redis data structures such as sorted sets and hashes to store and rank the scores of the players, and use Redis commands such as ZRANGE and ZADD to retrieve and update the scores efficiently. You can also use Redis persistence features such as snapshots and append-only files (AOF) to enable point-in-time recovery of your data, which can help you stop and restore the game while preserving the current scores.
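For example, a minimal sketch of the scoreboard pattern with the redis-py client (the endpoint and key names are assumptions):

```python
import redis

# Endpoint is a placeholder; use the ElastiCache cluster endpoint.
r = redis.Redis(host="my-cluster.xxxxxx.use1.cache.amazonaws.com", port=6379)

# ZADD keeps the leaderboard ordered by score as players post results.
r.zadd("scoreboard", {"alice": 4200, "bob": 3100})

# Top-10 scoreboard in near-real time: highest scores first.
for rank, (player, score) in enumerate(
        r.zrevrange("scoreboard", 0, 9, withscores=True), start=1):
    print(rank, player.decode(), int(score))
```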
Reference:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html
https://redis.io/topics/data-types
https://redis.io/topics/persistence
A company runs a monolithic application in its on-premises data center. The company used Java/Tomcat to build the application. The application uses Microsoft SQL Server as a database. The company wants to migrate the application to AWS.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Use AWS App2Container to containerize the application. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Deploy the database to Amazon RDS for SQL Server. Configure a Multi-AZ deployment.
- B . Containerize the application and deploy the application on a self-managed Kubernetes cluster on an Amazon EC2 instance. Deploy the database on a separate EC2 instance. Set up Microsoft SQL Server Always On availability groups.
- C . Deploy the frontend of the web application as a website on Amazon S3. Use Amazon DynamoDB for the database tier.
- D . Use AWS App2Container to containerize the application. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon DynamoDB for the database tier.
A
Explanation:
AWS App2Container containerizes the existing Java/Tomcat application without code changes, Amazon EKS provides a managed Kubernetes control plane, and Amazon RDS for Microsoft SQL Server with a Multi-AZ deployment keeps the existing database engine while AWS manages backups, patching, and failover. Option B self-manages both Kubernetes and SQL Server, and options C and D would require rewriting the application for DynamoDB, so option A has the least operational overhead.
A company is developing a containerized web application that needs to be highly available and scalable. The application requires access to GPU resources.
Which solution will meet these requirements?
- A . Package the application as an AWS Lambda function in a container image. Use Lambda to run the containerized application on a runtime with GPU access.
- B . Deploy the application container to Amazon Elastic Kubernetes Service (Amazon EKS). Use AWS Fargate to manage compute resources and access to GPU resources.
- C . Deploy the application container to Amazon Elastic Container Registry (Amazon ECR). Use Amazon ECR to run the containerized application with an attached GPU.
- D . Run the application on Amazon EC2 instances from a GPU instance family by using Amazon Elastic Container Service (Amazon ECS) for orchestration.
D
Explanation:
Why Option D is Correct:
GPU Access: Only EC2 instances in the GPU family (e.g., P2, P3) can provide GPU resources.
ECS Orchestration: Simplifies container deployment and management.
Why Other Options Are Not Ideal:
Option A: Lambda does not support GPU-based runtimes.
Option B: AWS Fargate does not support GPU-based workloads.
Option C: ECR is a container registry, not an orchestration or execution service.
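As an illustrative sketch, an ECS task definition reserves a GPU through its resource requirements (the names, image, and sizing below are assumptions):

```python
import boto3

ecs = boto3.client("ecs")

# ECS schedules this task only onto container instances from a GPU
# instance family that still have a GPU available.
ecs.register_task_definition(
    family="gpu-web-app",
    requiresCompatibilities=["EC2"],  # GPU tasks require the EC2 launch type
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/gpu-app:latest",
            "memory": 4096,
            "resourceRequirements": [{"type": "GPU", "value": "1"}],
        }
    ],
)
```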
Reference: Amazon ECS with GPU Instances – AWS Documentation
A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users also can post photos and videos from inside the app.
Users access content often in the first minutes after the content is posted. New content quickly replaces older content, and then the older content disappears. The local nature of the news means that users consume 90% of the content within the AWS Region where it is uploaded.
Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
- A . Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
- B . Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
- C . Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon S3.
- D . Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of Amazon CloudFront.
B
Explanation:
The most suitable solution for optimizing the user experience by providing the lowest latency for content uploads is to upload and store content in Amazon S3 and use S3 Transfer Acceleration for the uploads. This solution will enable the company to leverage the AWS global network and edge locations to speed up the data transfer between the users and the S3 buckets.
Amazon S3 is a storage service that provides scalable, durable, and highly available object storage for any type of data. Amazon S3 allows users to store and retrieve data from anywhere on the web, and offers various features such as encryption, versioning, lifecycle management, and replication1.
S3 Transfer Acceleration is a feature of Amazon S3 that helps users transfer data to and from S3 buckets more quickly. S3 Transfer Acceleration works by using optimized network paths and Amazon’s backbone network to accelerate data transfer speeds. Users can enable S3 Transfer Acceleration for their buckets and use a distinct URL to access them, such as <bucket>.s3-accelerate.amazonaws.com2.
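For illustration, a minimal boto3 sketch of enabling and using Transfer Acceleration (the bucket and file names are assumptions):

```python
import boto3
from botocore.config import Config

bucket = "news-app-uploads"  # placeholder bucket name

# One-time bucket setting: enable Transfer Acceleration.
s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads then route through the nearest edge location via the
# <bucket>.s3-accelerate.amazonaws.com endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("video.mp4", bucket, "uploads/video.mp4")
```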
The other options are not correct because they either do not provide the lowest latency or are not suitable for the use case.
Uploading and storing content in Amazon S3 and using Amazon CloudFront for the uploads is not correct because CloudFront is designed for optimizing downloads, not uploads. Amazon CloudFront is a content delivery network (CDN) that distributes content globally with low latency and high transfer speeds by caching it at edge locations around the world3.
Uploading content to Amazon EC2 instances in the Region that is closest to the user and copying the data to Amazon S3 is not correct because this solution adds unnecessary complexity and cost. Amazon EC2 provides scalable and secure virtual servers in the cloud, but an upload tier of EC2 instances would have to be provisioned and managed in every Region4.
Uploading and storing content in Amazon S3 in the Region that is closest to the user and using multiple distributions of Amazon CloudFront is not correct because it is not cost-effective or efficient for this use case. Creating a CloudFront distribution per Region would incur additional charges and management overhead, and is unnecessary since 90% of the content is consumed within the same Region where it is uploaded3.
Reference:
What Is Amazon Simple Storage Service? – Amazon Simple Storage Service
Amazon S3 Transfer Acceleration – Amazon Simple Storage Service
What Is Amazon CloudFront? – Amazon CloudFront
What Is Amazon EC2? – Amazon Elastic Compute Cloud
An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists of a web application and a backend Microsoft SQL database for storage.
The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS Cloud must be highly available and scalable.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in both Availability Zones.
- B . Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across separate AWS Regions with database replication.
- C . Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ deployment.
- D . Migrate the web application to three Amazon EC2 instances across three Availability Zones behind an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.
C
Explanation:
To ensure high availability and scalability, the web application should run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer (ALB). The database should be migrated to Amazon RDS with Multi-AZ deployment, which ensures fault tolerance and automatic failover in case of an AZ failure. This setup minimizes administrative overhead while meeting the company’s requirements for high availability and scalability.
Option A: Read replicas are typically used for scaling read operations, and Multi-AZ provides better availability for a transactional database.
Option B: Replicating across AWS Regions adds unnecessary complexity for a single web application.
Option D: EC2 instances across three Availability Zones add unnecessary complexity for this scenario.
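For the database tier, Multi-AZ is a single flag on the RDS instance; a minimal boto3 sketch, with identifiers, sizing, and edition assumed:

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True keeps a synchronous standby in a second Availability Zone
# with automatic failover; all other values are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-sqlserver",
    Engine="sqlserver-se",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # RDS stores the password in Secrets Manager
    MultiAZ=True,
    LicenseModel="license-included",
)
```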
Reference: Auto Scaling Groups – AWS Documentation
Amazon RDS Multi-AZ
A company has an on-premises business application that generates hundreds of files each day. These files are stored on an SMB file share and require a low-latency connection to the application servers. A new company policy states all application-generated files must be copied to AWS. There is already a VPN connection to AWS.
The application development team does not have time to make the necessary code modifications to move the application to AWS.
Which service should a solutions architect recommend to allow the application to copy files to AWS?
- A . Amazon Elastic File System (Amazon EFS)
- B . Amazon FSx for Windows File Server
- C . AWS Snowball
- D . AWS Storage Gateway
D
Explanation:
Understanding the Requirement: The company needs to copy files generated by an on-premises application to AWS without modifying the application code. The files are stored on an SMB file share and require a low-latency connection to the application servers.
Analysis of Options:
Amazon Elastic File System (EFS): EFS is designed for Linux-based workloads and does not natively support SMB file shares.
Amazon FSx for Windows File Server: FSx supports SMB file shares but would require changes to the application or additional infrastructure to connect on-premises systems.
AWS Snowball: Suitable for large data transfers but not for continuous, low-latency file copying.
AWS Storage Gateway: Provides a hybrid cloud storage solution, supporting SMB file shares and enabling seamless copying of files to AWS without requiring changes to the application.
Best Solution:
AWS Storage Gateway: This service meets the requirement for a low-latency, seamless file transfer solution from on-premises to AWS without modifying the application code.
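As an illustration, once a File Gateway appliance is activated on premises, one API call creates an SMB share backed by Amazon S3, so the application keeps writing to a local low-latency share while the files land in AWS. All ARNs below are placeholders:

```python
import boto3

sgw = boto3.client("storagegateway")

# Placeholder gateway, role, and bucket ARNs -- substitute real values.
sgw.create_smb_file_share(
    ClientToken="unique-idempotency-token",
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::application-generated-files",
    Authentication="ActiveDirectory",
)
```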
Reference: AWS Storage Gateway
Amazon FSx for Windows File Server
An ecommerce company runs an application that uses an Amazon DynamoDB table in a single AWS Region. The company wants to deploy the application to a second Region. The company needs to support multi-active replication with low latency reads and writes to the existing DynamoDB table in both Regions.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Create a DynamoDB global secondary index (GSI) for the existing table. Create a new table in the second Region. Convert the existing DynamoDB table to a global table. Specify the new table as the secondary table.
- B . Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create a new application that uses the DynamoDB Streams Kinesis Adapter and the Amazon Kinesis Client Library (KCL). Configure the new application to read data from the DynamoDB table in the first Region and to write the data to the new table in the second Region.
- C . Convert the existing DynamoDB table to a global table. Choose the appropriate second Region to achieve active-active write capabilities in both Regions.
- D . Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create an AWS Lambda function in the first Region that reads data from the table in the first Region and writes the data to the new table in the second Region. Set a DynamoDB stream as the input trigger for the Lambda function.
C
Explanation:
Converting the existing DynamoDB table to a global table provides active-active replication and low-latency reads and writes in both Regions. DynamoDB global tables are specifically designed for multi-Region and multi-active use cases.
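For illustration, converting the table is a single UpdateTable call under global tables version 2019.11.21; the table and Region names below are assumptions:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# The table must have DynamoDB Streams enabled (new and old images)
# before a replica Region can be added.
dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```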
Option A: GSIs do not provide multi-Region replication or active-active capabilities.
Options B and D: Using DynamoDB Streams and custom replication is less operationally efficient than global tables and introduces additional complexity.
Reference: DynamoDB Global Tables – AWS Documentation