Practice Free SAA-C03 Exam Online Questions
A company hosts an application in a private subnet. The company has already integrated the application with Amazon Cognito. The company uses an Amazon Cognito user pool to authenticate users.
The company needs to modify the application so the application can securely store user documents in an Amazon S3 bucket.
Which combination of steps will securely integrate Amazon S3 with the application? (Select TWO.)
- A . Create an Amazon Cognito identity pool to generate secure Amazon S3 access tokens for users when they successfully log in.
- B . Use the existing Amazon Cognito user pool to generate Amazon S3 access tokens for users when they successfully log in.
- C . Create an Amazon S3 VPC endpoint in the same VPC where the company hosts the application.
- D . Create a NAT gateway in the VPC where the company hosts the application. Assign a policy to the S3 bucket to deny any request that is not initiated from Amazon Cognito.
- E . Attach a policy to the S3 bucket that allows access only from the users’ IP addresses.
A, C
Explanation:
To securely integrate Amazon S3 with an application that uses Amazon Cognito for user authentication, the following two steps are essential:
Step 1: Create an Amazon Cognito Identity Pool (Option A)
Amazon Cognito identity pools allow users to obtain temporary AWS credentials to access AWS resources, such as Amazon S3, after successfully authenticating with the Cognito user pool. The identity pool bridges the gap between user authentication and AWS service access by generating temporary credentials using AWS Identity and Access Management (IAM).
Once a user logs in using the Cognito user pool, the identity pool provides IAM roles with specific permissions that the application can use to access S3 securely. This ensures that each user has appropriate access controls while accessing the S3 bucket.
This is a secure way to ensure that users only have temporary and least-privilege access to the S3 bucket for their documents.
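The credential exchange in Step 1 can be sketched with boto3 as shown below. This is a minimal sketch, assuming hypothetical values for the identity pool ID, user pool provider name, and bucket name; the ID token is the one returned by the Cognito user pool sign-in.

```python
import os
import boto3

# Hypothetical identifiers -- replace with your own resources.
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"
USER_POOL_PROVIDER = "cognito-idp.us-east-1.amazonaws.com/us-east-1_example"
BUCKET = "example-user-documents"


def upload_user_document(id_token, user_id, local_path):
    """Exchange a user pool ID token for temporary AWS credentials
    via the identity pool, then upload a document to S3."""
    identity = boto3.client("cognito-identity", region_name="us-east-1")

    # Map the authenticated user pool identity to an identity pool identity.
    identity_id = identity.get_id(
        IdentityPoolId=IDENTITY_POOL_ID,
        Logins={USER_POOL_PROVIDER: id_token},
    )["IdentityId"]

    # Obtain short-lived credentials scoped by the identity pool's IAM role.
    creds = identity.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins={USER_POOL_PROVIDER: id_token},
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Store each user's documents under a per-user prefix.
    s3.upload_file(local_path, BUCKET, f"{user_id}/{os.path.basename(local_path)}")
```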
Step 2: Create an Amazon S3 VPC Endpoint (Option C)
By creating an Amazon S3 VPC endpoint, the company ensures that communication between the application (which is hosted in a private subnet) and the S3 bucket occurs over the AWS private network, without the need to traverse the internet. This enhances security and prevents exposure of data to public networks.
The VPC endpoint allows the application to access the S3 bucket privately and securely within the VPC. It also ensures that traffic stays within the AWS network, reducing the attack surface and improving overall security.
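A minimal sketch of creating the gateway endpoint for S3 with boto3, assuming placeholder VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical VPC and route table IDs for the private subnet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```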
Why the Other Options Are Incorrect:
Option B: This is incorrect because Amazon Cognito user pools are used for user authentication, not for generating S3 access tokens. To provide S3 access, you need to use Amazon Cognito identity pools, which offer AWS credentials.
Option D: A NAT gateway is unnecessary in this scenario. Using a VPC endpoint for S3 access provides a more secure and cost-effective solution by keeping traffic within AWS.
Option E: Attaching a policy to restrict access based on IP addresses is not scalable or efficient. It would require managing users’ dynamic IP addresses, which is not an effective security measure for this use case.
Reference: Amazon Cognito Identity Pools
Amazon VPC Endpoints for S3
A company hosts a three-tier web application on Amazon EC2 instances in a single Availability Zone. The web application uses a self-managed MySQL database that is hosted on an EC2 instance to store data in an Amazon Elastic Block Store (Amazon EBS) volume. The MySQL database currently uses a 1 TB Provisioned IOPS SSD (io2) EBS volume. The company expects traffic of 1,000 IOPS for both reads and writes at peak traffic.
The company wants to minimize any disruptions, stabilize performance, and reduce costs while retaining the capacity for double the IOPS. The company wants to move the database tier to a fully managed solution that is highly available and fault tolerant.
Which solution will meet these requirements MOST cost-effectively?
- A . Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with an io2 Block Express EBS volume.
- B . Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General Purpose SSD (gp2) EBS volume.
- C . Use Amazon S3 Intelligent-Tiering access tiers.
- D . Use two large EC2 instances to host the database in active-passive mode.
B
Explanation:
RDS supported storage: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
gp2 max IOPS: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose.html#gp2-performance
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1), and magnetic (also known as standard). They differ in performance characteristics and price, which means that you can tailor your storage performance and cost to the needs of your database workload. You can create MySQL, MariaDB, Oracle, and PostgreSQL RDS DB instances with up to 64 tebibytes (TiB) of storage. You can create SQL Server RDS DB instances with up to 16 TiB of storage. For this amount of storage, use the Provisioned IOPS SSD and General Purpose SSD storage types. A gp2 volume delivers a baseline of 3 IOPS per GiB, so a 1 TiB volume provides roughly 3,000 IOPS, which covers the required 2,000 IOPS (double the 1,000 IOPS peak) at a lower cost than io2. A Multi-AZ RDS for MySQL deployment also satisfies the requirement for a fully managed, highly available, and fault-tolerant database tier.
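As a rough illustration of answer B, here is a minimal boto3 sketch that provisions a Multi-AZ RDS for MySQL instance on gp2 storage; the instance identifier, instance class, and credentials are placeholder values, not part of the original scenario.

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")

# 1,024 GiB of gp2 provides a baseline of roughly 3,072 IOPS (3 IOPS per GiB),
# covering the 2,000 IOPS target.
rds.create_db_instance(
    DBInstanceIdentifier="web-app-db",          # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m6g.large",             # placeholder instance class
    MultiAZ=True,                               # synchronous standby in another AZ
    AllocatedStorage=1024,                      # GiB
    StorageType="gp2",
    MasterUsername="admin",
    MasterUserPassword="replace-with-a-secret",
)
```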
A company needs a solution to prevent photos with unwanted content from being uploaded to the company’s web application. The solution must not involve training a machine learning (ML) model.
Which solution will meet these requirements?
- A . Create and deploy a model by using Amazon SageMaker Autopilot. Create a real-time endpoint that the web application invokes when new photos are uploaded.
- B . Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
- C . Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content. Associate the function with the web application.
- D . Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
B
Explanation:
Amazon Rekognition provides pretrained models that can detect inappropriate or unsafe content (such as nudity or violence) without requiring users to build their own ML models. Using a Lambda function to call Rekognition when new photos are uploaded is a serverless and scalable solution.
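A minimal sketch of such a Lambda handler, assuming the web application posts the bucket name and object key of the uploaded photo in the request body; detect_moderation_labels is Rekognition's pretrained content-moderation API.

```python
import json
import boto3

rekognition = boto3.client("rekognition")


def lambda_handler(event, context):
    """Check an uploaded photo for unwanted content.
    Assumes the caller passes the bucket and object key in the request body."""
    body = json.loads(event.get("body", "{}"))

    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": body["bucket"], "Name": body["key"]}},
        MinConfidence=80,
    )
    labels = [label["Name"] for label in response["ModerationLabels"]]

    return {
        "statusCode": 200,
        "body": json.dumps({"unwanted": bool(labels), "labels": labels}),
    }
```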
Reference: AWS Documentation - Amazon Rekognition Content Moderation
A company is building an ecommerce application that uses a relational database to store customer data and order history. The company also needs a solution to store 100 GB of product images. The company expects the traffic flow for the application to be predictable.
Which solution will meet these requirements MOST cost-effectively?
- A . Use Amazon RDS for MySQL for the database. Store the product images in an Amazon S3 bucket.
- B . Use Amazon DynamoDB for the database. Store the product images in an Amazon S3 bucket.
- C . Use Amazon RDS for MySQL for the database. Store the product images in an Amazon Aurora MySQL database.
- D . Create three Amazon EC2 instances. Install MongoDB software on the instances to use as the database. Store the product images in an Amazon RDS for MySQL database with a Multi-AZ deployment.
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
- A . Use Amazon Redshift to load all the content into one place and run the SQL queries as needed
- B . Use Amazon CloudWatch Logs to store the logs Run SQL queries as needed from the Amazon CloudWatch console
- C . Use Amazon Athena directly with Amazon S3 to run the queries as needed
- D . Use AWS Glue to catalog the logs Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed
C
Explanation:
Amazon Athena can query the JSON log files directly in Amazon S3 using standard SQL. It is serverless and runs queries on demand, so it requires no changes to the existing architecture and has the least operational overhead.
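A quick sketch of running an on-demand query with boto3, assuming a hypothetical Athena database and table (app_logs_db.app_logs) and a hypothetical results bucket:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Simple ad hoc query against the JSON logs cataloged as a table.
execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM app_logs GROUP BY status",
    QueryExecutionContext={"Database": "app_logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(execution["QueryExecutionId"])
```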
A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner’s AWS account. The AMI is backed by Amazon Elastic Block Store (Amazon EBS) and uses a customer managed customer master key (CMK) to encrypt EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner’s AWS account?
- A . Make the encrypted AMI and snapshots publicly available. Modify the CMK’s key policy to allow the MSP Partner’s AWS account to use the key
- B . Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner’s AWS account only. Modify the CMK’s key policy to allow the MSP Partner’s AWS account to use the key.
- C . Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner’s AWS account only. Modify the CMK’s key policy to trust a new CMK that is owned by the MSP Partner for encryption.
- D . Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner’s AWS account. Encrypt the S3 bucket with a CMK that is owned by the MSP Partner Copy and launch the AMI in the MSP Partner’s AWS account.
B
Explanation:
Share the existing KMS key with the MSP Partner’s external account because it has already been used to encrypt the AMI’s EBS snapshots. https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
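A sketch of the two steps in answer B with boto3, using placeholder account, AMI, and key identifiers. The key policy can be edited directly as the explanation describes; this sketch instead uses a KMS grant, which is an equivalent way to let the external account use the key.

```python
import boto3

PARTNER_ACCOUNT_ID = "111122223333"  # hypothetical MSP Partner account
AMI_ID = "ami-0123456789abcdef0"     # hypothetical AMI
KEY_ARN = "arn:aws:kms:us-west-2:444455556666:key/example-key-id"  # placeholder

ec2 = boto3.client("ec2", region_name="us-west-2")
kms = boto3.client("kms", region_name="us-west-2")

# Share the AMI with the partner account only (no public access).
ec2.modify_image_attribute(
    ImageId=AMI_ID,
    LaunchPermission={"Add": [{"UserId": PARTNER_ACCOUNT_ID}]},
)

# Allow the partner account to use the customer managed key so it can
# decrypt the encrypted EBS snapshots behind the AMI.
kms.create_grant(
    KeyId=KEY_ARN,
    GranteePrincipal=f"arn:aws:iam::{PARTNER_ACCOUNT_ID}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)
```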
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
- A . Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
- B . Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
- C . Use an API Gateway authorizer to block any requests while the application processes an order.
- D . Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
B
Explanation:
To ensure that orders are processed in the order that they are received, the best solution is to use an Amazon SQS FIFO (First-In-First-Out) queue. This type of queue maintains the exact order in which messages are sent and received. In this case, the application can send information about new orders to an Amazon API Gateway REST API, which can then use an API Gateway integration to send a message to an Amazon SQS FIFO queue for processing. The queue can then be configured to invoke an AWS Lambda function to perform the necessary processing on each order. This ensures that orders are processed in the exact order in which they are received.
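A minimal sketch of the FIFO setup with boto3; the queue name and message body are placeholders. FIFO queue names must end in .fifo, and messages that share a MessageGroupId are delivered strictly in order.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Create the FIFO queue with content-based deduplication enabled.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)["QueueUrl"]

# The API Gateway integration would send messages like this one; using a
# single MessageGroupId preserves the order in which orders are received.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderId": "1001", "items": 3}',
    MessageGroupId="orders",
)
```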
A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket. However, the videos are large in their raw format.
Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the performance and scalability of the app while minimizing operational overhead.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Deploy Amazon CloudFront for content delivery and caching
- B . Use AWS DataSync to replicate the video files across AWS Regions in other S3 buckets
- C . Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
- D . Deploy an Auto Scaling group of Amazon EC2 instances in Local Zones for content delivery and caching
- E . Deploy an Auto Scaling group of Amazon EC2 Instances to convert the video files to more appropriate formats.
A, C
Explanation:
Understanding the Requirement: The mobile app captures and uploads raw video clips to S3, but users experience buffering and playback issues due to the large size of these videos.
Analysis of Options:
Amazon CloudFront: A content delivery network (CDN) that can cache and deliver content globally with low latency. It helps reduce buffering by delivering content from edge locations closer to the users.
AWS DataSync: Primarily used for data transfer and replication across AWS Regions, which does not directly address the video size and buffering issue.
Amazon Elastic Transcoder: A media transcoding service that can convert raw video files into formats and resolutions more suitable for streaming, reducing the size and improving playback performance.
EC2 Instances in Local Zones: While this could provide content delivery and caching, it involves more operational overhead compared to using CloudFront.
EC2 Instances for Transcoding: Involves setting up and maintaining infrastructure, leading to higher operational overhead compared to using Elastic Transcoder.
Best Combination of Solutions:
Deploy Amazon CloudFront: This optimizes the performance by caching content at edge locations, reducing latency and buffering for users.
Use Amazon Elastic Transcoder: This reduces the file size and converts videos into formats better suited for streaming on mobile devices.
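A rough sketch of submitting a transcoding job with boto3, assuming a hypothetical Elastic Transcoder pipeline (which defines the input and output buckets) and a placeholder preset ID for the desired streaming format:

```python
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

# Pipeline and preset IDs are placeholders; the pipeline points at the
# raw-upload bucket and an output bucket that CloudFront serves.
transcoder.create_job(
    PipelineId="1111111111111-abcde1",
    Input={"Key": "raw/clip-1234.mov"},
    Outputs=[
        {
            "Key": "streaming/clip-1234",
            "PresetId": "1351620000001-200010",  # example system preset ID
        }
    ],
)
```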
Reference: Amazon CloudFront
Amazon Elastic Transcoder
A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.
The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an S3 Lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation.
- B . Create an S3 Lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation.
- C . Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation.
- D . Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation.
C
Explanation:
Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still providing high availability and durability.
Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and reduces storage costs significantly.
Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures automatic management of the data lifecycle, moving files to a lower-cost storage class without manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is no longer needed.
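A minimal sketch of that lifecycle policy with boto3, using a placeholder bucket name; 4 years is expressed as 1,460 days.

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to Standard-IA after 30 days and delete it
# 1,460 days (4 years) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-generated-files",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-ia-then-delete",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 1460},
            }
        ]
    },
)
```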
Reference: Amazon S3 Storage Classes
S3 Lifecycle Configuration
A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the us-west-1 Region. The company manually backs up the workloads to create an image as needed.
In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the us-west-2 Region. The company wants no more than 24 hours of data loss on the EC2 instances. The company also wants to automate any backups of the EC2 instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Select TWO.)
- A . Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Copy the image on demand.
- B . Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.
- C . Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2 instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the backup data to us-west-2.
- D . Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to run twice daily.
- E . Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.
B, D
Explanation:
Option B suggests using an EC2-backed Amazon Machine Image (AMI) lifecycle policy to automate the backup process. By configuring the policy to run twice daily and specifying the copy to the us-west-2 Region, the company can ensure regular backups are created and copied to the alternate region.
Option D proposes using AWS Backup, which provides a centralized backup management solution. By creating a backup vault and backup plan based on tag values, the company can automate the backup process for the EC2 instances. The backup schedule can be set to run twice daily, and the destination for the copy can be defined as the us-west-2 Region.
Both options automate the backup process and include copying the backups to the us-west-2 Region, ensuring data resilience in the event of a disaster. These solutions minimize administrative effort by leveraging automated backup and copy mechanisms provided by AWS services.
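A sketch of option D with boto3, using placeholder vault names, account ID, IAM role, and tag key; the destination vault in us-west-2 must already exist.

```python
import boto3

backup = boto3.client("backup", region_name="us-west-1")

# Backup plan: run twice daily and copy each recovery point to us-west-2.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-twice-daily",
        "Rules": [
            {
                "RuleName": "twice-daily-with-copy",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 0/12 * * ? *)",  # every 12 hours
                "Lifecycle": {"DeleteAfterDays": 30},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:"
                            "backup-vault:dr-vault"
                        )
                    }
                ],
            }
        ],
    }
)

# Assign EC2 instances to the plan based on a tag value.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-ec2",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "Backup",
                "ConditionValue": "true",
            }
        ],
    },
)
```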