Practice Free SAA-C03 Exam Online Questions
A developer is creating a serverless application that performs video encoding. The encoding process runs as background jobs and takes several minutes to encode each video. The process does not need to send an immediate result to users.
The developer is using Amazon API Gateway to manage an API for the application. The developer needs to run test invocations and request validations. The developer must distribute API keys to control access to the API.
Which solution will meet these requirements?
- A . Create an HTTP API. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the HTTP API. Use the Event invocation type to call the Lambda function.
- B . Create a REST API with the default endpoint type. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the REST API. Use the Event invocation type to call the Lambda function.
- C . Create an HTTP API. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the HTTP API. Use the RequestResponse invocation type to call the Lambda function.
- D . Create a REST API with the default endpoint type. Create an AWS Lambda function to handle the encoding jobs. Integrate the function with the REST API. Use the RequestResponse invocation type to call the Lambda function.
B
Explanation:
Background Jobs with Event Invocation Type:
The Event invocation type is asynchronous: Lambda queues the request and processes it in the background without sending an immediate result back through API Gateway. This is ideal for video encoding tasks that take several minutes.
REST API vs. HTTP API:
REST APIs support API keys, usage plans, request validation, and test invocations from the console; HTTP APIs do not support these features, or support them only partially.
Since the developer needs API keys and request validations, a REST API is the correct choice.
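As an illustration, API keys on a REST API are distributed through usage plans. Below is a minimal boto3 sketch, assuming an already-deployed REST API; the API ID, stage name, key name, and throttle limits are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Create an API key to hand out to a client.
key = apigw.create_api_key(name="encoding-client-key", enabled=True)

# A usage plan ties keys to a deployed API stage and can throttle requests.
plan = apigw.create_usage_plan(
    name="encoding-plan",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholder API/stage
    throttle={"rateLimit": 10.0, "burstLimit": 20},
)

# Attach the key to the plan; callers must send it in the x-api-key header.
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")
```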
Integration with Lambda:
AWS Lambda integrates seamlessly with REST APIs, and using the Event invocation type ensures asynchronous processing.
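For reference, here is a minimal sketch of the Event invocation type itself, calling Lambda directly with boto3 (the function name and payload are illustrative). With API Gateway, the same asynchronous behavior is requested by setting the X-Amz-Invocation-Type: Event header on a non-proxy Lambda integration.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# InvocationType="Event" queues the request for asynchronous execution:
# Lambda returns HTTP 202 immediately instead of waiting for the
# encoding job to finish.
response = lambda_client.invoke(
    FunctionName="video-encoder",  # placeholder function name
    InvocationType="Event",
    Payload=json.dumps({"video_id": "example-123"}),
)
print(response["StatusCode"])  # 202 for asynchronous invocations
```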
Incorrect Options Analysis:
Option A: HTTP API lacks full support for API keys and validation.
Options C and D: The RequestResponse invocation type is synchronous and waits for an immediate response, which is unsuitable for background jobs.
Reference: AWS Lambda Invocation Types
Amazon API Gateway REST APIs
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
- A . Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
- B . Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
- C . Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
- D . Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
C
Explanation:
Static content can be cached at CloudFront edge locations from the S3 origin, while the dynamic content served by the EC2 instances behind the ALB gains performance from AWS Global Accelerator, which has the ALB and the CloudFront distribution as its endpoints. A custom domain name (a Route 53 alias record) points to the accelerator DNS name and serves as the single endpoint for the web application.
Reference: https://aws.amazon.com/blogs/networking-and-content-delivery/improving-availability-and-performance-for-application-load-balancers-using-one-click-integration-with-aws-global-accelerator/
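For the final step of option C, here is a minimal sketch of pointing a custom subdomain at the accelerator's DNS name with Route 53. The hosted zone ID, domain name, and accelerator DNS name are placeholders; an apex domain would use an alias record instead of a CNAME.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",  # placeholder custom domain
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [
                        # Placeholder accelerator DNS name.
                        {"Value": "a1234567890abcdef.awsglobalaccelerator.com"}
                    ],
                },
            }
        ]
    },
)
```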
A marketing team wants to build a campaign for an upcoming multi-sport event. The team has news reports from the past five years in PDF format. The team needs a solution to extract insights about the content and the sentiment of the news reports. The solution must use Amazon Textract to process the news reports.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Provide the extracted insights to Amazon Athena for analysis. Store the extracted insights and analysis in an Amazon S3 bucket.
- B . Store the extracted insights in an Amazon DynamoDB table. Use Amazon SageMaker to build a sentiment model.
- C . Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon S3 bucket.
- D . Store the extracted insights in an Amazon S3 bucket. Use Amazon QuickSight to visualize and analyze the data.
C
Explanation:
Amazon Textract can extract text from the PDFs, and Amazon Comprehend is the most suitable service to analyze the extracted text for sentiment and insights. Comprehend offers a fully managed, low-operational-overhead solution for analyzing text data. The results can then be stored in an Amazon S3 bucket, ensuring scalability and easy access.
Option A: Athena is for querying structured data and is not suitable for sentiment analysis.
Option B: SageMaker adds complexity and is not necessary when Comprehend can handle sentiment analysis natively.
Option D: QuickSight is used for visualization and analytics, but it does not provide sentiment analysis.
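Here is a minimal sketch of the Textract-to-Comprehend flow for a PDF stored in S3. The bucket and key are placeholders, and the polling loop is simplified; a production flow would use Textract's SNS completion notifications instead of polling.

```python
import time
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

# Multi-page PDFs require Textract's asynchronous API.
job = textract.start_document_text_detection(
    DocumentLocation={
        "S3Object": {"Bucket": "news-reports", "Name": "report-2021.pdf"}
    }
)

# Poll until the job finishes.
while True:
    result = textract.get_document_text_detection(JobId=job["JobId"])
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

text = " ".join(
    block["Text"]
    for block in result.get("Blocks", [])
    if block["BlockType"] == "LINE"
)

# Comprehend accepts up to 5,000 bytes per DetectSentiment call; the
# character slice below approximates that limit for the sketch.
sentiment = comprehend.detect_sentiment(Text=text[:5000], LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])
```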
Reference: Amazon Comprehend
Amazon Textract
A company stores customer data in a multitenant Amazon S3 bucket. Each customer’s data is stored in a prefix that is unique to the customer. The company needs to migrate data for specific customers to a new, dedicated S3 bucket that is in the same AWS Region as the source bucket. The company must preserve object metadata such as creation date and version IDs.
After the migration is finished, the company must delete the source data for the migrated customers from the original multitenant S3 bucket.
Which combination of solutions will meet these requirements with the LEAST overhead? (Select THREE.)
- A . Create a new S3 bucket as a destination bucket. Enable versioning on the new bucket.
- B . Use S3 batch operations to copy objects from the specified prefixes to the destination bucket.
- C . Use the S3 CopyObject API, and create a script to copy data to the destination S3 bucket.
- D . Configure S3 Same-Region Replication (SRR) to replicate existing data from the specified prefixes in the source bucket to the destination bucket.
- E . Configure AWS DataSync to migrate data from the specified prefixes in the source bucket to the destination bucket.
- F . Use an S3 Lifecycle policy to delete objects from the source bucket after the data is migrated to the destination bucket.
A, B, F
Explanation:
The combination of these solutions provides an efficient and automated way to migrate data while preserving metadata and ensuring cleanup:
Create a new S3 bucket with versioning enabled (Option A) to preserve object metadata like version IDs during migration.
Use S3 Batch Operations (Option B) to efficiently copy data from specific prefixes in the source bucket to the destination bucket, ensuring minimal overhead.
Use an S3 Lifecycle policy (Option F) to automatically delete the data from the source bucket after it has been migrated, reducing manual intervention.
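For the cleanup in option F, here is a minimal sketch of a lifecycle rule scoped to a migrated customer's prefix; the bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="multitenant-source-bucket",  # placeholder source bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-migrated-customer-a",
                "Filter": {"Prefix": "customer-a/"},  # placeholder prefix
                "Status": "Enabled",
                # Expire current object versions...
                "Expiration": {"Days": 1},
                # ...and permanently delete noncurrent versions, since the
                # source bucket holds versioned objects.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            }
        ]
    },
)
```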
Option C (CopyObject API): This approach would require more manual scripting and effort.
Option D (Same-Region Replication): SRR is designed for ongoing replication, not for one-time migrations.
Option E (DataSync): DataSync adds more complexity than necessary for this task.
Reference: S3 Batch Operations
S3 Lifecycle Policies
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.
The application must be publicly available over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create a public Network Load Balancer. Specify the application target group.
- B . Create a Gateway Load Balancer. Specify the application target group.
- C . Create a public Application Load Balancer. Specify the application target group.
- D . Create a second target group. Add Elastic IP addresses to the EC2 instances.
- E . Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.
C, E
Explanation:
C and E are the correct answers because they allow the company to create a public endpoint for its web application that supports session affinity (sticky sessions) and has a WAF applied for additional security. By creating a public Application Load Balancer, the company can distribute incoming traffic across multiple EC2 instances in an Auto Scaling group and specify the application target group. By creating a web ACL in AWS WAF and associating it with the Application Load Balancer, the company can protect its web application from common web exploits. By enabling session stickiness on the Application Load Balancer, the company can ensure that subsequent requests from a user during a session are routed to the same target.
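Here is a minimal sketch of enabling sticky sessions on the ALB's target group with boto3; the target group ARN and cookie duration are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    # Placeholder target group ARN.
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/app/0123456789abcdef",
    Attributes=[
        # Route each user's requests to the same target for the lifetime
        # of the load balancer-generated cookie.
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```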
Reference: Application Load Balancers
AWS WAF
Target Groups for Your Application Load Balancers
How Application Load Balancer Works with Sticky Sessions
A company uses multiple vendors to distribute digital assets that are stored in Amazon S3 buckets. The company wants to ensure that its vendor AWS accounts have the minimum access that is needed to download objects in these S3 buckets.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Design a bucket policy that has anonymous read permissions and permissions to list all buckets.
- B . Design a bucket policy that gives read-only access to users. Specify IAM entities as principals.
- C . Create a cross-account IAM role that has a read-only access policy specified for the IAM role.
- D . Create a user policy and vendor user groups that give read-only access to vendor users.
C
Explanation:
A cross-account IAM role is a way to grant users from one AWS account access to resources in another AWS account. The cross-account IAM role can have a read-only access policy attached to it, which allows the users to download objects from the S3 buckets without modifying or deleting them.
The cross-account IAM role also reduces the operational overhead of managing multiple IAM users and policies in each account. The cross-account IAM role meets all the requirements of the question, while the other options do not.
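Here is a minimal sketch of such a cross-account role; the vendor account ID, bucket name, and role name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: lets the vendor account assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # vendor account
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="VendorReadOnlyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions policy: read-only access to the asset bucket.
read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::digital-assets-bucket",
            "arn:aws:s3:::digital-assets-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="VendorReadOnlyRole",
    PolicyName="S3ReadOnly",
    PolicyDocument=json.dumps(read_policy),
)
```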
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html
https://aws.amazon.com/blogs/storage/setting-up-cross-account-amazon-s3-access-with-s3-access-points/
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2 instance in a single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This application stores the details in a PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?
- A . Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Create another Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
- B . Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
- C . Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
- D . Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Create a third Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
B
Explanation:
Migrating the queue to Amazon MQ removes the overhead of managing the message broker yourself, which rules out options C and D.
Deciding between A and B means choosing between a Multi-AZ Auto Scaling group of EC2 instances and a Multi-AZ deployment of Amazon RDS for PostgreSQL. The RDS option has less operational overhead because the required database tools and software are provided as a managed service; consider, for instance, the effort needed to add a read replica to a self-managed database versus RDS.
Reference:
https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html
https://aws.amazon.com/rds/postgresql/
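For the database piece of option B, here is a minimal sketch of a Multi-AZ RDS for PostgreSQL instance; the identifier, instance class, and sizing are placeholders.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",  # placeholder identifier
    Engine="postgres",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,  # let RDS store the password in Secrets Manager
    # Multi-AZ provisions a synchronous standby replica in a second
    # Availability Zone and fails over automatically during an outage.
    MultiAZ=True,
)
```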
A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being accessed or are rarely accessed.
Which solution will accomplish this goal with the LEAST operational overhead?
- A . Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.
- B . Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.
- C . Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with Amazon Athena.
- D . Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon CloudWatch Logs.
A
Explanation:
S3 Storage Lens is a fully managed S3 storage analytics solution that provides a comprehensive view of object storage usage, activity trends, and recommendations to optimize costs. The Storage Lens dashboard's advanced activity metrics (such as request counts) let you identify buckets that are rarely or never accessed across your account, with no additional infrastructure to deploy or logs to analyze.
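Here is a minimal sketch of enabling the advanced tier's activity metrics with boto3; the account ID and configuration ID are placeholders, and advanced metrics incur an additional charge.

```python
import boto3

s3control = boto3.client("s3control")

s3control.put_storage_lens_configuration(
    ConfigId="activity-metrics",
    AccountId="123456789012",  # placeholder account ID
    StorageLensConfiguration={
        "Id": "activity-metrics",
        "IsEnabled": True,
        "AccountLevel": {
            # Advanced activity metrics (e.g., request counts) reveal
            # buckets with little or no access.
            "ActivityMetrics": {"IsEnabled": True},
            "BucketLevel": {"ActivityMetrics": {"IsEnabled": True}},
        },
    },
)
```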
A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates to the five businesses that the company owns. The company’s research and development (R&D) business is separating from the company and will need its own organization. A solutions architect creates a separate new management account for this purpose.
What should the solutions architect do next in the new management account?
- A . Have the R&D AWS account be part of both organizations during the transition.
- B . Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization.
- C . Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account to the new R&D AWS account.
- D . Have the R&D AWS account join the new organization. Make the new management account a member of the prior organization.
B
Explanation:
Inviting the R&D AWS account to join the new organization after it has left the prior organization is the correct sequence because an AWS account can belong to only one organization at a time. Once the R&D account leaves the prior organization, the new management account can invite it, ensuring there is no overlap or conflict between the two organizations. The R&D AWS account can accept or decline the invitation; once it joins, it is subject to any policies and controls applied by the new organization.
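Here is a minimal sketch of the hand-off. The account ID is a placeholder, and each call must run with credentials for the account noted in the comment; the calls are shown together only to illustrate the order.

```python
import boto3

# Run from the R&D account: leave the prior organization first, because
# an account can belong to only one organization at a time.
boto3.client("organizations").leave_organization()

# Run from the new management account: invite the R&D account.
handshake = boto3.client("organizations").invite_account_to_organization(
    Target={"Id": "123456789012", "Type": "ACCOUNT"}  # placeholder account ID
)

# Run from the R&D account: accept the invitation (handshake) to join.
boto3.client("organizations").accept_handshake(
    HandshakeId=handshake["Handshake"]["Id"]
)
```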
Reference: Inviting an AWS Account to Join Your Organization
Leaving an Organization as a Member Account