Practice Free SAP-C02 Exam Online Questions
A manufacturing company is building an inspection solution for its factory. The company has IP cameras at the end of each assembly line. The company has used Amazon SageMaker to train a machine learning (ML) model to identify common defects from still images.
The company wants to provide local feedback to factory workers when a defect is detected. The company must be able to provide this feedback even if the factory’s internet connectivity is down. The company has a local Linux server that hosts an API that provides local feedback to the workers.
How should the company deploy the ML model to meet these requirements?
- A . Set up an Amazon Kinesis video stream from each IP camera to AWS. Use Amazon EC2 instances to take still images of the streams. Upload the images to an Amazon S3 bucket. Deploy a SageMaker endpoint with the ML model. Invoke an AWS Lambda function to call the inference endpoint when new images are uploaded. Configure the Lambda function to call the local API when a defect is detected.
- B . Deploy AWS IoT Greengrass on the local server. Deploy the ML model to the Greengrass server. Create a Greengrass component to take still images from the cameras and run inference. Configure the component to call the local API when a defect is detected.
- C . Order an AWS Snowball device. Deploy a SageMaker endpoint with the ML model and an Amazon EC2 instance on the Snowball device. Take still images from the cameras. Run inference from the EC2 instance. Configure the instance to call the local API when a defect is detected.
- D . Deploy Amazon Monitron devices on each IP camera. Deploy an Amazon Monitron Gateway on premises. Deploy the ML model to the Amazon Monitron devices. Use Amazon Monitron health state alarms to call the local API from an AWS Lambda function when a defect is detected.
B
Explanation:
The company should use AWS IoT Greengrass to deploy the ML model to the local server and provide local feedback to the factory workers. AWS IoT Greengrass is a service that extends AWS cloud capabilities to local devices, allowing them to collect and analyze data closer to the source of information, react autonomously to local events, and communicate securely with each other on local networks. AWS IoT Greengrass also supports ML inference at the edge, enabling devices to run ML models locally without requiring internet connectivity. The other options are not correct because:
Setting up an Amazon Kinesis video stream from each IP camera to AWS would not work if the factory’s internet connectivity is down. It would also incur unnecessary costs and latency to stream video data to the cloud and back.
Ordering an AWS Snowball device would not be a scalable or cost-effective solution for deploying the ML model. AWS Snowball is a service that provides physical devices for data transfer and edge computing, but it is not designed for continuous operation or frequent updates.
Deploying Amazon Monitron devices on each IP camera would not work because Amazon Monitron is a service that monitors the condition and performance of industrial equipment using sensors and machine learning, not cameras.
Reference:
https://aws.amazon.com/greengrass/
https://docs.aws.amazon.com/greengrass/v2/developerguide/use-machine-learning-inference.html
https://aws.amazon.com/snowball/
https://aws.amazon.com/monitron/
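As an illustration of option B, the sketch below shows the kind of inference loop a custom Greengrass component might run on the local server. The camera URL, the local API route, and the detect_defect stub are assumptions for illustration only; a real component would load the SageMaker-trained model through a Greengrass ML inference component or an ML framework installed on the core device.

```python
# Hypothetical Greengrass component logic (not an official AWS sample).
# The camera URL, API route, and inference stub are placeholders.
import json
import time
import urllib.request

import cv2  # opencv-python, assumed to be installed on the Greengrass core device

CAMERA_URL = "rtsp://192.168.0.10/stream"      # placeholder IP camera stream
LOCAL_API = "http://localhost:8080/feedback"   # placeholder local feedback API


def detect_defect(frame) -> bool:
    """Stub for local inference with the SageMaker-trained model.

    In a real component this would run the model deployed to the Greengrass
    core and return True when a defect is found in the still image."""
    return False


def main() -> None:
    capture = cv2.VideoCapture(CAMERA_URL)
    while True:
        ok, frame = capture.read()
        if ok and detect_defect(frame):
            payload = json.dumps({"line": "assembly-1", "defect": True}).encode()
            request = urllib.request.Request(
                LOCAL_API,
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            # The call stays on the local network, so feedback still works
            # when the factory's internet connection is down.
            urllib.request.urlopen(request)
        time.sleep(1)


if __name__ == "__main__":
    main()
```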
A solutions architect is designing a solution to process events. The solution must have the ability to scale in and out based on the number of events that the solution receives. If a processing error occurs, the event must move into a separate queue for review.
Which solution will meet these requirements?
- A . Send event details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure an AWS Lambda function as a subscriber to the SNS topic to process the events. Add an on-failure destination to the function. Set an Amazon Simple Queue Service (Amazon SQS) queue as the target.
- B . Publish events to an Amazon Simple Queue Service (Amazon SQS) queue. Create an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to scale in and out based on the ApproximateAgeOfOldestMessage metric of the queue. Configure the application to write failed messages to a dead-letter queue.
- C . Write events to an Amazon DynamoDB table. Configure a DynamoDB stream for the table. Configure the stream to invoke an AWS Lambda function. Configure the Lambda function to process the events.
- D . Publish events to an Amazon EventBridge event bus. Create and run an application on an Amazon EC2 instance with an Auto Scaling group that is behind an Application Load Balancer (ALB). Set the ALB as the event bus target. Configure the event bus to retry events. Write messages to a dead-letter queue if the application cannot process the messages.
A
Explanation:
Amazon Simple Notification Service (Amazon SNS) is a fully managed pub/sub messaging service that enables users to send messages to multiple subscribers. Users can send event details to an Amazon SNS topic and configure an AWS Lambda function as a subscriber to the SNS topic to process the events. Lambda is a serverless compute service that runs code in response to events and automatically manages the underlying compute resources. Users can add an on-failure destination to the function and set an Amazon Simple Queue Service (Amazon SQS) queue as the target. Amazon SQS is a fully managed message queuing service that enables users to decouple and scale microservices, distributed systems, and serverless applications. This way, if a processing error occurs, the event will move into the separate queue for review.
Option B is incorrect because scaling an Amazon EC2 Auto Scaling group on the ApproximateAgeOfOldestMessage metric reacts only after a backlog has built up and requires the team to manage EC2 capacity, and the application itself must handle writing failed messages to the dead-letter queue. Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. Auto Scaling is a feature that helps users maintain application availability and allows them to scale their EC2 capacity up or down automatically according to conditions they define. However, for this use case, using SQS and EC2 will not take advantage of the serverless capabilities of Lambda and SNS.
Option C is incorrect because writing events to an Amazon DynamoDB table and configuring a DynamoDB stream for the table will not have the ability to move events into a separate queue for review if a processing error occurs. Amazon DynamoDB is a fully managed key-value and document database that delivers single-digit millisecond performance at any scale. DynamoDB Streams is a feature that captures data modification events in DynamoDB tables. Users can configure the stream to invoke a Lambda function, but option C does not include a separate queue for reviewing events that fail processing.
Option D is incorrect because publishing events to an Amazon EventBridge event bus and setting an Application Load Balancer (ALB) as the event bus target will not have the ability to move events into a separate queue for review if a processing error occurs. Amazon EventBridge is a serverless event bus service that makes it easy to connect applications with data from a variety of sources. An ALB is a load balancer that distributes incoming application traffic across multiple targets, such as EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances. Users can configure EventBridge to retry events, but they cannot configure an on-failure destination for the ALB.
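A minimal boto3 sketch of the wiring in option A is shown below. The function name, topic ARN, and queue ARN are placeholders, and it assumes the Lambda function has already been granted permission to be invoked by SNS (added separately with add_permission).

```python
# Sketch only: names and ARNs are placeholders, not from the question.
import boto3

lambda_client = boto3.client("lambda")
sns = boto3.client("sns")

# Send failed asynchronous invocations to the review queue.
lambda_client.put_function_event_invoke_config(
    FunctionName="process-events",
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:sqs:us-east-1:111122223333:failed-events"
        }
    },
)

# Fan events out from the SNS topic to the function.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:events",
    Protocol="lambda",
    Endpoint="arn:aws:lambda:us-east-1:111122223333:function:process-events",
)
```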
A media storage application uploads user photos to Amazon S3 for processing by AWS Lambda functions. Application state is stored in Amazon DynamoDB tables. Users are reporting that some uploaded photos are not being processed properly. The application developers trace the logs and find that Lambda is experiencing photo processing issues when thousands of users upload photos simultaneously. The issues are the result of Lambda concurrency limits and the performance of DynamoDB when data is saved.
Which combination of actions should a solutions architect take to increase the performance and reliability of the application? (Select TWO.)
- A . Evaluate and adjust the RCUs for the DynamoDB tables.
- B . Evaluate and adjust the WCUs for the DynamoDB tables.
- C . Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
- D . Add an Amazon Simple Queue Service (Amazon SQS) queue and reprocessing logic between Amazon S3 and the Lambda functions.
- E . Use S3 Transfer Acceleration to provide lower latency to users.
B,D
Explanation:
B: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.requests
D: https://aws.amazon.com/blogs/compute/robust-serverless-application-design-with-aws-lambda-dlq/
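For context on option D, the sketch below shows one way to place an SQS queue (with a dead-letter queue for reprocessing) between the S3 bucket and the consumers. Queue names, the bucket name, and the account ID are placeholders, and the queue policy that allows S3 to send messages is omitted for brevity.

```python
# Sketch only: names, ARNs, and the account ID are placeholders.
import json

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

dlq_url = sqs.create_queue(QueueName="photo-events-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: after 5 failed receives, messages move to the dead-letter queue.
sqs.create_queue(
    QueueName="photo-events",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)

# Have S3 publish object-created events to the main queue
# (requires a queue policy that allows the bucket to send messages).
s3.put_bucket_notification_configuration(
    Bucket="photo-uploads-example",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:111122223333:photo-events",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```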
A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company wants to use AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the company wants to automate the process of creating a new VPC and a transit gateway attachment.
Which combination of steps will meet these requirements? (Select TWO)
- A . From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager
- B . From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP
- C . Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID.
- D . Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit gateway service-linked role.
- E . From the management account, share the transit gateway with member accounts by using AWS Service Catalog
A,C
Explanation:
https://aws.amazon.com/blogs/mt/self-service-vpcs-in-aws-control-tower-using-aws-service-catalog/
https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-transitgatewayattachment.html
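A hedged boto3 sketch of the sharing step in option A is shown below; the transit gateway ARN and the organization ARN used as the principal are placeholders. The stack set for option C would be created separately (for example with CloudFormation service-managed permissions so new member accounts are covered automatically).

```python
# Sketch only: ARNs are placeholders.
import boto3

ram = boto3.client("ram")

# Share the transit gateway from the management account with the organization
# so member accounts can create VPC attachments to it.
ram.create_resource_share(
    name="shared-transit-gateway",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0123456789abcdef0"
    ],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
    allowExternalPrincipals=False,
)
```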
A company is migrating its blog platform to AWS. The company’s on-premises servers connect to AWS through an AWS Site-to-Site VPN connection. The blog content is updated several times a day by multiple authors and is served from a file share on a network-attached storage (NAS) server. The company needs to migrate the blog platform without delaying the content updates. The company has deployed Amazon EC2 instances across multiple Availability Zones to run the blog platform behind an Application Load Balancer. The company also needs to move 200 TB of archival data from its on-premises servers to Amazon S3 as soon as possible.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create a weekly cron job in Amazon EventBridge. Use the cron job to invoke an AWS Lambda function to update the EC2 instances from the NAS server.
- B . Configure an Amazon Elastic Block Store (Amazon EBS) Multi-Attach volume for the EC2 instances to share for content access. Write code to synchronize the EBS volume with the NAS server weekly.
- C . Mount an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers to act as the NAS server. Copy the blog data to the EFS file system. Mount the EFS file system to the EC2 instances to serve the content.
- D . Order an AWS Snowball Edge Storage Optimized device. Copy the static data artifacts to the device. Ship the device to AWS.
- E . Order an AWS Snowcone SSD device. Copy the static data artifacts to the device. Ship the device to AWS.
C,D
Explanation:
C is correct because Amazon EFS offers a scalable, elastic file system that can be mounted on both on-premises servers (via AWS Direct Connect or VPN) and EC2 instances. This allows real-time access to shared content with no migration downtime.
D is correct because AWS Snowball Edge Storage Optimized devices are designed for bulk offline data transfer, with up to 80 TB of usable capacity per device. Using these devices is a best practice for moving 200 TB of archival data when transfer speed and network bandwidth are constraints.
Reference: Amazon EFS: Mounting on-premises
AWS Snowball Edge: Overview
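To make option C concrete, here is a minimal boto3 sketch that provisions the shared EFS file system and one mount target; the subnet and security group IDs are placeholders. The on-premises servers would mount the same file system over the Site-to-Site VPN using NFS, and the EC2 instances would mount it in each Availability Zone.

```python
# Sketch only: subnet and security group IDs are placeholders.
import boto3

efs = boto3.client("efs")

# Shared, encrypted file system for the blog content.
fs = efs.create_file_system(
    CreationToken="blog-content",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone used by the EC2 instances.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
```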
A company has deployed applications to thousands of Amazon EC2 instances in an AWS account. A security audit discovers that several unencrypted Amazon EBS volumes are attached to the EC2 instances. The company’s security policy requires the EBS volumes to be encrypted.
The company needs to implement an automated solution to encrypt the EBS volumes. The solution also must prevent development teams from creating unencrypted EBS volumes.
Which solution will meet these requirements?
- A . Configure the AWS Config managed rule that identifies unencrypted EBS volumes. Configure an automatic remediation action. Associate an AWS Systems Manager Automation runbook that includes the steps to create a new encrypted EBS volume. Create an AWS KMS customer managed key. In the key policy, include a statement to deny the creation of unencrypted EBS volumes.
- B . Use AWS Systems Manager Fleet Manager to create a list of unencrypted EBS volumes. Create a Systems Manager Automation runbook that includes the steps to create a new encrypted EBS volume. Create an SCP to deny the creation of unencrypted EBS volumes.
- C . Use AWS Systems Manager Fleet Manager to create a list of unencrypted EBS volumes. Create a Systems Manager Automation runbook that includes the steps to create a new encrypted EBS volume. Modify the AWS account setting for EBS encryption to always encrypt new EBS volumes.
- D . Configure the AWS Config managed rule that identifies unencrypted EBS volumes. Configure an automatic remediation action. Associate an AWS Systems Manager Automation runbook that includes the steps to create a new encrypted EBS volume. Modify the AWS account setting for EBS encryption to always encrypt new EBS volumes.
A company has developed APIs that use Amazon API Gateway with Regional endpoints. The APIs call AWS Lambda functions that use API Gateway authentication mechanisms. After a design review, a solutions architect identifies a set of APIs that do not require public access.
The solutions architect must design a solution to make the set of APIs accessible only from a VPC. All APIs need to be called with an authenticated user.
Which solution will meet these requirements with the LEAST amount of effort?
- A . Create an internal Application Load Balancer (ALB). Create a target group. Select the Lambda function to call. Use the ALB DNS name to call the API from the VPC.
- B . Remove the DNS entry that is associated with the API in API Gateway. Create a hosted zone in Amazon Route 53. Create a CNAME record in the hosted zone. Update the API in API Gateway with the CNAME record. Use the CNAME record to call the API from the VPC.
- C . Update the API endpoint from Regional to private in API Gateway. Create an interface VPC endpoint in the VPC. Create a resource policy, and attach it to the API. Use the VPC endpoint to call the API from the VPC.
- D . Deploy the Lambda functions inside the VPC. Provision an EC2 instance, and install an Apache server. From the Apache server, call the Lambda functions. Use the internal CNAME record of the EC2 instance to call the API from the VPC.
C
Explanation:
This solution requires the least amount of effort because it only requires updating the API endpoint type to private in API Gateway and creating an interface VPC endpoint. Then create a resource policy and attach it to the API. This makes the API accessible only from the VPC while keeping the authentication mechanism intact.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/private-api-gateway-vpc-endpoint/
https://aws.amazon.com/api-gateway/features/
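As a sketch of option C, the snippet below creates the interface VPC endpoint for API Gateway and attaches a resource policy that limits the API to that endpoint. The VPC, subnet, security group, API ID, Region, and account number are placeholders, and the patch-operation paths follow the documented update-rest-api format.

```python
# Sketch only: IDs, the Region, and the account number are placeholders.
import json

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
apigw = boto3.client("apigateway", region_name="us-east-1")

# Interface VPC endpoint for the execute-api service.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.execute-api",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)["VpcEndpoint"]

# Switch the API endpoint type from Regional to private.
apigw.update_rest_api(
    restApiId="abc123stub",
    patchOperations=[
        {
            "op": "replace",
            "path": "/endpointConfiguration/types/REGIONAL",
            "value": "PRIVATE",
        }
    ],
)

# Resource policy that only allows invocations arriving through the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:111122223333:abc123stub/*",
            "Condition": {
                "StringEquals": {"aws:SourceVpce": endpoint["VpcEndpointId"]}
            },
        }
    ],
}

apigw.update_rest_api(
    restApiId="abc123stub",
    patchOperations=[
        {"op": "replace", "path": "/policy", "value": json.dumps(policy)}
    ],
)
```

After these calls, the API would still need to be redeployed to a stage for the changes to take effect.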
A startup company recently migrated a large ecommerce website to AWS. The website has experienced a 70% increase in sales. Software engineers are using a private GitHub repository to manage code. The DevOps team is using Jenkins for builds and unit testing. The engineers need to receive notifications for bad builds and zero downtime during deployments. The engineers also need to ensure any changes to production are seamless for users and can be rolled back in the event of a major issue.
The software engineers have decided to use AWS CodePipeline to manage their build and deployment process.
Which solution will meet these requirements?
- A . Use GitHub websockets to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
- B . Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
- C . Use GitHub websockets to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
- D . Use GitHub webhooks to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
B
Explanation:
GitHub Webhooks to Trigger CodePipeline:
Configure GitHub webhooks to trigger the AWS CodePipeline pipeline. This ensures that every code push to the repository automatically triggers the pipeline, initiating the build and deployment process.
Unit Testing with Jenkins and AWS CodeBuild:
Use Jenkins integrated with the AWS CodeBuild plugin to perform unit testing. Jenkins will manage the build process, and the results will be handled by CodeBuild.
Notifications for Bad Builds:
Configure Amazon SNS (Simple Notification Service) to send alerts for any failed builds. This keeps the engineering team informed of build issues immediately, allowing for quick resolutions.
Blue/Green Deployment with AWS CodeDeploy:
Utilize AWS CodeDeploy with a blue/green deployment strategy. This method reduces downtime and risk by running two identical production environments (blue and green). CodeDeploy shifts traffic between these environments, allowing you to test in the new environment (green) while the old environment (blue) remains live. If issues arise, you can quickly roll back to the previous environment.
This solution provides seamless, zero-downtime deployments and the ability to quickly roll back changes if necessary, fulfilling the requirements of the startup company.
Reference:
AWS DevOps Blog on Integrating Jenkins with AWS CodeBuild and CodeDeploy
Plain English Guide to AWS CodePipeline with GitHub
Jenkins Plugin for AWS CodePipeline
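As one hedged example of the webhook piece, the boto3 calls below register a GitHub webhook against an existing pipeline; the pipeline name, source action name, branch, and secret are placeholders. The blue/green portion would be configured separately on the CodeDeploy deployment group (deployment type BLUE_GREEN with traffic control).

```python
# Sketch only: pipeline, action, branch, and secret values are placeholders.
import boto3

codepipeline = boto3.client("codepipeline")

# Define a webhook that starts the pipeline's source action on each push.
codepipeline.put_webhook(
    webhook={
        "name": "github-push-webhook",
        "targetPipeline": "ecommerce-pipeline",
        "targetAction": "Source",
        "filters": [
            {"jsonPath": "$.ref", "matchEquals": "refs/heads/{Branch}"}
        ],
        "authentication": "GITHUB_HMAC",
        "authenticationConfiguration": {"SecretToken": "replace-with-secret"},
    }
)

# Register the webhook with GitHub so pushes trigger the pipeline.
codepipeline.register_webhook_with_third_party(webhookName="github-push-webhook")
```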
A company needs to implement disaster recovery for a critical application that runs in a single AWS Region. The application’s users interact with a web frontend that is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application writes to an Amazon RDS for MySQL DB instance. The application also outputs processed documents that are stored in an Amazon S3 bucket. The company’s finance team directly queries the database to run reports. During busy periods, these queries consume resources and negatively affect application performance.
A solutions architect must design a solution that will provide resiliency during a disaster. The solution must minimize data loss and must resolve the performance problems that result from the finance team’s queries.
Which solution will meet these requirements?
- A . Migrate the database to Amazon DynamoDB and use DynamoDB global tables. Instruct the finance team to query a global table in a separate Region. Create an AWS Lambda function to periodically synchronize the contents of the original S3 bucket to a new S3 bucket in the separate Region. Launch EC2 instances and create an ALB in the separate Region. Configure the application to point to the new S3 bucket.
- B . Launch additional EC2 instances that host the application in a separate Region. Add the additional instances to the existing ALB. In the separate Region, create a read replica of the RDS DB instance. Instruct the finance team to run queries against the read replica. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Configure the application to point to the new S3 bucket and to the newly promoted read replica.
- C . Create a read replica of the RDS DB instance in a separate Region. Instruct the finance team to run queries against the read replica. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.
- D . Create hourly snapshots of the RDS DB instance. Copy the snapshots to a separate Region. Add an Amazon ElastiCache cluster in front of the existing RDS database. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, restore the database from the latest RDS snapshot. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.
C
Explanation:
Implementing a disaster recovery strategy that minimizes data loss and addresses performance issues involves creating a read replica of the RDS DB instance in a separate region and directing the finance team’s queries to this replica. This solution alleviates the performance impact on the primary database. Using Amazon S3 Cross-Region Replication (CRR) ensures that processed documents are available in the disaster recovery region. In the event of a disaster, the read replica can be promoted to a standalone DB instance, and EC2 instances can be launched from pre-created AMIs to serve the web frontend, thereby ensuring resiliency and minimal data loss.
AWS Documentation on Amazon RDS Read Replicas, Amazon S3 Cross-Region Replication, and Amazon EC2 AMIs provides comprehensive guidance on implementing a robust disaster recovery solution. This approach is in line with AWS best practices for high availability and disaster recovery planning.
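A minimal boto3 sketch of the database portion of option C is shown below; the instance identifiers, ARN, instance class, and Regions are placeholders, and the S3 Cross-Region Replication and AMI copy steps are configured separately.

```python
# Sketch only: identifiers, the ARN, and Regions are placeholders.
import boto3

# Create the cross-Region read replica (call made against the DR Region).
rds_dr = boto3.client("rds", region_name="us-west-2")
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:app-db",
    DBInstanceClass="db.r6g.large",
)

# During a disaster, promote the replica to a standalone, writable instance.
rds_dr.promote_read_replica(DBInstanceIdentifier="app-db-replica")
```

The finance team would point its reporting queries at the replica's endpoint, which removes that load from the primary instance.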
An online magazine will launch its latest edition this month. This edition will be the first to be distributed globally. The magazine’s dynamic website currently uses an Application Load Balancer in front of the web tier, a fleet of Amazon EC2 instances for web and application servers, and Amazon Aurora MySQL. Portions of the website include static content and almost all traffic is read-only.
The magazine is expecting a significant spike in internet traffic when the new edition is launched.
Optimal performance is a top priority for the week following the launch.
Which combination of steps should a solutions architect take to reduce system response times for a global audience? (Choose two.)
- A . Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Deploy S3 buckets in cross-Region replication mode.
- B . Ensure the web and application tiers are each in Auto Scaling groups. Introduce an AWS Direct Connect connection. Deploy the web and application tiers in Regions across the world.
- C . Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Ensure all three of the application tiers (web, application, and database) are in private subnets.
- D . Use an Aurora global database for physical cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources. Deploy the web and application tiers in Regions across the world.
- E . Introduce Amazon Route 53 with latency-based routing and Amazon CloudFront distributions. Ensure the web and application tiers are each in Auto Scaling groups.
