Practice Free SAP-C02 Exam Online Questions
A video processing company has an application that downloads images from an Amazon S3 bucket, processes the images, stores a transformed image in a second S3 bucket, and updates metadata about the image in an Amazon DynamoDB table. The application is written in Node.js and runs as an AWS Lambda function. The Lambda function is invoked when a new image is uploaded to Amazon S3.
The application ran without incident for a while. However, the size of the images has grown significantly. The Lambda function is now failing frequently with timeout errors. The function timeout is set to its maximum value. A solutions architect needs to refactor the application’s architecture to prevent invocation failures. The company does not want to manage the underlying infrastructure.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
- A . Modify the application deployment by building a Docker image that contains the application code. Publish the image to Amazon Elastic Container Registry (Amazon ECR).
- B . Create a new Amazon Elastic Container Service (Amazon ECS) task definition with a compatibility type of AWS Fargate. Configure the task definition to use the new image in Amazon Elastic Container Registry (Amazon ECR). Adjust the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.
- C . Create an AWS Step Functions state machine with a Parallel state to invoke the Lambda function. Increase the provisioned concurrency of the Lambda function.
- D . Create a new Amazon Elastic Container Service (Amazon ECS) task definition with a compatibility type of Amazon EC2. Configure the task definition to use the new image in Amazon Elastic Container Registry (Amazon ECR). Adjust the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.
- E . Modify the application to store images on Amazon Elastic File System (Amazon EFS) and to store metadata on an Amazon RDS DB instance. Adjust the Lambda function to mount the EFS file share.
A,B
Explanation:
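Answers A and B move the long-running image processing out of Lambda and into AWS Fargate, which has no 15-minute execution limit and still requires no infrastructure management; the Lambda function shrinks to a thin trigger that starts an ECS task for each new object. A minimal sketch of that trigger, with hypothetical cluster, task-definition, container, and subnet names:

```python
# Hypothetical sketch: a Lambda handler that hands long-running image
# processing off to a Fargate task instead of doing the work itself.
# The cluster, task definition, container name, and subnet are assumptions.
CLUSTER = "image-processing-cluster"      # assumed ECS cluster name
TASK_DEFINITION = "image-transform:1"     # assumed task definition
CONTAINER_NAME = "transformer"            # assumed container name


def build_run_task_params(s3_event):
    """Build the ecs.run_task(**params) input for the first S3 record."""
    record = s3_event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {
        "cluster": CLUSTER,
        "launchType": "FARGATE",
        "taskDefinition": TASK_DEFINITION,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": ["subnet-aaaa1111"],   # placeholder subnet
                "assignPublicIp": "DISABLED",
            }
        },
        # Pass the object location to the container as environment overrides
        "overrides": {
            "containerOverrides": [{
                "name": CONTAINER_NAME,
                "environment": [
                    {"name": "SOURCE_BUCKET", "value": bucket},
                    {"name": "SOURCE_KEY", "value": key},
                ],
            }]
        },
    }


def handler(event, context=None):
    params = build_run_task_params(event)
    # In the real function: boto3.client("ecs").run_task(**params)
    return params
```

The container then downloads the image, transforms it, writes the result to the second bucket, and updates DynamoDB, with no invocation timeout to hit.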
A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company wants to use AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the company wants to automate the process of creating a new VPC and a transit gateway attachment.
Which combination of steps will meet these requirements? (Select TWO.)
- A . From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager.
- B . From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP.
- C . Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID.
- D . Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit gateway service-linked role.
- E . From the management account, share the transit gateway with member accounts by using AWS Service Catalog
A,C
Explanation:
https://aws.amazon.com/blogs/mt/self-service-vpcs-in-aws-control-tower-using-aws-service-catalog/
https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-transitgatewayattachment.html
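The stack set from answer C deploys the same template into each new member account: a VPC plus an AWS::EC2::TransitGatewayAttachment that references the transit gateway shared via AWS RAM. A hedged sketch of that template body, built as a Python dict with placeholder CIDRs:

```python
# Hypothetical sketch of the template a CloudFormation stack set could
# deploy into each new member account: one VPC plus a transit gateway VPC
# attachment. CIDR blocks are placeholders; the resource types and property
# names follow the AWS::EC2::TransitGatewayAttachment resource reference.
import json


def build_member_template():
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {
            # The transit gateway ID, shared with the member account via RAM
            "TransitGatewayId": {"Type": "String"}
        },
        "Resources": {
            "MemberVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": "10.1.0.0/16"},
            },
            "MemberSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "VpcId": {"Ref": "MemberVpc"},
                    "CidrBlock": "10.1.0.0/24",
                },
            },
            "TgwAttachment": {
                "Type": "AWS::EC2::TransitGatewayAttachment",
                "Properties": {
                    "TransitGatewayId": {"Ref": "TransitGatewayId"},
                    "VpcId": {"Ref": "MemberVpc"},
                    "SubnetIds": [{"Ref": "MemberSubnet"}],
                },
            },
        },
    }


template_json = json.dumps(build_member_template(), indent=2)
```

With automatic deployment enabled on the stack set, creating a member account in the targeted OU provisions the VPC and attachment without further manual steps.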
A company is planning to migrate an application from on premises to the AWS Cloud. The company will begin the migration by moving the application's underlying data storage to AWS. The application data is stored on a shared file system on premises, and the application servers connect to the shared file system through SMB.
A solutions architect must implement a solution that uses an Amazon S3 bucket for shared storage. Until the application is fully migrated and its code is rewritten to use native Amazon S3 APIs, the application must continue to have access to the data through SMB. The solutions architect must migrate the application data to AWS to its new location while still allowing the on-premises application to access the data.
Which solution will meet these requirements?
- A . Create a new Amazon FSx for Windows File Server file system. Configure AWS DataSync with one location for the on-premises file share and one location for the new Amazon FSx file system. Create a new DataSync task to copy the data from the on-premises file share location to the Amazon FSx file system.
- B . Create an S3 bucket for the application. Copy the data from the on-premises storage to the S3 bucket.
- C . Deploy an AWS Server Migration Service (AWS SMS) VM to the on-premises environment. Use AWS SMS to migrate the file storage server from on premises to an Amazon EC2 instance.
- D . Create an S3 bucket for the application. Deploy a new AWS Storage Gateway file gateway on an on-premises VM. Create a new file share that stores data in the S3 bucket and is associated with the file gateway. Copy the data from the on-premises storage to the new file gateway endpoint.
D
Explanation:
Create an S3 Bucket:
Log in to the AWS Management Console and navigate to Amazon S3.
Create a new S3 bucket that will serve as the destination for the application data.
Deploy AWS Storage Gateway:
Download and deploy the AWS Storage Gateway virtual machine (VM) in your on-premises environment. The VM can be deployed on VMware ESXi, Microsoft Hyper-V, or Linux KVM.
Configure the File Gateway:
Configure the deployed Storage Gateway as a file gateway. This will enable it to present Amazon S3 buckets as SMB file shares to your on-premises applications.
Create a New File Share:
Within the Storage Gateway configuration, create a new file share that is associated with the S3 bucket you created earlier. This file share will use the SMB protocol, allowing your on-premises applications to access the S3 bucket as if it were a local SMB file share.
Copy Data to the File Gateway:
Use your preferred method (such as robocopy, rsync, or similar tools) to copy data from the on-premises storage to the newly created file gateway endpoint. This data will be stored in the S3 bucket, maintaining accessibility through SMB.
Ensure Secure and Efficient Data Transfer:
AWS Storage Gateway ensures that all data in transit is encrypted using TLS, providing secure data transfer to AWS. It also provides local caching for frequently accessed data, improving access performance for on-premises applications.
This approach allows your existing on-premises applications to continue accessing data via SMB while leveraging the scalability and durability of Amazon S3.
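The copy step above can be sketched as follows, assuming the new SMB file share has already been mounted locally (the mount point here is a placeholder). With the share mounted, an ordinary recursive copy is all it takes to land the data in the S3 bucket:

```python
# Hypothetical sketch of the bulk-copy step: walk the on-premises source
# tree and copy every file onto the mounted file-gateway share, preserving
# the directory layout. Paths are placeholders; in practice tools such as
# robocopy or rsync do the same job with retries and logging.
import shutil
from pathlib import Path


def sync_to_gateway(source_dir, share_mount):
    """Copy every file under source_dir to the mounted share.

    Returns the list of destination paths that were written."""
    copied = []
    src = Path(source_dir)
    dst = Path(share_mount)
    for path in src.rglob("*"):
        if path.is_file():
            target = dst / path.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # preserves timestamps
            copied.append(str(target))
    return copied
```

Everything written to the mount point is uploaded to the S3 bucket by the gateway, while the application keeps using its existing SMB path.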
Reference:
AWS Storage Gateway Overview
AWS DataSync and Storage Gateway Hybrid Architecture
AWS S3 File Gateway Details
A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances.
To meet regulatory and business requirements, the company must make the following changes for data backups:
* Backups must be retained based on custom daily, weekly, and monthly requirements.
* Backups must be replicated to at least one other AWS Region immediately after capture.
* The backup solution must provide a single source of backup status across the AWS environment.
* The backup solution must send immediate notifications upon failure of any resource backup.
Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Select THREE.)
- A . Create an AWS Backup plan with a backup rule for each of the retention requirements.
- B . Configure an AWS backup plan to copy backups to another Region.
- C . Create an AWS Lambda function to replicate backups to another Region and send notification if a failure occurs.
- D . Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.
- E . Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.
- F . Set up RDS snapshots on each database.
A,B,D
Explanation:
Cross region with AWS Backup: https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html
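Answers A and B come together in a single AWS Backup plan: one rule per retention requirement, each with a copy action targeting a vault in the second Region. A minimal sketch of the create_backup_plan input, with hypothetical vault names, schedules, and retention periods:

```python
# Hypothetical sketch of the create_backup_plan input: one rule per custom
# retention requirement, each copying its recovery point to a vault in a
# second Region immediately after capture. Vault names, the destination ARN,
# cron schedules, and retention day counts are all assumptions.
DEST_VAULT_ARN = "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"

RETENTION_RULES = [
    # (rule name, schedule, days to retain)
    ("daily",   "cron(0 5 * * ? *)",  35),
    ("weekly",  "cron(0 5 ? * 1 *)",  90),
    ("monthly", "cron(0 5 1 * ? *)", 365),
]


def build_backup_plan(rules=RETENTION_RULES):
    return {
        "BackupPlan": {
            "BackupPlanName": "regulatory-backups",
            "Rules": [
                {
                    "RuleName": name,
                    "TargetBackupVaultName": "primary-vault",
                    "ScheduleExpression": schedule,
                    "Lifecycle": {"DeleteAfterDays": keep_days},
                    # Copy to the second Region as soon as the backup completes
                    "CopyActions": [
                        {"DestinationBackupVaultArn": DEST_VAULT_ARN}
                    ],
                }
                for name, schedule, keep_days in rules
            ],
        }
    }
```

Because EC2, EFS, and RDS resources can all be assigned to the same plan, AWS Backup also gives the single cross-service status view the requirements ask for, with SNS (answer D) handling failure notifications.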
A company that has multiple AWS accounts is using AWS Organizations. The company’s AWS accounts host VPCs, Amazon EC2 instances, and containers.
The company’s compliance team has deployed a security tool in each VPC where the company has deployments. The security tools run on EC2 instances and send information to the AWS account that is dedicated for the compliance team. The company has tagged all the compliance-related resources with a key of “costCenter” and a value of “compliance”.
The company wants to identify the cost of the security tools that are running on the EC2 instances so that the company can charge the compliance team’s AWS account. The cost calculation must be as accurate as possible.
What should a solutions architect do to meet these requirements?
- A . In the management account of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the costCenter tagged resources.
- B . In the member accounts of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Schedule a monthly AWS Lambda function to retrieve the reports and calculate the total cost for the costCenter tagged resources.
- C . In the member accounts of the organization, activate the costCenter user-defined tag. From the management account, schedule a monthly AWS Cost and Usage Report. Use the tag breakdown in the report to calculate the total cost for the costCenter tagged resources.
- D . Create a custom report in the organization view in AWS Trusted Advisor. Configure the report to generate a monthly billing summary for the costCenter tagged resources in the compliance team’s AWS account.
A
Explanation:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/configurecostallocreport.html
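Once the tag is activated as a cost allocation tag, the chargeback number can be computed straight from the Cost and Usage Report. A sketch, assuming the standard CUR column naming (resourceTags/user:&lt;key&gt; and lineItem/UnblendedCost; verify these against your own report definition):

```python
# Hypothetical sketch of totaling the compliance team's cost from a Cost
# and Usage Report CSV. The column names follow the usual CUR convention,
# but they depend on the report definition, so treat them as assumptions.
import csv
import io

TAG_COLUMN = "resourceTags/user:costCenter"
COST_COLUMN = "lineItem/UnblendedCost"


def total_tagged_cost(report_text, tag_value="compliance"):
    """Sum the unblended cost of every line item tagged with tag_value."""
    total = 0.0
    for row in csv.DictReader(io.StringIO(report_text)):
        if row.get(TAG_COLUMN) == tag_value:
            total += float(row[COST_COLUMN])
    return round(total, 2)
```

Running this over each monthly report yields the amount to charge back to the compliance team's account.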
A company stores data on an Amazon RDS for PostgreSQL DB instance in a private subnet in an AWS database account. Applications that are deployed in different VPCs access this data from different AWS accounts.
The company needs to manage the number of active connections to the DB instance. Communication between all accounts and the database account must be private and must not travel across the internet. The solution must be scalable to accommodate more consumer accounts in the future.
Which solution will meet these requirements?
- A . Connect all the VPCs in all the accounts by using a transit gateway. Configure a NAT gateway in a public subnet. Route traffic from the NAT gateway through the transit gateway to the DB instance.
- B . Create an RDS proxy in the AWS database account. Create a proxy endpoint in the private subnet. Configure AWS PrivateLink with a Network Load Balancer to provide access to the DB instance.
- C . Create a VPC peering connection between the VPC that contains the DB instance and each VPC from the other accounts. Configure an Application Load Balancer to provide access to the DB instance through the peering connection.
- D . Create a VPC peering connection between the VPC that contains the DB instance and each VPC from the other accounts. Configure a NAT gateway in a public subnet to route traffic to the DB instance.
B
Explanation:
There are three core requirements: reduce/manage active database connections, keep communication private without internet exposure, and scale to many consumer accounts over time.
To manage and pool database connections to an Amazon RDS for PostgreSQL DB instance, the AWS-managed service designed for this purpose is Amazon RDS Proxy. RDS Proxy maintains a pool of database connections and multiplexes application connections onto fewer database connections. This reduces connection overhead on the database, improves resiliency during failovers, and helps control the number of active connections that reach the DB instance.
Next, the connectivity must be private across accounts and scalable as more consumer accounts are added. AWS PrivateLink provides scalable, private connectivity between VPCs and services across accounts without requiring VPC peering, transitive routing, or exposing traffic to the public internet. With PrivateLink, the database account can publish an endpoint service backed by a Network Load Balancer, and consumer accounts create interface endpoints in their VPCs to connect privately to that service. This is operationally scalable because adding new consumer accounts does not require managing a growing mesh of VPC peering relationships or complex route propagation; each consumer adds an interface endpoint.
Option B is the only option that addresses connection management by introducing RDS Proxy and uses PrivateLink to provide private, cross-account, scalable connectivity. The proxy endpoint being in private subnets aligns with the requirement that traffic stays private.
Option A is incorrect because a NAT gateway in a public subnet is used for outbound internet access from private subnets. It is not needed for private cross-account database access and introduces unnecessary public subnet components. Also, NAT gateways do not manage database connections. Transit gateway can provide private connectivity, but it does not address connection pooling, and the NAT gateway component is not appropriate for the stated requirement of avoiding internet exposure.
Option C is incorrect because an Application Load Balancer is not used to proxy raw PostgreSQL database traffic. PostgreSQL uses TCP, and ALB is primarily for HTTP/HTTPS and higher-layer routing. Also, VPC peering to each consumer VPC does not scale well as the number of accounts grows, and it creates operational overhead to manage many peering connections and route tables. It also does not manage DB connections.
Option D is incorrect because it uses VPC peering and NAT gateway. NAT gateway is again not the correct mechanism for private database access and does not provide connection pooling. VPC peering per consumer is not scalable and increases operational overhead.
Therefore, using RDS Proxy to manage connections and AWS PrivateLink (via an NLB-backed endpoint service) to provide private, scalable cross-account access is the correct solution.
References:
AWS documentation on Amazon RDS Proxy for connection pooling and managing database connections for Amazon RDS databases.
AWS documentation on AWS PrivateLink for private, scalable cross-account access to services through interface VPC endpoints and endpoint services backed by Network Load Balancers.
AWS guidance contrasting PrivateLink with VPC peering for scalability and operational simplicity in multi-account, multi-VPC architectures.
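The PrivateLink side of option B comes down to two EC2 API calls: creating an endpoint service backed by the NLB, then allow-listing each consumer account. A hedged sketch of their inputs, with placeholder ARNs and IDs:

```python
# Hypothetical sketch of the two inputs used to expose the RDS Proxy
# endpoint through AWS PrivateLink. The NLB ARN, service ID, and account
# ID are placeholders.
def build_endpoint_service_params(nlb_arn):
    # ec2.create_vpc_endpoint_service_configuration(**params)
    return {
        "NetworkLoadBalancerArns": [nlb_arn],
        "AcceptanceRequired": True,  # review each consumer connection request
    }


def build_allow_principal_params(service_id, consumer_account_id):
    # ec2.modify_vpc_endpoint_service_permissions(**params)
    # Run once per new consumer account -- no peering mesh to maintain.
    return {
        "ServiceId": service_id,
        "AddAllowedPrincipals": [
            f"arn:aws:iam::{consumer_account_id}:root"
        ],
    }
```

Onboarding a new consumer account is then just one allow-list call here plus an interface endpoint in the consumer VPC, which is what makes the design scale.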
A solutions architect needs to review the design of an Amazon EMR cluster that is using the EMR File System (EMRFS). The cluster performs tasks that are critical to business needs. The cluster is running Amazon EC2 On-Demand Instances at all times for all task, primary, and core nodes. The EMR tasks run each morning, starting at 1:00 AM, and take 6 hours to finish running. The amount of time to complete the processing is not a priority because the data is not referenced until late in the day. The solutions architect must review the architecture and suggest a solution to minimize the compute costs.
Which solution should the solutions architect recommend to meet these requirements?
- A . Launch all task, primary, and core nodes on Spot Instances in an instance fleet. Terminate the cluster, including all instances, when the processing is completed.
- B . Launch the primary and core nodes on On-Demand Instances. Launch the task nodes on Spot Instances in an instance fleet. Terminate the cluster, including all instances, when the processing is completed. Purchase Compute Savings Plans to cover the On-Demand Instance usage.
- C . Continue to launch all nodes on On-Demand Instances. Terminate the cluster, including all instances, when the processing is completed. Purchase Compute Savings Plans to cover the On-Demand Instance usage
- D . Launch the primary and core nodes on On-Demand Instances. Launch the task nodes on Spot Instances in an instance fleet. Terminate only the task node instances when the processing is completed. Purchase Compute Savings Plans to cover the On-Demand Instance usage.
A
Explanation:
Amazon EC2 Spot Instances offer spare compute capacity at steep discounts compared to On-Demand prices. Spot Instances can be interrupted by EC2 with two minutes of notification when EC2 needs the capacity back. Amazon EMR can handle Spot interruptions gracefully by decommissioning the nodes and redistributing the tasks to other nodes. By launching all nodes on Spot Instances in an instance fleet, the solutions architect can minimize the compute costs of the EMR cluster. An instance fleet is a collection of EC2 instances with different types and sizes that EMR automatically provisions to meet a defined target capacity. By terminating the cluster when the processing is completed, the solutions architect can avoid paying for idle resources.
Reference:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-managed-scaling.html
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-fleet.html
https://aws.amazon.com/blogs/big-data/optimizing-amazon-emr-for-resilience-and-cost-with-capacity-optimized-spot-instances/
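Answer A translates into the Instances block of an EMR run_job_flow call: every fleet targets Spot capacity only, and the cluster terminates itself when its steps finish. A sketch with placeholder instance types:

```python
# Hypothetical sketch of the run_job_flow Instances block for the chosen
# answer: all node types on Spot via instance fleets, and the cluster set
# to terminate when the nightly steps complete. Instance types and target
# capacities are placeholders.
def build_instance_fleets():
    def fleet(fleet_type, capacity):
        return {
            "InstanceFleetType": fleet_type,   # MASTER, CORE, or TASK
            "TargetSpotCapacity": capacity,
            "TargetOnDemandCapacity": 0,       # no On-Demand usage at all
            "InstanceTypeConfigs": [
                # Diversifying across instance types gives EMR more Spot
                # pools to draw from, reducing interruption impact.
                {"InstanceType": "m5.xlarge"},
                {"InstanceType": "m5a.xlarge"},
            ],
        }

    return {
        "InstanceFleets": [
            fleet("MASTER", 1),
            fleet("CORE", 4),
            fleet("TASK", 8),
        ],
        # Terminate the whole cluster once processing is complete
        "KeepJobFlowAliveWhenNoSteps": False,
    }
```

Because EMRFS keeps the data in Amazon S3 rather than on cluster HDFS, nothing is lost when the cluster is torn down after the 6-hour run, and the flexible completion deadline makes Spot interruptions tolerable.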
A company is running an application in the AWS Cloud. Recent application metrics show inconsistent response times and a significant increase in error rates. Calls to third-party services are causing the delays. Currently, the application calls third-party services synchronously by directly invoking an AWS Lambda function.
A solutions architect needs to decouple the third-party service calls and ensure that all the calls are eventually completed.
Which solution will meet these requirements?
- A . Use an Amazon Simple Queue Service (Amazon SQS) queue to store events and invoke the Lambda function.
- B . Use an AWS Step Functions state machine to pass events to the Lambda function.
- C . Use an Amazon EventBridge rule to pass events to the Lambda function.
- D . Use an Amazon Simple Notification Service (Amazon SNS) topic to store events and invoke the Lambda function.
A
Explanation:
Using an SQS queue to store events and invoke the Lambda function will decouple the third-party service calls and ensure that all the calls are eventually completed. SQS allows you to store messages in a queue and process them asynchronously, which eliminates the need for the application to wait for a response from the third-party service. The messages will be stored in the SQS queue until they are processed by the Lambda function, even if the Lambda function is currently unavailable or busy. This will ensure that all the calls are eventually completed, even if there are delays or errors.
AWS Step Functions state machines can also be used to pass events to the Lambda function, but it would require additional management and configuration to set up the state machine, which would increase operational overhead.
An Amazon EventBridge rule can also be used to pass events to the Lambda function, but it would not provide the same durable queuing and retry behavior as SQS.
An Amazon Simple Notification Service (Amazon SNS) topic looks similar, but SNS is a publish-subscribe service that pushes messages to subscribers rather than storing them for a single consumer. SQS keeps each message in the queue until it is successfully processed, which is what guarantees that every third-party call is eventually completed, so SQS is more appropriate for this use case.
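A minimal sketch of the SQS-triggered Lambda function makes the retry guarantee concrete. The third-party call is injected as a parameter here so the behavior is testable; the batchItemFailures return shape is the standard partial-batch response that tells SQS which messages to re-deliver:

```python
# Hypothetical sketch of the SQS-triggered Lambda for the chosen answer.
# Messages whose third-party call fails are reported back to SQS via the
# partial-batch response, so only those messages return to the queue and
# are retried until they eventually complete.
import json


def handler(event, call_third_party):
    failures = []
    for record in event["Records"]:
        payload = json.loads(record["body"])
        try:
            call_third_party(payload)
        except Exception:
            # SQS re-delivers only the listed messages after the
            # visibility timeout expires
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

The application publishes each third-party request to the queue and returns immediately, decoupling its response time from the third-party service's latency and errors.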
Reference:
AWS SQS
AWS Step Functions
AWS EventBridge
AWS SNS
A company is running a serverless application that consists of several AWS Lambda functions and Amazon DynamoDB tables. The company has created new functionality that requires the Lambda functions to access an Amazon Neptune DB cluster. The Neptune DB cluster is located in three subnets in a VPC.
Which of the possible solutions will allow the Lambda functions to access the Neptune DB cluster and DynamoDB tables? (Select TWO.)
- A . Create three public subnets in the Neptune VPC, and route traffic through an internet gateway. Host the Lambda functions in the three new public subnets.
- B . Create three private subnets in the Neptune VPC, and route internet traffic through a NAT gateway. Host the Lambda functions in the three new private subnets.
- C . Host the Lambda functions outside the VPC. Update the Neptune security group to allow access from the IP ranges of the Lambda functions.
- D . Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptune database, and have the Lambda functions access Neptune over the VPC endpoint.
- E . Create three private subnets in the Neptune VPC. Host the Lambda functions in the three new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDB traffic to the VPC endpoint.
B,E
Explanation:
This option allows the company to use private subnets and VPC endpoints to connect the Lambda functions to the Neptune DB cluster and DynamoDB tables securely and efficiently. By creating three private subnets in the Neptune VPC, the company can isolate the Lambda functions from the public internet and reduce the attack surface. By routing internet traffic through a NAT gateway, the company can enable the Lambda functions to access AWS services that are outside the VPC, such as Amazon S3 or Amazon CloudWatch. By hosting the Lambda functions in the three new private subnets, the company can ensure that the Lambda functions can access the Neptune DB cluster within the same VPC. By creating a VPC endpoint for DynamoDB, the company can enable the Lambda functions to access DynamoDB tables without going through the internet or a NAT gateway. By routing DynamoDB traffic to the VPC endpoint, the company can improve the performance and availability of the DynamoDB access.
Configuring a Lambda function to access resources in a VPC
Working with VPCs and subnets
NAT gateways
Accessing Amazon Neptune from AWS Lambda
VPC endpoints for DynamoDB
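The DynamoDB gateway endpoint from answer E is a single EC2 API call; the endpoint works by injecting routes into the subnets' route tables. A sketch of the input with placeholder IDs:

```python
# Hypothetical sketch of the create_vpc_endpoint input for the DynamoDB
# gateway endpoint in answer E. The VPC and route table IDs are
# placeholders; the service name follows the standard
# com.amazonaws.<region>.dynamodb format.
def build_ddb_endpoint_params(vpc_id, route_table_ids, region="us-east-1"):
    # ec2.create_vpc_endpoint(**params)
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.dynamodb",
        # Gateway endpoints work by adding routes to these tables
        "RouteTableIds": route_table_ids,
    }
```

With this endpoint in place, the VPC-hosted Lambda functions reach DynamoDB over the AWS network with no NAT gateway in the path for that traffic.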
During an audit, a security team discovered that a development team was putting IAM user secret access keys in their code and then committing it to an AWS CodeCommit repository. The security team wants to automatically find and remediate instances of this security vulnerability.
Which solution will ensure that the credentials are appropriately secured automatically?
- A . Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS Secrets Manager to rotate the credentials.
- B . Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate new credentials and store them in AWS KMS.
- C . Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to disable the credentials and notify the user.
- D . Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found, disable them in AWS IAM and notify the user.
D
Explanation:
Option C is incorrect because Amazon Macie scans Amazon S3 buckets, and CodeCommit repositories are not exposed as S3 buckets that Macie can be pointed at. (CodeCommit uses S3 and DynamoDB internally, but those resources are not visible in your account; AWS even publishes a pattern for copying a repository into S3 just to back it up: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-event-driven-backups-from-codecommit-to-amazon-s3-using-codebuild-and-cloudwatch-events.html) A CodeCommit trigger that invokes a Lambda function on each push can scan the new commits, disable any exposed IAM credentials, and notify the user, which makes option D the automated remediation the security team wants.
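A minimal sketch of the scanning logic such a trigger-invoked Lambda function could run. The access-key-ID pattern (AKIA/ASIA followed by 16 uppercase alphanumerics) is a common heuristic used by tools like git-secrets, not an official specification:

```python
# Hypothetical sketch of the credential scan inside the trigger-invoked
# Lambda function from answer D. The regex is a heuristic for IAM access
# key IDs; a real scanner would also look for secret access keys and
# other credential formats.
import re

ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")


def find_access_keys(source_text):
    """Return (line_number, key) pairs for every suspected access key ID."""
    hits = []
    for lineno, line in enumerate(source_text.splitlines(), start=1):
        for key in ACCESS_KEY_RE.findall(line):
            hits.append((lineno, key))
    return hits
```

On a hit, the real function would call the IAM UpdateAccessKey API to set the key's status to Inactive and publish a notification to the developer.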
