Practice Free SAP-C02 Exam Online Questions
A company is building an application that will run on an AWS Lambda function. Hundreds of customers will use the application. The company wants to give each customer a quota of requests for a specific time period. The quotas must match customer usage patterns. Some customers must receive a higher quota for a shorter time period.
Which solution will meet these requirements?
- A . Create an Amazon API Gateway REST API with a proxy integration to invoke the Lambda function. For each customer, configure an API Gateway usage plan that includes an appropriate request quota. Create an API key from the usage plan for each user that the customer needs.
- B . Create an Amazon API Gateway HTTP API with a proxy integration to invoke the Lambda function. For each customer, configure an API Gateway usage plan that includes an appropriate request quota. Configure route-level throttling for each usage plan. Create an API key from the usage plan for each user that the customer needs.
- C . Create a Lambda function alias for each customer. Include a concurrency limit with an appropriate request quota. Create a Lambda function URL for each function alias. Share the Lambda function URL for each alias with the relevant customer.
- D . Create an Application Load Balancer (ALB) in a VPC. Configure the Lambda function as a target for the ALB. Configure an AWS WAF web ACL for the ALB. For each customer, configure a rate-based rule that includes an appropriate request quota.
A
Explanation:
The correct answer is A. Usage plans on an API Gateway REST API let you define a request quota for a specific time period (day, week, or month), so each customer can receive a quota that matches its usage pattern, including a higher quota over a shorter period. API keys created from each usage plan identify the customer's users. HTTP APIs (option B) do not support usage plans or API keys.
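As a rough illustration of how option A could be wired up, the following boto3 sketch creates one usage plan per customer with a period-based quota and attaches an API key for each user. The API ID, stage name, customer names, and limits are all hypothetical placeholders.

```python
# Hypothetical sketch: per-customer usage plans with quotas on an API Gateway REST API.
# The API ID, stage name, and customer values are placeholders.
import boto3

apigw = boto3.client("apigateway")

customers = [
    {"name": "customer-a", "quota": 10000, "period": "MONTH"},
    {"name": "customer-b", "quota": 5000, "period": "DAY"},   # higher quota, shorter period
]

for customer in customers:
    # One usage plan per customer, bound to the deployed REST API stage.
    plan = apigw.create_usage_plan(
        name=f"{customer['name']}-plan",
        apiStages=[{"apiId": "abc123xyz", "stage": "prod"}],
        quota={"limit": customer["quota"], "period": customer["period"]},
        throttle={"rateLimit": 50.0, "burstLimit": 100},
    )

    # One API key per user of that customer, attached to the customer's plan.
    key = apigw.create_api_key(name=f"{customer['name']}-user-1", enabled=True)
    apigw.create_usage_plan_key(
        usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
    )
```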
A company is creating a REST API to share information with six of its partners based in the United States. The company has created an Amazon API Gateway Regional endpoint. Each of the six partners will access the API once per day to post daily sales figures.
After initial deployment, the company observes 1,000 requests per second originating from 500 different IP addresses around the world. The company believes this traffic is originating from a botnet and wants to secure its API while minimizing cost.
Which approach should the company take to secure its API?
- A . Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can run the POST method.
- B . Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the POST method.
- C . Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.
- D . Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
D
Explanation:
"A usage plan specifies who can access one or more deployed API stages and methods―and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys."https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span…… The following caveats apply to AWS WAF rate-based rules: The minimum rate that you can set is 100. AWS WAF checks the rate of requests every 30 seconds, and counts requests for the prior five minutes each time. Because of this, it’s possible for an IP address to send requests at too high a rate for 30 seconds before AWS WAF detects and blocks it. AWS WAF can block up to 10,000 IP addresses. If more than 10,000 IP addresses send high rates of requests at the same time, AWS WAF will only block 10,000 of them. "https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
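For the WAF portion of option D, a minimal boto3 sketch might look like the following; the partner IP addresses, API ID, stage, and Region are assumptions, and the request-limit usage plan and API key would be configured as in the previous example.

```python
# Hypothetical sketch: allow only the six partner IPs at the WAF layer, block everything else.
# The IP addresses, API ID, stage, and Region are placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# IP set containing the partners' addresses (IPv4 CIDRs).
ip_set = wafv2.create_ip_set(
    Name="partner-ips",
    Scope="REGIONAL",                      # REGIONAL scope for an API Gateway Regional endpoint
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "203.0.113.11/32"],  # plus the remaining partner IPs
)

# Web ACL that allows the partner IP set and blocks all other traffic by default.
web_acl = wafv2.create_web_acl(
    Name="partner-api-acl",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},
    Rules=[{
        "Name": "allow-partners",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-partners",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "partner-api-acl",
    },
)

# Associate the web ACL with the deployed API Gateway stage.
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123xyz/stages/prod",
)
```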
A company needs to architect a hybrid DNS solution. This solution will use an Amazon Route 53 private hosted zone for the domain cloud.example.com for the resources stored within VPCs.
The company has the following DNS resolution requirements:
• On-premises systems should be able to resolve and connect to cloud.example.com.
• All VPCs should be able to resolve cloud.example.com.
There is already an AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway.
Which architecture should the company use to meet these requirements with the HIGHEST performance?
- A . Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
- B . Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional forwarder.
- C . Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound resolver.
- D . Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
A
Explanation:
Amazon Route 53 Resolver is a managed DNS resolver service from Route 53 that helps to create conditional forwarding rules to redirect query traffic. By associating the private hosted zone to all the VPCs, the solutions architect can enable DNS resolution for cloud.example.com within the VPCs. By creating a Route 53 inbound resolver in the shared services VPC, the solutions architect can enable DNS resolution for cloud.example.com from on-premises systems. By attaching all VPCs to the transit gateway, the solutions architect can enable connectivity between the VPCs and the on-premises network through AWS Direct Connect. By creating forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver, the solutions architect can direct DNS queries for cloud.example.com to the Route 53 Resolver endpoint in AWS. This solution will provide the highest performance as it leverages Route 53 Resolver's optimized routing and caching capabilities.
Reference: 1: https://aws.amazon.com/route53/resolver/
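A minimal boto3 sketch of the inbound Resolver endpoint described in option A is shown below; the subnet IDs, security group ID, and Region are placeholders.

```python
# Hypothetical sketch: an inbound Resolver endpoint in the shared services VPC that the
# on-premises DNS server can forward cloud.example.com queries to. Subnet and security
# group IDs are placeholders.
import uuid
import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="shared-services-inbound",
    Direction="INBOUND",                       # accepts DNS queries arriving from on premises
    SecurityGroupIds=["sg-0123456789abcdef0"], # must allow DNS (TCP/UDP 53) from on premises
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111bbbb2222c"},
        {"SubnetId": "subnet-0ddd3333eeee4444f"},  # second subnet in another AZ for resilience
    ],
)

# The on-premises DNS server then needs a conditional forwarding rule for
# cloud.example.com that targets the IP addresses assigned to this endpoint.
print(endpoint["ResolverEndpoint"]["Id"])
```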
A company provides auction services for artwork and has users across North America and Europe. The company hosts its application in Amazon EC2 instances in the us-east-1 Region. Artists upload photos of their work as large-size, high-resolution image files from their mobile phones to a centralized Amazon S3 bucket created in the us-east-1 Region. The users in Europe are reporting slow performance for their image uploads.
How can a solutions architect improve the performance of the image upload process?
- A . Redeploy the application to use S3 multipart uploads.
- B . Create an Amazon CloudFront distribution and point to the application as a custom origin
- C . Configure the buckets to use S3 Transfer Acceleration.
- D . Create an Auto Scaling group for the EC2 instances and create a scaling policy.
C
Explanation:
S3 Transfer Acceleration uses the Amazon CloudFront global network of edge locations to accelerate the transfer of data to and from S3 buckets. By enabling S3 Transfer Acceleration on the centralized S3 bucket, the users in Europe will experience faster uploads because their data is routed through the closest edge location.
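A short boto3 sketch of the two steps involved, enabling acceleration on the bucket and uploading through the accelerate endpoint, is shown below; the bucket name and file names are assumptions.

```python
# Hypothetical sketch: enable Transfer Acceleration on the central bucket, then upload
# through the accelerate endpoint. Bucket and file names are placeholders.
import boto3
from botocore.config import Config

# One-time bucket setting (the bucket name must be DNS-compliant and contain no dots).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="artwork-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients (for example, the mobile backend) upload via the accelerate endpoint so the
# transfer enters AWS at the nearest edge location.
accelerated_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated_s3.upload_file(
    Filename="artwork-photo.jpg",
    Bucket="artwork-uploads",
    Key="uploads/artwork-photo.jpg",
)
```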
A company is running multiple workloads in the AWS Cloud. The company has separate units for software development. The company uses AWS Organizations and federation with SAML to give permissions to developers to manage resources in their AWS accounts. The development units each deploy their production workloads into a common production account.
Recently, an incident occurred in the production account in which members of a development unit terminated an EC2 instance that belonged to a different development unit. A solutions architect must create a solution that prevents a similar incident from happening in the future. The solution must also allow developers to manage the instances that are used for their workloads.
Which strategy will meet these requirements?
- A . Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit name. Assign the SCP to the corresponding OU.
- B . Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM policy for the developers’ assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit.
- C . Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU.
- D . Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign the IAM policy and match the development unit name to the assumed IAM role.
B
Explanation:
This option allows the solutions architect to use session tags to pass additional information about the federated user, such as the development unit name, to AWS. Session tags are key-value pairs that you can define in your identity provider (IdP) and pass in your SAML assertion. By using a deny action and a StringNotEquals condition in the IAM policy, you can prevent developers from accessing or modifying EC2 instances that belong to a different development unit. This way, you can enforce fine-grained access control and prevent accidental or malicious incidents.
Passing session tags in SAML assertions
Using tags for attribute-based access control
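A hedged sketch of the ABAC deny statement described in option B follows; the role name, policy name, and the exact set of EC2 actions are assumptions, but the StringNotEquals comparison between the resource tag and the principal (session) tag is the pattern the answer relies on.

```python
# Hypothetical sketch of the deny statement added to the developers' assumed role. The
# policy compares the instance's DevelopmentUnit tag with the DevelopmentUnit session tag
# passed in the SAML assertion; role and policy names are placeholders.
import json
import boto3

abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOtherUnitsInstances",
        "Effect": "Deny",
        "Action": ["ec2:TerminateInstances", "ec2:StopInstances", "ec2:RebootInstances"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "ec2:ResourceTag/DevelopmentUnit": "${aws:PrincipalTag/DevelopmentUnit}"
            }
        },
    }],
}

boto3.client("iam").put_role_policy(
    RoleName="DeveloperFederatedRole",
    PolicyName="deny-cross-unit-ec2",
    PolicyDocument=json.dumps(abac_policy),
)
```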
A company uses an organization in AWS Organizations to manage the company’s AWS accounts. The company uses AWS CloudFormation to deploy all infrastructure. A finance team wants to build a chargeback model. The finance team asked each business unit to tag resources by using a predefined list of project values.
When the finance team used the AWS Cost and Usage Report in AWS Cost Explorer and filtered based on project, the team noticed noncompliant project values. The company wants to enforce the use of project tags for new resources.
Which solution will meet these requirements with the LEAST effort?
- A . Create a tag policy that contains the allowed project tag values in the organization’s management account. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.
- B . Create a tag policy that contains the allowed project tag values in each OU. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.
- C . Create a tag policy that contains the allowed project tag values in the AWS management account. Create an IAM policy that denies the cloudformation:CreateStack API operation unless a project tag is added. Assign the policy to each user.
- D . Use AWS Service Catalog to manage the CloudFormation stacks as products. Use a TagOptions library to control project tag values. Share the portfolio with all OUs that are in the organization.
A
Explanation:
The best solution is to create a tag policy that contains the allowed project tag values in the organization’s management account and create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. A tag policy is a type of policy that can help standardize tags across resources in the organization’s accounts. A tag policy can specify the allowed tag keys, values, and case treatment for compliance. A service control policy (SCP) is a type of policy that can restrict the actions that users and roles can perform in the organization’s accounts. An SCP can deny access to specific API operations unless certain conditions are met, such as having a specific tag. By creating a tag policy in the management account and attaching it to each OU, the organization can enforce consistent tagging across all accounts. By creating an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added, the organization can prevent users from creating new resources without proper tagging. This solution will meet the requirements with the least effort, as it does not involve creating additional resources or modifying existing ones.
Reference: Tag policies – AWS Organizations, Service control policies – AWS Organizations, AWS CloudFormation User Guide
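As a rough sketch of the SCP half of option A, the following boto3 snippet creates and attaches a policy that denies cloudformation:CreateStack when no project tag is supplied in the request; the policy name and OU ID are placeholders, and the tag policy that restricts the allowed values is managed separately.

```python
# Hypothetical sketch of the SCP that blocks stack creation when no project tag is supplied.
# The OU ID is a placeholder; the tag policy with allowed values is managed separately.
import json
import boto3

orgs = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireProjectTagOnStacks",
        "Effect": "Deny",
        "Action": "cloudformation:CreateStack",
        "Resource": "*",
        "Condition": {"Null": {"aws:RequestTag/project": "true"}},  # true = tag is absent
    }],
}

policy = orgs.create_policy(
    Name="require-project-tag",
    Description="Deny CreateStack calls that do not include a project tag",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # placeholder OU ID
)
```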
A software as a service (SaaS) company has developed a multi-tenant environment. The company uses Amazon DynamoDB tables that the tenants share for the storage layer. The company uses AWS Lambda functions for the application services.
The company wants to offer a tiered subscription model that is based on resource consumption by each tenant. Each tenant is identified by a unique tenant ID that is sent as part of each request to the Lambda functions. The company has created an AWS Cost and Usage Report (AWS CUR) in an AWS account. The company wants to allocate the DynamoDB costs to each tenant to match that tenant's resource consumption.
Which solution will provide a granular view of the DynamoDB cost for each tenant with the LEAST operational effort?
- A . Associate a new tag that is named tenant ID with each table in DynamoDB. Activate the tag as a cost allocation tag in the AWS Billing and Cost Management console. Deploy new Lambda function code to log the tenant ID in Amazon CloudWatch Logs. Use the AWS CUR to separate DynamoDB consumption cost for each tenant ID.
- B . Configure the Lambda functions to log the tenant ID and the number of RCUs and WCUs consumed from DynamoDB for each transaction to Amazon CloudWatch Logs. Deploy another Lambda function to calculate the tenant costs by using the logged capacity units and the overall DynamoDB cost from the AWS Cost Explorer API. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.
- C . Create a new partition key that associates DynamoDB items with individual tenants. Deploy a Lambda function to populate the new column as part of each transaction. Deploy another Lambda function to calculate the tenant costs by using Amazon Athena to calculate the number of tenant items from DynamoDB and the overall DynamoDB cost from the AWS CUR. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.
- D . Deploy a Lambda function to log the tenant ID, the size of each response, and the duration of the transaction call as custom metrics to Amazon CloudWatch Logs. Use CloudWatch Logs Insights to query the custom metrics for each tenant. Use AWS Pricing Calculator to obtain the overall DynamoDB costs and to calculate the tenant costs.
B
Explanation:
Log Tenant ID and RCUs/WCUs:
Update the AWS Lambda functions to log the tenant ID and the number of Read Capacity Units (RCUs) and Write Capacity Units (WCUs) consumed from DynamoDB for each transaction. This data will be logged to Amazon CloudWatch Logs.
Calculate Tenant Costs:
Deploy an additional Lambda function that reads the logs from CloudWatch Logs, calculates the RCUs and WCUs used by each tenant, and then uses the AWS Cost Explorer API to retrieve the overall cost of DynamoDB usage. This function will then allocate the costs to each tenant based on their usage.
Scheduled Cost Calculation:
Create an Amazon EventBridge rule to trigger the cost calculation Lambda function at regular intervals (e.g., daily or hourly). This ensures that cost allocation is continuously updated and tenants are billed accurately based on their consumption.
This solution minimizes operational effort by automating the cost allocation process and ensuring that the company can accurately bill tenants based on their resource consumption.
Reference
AWS Cost Explorer Documentation
Amazon CloudWatch Logs Documentation
AWS Lambda Documentation
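A minimal sketch of the Lambda-side logging from option B follows; the table name, event fields, and log format are assumptions, and the scheduled cost-allocation function would aggregate these log lines before apportioning the Cost Explorer total.

```python
# Hypothetical sketch of the Lambda-side logging: each DynamoDB call requests consumed
# capacity and writes it to CloudWatch Logs together with the tenant ID. Table name,
# event shape, and log format are placeholders.
import json
import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    tenant_id = event["tenantId"]

    response = dynamodb.put_item(
        TableName="shared-orders",
        Item={"pk": {"S": f"TENANT#{tenant_id}"}, "sk": {"S": event["orderId"]}},
        ReturnConsumedCapacity="TOTAL",  # ask DynamoDB to report the WCUs for this call
    )

    # Structured log line that the scheduled cost-allocation function can aggregate later.
    print(json.dumps({
        "tenantId": tenant_id,
        "wcu": response["ConsumedCapacity"]["CapacityUnits"],
        "rcu": 0,
    }))
    return {"statusCode": 200}
```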
A company uses AWS Organizations to manage its AWS accounts. A solutions architect must design a solution in which only administrator roles are allowed to use IAM actions. However, the solutions architect does not have access to all the AWS accounts throughout the company.
Which solution meets these requirements with the LEAST operational overhead?
- A . Create an SCP that applies to all the AWS accounts to allow IAM actions only for administrator roles. Apply the SCP to the root OU.
- B . Configure AWS CloudTrail to invoke an AWS Lambda function for each event that is related to IAM actions. Configure the function to deny the action if the user who invoked the action is not an administrator.
- C . Create an SCP that applies to all the AWS accounts to deny IAM actions for all users except for those with administrator roles. Apply the SCP to the root OU.
- D . Set an IAM permissions boundary that allows IAM actions. Attach the permissions boundary to every administrator role across all the AWS accounts.
A
Explanation:
To restrict IAM actions to only administrator roles across all AWS accounts in an organization, the most operationally efficient solution is to create a Service Control Policy (SCP) that allows IAM actions exclusively for administrator roles and apply this SCP to the root Organizational Unit (OU) of AWS Organizations. This method ensures a centralized governance mechanism that uniformly applies the policy across all accounts, thereby minimizing the need for individual account-level configurations and reducing operational complexity.
AWS Documentation on AWS Organizations and Service Control Policies offers comprehensive information on creating and managing SCPs for organizational-wide policy enforcement. This approach aligns with AWS best practices for managing permissions and ensuring secure and compliant account configurations within an AWS Organization.
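One common way to express "IAM actions only for administrator roles" in an SCP is a deny on iam:* with a condition that exempts the administrator role, as in the hedged sketch below; the role name pattern and the root/OU target ID are assumptions.

```python
# Hypothetical sketch: deny iam:* for every principal except the administrator role.
# The role name pattern and the target root ID are placeholders.
import json
import boto3

orgs = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyIamExceptAdmins",
        "Effect": "Deny",
        "Action": "iam:*",
        "Resource": "*",
        "Condition": {
            "StringNotLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/AdministratorRole"}
        },
    }],
}

policy = orgs.create_policy(
    Name="iam-admins-only",
    Description="Allow IAM actions only when the caller is the administrator role",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-abcd",  # placeholder root ID
)
```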
A company in the United States (US) has acquired a company in Europe. Both companies use the AWS Cloud. The US company has built a new application with a microservices architecture. The US company is hosting the application across five VPCs in the us-east-2 Region. The application must be able to access resources in one VPC in the eu-west-1 Region. However, the application must not be able to access any other VPCs. The VPCs in both Regions have no overlapping CIDR ranges. All accounts are already consolidated in one organization in AWS Organizations.
Which solution will meet these requirements MOST cost-effectively?
- A . Create one transit gateway in eu-west-1. Attach the VPCs in us-east-2 and the VPC in eu-west-1 to the transit gateway. Create the necessary route entries in each VPC so that the traffic is routed through the transit gateway.
- B . Create one transit gateway in each Region. Attach the involved subnets to the regional transit gateway. Create the necessary route entries in the associated route tables for each subnet so that the traffic is routed through the regional transit gateway. Peer the two transit gateways.
- C . Create a full mesh VPC peering connection configuration between all the VPCs. Create the necessary route entries in each VPC so that the traffic is routed through the VPC peering connection.
- D . Create one VPC peering connection for each VPC in us-east-2 to the VPC in eu-west-1. Create the necessary route entries in each VPC so that the traffic is routed through the VPC peering connection.
A
Explanation:
Creating a single transit gateway in eu-west-1 allows for centralized and cost-effective connectivity between the VPCs involved, without the complexity of creating multiple peering connections or transit gateways.
This approach also allows for selective attachment of only the required VPCs, meeting the strict security and access requirements.
Route table configuration ensures traffic flows properly through the transit gateway, providing connectivity for the application without exposing other VPC resources.
This solution is the most cost-effective and operationally efficient choice for secure inter-Region communication.
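A minimal boto3 sketch of the mechanical steps for a single VPC, creating the transit gateway, attaching the VPC, and adding a route toward the remote CIDR, is shown below; all IDs and CIDR ranges are placeholders.

```python
# Hypothetical sketch for one VPC: create a transit gateway, attach the VPC, and point the
# VPC route table at the transit gateway for the remote CIDR. IDs and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

tgw = ec2.create_transit_gateway(Description="shared-connectivity")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# In practice, wait for the transit gateway to reach the "available" state before attaching.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0aaa1111bbbb2222c",
    SubnetIds=["subnet-0123456789abcdef0"],  # one subnet per Availability Zone to use
)

# Route traffic destined for the remote application CIDR through the transit gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.20.0.0/16",
    TransitGatewayId=tgw_id,
)
```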
A company is building an application on Amazon EMR to analyze data.
The following user groups need to perform different actions:
• Administrator: Provision EMR clusters from different configurations.
• Data engineer: Create an EMR cluster from a small set of available configurations. Run ETL scripts to process data.
• Data analyst: Create an EMR cluster with a specific configuration. Run SQL queries and Apache Hive queries on the data.
A solutions architect must design a solution that gives each group the ability to launch only its authorized EMR configurations. The solution must provide the groups with least privilege access to only the resources that they need. The solution also must provide tagging for all resources that the groups create.
Which solution will meet these requirements?
- A . Configure AWS Service Catalog to control the Amazon EMR versions available for deployment, the cluster configurations, and the permissions for each user group.
- B . Configure Kerberos-based authentication for EMR clusters when the EMR clusters launch. Specify a Kerberos security configuration and cluster-specific Kerberos options.
- C . Create IAM roles for each user group. Attach policies to the roles to define allowed actions for users. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the company to address noncompliant resources.
- D . Use AWS CloudFormation to launch EMR clusters with attached resource policies. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the company to address noncompliant resources.
