Practice Free SAP-C02 Exam Online Questions
A financial services company loaded millions of historical stock trades into an Amazon DynamoDB table. The table uses on-demand capacity mode. Once each day at midnight, a few million new records are loaded into the table. Application read activity against the table happens in bursts throughout the day, and a limited set of keys are repeatedly looked up. The company needs to reduce costs associated with DynamoDB.
Which strategy should a solutions architect recommend to meet this requirement?
- A . Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.
- B . Deploy DynamoDB Accelerator (DAX). Configure DynamoDB auto scaling. Purchase Savings Plans in Cost Explorer.
- C . Use provisioned capacity mode. Purchase Savings Plans in Cost Explorer.
- D . Deploy DynamoDB Accelerator (DAX). Use provisioned capacity mode. Configure DynamoDB auto scaling.
D
Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.ProvisionedThroughput.Manual
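The cost rationale behind option D can be made concrete with a back-of-the-envelope comparison. This is a minimal sketch using illustrative placeholder prices, not current AWS pricing; check the DynamoDB pricing page for real figures:

```python
# Back-of-the-envelope DynamoDB cost model. Prices are ILLUSTRATIVE
# placeholders, not current AWS pricing.
ON_DEMAND_PER_MILLION_READS = 0.25    # USD per million read request units
ON_DEMAND_PER_MILLION_WRITES = 1.25   # USD per million write request units
PROVISIONED_RCU_HOUR = 0.00013        # USD per provisioned RCU-hour
PROVISIONED_WCU_HOUR = 0.00065        # USD per provisioned WCU-hour

def on_demand_monthly(read_millions, write_millions):
    """Monthly on-demand cost for a given number of request units (millions)."""
    return (read_millions * ON_DEMAND_PER_MILLION_READS
            + write_millions * ON_DEMAND_PER_MILLION_WRITES)

def provisioned_monthly(rcu, wcu, hours=730):
    """Monthly cost of steadily provisioned capacity (auto scaling would
    lower this further during quiet periods)."""
    return (rcu * PROVISIONED_RCU_HOUR + wcu * PROVISIONED_WCU_HOUR) * hours

# A steady 100 reads/s and 10 writes/s over a month (~730 hours):
reads_m = 100 * 3600 * 730 / 1e6    # ~262.8 million read request units
writes_m = 10 * 3600 * 730 / 1e6    # ~26.3 million write request units
```

With these placeholder prices, the provisioned table costs roughly $14 per month versus roughly $99 on demand for the same steady load. DAX then absorbs the repeated reads of the hot keys, so the provisioned read capacity can be set lower still.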
A company hosts its primary API on AWS using Amazon API Gateway and AWS Lambda functions. Internal applications and external customers use this API. Some customers also use a legacy API hosted on a standalone EC2 instance.
The company wants to increase security across all APIs to prevent denial of service (DoS) attacks, check for vulnerabilities, and guard against common exploits.
What should a solutions architect do to meet these requirements?
- A . Use AWS WAF to protect both APIs. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
- B . Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze both APIs. Configure Amazon GuardDuty to block malicious attempts.
- C . Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
- D . Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to protect the legacy API. Configure Amazon GuardDuty to block malicious attempts.
C
Explanation:
C is correct because:
AWS WAF integrates natively with API Gateway and protects against common web exploits (e.g., SQL injection, XSS).
Amazon Inspector can scan the legacy EC2 instance for known vulnerabilities.
Amazon GuardDuty is a continuous security monitoring service that detects threats but does not block traffic (B and D are incorrect because GuardDuty does not block).
Reference: AWS WAF Overview
Amazon Inspector Overview
Amazon GuardDuty Overview
A company wants to manage the costs associated with a group of 20 applications that are infrequently used, but are still business-critical, by migrating to AWS. The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while standardizing by using a single deployment methodology.
Most of the applications are part of month-end processing routines with a small number of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in Java that accesses multiple data sources and often runs for several hours.
Which is the MOST cost-effective solution?
- A . Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs.
- B . Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.
- C . Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms.
- D . Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.
B
Explanation:
B is the most cost-effective option: running all 20 applications as ECS tasks on a shared EC2 Auto Scaling cluster packs many low-utilization workloads onto the same instances, and scaling on memory utilization fits the 1 GB average / 2.5 GB peak profile. Lambda (A) cannot run the multi-hour billing report because of its 15-minute execution limit. Elastic Beanstalk (C) requires a separate environment per application, which multiplies idle capacity. Option D's co-hosted EC2 cluster with 3-year Reserved Instances sized to the Auto Scaling group's maximum over-provisions for applications that mostly run at month end. Amazon CloudWatch monitors the ECS services and container instances.
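The memory profile in the question (about 1 GB average, 2.5 GB peak) maps directly onto ECS container memory limits. A sketch of the task-definition parameters, expressed as the dict shape accepted by boto3's `ecs.register_task_definition`; the family and image names are hypothetical:

```python
# Sketch: ECS task definition sized for the question's memory profile.
# Family and image names are hypothetical placeholders.
def billing_report_task_def():
    return {
        "family": "billing-report",                    # hypothetical name
        "requiresCompatibilities": ["EC2"],
        "containerDefinitions": [{
            "name": "billing-report",
            "image": "example/billing-report:latest",  # hypothetical image
            "memoryReservation": 1024,  # soft limit: ~1 GB average usage
            "memory": 2560,             # hard limit: covers the 2.5 GB peak
            "essential": True,
        }],
    }
```

The soft limit (`memoryReservation`) lets many idle tasks share one instance, while cluster Auto Scaling at 75% memory utilization adds hosts only when month-end processing ramps up.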
A solutions architect must create a business case for migration of a company’s on-premises data center to the AWS Cloud. The solutions architect will use a configuration management database (CMDB) export of all the company’s servers to create the case.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS Well-Architected Tool to import the CMDB data to perform an analysis and generate recommendations.
- B . Use Migration Evaluator to perform an analysis. Use the data import template to upload the data from the CMDB export.
- C . Implement resource matching rules. Use the CMDB export and the AWS Price List Bulk API to query CMDB data against AWS services in bulk.
- D . Use AWS Application Discovery Service to import the CMDB data to perform an analysis.
B
Explanation:
https://aws.amazon.com/blogs/architecture/accelerating-your-migration-to-aws/ Build a business case with AWS Migration Evaluator. The foundation for a successful migration starts with a defined business objective (for example, growth or new offerings). In order to enable the business drivers, the established business case must then be aligned to a technical capability (increased security and elasticity). AWS Migration Evaluator (formerly known as TSO Logic) can help you meet these objectives. To get started, you can choose to upload exports from third-party tools such as Configuration Management Database (CMDB) or install a collector agent to monitor. You will receive an assessment after data collection, which includes a projected cost estimate and savings of running your on-premises workloads in the AWS Cloud. This estimate will provide a summary of the projected costs to re-host on AWS based on usage patterns. It will show the breakdown of costs by infrastructure and software licenses. With this information, you can make the business case and plan next steps.
A company is planning to migrate an on-premises data center to AWS. The company currently hosts the data center on Linux-based VMware VMs. A solutions architect must collect information about network dependencies between the VMs. The information must be in the form of a diagram that details host IP addresses, hostnames, and network connection information.
Which solution will meet these requirements?
- A . Use AWS Application Discovery Service. Select an AWS Migration Hub home AWS Region. Install the AWS Application Discovery Agent on the on-premises servers for data collection. Grant permissions to Application Discovery Service to use the Migration Hub network diagrams.
- B . Use the AWS Application Discovery Service Agentless Collector for server data collection. Export the network diagrams from the AWS Migration Hub in .png format.
- C . Install the AWS Application Migration Service agent on the on-premises servers for data collection. Use AWS Migration Hub data in Workload Discovery on AWS to generate network diagrams.
- D . Install the AWS Application Migration Service agent on the on-premises servers for data collection. Export data from AWS Migration Hub in .csv format into an Amazon CloudWatch dashboard to generate network diagrams.
A
Explanation:
To effectively gather information about network dependencies between VMs in an on-premises data center for migration to AWS, it’s crucial to use tools that can capture detailed application and server
dependencies. The AWS Application Discovery Service is designed for this purpose, particularly when migrating from environments like Linux-based VMware VMs. By installing the AWS Application Discovery Agent on the on-premises servers, the service can collect necessary data such as host IP addresses, hostnames, and network connection information. This data is crucial for creating a comprehensive network diagram that outlines the interactions and dependencies between various components of the on-premises infrastructure. The integration with AWS Migration Hub enhances this process by allowing the visualization of these dependencies in a network diagram format, aiding in the planning and execution of the migration process. This approach ensures a thorough understanding of the on-premises environment, which is essential for a successful migration to AWS.
Reference: AWS Documentation on Application Discovery Service: This provides detailed guidance on how to use the Application Discovery Service, including the installation and configuration of the Discovery Agent.
AWS Migration Hub User Guide: Offers insights on how to integrate Application Discovery Service data with Migration Hub for comprehensive migration planning and tracking.
AWS Solutions Architect Professional Learning Path: Contains advanced topics and best practices for migrating complex on-premises environments to AWS, emphasizing the use of AWS services and tools for effective migration planning and execution.
A SaaS web app runs on EC2 Linux behind an ALB. It stores user sessions in an RDS Multi-AZ database.
During high traffic, the app suffers latency due to session read/write.
What is the best way to reduce session latency?
- A . Store session data in Amazon S3.
- B . Use FSx for Windows and mount it.
- C . Use Multi-Attach EBS volumes.
- D . Use ElastiCache for Redis to store sessions.
D
Explanation:
D is the AWS best practice for session storage: use ElastiCache for Redis, a fast, in-memory data store that handles high throughput with microsecond latency.
It’s highly scalable, fault-tolerant, and optimized for temporary, fast-access session data.
Incorrect:
A: S3 is slow and object-based, not suited for session I/O.
B: FSx is Windows-only and not ideal for this use case.
C: EBS Multi-Attach has limitations, complexity, and is not suitable for high-performance shared memory.
Reference: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html
A retail company has an on-premises data center in Europe. The company also has a multi-Region AWS presence that includes the eu-west-1 and us-east-1 Regions. The company wants to be able to route network traffic from its on-premises infrastructure into VPCs in either of those Regions. The company also needs to support traffic that is routed directly between VPCs in those Regions. No single points of failure can exist on the network.
The company already has created two 1 Gbps AWS Direct Connect connections from its on-premises data center. Each connection goes into a separate Direct Connect location in Europe for high availability. These two locations are named DX-A and DX-B, respectively. Each Region has a single AWS Transit Gateway that is configured to route all inter-VPC traffic within that Region.
Which solution will meet these requirements?
- A . Create a private VIF from the DX-A connection into a Direct Connect gateway. Create a private VIF from the DX-B connection into the same Direct Connect gateway for high availability. Associate both the eu-west-1 and us-east-1 transit gateways with the Direct Connect gateway. Peer the transit gateways with each other to support cross-Region routing.
- B . Create a transit VIF from the DX-A connection into a Direct Connect gateway. Associate the eu-west-1 transit gateway with this Direct Connect gateway. Create a transit VIF from the DX-B connection into a separate Direct Connect gateway. Associate the us-east-1 transit gateway with this separate Direct Connect gateway. Peer the Direct Connect gateways with each other to support high availability and cross-Region routing.
- C . Create a transit VIF from the DX-A connection into a Direct Connect gateway. Create a transit VIF from the DX-B connection into the same Direct Connect gateway for high availability. Associate both the eu-west-1 and us-east-1 transit gateways with this Direct Connect gateway. Configure the Direct Connect gateway to route traffic between the transit gateways.
- D . Create a transit VIF from the DX-A connection into a Direct Connect gateway. Create a transit VIF from the DX-B connection into the same Direct Connect gateway for high availability. Associate both the eu-west-1 and us-east-1 transit gateways with this Direct Connect gateway. Peer the transit gateways with each other to support cross-Region routing.
D
Explanation:
In this solution, two transit VIFs are created, one from the DX-A connection and one from the DX-B connection, into the same Direct Connect gateway for high availability. Both the eu-west-1 and us-east-1 transit gateways are then associated with this Direct Connect gateway, and the transit gateways are peered with each other to support cross-Region routing. This meets the requirements by providing a highly available connection between the on-premises data center and the VPCs in both Regions, and by enabling direct traffic routing between VPCs in those Regions.
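Option D boils down to two API operations: a transit VIF from each connection into the shared Direct Connect gateway, and a peering attachment between the two transit gateways. A sketch of the request parameters in boto3-style dicts; all IDs, the VLAN, and the ASN are hypothetical:

```python
# Sketch of the two request payloads behind option D. All identifiers are
# hypothetical placeholders.
def transit_vif_request(connection_id, dx_gateway_id, asn, vlan=100):
    """Parameters for directconnect.create_transit_virtual_interface."""
    return {
        "connectionId": connection_id,
        "newTransitVirtualInterface": {
            "virtualInterfaceName": "transit-vif",  # hypothetical name
            "vlan": vlan,
            "asn": asn,
            "directConnectGatewayId": dx_gateway_id,
        },
    }

def tgw_peering_request(local_tgw_id, peer_tgw_id, peer_account, peer_region):
    """Parameters for ec2.create_transit_gateway_peering_attachment."""
    return {
        "TransitGatewayId": local_tgw_id,
        "PeerTransitGatewayId": peer_tgw_id,
        "PeerAccountId": peer_account,
        "PeerRegion": peer_region,
    }
```

The first call is issued once per Direct Connect connection (DX-A and DX-B) against the same gateway; the second is issued once, from either Region, to peer the two transit gateways.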
A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume.
The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos.
Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?
- A . Reconfigure Amazon EFS to enable maximum I/O.
- B . Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
- C . Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
- D . Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
C
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/
CloudFront supports several origin types: an Amazon S3 bucket, a MediaStore container or MediaPackage channel, an Application Load Balancer, a Lambda function URL, Amazon EC2 (or another custom origin), and CloudFront origin groups. Migrating the videos to S3 and serving them through a CloudFront distribution offloads the heavy video traffic from the EC2 fleet and the EFS volume, which resolves the buffering and timeouts at the lowest cost.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html
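A minimal sketch of what option C's distribution might look like, shaped like the `DistributionConfig` dict passed to boto3's `cloudfront.create_distribution`. The bucket name is hypothetical and the dict omits required fields such as cache settings, so treat it as illustrative rather than deployable:

```python
# Sketch of a CloudFront distribution with an S3 origin for the videos.
# Bucket name and caller reference are hypothetical; the dict is a partial
# DistributionConfig, not a complete, deployable configuration.
def video_distribution_config(bucket="blog-videos-example"):
    origin_domain = f"{bucket}.s3.amazonaws.com"
    return {
        "CallerReference": "blog-videos-001",
        "Comment": "Serve video content from S3 via CloudFront",
        "Enabled": True,
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "s3-videos",
            "DomainName": origin_domain,
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-videos",
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }
```

The key design point is that the default cache behavior targets the S3 origin, so CloudFront edge locations absorb the repeated video reads instead of the web fleet.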
A company is running an application in the AWS Cloud. The application uses AWS Lambda functions and Amazon Elastic Container Service (Amazon ECS) containers that run with AWS Fargate technology as its primary compute. The load on the application is irregular. The application experiences long periods of no usage, followed by sudden and significant increases and decreases in traffic. The application is write-heavy and stores data in an Amazon Aurora MySQL database. The database runs on an Amazon RDS memory optimized DB instance that is not able to handle the load.
What is the MOST cost-effective way for the company to handle the sudden and significant changes in traffic?
- A . Add additional read replicas to the database. Purchase Instance Savings Plans and RDS Reserved Instances.
- B . Migrate the database to an Aurora multi-master DB cluster. Purchase Instance Savings Plans.
- C . Migrate the database to an Aurora global database. Purchase Compute Savings Plans and RDS Reserved Instances.
- D . Migrate the database to Aurora Serverless v1. Purchase Compute Savings Plans.
