Practice Free SAP-C02 Exam Online Questions
A company has multiple applications that run on Amazon EC2 instances in private subnets in a VPC. The company has deployed multiple NAT gateways in multiple Availability Zones for internet access. The company wants to block certain websites from being accessed through the NAT gateways. The company also wants to identify the internet destinations that the EC2 instances access.
The company has already created VPC flow logs for the NAT gateways’ elastic network interfaces.
Which solution will meet these requirements?
- A . Use Amazon CloudWatch Logs Insights to query the logs and determine the internet destinations that the EC2 instances communicate with. Use AWS Network Firewall to block the websites.
- B . Use Amazon CloudWatch Logs Insights to query the logs and determine the internet destinations that the EC2 instances communicate with. Use AWS WAF to block the websites.
- C . Use the BytesInFromSource and BytesInFromDestination Amazon CloudWatch metrics to determine the internet destinations that the EC2 instances communicate with. Use AWS Network Firewall to block the websites.
- D . Use the BytesInFromSource and BytesInFromDestination Amazon CloudWatch metrics to determine the internet destinations that the EC2 instances communicate with. Use AWS WAF to block the websites.
A
Explanation:
CloudWatch Logs Insights can query the existing VPC flow logs to identify the internet destinations that the EC2 instances reach through the NAT gateways. AWS Network Firewall supports domain-based filtering of outbound VPC traffic, whereas AWS WAF protects inbound traffic to web applications and cannot block outbound access. The BytesInFromSource and BytesInFromDestination metrics report only traffic volume and do not identify destinations.
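The Logs Insights approach in option A can be sketched as follows. This is a minimal example, assuming the flow logs are already delivered to CloudWatch Logs; the log group name is a hypothetical placeholder.

```python
# Sketch: rank internet destinations seen in NAT gateway flow logs using a
# CloudWatch Logs Insights query. The log group name below is an assumption.

LOG_GROUP = "/vpc/flow-logs/nat-gateways"  # hypothetical log group name


def top_destinations_query(limit: int = 20) -> str:
    """Build a Logs Insights query that ranks destination addresses by bytes."""
    return (
        "stats sum(bytes) as totalBytes by dstAddr "
        "| sort totalBytes desc "
        f"| limit {limit}"
    )


if __name__ == "__main__":
    import time
    import boto3

    logs = boto3.client("logs")
    now = int(time.time())
    query = logs.start_query(
        logGroupName=LOG_GROUP,
        startTime=now - 24 * 3600,  # look back over the previous 24 hours
        endTime=now,
        queryString=top_destinations_query(),
    )
    # Poll logs.get_query_results(queryId=query["queryId"]) until complete.
```

The identified domains would then be added to an AWS Network Firewall domain list rule to block them.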
A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record of weather.example.com. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region.
Which solution will meet these requirements?
- A . Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the DynamoDB tables to global tables.
- B . Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
- C . Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.
- D . Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
C
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/dns-failover.html
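The failover record from option C can be sketched with boto3. This is a hedged example: the hosted zone ID, health check ID, and the two regional API Gateway domain names are hypothetical placeholders.

```python
# Sketch: create a Route 53 PRIMARY/SECONDARY failover pair of CNAME records
# pointing at regional API Gateway endpoints. All IDs and domain names below
# are hypothetical.

def failover_change_batch(name, primary_dns, secondary_dns, health_check_id):
    """Build a ChangeBatch with a PRIMARY and a SECONDARY failover record."""
    def record(set_id, failover, value, health_check=None):
        rr = {
            "Name": name,
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": set_id,
            "Failover": failover,
            "ResourceRecords": [{"Value": value}],
        }
        if health_check:
            rr["HealthCheckId"] = health_check  # health check on the primary
        return {"Action": "UPSERT", "ResourceRecordSet": rr}

    return {
        "Changes": [
            record("primary", "PRIMARY", primary_dns, health_check_id),
            record("secondary", "SECONDARY", secondary_dns),
        ]
    }


if __name__ == "__main__":
    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone
        ChangeBatch=failover_change_batch(
            "weather.example.com",
            "abc123.execute-api.us-east-1.amazonaws.com",
            "def456.execute-api.us-west-2.amazonaws.com",
            "hc-11111111",
        ),
    )
```

Route 53 answers with the primary record while its health check passes and fails over to the secondary Region's API when it does not.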
A financial company is planning to migrate its web application from on premises to AWS. The company uses a third-party security tool to monitor the inbound traffic to the application. The company has used the security tool for the last 15 years, and the tool has no cloud solutions available from its vendor. The company’s security team is concerned about how to integrate the security tool with AWS technology.
The company plans to deploy the application migration to AWS on Amazon EC2 instances. The EC2 instances will run in an Auto Scaling group in a dedicated VPC. The company needs to use the security tool to inspect all packets that come in and out of the VPC. This inspection must occur in real time and must not affect the application’s performance. A solutions architect must design a target architecture on AWS that is highly available within an AWS Region.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
- A . Deploy the security tool on EC2 instances in a new Auto Scaling group in the existing VPC.
- B . Deploy the web application behind a Network Load Balancer.
- C . Deploy an Application Load Balancer in front of the security tool instances.
- D . Provision a Gateway Load Balancer for each Availability Zone to redirect the traffic to the security tool.
- E . Provision a transit gateway to facilitate communication between VPCs.
A,D
Explanation:
Option A, Deploy the security tool on EC2 instances in a new Auto Scaling group in the existing VPC, allows the company to use its existing security tool while still running it within the AWS environment. This ensures that all packets coming in and out of the VPC are inspected by the security tool in real time.
Option D, Provision a Gateway Load Balancer for each Availability Zone to redirect the traffic to the security tool, allows for high availability within an AWS Region. By provisioning a Gateway Load Balancer for each Availability Zone, the traffic is redirected to the security tool in the event of any failures or outages. This ensures that the security tool is always available to inspect the traffic, even in the event of a failure.
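A minimal sketch of the Gateway Load Balancer piece, assuming one subnet per Availability Zone; the name and subnet IDs are hypothetical, and the GWLB endpoints and route table changes that actually redirect traffic are not shown.

```python
# Sketch: create a Gateway Load Balancer spanning one subnet per AZ, which
# fronts the security tool instances. Names and subnet IDs are hypothetical.

def gateway_lb_request(name, subnet_ids):
    """Build create_load_balancer arguments for a Gateway Load Balancer."""
    return {
        "Name": name,
        "Type": "gateway",            # Gateway Load Balancer type
        "Subnets": list(subnet_ids),  # one subnet per AZ for high availability
    }


if __name__ == "__main__":
    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.create_load_balancer(
        **gateway_lb_request("security-tool-gwlb",
                             ["subnet-aaa111", "subnet-bbb222"])
    )
```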
A solutions architect is investigating an issue in which a company cannot establish new sessions in Amazon WorkSpaces. An initial analysis indicates that the issue involves user profiles. The Amazon WorkSpaces environment is configured to use Amazon FSx for Windows File Server as the profile share storage. The FSx for Windows File Server file system is configured with 10 TB of storage. The solutions architect discovers that the file system has reached its maximum capacity. The solutions architect must ensure that users can regain access. The solution also must prevent the problem from occurring again.
Which solution will meet these requirements?
- A . Remove old user profiles to create space. Migrate the user profiles to an Amazon FSx for Lustre file system.
- B . Increase capacity by using the update-file-system command. Implement an Amazon CloudWatch metric that monitors free space. Use Amazon EventBridge to invoke an AWS Lambda function to increase capacity as required.
- C . Monitor the file system by using the FreeStorageCapacity metric in Amazon CloudWatch. Use AWS Step Functions to increase the capacity as required.
- D . Remove old user profiles to create space. Create an additional FSx for Windows File Server file system. Update the user profile redirection for 50% of the users to use the new file system.
B
Explanation:
Option B resolves the immediate issue by increasing capacity with the update-file-system command. It also prevents the problem from recurring by monitoring the file system's free space with an Amazon CloudWatch metric and using Amazon EventBridge to invoke an AWS Lambda function that increases capacity as required. This ensures that the file system always has enough free space to store user profiles and never again reaches maximum capacity.
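The automated-growth Lambda can be sketched as below. The growth factor is an assumption; FSx requires the new capacity to be at least 10 percent larger than the current value, which the helper enforces.

```python
# Sketch of the option B Lambda: when the low-free-space alarm fires via
# EventBridge, grow the FSx for Windows File Server file system. The 20%
# growth factor and the event field name are assumptions.

def next_capacity(current_gib: int, growth_pct: int = 20) -> int:
    """Return a new storage capacity at least 10% above the current one."""
    proposed = (current_gib * (100 + growth_pct) + 99) // 100  # ceil division
    minimum = (current_gib * 110 + 99) // 100  # FSx minimum increase is 10%
    return max(proposed, minimum)


def lambda_handler(event, context):
    """EventBridge-invoked handler (hypothetical event shape)."""
    import boto3

    fsx = boto3.client("fsx")
    fs_id = event["fileSystemId"]  # assumed to be carried in the event
    current = fsx.describe_file_systems(
        FileSystemIds=[fs_id])["FileSystems"][0]["StorageCapacity"]
    fsx.update_file_system(FileSystemId=fs_id,
                           StorageCapacity=next_capacity(current))
```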
A company is migrating its blog platform to AWS. The company’s on-premises servers connect to AWS through an AWS Site-to-Site VPN connection. The blog content is updated several times a day by multiple authors and is served from a file share on a network-attached storage (NAS) server. The company needs to migrate the blog platform without delaying the content updates. The company has deployed Amazon EC2 instances across multiple Availability Zones to run the blog platform behind an Application Load Balancer. The company also needs to move 200 TB of archival data from its on-premises servers to Amazon S3 as soon as possible.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create a weekly cron job in Amazon EventBridge. Use the cron job to invoke an AWS Lambda function to update the EC2 instances from the NAS server.
- B . Configure an Amazon Elastic Block Store (Amazon EBS) Multi-Attach volume for the EC2 instances to share for content access. Write code to synchronize the EBS volume with the NAS server weekly.
- C . Mount an Amazon Elastic File System (Amazon EFS) file system to the on-premises servers to act as the NAS server. Copy the blog data to the EFS file system. Mount the EFS file system to the EC2 instances to serve the content.
- D . Order an AWS Snowball Edge Storage Optimized device. Copy the static data artifacts to the device. Ship the device to AWS.
- E . Order an AWS Snowcone SSD device. Copy the static data artifacts to the device. Ship the device to AWS.
C,D
Explanation:
C is correct because Amazon EFS offers a scalable, elastic file system that can be mounted on both on-premises servers (via AWS Direct Connect or VPN) and EC2 instances. This allows real-time access to shared content with no migration downtime.
D is correct because AWS Snowball Edge Storage Optimized devices are designed for transferring petabyte-scale data (up to 80 TB per device). It’s a best practice for moving 200 TB of archival data when speed and bandwidth are constraints.
Reference: Amazon EFS – Mounting on premises
AWS Snowball Edge – Overview
A company deploys a new web application. As part of the setup, the company configures AWS WAF to log to Amazon S3 through Amazon Kinesis Data Firehose. The company develops an Amazon Athena query that runs once daily to return AWS WAF log data from the previous 24 hours. The volume of daily logs is constant. However, over time, the same query is taking more time to run. A solutions architect needs to design a solution to prevent the query time from continuing to increase. The solution must minimize operational overhead.
Which solution will meet these requirements?
- A . Create an AWS Lambda function that consolidates each day’s AWS WAF logs into one log file.
- B . Reduce the amount of data scanned by configuring AWS WAF to send logs to a different S3 bucket each day.
- C . Update the Kinesis Data Firehose configuration to partition the data in Amazon S3 by date and time. Create external tables for Amazon Redshift. Configure Amazon Redshift Spectrum to query the data source.
- D . Modify the Kinesis Data Firehose configuration and Athena table definition to partition the data by date and time. Change the Athena query to view the relevant partitions.
D
Explanation:
The best solution is to modify the Kinesis Data Firehose configuration and Athena table definition to partition the data by date and time. This will reduce the amount of data scanned by Athena and improve the query performance. Changing the Athena query to view the relevant partitions will also help to filter out unnecessary data. This solution requires minimal operational overhead as it does not involve creating additional resources or changing the log format.
Reference: AWS WAF Developer Guide, Amazon Kinesis Data Firehose User Guide, Amazon Athena User Guide
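Once Firehose writes objects under date-based prefixes and the Athena table declares matching partition columns, the daily query only scans one day's partition. A minimal sketch, with hypothetical table, database, and bucket names:

```python
# Sketch of option D's daily query after partitioning: filtering on the
# partition columns (year/month/day) limits the scan to a single day's data.
# Table, database, and output bucket names are hypothetical.

from datetime import date, timedelta


def daily_waf_query(day: date) -> str:
    """Build an Athena query restricted to a single day's partition."""
    return (
        "SELECT httprequest.clientip, action, COUNT(*) AS requests "
        "FROM waf_logs "
        f"WHERE year = '{day:%Y}' AND month = '{day:%m}' AND day = '{day:%d}' "
        "GROUP BY httprequest.clientip, action"
    )


if __name__ == "__main__":
    import boto3

    athena = boto3.client("athena")
    athena.start_query_execution(
        QueryString=daily_waf_query(date.today() - timedelta(days=1)),
        QueryExecutionContext={"Database": "waf_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
```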
A company is replicating an application in a secondary AWS Region. The application in the primary Region reads from and writes to several Amazon DynamoDB tables. The application also reads customer data from an Amazon RDS for MySQL DB instance. The company plans to use the secondary Region as part of a disaster recovery plan. The application in the secondary Region must function without dependencies on the primary Region.
Which solution will meet these requirements with the LEAST development effort?
- A . Configure DynamoDB global tables. Replicate the required tables to the secondary Region. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use the DynamoDB tables and the read replica in the secondary Region.
- B . Use DynamoDB Accelerator (DAX) to cache the required tables in the secondary Region. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use DAX and the read replica in the secondary Region.
- C . Configure DynamoDB global tables. Replicate the required tables to the secondary Region. Enable Multi-AZ for the RDS DB instance. Configure the standby replica to be created in the secondary Region. Configure the secondary application to use the DynamoDB tables and the standby replica in the secondary Region.
- D . Set up DynamoDB streams from the primary Region. Process the streams in the secondary Region to populate new DynamoDB tables. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use the DynamoDB tables and the read replica in the secondary Region.
A
Explanation:
Option A provides a straightforward and efficient solution:
DynamoDB global tables automatically replicate data across multiple Regions, ensuring that the secondary Region has up-to-date data without the need for custom replication logic.
Creating a read replica of the RDS DB instance in the secondary Region allows the application to access customer data without relying on the primary Region.
Configuring the secondary application to use these resources ensures that it can function independently, fulfilling the disaster recovery requirements with minimal development effort.
This solution leverages AWS’s managed services to provide a resilient and low-maintenance disaster recovery setup.
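The two replication steps from option A can be sketched with boto3. Table, instance, and Region names are hypothetical; adding a replica this way assumes the table uses the current (2019.11.21) version of global tables.

```python
# Sketch of option A: add a DynamoDB global table replica in the DR Region
# and create a cross-Region RDS for MySQL read replica. All names and ARNs
# below are hypothetical.

SECONDARY_REGION = "us-west-2"  # hypothetical DR Region


def add_replica_request(table_name, region):
    """Arguments to add a DynamoDB global table replica in another Region."""
    return {
        "TableName": table_name,
        "ReplicaUpdates": [{"Create": {"RegionName": region}}],
    }


def read_replica_request(source_arn, replica_id):
    """Arguments for a cross-Region RDS for MySQL read replica."""
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_arn,  # ARN for cross-Region
    }


if __name__ == "__main__":
    import boto3

    boto3.client("dynamodb").update_table(
        **add_replica_request("customers", SECONDARY_REGION))
    boto3.client("rds", region_name=SECONDARY_REGION).create_db_instance_read_replica(
        **read_replica_request(
            "arn:aws:rds:us-east-1:123456789012:db:customer-db",
            "customer-db-replica"))
```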
A company has developed a hybrid solution between its data center and AWS. The company uses Amazon VPC and Amazon EC2 instances that send application logs to Amazon CloudWatch. The EC2 instances read data from multiple relational databases that are hosted on premises.
The company wants to monitor which EC2 instances are connected to the databases in near real time. The company already has a monitoring solution that uses Splunk on premises. A solutions architect needs to determine how to send networking traffic to Splunk.
How should the solutions architect meet these requirements?
- A . Enable VPC flow logs and send them to CloudWatch. Create an AWS Lambda function to periodically export the CloudWatch logs to an Amazon S3 bucket by using the predefined export function. Generate ACCESS_KEY and SECRET_KEY AWS credentials. Configure Splunk to pull the logs from the S3 bucket by using those credentials.
- B . Create an Amazon Data Firehose delivery stream with Splunk as the destination. Configure a pre-processing AWS Lambda function with a Firehose stream processor that extracts individual log events from records sent by CloudWatch Logs subscription filters. Enable VPC flow logs and send them to CloudWatch. Create a CloudWatch Logs subscription that sends log events to the Firehose delivery stream.
- C . Ask the company to log every request that is made to the databases along with the EC2 instance IP address. Export the CloudWatch logs to an Amazon S3 bucket. Use Amazon Athena to query the logs grouped by database name. Export Athena results to another S3 bucket. Invoke an AWS Lambda function to automatically send any new file that is put in the S3 bucket to Splunk.
- D . Send the CloudWatch logs to an Amazon Kinesis data stream with Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics). Configure a 1-minute sliding window to collect the events. Create a SQL query that uses the anomaly detection template to monitor any networking traffic anomalies in near real time. Send the result to an Amazon Data Firehose delivery stream with Splunk as the destination.
B
Explanation:
The company needs near-real-time visibility into which EC2 instances are connecting to on-premises databases. The correct telemetry source for network connection metadata at the VPC level is VPC Flow Logs. VPC Flow Logs capture information about IP traffic going to and from network interfaces in a VPC, including source/destination IPs, ports, protocol, and accept/reject decisions. This data can be used to infer which EC2 instance IPs are connecting to database IPs.
The company already uses Splunk on premises, so the solution should deliver these logs to Splunk with minimal delay and operational overhead. Amazon Data Firehose provides a fully managed way to deliver streaming data to supported destinations, including Splunk, with buffering and retry handling. CloudWatch Logs subscription filters can stream log events in near real time from CloudWatch Logs to destinations such as Firehose.
Option B uses the standard pattern: enable VPC Flow Logs to CloudWatch Logs, then create a CloudWatch Logs subscription filter that streams the flow logs to a Firehose delivery stream configured with Splunk as the destination. Because CloudWatch Logs subscription deliveries can batch log events, using a Firehose preprocessing Lambda to extract individual log events is a common approach to format records in a way that Splunk ingests cleanly. This yields near-real-time delivery with low operational overhead.
Option A introduces delay because it exports CloudWatch logs periodically to S3 and requires Splunk to poll S3. It also requires long-lived access keys and periodic batch exports, which is not near real time.
Option C relies on application-level logging changes and batch analytics with Athena, which is not near real time and requires substantial changes and additional pipelines.
Option D is over-engineered for the stated requirement. Using Flink and anomaly detection focuses on anomalies rather than simply identifying connections, and it adds significant operational complexity compared to direct delivery of flow logs to Splunk via Firehose.
Therefore, streaming VPC Flow Logs from CloudWatch Logs to Splunk using a Firehose delivery stream and a subscription filter is the best approach.
References:
AWS documentation on VPC Flow Logs and the metadata they provide for network connection visibility.
AWS documentation on CloudWatch Logs subscription filters for near-real-time streaming of log events.
AWS documentation on Amazon Data Firehose delivery to Splunk and optional Lambda transformations for record formatting.
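The preprocessing Lambda described above can be sketched as follows. CloudWatch Logs delivers records to Firehose as base64-encoded, gzip-compressed JSON envelopes; this handler unpacks each envelope and emits one output line per flow log event so Splunk ingests individual events. The record format (recordId/result/data) is the standard Firehose transformation contract.

```python
# Sketch of the Firehose preprocessing Lambda from option B: decompress each
# CloudWatch Logs envelope and emit the individual log events.

import base64
import gzip
import json


def transform(records):
    output = []
    for record in records:
        envelope = json.loads(
            gzip.decompress(base64.b64decode(record["data"])))
        if envelope.get("messageType") != "DATA_MESSAGE":
            # Control messages (e.g. the subscription test event) are dropped.
            output.append({"recordId": record["recordId"],
                           "result": "Dropped", "data": record["data"]})
            continue
        events = "\n".join(e["message"] for e in envelope["logEvents"]) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(events.encode()).decode(),
        })
    return output


def lambda_handler(event, context):
    return {"records": transform(event["records"])}
```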
A company wants to containerize a multi-tier web application and move the application from an on-premises data center to AWS. The application includes web, application, and database tiers. The company needs to make the application fault tolerant and scalable. Some frequently accessed data must always be available across application servers. Frontend web servers need session persistence and must scale to meet increases in traffic.
Which solution will meet these requirements with the LEAST ongoing operational overhead?
- A . Run the application on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Elastic File System (Amazon EFS) for data that is frequently accessed between the web and application tiers. Store the frontend web server session data in Amazon Simple Queue Service (Amazon SQS).
- B . Run the application on Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use Amazon ElastiCache for Redis to cache frontend web server session data. Use Amazon Elastic Block Store (Amazon EBS) with Multi-Attach on EC2 instances that are distributed across multiple Availability Zones.
- C . Run the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Use ReplicaSets to run the web servers and applications. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system across all EKS pods to store frontend web server session data.
- D . Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Run the web servers and application as Kubernetes deployments in the EKS cluster. Store the frontend web server session data in an Amazon DynamoDB table. Create an Amazon Elastic File System (Amazon EFS) volume that all applications will mount at the time of deployment.
D
Explanation:
Deploying the application on Amazon EKS with managed node groups simplifies the operational overhead of managing the Kubernetes cluster. Running the web servers and application as Kubernetes deployments ensures that the desired number of pods are always running and can scale up or down as needed. Storing the frontend web server session data in an Amazon DynamoDB table provides a fast, scalable, and durable storage option that can be accessed across multiple Availability Zones. Creating an Amazon EFS volume that all applications will mount at the time of deployment allows the application to share data that is frequently accessed between the web and application tiers.
Reference:
https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html
https://docs.aws.amazon.com/eks/latest/userguide/deployments.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
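The session-storage piece of option D can be sketched as below: any pod can serve any user because session state lives in DynamoDB rather than in pod memory. Table and attribute names are hypothetical, and the TTL attribute lets DynamoDB expire stale sessions automatically.

```python
# Sketch: store frontend web session state in a DynamoDB table with a TTL
# attribute. Table name and attribute names below are hypothetical.

import json
import time


def make_session_item(session_id: str, data: dict, ttl_seconds: int = 3600):
    """Build a DynamoDB item for one web session (low-level attribute format)."""
    return {
        "session_id": {"S": session_id},  # partition key
        "data": {"S": json.dumps(data)},
        "expires_at": {"N": str(int(time.time()) + ttl_seconds)},  # TTL attr
    }


if __name__ == "__main__":
    import boto3

    dynamodb = boto3.client("dynamodb")
    dynamodb.put_item(TableName="web-sessions",
                      Item=make_session_item("sess-123", {"user": "alice"}))
```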
A company’s solutions architect is reviewing a new internally developed application in a sandbox AWS account. The application uses an AWS Auto Scaling group of Amazon EC2 instances that have an IAM instance profile attached. Part of the application logic creates and accesses secrets from AWS Secrets Manager. The company has an AWS Lambda function that calls the application API to test the functionality. The company also has created an AWS CloudTrail trail in the account.
The application’s developer has attached the SecretsManagerReadWrite AWS managed IAM policy to an IAM role. The IAM role is associated with the instance profile that is attached to the EC2 instances.
The solutions architect has invoked the Lambda function for testing.
The solutions architect must replace the SecretsManagerReadWrite policy with a new policy that provides least privilege access to the Secrets Manager actions that the application requires.
What is the MOST operationally efficient solution that meets these requirements?
- A . Generate a policy based on CloudTrail events for the IAM role. Use the generated policy output to create a new IAM policy. Use the newly generated IAM policy to replace the SecretsManagerReadWrite policy that is attached to the IAM role.
- B . Create an analyzer in AWS Identity and Access Management Access Analyzer. Use the IAM role’s Access Advisor findings to create a new IAM policy. Use the newly created IAM policy to replace the SecretsManagerReadWrite policy that is attached to the IAM role.
- C . Use the aws cloudtrail lookup-events AWS CLI command to filter and export CloudTrail events that are related to Secrets Manager. Use a new IAM policy that contains the actions from CloudTrail to replace the SecretsManagerReadWrite policy that is attached to the IAM role.
- D . Use the IAM policy simulator to generate an IAM policy for the IAM role. Use the newly generated IAM policy to replace the SecretsManagerReadWrite policy that is attached to the IAM role.
A
Explanation:
IAM Access Analyzer policy generation can analyze the CloudTrail events recorded for the IAM role and generate a policy that contains only the Secrets Manager actions the application actually used during the Lambda test invocation. Replacing the SecretsManagerReadWrite managed policy with this generated policy grants least privilege with the least manual effort. The IAM policy simulator only evaluates existing policies and does not generate them, Access Advisor findings report service-level rather than action-level activity, and manually filtering CloudTrail events with the aws cloudtrail lookup-events command requires building and validating the policy by hand, which is less operationally efficient.
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_analyzer-policy-generation.html
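Whichever mechanism produces the action list, the final step is the same: replace the broad managed policy on the role with a scoped customer managed policy. A minimal sketch, in which the observed action list, secret ARN pattern, role name, and policy name are all hypothetical placeholders:

```python
# Sketch: create a least-privilege Secrets Manager policy and swap it onto
# the instance profile's role in place of SecretsManagerReadWrite. The
# observed actions and all names/ARNs below are hypothetical.

import json

OBSERVED_ACTIONS = [                      # assumed output of policy generation
    "secretsmanager:CreateSecret",
    "secretsmanager:GetSecretValue",
]


def least_privilege_policy(actions, secret_arn_pattern):
    """Build a policy document allowing only the observed actions."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),
            "Resource": secret_arn_pattern,
        }],
    })


if __name__ == "__main__":
    import boto3

    iam = boto3.client("iam")
    policy = iam.create_policy(
        PolicyName="app-secrets-least-privilege",
        PolicyDocument=least_privilege_policy(
            OBSERVED_ACTIONS,
            "arn:aws:secretsmanager:us-east-1:123456789012:secret:app/*"),
    )["Policy"]
    iam.attach_role_policy(RoleName="app-instance-role",
                           PolicyArn=policy["Arn"])
    iam.detach_role_policy(
        RoleName="app-instance-role",
        PolicyArn="arn:aws:iam::aws:policy/SecretsManagerReadWrite")
```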
