Practice Free SAP-C02 Exam Online Questions
A company is running an application in the AWS Cloud. The application runs on containers in an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS tasks use the Fargate launch type. The application’s data is relational and is stored in Amazon Aurora MySQL. To meet regulatory requirements, the application must be able to recover to a separate AWS Region in the event of an
application failure. In case of a failure, no data can be lost.
Which solution will meet these requirements with the LEAST amount of operational overhead?
- A . Provision an Aurora Replica in a different Region.
- B . Set up AWS DataSync for continuous replication of the data to a different Region.
- C . Set up AWS Database Migration Service (AWS DMS) to perform a continuous replication of the data to a different Region.
- D . Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule a snapshot every 5 minutes.
A
Explanation:
Provisioning an Aurora Replica in a different Region meets the requirement that the application can recover to a separate AWS Region with no data loss, and it does so with the least amount of operational overhead because Aurora manages the cross-Region replication for you.
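As a rough illustration of option A, the sketch below uses boto3 to provision a cross-Region Aurora MySQL replica cluster and add a reader instance to it. All identifiers, Regions, and ARNs are made-up placeholders, not values from the question.

```python
# Hypothetical sketch: cross-Region disaster recovery for Aurora MySQL by
# creating a replica cluster in a second Region. Identifiers and ARNs are
# illustrative assumptions.
import boto3

# RDS client in the disaster recovery Region
rds_dr = boto3.client("rds", region_name="us-west-2")

# Create a replica Aurora cluster that continuously replicates from the
# primary cluster in the source Region (identified by its ARN).
rds_dr.create_db_cluster(
    DBClusterIdentifier="app-aurora-dr",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:cluster:app-aurora-primary"
    ),
    KmsKeyId="alias/aws/rds",  # needed when the source cluster is encrypted
)

# Add a reader instance to the replica cluster so it can serve traffic
# (and be promoted) after a Regional failover.
rds_dr.create_db_instance(
    DBInstanceIdentifier="app-aurora-dr-instance-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="app-aurora-dr",
)
```

In a Regional failure, the replica cluster can be promoted to a standalone cluster and the application's ECS tasks redeployed against it.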
A company uses an Amazon Redshift cluster to ingest data from various sources. The data is shared with other internal applications for analysis and reporting.
The cluster has eight ra3.4xlarge nodes. Data ingestion runs daily from midnight to 8 AM and takes 3 hours. The cluster has 85% average CPU utilization during ingestion. The cluster uses on-demand node pricing and is paused outside of the 8-hour daily ingestion window. Snapshots are enabled on the cluster.
The company wants to optimize this workload to reduce costs.
Which solution will meet these requirements?
- A . Create a new Redshift cluster with eight ra3.4xlarge nodes in concurrency scaling mode by using the most recent snapshot from the existing cluster. Modify the internal applications to retrieve data from the new Redshift cluster. Shut down the existing Redshift cluster. Purchase eight 1-year All Upfront Redshift reserved nodes.
- B . Create a new Redshift cluster with six ra3.16xlarge nodes by using the most recent snapshot from the existing cluster. Enable auto scaling. Modify the internal applications to retrieve data from the new Redshift cluster. Shut down the existing Redshift cluster.
- C . Create a new Redshift Serverless endpoint with 64 Redshift Processing Units (RPUs) by using the most recent snapshot from the existing Redshift cluster. Update the internal applications to retrieve data from the new Redshift Serverless endpoint. Delete the existing Redshift cluster.
- D . Configure Redshift Spectrum on the existing Redshift cluster. Set up IAM permissions to allow Redshift Spectrum to access Amazon S3. Unload data from the existing cluster to an S3 bucket. Update the internal applications to query the S3 data.
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The current Redshift configuration uses eight ra3.4xlarge nodes with on-demand pricing and is paused outside an 8-hour daily window. The ingestion workload runs only 3 hours per day and has about 85% CPU utilization during ingestion, which indicates that the existing node size and count are adequate for performance. The main opportunity is cost optimization for a workload that is time-bound and does not require a continuously running provisioned cluster.
Amazon Redshift Serverless is designed for such intermittent or variable workloads. With Redshift Serverless, you configure compute capacity in terms of Redshift Processing Units (RPUs) and pay per use based on the RPU-hours consumed when queries and data ingestion jobs are actually running. When there is no activity, there is no compute charge. Redshift Serverless can also be created from an existing provisioned Redshift snapshot, allowing an easy migration path from a node-based cluster to a serverless endpoint without re-ingesting or manually moving data.
Option C creates a new Redshift Serverless endpoint with 64 RPUs from the most recent snapshot. This takes advantage of snapshots that already exist on the current cluster. The internal applications are then updated to read from the new serverless endpoint, and the existing cluster is deleted to stop incurring node-based charges. For a workload that runs only a few hours per day, moving to serverless compute is usually more cost-effective than maintaining a provisioned cluster, even if that cluster is paused outside the ingestion window, because serverless eliminates the need to manage cluster sizing and pausing and optimizes billing around actual usage.
Option A proposes staying on provisioned RA3 nodes and additionally purchasing 1-year Reserved Nodes. Reserved nodes commit the company to paying for node hours whether or not the cluster is paused or fully utilized, and they are designed to be cost-effective for steady, predictable, long-running workloads. For an 8-hour-per-day workload, especially with only 3 hours of intensive ingestion, Reserved Nodes are likely to be less cost-optimal than an on-demand, usage-based serverless model. Also, concurrency scaling is primarily for handling bursty query concurrency, not for reducing the cost of a regular ingestion workload.
Option B replaces the existing eight ra3.4xlarge nodes with six ra3.16xlarge nodes and enables auto scaling. ra3.16xlarge provides significantly more capacity per node and is generally more expensive. The original workload already runs at acceptable utilization (about 85% during ingestion), so scaling up to larger nodes is not necessary for performance. Adding auto scaling on top of this likely increases costs and complexity without a clear benefit for a predictable, time-bound daily ingestion pattern.
Option D uses Redshift Spectrum and unloads data from the cluster to Amazon S3 for querying. Redshift Spectrum allows querying data stored in S3 using external tables from within a Redshift cluster, and is charged per amount of data scanned. However, the option still depends on maintaining a Redshift cluster and does not fundamentally optimize the compute cost of the ingestion workload. It also changes query patterns and data access architecture for the internal applications and may increase query cost if large volumes of S3 data are scanned by Spectrum.
Therefore, using Redshift Serverless created from the existing snapshot and paying only for compute usage during ingestion and queries, as described in option C, is the best way to optimize cost for this specific workload pattern.
Reference: AWS documentation for Amazon Redshift Serverless concepts such as RPU-based pricing, using snapshots from provisioned clusters to create serverless workgroups, and cost optimization guidance for intermittent and spiky analytics workloads. AWS guidance on reserved node usage and suitability for steady-state provisioned Redshift clusters. AWS descriptions of Amazon Redshift Spectrum and its pricing and use cases for querying data directly in Amazon S3.
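As an illustrative sketch of option C (not an exact migration procedure), the boto3 calls below create a Redshift Serverless namespace and a 64-RPU workgroup and then restore the provisioned cluster's latest snapshot into the namespace. All names and the snapshot identifier are assumptions.

```python
# Hypothetical sketch: move from a provisioned Redshift cluster to Redshift
# Serverless by restoring the most recent snapshot into a new namespace.
import boto3

serverless = boto3.client("redshift-serverless", region_name="us-east-1")

# Namespace holds the database objects; the workgroup provides compute (RPUs).
serverless.create_namespace(namespaceName="ingestion-ns")
serverless.create_workgroup(
    workgroupName="ingestion-wg",
    namespaceName="ingestion-ns",
    baseCapacity=64,  # Redshift Processing Units
)

# Restore the most recent snapshot of the provisioned cluster into the namespace.
serverless.restore_from_snapshot(
    namespaceName="ingestion-ns",
    workgroupName="ingestion-wg",
    snapshotName="rs:ingest-cluster-2024-01-01-00-00",  # placeholder snapshot name
)
```

After the internal applications are pointed at the serverless endpoint, the provisioned cluster can be deleted so node-based charges stop.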
A company has an on-premises Microsoft SQL Server database that writes a nightly 200 GB export to a local drive. The company wants to move the backups to more robust cloud storage on Amazon S3. The company has set up a 10 Gbps AWS Direct Connect connection between the on-premises data center and AWS.
Which solution meets these requirements MOST cost-effectively?
- A . Create a new S3 bucket. Deploy an AWS Storage Gateway file gateway within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to the new SMB file share.
- B . Create an Amazon FSx for Windows File Server Single-AZ file system within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to an SMB file share on the Amazon FSx file system. Enable nightly backups.
- C . Create an Amazon FSx for Windows File Server Multi-AZ file system within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to an SMB file share on the Amazon FSx file system. Enable nightly backups.
- D . Create a new S3 bucket. Deploy an AWS Storage Gateway volume gateway within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to the new SMB file share on the volume gateway, and automate copies of this data to an S3 bucket.
A
Explanation:
https://docs.aws.amazon.com/filegateway/latest/files3/CreatingAnSMBFileShare.html
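To illustrate the file gateway approach in option A, the hedged boto3 sketch below creates the SMB file share backed by the new S3 bucket, assuming a file gateway has already been activated in the VPC reachable over Direct Connect. The gateway ARN, IAM role, and bucket name are placeholders.

```python
# Rough sketch: expose an S3 bucket as an SMB share through an activated
# Storage Gateway file gateway. ARNs, role, and bucket are placeholders.
import boto3
import uuid

sgw = boto3.client("storagegateway", region_name="us-east-1")

# The SQL Server host writes its nightly export to this share, and the
# gateway uploads the data to the S3 bucket.
sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::sql-nightly-exports",
    Authentication="GuestAccess",  # Active Directory authentication is also supported
)
```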
A solutions architect at a large company needs to set up network security for outbound traffic to the internet from all AWS accounts within an organization in AWS Organizations. The organization has more than 100 AWS accounts, and the accounts route to each other by using a centralized AWS Transit Gateway. Each account has both an internet gateway and a NAT gateway for outbound traffic to the internet. The company deploys resources only into a single AWS Region.
The company needs the ability to add centrally managed rule-based filtering on all outbound traffic to the internet for all AWS accounts in the organization. The peak load of outbound traffic will not exceed 25 Gbps in each Availability Zone.
Which solution meets these requirements?
- A . Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Create an Auto Scaling group of Amazon EC2 instances that run an open-source internet proxy for rule-based filtering across all Availability Zones in the Region. Modify all default routes to point to the proxy’s Auto Scaling group.
- B . Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Use an AWS Network Firewall firewall for rule-based filtering. Create Network Firewall endpoints in each Availability Zone. Modify all default routes to point to the Network Firewall endpoints.
- C . Create an AWS Network Firewall firewall for rule-based filtering in each AWS account. Modify all default routes to point to the Network Firewall firewalls in each account.
- D . In each AWS account, create an Auto Scaling group of network-optimized Amazon EC2 instances that run an open-source internet proxy for rule-based filtering. Modify all default routes to point to the proxy’s Auto Scaling group.
B
Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/
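The sketch below illustrates the kind of centrally managed, rule-based filtering that option B enables: a stateful domain-list rule group attached to a Network Firewall policy, created with boto3. The rule group name, domains, and policy name are assumptions for the example.

```python
# Illustrative sketch: allow-list outbound domains with AWS Network Firewall
# in the central egress VPC. Names and domains are placeholders.
import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

rule_group = nfw.create_rule_group(
    RuleGroupName="egress-allowed-domains",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".amazonaws.com", ".example-saas.com"],
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "ALLOWLIST",
            }
        }
    },
)

# Attach the rule group to a firewall policy; the firewall endpoints created
# from this policy become the next hop in each Availability Zone's default route.
nfw.create_firewall_policy(
    FirewallPolicyName="central-egress-policy",
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [
            {"ResourceArn": rule_group["RuleGroupResponse"]["RuleGroupArn"]}
        ],
    },
)
```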
A company developed a pilot application by using AWS Elastic Beanstalk and Java. To save costs during development, the company’s development team deployed the application into a single-instance environment. Recent tests indicate that the application consumes more CPU than expected. CPU utilization is regularly greater than 85%, which causes some performance bottlenecks.
A solutions architect must mitigate the performance issues before the company launches the application to production.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a new Elastic Beanstalk application. Select a load-balanced environment type. Select all Availability Zones. Add a scale-out rule that will run if the maximum CPU utilization is over 85% for 5 minutes.
- B . Create a second Elastic Beanstalk environment. Apply the traffic-splitting deployment policy. Specify a percentage of incoming traffic to direct to the new environment if the average CPU utilization is over 85% for 5 minutes.
- C . Modify the existing environment’s capacity configuration to use a load-balanced environment type. Select all Availability Zones. Add a scale-out rule that will run if the average CPU utilization is over 85% for 5 minutes.
- D . Select the Rebuild environment action with the load balancing option. Select an Availability Zone. Add a scale-out rule that will run if the sum of CPU utilization is over 85% for 5 minutes.
C
Explanation:
This solution will meet the requirements with the least operational overhead because it allows the company to modify the existing environment’s capacity configuration, so it becomes a load-balanced environment type. By selecting all availability zones, the company can ensure that the application is running in multiple availability zones, which can help to improve the availability and scalability of the application. The company can also add a scale-out rule that will run if the average CPU utilization is over 85% for 5 minutes, which can help to mitigate the performance issues. This solution does not require creating new Elastic Beanstalk environments or rebuilding the existing one, which reduces the operational overhead.
You can refer to the AWS Elastic Beanstalk documentation for more information on how to use this service: https://aws.amazon.com/elasticbeanstalk/. You can also refer to the AWS Auto Scaling documentation for more information: https://aws.amazon.com/autoscaling/
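A minimal boto3 sketch of option C follows, assuming the existing environment is named pilot-env; it switches the environment to the LoadBalanced type and configures an Auto Scaling trigger on average CPU utilization using standard Elastic Beanstalk option namespaces. The environment name and capacity values are placeholders.

```python
# Hypothetical sketch: convert an existing single-instance Elastic Beanstalk
# environment to a load-balanced one and add a CPU-based scale-out trigger.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.update_environment(
    EnvironmentName="pilot-env",
    OptionSettings=[
        # Change from SingleInstance to LoadBalanced
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        # Allow the Auto Scaling group to grow beyond one instance
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MaxSize", "Value": "6"},
        # Scale out when average CPU utilization exceeds 85% for 5 minutes
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "UpperThreshold", "Value": "85"},
        {"Namespace": "aws:autoscaling:trigger",
         "OptionName": "BreachDuration", "Value": "5"},
    ],
)
```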
A solutions architect must update an application environment within AWS Elastic Beanstalk using a blue/green deployment methodology. The solutions architect creates an environment that is identical to the existing application environment and deploys the application to the new environment.
What should be done next to complete the update?
- A . Redirect to the new environment using Amazon Route 53
- B . Select the Swap Environment URLs option
- C . Replace the Auto Scaling launch configuration
- D . Update the DNS records to point to the green environment
B
Explanation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
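For reference, the swap itself is a single API call; the boto3 sketch below uses placeholder names for the blue (current) and green (new) environments.

```python
# Minimal sketch of the URL swap that completes a blue/green deployment.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Exchange the CNAMEs of the two environments so traffic shifts to the newly
# deployed version without any external DNS changes.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",
    DestinationEnvironmentName="myapp-green",
)
```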
A travel company built a web application that uses Amazon SES to send email notifications to users. The company needs to enable logging to help troubleshoot email delivery issues. The company also needs the ability to do searches that are based on recipient, subject, and time sent.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)
- A . Create an Amazon SES configuration set with Amazon Data Firehose as the destination. Choose to send logs to an Amazon S3 bucket.
- B . Enable AWS CloudTrail logging. Specify an Amazon S3 bucket as the destination for the logs.
- C . Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent.
- D . Create an Amazon CloudWatch log group. Configure Amazon SES to send logs to the log group.
- E . Use Amazon Athena to query the logs in Amazon CloudWatch for recipient, subject, and time sent.
A, C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The company wants logging that helps troubleshoot email delivery issues and also wants to search by recipient, subject, and time sent. Amazon SES provides event publishing for email sending and delivery events through configuration sets. With a configuration set, SES can publish sending events such as send, delivery, bounce, complaint, reject, and rendering failure to destinations such as Amazon CloudWatch, Amazon SNS, or Amazon Kinesis Data Firehose. For building a searchable log store, delivering these events into Amazon S3 through Kinesis Data Firehose is an effective approach because S3 provides durable storage and integrates well with query services.
Option A creates an SES configuration set with a Kinesis Data Firehose destination that delivers logs to an S3 bucket. This captures detailed SES event data that is directly useful for troubleshooting delivery issues and retaining historical records for analysis.
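A hedged sketch of this setup using the SES v2 API is shown below: it creates a configuration set and attaches a Firehose event destination that delivers sending events to S3 through an existing delivery stream. The configuration set name, delivery stream ARN, and IAM role ARN are assumptions.

```python
# Hypothetical sketch: publish SES sending events to S3 via a Firehose
# delivery stream. ARNs and names are placeholders.
import boto3

ses = boto3.client("sesv2", region_name="us-east-1")

ses.create_configuration_set(ConfigurationSetName="email-logging")

ses.create_configuration_set_event_destination(
    ConfigurationSetName="email-logging",
    EventDestinationName="firehose-to-s3",
    EventDestination={
        "Enabled": True,
        "MatchingEventTypes": ["SEND", "DELIVERY", "BOUNCE", "COMPLAINT"],
        "KinesisFirehoseDestination": {
            "IamRoleArn": "arn:aws:iam::111122223333:role/ses-firehose-role",
            "DeliveryStreamArn": (
                "arn:aws:firehose:us-east-1:111122223333:deliverystream/ses-events"
            ),
        },
    },
)
```

Emails must then be sent with this configuration set specified so their events are captured.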
Once logs are stored in Amazon S3, Amazon Athena can query the data using SQL. Athena is designed
to query data in S3 and is well suited for ad hoc searches. This meets the requirement to search based on recipient, subject, and time sent, assuming the event schema includes these fields (or they are included in the published event payload). Therefore, option C completes the solution by enabling searches over the stored log dataset.
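As an illustration, an ad hoc Athena search over the S3 log data might look like the sketch below, assuming an external table named ses_events has already been defined over the Firehose output; the database, table, column paths, and bucket names are assumptions.

```python
# Hypothetical sketch: search SES event logs in S3 with Athena by recipient,
# subject, and time sent. Table and column names are assumptions.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

athena.start_query_execution(
    QueryString="""
        SELECT mail.destination, mail.commonheaders.subject, mail.timestamp
        FROM ses_events
        WHERE contains(mail.destination, 'user@example.com')
          AND mail.timestamp BETWEEN '2024-01-01' AND '2024-01-31'
    """,
    QueryExecutionContext={"Database": "email_logs"},
    ResultConfiguration={"OutputLocation": "s3://ses-athena-results/"},
)
```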
Option B (CloudTrail) records API activity, such as calls made to SES APIs, but it is not designed to capture per-message delivery outcomes (deliveries, bounces, complaints) in a way that supports troubleshooting delivery behavior and detailed email event searching. CloudTrail is useful for auditing who called SES APIs, not for tracking message-level delivery events and outcomes.
Option D (CloudWatch log group) is another valid SES event publishing destination, but when the requirement is to perform flexible searches across multiple dimensions over a potentially large historical dataset, storing the logs in S3 and querying them with Athena is a more direct and scalable pattern.
Option E is incorrect because Amazon Athena queries data stored in Amazon S3, not CloudWatch Logs. CloudWatch Logs has its own query mechanism (CloudWatch Logs Insights), but Athena is not used to query CloudWatch Logs directly in the way described.
Therefore, the best combination to satisfy logging and searchable analysis requirements is to publish SES events to S3 via Kinesis Data Firehose (option A) and query those logs with Athena (option C).
References:
AWS documentation on Amazon SES configuration sets and event publishing destinations including Kinesis Data Firehose and Amazon S3 for email sending and delivery event logs.
AWS documentation on Amazon Athena for querying structured or semi-structured log data stored in Amazon S3 using SQL.
A company needs to use an AWS Transfer Family SFTP-enabled server with an Amazon S3 bucket to receive updates from a third-party data supplier. The data is encrypted with Pretty Good Privacy (PGP) encryption. The company needs a solution that will automatically decrypt the data after the company receives the data.
A solutions architect will use a Transfer Family managed workflow. The company has created an IAM service role by using an IAM policy that allows access to AWS Secrets Manager and the S3 bucket. The role's trust relationship allows the transfer.amazonaws.com service to assume the role.
What should the solutions architect do next to complete the solution for automatic decryption?
- A . Store the PGP public key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the nominal step. Associate the workflow with the Transfer Family server.
- B . Store the PGP private key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the exception handler. Associate the workflow with the SFTP user.
- C . Store the PGP private key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the nominal step. Associate the workflow with the Transfer Family server.
- D . Store the PGP public key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the exception handler. Associate the workflow with the SFTP user.
C
Explanation:
Store the PGP Private Key:
Step 1: In the AWS Management Console, navigate to AWS Secrets Manager.
Step 2: Store the PGP private key in Secrets Manager. Ensure the key is encrypted and properly secured.
Set Up the Transfer Family Managed Workflow:
Step 1: In the AWS Transfer Family console, create a new managed workflow.
Step 2: Add a nominal step to the workflow that includes the decryption of the files. Configure this step with the PGP decryption parameters, referencing the PGP private key stored in Secrets Manager.
Step 3: Associate this workflow with the Transfer Family SFTP server, ensuring that incoming files are automatically decrypted upon receipt.
This solution ensures that the data is securely decrypted as it is transferred from the SFTP server to
the S3 bucket, automating the decryption process and leveraging AWS Secrets Manager for key
management.
Reference:
AWS Transfer Family Documentation
Using AWS Secrets Manager for Managing Secrets
AWS Transfer Family Managed Workflows
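A hedged boto3 sketch of option C follows: it creates a managed workflow whose nominal step is a PGP decrypt step and attaches the workflow to the SFTP server, assuming the private key is already stored in Secrets Manager under the naming convention Transfer Family expects and that the IAM execution role from the question exists. The server ID, bucket, and ARNs are placeholders.

```python
# Hypothetical sketch: Transfer Family managed workflow with a PGP decrypt
# step, attached to the SFTP server's on-upload (nominal) workflow.
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

# Managed workflow whose nominal step decrypts incoming PGP-encrypted files.
workflow = transfer.create_workflow(
    Description="Decrypt PGP files from the data supplier",
    Steps=[
        {
            "Type": "DECRYPT",
            "DecryptStepDetails": {
                "Name": "pgp-decrypt",
                "Type": "PGP",
                "SourceFileLocation": "${original.file}",
                "DestinationFileLocation": {
                    "S3FileLocation": {"Bucket": "supplier-data", "Key": "decrypted/"}
                },
            },
        }
    ],
)

# Attach the workflow to the SFTP-enabled server so it runs on every upload.
transfer.update_server(
    ServerId="s-1234567890abcdef0",
    WorkflowDetails={
        "OnUpload": [
            {
                "WorkflowId": workflow["WorkflowId"],
                "ExecutionRole": "arn:aws:iam::111122223333:role/transfer-workflow-role",
            }
        ]
    },
)
```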
A life sciences company is using a combination of open source tools to manage data analysis workflows and Docker containers running on servers in its on-premises data center to process genomics data. Sequencing data is generated and stored on a local storage area network (SAN), and then the data is processed. The research and development teams are running into capacity issues and have decided to re-architect their genomics analysis platform on AWS to scale based on workload demands and reduce the turnaround time from weeks to days.
The company has a high-speed AWS Direct Connect connection. Sequencers will generate around 200 GB of data for each genome, and individual jobs can take several hours to process the data with ideal compute capacity. The end result will be stored in Amazon S3. The company is expecting 10-15 job requests each day.
Which solution meets these requirements?
- A . Use regularly scheduled AWS Snowball Edge devices to transfer the sequencing data into AWS. When AWS receives the Snowball Edge device and the data is loaded into Amazon S3, use S3 events to trigger an AWS Lambda function to process the data.
- B . Use AWS Data Pipeline to transfer the sequencing data to Amazon S3. Use S3 events to trigger an Amazon EC2 Auto Scaling group to launch custom-AMI EC2 instances running the Docker containers to process the data.
- C . Use AWS DataSync to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow. Store the Docker images in Amazon Elastic Container Registry (Amazon ECR) and trigger AWS Batch to run the container and process the sequencing data.
- D . Use an AWS Storage Gateway file gateway to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Batch job that runs on Amazon EC2 instances running the Docker containers to process the data.
C
Explanation:
Because the company already has a high-speed Direct Connect connection and each genome produces only about 200 GB of data, AWS DataSync can transfer the sequencing data to Amazon S3 faster and with less overhead than regularly shipping Snowball Edge devices. Once the data is in S3, S3 events can trigger an AWS Lambda function that starts an AWS Step Functions workflow. The Docker images can be stored in Amazon Elastic Container Registry (Amazon ECR), and AWS Batch can run the containers to process the sequencing data.
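As a small illustration of the event-driven glue in option C, the Lambda handler below (with a placeholder state machine ARN) starts one Step Functions execution per uploaded sequencing object; the state machine would then submit the AWS Batch job that runs the containers.

```python
# Illustrative Lambda handler: an S3 object-created event for a newly
# transferred sequencing file starts a Step Functions execution.
import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN for the genomics pipeline state machine
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:genomics-pipeline"

def handler(event, context):
    # One execution per uploaded sequencing object
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
```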
Set the DefaultTargetCapacityType parameter to Spot. Replace the NLB with an Application Load Balancer.
D. Create an Auto Scaling group. Attach the Auto Scaling group to the target group of the NLB. Set the minimum capacity to 4 and the maximum capacity to 28. Purchase Reserved Instances for four instances.
