Practice Free SAP-C02 Exam Online Questions
A company has an application that uses AWS Key Management Service (AWS KMS) to encrypt and decrypt data. The application stores data in an Amazon S3 bucket in an AWS Region. Company security policies require that the data is encrypted before being uploaded to S3 and decrypted when read. The S3 bucket is replicated to other AWS Regions.
A solutions architect must design a solution so that the application can encrypt and decrypt data across Regions using the same key.
- A . Create a KMS multi-Region primary key. Use it to create KMS multi-Region replica keys in each Region. Update application code to use the replica key in each Region.
- B . Create a new customer-managed KMS key in each additional Region. Update application code to use the key in each Region.
- C . Use AWS Private CA to issue TLS certificates and replicate them with AWS RAM.
- D . Export the KMS key material to Systems Manager Parameter Store in each Region. Update the app to use those.
A
Explanation:
AWS KMS multi-Region keys allow encryption in one Region and decryption in another. They are built specifically for cross-Region replication scenarios such as Amazon S3 Cross-Region Replication (CRR).
With a multi-Region key, you create a primary key in one Region and replica keys in others. The replicas share the same key material and key ID as the primary.
Option B would fail because each Region would have an independent key with a different key ID, so ciphertext encrypted in one Region could not be decrypted in another.
Option C is unrelated: AWS Private CA issues TLS certificates; it does not provide symmetric encryption keys.
Option D is insecure and violates AWS KMS key management best practices, which are designed so that key material never leaves the service unencrypted.
Reference: https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
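For illustration, here is a minimal boto3 sketch of the pattern option A describes; the Regions, key description, and sample plaintext are assumptions, not part of the question:

```python
import boto3

# Create the multi-Region primary key in the primary Region.
kms_primary = boto3.client("kms", region_name="eu-central-1")
primary = kms_primary.create_key(
    Description="Example multi-Region primary key",  # hypothetical description
    MultiRegion=True,
)
primary_key_id = primary["KeyMetadata"]["KeyId"]

# Replicate the key into another Region; the replica shares the
# same key ID and key material as the primary.
kms_primary.replicate_key(
    KeyId=primary_key_id,
    ReplicaRegion="us-east-1",  # hypothetical replica Region
)

# Data encrypted with the primary can be decrypted with the replica,
# because both keys share the same key material and key ID.
ciphertext = kms_primary.encrypt(
    KeyId=primary_key_id, Plaintext=b"hello"
)["CiphertextBlob"]
kms_replica = boto3.client("kms", region_name="us-east-1")
plaintext = kms_replica.decrypt(
    KeyId=primary_key_id, CiphertextBlob=ciphertext
)["Plaintext"]
```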
A company is building an electronic document management system in which users upload their documents. The application stack is entirely serverless and runs on AWS in the eu-central-1 Region. The system includes a web application that uses an Amazon CloudFront distribution for delivery with Amazon S3 as the origin. The web application communicates with Amazon API Gateway Regional endpoints. The API Gateway APIs call AWS Lambda functions that store metadata in an Amazon Aurora Serverless database and put the documents into an S3 bucket.
The company is growing steadily and has completed a proof of concept with its largest customer. The company must improve latency outside of Europe.
Which combination of actions will meet these requirements? (Select TWO.)
- A . Enable S3 Transfer Acceleration on the S3 bucket. Ensure that the web application uses the Transfer Acceleration signed URLs.
- B . Create an accelerator in AWS Global Accelerator. Attach the accelerator to the CloudFront distribution.
- C . Change the API Gateway Regional endpoints to edge-optimized endpoints.
- D . Provision the entire stack in two other locations that are spread across the world. Use global databases on the Aurora Serverless cluster.
- E . Add an Amazon RDS proxy between the Lambda functions and the Aurora Serverless database.
A,C
Explanation:
S3 Transfer Acceleration routes uploads from distant users through the nearest CloudFront edge location, reducing upload latency for the documents. Changing the API Gateway endpoints from Regional to edge-optimized likewise routes API requests through the CloudFront network. AWS Global Accelerator (option B) cannot be attached to a CloudFront distribution, as its FAQ notes.
https://aws.amazon.com/global-accelerator/faqs/
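As a sketch of what options A and C involve, the following boto3 calls enable Transfer Acceleration on a bucket, sign an upload URL against the accelerate endpoint, and switch a REST API to an edge-optimized endpoint. The bucket name and API ID are hypothetical:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="example-docs-bucket",  # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# A client configured for the accelerate endpoint signs URLs that
# enter the AWS network at the nearest edge location.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
url = s3_accel.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-docs-bucket", "Key": "upload/doc.pdf"},
    ExpiresIn=3600,
)

# Switch the API Gateway REST API from Regional to edge-optimized.
apigw = boto3.client("apigateway")
apigw.update_rest_api(
    restApiId="a1b2c3",  # hypothetical API ID
    patchOperations=[{
        "op": "replace",
        "path": "/endpointConfiguration/types/REGIONAL",
        "value": "EDGE",
    }],
)
```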
A company is running an event ticketing platform on AWS and wants to optimize the platform’s cost-effectiveness. The platform is deployed on Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 and is backed by an Amazon RDS for MySQL DB instance. The company is developing new application features to run on Amazon EKS with AWS Fargate.
The platform experiences infrequent high peaks in demand. The surges in demand depend on event dates.
Which solution will provide the MOST cost-effective setup for the platform?
- A . Purchase Standard Reserved Instances for the EC2 instances that the EKS cluster uses in its baseline load. Scale the cluster with Spot Instances to handle peaks. Purchase 1-year All Upfront Reserved Instances for the database to meet predicted peak load for the year.
- B . Purchase Compute Savings Plans for the predicted medium load of the EKS cluster. Scale the cluster with On-Demand Capacity Reservations based on event dates for peaks. Purchase 1-year No Upfront Reserved Instances for the database to meet the predicted base load. Temporarily scale out database read replicas during peaks.
- C . Purchase EC2 Instance Savings Plans for the predicted base load of the EKS cluster. Scale the cluster with Spot Instances to handle peaks. Purchase 1-year All Upfront Reserved Instances for the database to meet the predicted base load. Temporarily scale up the DB instance manually during peaks.
- D . Purchase Compute Savings Plans for the predicted base load of the EKS cluster. Scale the cluster with Spot Instances to handle peaks. Purchase 1-year All Upfront Reserved Instances for the database to meet the predicted base load. Temporarily scale up the DB instance manually during peaks.
B
Explanation:
Options A, C, and D all scale the EKS cluster with Spot Instances, but Spot capacity can be interrupted at short notice, which makes it a poor fit for the revenue-critical demand peaks of a ticketing platform. The company is also developing new features for AWS Fargate, and Compute Savings Plans apply to Fargate as well as EC2, so option B's commitment covers the platform's future footprint. On-Demand Capacity Reservations guarantee capacity for the known event dates, and temporarily scaling out read replicas absorbs database peaks without paying for peak capacity all year. https://aws.amazon.com/savingsplans/compute-pricing/
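For sizing the Compute Savings Plans commitment that option B relies on, Cost Explorer can generate a recommendation from historical usage. A minimal boto3 sketch, assuming the account has enough usage history:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Ask Cost Explorer to recommend a 1-year, no-upfront Compute
# Savings Plans commitment based on the last 30 days of usage.
resp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)
for rec in resp["SavingsPlansPurchaseRecommendation"][
    "SavingsPlansPurchaseRecommendationDetails"
]:
    print(rec["HourlyCommitmentToPurchase"], rec["EstimatedSavingsAmount"])
```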
A company has hundreds of AWS accounts. The company recently implemented a centralized internal process for purchasing new Reserved Instances and modifying existing Reserved Instances. This process requires all business units that want to purchase or modify Reserved Instances to submit requests to a dedicated team for procurement. Previously, business units directly purchased or modified Reserved Instances in their own respective AWS accounts autonomously.
A solutions architect needs to enforce the new process in the most secure way possible.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
- A . Ensure that all AWS accounts are part of an organization in AWS Organizations with all features enabled.
- B . Use AWS Config to report on the attachment of an IAM policy that denies access to the ec2:PurchaseReservedInstancesOffering action and the ec2:ModifyReservedInstances action.
- C . In each AWS account, create an IAM policy that denies the ec2:PurchaseReservedInstancesOffering action and the ec2:ModifyReservedInstances action.
- D . Create an SCP that denies the ec2:PurchaseReservedInstancesOffering action and theec2:ModifyReservedInstances action. Attach the SCP to each OU of the organization.
- E . Ensure that all AWS accounts are part of an organization in AWS Organizations that uses the consolidated billing feature.
A,D
Explanation:
"All features" is the default feature set that is available to AWS Organizations. It includes all the functionality of consolidated billing, plus advanced features that give you more control over accounts in your organization. For example, when all features are enabled, the management account of the organization has full control over what member accounts can do: it can apply SCPs to restrict the services and actions that users (including the root user) and roles in an account can access. Consolidated billing alone (option E) does not support SCPs, and per-account IAM policies (option C) could be altered by administrators in those accounts, so they do not enforce the process securely. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html#feature-set
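A minimal boto3 sketch of option D, creating the deny SCP and attaching it to one OU; the policy name, description, and OU ID are hypothetical:

```python
import json
import boto3

org = boto3.client("organizations")

# SCP that blocks Reserved Instance purchases and modifications
# everywhere it is attached.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "ec2:PurchaseReservedInstancesOffering",
            "ec2:ModifyReservedInstances",
        ],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="DenyRIManagement",  # hypothetical name
    Description="Only the central procurement team may manage RIs",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an OU; repeat for each OU in the organization.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # hypothetical OU ID
)
```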
A company has automated the nightly retraining of its machine learning models by using AWS Step Functions. The workflow consists of multiple steps that use AWS Lambda. Each step can fail for various reasons, and any failure causes a failure of the overall workflow.
A review reveals that the retraining has failed multiple nights in a row without the company noticing the failure. A solutions architect needs to improve the workflow so that notifications are sent for all types of failures in the retraining process.
Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)
- A . Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team’s mailing list.
- B . Create a task named "Email" that forwards the input arguments to the SNS topic.
- C . Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": ["States.ALL"] and "Next": "Email".
- D . Add a new email address to Amazon Simple Email Service (Amazon SES). Verify the email address.
- E . Create a task named "Email" that forwards the input arguments to the SES email address.
- F . Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": ["States.Runtime"] and "Next": "Email".
A,B,C
Explanation:
A. Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list. This creates the notification channel and subscribes the team's mailing list to it.
B. Create a task named "Email" that forwards the input arguments to the SNS topic. This task publishes the failure details to the topic, so the team receives an email whenever any step of the workflow fails.
C. Add a Catch field to all Task, Map, and Parallel states with "ErrorEquals": ["States.ALL"] and "Next": "Email". States.ALL matches every error type, so any failure in any state routes execution to the "Email" task. Option F's States.Runtime matches only runtime errors and would miss other failure types, which is why C is correct instead.
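For reference, a minimal state machine definition sketch showing the Catch-to-Email pattern; the Lambda function and SNS topic ARNs are hypothetical:

```python
import json

# Minimal Amazon States Language sketch: a Task state whose Catch
# routes any error ("States.ALL") to an "Email" task that publishes
# the error output to an SNS topic.
definition = {
    "StartAt": "RetrainModel",
    "States": {
        "RetrainModel": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-central-1:111111111111:function:retrain",  # hypothetical
            "Catch": [{
                "ErrorEquals": ["States.ALL"],
                "Next": "Email",
            }],
            "End": True,
        },
        "Email": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:eu-central-1:111111111111:retraining-failures",  # hypothetical
                "Message.$": "$",  # forward the error output as the message body
            },
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
```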
A company is migrating a monolithic on-premises .NET Framework production application to AWS. Application demand will grow exponentially in the next 6 months. The company must ensure that the application can scale appropriately.
The application currently connects to a Microsoft SQL Server transactional database. The company has well-documented source code for the application. Some business logic is contained within stored procedures.
A solutions architect must recommend a solution to redesign the application to meet the growth in demand.
Which solution will meet this requirement MOST cost-effectively?
- A . Use Amazon API Gateway APIs and Amazon EC2 Spot Instances to rehost the application with a scalable microservices architecture. Deploy the EC2 instances in a cluster placement group. Configure EC2 Auto Scaling. Store the data and stored procedures in Amazon RDS for SQL Server.
- B . Use AWS Application Migration Service to migrate the application to AWS Elastic Beanstalk. Deploy Elastic Beanstalk packages to configure and deploy the application as microservices. Deploy Elastic Beanstalk across multiple Availability Zones and configure auto scaling. Store the data and stored procedures in Amazon RDS for MySQL.
- C . Migrate the applications by using AWS App2Container. Use AWS Fargate in multiple AWS Regions to host the containers. Use Amazon API Gateway APIs and AWS Lambda functions to call the containers. Store the data and stored procedures in Amazon DynamoDB Accelerator (DAX).
- D . Use Amazon API Gateway APIs and AWS Lambda functions to decouple the application into microservices. Use the AWS Schema Conversion Tool (AWS SCT) to review and modify the stored procedures. Store the data in Amazon Aurora Serverless v2.
D
Explanation:
D is correct because this solution modernizes the application into a serverless architecture that uses API Gateway and Lambda for scalable microservices. Aurora Serverless v2 supports SQL workloads and scales automatically based on demand. The AWS Schema Conversion Tool (AWS SCT) helps review and convert the SQL Server stored procedures into Aurora-compatible formats. This setup provides cost efficiency, scalability, and minimal manual intervention.
A rehosts the application but does not refactor it into microservices.
B uses MySQL, which might not fully support the SQL Server-specific stored procedures.
C adds unnecessary complexity and loses relational database functionality by using DynamoDB with DAX.
Reference: https://docs.aws.amazon.com/aurora/latest/aurora-serverless/aurora-serverless.html
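A minimal boto3 sketch of provisioning the Aurora Serverless v2 cluster that option D calls for; the identifiers, username, and capacity range are assumptions:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL cluster with Serverless v2 scaling.
rds.create_db_cluster(
    DBClusterIdentifier="app-cluster",  # hypothetical identifier
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",           # hypothetical username
    ManageMasterUserPassword=True,      # let RDS manage the secret
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,             # ACUs at idle
        "MaxCapacity": 16,              # ACUs at peak
    },
)

# Add a Serverless v2 instance to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-cluster-writer",  # hypothetical identifier
    DBClusterIdentifier="app-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```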
A software development company has multiple engineers who are working remotely. The company is running Active Directory Domain Services (AD DS) on an Amazon EC2 instance. The company's security policy states that all internal, nonpublic services that are deployed in a VPC must be accessible through a VPN. Multi-factor authentication (MFA) must be used for access to the VPN.
What should a solutions architect do to meet these requirements?
- A . Create an AWS Site-to-Site VPN connection. Configure integration between the VPN and AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN connection.
- B . Create an AWS Client VPN endpoint. Create an AD Connector directory for integration with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN connection.
- C . Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub. Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to establish a VPN connection.
- D . Create an Amazon WorkLink endpoint. Configure integration between Amazon WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish a VPN connection.
B
Explanation:
Setting up an AWS Client VPN endpoint and integrating it with Active Directory Domain Services (AD DS) using an AD Connector directory enables secure remote access to internal services deployed in a VPC. Enabling multi-factor authentication (MFA) for AD Connector enhances security by adding an additional layer of authentication. This solution meets the company’s requirements for secure remote access through a VPN with MFA, ensuring that the security policy is adhered to while providing a seamless experience for the remote engineers.
AWS Documentation on AWS Client VPN and AD Connector provides detailed instructions on setting up a Client VPN endpoint and integrating it with existing Active Directory for authentication. This solution aligns with AWS best practices for secure remote access to AWS resources.
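A minimal boto3 sketch of creating the Client VPN endpoint with directory-service authentication against an AD Connector; the client CIDR, certificate ARN, and directory ID are hypothetical, and MFA itself is enabled on the AD Connector, not in this API call:

```python
import boto3

ec2 = boto3.client("ec2")

# Client VPN endpoint that authenticates users against the
# AD Connector directory.
ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",  # hypothetical client address pool
    ServerCertificateArn="arn:aws:acm:eu-central-1:111111111111:certificate/example",  # hypothetical
    AuthenticationOptions=[{
        "Type": "directory-service-authentication",
        "ActiveDirectory": {"DirectoryId": "d-1234567890"},  # hypothetical AD Connector ID
    }],
    ConnectionLogOptions={"Enabled": False},
)
```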
A company hosts a metadata API on Amazon EC2 instances behind an internet-facing Application Load Balancer (ALB). Only internal applications that run on EC2 instances in separate AWS accounts need to access the metadata API. All the internal EC2 instances use NAT gateways.
A new policy requires that traffic between internal applications must not travel across the public internet.
Which solution will meet this requirement?
- A . Create an HTTP API in Amazon API Gateway. Configure a route for the metadata API. Configure a VPC link to the VPC that hosts the metadata API’s EC2 instances. Update the API Gateway resource policy to include the account IDs of the internal applications that access the metadata API.
- B . Create a REST API in Amazon API Gateway. Specify the API Gateway endpoint type as private. Associate the REST API with the metadata API’s VPC. Create a gateway VPC endpoint for the REST API. Share the endpoint across accounts by using AWS Resource Access Manager (AWS RAM). Configure the internal applications to connect to the gateway VPC endpoint.
- C . Create an internal ALB. Register the metadata API’s EC2 instances with the internal ALB. Create an internal Network Load Balancer (NLB) that has a target group type of ALB. Register the internal ALB as the target. Configure an AWS PrivateLink endpoint service for the NLB. Grant the internal applications access to the metadata API through the PrivateLink endpoint.
- D . Create an internal ALB. Register the metadata API’s EC2 instances with the internal ALB. Configure an AWS PrivateLink endpoint service for the internal ALB. Grant the internal applications access to the metadata API through the PrivateLink endpoint.
D
Explanation:
Creating an internal ALB and configuring it as a PrivateLink endpoint service enables private connectivity between internal applications and the metadata API, ensuring that traffic does not traverse the public internet.
Internal ALB: Ensures traffic stays within the AWS network and is not exposed publicly. PrivateLink endpoint service: Provides secure, private access to the ALB from the internal EC2 instances in other AWS accounts.
Traffic stays within the AWS global network, leveraging AWS security best practices and meeting the new policy requirements for no public internet exposure.
This approach is secure, scalable, and minimizes management complexity compared to API Gateway solutions.
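As a sketch of the cross-account grant step, the endpoint service's allowed principals can be updated with boto3; the service ID and account ARNs below are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow specific consumer accounts to create interface endpoints
# against an existing PrivateLink endpoint service.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId="vpce-svc-0123456789abcdef0",  # hypothetical endpoint service ID
    AddAllowedPrincipals=[
        "arn:aws:iam::222222222222:root",  # hypothetical internal-application account
        "arn:aws:iam::333333333333:root",
    ],
)
```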
A company has an IoT platform that runs in an on-premises environment. The platform consists of a server that connects to IoT devices by using the MQTT protocol. The platform collects telemetry data from the devices at least once every 5 minutes. The platform also stores device metadata in a MongoDB cluster.
An application that is installed on an on-premises machine runs periodic jobs to aggregate and transform the telemetry and device metadata. The application creates reports that users view by using another web application that runs on the same on-premises machine. The periodic jobs take 120-600 seconds to run. However, the web application is always running.
The company is moving the platform to AWS and must reduce the operational overhead of the stack.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)
- A . Use AWS Lambda functions to connect to the IoT devices.
- B . Configure the IoT devices to publish to AWS IoT Core.
- C . Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance.
- D . Write the metadata to Amazon DocumentDB (with MongoDB compatibility).
- E . Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports.
- F . Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports.
B,D,E
Explanation:
AWS IoT Core provides a managed MQTT endpoint, so the devices can publish telemetry without the company operating its own broker (B). Amazon DocumentDB (with MongoDB compatibility) is a managed replacement for the self-managed MongoDB cluster (D). The periodic jobs run for 120-600 seconds, well within Lambda's 15-minute limit, so Step Functions with Lambda tasks can prepare the reports, and S3 with CloudFront serves them without running a machine continuously (E).
https://aws.amazon.com/step-functions/use-cases/
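A minimal sketch of a device-side publish to AWS IoT Core using boto3 (real devices would typically use an MQTT SDK instead); the topic and payload are hypothetical:

```python
import json
import boto3

iot = boto3.client("iot-data")

# Publish one telemetry sample to an AWS IoT Core MQTT topic;
# an IoT topic rule can route these messages onward for storage.
iot.publish(
    topic="devices/sensor-42/telemetry",  # hypothetical topic
    qos=1,
    payload=json.dumps({"temperature": 21.7, "ts": 1700000000}),
)
```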
A company wants to migrate its website to AWS. The website uses microservices and runs on containers that are deployed in an on-premises, self-managed Kubernetes cluster. All the manifests that define the deployments for the containers in the Kubernetes deployment are in source control. All data for the website is stored in a PostgreSQL database. An open source container image repository runs alongside the on-premises environment.
A solutions architect needs to determine the architecture that the company will use for the website on AWS.
Which solution will meet these requirements with the LEAST effort to migrate?
- A . Create an AWS App Runner service. Connect the App Runner service to the open source container image repository. Deploy the manifests from on premises to the App Runner service. Create an Amazon RDS for PostgreSQL database.
- B . Create an Amazon EKS cluster that has managed node groups. Copy the application containers to a new Amazon ECR repository. Deploy the manifests from on premises to the EKS cluster. Create an Amazon Aurora PostgreSQL DB cluster.
- C . Create an Amazon ECS cluster that has an Amazon EC2 capacity pool. Copy the application containers to a new Amazon ECR repository. Register each container image as a new task definition. Configure ECS services for each task definition to match the original Kubernetes deployments. Create an Amazon Aurora PostgreSQL DB cluster.
- D . Rebuild the on-premises Kubernetes cluster by hosting the cluster on Amazon EC2 instances. Migrate the open source container image repository to the EC2 instances. Deploy the manifests from on premises to the new cluster on AWS. Deploy an open source PostgreSQL database on the new cluster.
B
Explanation:
Migrating to an Amazon EKS cluster with managed node groups minimizes the effort required because:
EKS is fully managed, offering native Kubernetes support, making it easy to deploy the existing Kubernetes manifests without major changes.
Copying containers to Amazon ECR allows for fully managed, scalable container image storage in AWS, eliminating reliance on the on-premises container repository.
Deploying the existing manifests directly to EKS reuses all the existing configuration, such as service definitions, deployments, and scaling policies, simplifying migration.
Using Amazon Aurora PostgreSQL provides a fully managed, highly available database service that is compatible with PostgreSQL, reducing operational overhead compared to managing a self-hosted database.
This approach leverages AWS managed services while preserving the existing microservices and deployment practices, ensuring minimal disruption and the fastest migration path.
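As a sketch of the "copy the containers to Amazon ECR" step, boto3 can create the repository and fetch the credentials needed to push from the on-premises host; the repository name is hypothetical:

```python
import boto3

ecr = boto3.client("ecr")

# Create a repository for one of the migrated container images.
repo = ecr.create_repository(repositoryName="website/catalog-service")  # hypothetical name
print(repo["repository"]["repositoryUri"])

# Fetch a temporary token for docker login from the on-premises host.
token = ecr.get_authorization_token()
print(token["authorizationData"][0]["proxyEndpoint"])
```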
