Practice Free SAP-C02 Exam Online Questions
A company wants to change its internal cloud billing strategy for each of its business units. Currently, the cloud governance team shares reports for overall cloud spending with the head of each business unit. The company uses AWS Organizations to manage the separate AWS accounts for each business unit. The existing tagging standard in Organizations includes the application, environment, and owner. The cloud governance team wants a centralized solution so that each business unit receives monthly reports on its cloud spending. The solution should also send notifications for any cloud spending that exceeds a set threshold.
Which solution is the MOST cost-effective way to meet these requirements?
- A . Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in each account to create monthly reports for each business unit.
- B . Configure AWS Budgets in the organization’s master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization’s master account to create monthly reports for each business unit.
- C . Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use the AWS Billing and Cost Management dashboard in each account to create monthly reports for each business unit.
- D . Enable AWS Cost and Usage Reports in the organization’s master account and configure reports grouped by application, environment, and owner. Create an AWS Lambda function that processes AWS Cost and Usage Reports, sends budget alerts, and sends monthly reports to each business unit’s email list.
B
Explanation:
Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit.
https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-aws-budgets-reports/#:~:text=AWS%20Budgets%20gives%20you%20the,below%20the%20threshold%20you%20define
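As a sketch of what the per-unit budget setup looks like, the snippet below builds the kind of payloads that the AWS Budgets CreateBudget and CreateNotification APIs accept. The unit name, limit, tag filter, and SNS topic ARN are placeholder assumptions, not values from the question.

```python
import json

def build_budget(unit_name, limit_usd, tag_filters, sns_topic_arn):
    """Build a monthly cost budget plus an alert notification for one
    business unit. Sketch of the AWS Budgets request shapes; all values
    here are illustrative."""
    budget = {
        "BudgetName": f"{unit_name}-monthly",
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        # Scope the budget using the existing tag standard
        # (application, environment, owner).
        "CostFilters": tag_filters,
    }
    notification = {
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,              # alert at 80% of the limit
            "ThresholdType": "PERCENTAGE",
        },
        # The SNS topic that the business unit subscribes to.
        "Subscribers": [{"SubscriptionType": "SNS", "Address": sns_topic_arn}],
    }
    return budget, notification

budget, notification = build_budget(
    "marketing", 5000,
    {"TagKeyValue": ["user:owner$marketing"]},
    "arn:aws:sns:us-east-1:111122223333:marketing-budget-alerts",
)
print(json.dumps(budget["BudgetLimit"]))
```

In a real deployment these dictionaries would be passed to the Budgets API from the organization's master account, so every unit's budget and alert lives in one place.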
A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number (PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design needs to be easily expandable.
Which solution will meet these requirements?
- A . Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
- B . Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance.
- C . Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
- D . Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
C
Explanation:
You can use a Glue crawler to populate the AWS Glue Data Catalog with tables. The Lambda function can be triggered using S3 event notifications when object create events occur. The Lambda function will then trigger the Glue ETL job to transform the records masking the sensitive data and modifying the output format to JSON. This solution meets all requirements.
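The per-record logic that the Glue ETL job applies can be sketched in plain Python as below. The field names and the masking rule (keep the last four digits) are illustrative assumptions; a real Glue job would express the same transformation with PySpark or DynamicFrame operations.

```python
import json

def mask_pan(pan: str) -> str:
    """Mask all but the last four digits of a PAN."""
    return "*" * (len(pan) - 4) + pan[-4:]

def transform_record(record: dict) -> str:
    """Sketch of one record's transformation: mask the PAN, merge two
    source fields into one, drop the raw inputs, and emit JSON.
    Field names are hypothetical."""
    out = {
        "pan": mask_pan(record["pan"]),
        # Merge first_name and last_name into a single field.
        "customer_name": f'{record["first_name"]} {record["last_name"]}',
        "amount": record["amount"],
    }
    return json.dumps(out)

row = {"pan": "4111111111111111", "first_name": "Ana",
       "last_name": "Diaz", "amount": "12.50"}
print(transform_record(row))
```

Because the crawler and classifier describe each feed's format in the Data Catalog, adding a future feed mostly means adding a classifier and reusing the same job logic, which is what makes option C easily expandable.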
A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company hosts some applications in a VPC in the company's shared services account. The company has attached a transit gateway to the VPC in the shared services account.
The company is developing a new capability and has created a development environment that requires access to the applications that are in the shared services account. The company intends to delete and recreate resources frequently in the development account. The company also wants to give a development team the ability to recreate the team's connection to the shared services account as required.
Which solution will meet these requirements?
- A . Create a transit gateway in the development account. Create a transit gateway peering request to the shared services account. Configure the shared services transit gateway to automatically accept peering connections.
- B . Turn on automatic acceptance for the transit gateway in the shared services account. Use AWS Resource Access Manager (AWS RAM) to share the transit gateway resource in the shared services account with the development account. Accept the resource in the development account. Create a transit gateway attachment in the development account.
- C . Turn on automatic acceptance for the transit gateway in the shared services account. Create a VPC endpoint. Use the endpoint policy to grant permissions on the VPC endpoint for the development account. Configure the endpoint service to automatically accept connection requests. Provide the endpoint details to the development team.
- D . Create an Amazon EventBridge rule to invoke an AWS Lambda function that accepts the transit gateway attachment when the development account makes an attachment request. Use AWS Network Manager to share the transit gateway in the shared services account with the development account. Accept the transit gateway attachment in the development account.
B
Explanation:
For a development environment that requires frequent resource recreation and connectivity to applications hosted in a shared services account, the most efficient solution involves using AWS Resource Access Manager (RAM) and the transit gateway in the shared services account. By turning on automatic acceptance for the transit gateway in the shared services account and sharing it with
the development account through AWS RAM, the development team can easily recreate their connection as needed without manual intervention. This setup allows for scalable, flexible connectivity between accounts while minimizing operational overhead and ensuring consistent access to shared services.
AWS Documentation on AWS Resource Access Manager and Transit Gateway provides guidance on sharing network resources across AWS accounts and enabling automatic acceptance for transit gateway attachments. This approach is also supported by AWS best practices for multi-account strategies using AWS Organizations and network architecture.
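The two API payloads involved in option B can be sketched as below. All ARNs, account IDs, and resource IDs are placeholder assumptions; in practice these dictionaries would be passed to boto3's RAM and EC2 clients.

```python
# Step 1: in the shared services account, share the transit gateway
# via AWS RAM (shape of ram.create_resource_share).
share_request = {
    "name": "shared-tgw",
    "resourceArns": [
        "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0abc"
    ],
    "principals": ["444455556666"],    # the development account
    "allowExternalPrincipals": False,  # share only within the organization
}

# Step 2: in the development account, attach the development VPC to the
# shared gateway (shape of ec2.create_transit_gateway_vpc_attachment).
# With automatic acceptance turned on, no manual approval is needed,
# so the team can delete and recreate this attachment freely.
attachment_request = {
    "TransitGatewayId": "tgw-0abc",
    "VpcId": "vpc-0dev",
    "SubnetIds": ["subnet-0dev"],
}
```

The key design point is that the share is created once, while the attachment is the only piece the development team recreates, which keeps the frequent teardown-and-rebuild cycle self-service.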
A company has multiple business units that each have separate accounts on AWS. Each business unit manages its own network with several VPCs that have CIDR ranges that overlap. The company’s marketing team has created a new internal application and wants to make the application accessible to all the other business units. The solution must use private IP addresses only.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Instruct each business unit to add a unique secondary CIDR range to the business unit’s VPC. Peer the VPCs and use a private NAT gateway in the secondary range to route traffic to the marketing team.
- B . Create an Amazon EC2 instance to serve as a virtual appliance in the marketing account’s VPC. Create an AWS Site-to-Site VPN connection between the marketing team and each business unit’s VPC. Perform NAT where necessary.
- C . Create an AWS PrivateLink endpoint service to share the marketing application. Grant permission to specific AWS accounts to connect to the service. Create interface VPC endpoints in other accounts to access the application by using private IP addresses.
- D . Create a Network Load Balancer (NLB) in front of the marketing application in a private subnet. Create an API Gateway API. Use the Amazon API Gateway private integration to connect the API to the NLB. Activate IAM authorization for the API. Grant access to the accounts of the other business units.
C
Explanation:
With AWS PrivateLink, the marketing team can create an endpoint service to share their internal application with other accounts securely using private IP addresses. They can grant permission to specific AWS accounts to connect to the service and create interface VPC endpoints in the other accounts to access the application by using private IP addresses. This option does not require any changes to the network of the other business units, and it does not require peering or NATing. This solution is both scalable and secure.
https://aws.amazon.com/blogs/networking-and-content-delivery/connecting-networks-with-overlapping-ip-ranges/
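As a sketch of the PrivateLink wiring in option C, the three payload shapes below correspond to creating the endpoint service in the marketing account, allowing specific consumer accounts, and creating an interface endpoint in a consuming account. Every ARN and ID is a placeholder assumption.

```python
# In the marketing account: expose the application (fronted by an NLB)
# as an endpoint service (ec2.create_vpc_endpoint_service_configuration).
create_service = {
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/mkt-app/0abc"
    ],
    "AcceptanceRequired": False,  # auto-accept connections from allowed accounts
}

# Still in the marketing account: allow a specific business unit account
# (ec2.modify_vpc_endpoint_service_permissions).
grant_consumers = {
    "ServiceId": "vpce-svc-0123",
    "AddAllowedPrincipals": ["arn:aws:iam::444455556666:root"],
}

# In each consuming account: create an interface endpoint to the service
# (ec2.create_vpc_endpoint). Traffic uses private IPs in the consumer's
# own VPC, so overlapping CIDR ranges never matter.
consumer_endpoint = {
    "VpcEndpointType": "Interface",
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0123",
    "VpcId": "vpc-0bu",
    "SubnetIds": ["subnet-0bu"],
}
```

Because each consumer gets endpoint network interfaces with addresses from its own subnets, PrivateLink sidesteps the overlapping-CIDR problem that rules out peering.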
A company is replicating an application in a secondary AWS Region. The application in the primary
Region reads from and writes to several Amazon DynamoDB tables. The application also reads customer data from an Amazon RDS for MySQL DB instance.
The company plans to use the secondary Region as part of a disaster recovery plan. The application in the secondary Region must function without dependencies on the primary Region.
Which solution will meet these requirements with the LEAST development effort?
- A . Configure DynamoDB global tables. Replicate the required tables to the secondary Region. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use the DynamoDB tables and the read replica in the secondary Region.
- B . Use DynamoDB Accelerator (DAX) to cache the required tables in the secondary Region. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use DAX and the read replica in the secondary Region.
- C . Configure DynamoDB global tables. Replicate the required tables to the secondary Region. Enable Multi-AZ for the RDS DB instance. Configure the standby replica to be created in the secondary Region. Configure the secondary application to use the DynamoDB tables and the standby replica in the secondary Region.
- D . Set up DynamoDB streams from the primary Region. Process the streams in the secondary Region to populate new DynamoDB tables. Create a read replica of the RDS DB instance in the secondary Region. Configure the secondary application to use the DynamoDB tables and the read replica in the secondary Region.
A
Explanation:
The company needs the application in the secondary Region to operate independently from the primary Region during disaster recovery. That means both the DynamoDB data and the relational customer data must be present and usable in the secondary Region without cross-Region dependencies at runtime. The company also wants the least development effort, which usually means using native managed replication features rather than building custom replication logic.
For DynamoDB, global tables are the managed multi-Region, multi-active replication feature. Global tables replicate changes between Regions automatically and allow applications in each Region to read and write to local DynamoDB tables, removing dependencies on the primary Region.
For the RDS for MySQL database, the straightforward managed approach for cross-Region disaster recovery with minimal development change is to create a cross-Region read replica in the secondary Region. A read replica uses asynchronous replication to keep data up to date. During a disaster recovery event, the company can promote the read replica in the secondary Region to become a standalone primary database instance. This provides a simple, low-development approach for having the data in the secondary Region. The application in the secondary Region can be configured to use the replica endpoint in normal times and can continue operating with the promoted instance during failover.
Option A combines DynamoDB global tables with an RDS read replica in the secondary Region. This is the least development effort because it uses AWS-managed replication for both data stores and requires only configuration changes and endpoint selection in the secondary application.
Option B is not correct because DAX is a caching layer for DynamoDB, not a replication or disaster recovery mechanism. DAX does not replicate DynamoDB tables into another Region and cannot replace having the DynamoDB data present in the secondary Region.
Option C is incorrect because RDS Multi-AZ is a high availability feature within a single Region, using synchronous replication to a standby in another Availability Zone. Multi-AZ does not create a standby in a different Region, and it does not provide cross-Region DR by itself.
Option D is less desirable because it requires custom development and operations: processing DynamoDB streams, building and operating a replication pipeline, and ensuring correctness and ordering across Regions. This is higher development effort than enabling DynamoDB global tables.
Therefore, using DynamoDB global tables plus an RDS read replica in the secondary Region (option A) meets the DR independence requirement with the least development effort.
References:
AWS documentation on DynamoDB global tables for multi-Region replication with local read/write access in each Region.
AWS documentation on Amazon RDS for MySQL cross-Region read replicas and promoting a read replica for disaster recovery.
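The three managed operations behind option A can be sketched as payload shapes. All table names, identifiers, and ARNs are placeholder assumptions.

```python
# Add a replica Region to a DynamoDB global table
# (shape of dynamodb.update_table with ReplicaUpdates).
add_replica = {
    "TableName": "Orders",
    "ReplicaUpdates": [{"Create": {"RegionName": "us-west-2"}}],
}

# Create a cross-Region RDS read replica in the secondary Region
# (shape of rds.create_db_instance_read_replica).
create_read_replica = {
    "DBInstanceIdentifier": "customers-replica",
    "SourceDBInstanceIdentifier":
        "arn:aws:rds:us-east-1:111122223333:db:customers",
}

# During a DR event, promote the replica to a standalone instance
# (shape of rds.promote_read_replica) so the secondary Region can
# accept writes with no dependency on the primary Region.
promote = {"DBInstanceIdentifier": "customers-replica"}
```

Everything here is configuration against managed replication features, which is exactly why option A is the least-development-effort choice.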
A company runs a software-as-a-service (SaaS) application on AWS. The application consists of AWS Lambda functions and an Amazon RDS for MySQL Multi-AZ database. During market events, the application has a much higher workload than normal. Users notice slow response times during the peak periods because of many database connections. The company needs to improve the scalability, performance, and availability of the database.
Which solution meets these requirements?
- A . Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.
- B . Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function.
- C . Migrate the database to Amazon Aurora and add a read replica. Use Amazon Route 53 weighted records.
- D . Migrate the database to Amazon Aurora and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.
D
Explanation:
Migrate to Amazon Aurora:
Amazon Aurora is a MySQL-compatible, high-performance database designed to provide higher throughput than standard MySQL. Migrating the database to Aurora will enhance the performance and scalability of the database, especially under heavy workloads.
Add Aurora Replica:
Aurora Replicas provide read scalability and improve availability. Adding an Aurora Replica allows read operations to be distributed, thereby reducing the load on the primary instance and improving response times during peak periods.
Configure Amazon RDS Proxy:
Amazon RDS Proxy acts as an intermediary between the application and the Aurora database, managing connection pools efficiently. RDS Proxy reduces the overhead of opening and closing database connections, thus maintaining fewer active connections to the database and handling surges in database connections from the Lambda functions more effectively.
This configuration reduces the database’s resource usage and improves its ability to handle high volumes of concurrent connections.
Reference
AWS Database Blog on RDS Proxy (Amazon Web Services, Inc.).
AWS Compute Blog on Using RDS Proxy with Lambda (Amazon Web Services, Inc.).
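The Lambda-side half of this pattern can be sketched as below: the connection is opened once per execution environment, outside the handler, and points at the RDS Proxy endpoint so the proxy owns the pool. The endpoint is a placeholder, and the `connect` call is a stub standing in for a real MySQL driver such as PyMySQL.

```python
# Placeholder proxy endpoint; a real function would read this from
# configuration or an environment variable.
PROXY_ENDPOINT = "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"

def connect(host):
    """Stand-in for pymysql.connect(host=..., ...); returns a fake
    connection object so this sketch runs without a database."""
    return {"host": host, "open": True}

# Opened at module load time, so warm invocations reuse it instead of
# opening a new database connection per request.
connection = connect(PROXY_ENDPOINT)

def handler(event, context):
    # Each invocation reuses the module-level connection. Combined with
    # RDS Proxy's pooling, a surge of concurrent Lambda executions no
    # longer translates into a surge of database connections.
    assert connection["open"]
    return {"host": connection["host"]}

print(handler({}, None)["host"])
```

The design choice here is the split of responsibility: the function reuses one connection per environment, while RDS Proxy multiplexes the fleet's connections down to a bounded pool against Aurora.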
A company operates a static content distribution platform that serves customers globally. The customers consume content from their own AWS accounts.
The company serves its content from an Amazon S3 bucket. The company uploads the content from its on-premises environment to the S3 bucket by using an S3 File Gateway.
The company wants to improve the platform’s performance and reliability by serving content from the AWS Region that is geographically closest to customers. The company must route the on-premises data to Amazon S3 with minimal latency and without public internet exposure.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Select TWO.)
- A . Create an Amazon SNS topic in the central account. Add a topic policy to allow other accounts to subscribe to the topic. Create an Amazon SQS queue in each individual AWS account. Subscribe the SQS queue to the SNS topic. Configure the microservices to read events from their own SQS queue.
- B . Create a new Amazon EventBridge event bus in the central account with the required permissions. Add EventBridge rules filtered by service for each microservice. Invoke the rules to route events to other accounts.
- C . Create a data stream in Amazon Kinesis Data Streams in the central account. Create an IAM policy to grant the necessary permissions to access the data stream. Set each of the microservices as an event source on the Kinesis stream. Configure the stream to invoke each microservice.
- D . Create a new Amazon SQS queue as the event broker in the central account. Grant the required permissions. Configure each of the microservices to read messages from the central SQS queue.
B
Explanation:
The scenario describes a central event broker and multiple microservices running in separate AWS accounts that all need to consume the same events. The requirement is to distribute events from a central location to many microservices across accounts in a scalable and loosely coupled way, as part of modernizing to a microservices architecture.
Amazon EventBridge is a serverless event bus service designed for event-driven architectures. It supports centralized event buses, rich content-based filtering with rules, and cross-account event routing. With EventBridge, you can create an event bus in a central account and define rules that match specific event patterns (for example, by microservice or event type). Each rule can have one or more targets, including event buses in other AWS accounts. This supports the pattern of having a central event bus in one account and distributing relevant events to other accounts, where each microservice consumes events either directly from its own event bus or through additional rules and targets in its own account.
In this solution, you create a new EventBridge event bus in the central account and grant the appropriate permissions for cross-account access (option B). You then define EventBridge rules on the central event bus, filtered per microservice or per event category, and configure the rules to send events to the respective event buses or targets in the microservices’ accounts. EventBridge handles the fan-out and delivery of events across accounts in a managed, scalable way, which aligns with the modernization goal and reduces the operational overhead of managing custom routing or polling logic.
Option A uses an SNS topic with SQS queues in each account. This is a valid fan-out pattern and supports cross-account subscriptions, but it is more suited to traditional pub/sub messaging and does not provide the event routing, filtering, and observability features that EventBridge offers for modern event-driven microservices. In scenarios that explicitly mention an event broker and modernization, EventBridge is the recommended service.
Option C is incorrect because Kinesis Data Streams is designed for high-throughput streaming data and requires building and managing consumer applications. The description in the option is also technically inaccurate; Kinesis does not “invoke” microservices directly as event targets in the same way as EventBridge or SNS does. Instead, applications must read from the stream.
Option D uses a single central SQS queue that all microservices read from. SQS provides at-least-once delivery to competing consumers, which means multiple consumers reading from the same queue will typically share messages rather than each getting all messages. This does not satisfy the requirement for multiple microservices to each receive the same events independently. It also reduces decoupling and observability compared to an event bus model.
Therefore, creating an Amazon EventBridge event bus in the central account with rules to distribute events across accounts (option B) best meets the requirements for distributing events from a central broker to multiple microservices across accounts in a modernized architecture.
Reference: AWS documentation on Amazon EventBridge event buses, cross-account event routing, and rule-based filtering and targeting.AWS guidance for event-driven microservices architectures and centralized event broker patterns.
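The rule-based filtering described above can be illustrated with an EventBridge-style event pattern. The pattern shape (field name mapped to a list of accepted values) follows EventBridge's convention; the matcher below is a deliberately simplified local stand-in that only handles exact top-level matches, and the service and event names are hypothetical.

```python
# An EventBridge-style pattern: route only InvoiceCreated events from
# the billing service to this microservice's account.
pattern = {"source": ["billing-service"], "detail-type": ["InvoiceCreated"]}

def matches(pattern: dict, event: dict) -> bool:
    """Simplified illustration of EventBridge pattern matching:
    every patterned field must hold one of the allowed values."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

event = {
    "source": "billing-service",
    "detail-type": "InvoiceCreated",
    "detail": {"id": 7},
}
print(matches(pattern, event))
```

On the central bus, one such rule per microservice (with the target set to that account's event bus) is what produces the managed cross-account fan-out that option B relies on.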
A company is running an application on Amazon EC2 instances in the AWS Cloud. The application is using a MongoDB database with a replica set as its data tier. The MongoDB database is installed on systems in the company’s on-premises data center and is accessible through an AWS Direct Connect connection to the data center environment.
A solutions architect must migrate the on-premises MongoDB database to Amazon DocumentDB (with MongoDB compatibility).
Which strategy should the solutions architect choose to perform this migration?
- A . Create a fleet of EC2 instances. Install MongoDB Community Edition on the EC2 instances, and create a database. Configure continuous synchronous replication with the database that is running in the on-premises data center.
- B . Create an AWS Database Migration Service (AWS DMS) replication instance. Create a source endpoint for the on-premises MongoDB database by using change data capture (CDC). Create a target endpoint for the Amazon DocumentDB database. Create and run a DMS migration task.
- C . Create a data migration pipeline by using AWS Data Pipeline. Define data nodes for the on-premises MongoDB database and the Amazon DocumentDB database. Create a scheduled task to run the data pipeline.
- D . Create a source endpoint for the on-premises MongoDB database by using AWS Glue crawlers. Configure continuous asynchronous replication between the MongoDB database and the Amazon DocumentDB database.
B
Explanation:
https://aws.amazon.com/getting-started/hands-on/move-to-managed/migrate-mongodb-to-documentdb/
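The three DMS pieces in option B can be sketched as payload shapes: a MongoDB source endpoint, a DocumentDB target endpoint, and a full-load-plus-CDC replication task. Hostnames and ARNs are placeholder assumptions.

```python
# Source endpoint: the on-premises MongoDB replica set, reachable over
# the existing Direct Connect link (shape of dms.create_endpoint).
source_endpoint = {
    "EndpointType": "source",
    "EngineName": "mongodb",
    "ServerName": "onprem-mongo.example.internal",
    "Port": 27017,
}

# Target endpoint: the Amazon DocumentDB cluster.
target_endpoint = {
    "EndpointType": "target",
    "EngineName": "docdb",
    "ServerName": "docdb-cluster.cluster-0abc.us-east-1.docdb.amazonaws.com",
    "Port": 27017,
}

# The task: bulk-copy existing data, then apply ongoing changes via
# change data capture so cutover can happen with minimal downtime
# (shape of dms.create_replication_task).
migration_task = {
    "MigrationType": "full-load-and-cdc",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:111122223333:endpoint:src",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:111122223333:endpoint:tgt",
}
```

The `full-load-and-cdc` migration type is what lets the application keep writing to the on-premises database during the bulk copy, with DMS replaying those changes before cutover.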
A company runs a proprietary stateless ETL application on an Amazon EC2 Linux instance. The application is a Linux binary, and the source code cannot be modified. The application is single-threaded, uses 2 GB of RAM, and is highly CPU intensive. The application is scheduled to run every 4 hours and runs for up to 20 minutes. A solutions architect wants to revise the architecture for the solution.
Which strategy should the solutions architect use?
- A . Use AWS Lambda to run the application. Use Amazon CloudWatch Logs to invoke the Lambda function every 4 hours.
- B . Use AWS Batch to run the application. Use an AWS Step Functions state machine to invoke the AWS Batch job every 4 hours.
- C . Use AWS Fargate to run the application. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke the Fargate task every 4 hours.
- D . Use Amazon EC2 Spot Instances to run the application. Use AWS CodeDeploy to deploy and run the application every 4 hours.
C
Explanation:
The application runs for up to 20 minutes, which exceeds Lambda's 15-minute maximum execution time, so option A is not viable. A Step Functions state machine could start an AWS Batch job on a schedule, but that adds an unnecessary orchestration layer when Amazon EventBridge can invoke a target directly; the linked pattern (https://aws.amazon.com/pt/blogs/compute/orchestrating-high-performance-computing-with-aws-step-functions-and-aws-batch/) makes sense for HPC workflows, not for a single periodic task. Packaging the binary as a container and running it as an AWS Fargate task invoked by an EventBridge scheduled rule every 4 hours fits the workload profile (single-threaded, 2 GB of RAM, up to 20 minutes) with no servers to manage.
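The scheduled-rule-plus-Fargate wiring can be sketched as the two payload shapes below (events.put_rule and events.put_targets). The rule name, cluster ARN, and task definition ARN are placeholder assumptions.

```python
# A scheduled EventBridge rule that fires every 4 hours.
rule = {
    "Name": "etl-every-4-hours",
    "ScheduleExpression": "rate(4 hours)",
}

# The rule's target: run the containerized ETL binary as a Fargate task
# on an ECS cluster. No instance is running between executions, so the
# company pays only for the ~20 minutes of work every 4 hours.
target = {
    "Rule": "etl-every-4-hours",
    "Targets": [{
        "Id": "etl-task",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/etl",
        "EcsParameters": {
            "TaskDefinitionArn":
                "arn:aws:ecs:us-east-1:111122223333:task-definition/etl:1",
            "LaunchType": "FARGATE",
        },
    }],
}
```

The task definition would request roughly 1 vCPU and 2 GB of memory to match the binary's profile; since the code cannot be modified, containerizing it unchanged is the only adaptation required.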
An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace. The company uses an AWS Organizations account structure with all features enabled and has a shared services account in each organizational unit (OU) that will be used by procurement managers. The procurement team's policy indicates that developers should be able to obtain third-party software from an approved list only and should use Private Marketplace in AWS Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named procurement-manager-role, which can be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the company should be denied Private Marketplace administrative access.
What is the MOST efficient way to design an architecture to meet these requirements?
- A . Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the AWSPrivateMarketplaceAdminFullAccess managed policy.
- B . Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer roles.
- C . Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization.
- D . Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the organization.
C
Explanation:
SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
This approach allows the procurement managers to assume the procurement-manager-role in the shared services accounts, which has the AWSPrivateMarketplaceAdminFullAccess managed policy attached, and then manage the Private Marketplace. One organization root-level SCP denies the permission to administer Private Marketplace to everyone except the role named procurement-manager-role, and another SCP denies everyone in the organization the permission to create an IAM role with that name, ensuring that only the procurement team can assume the role and manage the Private Marketplace. This approach provides a centralized way to manage and restrict access to Private Marketplace while maintaining a high level of security.
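The deny-everyone-except-the-role SCP can be sketched as below. The condition key `aws:PrincipalARN` with `ArnNotLike` is the standard way to carve out an exception for one role; the specific `aws-marketplace:` action names listed here are illustrative assumptions (AWS publishes the full Private Marketplace administration action list in its documentation).

```python
import json

# Sketch of the organization root-level SCP described above.
# Account-agnostic: the wildcard in the ARN matches the role in any
# member account.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPrivateMarketplaceAdminExceptRole",
        "Effect": "Deny",
        "Action": [
            # Illustrative subset of Private Marketplace admin actions.
            "aws-marketplace:CreatePrivateMarketplace",
            "aws-marketplace:AssociateProductsWithPrivateMarketplace",
            "aws-marketplace:DisassociateProductsFromPrivateMarketplace",
        ],
        "Resource": "*",
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalARN":
                    "arn:aws:iam::*:role/procurement-manager-role"
            }
        },
    }],
}

# SCPs are submitted as JSON documents; confirm the policy serializes.
print(json.dumps(scp, indent=2)[:60])
```

A second, similar SCP denying `iam:CreateRole` when the request's role name matches procurement-manager-role prevents anyone from recreating the role to bypass the exception.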
