Practice Free SAP-C02 Exam Online Questions
A solutions architect is planning to migrate critical Microsoft SQL Server databases to AWS. Because the databases are legacy systems, the solutions architect will move the databases to a modern data architecture. The solutions architect must migrate the databases with near-zero downtime.
Which solution will meet these requirements?
- A . Use AWS Application Migration Service and the AWS Schema Conversion Tool (AWS SCT). Perform an in-place upgrade before the migration. Export the migrated data to Amazon Aurora Serverless after cutover. Repoint the applications to Amazon Aurora.
- B . Use AWS Database Migration Service (AWS DMS) to rehost the database. Set Amazon S3 as a target. Set up change data capture (CDC) replication. When the source and destination are fully synchronized, load the data from Amazon S3 into an Amazon RDS for Microsoft SQL Server DB instance.
- C . Use native database high availability tools. Connect the source system to an Amazon RDS for Microsoft SQL Server DB instance. Configure replication accordingly. When data replication is finished, transition the workload to an Amazon RDS for Microsoft SQL Server DB instance.
- D . Use AWS Application Migration Service. Rehost the database server on Amazon EC2. When data replication is finished, detach the database and move the database to an Amazon RDS for Microsoft SQL Server DB instance. Reattach the database and then cut over all networking.
B
Explanation:
AWS DMS can migrate data from a source database to a target database in AWS, using change data capture (CDC) to replicate ongoing changes and keep the databases in sync. Setting Amazon S3 as a target allows storing the migrated data in a durable and cost-effective storage service. When the source and destination are fully synchronized, the data can be loaded from Amazon S3 into an Amazon RDS for Microsoft SQL Server DB instance, which is a managed database service that simplifies database administration tasks.
Reference:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServer.html
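As an illustration only (the ARNs, identifiers, and bucket name below are placeholders, not values from the scenario), a minimal boto3 sketch of the approach described in the explanation above, a DMS S3 target endpoint plus a full-load-and-CDC replication task, might look like this:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Target endpoint that lands the migrated data in S3 (bucket name and role ARN are placeholders).
s3_target = dms.create_endpoint(
    EndpointIdentifier="sqlserver-to-s3-target",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "example-migration-bucket",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-access",
    },
)

# Full load plus ongoing replication (CDC) keeps the source and the S3 destination in sync.
dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn=s3_target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```

Once the full load is complete and CDC has caught up, the data in S3 can be loaded into the Amazon RDS for Microsoft SQL Server DB instance for cutover.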
A company is planning to migrate workloads from its on-premises data center to Amazon EC2 instances. The workloads run on physical servers and VMware virtual servers. The company has gathered details about each on-premises server and virtual server, including server specification, CPU utilization, and memory utilization. The company has stored these details in a .csv file named onprem.csv.
Before the migration, the company must estimate the cost of running the servers on AWS and must determine recommended EC2 instance types for the servers. The company must export this information to a different .csv file.
Which solution will meet these requirements?
- A . Configure AWS Compute Optimizer to generate recommendations from an external source. Import the onprem.csv file. Export the Compute Optimizer recommendations to a new .csv file.
- B . Import the onprem.csv file into AWS Migration Hub by using AWS Migration Hub import. Use EC2 instance recommendations from Migration Hub to generate recommendations. Export the recommendations to a new .csv file.
- C . Deploy AWS Application Discovery Service Agentless Collector on premises. Use Agentless Collector to import the onprem.csv file. Send the file to AWS Migration Hub. Use EC2 instance recommendations from Migration Hub to generate recommendations. Export the recommendations to a new .csv file.
- D . Upload the onprem.csv file to an Amazon S3 bucket. Configure Migration Evaluator to import the data from the S3 bucket. Generate and confirm recommendations by using Migration Evaluator Quick Insights. Export the final recommendations to a new .csv file in the S3 bucket.
A company hosts a Git repository in an on-premises data center. The company uses webhooks to invoke functionality that runs in the AWS Cloud. The company hosts the webhook logic on a set of Amazon EC2 instances in an Auto Scaling group that the company set as a target for an Application Load Balancer (ALB). The Git server calls the ALB for the configured webhooks. The company wants to move the solution to a serverless architecture.
Which solution will meet these requirements with the LEAST operational overhead?
- A . For each webhook, create and configure an AWS Lambda function URL. Update the Git servers to call the individual Lambda function URLs.
- B . Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint.
- C . Deploy the webhook logic to AWS App Runner. Create an ALB, and set App Runner as the target. Update the Git servers to call the ALB endpoint.
- D . Containerize the webhook logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster, and run the webhook logic in AWS Fargate. Create an Amazon API Gateway REST API, and set Fargate as the target. Update the Git servers to call the API Gateway endpoint.
B
Explanation:
https://aws.amazon.com/solutions/implementations/git-to-s3-using-webhooks/
https://medium.com/mindorks/building-webhook-is-easy-using-aws-lambda-and-api-gateway-56f5e5c3a596
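For illustration, a minimal AWS Lambda handler behind an API Gateway HTTP API might look like the sketch below; the route key and payload field names are assumptions based on the HTTP API payload format version 2.0, not details from the original scenario:

```python
import json

# Minimal Lambda handler for a Git webhook delivered through an API Gateway
# HTTP API (payload format version 2.0). The webhook body is assumed to be JSON.
def lambda_handler(event, context):
    payload = json.loads(event.get("body") or "{}")
    route = event.get("routeKey", "")  # e.g. "POST /push-webhook" (hypothetical route)

    # Branch per webhook here, e.g. start a build or copy the repository to S3.
    print(f"Received webhook on {route}: {payload.get('ref', 'unknown ref')}")

    return {"statusCode": 200, "body": json.dumps({"status": "accepted"})}
```

Each webhook can map to its own route and Lambda function on the same HTTP API, so the Git servers only need the single API Gateway endpoint.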
A company is running an application in the AWS Cloud. The application consists of microservices that run on a fleet of Amazon EC2 instances in multiple Availability Zones behind an Application Load Balancer. The company recently added a new REST API that was implemented in Amazon API Gateway. Some of the older microservices that run on EC2 instances need to call this new API. The company does not want the API to be accessible from the public internet and does not want proprietary data to traverse the public internet.
What should a solutions architect do to meet these requirements?
- A . Create an AWS Site-to-Site VPN connection between the VPC and the API Gateway. Use API Gateway to generate a unique API key for each microservice. Configure the API methods to require the key.
- B . Create an interface VPC endpoint for API Gateway, and set an endpoint policy to only allow access to the specific API. Add a resource policy to API Gateway to only allow access from the VPC endpoint. Change the API Gateway endpoint type to private.
- C . Modify the API Gateway to use IAM authentication. Update the IAM policy for the IAM role that is assigned to the EC2 instances to allow access to the API Gateway. Move the API Gateway into a new VPC. Deploy a transit gateway and connect the VPCs.
- D . Create an accelerator in AWS Global Accelerator, and connect the accelerator to the API Gateway. Update the route table for all VPC subnets with a route to the created Global Accelerator endpoint IP address. Add an API key for each service to use for authentication.
B
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-vpc-endpoint-policies.html
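As a hedged sketch (the VPC, subnet, security group, account, and API IDs are placeholders), creating the interface VPC endpoint with an endpoint policy scoped to one API could look like this with boto3:

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy that only allows invoking one specific API.
endpoint_policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "execute-api:Invoke",
        "Resource": "arn:aws:execute-api:us-east-1:123456789012:abc123defg/*",
    }]
}

# Interface endpoint for the execute-api service inside the application VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.execute-api",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
    PolicyDocument=json.dumps(endpoint_policy),
)
```

The API's resource policy would mirror this by allowing execute-api:Invoke only from that VPC endpoint, and the API endpoint type would be changed to private so it is not reachable from the public internet.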
A scientific company needs to process text and image data from an Amazon S3 bucket. The data is collected from several radar stations during a live, time-critical phase of a deep space mission. The radar stations upload the data to the source S3 bucket. The data is prefixed by radar station identification number.
The company created a destination S3 bucket in a second account. Data must be copied from the source S3 bucket to the destination S3 bucket to meet a compliance objective. The replication occurs through the use of an S3 replication rule to cover all objects in the source S3 bucket. One specific radar station is identified as having the most accurate data. Data replication at this radar station must be monitored for completion within 30 minutes after the radar station uploads the objects to the source S3 bucket.
What should a solutions architect do to meet these requirements?
- A . Set up an AWS DataSync agent to replicate the prefixed data from the source S3 bucket to the destination S3 bucket. Select to use all available bandwidth on the task, and monitor the task to ensure that it is in the TRANSFERRING status. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
- B . In the second account, create another S3 bucket to receive data from the radar station with the most accurate data. Set up a new replication rule for this new S3 bucket to separate the replication from the other radar stations. Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
- C . Enable Amazon S3 Transfer Acceleration on the source S3 bucket, and configure the radar station with the most accurate data to use the new endpoint. Monitor the S3 destination bucket’s TotalRequestLatency metric. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
- D . Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
D
Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html
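A minimal boto3 sketch of such a prefix-filtered replication rule with S3 Replication Time Control (S3 RTC) enabled might look like the following; the bucket names, role ARN, and station prefix are placeholders, not values from the scenario:

```python
import boto3

s3 = boto3.client("s3")

# Replication rule limited to one radar station's prefix, with RTC enabled.
s3.put_bucket_replication(
    Bucket="example-source-radar-data",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "station-42-rtc",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {"Prefix": "station-42/"},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::example-destination-radar-data",
                "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
            },
        }],
    },
)
```

With RTC enabled, S3 publishes replication metrics such as ReplicationLatency and emits missed-threshold events, which an Amazon EventBridge rule can use to alert when replication exceeds the desired time.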
A company has a critical application in which the data tier is deployed in a single AWS Region. The data tier uses an Amazon DynamoDB table and an Amazon Aurora MySQL DB cluster. The current Aurora MySQL engine version supports a global database. The application tier is already deployed in two Regions.
Company policy states that critical applications must have application tier components and data tier components deployed across two Regions. The RTO and RPO must be no more than a few minutes each. A solutions architect must recommend a solution to make the data tier compliant with company policy.
Which combination of steps will meet these requirements? (Choose two.)
- A . Add another Region to the Aurora MySQL DB cluster
- B . Add another Region to each table in the Aurora MySQL DB cluster
- C . Set up scheduled cross-Region backups for the DynamoDB table and the Aurora MySQL DB cluster
- D . Convert the existing DynamoDB table to a global table by adding another Region to its configuration
- E . Use Amazon Route 53 Application Recovery Controller to automate database backup and recovery to the secondary Region
A,D
Explanation:
The company should use Amazon Aurora global database and Amazon DynamoDB global table to deploy the data tier components across two Regions. Amazon Aurora global database is a feature that allows a single Aurora database to span multiple AWS Regions, enabling low-latency global reads and fast recovery from Region-wide outages. Amazon DynamoDB global table is a feature that allows a single DynamoDB table to span multiple AWS Regions, enabling low-latency global reads and writes and fast recovery from Region-wide outages.
Reference:
https://aws.amazon.com/rds/aurora/global-database/
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables_HowItWorks.html
https://aws.amazon.com/route53/application-recovery-controller/
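For illustration, adding a second Region to both data stores might look like the following boto3 sketch; the cluster identifiers, table name, Regions, and ARNs are placeholders, not values from the scenario:

```python
import boto3

# Promote the existing Aurora MySQL cluster into a global database in the primary Region.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="critical-app-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:critical-app",
)

# Attach a secondary cluster in the second Region to the global database.
rds_secondary = boto3.client("rds", region_name="us-west-2")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="critical-app-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="critical-app-global",
)

# Convert the DynamoDB table to a global table by creating a replica in the second Region.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
dynamodb.update_table(
    TableName="critical-app-table",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```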
A company is building a software-as-a-service (SaaS) solution on AWS. The company has deployed an Amazon API Gateway REST API with AWS Lambda integration in multiple AWS Regions and in the same production account.
The company offers tiered pricing that gives customers the ability to pay for the capacity to make a certain number of API calls per second. The premium tier offers up to 3,000 calls per second, and customers are identified by a unique API key. Several premium tier customers in various Regions report that they receive error responses of 429 Too Many Requests from multiple API methods during peak usage hours. Logs indicate that the Lambda function is never invoked.
What could be the cause of the error messages for these customers?
- A . The Lambda function reached its concurrency limit.
- B . The Lambda function reached its Region limit for concurrency.
- C . The company reached its API Gateway account limit for calls per second.
- D . The company reached its API Gateway default per-method limit for calls per second.
C
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html#apig-request-throttling-account-level-limits
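As a hedged illustration of why the account-level limit matters, the sketch below reads the Region-wide account throttle and creates a per-key usage plan; the plan name and values are assumptions, and a usage plan cannot raise a caller above the account-level rate:

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# The account-level throttle (steady-state rate and burst) applies across all
# APIs in the Region, on top of any per-key usage plan limits.
account = apigw.get_account()
print(account.get("throttleSettings"))  # e.g. {'burstLimit': 5000, 'rateLimit': 10000.0}

# A premium-tier usage plan keyed to API keys; customers throttled here see 429s,
# but so do customers who collectively exceed the account-level limit.
plan = apigw.create_usage_plan(
    name="premium-tier",
    throttle={"rateLimit": 3000.0, "burstLimit": 3000},
)
print(plan["id"])
```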
A Solutions Architect wants to make sure that only AWS users or roles with suitable permissions can access a new Amazon API Gateway endpoint. The Solutions Architect wants an end-to-end view of each request to analyze the latency of the request and create service maps.
How can the Solutions Architect design the API Gateway access control and perform request inspections?
- A . For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Enable the API caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray to trace and analyze user requests to API Gateway.
- B . For the API Gateway resource, set CORS to enabled and only return the company’s domain in Access-Control-Allow-Origin headers. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Use Amazon CloudWatch to trace and analyze user requests to API Gateway.
- C . Create an AWS Lambda function as the custom authorizer, ask the API client to pass the key and secret when making the call, and then use Lambda to validate the key/secret pair against the IAM system. Use AWS X-Ray to trace and analyze user requests to API Gateway.
- D . Create a client certificate for API Gateway. Distribute the certificate to the AWS users and roles that need to access the endpoint. Enable the API caller to pass the client certificate when accessing the endpoint. Use Amazon CloudWatch to trace and analyze user requests to API Gateway.
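Option A relies on callers signing requests with AWS Signature Version 4 so that AWS_IAM authorization on the method can be evaluated. A minimal client-side sketch of that signing step is shown below; the endpoint URL, stage, and Region are placeholders:

```python
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Sign a GET request with AWS Signature Version 4 so that an API Gateway method
# configured with AWS_IAM authorization will accept it.
url = "https://abc123defg.execute-api.us-east-1.amazonaws.com/prod/items"
credentials = boto3.Session().get_credentials()

aws_request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(aws_request)

signed = urllib.request.Request(url, headers=dict(aws_request.headers), method="GET")
with urllib.request.urlopen(signed) as response:
    print(response.status, response.read().decode())
```

The calling IAM user or role still needs execute-api:Invoke permission on the REST API resource, and enabling AWS X-Ray tracing on the stage provides the end-to-end latency view and service maps.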
A company wants to use AWS for disaster recovery for an on-premises application. The company has hundreds of Windows-based servers that run the application. All the servers mount a common share. The company has an RTO of 15 minutes and an RPO of 5 minutes. The solution must support native failover and fallback capabilities.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an AWS Storage Gateway File Gateway. Schedule daily Windows server backups. Save the data to Amazon S3. During a disaster, recover the on-premises servers from the backup. During failback, run the on-premises servers on Amazon EC2 instances.
- B . Create a set of AWS CloudFormation templates to create infrastructure. Replicate all data to Amazon Elastic File System (Amazon EFS) by using AWS DataSync. During a disaster, use AWS CodePipeline to deploy the templates to restore the on-premises servers. Fail back the data by using DataSync.
- C . Create an AWS Cloud Development Kit (AWS CDK) pipeline to stand up a multi-site active-active environment on AWS. Replicate data into Amazon S3 by using the s3 sync command. During a disaster, swap DNS endpoints to point to AWS. Fail back the data by using the s3 sync command.
- D . Use AWS Elastic Disaster Recovery to replicate the on-premises servers. Replicate data to an Amazon FSx for Windows File Server file system by using AWS DataSync. Mount the file system to AWS servers. During a disaster, fail over the on-premises servers to AWS. Fail back to new or existing servers by using Elastic Disaster Recovery.
A company runs an application on AWS. The application uses an Amazon Aurora MySQL database that is encrypted with the default AWS managed AWS KMS key.
The company must implement a solution to rotate the database encryption key every 180 days. The solution must provide a notification if the encryption key is noncompliant with this standard.
Which solution will meet these requirements?
- A . Configure the rotation period for the existing AWS managed KMS key to be 180 days. Implement the cmk-backing-key-rotation-enabled AWS Config managed rule for the existing KMS key. Configure AWS Config to use Amazon SNS to notify the security team if key rotation is noncompliant.
- B . Create a new AWS managed KMS key with automatic rotation set for 180 days. Take a snapshot of the database. Restore the snapshot to a new Aurora cluster that uses the new KMS key. Create an AWS Config custom rule that uses an AWS Lambda function to validate the key rotation period. Configure AWS Config to use Amazon SES to notify the security team if key encryption is noncompliant.
- C . Create a new customer managed KMS key with automatic rotation set for 180 days. Take a snapshot of the database. Restore the snapshot to a new Aurora cluster that uses the new KMS key. Create an AWS Config custom rule that uses an AWS Lambda function to validate the key rotation period. Configure AWS Config to use Amazon SNS to notify the security team if key encryption is noncompliant.
- D . Create a new customer managed KMS key with automatic rotation set for 180 days. Update the database to use the new KMS key for encryption. Implement the cmk-backing-key-rotation-enabled AWS Config managed rule for the new KMS key. Configure AWS Config to use Amazon SES to notify the security team if key rotation is noncompliant.
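Options C and D both hinge on a customer managed KMS key with a 180-day automatic rotation period. As a hedged sketch, and assuming an SDK and API version that supports the RotationPeriodInDays parameter, creating and configuring such a key might look like this:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a customer managed key and set automatic rotation to 180 days
# (AWS managed keys do not allow a custom rotation period).
key = kms.create_key(Description="aurora-mysql-encryption-key")
key_id = key["KeyMetadata"]["KeyId"]

kms.enable_key_rotation(KeyId=key_id, RotationPeriodInDays=180)

# Confirm the rotation configuration for the new key.
print(kms.get_key_rotation_status(KeyId=key_id))
```

An AWS Config rule evaluating the key's rotation settings, with Amazon SNS as the notification channel, would then flag the key if it drifts from the 180-day standard.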
