Practice Free SAP-C02 Exam Online Questions
A company maintains information on premises in approximately 1 million .csv files that are hosted on a VM. The data initially is 10 TB in size and grows at a rate of 1 TB each week. The company needs to automate backups of the data to the AWS Cloud.
Backups of the data must occur daily. The company needs a solution that applies custom filters to back up only a subset of the data that is located in designated source directories. The company has set up an AWS Direct Connect connection.
Which solution will meet the backup requirements with the LEAST operational overhead?
- A . Use the Amazon S3 CopyObject API operation with multipart upload to copy the existing data to Amazon S3. Use the CopyObject API operation to replicate new data to Amazon S3 daily.
- B . Create a backup plan in AWS Backup to back up the data to Amazon S3. Schedule the backup plan to run daily.
- C . Install the AWS DataSync agent as a VM that runs on the on-premises hypervisor. Configure a DataSync task to replicate the data to Amazon S3 daily.
- D . Use an AWS Snowball Edge device for the initial backup. Use AWS DataSync for incremental backups to Amazon S3 daily.
C
Explanation:
AWS DataSync is an online data transfer service that is designed to help customers get their data to and from AWS quickly, easily, and securely. Using DataSync, you can copy data from your on-premises NFS or SMB shares directly to Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server. DataSync uses a purpose-built, parallel transfer protocol for speeds up to 10x faster than open source tools. DataSync also has built-in verification of data both in flight and at rest, so you can be confident that your data was transferred successfully. DataSync allows you to apply filters to select which files or folders to transfer, based on file name, size, or modification time. You can also schedule your DataSync tasks to run daily, weekly, or monthly, or on demand. DataSync is integrated with AWS Direct Connect, so you can take advantage of your existing private connection to AWS. DataSync is also a fully managed service, so you do not need to provision, configure, or maintain any infrastructure for data transfer.
Option A is incorrect because the Amazon S3 CopyObject API operation does not support filtering or scheduling, and it would require you to write and maintain custom scripts to automate the backup process.
Option B is incorrect because AWS Backup does not support filtering or transferring data from on-premises sources to Amazon S3. AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services.
Option D is incorrect because AWS Snowball Edge is a physical device that is used for offline data transfer when network bandwidth is limited or unavailable. It is not suitable for daily backups or incremental transfers. AWS Snowball Edge also does not support filtering or scheduling.
1: Considering four different replication options for data in Amazon S3
2: Protect your file and backup archives using AWS DataSync and Amazon S3 Glacier
3: AWS DataSync FAQs
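The filtering and scheduling described in the explanation can be expressed with a few API calls. Below is a minimal boto3 sketch, assuming hypothetical location ARNs and directory names; the source (on-premises share) and destination (S3) locations are assumed to have been created already.

```python
import boto3

datasync = boto3.client("datasync")

# Hypothetical location ARNs created beforehand (on-premises share and S3 bucket).
SOURCE_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-source"
DEST_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-s3"

# Create a task that runs daily and copies only the designated source directories.
response = datasync.create_task(
    SourceLocationArn=SOURCE_LOCATION_ARN,
    DestinationLocationArn=DEST_LOCATION_ARN,
    Name="daily-csv-backup",
    # Custom filter: include only the designated directories (pipe-delimited patterns).
    Includes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/finance|/hr/reports"}],
    # Run once per day (cron expression evaluated in UTC).
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
    Options={"VerifyMode": "ONLY_FILES_TRANSFERRED", "OverwriteMode": "ALWAYS"},
)
print("Created DataSync task:", response["TaskArn"])
```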
A company has many separate AWS accounts and uses no central billing or management. Each AWS account hosts services for different departments in the company. The company has a Microsoft Azure Active Directory that is deployed.
A solutions architect needs to centralize billing and management of the company’s AWS accounts. The company wants to start using identity federation instead of manual user management. The company also wants to use temporary credentials instead of long-lived access keys.
Which combination of steps will meet these requirements? (Select THREE)
- A . Create a new AWS account to serve as a management account. Deploy an organization in AWS Organizations. Invite each existing AWS account to join the organization. Ensure that each account accepts the invitation.
- B . Configure each AWS Account’s email address to be aws+<account id>@example.com so that account management email messages and invoices are sent to the same place.
- C . Deploy AWS IAM Identity Center (AWS Single Sign-On) in the management account. Connect IAM Identity Center to the Azure Active Directory. Configure IAM Identity Center for automatic synchronization of users and groups.
- D . Deploy an AWS Managed Microsoft AD directory in the management account. Share the directory with all other accounts in the organization by using AWS Resource Access Manager (AWS RAM).
- E . Create AWS IAM Identity Center (AWS Single Sign-On) permission sets. Attach the permission sets to the appropriate IAM Identity Center groups and AWS accounts.
- F . Configure AWS Identity and Access Management (IAM) in each AWS account to use AWS Managed Microsoft AD for authentication and authorization.
A company uses AWS CloudFormation to deploy applications within multiple VPCs that are all attached to a transit gateway. Each VPC that sends traffic to the public internet must send the traffic through a shared services VPC. Each subnet within a VPC uses the default VPC route table, and the traffic is routed to the transit gateway. The transit gateway uses its default route table for any VPC attachment.
A security audit reveals that an Amazon EC2 instance that is deployed within a VPC can communicate with an EC2 instance that is deployed in any of the company’s other VPCs. A solutions architect needs to limit the traffic between the VPCs. Each VPC must be able to communicate only with a predefined, limited set of authorized VPCs.
What should the solutions architect do to meet these requirements?
- A . Update the network ACL of each subnet within a VPC to allow outbound traffic only to the authorized VPCs. Remove all deny rules except the default deny rule.
- B . Update all the security groups that are used within a VPC to deny outbound traffic to security groups that are used within the unauthorized VPCs.
- C . Create a dedicated transit gateway route table for each VPC attachment. Route traffic only to the authorized VPCs.
- D . Update the main route table of each VPC to route traffic only to the authorized VPCs through the transit gateway.
C
Explanation:
You can segment your network by creating multiple route tables in an AWS Transit Gateway and associate Amazon VPCs and VPNs to them. This will allow you to create isolated networks inside an AWS Transit Gateway similar to virtual routing and forwarding (VRFs) in traditional networks. The AWS Transit Gateway will have a default route table. The use of multiple route tables is optional.
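To illustrate option C, here is a minimal boto3 sketch that creates a dedicated transit gateway route table for one VPC attachment, associates the attachment with it, and adds a route only to an authorized VPC; all IDs and the CIDR are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for one VPC attachment and one authorized peer VPC attachment.
TGW_ID = "tgw-0123456789abcdef0"
THIS_VPC_ATTACHMENT = "tgw-attach-0aaa1111bbbb2222c"
AUTHORIZED_VPC_ATTACHMENT = "tgw-attach-0ddd3333eeee4444f"
AUTHORIZED_VPC_CIDR = "10.1.0.0/16"

# Dedicated route table for this attachment (instead of the shared default table).
rt = ec2.create_transit_gateway_route_table(TransitGatewayId=TGW_ID)
rt_id = rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# Associate the VPC attachment with its dedicated route table.
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=THIS_VPC_ATTACHMENT,
)

# Add a route only for the authorized VPC; no other VPC CIDRs are reachable.
ec2.create_transit_gateway_route(
    DestinationCidrBlock=AUTHORIZED_VPC_CIDR,
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=AUTHORIZED_VPC_ATTACHMENT,
)
```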
How should EC2 instances in AWS synchronize their clocks with an on-premises atomic clock NTP server, with the least administrative overhead?
- A . Configure a DHCP options set with the on-prem NTP server.
- B . Use a custom AMI with Amazon Time Sync.
- C . Deploy a 3rd-party NTP server from Marketplace.
- D . Create an IPsec VPN tunnel to sync over Direct Connect.
A
Explanation:
Option A is the most lightweight and scalable approach. By updating the VPC’s DHCP options set, you automatically configure all EC2 instances to use your on-premises NTP server. No extra software or infrastructure is required.
VPC DHCP Options Docs
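A minimal boto3 sketch of this approach; the NTP server IP and VPC ID are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical on-premises NTP server and VPC ID.
ONPREM_NTP_SERVER = "10.20.30.40"
VPC_ID = "vpc-0123456789abcdef0"

# Create a DHCP options set that points instances at the on-premises NTP server,
# while keeping the default Amazon-provided DNS.
dhcp = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "ntp-servers", "Values": [ONPREM_NTP_SERVER]},
        {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
    ]
)
dhcp_id = dhcp["DhcpOptions"]["DhcpOptionsId"]

# Associate the options set with the VPC; instances pick it up on DHCP lease renewal.
ec2.associate_dhcp_options(DhcpOptionsId=dhcp_id, VpcId=VPC_ID)
```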
A company has an application that stores user-uploaded videos in an Amazon S3 bucket that uses S3 Standard storage. Users access the videos frequently in the first 180 days after the videos are uploaded. Access after 180 days is rare. Named users and anonymous users access the videos. Most of the videos are more than 100 MB in size. Users often have poor internet connectivity when they upload videos, resulting in failed uploads. The company uses multipart uploads for the videos. A solutions architect needs to optimize the S3 costs of the application.
Which combination of actions will meet these requirements? (Select TWO.)
- A . Configure the S3 bucket to be a Requester Pays bucket.
- B . Use S3 Transfer Acceleration to upload the videos to the S3 bucket.
- C . Create an S3 Lifecycle configuration to expire incomplete multipart uploads 7 days after initiation.
- D . Create an S3 Lifecycle configuration to transition objects to S3 Glacier Instant Retrieval after 1 day.
- E . Create an S3 Lifecycle configuration to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days.
C, E
Explanation:
Configuring an S3 Lifecycle policy to expire incomplete multipart uploads after 7 days prevents storage of partially uploaded objects, avoiding unnecessary costs from failed uploads. Additionally, implementing a lifecycle policy to transition objects to S3 Standard-IA after 180 days ensures older videos that are rarely accessed move to a lower-cost storage class, significantly reducing storage costs.
These measures are aligned with AWS cost optimization best practices for S3 data lifecycle management.
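A minimal boto3 sketch of a lifecycle configuration that combines both actions (C and E); the bucket name is a hypothetical placeholder.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-video-uploads",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                # Option C: clean up failed multipart uploads after 7 days.
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
            {
                # Option E: move rarely accessed videos to Standard-IA after 180 days.
                "ID": "transition-to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 180, "StorageClass": "STANDARD_IA"}],
            },
        ]
    },
)
```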
A company has an organization in AWS Organizations that includes a separate AWS account for each of the company’s departments. Application teams from different departments develop and deploy solutions independently.
The company wants to reduce compute costs and manage costs appropriately across departments. The company also wants to improve visibility into billing for individual departments. The company does not want to lose operational flexibility when the company selects compute resources.
Which solution will meet these requirements?
- A . Use AWS Budgets for each department. Use Tag Editor to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
- B . Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use SCPs to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
- C . Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use Tag Editor to apply tags to appropriate resources. Purchase Compute Savings Plans.
- D . Use AWS Budgets for each department. Use SCPs to apply tags to appropriate resources. Purchase Compute Savings Plans.
A solutions architect has an operational workload deployed on Amazon EC2 instances in an Auto Scaling Group. The VPC architecture spans two Availability Zones (AZ) with a subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises environment and connectivity cannot be interrupted. The maximum size of the Auto Scaling group is 20 instances in service.
The VPC IPv4 addressing is as follows:
VPC CIDR: 10.0.0.0/23
AZ1 subnet CIDR: 10.0.0.0/24
AZ2 subnet CIDR: 10.0.1.0/24
Since deployment, a third AZ has become available in the Region. The solutions architect wants to adopt the new AZ without adding additional IPv4 address space and without service downtime.
Which solution will meet these requirements?
- A . Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space. Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
- B . Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three new subnets.
- C . Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling group to target the new subnets in the new VPC.
- D . Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have half the previous address space. Adjust the Auto Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
A
Explanation:
A subnet’s IPv4 CIDR block cannot be modified after the subnet is created, so the only way to free address space without adding a new CIDR block to the VPC is to drain each subnet (by removing it from the Auto Scaling group), delete it, and re-create it with a smaller CIDR. Option A does this one AZ at a time, so the Auto Scaling group always has healthy instances and connectivity to the on-premises environment is never interrupted.
https://repost.aws/knowledge-center/vpc-ip-address-range
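A hedged boto3 sketch of one iteration of this approach (shift the Auto Scaling group off AZ1, re-create the AZ1 subnet with half the space, then target it again); all IDs, names, and the Availability Zone are hypothetical, and the same pattern repeats for AZ2 and AZ3.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Hypothetical identifiers.
ASG_NAME = "operational-workload-asg"
VPC_ID = "vpc-0123456789abcdef0"
OLD_AZ1_SUBNET = "subnet-0aaa1111bbbb2222c"
AZ2_SUBNET = "subnet-0ddd3333eeee4444f"

# 1. Point the Auto Scaling group at AZ2 only so AZ1 can be drained without downtime.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME, VPCZoneIdentifier=AZ2_SUBNET
)

# 2. Once AZ1 has no instances, delete the old /24 subnet and re-create it as a /25.
ec2.delete_subnet(SubnetId=OLD_AZ1_SUBNET)
new_az1 = ec2.create_subnet(
    VpcId=VPC_ID, CidrBlock="10.0.0.0/25", AvailabilityZone="us-east-1a"
)
new_az1_subnet = new_az1["Subnet"]["SubnetId"]

# 3. Add the new AZ1 subnet back to the Auto Scaling group.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    VPCZoneIdentifier=f"{new_az1_subnet},{AZ2_SUBNET}",
)
```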
A company has separate AWS accounts for each of its departments. The accounts are in OUs that are in an organization in AWS Organizations. The IT department manages a private certificate authority (CA) by using AWS Private Certificate Authority in its account.
The company needs a solution to allow developer teams in the other departmental accounts to access the private CA to issue certificates for their applications. The solution must maintain appropriate security boundaries between accounts.
Which solution will meet these requirements?
- A . Create an AWS Lambda function in the IT account. Program the Lambda function to use the AWS Private CA API to export and import a private CA certificate to each department account. Use Amazon EventBridge to invoke the Lambda function on a schedule.
- B . Create an IAM identity-based policy that allows cross-account access to AWS Private CA. In the IT account, attach this policy to the private CA. Grant access to AWS Private CA by using the AWS Private CA API.
- C . In the organization’s management account, create an AWS CloudFormation stack to set up a resource-based delegation policy.
- D . Use AWS Resource Access Manager (AWS RAM) in the IT account to enable sharing in the organization. Create a resource share. Add the private CA resource to the resource share. Grant the department OUs access to the shared CA.
D
Explanation:
D is correct because AWS Private CA supports resource sharing through AWS RAM, which allows you to share the CA across accounts in your AWS Organization securely. It ensures the CA private key remains secure and is never exported.
A is invalid because you cannot export a CA’s private key, and importing/exporting CAs this way is unsupported.
B is incorrect because IAM policies alone are not sufficient to share private CAs across accounts.
C is not applicable because resource-based delegation policies do not apply to AWS Private CA.
Reference: Share your private CA using AWS RAM
Using AWS RAM for cross-account sharing
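A minimal boto3 sketch of the RAM share created from the IT account; the CA ARN and OU ARN are hypothetical placeholders.

```python
import boto3

ram = boto3.client("ram")

# Hypothetical ARNs: the private CA in the IT account and a departmental OU.
PRIVATE_CA_ARN = (
    "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/"
    "11111111-2222-3333-4444-555555555555"
)
DEPARTMENT_OU_ARN = "arn:aws:organizations::999999999999:ou/o-exampleorgid/ou-ab12-cdef3456"

# One-time step: enable resource sharing within the organization.
ram.enable_sharing_with_aws_organization()

# Share the private CA with the departmental OU; the CA private key never leaves the IT account.
share = ram.create_resource_share(
    name="shared-private-ca",
    resourceArns=[PRIVATE_CA_ARN],
    principals=[DEPARTMENT_OU_ARN],
    allowExternalPrincipals=False,  # restrict the share to accounts in the organization
)
print("Resource share ARN:", share["resourceShare"]["resourceShareArn"])
```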
A company needs to optimize the cost of backups for Amazon Elastic File System (Amazon EFS). A solutions architect has already configured a backup plan in AWS Backup for the EFS backups. The backup plan contains a rule with a lifecycle configuration to transition EFS backups to cold storage after 7 days and to keep the backups for an additional 90 days.
After 1 month, the company reviews its EFS storage costs and notices an increase in the EFS backup costs. The EFS backup cold storage produces almost double the cost of the EFS warm backup storage.
What should the solutions architect do to optimize the cost?
- A . Modify the backup rule’s lifecycle configuration to move the EFS backups to cold storage after 1 day. Set the backup retention period to 30 days.
- B . Modify the backup rule’s lifecycle configuration to move the EFS backups to cold storage after 8 days. Set the backup retention period to 30 days.
- C . Modify the backup rule’s lifecycle configuration to move the EFS backups to cold storage after 1 day. Set the backup retention period to 90 days.
- D . Modify the backup rule’s lifecycle configuration to move the EFS backups to cold storage after 8 days. Set the backup retention period to 98 days.
A
Explanation:
The cost of EFS backup cold storage is $0.01 per GB-month, whereas the cost of EFS backup warm storage is $0.05 per GB-month, so moving backups to cold storage as soon as possible reduces the storage cost. However, cold storage backups must be retained for a minimum of 90 days; backups deleted earlier incur a pro-rated charge equal to the cold storage charge for the remaining days. Therefore, setting the backup retention period to 30 days incurs a penalty of roughly 60 days of cold storage cost for each deleted backup. Even with this penalty, the total cost is still lower than the current configuration, which keeps each backup in warm storage for 7 days and then in cold storage for an additional 90 days. Therefore, option A is the most cost-effective solution.
A company needs to aggregate Amazon CloudWatch logs from its AWS accounts into one central logging account. The collected logs must remain in the AWS Region of creation. The central logging account will then process the logs, normalize the logs into standard output format, and stream the output logs to a security tool for more processing.
A solutions architect must design a solution that can handle a large volume of logging data that needs to be ingested. Less logging will occur outside normal business hours than during normal business hours. The logging solution must scale with the anticipated load. The solutions architect has decided to use an AWS Control Tower design to handle the multi-account logging process.
Which combination of steps should the solutions architect take to meet the requirements? (Select THREE.)
- A . Create a destination Amazon Kinesis data stream in the central logging account.
- B . Create a destination Amazon Simple Queue Service (Amazon SQS) queue in the central logging account.
- C . Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to the Amazon Kinesis data stream. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a subscription filter for each log group to send data to the Kinesis data stream.
- D . Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to the Amazon Simple Queue Service (Amazon SQS) queue. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a single subscription filter for all log groups to send data to the SQS queue.
- E . Create an AWS Lambda function. Program the Lambda function to normalize the logs in the central logging account and to write the logs to the security tool.
- F . Create an AWS Lambda function. Program the Lambda function to normalize the logs in the member accounts and to write the logs to the security tool.
