Practice Free SAP-C02 Exam Online Questions
A solutions architect is investigating an issue in which a company cannot establish new sessions in Amazon WorkSpaces. An initial analysis indicates that the issue involves user profiles. The Amazon WorkSpaces environment is configured to use Amazon FSx for Windows File Server as the profile share storage. The FSx for Windows File Server file system is configured with 10 TB of storage. The solutions architect discovers that the file system has reached its maximum capacity. The solutions architect must ensure that users can regain access. The solution also must prevent the problem from occurring again.
Which solution will meet these requirements?
- A . Remove old user profiles to create space. Migrate the user profiles to an Amazon FSx for Lustre file system.
- B . Increase capacity by using the update-file-system command. Implement an Amazon CloudWatch metric that monitors free space. Use Amazon EventBridge to invoke an AWS Lambda function to increase capacity as required.
- C . Monitor the file system by using the FreeStorageCapacity metric in Amazon CloudWatch. Use AWS Step Functions to increase the capacity as required.
- D . Remove old user profiles to create space. Create an additional FSx for Windows File Server file system. Update the user profile redirection for 50% of the users to use the new file system.
B
Explanation:
Option B restores access immediately by increasing capacity with the update-file-system command, and it prevents the issue from recurring by monitoring free space (the FreeStorageCapacity metric) in Amazon CloudWatch and using Amazon EventBridge to invoke an AWS Lambda function that increases capacity as required. This ensures that the file system always has enough free space to store user profiles and never reaches maximum capacity again.
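For illustration only, a minimal sketch of the capacity-increasing Lambda function described in option B, assuming it is invoked by an EventBridge rule when a CloudWatch alarm on the AWS/FSx FreeStorageCapacity metric goes into ALARM state; the file system ID and growth factor are hypothetical:

```python
import boto3

fsx = boto3.client("fsx")

FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # hypothetical file system ID
GROWTH_FACTOR = 1.2                       # grow by 20% each time the alarm fires


def lambda_handler(event, context):
    """Invoked by EventBridge when the FreeStorageCapacity alarm fires."""
    current = fsx.describe_file_systems(FileSystemIds=[FILE_SYSTEM_ID])
    capacity_gib = current["FileSystems"][0]["StorageCapacity"]

    # FSx for Windows File Server requires the new capacity to be at least
    # 10% larger than the current capacity.
    new_capacity_gib = int(capacity_gib * GROWTH_FACTOR)

    fsx.update_file_system(
        FileSystemId=FILE_SYSTEM_ID,
        StorageCapacity=new_capacity_gib,
    )
    return {"previous_gib": capacity_gib, "requested_gib": new_capacity_gib}
```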
A solutions architect must create a business case for migration of a company’s on-premises data center to the AWS Cloud. The solutions architect will use a configuration management database (CMDB) export of all the company’s servers to create the case.
Which solution will meet these requirements MOST cost-effectively?
- A . Use AWS Well-Architected Tool to import the CMDB data to perform an analysis and generate recommendations.
- B . Use Migration Evaluator to perform an analysis. Use the data import template to upload the data from the CMDB export.
- C . Implement resource matching rules. Use the CMDB export and the AWS Price List Bulk API to query CMDB data against AWS services in bulk.
- D . Use AWS Application Discovery Service to import the CMDB data to perform an analysis.
B
Explanation:
https://aws.amazon.com/blogs/architecture/accelerating-your-migration-to-aws/
Build a business case with AWS Migration Evaluator. The foundation for a successful migration starts with a defined business objective (for example, growth or new offerings). To enable the business drivers, the established business case must then be aligned to a technical capability (increased security and elasticity). AWS Migration Evaluator (formerly known as TSO Logic) can help you meet these objectives.
To get started, you can upload exports from third-party tools such as a configuration management database (CMDB) or install a collector agent to monitor. After data collection you receive an assessment that includes a projected cost estimate and the savings of running your on-premises workloads in the AWS Cloud. The estimate summarizes the projected costs to re-host on AWS based on usage patterns, broken down by infrastructure and software licenses. With this information, you can make the business case and plan next steps.
A company creates an AWS Control Tower landing zone to manage and govern a multi-account AWS environment. The company’s security team will deploy preventive controls and detective controls to monitor AWS services across all the accounts. The security team needs a centralized view of the security state of all the accounts.
Which solution will meet these requirements?
- A . From the AWS Control Tower management account, use AWS CloudFormation StackSets to deploy an AWS Config conformance pack to all accounts in the organization
- B . Enable Amazon Detective for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Detective.
- C . From the AWS Control Tower management account, deploy an AWS CloudFormation stack set that uses the automatic deployment option to enable Amazon Detective for the organization
- D . Enable AWS Security Hub for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Security Hub.
D
Explanation:
Enable AWS Security Hub:
Navigate to the AWS Security Hub console in your management account and enable Security Hub. This process integrates Security Hub with AWS Control Tower, allowing you to manage and monitor security findings across all accounts within your organization.
Designate a Delegated Administrator:
In AWS Organizations, designate one of the AWS accounts as the delegated administrator for Security Hub. This account will have the responsibility to manage and oversee the security posture of all accounts within the organization.
Deploy Controls Across Accounts:
Use AWS Security Hub to automatically enable security controls across all AWS accounts in the organization. This provides a centralized view of the security state of all accounts and ensures continuous monitoring and compliance.
Utilize AWS Security Hub Features:
Leverage the capabilities of Security Hub to aggregate security alerts, run continuous security checks, and generate findings based on the AWS Foundational Security Best Practices. Security Hub integrates with other AWS services like AWS Config, Amazon GuardDuty, and AWS IAM Access Analyzer to enhance security monitoring and remediation.
By integrating AWS Security Hub with AWS Control Tower and using a delegated administrator account, you can achieve a centralized and comprehensive view of your organization’s security posture, facilitating effective management and remediation of security issues.
Reference
AWS Security Hub now integrates with AWS Control Tower
AWS Control Tower and Security Hub Integration
AWS Security Hub Features
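As a rough boto3 sketch of the calls behind option D, assuming it is run with management-account credentials and that the delegated administrator account ID is a placeholder:

```python
import boto3

SECURITY_ACCOUNT_ID = "111122223333"  # placeholder delegated administrator account

securityhub = boto3.client("securityhub")  # management-account credentials

# Enable Security Hub in the management account, then delegate administration
# of Security Hub for the organization to the security account.
securityhub.enable_security_hub(EnableDefaultStandards=True)
securityhub.enable_organization_admin_account(AdminAccountId=SECURITY_ACCOUNT_ID)

# From the delegated administrator account (use credentials for that account
# here), automatically enable Security Hub in new member accounts.
admin_securityhub = boto3.client("securityhub")
admin_securityhub.update_organization_configuration(AutoEnable=True)
```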
A security engineer determined that an existing application retrieves credentials to an Amazon RDS for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve security:
The database must use strong, randomly generated passwords stored in a secure AWS managed service.
The application resources must be deployed through AWS CloudFormation.
The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application.
Which resources specified in the CloudFormation template will meet the security engineer’s requirements with the LEAST amount of operational overhead?
- A . Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
- B . Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
- C . Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
- D . Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database password every 90 days.
A
Explanation:
AWS Secrets Manager can generate strong random passwords, and the AWS::SecretsManager::RotationSchedule CloudFormation resource rotates a secret on a defined schedule by invoking a rotation Lambda function; Systems Manager Parameter Store has no equivalent rotation resource. Option A therefore meets all of the requirements with the least operational overhead.
https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_cloudformation.html
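The CloudFormation template in option A would declare an AWS::SecretsManager::Secret (with a GenerateSecretString property) and an AWS::SecretsManager::RotationSchedule that points at the rotation Lambda function. A rough boto3 equivalent of those two resources, where the secret name and Lambda ARN are placeholders:

```python
import json

import boto3

secretsmanager = boto3.client("secretsmanager")

# Equivalent of AWS::SecretsManager::Secret with GenerateSecretString:
# let Secrets Manager generate a strong random password for the RDS user.
password = secretsmanager.get_random_password(
    PasswordLength=32,
    ExcludeCharacters='"@/\\',
)["RandomPassword"]

secret = secretsmanager.create_secret(
    Name="prod/mysql/app-user",  # placeholder secret name
    SecretString=json.dumps({"username": "app_user", "password": password}),
)

# Equivalent of AWS::SecretsManager::RotationSchedule: attach the rotation
# Lambda function and rotate the secret every 90 days.
secretsmanager.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:mysql-rotation",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 90},
)
```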
A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization.
The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.
Which solution meets these requirements?
- A . Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
- B . Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
- C . Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
- D . Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
C
Explanation:
Option C is correct because hosting the web application in Amazon S3, storing the uploaded videos in Amazon S3, and using S3 event notifications to publish events to the SQS queue reduces the operational overhead of managing EC2 instances and EBS volumes. Amazon S3 can serve static content such as HTML, CSS, JavaScript, and media files directly from S3 buckets. Amazon S3 can also trigger AWS Lambda functions through S3 event notifications when new objects are created or existing objects are updated or deleted. AWS Lambda can process the SQS queue with a function that calls the Amazon Rekognition API to categorize the videos. This solution eliminates the need for custom recognition software and third-party dependencies [3][4][5].
Reference:
1: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
2: https://aws.amazon.com/efs/pricing/
3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
4: https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
5: https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html
6: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
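A minimal sketch of the Lambda function in option C, assuming the SQS queue receives standard S3 event notifications and the asynchronous Rekognition Video label-detection results are retrieved later (for example with get_label_detection or via an SNS notification channel):

```python
import json
import urllib.parse

import boto3

rekognition = boto3.client("rekognition")


def lambda_handler(event, context):
    """Triggered by SQS; each message body is an S3 event notification."""
    job_ids = []
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            # Object keys in S3 event notifications are URL-encoded.
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])

            # Start an asynchronous Rekognition Video label-detection job
            # for the uploaded video.
            response = rekognition.start_label_detection(
                Video={"S3Object": {"Bucket": bucket, "Name": key}},
                MinConfidence=80,
            )
            job_ids.append(response["JobId"])
    return {"jobIds": job_ids}
```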
A company has developed a new release of a popular video game and wants to make it available for public download. The new release package is approximately 5 GB in size. The company provides downloads for existing releases from a Linux-based publicly facing FTP site hosted in an on-premises data center. The company expects the new release will be downloaded by users worldwide. The company wants a solution that provides improved download performance and low transfer costs regardless of a user’s location.
Which solution will meet these requirements?
- A . Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
- B . Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
- C . Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront for the website. Publish the game download URL for users to download the package.
- D . Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for the S3 bucket. Publish the game download URL for users to download the package.
C
Explanation:
Create an S3 Bucket:
Navigate to Amazon S3 in the AWS Management Console and create a new S3 bucket to store the game files. Enable static website hosting on this bucket.
Upload Game Files:
Upload the 5 GB game release package to the S3 bucket. Ensure that the files are publicly accessible if required for download.
Configure Amazon Route 53:
Set up a new domain or subdomain in Amazon Route 53 and point it to the S3 bucket. This allows users to access the game files using a custom URL.
Use Amazon CloudFront:
Create a CloudFront distribution with the S3 bucket as the origin. CloudFront is a content delivery network (CDN) that caches content at edge locations worldwide, improving download performance and reducing latency for users regardless of their location.
Publish the Download URL:
Use the CloudFront distribution URL as the download link for users to access the game files.
CloudFront will handle the efficient distribution and caching of the content.
This solution leverages the scalability of Amazon S3 and the performance benefits of CloudFront to provide an optimal download experience for users globally while minimizing costs.
Reference
Amazon CloudFront Documentation
Amazon S3 Static Website Hosting
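A hedged boto3 sketch of the main provisioning steps above; the bucket name, file name, and managed cache policy ID are assumptions, and in practice the bucket would usually be kept private behind an origin access control rather than made public. A Route 53 alias record can then point a custom domain at the distribution.

```python
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

BUCKET = "game-downloads-example"  # placeholder bucket name

# Upload the ~5 GB release package; boto3's upload_file automatically uses
# multipart upload for large objects.
s3.upload_file("game-release-v2.zip", BUCKET, "releases/game-release-v2.zip")

# Put CloudFront in front of the bucket so downloads are served from edge
# locations close to each user.
distribution = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "game-release-v2",
        "Comment": "Worldwide game download distribution",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "game-s3-origin",
                    "DomainName": f"{BUCKET}.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "game-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Assumed ID of the managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print("Download URL base:", distribution["Distribution"]["DomainName"])
```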
A company needs to migrate its customer transactions database from on premises to AWS. The database resides on an Oracle DB instance that runs on a Linux server. According to a new security requirement, the company must rotate the database password each year.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Convert the database to Amazon DynamoDB by using the AWS Schema Conversion Tool (AWS SCT). Store the password in AWS Systems Manager Parameter Store. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function for yearly password rotation.
- B . Migrate the database to Amazon RDS for Oracle. Store the password in AWS Secrets Manager. Turn on automatic rotation. Configure a yearly rotation schedule.
- C . Migrate the database to an Amazon EC2 instance. Use AWS Systems Manager Parameter Store to keep and rotate the connection string by using an AWS Lambda function on a yearly schedule
- D . Migrate the database to Amazon Neptune by using the AWS Schema Conversion Tool (AWS SCT). Create an Amazon CloudWatch alarm to invoke an AWS Lambda function for yearly password rotation.
B
Explanation:
Migrating to Amazon RDS for Oracle keeps the existing database engine, and storing the credentials in AWS Secrets Manager with automatic rotation on a yearly schedule satisfies the rotation requirement without any custom code, which is the least operational overhead.
A company operates a fleet of servers on premises and operates a fleet of Amazon EC2 instances in its organization in AWS Organizations. The company’s AWS accounts contain hundreds of VPCs. The company wants to connect its AWS accounts to its on-premises network. AWS Site-to-Site VPN connections are already established to a single AWS account. The company wants to control which VPCs can communicate with other VPCs.
Which combination of steps will achieve this level of control with the LEAST operational effort? (Choose three.)
- A . Create a transit gateway in an AWS account. Share the transit gateway across accounts by using AWS Resource Access Manager (AWS RAM).
- B . Configure attachments to all VPCs and VPNs.
- C . Set up transit gateway route tables. Associate the VPCs and VPNs with the route tables.
- D . Configure VPC peering between the VPCs.
- E . Configure attachments between the VPCs and VPNs.
- F . Set up route tables on the VPCs and VPNs.
A company runs a highly available data collection application on Amazon EC2 in the eu-north-1 Region. The application collects data from end-user devices and writes records to an Amazon Kinesis data stream and a set of AWS Lambda functions that process the records. The company persists the output of the record processing to an Amazon S3 bucket in eu-north-1. The company uses the data in the S3 bucket as a data source for Amazon Athena.
The company wants to increase its global presence. A solutions architect must launch the data
collection capabilities in the sa-east-1 and ap-northeast-1 Regions. The solutions architect deploys the application, the Kinesis data stream, and the Lambda functions in the two new Regions. The solutions architect keeps the S3 bucket in eu-north-1 to meet a requirement to centralize the data analysis.
During testing of the new setup, the solutions architect notices a significant lag on the arrival of data from the new Regions to the S3 bucket.
Which solution will improve this lag time the MOST?
- A . In each of the two new Regions, set up the Lambda functions to run in a VPC. Set up an S3 gateway endpoint in that VPC.
- B . Turn on S3 Transfer Acceleration on the S3 bucket in eu-north-1. Change the application to use the new S3 accelerated endpoint when the application uploads data to the S3 bucket.
- C . Create an S3 bucket in each of the two new Regions. Set the application in each new Region to upload to its respective S3 bucket. Set up S3 Cross-Region Replication to replicate data to the S3 bucket in eu-north-1.
- D . Increase the memory requirements of the Lambda functions to ensure that they have multiple cores available. Use the multipart upload feature when the application uploads data to Amazon S3 from Lambda.
A company runs a simple Linux application on Amazon EKS by using nodes of the M6i (general purpose) instance type. The company has an EC2 Instance Savings Plan for the M6i family that will expire soon.
A solutions architect must minimize the EKS compute costs when the Savings Plan expires.
Which combination of steps will meet this requirement? (Select THREE.)
- A . Rebuild the application container images to support ARM64 architecture.
- B . Rebuild the application container images to support containers.
- C . Migrate the EKS nodes to the most recent generation of Graviton-based instances.
- D . Replace the EKS nodes with the most recent generation of x86_64 instances.
- E . Purchase a new EC2 Instance Savings Plan for the newly selected Graviton instance family.
- F . Purchase a new EC2 Instance Savings Plan for the newly selected x86_64 instance family.
A, C, E
Explanation:
To minimize EKS compute costs when the Savings Plan expires, migrate the worker nodes to the most recent generation of Graviton-based (ARM64) instances, which offer better price/performance than comparable x86_64 instances. Because the current container images target the x86_64-based M6i nodes, they must be rebuilt (or published as multi-architecture images) to run on ARM64. Finally, purchase a new EC2 Instance Savings Plan for the selected Graviton instance family to lock in the discounted rate for the new fleet.

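As an illustrative sketch only, once the ARM64 images are available (option A), a managed node group on Graviton instances could be added with a call like the one below; the cluster name, subnet, node role, and instance type are placeholders, and the same change can also be made with eksctl or infrastructure as code. The new Savings Plan (option E) is purchased separately in the Billing and Cost Management console.

```python
import boto3

eks = boto3.client("eks")

# Add a managed node group backed by Graviton (ARM64) instances. After the
# workloads are shifted onto it, the old x86_64 (M6i) node group can be
# drained and deleted.
eks.create_nodegroup(
    clusterName="demo-cluster",                # placeholder cluster name
    nodegroupName="graviton-general-purpose",
    subnets=["subnet-0123456789abcdef0"],      # placeholder subnet IDs
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",  # placeholder role
    amiType="AL2_ARM_64",                      # ARM64 EKS-optimized AMI
    instanceTypes=["m7g.large"],               # current-generation Graviton
    scalingConfig={"minSize": 2, "maxSize": 6, "desiredSize": 2},
)
```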