Practice Free SAP-C02 Exam Online Questions
A software company needs to create short-lived test environments to test pull requests as part of its development process. Each test environment consists of a single Amazon EC2 instance that is in an Auto Scaling group.
The test environments must be able to communicate with a central server to report test results. The central server is located in an on-premises data center. A solutions architect must implement a solution so that the company can create and delete test environments without any manual intervention. The company has created a transit gateway with a VPN attachment to the on-premises network.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create an AWS CloudFormation template that contains a transit gateway attachment and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets to deploy a new stack for each VPC in the account. Deploy a new VPC for each test environment.
- B . Create a single VPC for the test environments. Include a transit gateway attachment and related routing configurations. Use AWS CloudFormation to deploy all test environments into the VPC.
- C . Create a new OU in AWS Organizations for testing. Create an AWS CloudFormation template that contains a VPC, necessary networking resources, a transit gateway attachment, and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets for deployments into each account under the testing OU. Create a new account for each test environment.
- D . Convert the test environment EC2 instances into Docker images. Use AWS CloudFormation to configure an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in a new VPC, create a transit gateway attachment, and create related routing configurations. Use Kubernetes to manage the deployment and lifecycle of the test environments.
B
Explanation:
This option allows the company to use a single VPC to host multiple test environments that are isolated from each other by using different subnets and security groups. By including a transit gateway attachment and related routing configurations, the company can enable the test environments to communicate with the central server in the on-premises data center through a VPN connection. By using AWS CloudFormation to deploy all test environments into the VPC, the company can automate the creation and deletion of test environments without any manual intervention. This option also minimizes the operational overhead by reducing the number of VPCs, accounts, and resources that need to be managed.
Working with VPCs and subnets
Working with transit gateways
Working with AWS CloudFormation stacks
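To illustrate the automation in option B, the pull request pipeline can create and later delete a small CloudFormation stack per test environment inside the shared VPC. The following boto3 sketch is illustrative only; the stack naming scheme, Region, template location, and parameter names are assumptions, not values from the question.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # placeholder Region

def create_test_environment(pr_number: str) -> str:
    """Launch one test environment (EC2 instance in an Auto Scaling group) into the shared VPC."""
    response = cfn.create_stack(
        StackName=f"test-env-pr-{pr_number}",  # hypothetical naming scheme tied to the pull request
        TemplateURL="https://example-bucket.s3.amazonaws.com/test-env.yaml",  # assumed template location
        Parameters=[
            {"ParameterKey": "SubnetId", "ParameterValue": "subnet-0123456789abcdef0"},  # placeholder subnet
        ],
        OnFailure="DELETE",  # clean up automatically if provisioning fails
    )
    return response["StackId"]

def delete_test_environment(pr_number: str) -> None:
    """Tear the environment down when the pull request is closed."""
    cfn.delete_stack(StackName=f"test-env-pr-{pr_number}")
```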
A company is planning to migrate to the AWS Cloud. The company hosts many applications on Windows servers and Linux servers. Some of the servers are physical, and some of the servers are virtual. The company uses several types of databases in its on-premises environment. The company does not have an accurate inventory of its on-premises servers and applications.
The company wants to rightsize its resources during migration. A solutions architect needs to obtain information about the network connections and the application relationships. The solutions architect must assess the company’s current environment and develop a migration plan.
Which solution will provide the solutions architect with the required information to develop the migration plan?
- A . Use Migration Evaluator to request an evaluation of the environment from AWS. Use the AWS Application Discovery Service Agentless Collector to import the details into a Migration Evaluator Quick Insights report.
- B . Use AWS Migration Hub and install the AWS Application Discovery Agent on the servers. Deploy the Migration Hub Strategy Recommendations application data collector. Generate a report by using Migration Hub Strategy Recommendations.
- C . Use AWS Migration Hub and run the AWS Application Discovery Service Agentless Collector on the servers. Group the servers and databases by using AWS Application Migration Service. Generate a report by using Migration Hub Strategy Recommendations.
- D . Use the AWS Migration Hub import tool to load the details of the company’s on-premises environment. Generate a report by using Migration Hub Strategy Recommendations.
B
Explanation:
To develop a migration plan with accurate inventory and dependency data:
AWS Migration Hub provides a single view for tracking migration tasks and resources across multiple AWS services.
The AWS Application Discovery Agent (installed on servers) collects detailed data about running processes, system performance, and network connections.
Migration Hub Strategy Recommendations leverages this data to automatically identify application patterns, generate recommended AWS target services, and provide a detailed migration plan. This approach ensures accurate data collection, detailed dependency mapping, and tailored recommendations, all of which are crucial for a successful and right-sized migration to AWS.
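As a concrete illustration of option B's data collection, the discovery agents can be listed and started programmatically once they register with Application Discovery Service. A minimal boto3 sketch; the Migration Hub home Region shown here is an assumption:

```python
import boto3

# Application Discovery Service runs in the Migration Hub home Region (assumed here to be us-west-2)
discovery = boto3.client("discovery", region_name="us-west-2")

# List the agents that have registered from the on-premises Windows and Linux servers
agents = discovery.describe_agents()["agentsInfo"]
agent_ids = [agent["agentId"] for agent in agents]

# Begin collecting process, performance, and network-connection data from those agents
if agent_ids:
    discovery.start_data_collection_by_agent_ids(agentIds=agent_ids)
```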
A large company is migrating its entire IT portfolio to AWS. Each business unit in the company has a standalone AWS account that supports both development and test environments. New accounts to support production workloads will be needed soon.
The finance department requires a centralized method for payment but must maintain visibility into each group’s spending to allocate costs.
The security team requires a centralized mechanism to control IAM usage in all the company’s accounts.
What combination of the following options meets the company’s needs with the LEAST effort? (Select TWO.)
- A . Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.
- B . Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.
- C . Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.
- D . Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.
- E . Consolidate all of the company’s AWS accounts into a single AWS account. Use tags for billing purposes and the IAM Access Advisor feature to enforce the least privilege model.
B,D
Explanation:
Option B is correct because AWS Organizations allows a company to create a new organization from a chosen payer account and define an organizational unit hierarchy. This way, the finance department can have a centralized method for payment but also maintain visibility into each group’s spending to allocate costs. The company can also invite the existing accounts to join the organization and create new accounts using Organizations, which simplifies the account management process.
Option D is correct because enabling all features of AWS Organizations and establishing appropriate service control policies (SCPs) that filter IAM permissions for sub-accounts allows the security team to have a centralized mechanism to control IAM usage in all the company’s accounts. SCPs are policies that specify the maximum permissions for an organization or organizational unit (OU), and they can be used to restrict access to certain services or actions across all accounts in an organization.
Option A is incorrect because using a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account requires more effort than using SCPs. Moreover, it does not provide a centralized mechanism to control IAM usage, as each account would have to launch the appropriate stacks to enforce the least privilege model.
Option C is incorrect because requiring each business unit to use its own AWS accounts does not provide a centralized method for payment or a centralized mechanism to control IAM usage. Tagging each AWS account appropriately and enabling Cost Explorer to administer chargebacks may help with cost allocation, but it is not as efficient as using AWS Organizations.
Option E is incorrect because consolidating all of the company’s AWS accounts into a single AWS account does not provide visibility into each group’s spending or a way to control IAM usage for different business units. Using tags for billing purposes and the IAM Access Advisor feature to enforce the least privilege model may help with cost optimization and security, but it is not as scalable or flexible as using AWS Organizations.
AWS Organizations
Service Control Policies
AWS CloudFormation
Cost Explorer
IAM Access Advisor
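To show how option D's centralized IAM control works in practice, the following boto3 sketch creates a service control policy and attaches it to an OU. The policy content, policy name, and OU ID are illustrative placeholders, not values from the question:

```python
import boto3

org = boto3.client("organizations")

# An SCP that denies selected IAM actions in member accounts (policy content is illustrative only)
scp_document = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Deny", "Action": ["iam:CreateUser", "iam:DeleteRole"], "Resource": "*"}
  ]
}"""

policy = org.create_policy(
    Name="restrict-iam-usage",                      # hypothetical policy name
    Description="Limit IAM actions in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=scp_document,
)

# Attach the SCP to an OU so it applies to every account underneath it
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",        # placeholder OU ID
)
```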
A company is running an application in the AWS Cloud. The application collects and stores a large amount of unstructured data in an Amazon S3 bucket. The S3 bucket contains several terabytes of data and uses the S3 Standard storage class. The data increases in size by several gigabytes every day. The company needs to query and analyze the data. The company does not access data that is more than 1-year-old. However, the company must retain all the data indefinitely for compliance reasons.
Which solution will meet these requirements MOST cost-effectively?
- A . Use S3 Select to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Glacier Deep Archive.
- B . Use Amazon Redshift Spectrum to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Glacier Deep Archive.
- C . Use an AWS Glue Data Catalog and Amazon Athena to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Glacier Deep Archive.
- D . Use Amazon Redshift Spectrum to query the data. Create an S3 Lifecycle policy to transition data that is more than 1 year old to S3 Intelligent-Tiering.
C
Explanation:
Generally, unstructured data should be converted to structured data before it can be queried. AWS Glue can do that by crawling the data and registering tables in the Glue Data Catalog, and Amazon Athena can then query the data in place in Amazon S3 without provisioning any infrastructure, which is more cost-effective than Amazon Redshift Spectrum for this use case.
https://docs.aws.amazon.com/glue/latest/dg/schema-relationalize.html https://docs.aws.amazon.com/athena/latest/ug/glue-athena.html
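For reference, option C's lifecycle transition and Athena querying can be set up with a few API calls. The bucket, database, table, and output location names in this boto3 sketch are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects older than one year to S3 Glacier Deep Archive (bucket name is a placeholder)
s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-1-year",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)

# Query the data through Athena using a table defined in the Glue Data Catalog
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="SELECT * FROM example_table LIMIT 10",   # placeholder table
    QueryExecutionContext={"Database": "example_db"},     # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```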
A company needs to migrate some Oracle databases to AWS while keeping others on-premises for compliance. The on-premises databases contain spatial data and run cron jobs. The solution must allow querying on-premises data as foreign tables from AWS.
- A . Use DynamoDB, SCT, and Lambda. Move spatial data to S3 and query with Athena.
- B . Use RDS for SQL Server and AWS Glue crawlers for Oracle access.
- C . Use EC2-hosted Oracle with Application Migration Service. Use Step Functions for cron.
- D . Use RDS for PostgreSQL with DMS and SCT. Use PostgreSQL foreign data wrappers. Connect via Direct Connect.
D
Explanation:
D is correct because RDS for PostgreSQL supports foreign data wrappers (FDW), such as oracle_fdw, that allow querying remote Oracle databases as foreign tables. With the AWS Schema Conversion Tool (SCT) and AWS Database Migration Service (DMS), the schema and data can be migrated effectively. AWS Direct Connect ensures secure, private connectivity to the on-premises databases. Cron jobs can be run via EventBridge or external orchestration.
A doesn’t support relational/spatial querying.
B doesn’t support FDW or spatial types.
C introduces unnecessary complexity.
Reference: https://www.postgresql.org/docs/current/postgres-fdw.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html
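As a rough illustration of the foreign-table setup in option D, statements like the ones below could be run against the RDS for PostgreSQL instance, using the oracle_fdw extension that RDS for PostgreSQL supports. The endpoints, credentials, schema, and table names are placeholders; this is a sketch, not a tested configuration:

```python
import psycopg2  # assumes the psycopg2 driver is installed

# Connect to the RDS for PostgreSQL instance (endpoint and credentials are placeholders)
conn = psycopg2.connect(
    host="example-pg.abc123.us-east-1.rds.amazonaws.com",
    dbname="appdb", user="admin_user", password="example-password",
)
conn.autocommit = True

with conn.cursor() as cur:
    # Enable the Oracle foreign data wrapper extension
    cur.execute("CREATE EXTENSION IF NOT EXISTS oracle_fdw;")

    # Point a foreign server at the on-premises Oracle database reachable over Direct Connect
    cur.execute("""
        CREATE SERVER onprem_oracle FOREIGN DATA WRAPPER oracle_fdw
        OPTIONS (dbserver '//onprem-db.example.internal:1521/ORCL');
    """)
    cur.execute("""
        CREATE USER MAPPING FOR admin_user SERVER onprem_oracle
        OPTIONS (user 'oracle_user', password 'example-password');
    """)

    # Expose an on-premises table as a foreign table that can be queried from AWS
    cur.execute("""
        CREATE FOREIGN TABLE onprem_orders (
            order_id   integer,
            order_date timestamp
        ) SERVER onprem_oracle OPTIONS (schema 'APP', table 'ORDERS');
    """)
```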
A company is running its solution on AWS in a manually created VPC. The company is using AWS CloudFormation to provision other parts of the infrastructure. According to a new requirement, the company must manage all infrastructure in an automated way.
What should the company do to meet this new requirement with the LEAST effort?
- A . Create a new AWS Cloud Development Kit (AWS CDK) stack that strictly provisions the existing VPC resources and configuration. Use AWS CDK to import the VPC into the stack and to manage the VPC.
- B . Create a CloudFormation stack set that creates the VPC. Use the stack set to import the VPC into the stack.
- C . Create a new CloudFormation template that strictly provisions the existing VPC resources and configuration. From the CloudFormation console, create a new stack by importing the existing resources.
- D . Create a new CloudFormation template that creates the VPC. Use the AWS Serverless Application Model (AWS SAM) CLI to import the VPC.
C
Explanation:
Creating the Template:
Start by creating a CloudFormation template that includes all the VPC resources. This template should accurately reflect the current state and configuration of the VPC.
Using the CloudFormation Console:
Open the AWS Management Console and navigate to CloudFormation. Choose "Create stack" and then select "With existing resources (import resources)".
Specifying the Template:
Upload the previously created template or specify the Amazon S3 URL where the template is stored.
Identifying the Resources:
On the "Identify resources" page, provide the identifiers for each VPC resource you wish to import. For example, for an AWS::EC2::VPC resource, use the VPC ID as the identifier.
Creating the Stack:
Complete the stack creation process by providing stack details and reviewing the changes. This will create a change set that includes the import operation.
Executing the Change Set:
Execute the change set to import the resources into the CloudFormation stack, making them managed by CloudFormation.
Verification and Drift Detection:
After the import is complete, run drift detection to ensure the actual configuration matches the template configuration.
This approach allows the company to manage their VPC and other resources via CloudFormation
without the need to recreate resources, ensuring a smooth transition to automated infrastructure
management.
Reference
Creating a stack from existing resources – AWS CloudFormation (AWS Documentation)
Generating templates for existing resources – AWS CloudFormation (AWS Documentation)
Bringing existing resources into CloudFormation management (AWS Documentation)
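The same import can also be scripted if the console steps above need to be automated. This boto3 sketch assumes a template has already been written for the existing VPC; the stack name, template URL, logical ID, and VPC ID are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation")

# Create an IMPORT change set that brings the existing VPC under CloudFormation management
cfn.create_change_set(
    StackName="network-stack",                       # placeholder stack name
    ChangeSetName="import-existing-vpc",
    ChangeSetType="IMPORT",
    TemplateURL="https://example-bucket.s3.amazonaws.com/vpc-template.yaml",  # assumed template location
    ResourcesToImport=[
        {
            "ResourceType": "AWS::EC2::VPC",
            "LogicalResourceId": "MainVpc",                          # logical ID used in the template
            "ResourceIdentifier": {"VpcId": "vpc-0123456789abcdef0"},  # placeholder VPC ID
        }
    ],
)

# After reviewing the change set, execute it to complete the import
cfn.execute_change_set(ChangeSetName="import-existing-vpc", StackName="network-stack")

# Optionally run drift detection to confirm the template matches the live configuration
cfn.detect_stack_drift(StackName="network-stack")
```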
A company is running a serverless application that consists of several AWS Lambda functions and Amazon DynamoDB tables. The company has created new functionality that requires the Lambda functions to access an Amazon Neptune DB cluster. The Neptune DB cluster is located in three subnets in a VPC.
Which of the possible solutions will allow the Lambda functions to access the Neptune DB cluster and DynamoDB tables? (Select TWO.)
- A . Create three public subnets in the Neptune VPC, and route traffic through an internet gateway. Host the Lambda functions in the three new public subnets.
- B . Create three private subnets in the Neptune VPC, and route internet traffic through a NAT gateway. Host the Lambda functions in the three new private subnets.
- C . Host the Lambda functions outside the VPC. Update the Neptune security group to allow access from the IP ranges of the Lambda functions.
- D . Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptune database, and have the Lambda functions access Neptune over the VPC endpoint.
- E . Create three private subnets in the Neptune VPC. Host the Lambda functions in the three new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDB traffic to the VPC endpoint.
B,E
Explanation:
This option allows the company to use private subnets and VPC endpoints to connect the Lambda functions to the Neptune DB cluster and DynamoDB tables securely and efficiently. By creating three private subnets in the Neptune VPC, the company can isolate the Lambda functions from the public internet and reduce the attack surface. By routing internet traffic through a NAT gateway, the company can enable the Lambda functions to access AWS services that are outside the VPC, such as Amazon S3 or Amazon CloudWatch. By hosting the Lambda functions in the three new private subnets, the company can ensure that the Lambda functions can access the Neptune DB cluster within the same VPC. By creating a VPC endpoint for DynamoDB, the company can enable the Lambda functions to access DynamoDB tables without going through the internet or a NAT gateway. By routing DynamoDB traffic to the VPC endpoint, the company can improve the performance and availability of the DynamoDB access.
Configuring a Lambda function to access resources in a VPC
Working with VPCs and subnets
NAT gateways
Accessing Amazon Neptune from AWS Lambda
VPC endpoints for DynamoDB
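To make options B and E concrete, the DynamoDB gateway endpoint and the Lambda VPC configuration can be created as shown below. The Region, VPC, subnet, route table, security group, and function names are placeholder values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")            # placeholder Region
lambda_client = boto3.client("lambda", region_name="us-east-1")

# Gateway VPC endpoint so Lambda functions in private subnets reach DynamoDB without a NAT gateway
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                             # Neptune VPC (placeholder ID)
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],                   # route tables of the private subnets
)

# Attach the Lambda function to the three private subnets in the Neptune VPC
lambda_client.update_function_configuration(
    FunctionName="example-function",                           # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```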
A company has an online learning platform that teaches data science. The platform uses the AWS Cloud to provision on-demand lab environments for its students. Each student receives a dedicated AWS account for a short time. Students need access to ml.p2.xlarge instances to run a single Amazon SageMaker machine learning training job and to deploy the inference endpoint. Account provisioning is automated. The accounts are members of an organization in AWS Organizations with all features enabled. The accounts must be provisioned in the ap-southeast-2 Region. The default resource usage quotas are not sufficient for the accounts. A solutions architect must enhance the account provisioning process to include automated quota increases.
Which solution will meet these requirements?
- A . Create a quota request template in the us-east-1 Region in the organization’s management account. Enable template association. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge training job usage. Set the desired quota to 1. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge endpoint usage. Set the desired quota to 1.
- B . Create a quota request template in the us-east-1 Region in the organization’s management account. Enable template association. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge training warm pool usage. Set the desired quota to 2.
- C . Create a quota request template in ap-southeast-2 in the organization’s management account. Enable template association. Add a quota for SageMaker in the us-east-1 Region for ml.p2.xlarge training job usage. Set the desired quota to 1. Add a quota for SageMaker in us-east-1 for ml.p2.xlarge endpoint usage. Set the desired quota to 1.
- D . Create a quota request template in ap-southeast-2 in the organization’s management account. Enable template association. Add a quota for SageMaker in the us-east-1 Region for ml.p2.xlarge training warm pool usage. Set the desired quota to 2.
A
Explanation:
Creating a quota request template in the us-east-1 Region of the management account ensures it applies to all newly provisioned accounts in the organization.
By specifying the required ml.p2.xlarge training job and endpoint usage quotas in ap-southeast-2, the accounts will automatically receive these higher quotas upon creation, supporting the student lab environments without additional manual quota requests.
This process integrates seamlessly with AWS Organizations for automated and standardized account provisioning.
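As an illustration of option A, the quota request template can be managed with the Service Quotas API from us-east-1 in the management account. The quota codes below are placeholders; the real codes for the ml.p2.xlarge training-job and endpoint quotas would need to be looked up first:

```python
import boto3

# Quota request templates are managed from us-east-1 in the organization's management account
quotas = boto3.client("service-quotas", region_name="us-east-1")

# Turn on template association so newly created accounts inherit the requested quotas
quotas.associate_service_quota_template()

# Request the SageMaker ml.p2.xlarge quotas in ap-southeast-2
# (quota codes are placeholders; look up the real codes via list_service_quotas for "sagemaker")
for quota_code in ["L-TRAINING-JOB-PLACEHOLDER", "L-ENDPOINT-PLACEHOLDER"]:
    quotas.put_service_quota_increase_request_into_template(
        ServiceCode="sagemaker",
        QuotaCode=quota_code,
        AwsRegion="ap-southeast-2",
        DesiredValue=1,
    )
```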
A company consists of two separate business units. Each business unit has its own AWS account within a single organization in AWS Organizations. The business units regularly share sensitive documents with each other. To facilitate sharing, the company created an Amazon S3 bucket in each
account and configured two-way replication between the S3 buckets. The S3 buckets have millions of objects.
Recently, a security audit identified that neither S3 bucket has encryption at rest enabled. Company policy requires that all documents must be stored with encryption at rest. The company wants to implement server-side encryption with Amazon S3 managed encryption keys (SSE-S3).
What is the MOST operationally efficient solution that meets these requirements?
- A . Turn on SSE-S3 on both S3 buckets. Use S3 Batch Operations to copy and encrypt the objects in the same location.
- B . Create an AWS Key Management Service (AWS KMS) key in each account. Turn on server-side encryption with AWS KMS keys (SSE-KMS) on each S3 bucket by using the corresponding KMS key in that AWS account. Encrypt the existing objects by using an S3 copy command in the AWS CLI.
- C . Turn on SSE-S3 on both S3 buckets. Encrypt the existing objects by using an S3 copy command in the AWS CLI.
- D . Create an AWS Key Management Service (AWS KMS) key in each account. Turn on server-side encryption with AWS KMS keys (SSE-KMS) on each S3 bucket by using the corresponding KMS key in that AWS account. Use S3 Batch Operations to copy the objects into the same location.
A
Explanation:
"The S3 buckets have millions of objects" If there are million of objects then you should use Batch operations. https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
A company runs production workloads on EC2 On-Demand Instances and RDS for PostgreSQL. They want to reduce costs without compromising availability or capacity.
- A . Use CUR and Lambda to terminate underutilized instances. Buy Savings Plans.
- B . Use Budgets and Trusted Advisor, then manually terminate and buy RIs.
- C . Use Compute Optimizer and Trusted Advisor for recommendations. Apply rightsizing, auto scaling, and purchase a Compute Savings Plan.
- D . Use Cost Explorer, alerts, and replace with Spot Instances.
C
Explanation:
C is correct: AWS Compute Optimizer uses machine learning to analyze usage patterns and recommends rightsizing. Trusted Advisor adds further insights. Combining these with Savings Plans gives the best cost optimization without reducing availability. A is risky due to using Lambda for termination.
B and D offer partial or manual solutions.
Reference: AWS Compute Optimizer
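For completeness, Compute Optimizer's rightsizing recommendations can be retrieved programmatically once the service has been opted in. A minimal boto3 sketch; no account-specific values are assumed:

```python
import boto3

optimizer = boto3.client("compute-optimizer")

# Pull rightsizing recommendations for the production EC2 fleet
recommendations = optimizer.get_ec2_instance_recommendations()

for rec in recommendations["instanceRecommendations"]:
    current_type = rec["currentInstanceType"]
    # The first recommendation option is ranked as the best fit
    suggested = rec["recommendationOptions"][0]["instanceType"]
    print(f"{rec['instanceArn']}: {current_type} -> {suggested} ({rec['finding']})")
```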
