Practice Free SAP-C02 Exam Online Questions
A company is using AWS CloudFormation as its deployment tool for all applications. It stages all application binaries and templates within Amazon S3 buckets with versioning enabled. Developers use an Amazon EC2 instance with IDE access to modify and test applications. The developers want to implement CI/CD with AWS CodePipeline with the following requirements: Use AWS CodeCommit for source control.
Automate unit testing and security scanning.
Alert developers when unit tests fail.
Toggle application features and allow lead developer approval before deployment.
Which solution will meet these requirements?
- A . Use AWS CodeBuild for testing and scanning. Use EventBridge and SNS for alerts. Use AWS CDK with a manifest to toggle features. Use a manual approval stage.
- B . Use Lambda for testing and alerts. Use AWS Amplify plugins for feature toggles. Use SES for manual approval.
- C . Use Jenkins and SES for alerts. Use nested CloudFormation stacks for features. Use Lambda for approvals.
- D . Use CodeDeploy for testing and scanning. Use CloudWatch alarms and SNS. Use Docker images for features and AWS CLI for toggles.
A
Explanation:
A is correct because AWS CodeBuild is a fully managed build service that supports running tests and security scans. EventBridge can detect test failures and send notifications via SNS. AWS CDK with manifest files is a robust way to manage feature toggles. CodePipeline supports manual approval stages to allow lead developer intervention.
Reference: AWS CodePipeline Approvals
AWS CDK with feature flags
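As an illustration of the alerting piece of option A, the sketch below creates an EventBridge rule that matches failed CodeBuild builds and sends them to an SNS topic. The project name, rule name, and topic ARN are placeholders, not values from the question.

```python
# Minimal sketch: route CodeBuild failures for a unit-test project to an SNS topic.
# The project name, rule name, and topic ARN are placeholders.
import json
import boto3

events = boto3.client("events")

# Match builds from the hypothetical unit-test CodeBuild project that end in FAILED.
pattern = {
    "source": ["aws.codebuild"],
    "detail-type": ["CodeBuild Build State Change"],
    "detail": {
        "build-status": ["FAILED"],
        "project-name": ["unit-test-project"],
    },
}

events.put_rule(
    Name="unit-test-failure-alerts",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Send matching events to an SNS topic that the developers subscribe to.
events.put_targets(
    Rule="unit-test-failure-alerts",
    Targets=[{"Id": "notify-developers", "Arn": "arn:aws:sns:us-east-1:111122223333:dev-alerts"}],
)
```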
A company has a legacy application that runs on multiple .NET Framework components. The components share the same Microsoft SQL Server database and communicate with each other asynchronously by using Microsoft Message Queueing (MSMQ).
The company is starting a migration to containerized .NET Core components and wants to refactor the application to run on AWS. The .NET Core components require complex orchestration. The company must have full control over networking and host configuration. The application’s database model is strongly relational.
Which solution will meet these requirements?
- A . Host the .NET Core components on AWS App Runner. Host the database on Amazon RDS for SQL Server. Use Amazon EventBridge for asynchronous messaging.
- B . Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Host the database on Amazon DynamoDB. Use Amazon Simple Notification Service (Amazon SNS) for asynchronous messaging.
- C . Host the .NET Core components on AWS Elastic Beanstalk. Host the database on Amazon Aurora PostgreSQL Serverless v2. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) for asynchronous messaging.
- D . Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Host the database on Amazon Aurora MySQL Serverless v2. Use Amazon Simple Queue Service (Amazon SQS) for asynchronous messaging.
D
Explanation:
Hosting the .NET Core components on Amazon ECS with the Amazon EC2 launch type meets the requirements for complex orchestration and full control over networking and host configuration. Amazon ECS is a fully managed container orchestration service that supports both the AWS Fargate and Amazon EC2 launch types; the EC2 launch type lets users choose their own EC2 instances, configure their own networking settings, and access the host operating system.
Hosting the database on Amazon Aurora MySQL Serverless v2 satisfies the strongly relational database model. Aurora MySQL is a relational engine that can support most of the legacy application's data model after migration from SQL Server, and the Serverless v2 configuration scales capacity up and down automatically based on demand.
Using Amazon SQS for asynchronous messaging provides a managed, queue-based replacement for MSMQ. Amazon SQS is a fully managed message queuing service that enables decoupled and scalable microservices, distributed systems, and serverless applications.
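To illustrate the MSMQ-to-SQS swap, here is a minimal sketch of the queue interaction using boto3; the queue name is a placeholder, and the refactored .NET Core components would do the equivalent with the AWS SDK for .NET.

```python
# Minimal sketch of asynchronous messaging on SQS in place of MSMQ.
# The queue name is a placeholder.
import boto3

sqs = boto3.client("sqs")

# One-time setup: create the queue the components will share.
queue_url = sqs.create_queue(QueueName="component-events")["QueueUrl"]

# Producer component: enqueue a message instead of writing to an MSMQ queue.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": 42, "action": "process"}')

# Consumer component: poll for messages, process them, then delete them.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for message in response.get("Messages", []):
    print("processing", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```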
A software as a service (SaaS) company provides a media software solution to customers. The solution is hosted on 50 VPCs across various AWS Regions and AWS accounts. One of the VPCs is designated as a management VPC. The compute resources in the VPCs work independently.
The company has developed a new feature that requires all 50 VPCs to be able to communicate with each other. The new feature also requires one-way access from each customer’s VPC to the company’s management VPC. The management VPC hosts a compute resource that validates licenses for the media software solution.
The number of VPCs that the company will use to host the solution will continue to increase as the solution grows.
Which combination of steps will provide the required VPC connectivity with the LEAST operational overhead? (Select TWO.)
- A . Create a transit gateway. Attach all the company’s VPCs and relevant subnets to the transit gateway.
- B . Create VPC peering connections between all the company’s VPCs.
- C . Create a Network Load Balancer (NLB) that points to the compute resource for license validation. Create an AWS PrivateLink endpoint service that is available to each customer’s VPC. Associate the endpoint service with the NLB.
- D . Create a VPN appliance in each customer’s VPC. Connect the company’s management VPC to each customer’s VPC by using AWS Site-to-Site VPN.
- E . Create a VPC peering connection between the company’s management VPC and each customer’s VPC.
A,C
Explanation:
Create a Transit Gateway:
Step 1: In the AWS Management Console, navigate to the VPC Dashboard.
Step 2: Select "Transit Gateways" and click on "Create Transit Gateway".
Step 3: Configure the transit gateway by providing a name and setting the options for Amazon side ASN and VPN ECMP support as needed.
Step 4: Attach each of the company’s VPCs and relevant subnets to the transit gateway. This centralizes the network management and simplifies the routing configurations, supporting scalable and flexible network architecture.
Set Up AWS PrivateLink:
Step 1: Create a Network Load Balancer (NLB) in the management VPC that points to the compute resource responsible for license validation.
Step 2: Create an AWS PrivateLink endpoint service pointing to this NLB.
Step 3: Allow each customer’s VPC to create an interface endpoint to this PrivateLink service. This setup enables secure and private communication between the customer VPCs and the management VPC, ensuring one-way access from each customer’s VPC to the management VPC for license validation.
This combination leverages the benefits of AWS Transit Gateway for scalable and centralized routing, and AWS PrivateLink for secure and private service access, meeting the requirement with minimal operational overhead.
Reference
Amazon VPC-to-Amazon VPC Connectivity Options
AWS PrivateLink – Building a Scalable and Secure Multi-VPC AWS Network Infrastructure
Connecting Your VPC to Other VPCs and Networks Using a Transit Gateway
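As an illustration of the PrivateLink piece of this answer, the following sketch creates an internal NLB in the management VPC and exposes it as an endpoint service that a customer account can consume. The subnet ID, load balancer name, and allowed principal are placeholders.

```python
# Minimal sketch: internal NLB in the management VPC plus a PrivateLink endpoint service.
# Subnet IDs, names, and the allowed principal are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Internal NLB that fronts the license-validation compute resource.
nlb_arn = elbv2.create_load_balancer(
    Name="license-validation-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-0123456789abcdef0"],
)["LoadBalancers"][0]["LoadBalancerArn"]

# Expose the NLB as a PrivateLink endpoint service that customer VPCs can consume.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[nlb_arn],
    AcceptanceRequired=False,
)["ServiceConfiguration"]

# Allow a customer account to create interface endpoints to the service.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],
)
```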
A solutions architect is creating an AWS CloudFormation template from an existing manually created non-production AWS environment. The CloudFormation template can be destroyed and recreated as needed. The environment contains an Amazon EC2 instance. The EC2 instance has an instance profile that the EC2 instance uses to assume a role in a parent account.
The solutions architect recreates the role in a CloudFormation template and uses the same role name. When the CloudFormation template is launched in the child account, the EC2 instance can no longer assume the role in the parent account because of insufficient permissions.
What should the solutions architect do to resolve this issue?
- A . In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Ensure that the target role ARN in the existing statement that allows the sts:AssumeRole action is correct. Save the trust policy.
- B . In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Add a statement that allows the sts:AssumeRole action for the root principal of the child account. Save the trust policy.
- C . Update the CloudFormation stack again. Specify only the CAPABILITY_NAMED_IAM capability.
- D . Update the CloudFormation stack again. Specify the CAPABILITY_IAM capability and the CAPABILITY_NAMED_IAM capability.
A
Explanation:
Edit the Trust Policy:
Go to the IAM console in the parent account and locate the role that the EC2 instance needs to assume.
Edit the trust policy of the role to ensure that it correctly allows the sts:AssumeRole action for the role ARN in the child account.
Update the Role ARN:
Verify that the target role ARN specified in the trust policy matches the role ARN created by the CloudFormation stack in the child account.
If necessary, update the ARN to reflect the correct role in the child account.
Save and Test:
Save the updated trust policy and ensure there are no syntax errors.
Test the setup by attempting to assume the role from the EC2 instance in the child account. Verify that the instance can successfully assume the role and perform the required actions.
This ensures that the EC2 instance in the child account can assume the role in the parent account, resolving the permission issue.
Reference
AWS IAM Documentation on Trust Policies.
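A minimal sketch of the trust policy update with boto3 follows; the account ID and role names are placeholders, not values from the scenario.

```python
# Minimal sketch: point the parent-account role's trust policy at the role
# recreated by CloudFormation in the child account. IDs and names are placeholders.
import json
import boto3

iam = boto3.client("iam")  # credentials for the parent account

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # The principal must match the role ARN that the stack created in the child account.
            "Principal": {"AWS": "arn:aws:iam::222233334444:role/app-instance-role"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.update_assume_role_policy(
    RoleName="cross-account-access-role",
    PolicyDocument=json.dumps(trust_policy),
)
```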
A company is implementing a serverless architecture by using AWS Lambda functions that need to access a Microsoft SQL Server DB instance on Amazon RDS. The company has separate environments for development and production, including a clone of the database system.
The company’s developers are allowed to access the credentials for the development database. However, the credentials for the production database must be encrypted with a key that only members of the IT security team’s IAM user group can access. This key must be rotated on a regular basis.
What should a solutions architect do in the production environment to meet these requirements?
- A . Store the database credentials in AWS Systems Manager Parameter Store by using a SecureString parameter that is encrypted by an AWS Key Management Service (AWS KMS) customer managed key. Attach a role to each Lambda function to provide access to the SecureString parameter. Restrict access to the SecureString parameter and the customer managed key so that only the IT security team can access the parameter and the key.
- B . Encrypt the database credentials by using the AWS Key Management Service (AWS KMS) default Lambda key. Store the credentials in the environment variables of each Lambda function. Load the credentials from the environment variables in the Lambda code. Restrict access to the KMS key so that only the IT security team can access the key.
- C . Store the database credentials in the environment variables of each Lambda function. Encrypt the environment variables by using an AWS Key Management Service (AWS KMS) customer managed key. Restrict access to the customer managed key so that only the IT security team can access the key.
- D . Store the database credentials in AWS Secrets Manager as a secret that is associated with an AWS Key Management Service (AWS KMS) customer managed key. Attach a role to each Lambda function to provide access to the secret. Restrict access to the secret and the customer managed key so that only the IT security team can access the secret and the key.
D
Explanation:
Storing the database credentials in AWS Secrets Manager as a secret that is associated with an AWS Key Management Service (AWS KMS) customer managed key enables encrypting and managing the credentials securely. AWS Secrets Manager helps you securely encrypt, store, and retrieve credentials for your databases and other services. Attaching a role to each Lambda function to provide access to the secret enables retrieving the credentials programmatically. Restricting access to the secret and the customer managed key so that only members of the IT security team’s IAM user group can access them meets the security requirements.
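For illustration, a Lambda function could read the credentials at runtime roughly as follows. The secret name is a placeholder, and the function's execution role would need secretsmanager:GetSecretValue on the secret plus kms:Decrypt on the customer managed key.

```python
# Minimal sketch of a Lambda handler reading the production DB credentials.
# The secret name is a placeholder; the execution role must be allowed to read it.
import json
import boto3

secrets = boto3.client("secretsmanager")


def handler(event, context):
    secret = secrets.get_secret_value(SecretId="prod/sqlserver/credentials")
    credentials = json.loads(secret["SecretString"])
    # Use credentials["username"] and credentials["password"] to open the DB connection.
    return {"statusCode": 200}
```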
A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to support a canary release.
Which solution will meet these requirements?
- A . Create an alias for every newly deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
- B . Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
- C . Create a version for every newly deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
- D . Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.
A
Explanation:
https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/
https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
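A minimal sketch of the weighted alias shift with boto3 follows; the function name, alias name, and version numbers are placeholders.

```python
# Minimal sketch: send 10% of traffic on the "live" alias to the new version,
# then promote it once the canary looks healthy. Names and versions are placeholders.
import boto3

lam = boto3.client("lambda")

# Canary: the alias keeps pointing at version 5 but routes 10% of invocations to version 6.
lam.update_alias(
    FunctionName="web-app-function",
    Name="live",
    FunctionVersion="5",
    RoutingConfig={"AdditionalVersionWeights": {"6": 0.1}},
)

# Promotion: after validation, move 100% of traffic to version 6 and clear the weights.
lam.update_alias(
    FunctionName="web-app-function",
    Name="live",
    FunctionVersion="6",
    RoutingConfig={"AdditionalVersionWeights": {}},
)
```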
A company’s solutions architect is analyzing costs of a multi-application environment. The environment is deployed across multiple Availability Zones in a single AWS Region. After a recent acquisition, the company manages two organizations in AWS Organizations. The company has created multiple service provider applications as AWS PrivateLink-powered VPC endpoint services in one organization. The company has created multiple service consumer applications in the other organization.
Data transfer charges are much higher than the company expected, and the solutions architect needs to reduce the costs. The solutions architect must recommend guidelines for developers to follow when they deploy services. These guidelines must minimize data transfer charges for the whole environment.
Which guidelines meet these requirements? (Select TWO.)
- A . Use AWS Resource Access Manager to share the subnets that host the service provider applications with other accounts in the organization.
- B . Place the service provider applications and the service consumer applications in AWS accounts in the same organization.
- C . Turn off cross-zone load balancing for the Network Load Balancer in all service provider application deployments.
- D . Ensure that service consumer compute resources use the Availability Zone-specific endpoint service by using the endpoint’s local DNS name.
- E . Create a Savings Plan that provides adequate coverage for the organization’s planned inter-Availability Zone data transfer usage.
C,D
Explanation:
Cross-zone load balancing enables traffic to be distributed evenly across all registered instances in all enabled Availability Zones. However, this also increases data transfer charges between Availability Zones. By turning off cross-zone load balancing, the service provider applications can reduce inter-Availability Zone data transfer costs. Similarly, by using the Availability Zone-specific endpoint service, the service consumer applications can ensure that they connect to the nearest service provider application in the same Availability Zone, avoiding cross-Availability Zone data transfer charges.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#cross-zone-load-balancing
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-dns
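For illustration, both guidelines map to small configuration steps; the load balancer ARN and endpoint ID below are placeholders.

```python
# Minimal sketch of the two cost guidelines. ARNs and IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Guideline C: disable cross-zone load balancing on a service provider NLB.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/provider-nlb/0123456789abcdef",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "false"}],
)

# Guideline D: list the interface endpoint's DNS entries so consumers can pick the
# Availability Zone-specific name instead of the Regional name.
endpoint = ec2.describe_vpc_endpoints(VpcEndpointIds=["vpce-0123456789abcdef0"])["VpcEndpoints"][0]
for entry in endpoint["DnsEntries"]:
    print(entry["DnsName"])
```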
A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts.
A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations.
What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?
- A . Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection.
- B . Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment.
- C . Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.
- D . Create stacks in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
C
Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html
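A minimal sketch of creating such a stack set from the management account follows; the stack set name, template file, OU ID, and Region are placeholders.

```python
# Minimal sketch: service-managed stack set with automatic deployment to the organization.
# The template, names, OU ID, and Region are placeholders.
import boto3

cfn = boto3.client("cloudformation")  # credentials for the Organizations management account

cfn.create_stack_set(
    StackSetName="sns-alerting-topic",
    TemplateBody=open("sns_topic.yaml").read(),
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy stack instances to every account under the target organizational unit.
cfn.create_stack_instances(
    StackSetName="sns-alerting-topic",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11112222"]},
    Regions=["us-east-1"],
)
```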
A company is creating a centralized logging service running on Amazon EC2 that will receive and analyze logs from hundreds of AWS accounts. AWS PrivateLink is being used to provide connectivity between the client services and the logging service.
In each AWS account with a client, an interface endpoint has been created for the logging service and is available. The logging service, running on EC2 instances with a Network Load Balancer (NLB), is deployed in different subnets. The clients are unable to submit logs using the VPC endpoint.
Which combination of steps should a solutions architect take to resolve this issue? (Select TWO.)
- A . Check that the NACL is attached to the logging service subnet to allow communications to and from the NLB subnets. Check that the NACL is attached to the NLB subnet to allow communications to and from the logging service subnets running on EC2 instances.
- B . Check that the NACL is attached to the logging service subnets to allow communications to and from the interface endpoint subnets. Check that the NACL is attached to the interface endpoint subnet to allow communications to and from the logging service subnets running on EC2 instances.
- C . Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the NLB subnets.
- D . Check the security group for the logging service running on EC2 instances to ensure it allows ingress from the clients.
- E . Check the security group for the NLB to ensure it allows ingress from the interface endpoint subnets.
A company has multiple AWS accounts that are in an organization in AWS Organizations. The company needs to store AWS account activity and query the data from a central location by using SQL.
Which solution will meet these requirements?
- A . Create an AWS CloudTrail trail in each account. Specify CloudTrail management events for the trail. Configure CloudTrail to send the events to Amazon CloudWatch Logs. Configure CloudWatch cross-account observability. Query the data in CloudWatch Logs Insights.
- B . Use a delegated administrator account to create an AWS CloudTrail Lake data store. Specify CloudTrail management events for the data store. Enable the data store for all accounts in the organization. Query the data in CloudTrail Lake.
- C . Use a delegated administrator account to create an AWS CloudTrail trail. Specify CloudTrail management events for the trail. Enable the trail for all accounts in the organization. Keep all other settings as default. Query the CloudTrail data from the CloudTrail event history page.
- D . Use AWS CloudFormation StackSets to deploy AWS CloudTrail Lake data stores in each account. Specify CloudTrail management events for the data stores. Keep all other settings as default. Query the data in CloudTrail Lake.
