Practice Free SAP-C02 Exam Online Questions
A company has many services running in its on-premises data center. The data center is connected to AWS using AWS Direct Connect (DX) and an IPsec VPN. The service data is sensitive, and connectivity cannot traverse the internet. The company wants to expand to a new market segment and begin offering its services to other companies that are using AWS.
Which solution will meet these requirements?
- A . Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load Balancer, and make the service available over DX.
- B . Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an Application Load Balancer, and make the service available over DX.
- C . Attach an internet gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
- D . Attach a NAT gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
B
Explanation:
To offer services to other companies using AWS without traversing the internet, creating a VPC Endpoint Service hosted behind an Application Load Balancer (ALB) and making it available over AWS Direct Connect (DX) is the most suitable solution. This approach ensures that the service traffic remains within the AWS network, adhering to the requirement that connectivity must not traverse the internet. An ALB is capable of handling HTTP/HTTPS traffic, making it appropriate for web-based services. Utilizing DX for connectivity between the on-premises data center and AWS further secures and optimizes the network path.
AWS Direct Connect Documentation: Explains how to set up DX for private connectivity between AWS and an on-premises network.
Amazon VPC Endpoint Services (AWS PrivateLink) Documentation: Provides details on creating and configuring endpoint services for private, secure access to services hosted in AWS.
AWS Application Load Balancer Documentation: Offers guidance on configuring ALBs to distribute HTTP/HTTPS traffic efficiently.
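As a rough illustration of the PrivateLink piece, here is a minimal boto3 sketch with a hypothetical load balancer ARN. Note that endpoint service configurations are created against a Network Load Balancer (or Gateway Load Balancer), so a common pattern for an HTTP/HTTPS service is an NLB that forwards to the ALB:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the endpoint service that consumer accounts reach over PrivateLink.
# The NLB ARN below is hypothetical; its target can be the ALB that
# terminates the HTTP/HTTPS traffic.
response = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,  # approve each consumer account explicitly
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/example-nlb/abc1234567890def"
    ],
)
print(response["ServiceConfiguration"]["ServiceName"])
```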
A company is planning to migrate an application to AWS. The application runs as a Docker container and uses an NFS version 4 file share.
A solutions architect must design a secure and scalable containerized solution that does not require provisioning or management of the underlying infrastructure.
Which solution will meet these requirements?
- A . Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type. Use Amazon Elastic File System (Amazon EFS) for shared storage. Reference the EFS file system ID, container mount point, and EFS authorization IAM role in the ECS task definition.
- B . Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type. Use Amazon FSx for Lustre for shared storage. Reference the FSx for Lustre file system ID, container mount point, and FSx for Lustre authorization IAM role in the ECS task definition.
- C . Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type and auto scaling turned on. Use Amazon Elastic File System (Amazon EFS) for shared storage. Mount the EFS file system on the ECS container instances. Add the EFS authorization IAM role to the EC2 instance profile.
- D . Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type and auto scaling turned on. Use Amazon Elastic Block Store (Amazon EBS) volumes with Multi-Attach enabled for shared storage. Attach the EBS volumes to ECS container instances. Add the EBS authorization IAM role to an EC2 instance profile.
A
Explanation:
This option uses Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type to deploy the application containers. Amazon ECS is a fully managed container orchestration service that allows running Docker containers on AWS at scale. Fargate is a serverless compute engine for containers that eliminates the need to provision or manage servers or clusters. With Fargate, the company only pays for the resources required to run its containers, which reduces costs and operational overhead. This option also uses Amazon Elastic File System (Amazon EFS) for shared storage. Amazon EFS is a fully managed file system that provides scalable, elastic, concurrent, and secure file storage for use with AWS cloud services. Amazon EFS supports NFS version 4 protocol, which is compatible with the application’s requirements. To use Amazon EFS with Fargate containers, the company needs to reference the EFS file system ID, container mount point, and EFS authorization IAM role in the ECS task definition.
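A minimal boto3 sketch of such a task definition, assuming hypothetical names, role ARNs, image URI, and file system ID:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="app-with-efs",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    # Task role that authorizes access to the EFS file system.
    taskRoleArn="arn:aws:iam::111122223333:role/app-efs-access-role",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "app",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",
            # Container mount point for the shared NFS data.
            "mountPoints": [
                {"sourceVolume": "shared-data", "containerPath": "/mnt/shared"}
            ],
        }
    ],
    volumes=[
        {
            "name": "shared-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",
                "transitEncryption": "ENABLED",
                "authorizationConfig": {"iam": "ENABLED"},
            },
        }
    ],
)
```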
A company has a critical application in which the data tier is deployed in a single AWS Region. The data tier uses an Amazon DynamoDB table and an Amazon Aurora MySQL DB cluster. The current Aurora MySQL engine version supports a global database. The application tier is already deployed in two Regions.
Company policy states that critical applications must have application tier components and data tier components deployed across two Regions. The RTO and RPO must be no more than a few minutes each. A solutions architect must recommend a solution to make the data tier compliant with company policy.
Which combination of steps will meet these requirements? (Choose two.)
- A . Add another Region to the Aurora MySQL DB cluster
- B . Add another Region to each table in the Aurora MySQL DB cluster
- C . Set up scheduled cross-Region backups for the DynamoDB table and the Aurora MySQL DB cluster
- D . Convert the existing DynamoDB table to a global table by adding another Region to its configuration
- E . Use Amazon Route 53 Application Recovery Controller to automate database backup and recovery to the secondary Region
A,D
Explanation:
The company should use Amazon Aurora global database and Amazon DynamoDB global tables to deploy the data tier components across two Regions. Amazon Aurora global database is a feature that allows a single Aurora database to span multiple AWS Regions, enabling low-latency global reads and fast recovery from Region-wide outages [1]. Amazon DynamoDB global tables allow a single DynamoDB table to span multiple AWS Regions, enabling low-latency global reads and writes and fast recovery from Region-wide outages [2].
Reference:
https://aws.amazon.com/rds/aurora/global-database/
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables_HowItWorks.html
https://aws.amazon.com/route53/application-recovery-controller/
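A minimal boto3 sketch of the two steps, with hypothetical identifiers (the DynamoDB table must have streams enabled before a replica Region can be added):

```python
import boto3

# Step 1 (option D): add a replica Region to the existing table, converting
# it to a global table (version 2019.11.21).
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
dynamodb.update_table(
    TableName="orders",  # hypothetical table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Step 2 (option A): wrap the existing Aurora MySQL cluster in a global
# database, then add a secondary cluster in the second Region.
rds = boto3.client("rds", region_name="us-east-1")
rds.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-primary",
)
boto3.client("rds", region_name="us-west-2").create_db_cluster(
    DBClusterIdentifier="app-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global",
)
```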
A company has application services that have been containerized and deployed on multiple Amazon EC2 instances with public IPs. An Apache Kafka cluster has been deployed to the EC2 instances. A PostgreSQL database has been migrated to Amazon RDS for PostgreSQL. The company expects a significant increase of orders on its platform when a new version of its flagship product is released.
What changes to the current architecture will reduce operational overhead and support the product release?
- A . Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
- B . Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
- C . Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
- D . Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
D
Explanation:
Option D meets the requirements of the scenario because it allows you to reduce operational overhead and support the product release by using the following AWS services and features: Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that allows you to run Kubernetes applications on AWS without needing to install, operate, or maintain your own Kubernetes control plane. You can use Amazon EKS to deploy your containerized application services on a Kubernetes cluster that is compatible with your existing tools and processes.
AWS Fargate is a serverless compute engine that eliminates the need to provision and manage servers for your containers. You can use AWS Fargate as the launch type for your Amazon EKS pods, which are the smallest deployable units of computing in Kubernetes. You can also enable auto scaling for your pods, which allows you to automatically adjust the number of pods based on the demand or custom metrics.
An Application Load Balancer (ALB) is a load balancer that distributes traffic across multiple targets in multiple Availability Zones using HTTP or HTTPS protocols. You can use an ALB to balance the load across your Amazon EKS pods and provide high availability and fault tolerance for your application. Amazon RDS for PostgreSQL is a fully managed relational database service that supports the PostgreSQL open source database engine. You can create additional read replicas for your DB instance, which are copies of your primary DB instance that can handle read-only queries and improve performance. You can also use read replicas to scale out beyond the capacity of a single DB instance for read-heavy workloads.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open source platform for building real-time data pipelines and streaming applications. You can use Amazon MSK to create and manage a Kafka cluster that is highly available, secure, and compatible with your existing Kafka applications. You can also configure your application services to use the Amazon MSK cluster as a source or destination of streaming data.
Amazon S3 is an object storage service that offers high durability, availability, and scalability. You can store static content such as images, videos, or documents in Amazon S3 buckets, which are containers for objects. You can also serve static content directly from Amazon S3 using public URLs or presigned URLs.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. You can use Amazon CloudFront to create a distribution that caches static content from your Amazon S3 bucket at edge locations closer to your users. This can improve the performance and user experience of your application.
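As a small illustration of the read-replica step in option D, a hedged boto3 sketch with hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Add a read replica to absorb the read-heavy order traffic; the source
# DB instance identifier and instance class are hypothetical.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
    DBInstanceClass="db.r6g.large",
)
```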
Option A is incorrect because creating an EC2 Auto Scaling group behind an ALB would not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as you would still need to manage EC2 instances for your containers. Creating Amazon Kinesis data streams would not be compatible with your existing Apache Kafka applications, unlike using Amazon MSK. Serving static content directly from Amazon S3, without a CDN, would also not provide the performance of an Amazon CloudFront distribution.
Option B is incorrect because creating an EC2 Auto Scaling group behind an ALB would not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as you would still need to manage EC2 instances for your containers. Creating Amazon Kinesis data streams would not be compatible with your existing Apache Kafka applications, unlike using Amazon MSK. Storing and serving static content directly from Amazon S3 would not provide optimal performance and user experience, unlike using Amazon CloudFront.
Option C is incorrect because deploying the application on a self-managed Kubernetes cluster created on the EC2 instances behind an ALB would not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as you would still need to manage both the EC2 instances and the Kubernetes control plane yourself. In addition, deploying the DB instance in Multi-AZ mode improves availability but does not add read capacity for the expected surge in orders, unlike read replicas.
A company runs an application on Amazon EC2 and AWS Lambda. The application stores temporary data in Amazon S3. The S3 objects are deleted after 24 hours.
The company deploys new versions of the application by launching AWS CloudFormation stacks. The stacks create the required resources. After validating a new version, the company deletes the old stack. The deletion of an old development stack recently failed. A solutions architect needs to resolve this issue without major architecture changes.
Which solution will meet these requirements?
- A . Create a Lambda function to delete objects from an S3 bucket. Add the Lambda function as a custom resource in the CloudFormation stack with a DependsOn attribute that points to the S3 bucket resource.
- B . Modify the CloudFormation stack to attach a DeletionPolicy attribute with a value of Delete to the S3 bucket.
- C . Update the CloudFormation stack to add a DeletionPolicy attribute with a value of Snapshot for the S3 bucket resource.
- D . Update the CloudFormation template to create an Amazon EFS file system to store temporary files Instead of Amazon S3. Configure the Lambda functions to run in the same VPC as the EFS file system.
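A
Explanation:
CloudFormation cannot delete an S3 bucket that still contains objects. Even though the application's objects expire after 24 hours, the bucket can hold objects (or delete markers and incomplete multipart uploads) at the moment the stack is deleted, which makes the bucket resource, and therefore the stack deletion, fail. A Lambda-backed custom resource with a DependsOn attribute pointing to the bucket is deleted before the bucket during stack deletion, so its Delete event can empty the bucket first. A DeletionPolicy of Delete is already the default behavior and does not empty the bucket, and Snapshot is not supported for S3 buckets.
A minimal sketch of such a custom-resource handler, assuming the function is defined inline in the template (which is what makes the cfnresponse helper available) and that the bucket name is passed in as a hypothetical BucketName property:

```python
import boto3
import cfnresponse  # provided for Lambda functions defined inline in CloudFormation


def handler(event, context):
    try:
        # Empty the bucket only when the custom resource itself is being
        # deleted, i.e. during stack deletion.
        if event["RequestType"] == "Delete":
            bucket_name = event["ResourceProperties"]["BucketName"]
            bucket = boto3.resource("s3").Bucket(bucket_name)
            bucket.object_versions.delete()  # removes objects and delete markers
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```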
A video processing company wants to build a machine learning (ML) model by using 600 TB of compressed data that is stored as thousands of files in the company’s on-premises network attached storage system. The company does not have the necessary compute resources on premises for ML experiments and wants to use AWS.
The company needs to complete the data transfer to AWS within 3 weeks. The data transfer will be a one-time transfer. The data must be encrypted in transit. The measured upload speed of the company’s internet connection is 100 Mbps, and multiple departments share the connection.
Which solution will meet these requirements MOST cost-effectively?
- A . Order several AWS Snowball Edge Storage Optimized devices by using the AWS Management Console. Configure the devices with a destination S3 bucket. Copy the data to the devices. Ship the devices back to AWS.
- B . Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
- C . Create a VPN connection between the on-premises network storage and the nearest AWS Region. Transfer the data over the VPN connection.
- D . Deploy an AWS Storage Gateway file gateway on premises. Configure the file gateway with a destination S3 bucket. Copy the data to the file gateway.
A
Explanation:
This solution meets the company's requirements because it provides a secure, cost-effective, and fast way to transfer a large data set from on premises to AWS. Snowball Edge devices encrypt the data during transfer, and the devices are shipped back to AWS for import into Amazon S3. Over the shared 100 Mbps internet connection, transferring 600 TB would take well over a year (see the rough calculation below), so none of the online options can meet the 3-week deadline. This option is also more cost-effective than Direct Connect or VPN, which would require the company to pay for a long-term dedicated connection it only needs once.
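A quick back-of-the-envelope check of why the online options fail and roughly how many devices are needed:

```python
# Assumes the full 100 Mbps were available, which it is not,
# since multiple departments share the link.
data_bits = 600e12 * 8           # 600 TB expressed in bits
seconds = data_bits / 100e6      # transfer time at 100 Mbps
print(seconds / 86400)           # ~555 days -- far beyond the 3-week window

# A Snowball Edge Storage Optimized device holds about 80 TB of usable
# storage, so roughly 600 / 80 = 8 devices cover the data set.
print(600 / 80)
```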
A company manages hundreds of AWS accounts centrally in an organization in AWS Organizations. The company recently started to allow product teams to create and manage their own S3 access points in their accounts. The S3 access points can be accessed only within VPCs, not on the internet.
What is the MOST operationally efficient way to enforce this requirement?
- A . Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
- B . Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
- C . Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
- D . Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
B
Explanation:
https://aws.amazon.com/s3/features/access-points/
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
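A minimal sketch of what such an SCP could look like, attached at the organization root with boto3; the root ID and policy name are hypothetical:

```python
import json
import boto3

orgs = boto3.client("organizations")

# Deny creating any S3 access point whose network origin is not a VPC.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:CreateAccessPoint",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"s3:AccessPointNetworkOrigin": "VPC"}
            },
        }
    ],
}
policy = orgs.create_policy(
    Name="RequireVpcOnlyAccessPoints",
    Description="Deny S3 access points with an Internet network origin",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid111",  # hypothetical organization root ID
)
```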
A company uses AWS Organizations to manage a multi-account structure. The company has hundreds of AWS accounts and expects the number of accounts to increase. The company is building a new application that uses Docker images. The company will push the Docker images to Amazon Elastic Container Registry (Amazon ECR). Only accounts that are within the company’s organization should have access to the images.
The company has a CI/CD process that runs frequently. The company wants to retain all the tagged images. However, the company wants to retain only the five most recent untagged images.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a private repository in Amazon ECR. Create a permissions policy for the repository that allows only required ECR operations. Include a condition to allow the ECR operations if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company’s organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.
- B . Create a public repository in Amazon ECR. Create an IAM role in the ECR account. Set permissions so that any account can assume the role if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company’s organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.
- C . Create a private repository in Amazon ECR. Create a permissions policy for the repository that includes only required ECR operations. Include a condition to allow the ECR operations for all account IDs in the organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.
- D . Create a public repository in Amazon ECR. Configure Amazon ECR to use an interface VPC endpoint with an endpoint policy that includes the required permissions for images that the company needs to pull. Include a condition to allow the ECR operations for all account IDs in the company’s organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.
A
Explanation:
This option allows the company to use a private repository in Amazon ECR to store and manage its Docker images securely and efficiently [1]. By creating a permissions policy for the repository that allows only required ECR operations, such as ecr:GetDownloadUrlForLayer, ecr:BatchGetImage, ecr:BatchCheckLayerAvailability, ecr:PutImage, and ecr:InitiateLayerUpload [2], the company can restrict access to the repository and prevent unauthorized actions. By including a condition to allow the ECR operations only if the value of the aws:PrincipalOrgID condition key equals the ID of the company’s organization, the company ensures that only accounts within its organization can access the images [3]. By adding a lifecycle rule to the ECR repository that deletes all untagged images over the count of five, the company reduces storage costs while retaining the most recent untagged images [4].
Amazon ECR private repositories
Amazon ECR repository policies
Restricting access to AWS Organizations members
Amazon ECR lifecycle policies
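A minimal boto3 sketch of the lifecycle rule and the organization-scoped repository policy, with a hypothetical repository name and organization ID:

```python
import json
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Lifecycle rule: keep only the five most recent untagged images.
lifecycle = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images beyond the newest five",
            "selection": {
                "tagStatus": "untagged",
                "countType": "imageCountMoreThan",
                "countNumber": 5,
            },
            "action": {"type": "expire"},
        }
    ]
}
ecr.put_lifecycle_policy(
    repositoryName="app-images",
    lifecyclePolicyText=json.dumps(lifecycle),
)

# Repository policy restricting pulls to principals in the organization.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPullWithinOrg",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
            ],
            "Condition": {
                "StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}
            },
        }
    ],
}
ecr.set_repository_policy(
    repositoryName="app-images", policyText=json.dumps(policy)
)
```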
A company has an application that uses an Amazon Aurora PostgreSQL DB cluster for the application’s database. The DB cluster contains one small primary instance and three larger replica instances. The application runs on an AWS Lambda function. The application makes many short-lived connections to the database’s replica instances to perform read-only operations.
During periods of high traffic, the application becomes unreliable and the database reports that too many connections are being established. The frequency of high-traffic periods is unpredictable.
Which solution will improve the reliability of the application?
- A . Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the proxy. Update the Lambda function to connect to the proxy endpoint.
- B . Increase the max_connections setting on the DB cluster’s parameter group. Reboot all the instances in the DB cluster. Update the Lambda function to connect to the DB cluster endpoint.
- C . Configure instance scaling for the DB cluster to occur when the DatabaseConnections metric is close to the max_connections setting. Update the Lambda function to connect to the Aurora reader endpoint.
- D . Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the Aurora Data API on the proxy. Update the Lambda function to connect to the proxy endpoint.
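A
Explanation:
Amazon RDS Proxy pools and reuses database connections, which absorbs the many short-lived connections that the Lambda function opens and prevents the cluster from exhausting its connection limit during unpredictable traffic spikes. A read-only (reader) endpoint on the proxy sends the application’s read-only queries to the Aurora replica instances. Raising max_connections or scaling instances reactively does not address the connection churn, and the Aurora Data API is not something that is configured on an RDS Proxy endpoint, which makes option D invalid.
A minimal boto3 sketch of the proxy and its read-only endpoint, with hypothetical names, ARNs, and subnet IDs (the proxy’s targets still have to be registered against the DB cluster separately):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the proxy in front of the Aurora PostgreSQL cluster.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Add a read-only endpoint that routes to the reader instances.
rds.create_db_proxy_endpoint(
    DBProxyName="app-proxy",
    DBProxyEndpointName="app-proxy-ro",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    TargetRole="READ_ONLY",
)
```

The Lambda function then connects to the read-only proxy endpoint’s DNS name instead of opening connections directly against the replicas.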
A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to support a canary release.
Which solution will meet these requirements?
- A . Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
- B . Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
- C . Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
- D . Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.
A
Explanation:
https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/
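The blog post above shifts traffic by updating the alias’ routing configuration; an equivalent boto3 sketch (function and alias names are hypothetical) sends 10% of invocations to the newly published version while the alias keeps pointing at the stable one:

```python
import boto3

lam = boto3.client("lambda")

# Publish the new code as an immutable version.
new_version = lam.publish_version(FunctionName="orders-api")["Version"]

# Canary: route 10% of the alias traffic to the new version.
lam.update_alias(
    FunctionName="orders-api",
    Name="live",
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.1}},
)
```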
