Practice Free SAP-C02 Exam Online Questions
A company needs to gather data from an experiment in a remote location that does not have internet connectivity. During the experiment, sensors that are connected to a local network will generate 6 TB of data in a proprietary format over the course of 1 week. The sensors can be configured to upload their data files to an FTP server periodically, but the sensors do not have their own FTP server. The sensors also do not support other protocols. The company needs to collect the data centrally and move the data to object storage in the AWS Cloud as soon as possible after the experiment.
Which solution will meet these requirements?
- A . Order an AWS Snowball Edge Compute Optimized device. Connect the device to the local network. Configure AWS DataSync with a target bucket name, and upload the data over NFS to the device. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
- B . Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
- C . Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Install and configure an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2 instance. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
- D . Order an AWS Snowcone device. Connect the device to the local network. Configure the device to use Amazon FSx. Configure the sensors to upload data to the device. Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
C
Explanation:
For collecting data from remote sensors without internet connectivity, using an AWS Snowcone device with an Amazon EC2 instance running an FTP server presents a practical solution. This setup allows the sensors to upload data to the EC2 instance via FTP, and after the experiment, the Snowcone device can be returned to AWS for data ingestion into Amazon S3. This approach minimizes operational complexity and ensures efficient data transfer to AWS for further processing or storage.
AWS Documentation on AWS Snowcone and Amazon EC2 provides detailed guidance on deploying compute and storage capabilities in edge locations. This solution leverages AWS’s edge computing devices to address challenges associated with data collection in remote or disconnected environments.
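As a rough illustration of the sensor-side upload flow described above, the following Python sketch uses the standard library's ftplib to push a data file to the FTP server running on the Snowcone-hosted EC2 instance; the host address, credentials, and file name are hypothetical placeholders.

```python
from ftplib import FTP
from pathlib import Path

# Hypothetical address and credentials of the FTP server running on the
# Snowcone-hosted EC2 instance.
FTP_HOST = "10.0.0.10"
FTP_USER = "sensor"
FTP_PASSWORD = "example-password"

def upload_reading(local_file: str) -> None:
    """Upload a single sensor data file to the on-site FTP server."""
    path = Path(local_file)
    with FTP(FTP_HOST) as ftp:
        ftp.login(user=FTP_USER, passwd=FTP_PASSWORD)
        with path.open("rb") as fh:
            # STOR writes the file into the FTP server's current directory.
            ftp.storbinary(f"STOR {path.name}", fh)

if __name__ == "__main__":
    upload_reading("reading-2024-01-01T00-00.bin")  # placeholder file name
```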
A company is creating a sequel for a popular online game. A large number of users from all over the world will play the game within the first week after launch.
Currently, the game consists of the following components deployed in a single AWS Region:
• Amazon S3 bucket that stores game assets
• Amazon DynamoDB table that stores player scores
A solutions architect needs to design a multi-Region solution that will reduce latency, improve reliability, and require the least effort to implement.
What should the solutions architect do to meet these requirements?
- A . Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Cross-Region Replication. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
- B . Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Same-Region Replication. Create a new DynamoDB table in a new Region. Configure asynchronous replication between the DynamoDB tables by using AWS Database Migration Service (AWS DMS) with change data capture (CDC).
- C . Create another S3 bucket in a new Region, and configure S3 Cross-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets in each Region. Configure DynamoDB global tables by enabling Amazon DynamoDB Streams, and add a replica table in a new Region.
- D . Create another S3 bucket in the same Region, and configure S3 Same-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
C
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-global-table-stream-lambda/?nc1=h_ls
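As a hedged sketch of the DynamoDB portion of option C, the two boto3 calls below enable DynamoDB Streams on the existing table and then add a replica in a second Region; the table name and Region names are placeholders, and in practice the second call would be made only after the first update completes.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")  # primary Region (placeholder)

# 1. Enable DynamoDB Streams on the existing table.
dynamodb.update_table(
    TableName="PlayerScores",  # hypothetical table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# 2. Add a replica table in a second Region to form a global table.
dynamodb.update_table(
    TableName="PlayerScores",
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}}  # replica Region (placeholder)
    ],
)
```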
A solutions architect is creating an AWS CloudFormation template from an existing manually created non-production AWS environment. The CloudFormation template can be destroyed and recreated as needed. The environment contains an Amazon EC2 instance. The EC2 instance has an instance profile that the EC2 instance uses to assume a role in a parent account.
The solutions architect recreates the role in a CloudFormation template and uses the same role name. When the CloudFormation template is launched in the child account, the EC2 instance can no longer assume the role in the parent account because of insufficient permissions.
What should the solutions architect do to resolve this issue?
- A . In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Ensure that the target role ARN in the existing statement that allows the sts:AssumeRole action is correct. Save the trust policy.
- B . In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Add a statement that allows the sts:AssumeRole action for the root principal of the child account. Save the trust policy.
- C . Update the CloudFormation stack again. Specify only the CAPABILITY_NAMED_IAM capability.
- D . Update the CloudFormation stack again. Specify the CAPABILITY_IAM capability and the CAPABILITY_NAMED_IAM capability.
A
Explanation:
Edit the Trust Policy:
Go to the IAM console in the parent account and locate the role that the EC2 instance needs to assume.
Edit the trust policy of the role to ensure that it correctly allows the sts:AssumeRole action for the role ARN in the child account.
Update the Role ARN:
Verify that the target role ARN specified in the trust policy matches the role ARN created by the CloudFormation stack in the child account.
If necessary, update the ARN to reflect the correct role in the child account.
Save and Test:
Save the updated trust policy and ensure there are no syntax errors.
Test the setup by attempting to assume the role from the EC2 instance in the child account. Verify that the instance can successfully assume the role and perform the required actions.
This ensures that the EC2 instance in the child account can assume the role in the parent account,
resolving the permission issue.
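A minimal boto3 sketch of re-saving the trust policy is shown below; the parent-account role name, account ID, and the recreated child-account role ARN are hypothetical placeholders.

```python
import json
import boto3

iam = boto3.client("iam")  # assumes credentials for the PARENT account

# Hypothetical ARN of the role that CloudFormation recreated in the child account.
child_role_arn = "arn:aws:iam::111122223333:role/AppInstanceRole"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": child_role_arn},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Re-save the trust policy so it points at the recreated role's ARN.
iam.update_assume_role_policy(
    RoleName="CrossAccountTargetRole",  # role in the parent account (placeholder)
    PolicyDocument=json.dumps(trust_policy),
)
```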
Reference:
AWS IAM Documentation on Trust Policies.
A company wants to design a disaster recovery (DR) solution for an application that runs in the company’s data center. The application writes to an SMB file share and creates a copy on a second file share. Both file shares are in the data center. The application uses two types of files: metadata files and image files.
The company wants to store the copy on AWS. The company needs the ability to use SMB to access the data from either the data center or AWS if a disaster occurs. The copy of the data is rarely
accessed but must be available within 5 minutes.
Which solution will meet these requirements MOST cost-effectively?
- A . Deploy AWS Outposts with Amazon S3 storage. Configure a Windows Amazon EC2 instance on Outposts as a file server.
- B . Deploy an Amazon FSx File Gateway. Configure an Amazon FSx for Windows File Server Multi-AZ file system that uses SSD storage.
- C . Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and to use S3 Glacier Deep Archive for the image files.
- D . Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and image files.
C
Explanation:
The correct solution is to use an Amazon S3 File Gateway to store the copy of the SMB file share on AWS. An S3 File Gateway enables on-premises applications to store and access objects in Amazon S3 using the SMB protocol. The S3 File Gateway can also be accessed from AWS using the SMB protocol, which provides the ability to use the data from either the data center or AWS if a disaster occurs. The S3 File Gateway supports tiering of data to different S3 storage classes based on the file type. This allows the company to optimize the storage costs by using S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files, which are rarely accessed but must be available within 5 minutes, and S3 Glacier Deep Archive, which is the lowest-cost storage class and is suitable for long-term retention of data that is rarely accessed, for the image files. This solution is the most cost-effective because it does not require any additional hardware, software, or replication services.
The other solutions are incorrect because they either use more expensive or unnecessary services or components, or they do not meet the requirements. For example:
Solution A is incorrect because it uses AWS Outposts with Amazon S3 storage, which is a very expensive and complex solution for the scenario in the question. AWS Outposts is a service that extends AWS infrastructure, services, APIs, and tools to virtually any data center, co-location space, or on-premises facility. It is designed for customers who need low latency and local data processing. Amazon S3 storage on Outposts provides a subset of S3 features and APIs to store and retrieve data on Outposts. However, this solution does not provide SMB access to the data on Outposts, which requires a Windows EC2 instance on Outposts as a file server. This adds more cost and complexity to the solution, and it does not provide the ability to access the data from AWS if a disaster occurs.
Solution B is incorrect because it uses Amazon FSx File Gateway and an Amazon FSx for Windows File Server Multi-AZ file system that uses SSD storage, which are both more expensive and unnecessary services for the scenario in the question. Amazon FSx File Gateway is a service that enables on-premises applications to store and access data in Amazon FSx for Windows File Server using the SMB protocol. Amazon FSx for Windows File Server is a fully managed service that provides native Windows file shares with the compatibility, features, and performance that Windows-based applications rely on. However, this solution does not meet the requirements because it does not provide the ability to use different storage classes for the metadata files and image files, and it does not provide the ability to access the data from AWS if a disaster occurs. Moreover, using a Multi-AZ file system that uses SSD storage is overprovisioned and costly for the scenario in the question, which involves rarely accessed data that must be available within 5 minutes.
Solution D is incorrect because it uses an S3 File Gateway that uses S3 Standard-IA for both the metadata files and image files, which is not the most cost-effective solution for the scenario in the question. S3 Standard-IA is a storage class that offers high durability, availability, and performance for infrequently accessed data. However, it is more expensive than S3 Glacier Deep Archive, which is the lowest-cost storage class and suitable for long-term retention of data that is rarely accessed. Therefore, using S3 Standard-IA for the image files, which are likely to be larger and more numerous than the metadata files, is not optimal for the storage costs.
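As a hedged sketch of how one of the SMB file shares in option C might be created with boto3 (the gateway ARN, IAM role, and bucket are placeholders), the share's default storage class is set to S3 Standard-IA for the metadata files; moving the image files into S3 Glacier Deep Archive would typically be handled by an S3 Lifecycle rule on their prefix rather than by the share itself.

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")  # placeholder Region

# Hypothetical identifiers; replace with the real gateway, role, and bucket.
gateway_arn = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678"
access_role_arn = "arn:aws:iam::111122223333:role/FileGatewayAccessRole"

# SMB file share for the metadata files, written directly to S3 Standard-IA.
sgw.create_smb_file_share(
    ClientToken="metadata-share-token-1",
    GatewayARN=gateway_arn,
    Role=access_role_arn,
    LocationARN="arn:aws:s3:::dr-copy-bucket/metadata",  # placeholder bucket/prefix
    DefaultStorageClass="S3_STANDARD_IA",
    Authentication="GuestAccess",
)
# A second share for the image files would point at a different prefix, with a
# lifecycle rule on that prefix transitioning objects to S3 Glacier Deep Archive.
```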
Reference:
What is S3 File Gateway?
Using Amazon S3 storage classes with S3 File Gateway
Accessing your file shares from AWS
A company has an application that stores user-uploaded videos in an Amazon S3 bucket that uses S3 Standard storage. Users access the videos frequently in the first 180 days after the videos are uploaded. Access after 180 days is rare. Named users and anonymous users access the videos. Most of the videos are more than 100 MB in size. Users often have poor internet connectivity when they upload videos, resulting in failed uploads. The company uses multipart uploads for the videos. A solutions architect needs to optimize the S3 costs of the application.
Which combination of actions will meet these requirements? (Select TWO.)
- A . Configure the S3 bucket to be a Requester Pays bucket.
- B . Use S3 Transfer Acceleration to upload the videos to the S3 bucket.
- C . Create an S3 Lifecycle configuration to expire incomplete multipart uploads 7 days after initiation.
- D . Create an S3 Lifecycle configuration to transition objects to S3 Glacier Instant Retrieval after 1 day.
- E . Create an S3 Lifecycle configuration to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days.
C, E
Explanation:
Configuring an S3 Lifecycle policy to expire incomplete multipart uploads after 7 days prevents storage of partially uploaded objects, avoiding unnecessary costs from failed uploads. Additionally, implementing a lifecycle policy to transition objects to S3 Standard-IA after 180 days ensures older videos that are rarely accessed move to a lower-cost storage class, significantly reducing storage costs.
These measures are aligned with AWS cost optimization best practices for S3 data lifecycle management.
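A minimal boto3 sketch of the two lifecycle rules described above; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="user-uploaded-videos",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                # Clean up parts left behind by failed multipart uploads.
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
            {
                # Move videos to a cheaper storage class once access becomes rare.
                "ID": "transition-to-standard-ia",
                "Status": "Enabled",
                "Filter": {},
                "Transitions": [{"Days": 180, "StorageClass": "STANDARD_IA"}],
            },
        ]
    },
)
```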
A company needs to migrate some Oracle databases to AWS while keeping others on premises for compliance. The on-premises databases contain spatial data and run cron jobs. The solution must allow querying on-premises data as foreign tables from AWS.
- A . Use DynamoDB, SCT, and Lambda. Move spatial data to S3 and query with Athena.
- B . Use RDS for SQL Server and AWS Glue crawlers for Oracle access.
- C . Use EC2-hosted Oracle with Application Migration Service. Use Step Functions for cron.
- D . Use RDS for PostgreSQL with DMS and SCT. Use PostgreSQL foreign data wrappers. Connect via Direct Connect.
D
Explanation:
D is correct because RDS for PostgreSQL supports foreign data wrappers (FDW) that allow querying remote Oracle databases. With AWS Schema Conversion Tool (SCT) and Database Migration Service (DMS), schema and data can be migrated effectively. AWS Direct Connect ensures secure, private connectivity to on-premises databases. Cron jobs can be run via EventBridge or external orchestration.
A doesn’t support relational/spatial querying.
B doesn’t support FDW or spatial types.
C introduces unnecessary complexity.
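As a hedged illustration of option D, the sketch below runs the foreign-table setup through psycopg2, assuming the oracle_fdw extension is available on the RDS for PostgreSQL instance; the endpoints, credentials, and table definition are placeholders.

```python
import psycopg2

# Placeholder connection details for the RDS for PostgreSQL instance.
conn = psycopg2.connect(
    host="app-db.cluster-xyz.us-east-1.rds.amazonaws.com",
    dbname="appdb",
    user="admin",
    password="example-password",
)

ddl = """
CREATE EXTENSION IF NOT EXISTS oracle_fdw;

-- On-premises Oracle endpoint reached over Direct Connect (placeholder address).
CREATE SERVER onprem_oracle
    FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//10.10.0.25:1521/ORCLPDB1');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER onprem_oracle
    OPTIONS (user 'app_reader', password 'example-password');

-- Expose an on-premises table as a foreign table that can be queried from AWS.
CREATE FOREIGN TABLE store_locations (
    store_id   integer,
    store_name text
)
SERVER onprem_oracle
OPTIONS (schema 'APP', table 'STORE_LOCATIONS');
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)
```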
Reference:
https://www.postgresql.org/docs/current/postgres-fdw.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html
A company is migrating a containerized Kubernetes app with manifest files to AWS.
What is the easiest migration path?
- A . App Runner + open-source repo
- B . Amazon EKS with managed node groups and Aurora
- C . ECS on EC2 + task definitions
- D . Rebuild Kubernetes cluster on EC2 manually
B
Explanation:
Since the company is already using Kubernetes manifests, Amazon EKS is a natural fit. It is a managed Kubernetes control plane and works with Amazon Aurora PostgreSQL, a highly available DB service.
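As a hedged illustration of the managed node groups mentioned in option B, a node group could be added to an existing EKS cluster with boto3; all identifiers below are placeholders, and the existing Kubernetes manifests would still be applied with kubectl once the cluster is running.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # placeholder Region

# Add a managed node group to an existing cluster (all identifiers are placeholders).
eks.create_nodegroup(
    clusterName="game-cluster",
    nodegroupName="default-workers",
    scalingConfig={"minSize": 2, "maxSize": 6, "desiredSize": 3},
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    instanceTypes=["m5.large"],
    nodeRole="arn:aws:iam::111122223333:role/EKSNodeRole",
)
```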
Reference:
What is EKS?
A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones.
After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs.
While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors.
Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)
- A . Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
- B . Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
- C . Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
- D . Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
- E . Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.
C,E
Explanation:
"Save your custom error pages in a location that is accessible to CloudFront. We recommend that you store them in an Amazon S3 bucket, and that you don’t store them in the same place as the rest of your website or application’s content. If you store the custom error pages on the same origin as your website or application, and the origin starts to return 5xx errors, CloudFront can’t get the custom error pages because the origin server is unavailable." https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GeneratingCustomErrorResponses.html
A company is using AWS CodePipeline for the CI/CD of an application to an Amazon EC2 Auto Scaling group. All AWS resources are defined in AWS CloudFormation templates. The application artifacts are stored in an Amazon S3 bucket and deployed to the Auto Scaling group using instance user data scripts.
As the application has become more complex, recent resource changes in the CloudFormation templates have caused unplanned downtime.
How should a solutions architect improve the CI/CD pipeline to reduce the likelihood that changes in the templates will cause downtime?
- A . Adapt the deployment scripts to detect and report CloudFormation error conditions when performing deployments. Write test plans for a testing team to execute in a non-production environment before approving the change for production.
- B . Implement automated testing using AWS CodeBuild in a test environment. Use CloudFormation change sets to evaluate changes before deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if needed.
- C . Use plugins for the integrated development environment (IDE) to check the templates for errors, and use the AWS CLI to validate that the templates are correct. Adapt the deployment code to check for error conditions and generate notifications on errors. Deploy to a test environment and execute a manual test plan before approving the change for production.
- D . Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the user data deployment scripts. Have the operators log in to running instances and go through a manual test plan to verify the application is running as expected.
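Several of the options revolve around CloudFormation change sets. As a hedged, minimal boto3 sketch (stack name, change set name, and template path are placeholders), a change set can be created and reviewed before anything is deployed:

```python
import boto3

cfn = boto3.client("cloudformation")

# Create a change set against the existing stack (names and template path are placeholders).
with open("template.yaml") as fh:
    template_body = fh.read()

cfn.create_change_set(
    StackName="app-stack",
    ChangeSetName="pre-deploy-review",
    TemplateBody=template_body,
    ChangeSetType="UPDATE",
)

# Wait until the change set is ready, then inspect the proposed resource changes.
cfn.get_waiter("change_set_create_complete").wait(
    StackName="app-stack", ChangeSetName="pre-deploy-review"
)
changes = cfn.describe_change_set(StackName="app-stack", ChangeSetName="pre-deploy-review")
for change in changes["Changes"]:
    detail = change["ResourceChange"]
    print(detail["Action"], detail["LogicalResourceId"], detail["ResourceType"])

# Only after review would the change set be executed:
# cfn.execute_change_set(StackName="app-stack", ChangeSetName="pre-deploy-review")
```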
A company is migrating some of its applications to AWS. The company wants to migrate and modernize the applications quickly after it finalizes networking and security strategies. The company has set up an AWS Direct Connect connection in a central network account.
The company expects to have hundreds of AWS accounts and VPCs in the near future. The corporate network must be able to access the resources on AWS seamlessly and also must be able to communicate with all the VPCs. The company also wants to route its cloud resources to the internet through its on-premises data center.
Which combination of steps will meet these requirements? (Choose three.)
- A . Create a Direct Connect gateway in the central account. In each of the accounts, create an association proposal by using the Direct Connect gateway and the account ID for every virtual private gateway.
- B . Create a Direct Connect gateway and a transit gateway in the central network account. Attach the transit gateway to the Direct Connect gateway by using a transit VIF.
- C . Provision an internet gateway. Attach the internet gateway to subnets. Allow internet traffic through the gateway.
- D . Share the transit gateway with other accounts. Attach VPCs to the transit gateway.
- E . Provision VPC peering as necessary.
- F . Provision only private subnets. Open the necessary route on the transit gateway and customer gateway to allow outbound internet traffic from AWS to flow through NAT services that run in the data center.
B,D,F
Explanation:
Option A is incorrect because creating a Direct Connect gateway in the central account and creating an association proposal by using the Direct Connect gateway and the account ID for every virtual private gateway does not meet the requirement of seamless communication with all the VPCs. A Direct Connect gateway is a globally available resource that enables you to connect your AWS Direct Connect connection over a private virtual interface (VIF) to one or more VPCs in any AWS Region. A virtual private gateway is the VPN concentrator on the Amazon side of a VPN connection. You can associate a Direct Connect gateway with either a transit gateway or a virtual private gateway. However, a Direct Connect gateway with virtual private gateway associations does not route traffic between the attached VPCs, and managing an association proposal for every one of hundreds of VPCs adds significant operational overhead.1
Option B is correct because creating a Direct Connect gateway and a transit gateway in the central network account and attaching the transit gateway to the Direct Connect gateway by using a transit VIF meets the requirement of enabling the corporate network to access the resources on AWS seamlessly and also to communicate with all the VPCs. A transit VIF is a type of private VIF that you can use to connect your AWS Direct Connect connection to a transit gateway or a Direct Connect gateway. A transit gateway is a network transit hub that you can use to interconnect your VPCs and on-premises networks. By using a transit VIF, you can route traffic between your on-premises network and multiple VPCs across different AWS accounts and Regions through a single connection23
Option C is incorrect because provisioning an internet gateway, attaching the internet gateway to subnets, and allowing internet traffic through the gateway does not meet the requirement of routing cloud resources to the internet through its on-premises data center. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. By using an internet gateway, you are routing cloud resources directly to the internet, not through your on-premises data center.
Option D is correct because sharing the transit gateway with other accounts and attaching VPCs to the transit gateway meets the requirement of enabling the corporate network to access the resources on AWS seamlessly and also to communicate with all the VPCs. You can share your transit gateway with other AWS accounts within the same organization by using AWS Resource Access Manager (AWS RAM). This allows you to centrally manage connectivity from multiple accounts without having to create individual peering connections between VPCs or duplicate network appliances in each account. You can attach VPCs from different accounts and Regions to your shared transit gateway and enable routing between them.
Option E is incorrect because provisioning VPC peering as necessary does not meet the requirement of enabling the corporate network to access the resources on AWS seamlessly and also to communicate with all the VPCs. VPC peering is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account. However, VPC peering is not transitive, so it does not allow you to route traffic from your on-premises network to your VPCs through a peered VPC, and connecting hundreds of VPCs would require a full mesh of peering connections, which increases operational complexity and costs.
Option F is correct because provisioning only private subnets, opening the necessary route on the transit gateway and customer gateway to allow outbound internet traffic from AWS to flow through NAT services that run in the data center meets the requirement of routing cloud resources to the internet through its on-premises data center. A private subnet is a subnet that’s associated with a route table that has no route to an internet gateway. Instances in a private subnet can communicate with other instances in the same VPC but cannot access resources on the internet directly. To enable outbound internet access from instances in private subnets, you can use NAT devices such as NAT gateways or NAT instances that are deployed in public subnets. A public subnet is a subnet that’s associated with a route table that has a route to an internet gateway. Alternatively, you can use your on-premises data center as a NAT device by configuring routes on your transit gateway and customer gateway that direct outbound internet traffic from your private subnets through your VPN connection or Direct Connect connection. This way, you can route cloud resources to the internet through your on-premises data center instead of using an internet gateway.
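As a hedged boto3 sketch of the sharing and attachment steps from options B and D (the transit gateway ARN/ID, organization ARN, VPC, and subnet IDs are placeholders), the central network account shares the transit gateway through AWS RAM, and a spoke account then attaches its VPC:

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")  # central network account (placeholder Region)
ec2 = boto3.client("ec2", region_name="us-east-1")

transit_gateway_arn = "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0123456789abcdef0"

# Share the transit gateway with the rest of the organization through AWS RAM.
ram.create_resource_share(
    name="central-transit-gateway",
    resourceArns=[transit_gateway_arn],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
    allowExternalPrincipals=False,
)

# In a spoke account (after the share is available there), attach a VPC to the shared transit gateway.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0abc1234def567890",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```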
Reference:
1: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
2: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-virtual-interfaces.html
3: https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
https://docs.aws.amazon.com/vpc/latest/tgw/tgw-sharing.html
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario3.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Gateway.html
