Practice Free SAP-C02 Exam Online Questions
A company has migrated its forms-processing application to AWS. When users interact with the application, they upload scanned forms as files through a web application. A database stores user metadata and references to files that are stored in Amazon S3. The web application runs on Amazon EC2 instances and uses an Amazon RDS for PostgreSQL database.
When forms are uploaded, the application sends notifications to a team through Amazon Simple Notification Service (Amazon SNS). A team member then logs in and processes each form. The team member performs data validation on the form and extracts relevant data before entering the information into another system that uses an API.
A solutions architect needs to automate the manual processing of the forms. The solution must provide accurate form extraction, minimize time to market, and minimize long-term operational overhead.
Which solution will meet these requirements?
- A . Develop custom libraries to perform optical character recognition (OCR) on the forms. Deploy the libraries to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster as an application tier. Use this tier to process the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data into an Amazon DynamoDB table. Submit the data to the target system’s API. Host the new application tier on EC2 instances.
- B . Extend the system with an application tier that uses AWS Step Functions and AWS Lambda. Configure this tier to use artificial intelligence and machine learning (AI/ML) models that are trained and hosted on an EC2 instance to perform optical character recognition (OCR) on the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system’s API.
- C . Host a new application tier on EC2 instances. Use this tier to call endpoints that host artificial intelligence and machine learning (AI/ML) models that are trained and hosted in Amazon SageMaker to perform optical character recognition (OCR) on the forms. Store the output in Amazon ElastiCache. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system’s API.
- D . Extend the system with an application tier that uses AWS Step Functions and AWS Lambda. Configure this tier to use Amazon Textract and Amazon Comprehend to perform optical character recognition (OCR) on the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system’s API.
D
Explanation:
Extend the system with an application tier that uses AWS Step Functions and AWS Lambda. Configure this tier to use Amazon Textract and Amazon Comprehend to perform optical character recognition (OCR) on the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system’s API. This solution meets the requirements of accurate form extraction, minimal time to market, and minimal long-term operational overhead. Amazon Textract and Amazon Comprehend are fully managed and serverless services that can perform OCR and extract relevant data from the forms, which eliminates the need to develop custom libraries or train and host models. Using AWS Step Functions and Lambda allows for easy automation of the process and the ability to scale as needed.
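The extraction step of the Step Functions and Lambda tier can be sketched in outline. This is a minimal illustration only: the Textract response below is a trimmed mock, and real AnalyzeDocument output contains many more block types and fields.

```python
def extract_key_values(response):
    """Pair KEY and VALUE blocks from a Textract AnalyzeDocument response.

    Simplified sketch: assumes each KEY block has one VALUE relationship
    and that text is carried in CHILD WORD blocks.
    """
    blocks = {b["Id"]: b for b in response["Blocks"]}

    def text_of(block):
        words = []
        for rel in block.get("Relationships", []):
            if rel["Type"] == "CHILD":
                words += [blocks[i]["Text"] for i in rel["Ids"]]
        return " ".join(words)

    pairs = {}
    for block in blocks.values():
        if block["BlockType"] == "KEY_VALUE_SET" and "KEY" in block.get("EntityTypes", []):
            key_text = text_of(block)
            for rel in block.get("Relationships", []):
                if rel["Type"] == "VALUE":
                    for vid in rel["Ids"]:
                        pairs[key_text] = text_of(blocks[vid])
    return pairs

# Trimmed mock of a Textract response for a form with one field.
mock_response = {
    "Blocks": [
        {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
         "Relationships": [{"Type": "CHILD", "Ids": ["w1"]},
                           {"Type": "VALUE", "Ids": ["v1"]}]},
        {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
         "Relationships": [{"Type": "CHILD", "Ids": ["w2"]}]},
        {"Id": "w1", "BlockType": "WORD", "Text": "Name:"},
        {"Id": "w2", "BlockType": "WORD", "Text": "Jane"},
    ]
}

print(extract_key_values(mock_response))  # {'Name:': 'Jane'}
```

In the real workflow a Lambda function would call `textract.analyze_document` on the uploaded form first; only the parsing of the returned blocks is shown here.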
A company uses AWS Organizations to manage its AWS accounts. A solutions architect must design a solution in which only administrator roles are allowed to use IAM actions. However, the solutions architect does not have access to all the AWS accounts throughout the company.
Which solution meets these requirements with the LEAST operational overhead?
- A . Create an SCP that applies to all the AWS accounts to allow IAM actions only for administrator roles. Apply the SCP to the root OU.
- B . Configure AWS CloudTrail to invoke an AWS Lambda function for each event that is related to IAM actions. Configure the function to deny the action if the user who invoked the action is not an administrator.
- C . Create an SCP that applies to all the AWS accounts to deny IAM actions for all users except for those with administrator roles. Apply the SCP to the root OU.
- D . Set an IAM permissions boundary that allows IAM actions. Attach the permissions boundary to every administrator role across all the AWS accounts.
A
Explanation:
To restrict IAM actions to only administrator roles across all AWS accounts in an organization, the most operationally efficient solution is to create a Service Control Policy (SCP) that allows IAM actions exclusively for administrator roles and apply this SCP to the root Organizational Unit (OU) of AWS Organizations. This method ensures a centralized governance mechanism that uniformly applies the policy across all accounts, thereby minimizing the need for individual account-level configurations and reducing operational complexity.
AWS Documentation on AWS Organizations and Service Control Policies offers comprehensive information on creating and managing SCPs for organizational-wide policy enforcement. This approach aligns with AWS best practices for managing permissions and ensuring secure and compliant account configurations within an AWS Organization.
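Because SCPs do not support a Principal element, an "allow IAM actions only for administrator roles" policy is in practice usually written as a Deny with a condition on the calling principal. A hypothetical sketch (the role-name pattern is an assumption, not from the question):

```python
import json

# Hypothetical SCP: deny all IAM actions unless the calling principal is
# an administrator role. The Administrator* role-name pattern is assumed.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIAMExceptAdmins",
            "Effect": "Deny",
            "Action": "iam:*",
            "Resource": "*",
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/Administrator*"
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attaching this policy to the root OU (for example with the `attach-policy` call in AWS Organizations) applies it to every account in the organization.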
A company is building an application on AWS. The application sends logs to an Amazon OpenSearch Service cluster for analysis. All data must be stored within a VPC.
Some of the company’s developers work from home. Other developers work from three different company office locations. The developers need to access OpenSearch Service to analyze and visualize logs directly from their local development machines.
Which solution will meet these requirements?
- A . Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN.
- B . Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client.
- C . Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection.
- D . Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH.
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The key requirements are: OpenSearch Service must be deployed within a VPC (VPC-only access), and developers must access OpenSearch from their local machines across multiple locations, including home networks. The most suitable low-overhead approach is to provide remote users with secure client-based connectivity into the VPC so they can reach private endpoints.
AWS Client VPN is a managed client-based VPN service that allows individual users to establish secure TLS VPN connections from their devices into a VPC. By associating a Client VPN endpoint with a subnet in the VPC and configuring authorization rules and routes, developers can access private resources (including VPC-only Amazon OpenSearch Service endpoints) as if they were on the corporate network. Client VPN is designed for distributed workforces and supports users connecting from anywhere without requiring each remote location to have dedicated network appliances.
Option A matches the need for remote developer access from home and multiple offices with the least operational overhead because it is a managed service for user-based VPN access and does not require running and maintaining bastion fleets or building site-to-site networks for each location.
Option B is not correct because AWS Site-to-Site VPN is designed to connect networks (for example, an office network or data center) to AWS, not to provide individual developers remote access from arbitrary home networks. Also, instructing developers to use an OpenVPN client does not align with how Site-to-Site VPN is typically used; Site-to-Site VPN terminates on a customer gateway device, not on individual laptops.
Option C is not correct because Direct Connect is designed for dedicated private connectivity between on-premises networks and AWS. It is not a solution for individual developers connecting from home. Additionally, using a public VIF is for reaching public AWS endpoints, whereas the requirement is to keep access within a VPC. A public VIF does not provide private VPC access to VPC-only service endpoints.
Option D is not the best choice because a bastion host provides SSH access to instances, not direct, secure network-level access to VPC-only managed service endpoints from developer tools. It also increases operational overhead (patching, hardening, monitoring, scaling) and introduces additional security considerations. Developers also typically need browser-based or tool-based access to OpenSearch Dashboards, which is better served by VPN access into the VPC than SSH tunneling through a bastion host as a primary access mechanism.
Therefore, configuring AWS Client VPN to provide developers with secure connectivity into the VPC is the correct solution.
References:
AWS documentation on AWS Client VPN as a managed client-based VPN service for remote user access to VPC resources.
AWS documentation on VPC-only access patterns for managed services and using VPN connectivity to reach private endpoints from remote networks.
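As a rough illustration, the Client VPN endpoint from option A might be created with parameters like the following. The ARNs and CIDRs are placeholders, and the API call itself is only indicated in a comment:

```python
# Hypothetical parameters for ec2.create_client_vpn_endpoint (boto3).
# All ARNs and CIDR values below are placeholders.
client_vpn_params = {
    "ClientCidrBlock": "10.100.0.0/22",  # must not overlap the VPC CIDR
    "ServerCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE",
    "AuthenticationOptions": [
        {
            "Type": "certificate-authentication",
            "MutualAuthentication": {
                "ClientRootCertificateChainArn":
                    "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE-CA"
            },
        }
    ],
    "ConnectionLogOptions": {"Enabled": False},
    "SplitTunnel": True,  # route only VPC-bound traffic through the tunnel
}

# ec2 = boto3.client("ec2")
# ec2.create_client_vpn_endpoint(**client_vpn_params)
print(sorted(client_vpn_params))
```

After creating the endpoint, you would still associate it with a subnet, add an authorization rule for the VPC CIDR, and enable the self-service portal, as option A describes.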
A large education company recently introduced Amazon WorkSpaces to provide access to internal applications across multiple universities. The company is storing user profiles on an Amazon FSx for Windows File Server file system. The file system is configured with a DNS alias and is connected to a self-managed Active Directory. As more users begin to use the WorkSpaces, login time increases to unacceptable levels.
An investigation reveals a degradation in performance of the file system. The company created the file system on HDD storage with a throughput of 16 MBps. A solutions architect must improve the performance of the file system during a defined maintenance window.
What should the solutions architect do to meet these requirements with the LEAST administrative effort?
- A . Use AWS Backup to create a point-in-time backup of the file system. Restore the backup to a new FSx for Windows File Server file system. Select SSD as the storage type. Select 32 MBps as the throughput capacity. When the backup and restore process is completed, adjust the DNS alias accordingly. Delete the original file system.
- B . Disconnect users from the file system. In the Amazon FSx console, update the throughput capacity to 32 MBps. Update the storage type to SSD. Reconnect users to the file system.
- C . Deploy an AWS DataSync agent onto a new Amazon EC2 Instance. Create a task. Configure the existing file system as the source location. Configure a new FSx for Windows File Server file system with SSD storage and 32 MBps of throughput as the target location. Schedule the task. When the task is completed, adjust the DNS alias accordingly. Delete the original file system.
- D . Enable shadow copies on the existing file system by using a Windows PowerShell command. Schedule the shadow copy job to create a point-in-time backup of the file system. Choose to restore previous versions. Create a new FSx for Windows File Server file system with SSD storage and 32 MBps of throughput. When the copy job is completed, adjust the DNS alias. Delete the original file system.
C
Explanation:
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/migrate-files-to-fsx-datasync.html
AWS DataSync automates copying the files to a new FSx for Windows File Server file system that is created with SSD storage and higher throughput. The task can be scheduled to run during the maintenance window, and updating the DNS alias afterward redirects users to the new file system without client-side changes.
A financial company needs to create a separate AWS account for a new digital wallet application. The company uses AWS Organizations to manage its accounts. A solutions architect uses the IAM user Support1 from the management account to create a new member account with [email protected] as the email address.
What should the solutions architect do to create IAM users in the new member account?
- A . Sign in to the AWS Management Console with AWS account root user credentials by using the 64-character password from the initial AWS Organizations email [email protected]. Set up the IAM users as required.
- B . From the management account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member account. Set up the IAM users as required.
- C . Go to the AWS Management Console sign-in page. Choose "Sign in using root account credentials." Sign in by using the email address [email protected] and the management account’s root password. Set up the IAM users as required.
- D . Go to the AWS Management Console sign-in page. Sign in by using the account ID of the new member account and the Support1 IAM credentials. Set up the IAM users as required.
B
Explanation:
When AWS Organizations creates a member account, it automatically creates an IAM role named OrganizationAccountAccessRole in that account. The role has full administrative permissions and trusts the management account, so a user in the management account can switch roles into the new member account by specifying its account ID and then set up the IAM users as required. No root password is set when Organizations creates the account, so options A and C cannot work as described, and the Support1 IAM user from the management account cannot sign in to the member account directly, which rules out option D.
Reference: AWS Organizations documentation, "Accessing a member account that has a management account access role"
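The switch-roles step in option B can be sketched as follows. The account ID is a placeholder, and the assume-role call is shown only as a comment because it requires management-account credentials:

```python
def org_access_role_arn(member_account_id: str,
                        role_name: str = "OrganizationAccountAccessRole") -> str:
    """ARN of the cross-account role that Organizations creates in new members."""
    return f"arn:aws:iam::{member_account_id}:role/{role_name}"

# From the management account (boto3 assumed; not executed here):
# sts = boto3.client("sts")
# creds = sts.assume_role(
#     RoleArn=org_access_role_arn("123456789012"),
#     RoleSessionName="create-iam-users",
# )["Credentials"]

print(org_access_role_arn("123456789012"))
# arn:aws:iam::123456789012:role/OrganizationAccountAccessRole
```

The temporary credentials returned by `assume_role` can then be used with the IAM API to create users in the member account.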
A company completed a successful Amazon Workspaces proof of concept. They now want to make Workspaces highly available across two AWS Regions. Workspaces are deployed in the failover Region. A hosted zone is available in Amazon Route 53.
What should the solutions architect do?
- A . Create a connection alias in the primary Region and in the failover Region. Associate each with a directory in its Region. Create a Route 53 failover routing policy with Evaluate Target Health = Yes.
- B . Create a connection alias in both Regions. Associate both with a directory in the primary Region. Use a Route 53 multivalue answer routing policy.
- C . Create a connection alias in the primary Region. Associate with the directory in the primary Region. Use Route 53 weighted routing.
- D . Create a connection alias in the primary Region. Associate it with the directory in the failover Region. Use Route 53 failover routing with Evaluate Target Health = Yes.
A
Explanation:
A is correct because AWS recommends using one connection alias per Region, associated with each directory. Then, configure a Route 53 failover policy so that if the primary Region becomes unhealthy, users are directed to the failover Region automatically. "Evaluate Target Health" ensures automatic detection and failover.
Reference: Amazon Workspaces Cross-Region Resilience
Route 53 Failover Routing
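The failover routing policy from option A might look roughly like the following change batch. The domain names and health check ID are placeholders for illustration:

```python
# Hypothetical Route 53 failover records for a WorkSpaces connection alias.
# desktop.example.com and the health check ID are placeholders.
def failover_record(name, target, role, health_check_id=None):
    record = {
        "Name": name,
        "Type": "CNAME",
        "SetIdentifier": f"workspaces-{role.lower()}",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

change_batch = {
    "Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": failover_record(
             "desktop.example.com.", "primary-region-target.example.com.",
             "PRIMARY", health_check_id="hc-primary")},
        {"Action": "UPSERT",
         "ResourceRecordSet": failover_record(
             "desktop.example.com.", "failover-region-target.example.com.",
             "SECONDARY")},
    ]
}
print(len(change_batch["Changes"]))  # 2
```

This change batch would be submitted with Route 53's `change_resource_record_sets` against the hosted zone; Route 53 serves the secondary record only when the primary's health check fails.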
A company runs an ecommerce web application on AWS. The web application is hosted as a static website on Amazon S3 with Amazon CloudFront for content delivery. An Amazon API Gateway API invokes AWS Lambda functions to handle user requests and order processing for the web application. The Lambda functions store data in an Amazon RDS for MySQL DB cluster that uses On-Demand Instances. The DB cluster usage has been consistent in the past 12 months. Recently, the website has experienced SQL injection and web exploit attempts. Customers also report that order processing time has increased during periods of peak usage. During these periods, the Lambda functions often have cold starts. As the company grows, the company needs to ensure scalability and low-latency access during traffic peaks. The company also must optimize the database costs and add protection against the SQL injection and web exploit attempts.
Which solution will meet these requirements?
- A . Configure the Lambda functions to have an increased timeout value during peak periods. Use RDS Reserved Instances for the database. Use CloudFront and subscribe to AWS Shield Advanced to protect against the SQL injection and web exploit attempts.
- B . Increase the memory of the Lambda functions. Transition to Amazon Redshift for the database. Integrate Amazon Inspector with CloudFront to protect against the SQL injection and web exploit attempts.
- C . Use Lambda functions with provisioned concurrency for compute during peak periods. Transition to Amazon Aurora Serverless for the database. Use CloudFront and subscribe to AWS Shield Advanced to protect against the SQL injection and web exploit attempts.
- D . Use Lambda functions with provisioned concurrency for compute during peak periods. Use RDS Reserved Instances for the database. Integrate AWS WAF with CloudFront to protect against the SQL injection and web exploit attempts.
D
Explanation:
Option D best addresses all aspects of the problem:
Provisioned Concurrency for Lambda ensures that the Lambda functions have pre-initialized execution environments ready, eliminating cold start delays and ensuring low-latency performance during traffic spikes.
RDS Reserved Instances provide significant cost savings for workloads with consistent and predictable usage, which aligns with the company’s usage patterns.
AWS WAF integrated with CloudFront directly addresses the security concern by providing application layer protection against SQL injection and web exploit attempts.
This solution ensures scalability, performance consistency, cost savings, and security protection without introducing unnecessary complexity.
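As a sketch, the WAF part of option D could attach the AWS managed SQL injection rule group to a CloudFront-scoped web ACL. The web ACL creation itself is only indicated in a comment:

```python
# Sketch of a WAF v2 rule that references the AWS managed SQL injection
# rule group. The surrounding web ACL and its CloudFront association are
# not created here.
sqli_rule = {
    "Name": "AWS-AWSManagedRulesSQLiRuleSet",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "SQLiRuleSet",
    },
}

# wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CloudFront scope
# wafv2.create_web_acl(Scope="CLOUDFRONT", Rules=[sqli_rule], ...)
print(sqli_rule["Statement"]["ManagedRuleGroupStatement"]["Name"])
```

For broader coverage of web exploit attempts, the AWS managed common rule set can be added alongside the SQL injection rule group in the same web ACL.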
A solutions architect needs to migrate an on-premises legacy application to AWS. The application runs on two servers behind a load balancer. The application requires a license file that is associated with the MAC address of the server’s network adapter. It takes the software vendor 12 hours to send new license files. The application also uses configuration files with a static IP address to access a database; host names are not supported.
Given these requirements. which combination of steps should be taken to implement highly available architecture for the application servers in AWS? (Select TWO.)
- A . Create a pool of ENIs. Request license files from the vendor for the pool, and store the license files in Amazon S3. Create a bootstrap automation script to download a license file and attach the corresponding ENI to an Amazon EC2 instance.
- B . Create a pool of ENIs. Request license files from the vendor for the pool, store the license files on an Amazon EC2 instance. Create an AMI from the instance and use this AMI for all future EC2 instances.
- C . Create a bootstrap automation script to request a new license file from the vendor. When the response is received, apply the license file to an Amazon EC2 instance.
- D . Edit the bootstrap automation script to read the database server IP address from the AWS Systems Manager Parameter Store, and inject the value into the local configuration files.
- E . Edit an Amazon EC2 instance to include the database server IP address in the configuration files and re-create the AMI to use for all future EC2 instances.
A, D
Explanation:
This solution will meet the requirements of implementing a highly available architecture for the application servers in AWS. Creating a pool of ENIs will allow the application servers to have consistent MAC addresses, which are needed for the license files. Requesting license files from the vendor for the pool and storing them in Amazon S3 will ensure that the license files are available and secure. Creating a bootstrap automation script to download a license file and attach the corresponding ENI to an EC2 instance will automate the process of launching new application servers with valid licenses. Editing the bootstrap automation script to read the database server IP address from the AWS Systems Manager Parameter Store and inject the value into the local configuration files will enable the application servers to access the database without hard-coding the IP address in the configuration files. This will also allow changing the database server IP address without modifying the configuration files on each application server.
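The Parameter Store step from option D can be sketched as follows. The parameter name, config format, and IP value are illustrative only, and the actual SSM lookup is shown as a comment:

```python
# Sketch of the bootstrap step that injects the database IP into the
# legacy config file. On a real instance the value would come from SSM
# Parameter Store; here it is stubbed for illustration.
CONFIG_TEMPLATE = "db_host = {db_ip}\nlicense_file = /opt/app/license.lic\n"

def render_config(db_ip: str) -> str:
    """Fill the static-IP placeholder in the legacy configuration file."""
    return CONFIG_TEMPLATE.format(db_ip=db_ip)

# ssm = boto3.client("ssm")
# db_ip = ssm.get_parameter(Name="/legacy-app/db-ip")["Parameter"]["Value"]
db_ip = "10.0.12.34"  # placeholder value

print(render_config(db_ip).splitlines()[0])  # db_host = 10.0.12.34
```

Keeping the IP in Parameter Store means a database move requires updating one parameter rather than rebuilding AMIs, which is why option D is preferred over option E.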
A company runs a web-crawling process on a list of target URLs to obtain training documents for machine learning (ML) training algorithms. A fleet of Amazon EC2 t2.micro instances pulls the target URLs from an Amazon SQS queue. The instances write the result of the crawling algorithm as a CSV file to an Amazon EFS volume. The EFS volume is mounted on all instances in the fleet.
A second system occasionally adds the URLs to the SQS queue. The EC2 instances crawl each new URL in 10 seconds or less.
Metrics indicate that some instances are idle when no URLs are in the SQS queue. A solutions architect needs to redesign the architecture to optimize costs.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
- A . Use m5.8xlarge instances instead of t2.micro instances for the web-crawling process. Reduce the number of instances in the fleet by 50%.
- B . Use an AWS Lambda function to run the web-crawling process. Configure the Lambda function to pull URLs from the SQS queue.
- C . Modify the web-crawling process to store results in Amazon Neptune.
- D . Modify the web-crawling process to store results in an Amazon Aurora Serverless MySQL instance.
- E . Modify the web-crawling process to store results in Amazon S3.
B, E
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The current architecture uses a fleet of small EC2 instances that poll SQS for URLs and write results to EFS. Because URLs arrive only occasionally and each crawl takes 10 seconds or less, EC2 instances are often idle waiting for messages. This leads to unnecessary compute and storage costs.
This workload is naturally event-driven: each URL in the SQS queue triggers a short-lived crawl operation. AWS Lambda is well suited for such event-driven, short-duration tasks. Lambda integrates natively with Amazon SQS as an event source, which allows Lambda to poll the SQS queue automatically and invoke functions when messages arrive, scaling the concurrency up and down as needed. When there are no messages, there are no Lambda invocations and no compute charges, eliminating the cost of idle EC2 instances.
For storage of crawl results, Amazon S3 is a highly durable, cost-effective object store. The current use of an EFS file system mounted on all instances is no longer necessary if each Lambda invocation can write the output directly to S3 as objects (for example, CSV files). S3 charges only for the storage used and requests, and does not require continuously running file system infrastructure.
Option B replaces the idle fleet of EC2 instances with a Lambda function that reads from SQS. This aligns compute usage with actual work and takes advantage of serverless scaling and pricing.
Option E modifies the web-crawling process to store results in Amazon S3 instead of EFS, removing the need to maintain an EFS file system and its associated costs.
Option A increases instance size to m5.8xlarge, which greatly increases capacity and cost. Reducing the number of instances by 50% does not offset the large increase in instance size; it does not address idle capacity and likely increases overall cost.
Option C uses Amazon Neptune, which is a managed graph database service. It is not needed for storing simple CSV output; it is more complex and more expensive than S3 for this use case.
Option D uses Aurora Serverless MySQL. While Aurora Serverless can automatically scale, using a relational database to store simple crawl outputs adds unnecessary cost and operational complexity compared to S3 objects.
Therefore, moving the crawling logic to Lambda triggered by SQS (option B) and writing results directly to Amazon S3 (option E) meets the requirements in the most cost-effective way.
Reference: AWS documentation on using Amazon SQS as an event source for AWS Lambda for event-driven processing. AWS documentation on Amazon S3 for durable, low-cost object storage for analytical and application data.
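A minimal sketch of the Lambda-plus-S3 design from options B and E. The crawler, bucket name, and S3 upload are stubbed placeholders:

```python
import csv
import io

def handler(event, context=None):
    """Sketch of a Lambda handler for SQS-triggered crawling.

    The crawl itself and the S3 upload are stubbed; crawl() and the
    bucket name are placeholders for illustration.
    """
    rows = []
    for record in event["Records"]:       # SQS event shape
        url = record["body"]
        rows.append([url, crawl(url)])    # one result row per URL

    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    # boto3.client("s3").put_object(Bucket="crawl-results-example",
    #                               Key="results.csv", Body=buf.getvalue())
    return buf.getvalue()

def crawl(url):
    """Placeholder for the real crawling algorithm."""
    return f"crawled:{url}"

print(handler({"Records": [{"body": "https://example.com"}]}).strip())
```

With SQS configured as the Lambda event source, this handler is invoked only when URLs arrive, so there is no idle compute cost between batches.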
A company is planning to migrate 1,000 on-premises servers to AWS. The servers run on several VMware clusters in the company’s data center. As part of the migration plan, the company wants to gather server metrics such as CPU details, RAM usage, operating system information, and running processes. The company then wants to query and analyze the data.
Which solution will meet these requirements?
- A . Deploy and configure the AWS Agentless Discovery Connector virtual appliance on the on-premises hosts. Configure Data Exploration in AWS Migration Hub. Use AWS Glue to perform an ETL job against the data. Query the data by using Amazon S3 Select.
- B . Export only the VM performance information from the on-premises hosts. Directly import the required data into AWS Migration Hub. Update any missing information in Migration Hub. Query the data by using Amazon QuickSight.
- C . Create a script to automatically gather the server information from the on-premises hosts. Use the AWS CLI to run the put-resource-attributes command to store the detailed server data in AWS Migration Hub. Query the data directly in the Migration Hub console.
- D . Deploy the AWS Application Discovery Agent to each on-premises server. Configure Data Exploration in AWS Migration Hub. Use Amazon Athena to run predefined queries against the data in Amazon S3.
D
Explanation:
It covers all the requirements mentioned in the question: the AWS Application Discovery Agent collects detailed server metrics, including process information, and Data Exploration in AWS Migration Hub makes the data available in Amazon S3, where Amazon Athena provides a way to query and analyze it.
