Practice Free SAP-C02 Exam Online Questions
A company is using an organization in AWS Organizations to manage hundreds of AWS accounts. A solutions architect is working on a solution to provide baseline protection for the Open Web Application Security Project (OWASP) top 10 web application vulnerabilities. The solutions architect is using AWS WAF for all existing and new Amazon CloudFront distributions that are deployed within the organization.
Which combination of steps should the solutions architect take to provide the baseline protection? (Select THREE.)
- A . Enable AWS Config in all accounts.
- B . Enable Amazon GuardDuty in all accounts.
- C . Enable all features for the organization.
- D . Use AWS Firewall Manager to deploy AWS WAF rules in all accounts for all CloudFront distributions.
- E . Use AWS Shield Advanced to deploy AWS WAF rules in all accounts for all CloudFront distributions.
- F . Use AWS Security Hub to deploy AWS WAF rules in all accounts for all CloudFront distributions.
A,C,D
Explanation:
AWS Firewall Manager has two prerequisites before it can manage rules across an organization: the organization in AWS Organizations must have all features enabled, and AWS Config must be enabled in every account so that Firewall Manager can track resource configurations. With those prerequisites in place, Firewall Manager can centrally deploy AWS WAF rules, such as the AWS managed rule groups that protect against common web exploits like SQL injection and cross-site scripting, to all existing and future CloudFront distributions in every account, providing baseline protection against the OWASP top 10 web application vulnerabilities. AWS Shield Advanced provides DDoS protection and AWS Security Hub aggregates security findings; neither service deploys WAF rules across accounts.
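As a minimal sketch of what such a centrally managed policy could look like, the function below builds a Firewall Manager policy document that attaches the AWS managed Core rule set (a common OWASP top 10 baseline) to all CloudFront distributions. The field names follow the `fms:PutPolicy` request format as an assumption; the actual API call (via boto3 or the CLI, made from the Firewall Manager administrator account in us-east-1) is omitted.

```python
import json

def build_fms_waf_policy(policy_name: str) -> dict:
    """Build a Firewall Manager policy dict that applies the AWS managed
    Core rule set to every CloudFront distribution in scope. The shape
    sketches the fms:PutPolicy request body."""
    managed_service_data = {
        "type": "WAFV2",
        "preProcessRuleGroups": [
            {
                "ruleGroupType": "ManagedRuleGroup",
                "managedRuleGroupIdentifier": {
                    "vendorName": "AWS",
                    "managedRuleGroupName": "AWSManagedRulesCommonRuleSet",
                },
                "overrideAction": {"type": "NONE"},
                "excludeRules": [],
            }
        ],
        "postProcessRuleGroups": [],
        "defaultAction": {"type": "ALLOW"},
    }
    return {
        "PolicyName": policy_name,
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            # Firewall Manager expects this field as a JSON-encoded string.
            "ManagedServiceData": json.dumps(managed_service_data),
        },
        # CloudFront policies are managed from the us-east-1 region.
        "ResourceType": "AWS::CloudFront::Distribution",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,  # auto-apply to existing and new distributions
    }
```

With `RemediationEnabled` set to true, Firewall Manager automatically brings new distributions into compliance, which is what makes this approach suitable for "all existing and new" CloudFront distributions.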
A company runs a Python script on an Amazon EC2 instance to process data. The script runs every 10 minutes. The script ingests files from an Amazon S3 bucket and processes the files. On average, the script takes approximately 5 minutes to process each file. The script will not reprocess a file that the script has already processed.
The company reviewed Amazon CloudWatch metrics and noticed that the EC2 instance is idle for approximately 40% of the time because of the file processing speed. The company wants to make the workload highly available and scalable. The company also wants to reduce long-term management overhead.
Which solution will meet these requirements MOST cost-effectively?
- A . Migrate the data processing script to an AWS Lambda function. Use an S3 event notification to invoke the Lambda function to process the objects when the company uploads the objects.
- B . Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure Amazon S3 to send event notifications to the SQS queue. Create an EC2 Auto Scaling group with a minimum size of one instance. Update the data processing script to poll the SQS queue. Process the S3 objects that the SQS message identifies.
- C . Migrate the data processing script to a container image. Run the data processing container on an EC2 instance. Configure the container to poll the S3 bucket for new objects and to process the resulting objects.
- D . Migrate the data processing script to a container image that runs on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Create an AWS Lambda function that calls the Fargate RunTask API operation when the container processes the file. Use an S3 event notification to invoke the Lambda function.
A
Explanation:
Migrating the data processing script to an AWS Lambda function and invoking it with an S3 event notification removes the idle EC2 instance entirely: each uploaded object triggers its own invocation, so the workload scales automatically and runs highly available across Availability Zones with no servers to manage. Because the average processing time of about 5 minutes fits well within the 15-minute Lambda timeout, and the company pays only for execution time rather than for an instance that is idle 40% of the time, this is the most cost-effective option with the least long-term management overhead.
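A minimal sketch of such a handler, assuming the existing processing logic is wrapped in a `process_file` function (a placeholder here); the event shape below follows the standard S3 `ObjectCreated` notification format that Lambda receives.

```python
import urllib.parse

def process_file(bucket: str, key: str) -> None:
    """Placeholder for the existing Python processing logic; in the real
    function this would fetch the object with boto3 and process it."""
    print(f"processing s3://{bucket}/{key}")

def lambda_handler(event: dict, context=None) -> list:
    """Invoked by an S3 ObjectCreated event notification.
    Each record identifies one newly uploaded object."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        process_file(bucket, key)
        processed.append((bucket, key))
    return processed
```

Because S3 invokes the function once per uploaded object, there is no polling loop and no idle capacity; concurrency scales with the upload rate.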
A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only authenticated users are allowed to post content. The application generates a presigned URL that is used to upload objects through a browser interface. Most users are reporting slow upload times for objects larger than 100 MB.
What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post content?
- A . Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
- B . Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
- C . Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.
- D . Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution
C
Explanation:
S3 Transfer Acceleration is a feature that enables fast, easy, and secure transfers of files over long distances between a client and an S3 bucket. It works by routing requests to S3 through the CloudFront edge network over an optimized network path. By using the Transfer Acceleration endpoint when generating the presigned URL, the application continues to restrict uploads to authenticated users while letting them upload objects faster and more reliably. Additionally, the S3 multipart upload API improves the performance of large object uploads by breaking them into smaller parts and uploading the parts in parallel.
References:
S3 Transfer Acceleration
Using Transfer Acceleration with presigned URLs
Uploading objects using the multipart upload API
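To illustrate how a multipart upload divides a large object, the sketch below computes a part plan for a given object size. The 10 MB default part size is an arbitrary choice for this example; S3 requires every part except the last to be at least 5 MB and allows at most 10,000 parts per upload. Each planned part could then be uploaded in parallel, for example via `UploadPart` or a separate presigned URL per part.

```python
import math

MIN_PART_SIZE = 5 * 1024 * 1024   # S3 minimum for every part except the last
MAX_PARTS = 10_000                # S3 limit on parts per multipart upload

def plan_parts(object_size: int, part_size: int = 10 * 1024 * 1024) -> list:
    """Return (part_number, offset, length) tuples describing how an
    object of object_size bytes would be split for a multipart upload."""
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size below the S3 5 MB minimum")
    if math.ceil(object_size / part_size) > MAX_PARTS:
        raise ValueError("object would need more than 10,000 parts")
    parts = []
    offset = 0
    part_number = 1               # S3 part numbers start at 1
    while offset < object_size:
        length = min(part_size, object_size - offset)
        parts.append((part_number, offset, length))
        offset += length
        part_number += 1
    return parts
```

For the 100 MB objects in the question, this plan yields ten 10 MB parts that the browser can upload concurrently through the Transfer Acceleration endpoint, which is where the performance gain over a single PUT comes from.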
A company has an application that uses an on-premises Oracle database. The company is migrating the database to the AWS Cloud. The database contains customer data and stored procedures.
The company needs to migrate the database as quickly as possible with minimum downtime. The solution on AWS must provide high availability and must use managed services for the database.
Which solution will meet these requirements?
- A . Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon RDS for Oracle database. Transfer the database files to an Amazon S3 bucket. Configure the RDS database to use the S3 bucket as database storage. Set up S3 replication for high availability. Redirect the application to the RDS DB instance.
- B . Create a database backup of the on-premises Oracle database. Upload the backup to an Amazon S3 bucket. Shut down the on-premises Oracle database to avoid any new transactions. Restore the backup to a new Oracle cluster that consists of Amazon EC2 instances across two Availability Zones. Redirect the application to the EC2 instances.
- C . Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon DynamoDB table. Use DynamoDB Accelerator (DAX) and implement global tables for high availability. Rewrite the stored procedures in AWS Lambda. Run the stored procedures in DAX. After replication, redirect the application to the DAX cluster endpoint.
- D . Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon Aurora PostgreSQL database. Use AWS SCT to convert the schema and stored procedures. Redirect the application to the Aurora DB cluster.
D
Explanation:
The requirements emphasize minimum downtime, rapid migration, managed database services, high availability, and support for stored procedures. For Oracle-to-AWS migrations with minimal downtime, AWS DMS can replicate ongoing changes from the source database to the target, which reduces cutover downtime. However, a critical constraint is that the database contains stored procedures. If the company moves to a different engine, it must convert schema objects and procedural code.
Option D is the only option that combines an online replication approach (AWS DMS) with a managed, highly available target database (Amazon Aurora PostgreSQL) and explicitly includes schema and stored procedure conversion with AWS SCT. This aligns with a fast migration approach: SCT prepares the converted schema and stored procedures, while DMS performs ongoing replication of data changes until cutover, minimizing downtime. Aurora PostgreSQL is a managed service and supports high availability through multi-AZ architecture and managed failover capabilities within a Region.
Option A is incorrect because Amazon RDS does not use an S3 bucket “as database storage” in the way described. RDS storage is managed by the RDS service, and S3 replication is not the mechanism used to provide database high availability. High availability for RDS is typically achieved through Multi-AZ configurations and/or cluster-based architectures depending on engine. The described storage approach is not a valid managed RDS architecture.
Option B does not meet the managed database requirement because it restores Oracle onto EC2 instances, which is self-managed. It also does not meet minimum downtime because it requires shutting down the on-premises database to avoid new transactions before restoring the backup. This is a longer outage model than an online migration with continuous replication.
Option C is not feasible and would require major redesign. DynamoDB is not a relational database replacement for Oracle stored procedures. DAX is a DynamoDB cache and cannot run stored procedures. The option also incorrectly suggests running stored procedures “in DAX.” This does not meet the “migrate as quickly as possible with minimum downtime” requirement due to extensive refactoring and redesign.
Therefore, migrating to Aurora PostgreSQL using AWS SCT for schema and stored procedure conversion, and AWS DMS for ongoing replication to minimize downtime, is the best fit.
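To sketch the ongoing-replication half of this approach, the function below builds a request for `dms:CreateReplicationTask` with the `full-load-and-cdc` migration type, which performs an initial bulk copy and then streams changes until cutover. The `APP` schema name and the ARN parameters are placeholder assumptions, and the actual boto3 call to the DMS client is omitted.

```python
import json

def build_dms_task_request(source_arn: str, target_arn: str,
                           instance_arn: str) -> dict:
    """Build the request body for dms:CreateReplicationTask. The
    full-load-and-cdc migration type bulk-loads the data and then
    replicates ongoing changes, minimizing cutover downtime."""
    table_mappings = {
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-app-schema",
                # Hypothetical schema name; '%' matches all tables in it.
                "object-locator": {"schema-name": "APP", "table-name": "%"},
                "rule-action": "include",
            }
        ]
    }
    return {
        "ReplicationTaskIdentifier": "oracle-to-aurora-postgresql",
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",  # bulk load + ongoing replication
        # DMS expects the table mappings as a JSON-encoded string.
        "TableMappings": json.dumps(table_mappings),
    }
```

DMS moves the data but not the procedural code, which is why AWS SCT is still needed to convert the Oracle schema and stored procedures to PostgreSQL before the task runs.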
References:
AWS documentation and best practices describing AWS DMS as a tool for minimal-downtime database migration through continuous replication.
AWS documentation describing AWS SCT for converting schema objects and procedural code when migrating between different database engines.
AWS documentation describing Amazon Aurora as a managed, highly available database service with built-in multi-AZ resilience.
