Practice Free SAA-C03 Exam Online Questions
A company is migrating some workloads to AWS. However, many workloads will remain on premises. The on-premises workloads require secure and reliable connectivity to AWS with consistent, low-latency performance.
The company has deployed the AWS workloads across multiple AWS accounts and multiple VPCs.
The company plans to scale to hundreds of VPCs within the next year.
The company must establish connectivity between each of the VPCs and from the on-premises environment to each VPC.
Which solution will meet these requirements?
- A . Use an AWS Direct Connect connection to connect the on-premises environment to AWS. Configure VPC peering to establish connectivity between VPCs.
- B . Use multiple AWS Site-to-Site VPN connections to connect the on-premises environment to AWS. Create a transit gateway to establish connectivity between VPCs.
- C . Use an AWS Direct Connect connection with a Direct Connect gateway to connect the on-premises environment to AWS. Create a transit gateway to establish connectivity between VPCs. Associate the transit gateway with the Direct Connect gateway.
- D . Use an AWS Site-to-Site VPN connection to connect the on-premises environment to AWS. Configure VPC peering to establish connectivity between VPCs.
C
Explanation:
AWS Direct Connect provides the consistent, low-latency connectivity that the on-premises workloads require, and a Direct Connect gateway lets a single connection reach VPCs across multiple accounts. A transit gateway scales to thousands of VPC attachments, so it supports the planned growth to hundreds of VPCs, whereas VPC peering requires a full mesh that becomes unmanageable at that scale. Associating the transit gateway with the Direct Connect gateway gives the on-premises environment connectivity to every attached VPC.
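As a rough illustration of answer C, here is a minimal boto3 sketch that creates the two gateways and associates them; the gateway names, ASN, and allowed prefix are assumptions, not values from the question:

```python
import boto3

ec2 = boto3.client("ec2")
dx = boto3.client("directconnect")

# Create a transit gateway to act as the regional hub for all VPC
# attachments across the company's accounts.
tgw = ec2.create_transit_gateway(
    Description="Hub for hundreds of VPCs",
    Options={"AutoAcceptSharedAttachments": "enable"},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Create a Direct Connect gateway to terminate the on-premises
# connection. The name and ASN are illustrative.
dx_gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="on-prem-dx-gateway",
    amazonSideAsn=64512,
)
dx_gw_id = dx_gw["directConnectGateway"]["directConnectGatewayId"]

# Associate the transit gateway with the Direct Connect gateway so
# on-premises traffic can reach every attached VPC. The allowed
# prefix is a placeholder for the VPC address space.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dx_gw_id,
    gatewayId=tgw_id,
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/8"}],
)
```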
A company sells datasets to customers who do research in artificial intelligence and machine learning (AI/ML). The datasets are large, formatted files that are stored in an Amazon S3 bucket in the us-east-1 Region. The company hosts a web application that the customers use to purchase access to a given dataset. The web application is deployed on multiple Amazon EC2 instances behind an Application Load Balancer. After a purchase is made, customers receive an S3 signed URL that allows access to the files.
The customers are distributed across North America and Europe. The company wants to reduce the cost that is associated with data transfers and wants to maintain or improve performance.
What should a solutions architect do to meet these requirements?
- A . Configure S3 Transfer Acceleration on the existing S3 bucket. Direct customer requests to the S3 Transfer Acceleration endpoint. Continue to use S3 signed URLs for access control.
- B . Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to CloudFront signed URLs for access control.
- C . Set up a second S3 bucket in the eu-central-1 Region with S3 Cross-Region Replication between the buckets. Direct customer requests to the closest Region. Continue to use S3 signed URLs for access control.
- D . Modify the web application to enable streaming of the datasets to end users. Configure the web application to read the data from the existing S3 bucket. Implement access control directly in the application.
B
Explanation:
CloudFront serves the datasets from edge locations close to the customers in North America and Europe, which maintains or improves performance, and data transfer from Amazon S3 to CloudFront is free, which reduces data transfer cost. CloudFront signed URLs preserve the per-purchase access control.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
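As a sketch of how option B's access control could work, the following generates a CloudFront signed URL with botocore's CloudFrontSigner; the key-pair ID, private key file, distribution domain, and object key are placeholder assumptions:

```python
import datetime

import rsa
from botocore.signers import CloudFrontSigner

# Load the private key that matches a public key in the distribution's
# key group. The file path is a placeholder.
with open("cloudfront_private_key.pem", "rb") as f:
    private_key = rsa.PrivateKey.load_pkcs1(f.read())

def rsa_signer(message: bytes) -> bytes:
    # CloudFront signed URLs are signed with SHA-1 RSA signatures.
    return rsa.sign(message, private_key, "SHA-1")

# The key-pair ID and distribution domain are placeholders.
signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)

# Grant access to one purchased dataset file for 24 hours.
expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=24)
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/datasets/sample.parquet",
    date_less_than=expires,
)
print(signed_url)
```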
A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.
The company wants to optimize the cost of archiving the data.
Which solution will meet these requirements?
- A . Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.
- B . Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.
- C . Create an AWS Glue DataBrew job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.
- D . Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
B
Explanation:
Aurora MySQL natively supports the SELECT INTO OUTFILE S3 statement, which exports query results directly to an S3 bucket without any intermediate infrastructure. S3 Lifecycle rules can then transition the exported objects to lower-cost archival storage classes, which is the most cost-effective way to archive the data.
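A minimal boto3 sketch of the lifecycle configuration in option B; the bucket name, prefix, transition days, and retention period are illustrative assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Move exported order data to cheaper storage classes over time and
# delete it when the 5-year retention period ends. The bucket name
# and prefix are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-order-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-order-exports",
                "Status": "Enabled",
                "Filter": {"Prefix": "exports/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
                "Expiration": {"Days": 1825},  # delete after 5 years
            }
        ]
    },
)
```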
An e-commerce company has an application that uses Amazon DynamoDB tables configured with provisioned capacity. Order data is stored in a table named Orders. The Orders table has a primary key of order-ID and a sort key of product-ID. The company configured an AWS Lambda function to receive DynamoDB streams from the Orders table and update a table named Inventory. The company has noticed that during peak sales periods, updates to the Inventory table take longer than the company can tolerate.
Which solutions will resolve the slow table updates? (Select TWO.)
- A . Add a global secondary index to the Orders table. Include the product-ID attribute.
- B . Set the batch size attribute of the DynamoDB streams to be based on the size of items in the Orders table.
- C . Increase the DynamoDB table provisioned capacity by 1,000 write capacity units (WCUs).
- D . Increase the DynamoDB table provisioned capacity by 1,000 read capacity units (RCUs).
- E . Increase the timeout of the Lambda function to 15 minutes.
B, C
Explanation:
Key Problem:
Delayed Inventory table updates during peak sales.
DynamoDB Streams and Lambda processing require optimization.
Analysis of Options:
Option A: Adding a GSI is unrelated to the issue. It does not address stream processing delays or capacity issues.
Option B: Optimizing batch size reduces latency and allows the Lambda function to process larger chunks of data at once, improving performance during peak load.
Option C: Increasing write capacity for the Inventory table ensures that it can handle the increased volume of updates during peak times.
Option D: Increasing read capacity for the Orders table does not directly resolve the issue since the problem is with updates to the Inventory table.
Option E: Increasing Lambda timeout only addresses longer processing times but does not solve the underlying throughput problem.
Reference:
DynamoDB Streams Best Practices
Provisioned Throughput in DynamoDB
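A minimal boto3 sketch of the two chosen fixes; the event source mapping UUID, batch size, and capacity values are illustrative assumptions:

```python
import boto3

lambda_client = boto3.client("lambda")
dynamodb = boto3.client("dynamodb")

# Option B: raise the stream batch size so each Lambda invocation
# processes more Orders records per call. The mapping UUID and batch
# size are placeholders.
lambda_client.update_event_source_mapping(
    UUID="14e0db71-1111-2222-3333-444455556666",
    BatchSize=500,
)

# Option C: add write capacity so the Inventory table can absorb the
# peak update volume. The capacity values are illustrative.
dynamodb.update_table(
    TableName="Inventory",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,
        "WriteCapacityUnits": 1100,
    },
)
```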
A solutions architect needs to design a system to store client case files. The files are core company assets and are important. The number of files will grow over time.
The files must be simultaneously accessible from multiple application servers that run on Amazon EC2 instances. The solution must have built-in redundancy.
Which solution meets these requirements?
- A . Amazon Elastic File System (Amazon EFS)
- B . Amazon Elastic Block Store (Amazon EBS)
- C . Amazon S3 Glacier Deep Archive
- D . AWS Backup
A
Explanation:
Amazon EFS provides a simple, scalable, fully managed file system that can be accessed simultaneously from multiple EC2 instances and has built-in redundancy. It is designed to be highly available, durable, and secure; it scales to petabytes of data and handles thousands of concurrent connections, making it a cost-effective choice for a growing set of files that multiple application servers must share.
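As an illustration, a minimal boto3 sketch that provisions such a file system with a mount target in each Availability Zone; the subnet and security group IDs are assumptions:

```python
import boto3

efs = boto3.client("efs")

# Create a regional, redundant file system for the case files.
fs = efs.create_file_system(
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "client-case-files"}],
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone lets the application servers
# in every AZ mount the same file system. Subnet and security group
# IDs are placeholders.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```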
A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must have strong consistency in returning the new content as soon as the changes occur.
Which solutions meet these requirements? (Select TWO)
- A . Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted to the individual EC2 instances.
- B . Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
- C . Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual EC2 instances.
- D . Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group.
- E . Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.
B, E
Explanation:
These options are the most suitable ways to design a shared storage solution for a web application that is deployed across multiple Availability Zones and requires strong consistency.
Option B uses Amazon Elastic File System (Amazon EFS) as a shared file system that can be mounted on multiple EC2 instances in different Availability Zones. Amazon EFS provides high availability, durability, scalability, and performance for file-based workloads. It also supports strong consistency, which means that any changes made to the file system are immediately visible to all clients.
Option E uses Amazon S3 as a shared object store that can store the web content and serve it through Amazon CloudFront, a content delivery network (CDN). Amazon S3 provides high availability, durability, scalability, and performance for object-based workloads. It also supports strong consistency for read-after-write and list operations, which means that any changes made to the objects are immediately visible to all clients. By setting the metadata for the Cache-Control header to no-cache, the web content can be prevented from being cached by the browsers or the CDN edge locations, ensuring that the latest content is always delivered to the users.
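To illustrate option E, a minimal boto3 sketch that uploads an object with the Cache-Control header set to no-cache; the bucket name and key are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Upload content with Cache-Control: no-cache so browsers and
# CloudFront edge locations revalidate the object on every request
# and new content is returned as soon as it changes. The bucket and
# key are placeholders.
with open("index.html", "rb") as body:
    s3.put_object(
        Bucket="example-web-content",
        Key="index.html",
        Body=body,
        ContentType="text/html",
        CacheControl="no-cache",
    )
```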
Option A is not suitable because using AWS Storage Gateway Volume Gateway as a shared storage solution for a web application is not efficient or scalable. AWS Storage Gateway Volume Gateway is a hybrid cloud storage service that provides block storage volumes that can be mounted on-premises or on EC2 instances as iSCSI devices. It is useful for migrating or backing up data to AWS, but it is not designed for serving web content or providing strong consistency. Moreover, using Volume Gateway would incur additional costs and complexity, and it would not leverage the native AWS storage services.
Option C is not suitable because a shared Amazon EBS volume cannot span Availability Zones. Amazon EBS is a block storage service that provides persistent, high-performance volumes for EC2 instances, but each volume is confined to a single Availability Zone and, apart from Multi-Attach on io1/io2 volumes, can be attached to only one instance at a time. Therefore, a shared EBS volume cannot serve a web application that is deployed across multiple Availability Zones. Even with Multi-Attach, EBS provides no file-system-level coordination, so concurrent writers would not see a consistent view of the content.
Option D is not suitable because using AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group is not efficient or scalable. AWS DataSync is a data transfer service that helps you move large amounts of data to and from AWS storage services. It is useful for migrating or archiving data, but it is not designed for serving web content or providing strong consistency. Moreover, using DataSync would incur additional costs and complexity, and it would not leverage the native AWS storage services.
Reference:
What Is Amazon Elastic File System?
What Is Amazon Simple Storage Service?
What Is Amazon CloudFront?
What Is AWS Storage Gateway?
What Is Amazon Elastic Block Store?
What Is AWS DataSync?
A company is concerned about the security of its public web application due to recent web attacks. The application uses an Application Load Balancer (ALB). A solutions architect must reduce the risk of DDoS attacks against the application.
What should the solutions architect do to meet this requirement?
- A . Add an Amazon Inspector agent to the ALB.
- B . Configure Amazon Macie to prevent attacks.
- C . Enable AWS Shield Advanced to prevent attacks.
- D . Configure Amazon GuardDuty to monitor the ALB.
C
Explanation:
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
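A minimal boto3 sketch of enabling the protection; the load balancer ARN is a placeholder assumption:

```python
import boto3

shield = boto3.client("shield")

# Shield Advanced needs an account-level subscription before any
# resource can be protected; this call raises
# ResourceAlreadyExistsException if the account is already subscribed.
shield.create_subscription()

# Protect the application's ALB. The ARN is a placeholder.
shield.create_protection(
    Name="public-web-alb",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/public-web/50dc6c495c0c9188"
    ),
)
```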
A company wants to run its experimental workloads in the AWS Cloud. The company has a budget for cloud spending. The company’s CFO is concerned about cloud spending accountability for each department. The CFO wants to receive notification when the spending threshold reaches 60% of the budget.
Which solution will meet these requirements?
- A . Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
- B . Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create alert threshold notifications when spending exceeds 60% of the budget.
- C . Use cost allocation tags on AWS resources to label owners. Use AWS Support API on AWS Trusted Advisor to create alert threshold notifications when spending exceeds 60% of the budget.
- D . Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
A
Explanation:
This solution meets the requirements because it allows the company to track and manage its cloud spending by using cost allocation tags to assign costs to different departments, creating usage budgets to set spending limits, and adding alert thresholds to receive notifications when the spending reaches a certain percentage of the budget. This way, the company can monitor its experimental workloads and avoid overspending on the cloud.
Reference: Using Cost Allocation Tags
Creating an AWS Budget
Creating an Alert for an AWS Budget
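A minimal boto3 sketch of such a budget with a 60% alert threshold; the account ID, budget amount, and subscriber address are placeholder assumptions:

```python
import boto3

budgets = boto3.client("budgets")

# Create a monthly cost budget with a notification at 60% of the
# budgeted amount. Account ID, amount, and email are placeholders.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "experimental-workloads",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 60.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "cfo@example.com"}
            ],
        }
    ],
)
```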
A company is designing an application on AWS that processes sensitive data. The application stores and processes financial data for multiple customers.
To meet compliance requirements, the data for each customer must be encrypted separately at rest by using a secure, centralized key management solution. The company wants to use AWS Key Management Service (AWS KMS) to implement encryption.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Generate a unique encryption key for each customer. Store the keys in an Amazon S3 bucket. Enable server-side encryption.
- B . Deploy a hardware security appliance in the AWS environment that securely stores customer-provided encryption keys. Integrate the security appliance with AWS KMS to encrypt the sensitive data in the application.
- C . Create a single AWS KMS key to encrypt all sensitive data across the application.
- D . Create separate AWS KMS keys for each customer’s data that have granular access control and logging enabled.
D
Explanation:
This solution meets the requirement of encrypting each customer’s data separately with the least operational overhead by leveraging AWS Key Management Service (KMS).
Separate AWS KMS Keys: By creating separate KMS keys for each customer, you can ensure that each customer’s data is encrypted with a unique key. This approach satisfies the compliance requirement for separate encryption and provides fine-grained control over access to the keys.
Granular Access Control: AWS KMS allows you to define key policies and use IAM policies to grant specific permissions to the keys. This ensures that only authorized users or services can access the keys, thereby maintaining the principle of least privilege.
Logging and Monitoring: AWS KMS integrates with AWS CloudTrail, which logs all key usage and management activities. This provides an audit trail that is essential for meeting compliance requirements.
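As an illustration of option D, a minimal boto3 sketch that creates a per-customer key and encrypts data under it with an encryption context; the customer ID, tag scheme, and sample plaintext are assumptions:

```python
import boto3

kms = boto3.client("kms")

def create_customer_key(customer_id: str) -> str:
    """Create a dedicated KMS key for one customer's data."""
    response = kms.create_key(
        Description=f"Data encryption key for customer {customer_id}",
        Tags=[{"TagKey": "customer", "TagValue": customer_id}],
    )
    return response["KeyMetadata"]["KeyId"]

# Encrypt a record under the customer's own key. The encryption
# context is logged by CloudTrail and must match on decrypt.
key_id = create_customer_key("cust-0001")
ciphertext = kms.encrypt(
    KeyId=key_id,
    Plaintext=b"account=12345;balance=1000.00",
    EncryptionContext={"customer": "cust-0001"},
)["CiphertextBlob"]
```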
Why Not Other Options?
Option A (Store keys in S3): Storing keys in S3 is not recommended because it does not provide the same level of security, access control, or integration with AWS services as KMS does.
Option B (Hardware security appliance): Deploying a hardware security appliance adds significant operational overhead and complexity, which is unnecessary given that KMS already provides a secure and centralized key management solution.
Option C (Single KMS key for all data): Using a single KMS key does not meet the requirement of encrypting each customer’s data separately.
Reference:
AWS Key Management Service (KMS) - Overview of KMS, its features, and best practices for key management.
Using AWS KMS for Multi-Tenant Applications - Guidance on how to design applications using KMS for multi-tenancy.