Practice Free SAA-C03 Exam Online Questions
A financial services company plans to launch a new application on AWS to handle sensitive financial transactions. The company will deploy the application on Amazon EC2 instances. The company will use Amazon RDS for MySQL as the database. The company’s security policies mandate that data must be encrypted at rest and in transit.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.
- B . Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure IPsec tunnels for encryption in transit.
- C . Implement third-party application-level data encryption before storing data in Amazon RDS for MySQL. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.
- D . Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure a VPN connection to enable private connectivity to encrypt data in transit.
A
Explanation:
This solution provides encryption at rest and in transit with the least operational overhead while adhering to the company’s security policies.
Encryption at Rest: Amazon RDS for MySQL can be configured to encrypt data at rest by using AWS Key Management Service (AWS KMS) managed keys. This encryption is applied automatically to all data stored on disk, including backups, read replicas, and snapshots, and requires minimal operational overhead because AWS manages the encryption and key management process.
Encryption in Transit: AWS Certificate Manager (ACM) allows you to provision, manage, and deploy SSL/TLS certificates seamlessly. These certificates can be used to encrypt data in transit by configuring the MySQL instance to use SSL/TLS for connections, ensuring that data is encrypted between the application and the database and protected from interception during transmission.
Why Not Other Options?
Option B (IPsec tunnels): While IPsec tunnels encrypt data in transit, they are more complex to manage and require additional configuration and maintenance, leading to higher operational overhead.
Option C (Third-party application-level encryption): Implementing application-level encryption adds complexity, requires code changes, and increases operational overhead.
Option D (VPN for encryption): A VPN solution for encrypting data in transit is unnecessary and adds additional complexity without providing any benefit over SSL/TLS, which is simpler to implement and manage.
Reference
Amazon RDS Encryption: information on how to configure and use encryption for Amazon RDS.
AWS Certificate Manager (ACM): details on using ACM to manage SSL/TLS certificates for securing data in transit.
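As a hedged illustration, the sketch below shows how both requirements might be wired up with boto3 and PyMySQL. The instance identifier, instance class, credentials, endpoint, and CA bundle path are all placeholders; note that the database connection itself validates the downloaded RDS certificate bundle, while an ACM certificate would typically terminate TLS at the application tier.

```python
import boto3
import pymysql  # any MySQL driver with TLS support works

rds = boto3.client("rds", region_name="us-east-1")

# Encryption at rest: StorageEncrypted must be set at creation time;
# it applies to the data, automated backups, snapshots, and read replicas.
rds.create_db_instance(
    DBInstanceIdentifier="transactions-db",          # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m6i.large",                  # placeholder size
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    StorageEncrypted=True,
    KmsKeyId="alias/aws/rds",                        # AWS managed KMS key
)

# Encryption in transit (once the instance is available): connect over
# TLS and validate the server against the RDS CA bundle.
conn = pymysql.connect(
    host="transactions-db.abc123.us-east-1.rds.amazonaws.com",  # placeholder
    user="admin",
    password="REPLACE_ME",
    ssl={"ca": "/opt/certs/global-bundle.pem"},
)
```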
A company’s SAP application has a backend SQL Server database in an on-premises environment. The company wants to migrate its on-premises application and database server to AWS. The company needs an instance type that meets the high demands of its SAP database. On-premises performance data shows that both the SAP application and the database have high memory utilization.
Which solution will meet these requirements?
- A . Use the compute optimized instance family for the application. Use the memory optimized instance family for the database.
- B . Use the storage optimized instance family for both the application and the database.
- C . Use the memory optimized instance family for both the application and the database.
- D . Use the high performance computing (HPC) optimized instance family for the application. Use the memory optimized instance family for the database.
C
Explanation:
Memory Optimized Instances: These instances are designed to deliver fast performance for workloads that process large data sets in memory. They are ideal for high-performance databases like SAP and applications with high memory utilization.
High Memory Utilization: Both the SAP application and the SQL Server database have high memory demands as per the on-premises performance data. Memory optimized instances provide the necessary memory capacity and performance.
Instance Types:
For the SAP application, using a memory optimized instance ensures the application has sufficient memory to handle the high workload efficiently.
For the SQL Server database, memory optimized instances ensure optimal database performance with high memory throughput.
Operational Efficiency: Using the same instance family for both the application and the database simplifies management and ensures both components meet performance requirements.
Reference: Amazon EC2 Instance Types
SAP on AWS
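If it helps to see the sizing decision in code, this placeholder sketch launches a memory optimized (R-family) instance with boto3; the AMI ID and instance size are hypothetical, and real SAP deployments should use SAP-certified instance types sized to the measured memory footprint.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a memory optimized instance for the application tier; the
# database tier would use the same family, typically a larger size.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="r6i.4xlarge",        # memory optimized (R) family
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Role", "Value": "sap-app"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```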
A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket.
Which solution will meet these requirements?
- A . Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
- B . Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
- C . Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
- D . Use the AWS provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
A
Explanation:
An interface VPC endpoint for Amazon S3 keeps the API calls and data transfer on the AWS private network, and a resource policy on the bucket that allows only the EC2 instance’s IAM role restricts uploads to that instance.
Reference: https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
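A minimal sketch of the bucket policy side, assuming placeholder names for the bucket, role ARN, and VPC endpoint ID. Two explicit Deny statements reject any request that is not made by the instance's role or that does not arrive through the endpoint; the role still needs its own identity policy that allows s3:PutObject.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "transaction-uploads"  # placeholder bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny every principal except the EC2 instance's IAM role.
            "Sid": "DenyAllButInstanceRole",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/uploader-role"
            }},
        },
        {
            # Deny any request that does not come through the VPC endpoint.
            "Sid": "DenyOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"StringNotEquals": {
                "aws:sourceVpce": "vpce-0123456789abcdef0"
            }},
        },
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

Be careful with deny-based bucket policies: they apply to administrators too, so test against a scoped-down bucket first.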
A company hosts an application on AWS that gives users the ability to download photos. The company stores all photos in an Amazon S3 bucket that is located in the us-east-1 Region. The company wants to provide the photo download application to global customers with low latency.
Which solution will meet these requirements?
- A . Find the public IP addresses that Amazon S3 uses in us-east-1. Configure an Amazon Route 53 latency-based routing policy that routes to all the public IP addresses.
- B . Configure an Amazon CloudFront distribution in front of the S3 bucket. Use the distribution endpoint to access the photos that are in the S3 bucket.
- C . Configure an Amazon Route 53 geoproximity routing policy to route the traffic to the S3 bucket that is closest to each customer’s location.
- D . Create a new S3 bucket in the us-west-1 Region. Configure an S3 Cross-Region Replication rule to copy the photos to the new S3 bucket.
B
Explanation:
Amazon CloudFront is a content delivery network (CDN) service that distributes content with low latency and high transfer speeds. Placing CloudFront in front of the S3 bucket ensures global users download content from the nearest edge location, reducing latency significantly.
Reference: AWS Documentation - Amazon CloudFront with S3 Origin
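A sketch of the distribution setup with boto3, assuming a placeholder bucket name; production setups would normally also use origin access control so the bucket is reachable only through CloudFront. The cache policy ID below is the AWS managed CachingOptimized policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Global photo downloads",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "photo-bucket-origin",
                "DomainName": "photos-bucket.s3.us-east-1.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "photo-bucket-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    },
)
# Customers download photos through the distribution's edge endpoint.
print(response["Distribution"]["DomainName"])
```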
A media company has a multi-account AWS environment in the us-east-1 Region. The company has an Amazon Simple Notification Service (Amazon SNS) topic in a production account that publishes performance metrics. The company has an AWS Lambda function in an administrator account to process and analyze log data.
The Lambda function that is in the administrator account must be invoked by messages from the SNS topic that is in the production account when significant metrics are reported.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Create an IAM resource policy for the Lambda function that allows Amazon SNS to invoke the function. Implement an Amazon Simple Queue Service (Amazon SQS) queue in the administrator account to buffer messages from the SNS topic that is in the production account. Configure the SQS queue to invoke the Lambda function.
- B . Create an IAM policy for the SNS topic that allows the Lambda function to subscribe to the topic.
- C . Use an Amazon EventBridge rule in the production account to capture the SNS topic notifications. Configure the EventBridge rule to forward notifications to the Lambda function that is in the administrator account.
- D . Store performance metrics in an Amazon S3 bucket in the production account. Use Amazon Athena to analyze the metrics from the administrator account.
A, B
Explanation:
Requirement Analysis: The Lambda function in the administrator account needs to process messages from an SNS topic in the production account.
IAM Policy for SNS Topic: Allows the Lambda function to subscribe and be invoked by the SNS topic.
SQS Queue for Buffering: Using an SQS queue provides reliable message delivery and buffering between SNS and Lambda, ensuring all messages are processed.
Implementation:
Create an SQS queue in the administrator account.
Set an IAM policy to allow the Lambda function to subscribe to and be invoked by the SNS topic.
Configure the SNS topic to send messages to the SQS queue.
Set up the SQS queue to trigger the Lambda function.
Conclusion: This solution ensures reliable message delivery and processing with appropriate permissions.
Reference
Amazon SNS: Amazon SNS Documentation
Amazon SQS: Amazon SQS Documentation
AWS Lambda: AWS Lambda Documentation
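The steps above might look like the following sketch. Every ARN, URL, and function name is a placeholder, and the SNS call must run with production-account credentials while the SQS and Lambda calls run in the administrator account.

```python
import json
import boto3

topic_arn = "arn:aws:sns:us-east-1:111111111111:perf-metrics"    # production account
queue_arn = "arn:aws:sqs:us-east-1:222222222222:metrics-buffer"  # administrator account
queue_url = "https://sqs.us-east-1.amazonaws.com/222222222222/metrics-buffer"

sqs = boto3.client("sqs", region_name="us-east-1")
sns = boto3.client("sns", region_name="us-east-1")
lam = boto3.client("lambda", region_name="us-east-1")

# 1. Queue policy: allow the production-account SNS topic to send messages.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })},
)

# 2. Cross-account subscription: the queue subscribes to the topic.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# 3. The queue buffers messages and invokes the Lambda function.
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-metrics",   # placeholder function name
    BatchSize=10,
)
```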
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database’s performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?
- A . Use Amazon Redshift with a single node for leader and compute functionality.
- B . Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
- C . Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
- D . Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
C
Explanation:
Amazon Aurora delivers up to 5x the throughput of standard MySQL on Amazon RDS and handles read-heavy workloads well: Aurora Auto Scaling adds or removes Aurora Replicas to match unpredictable read demand, and the Multi-AZ deployment maintains high availability.
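For reference, Aurora Auto Scaling is configured through Application Auto Scaling on the cluster's ReadReplicaCount dimension. The sketch below assumes a placeholder cluster name and a 70% reader CPU target.

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

resource_id = "cluster:ecommerce-aurora"   # placeholder Aurora cluster

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,   # Aurora supports up to 15 replicas per cluster
)

# Target tracking: add Aurora Replicas when average reader CPU exceeds 70%.
autoscaling.put_scaling_policy(
    PolicyName="aurora-read-scaling",
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
        },
    },
)
```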
A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?
- A . Amazon CloudFront
- B . AWS Global Accelerator
- C . Amazon Route 53
- D . Amazon S3 Transfer Acceleration
A
Explanation:
You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin. One way you can set up video workflows in the cloud is by using CloudFront together with AWS Media Services. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/on-demand-streaming-video.html
A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms the data as it is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
- B . Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.
- C . Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
- D . Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.
C
Explanation:
This option uses Amazon Kinesis Data Firehose, a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, together with Amazon API Gateway, a fully managed service for creating, deploying, and managing APIs. API Gateway can integrate directly with a Kinesis data stream, so there are no EC2 instances to operate, and Lambda functions handle the in-stream transformation. This combination requires less operational overhead than options A, B, and D.
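The transformation step follows the documented Kinesis Data Firehose record format: the function receives base64-encoded records and must return each recordId with a result and re-encoded data. The fields added below are hypothetical.

```python
import base64
import json

def handler(event, context):
    """Kinesis Data Firehose transformation Lambda (minimal sketch)."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))

        # Hypothetical transformation: normalize a field and stamp the record.
        payload["source"] = payload.get("source", "api").lower()
        payload["processed"] = True

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",   # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(
                (json.dumps(payload) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```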
A company is running a batch application on Amazon EC2 instances. The application consists of a backend with multiple Amazon RDS databases. The application is causing a high number of reads on the databases. A solutions architect must reduce the number of database reads while ensuring high availability.
What should the solutions architect do to meet this requirement?
- A . Add Amazon RDS read replicas
- B . Use Amazon ElastiCache for Redis
- C . Use Amazon Route 53 DNS caching
- D . Use Amazon ElastiCache for Memcached
A
Explanation:
This solution meets the requirement of reducing the number of database reads while ensuring high availability for a batch application that consists of a backend with multiple Amazon RDS databases. Amazon RDS read replicas are copies of the primary database instance that can serve read-only traffic. You can create one or more read replicas for a primary database instance and connect to them using a special endpoint. Read replicas can improve the performance and availability of your application by offloading read queries from the primary database instance.
Option B is incorrect because Amazon ElastiCache for Redis provides a fast, in-memory store that can cache frequently accessed data, but it requires application changes to implement the caching logic and does not automatically stay in sync with Amazon RDS databases.
Option C is incorrect because using Amazon Route 53 DNS caching can improve the performance and availability of DNS queries, but it does not reduce the number of database reads.
Option D is incorrect because using Amazon ElastiCache for Memcached can provide a fast, in-memory data store that can cache frequently accessed data, but it does not support replication from Amazon RDS databases.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
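Creating a replica is a single API call; the identifiers below are placeholders, and the replica's own endpoint is what the batch application's read-only queries should target.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="batch-db-replica-1",
    SourceDBInstanceIdentifier="batch-db",
)

# Once the replica is available, point read-heavy code at its endpoint.
replica = rds.describe_db_instances(
    DBInstanceIdentifier="batch-db-replica-1"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```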
A company currently runs an on-premises stock trading application by using Microsoft Windows Server. The company wants to migrate the application to the AWS Cloud. The company needs to design a highly available solution that provides low-latency access to block storage across multiple Availability Zones.
Which solution will meet these requirements with the LEAST implementation effort?
- A . Configure a Windows Server cluster that spans two Availability Zones on Amazon EC2 instances. Install the application on both cluster nodes. Use Amazon FSx for Windows File Server as shared storage between the two cluster nodes.
- B . Configure a Windows Server cluster that spans two Availability Zones on Amazon EC2 instances. Install the application on both cluster nodes. Use Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp3) volumes as storage attached to the EC2 instances. Set up application-level replication to sync data from one EBS volume in one Availability Zone to another EBS volume in the second Availability Zone.
- C . Deploy the application on Amazon EC2 instances in two Availability Zones. Configure one EC2 instance as active and the second EC2 instance in standby mode. Use an Amazon FSx for NetApp ONTAP Multi-AZ file system to access the data by using Internet Small Computer Systems Interface (iSCSI) protocol.
- D . Deploy the application on Amazon EC2 instances in two Availability Zones. Configure one EC2 instance as active and the second EC2 instance in standby mode. Use Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io2) volumes as storage attached to the EC2 instances. Set up Amazon EBS level replication to sync data from one io2 volume in one Availability Zone to another io2 volume in the second Availability Zone.
A
Explanation:
This solution is designed to provide high availability and low-latency access to block storage across multiple Availability Zones with minimal implementation effort.
Windows Server Cluster Across AZs: Configuring a Windows Server Failover Cluster (WSFC) that spans two Availability Zones ensures that the application can failover from one instance to another in case of a failure, meeting the high availability requirement.
Amazon FSx for Windows File Server: FSx for Windows File Server provides fully managed Windows file storage that is accessible via the SMB protocol, which is suitable for Windows-based applications. It offers high availability and can be used as shared storage between the cluster nodes, ensuring that both nodes have access to the same data with low latency.
Why Not Other Options?
Option B (EBS with application-level replication): This requires complex configuration and management, as EBS volumes cannot be directly shared across AZs. Application-level replication is more complex and prone to errors.
Option C (FSx for NetApp ONTAP with iSCSI): While this is a viable option, it introduces additional complexity with iSCSI and requires more specialized knowledge for setup and management.
Option D (EBS with EBS-level replication): EBS-level replication is not natively supported across AZs, and setting up a custom replication solution would increase the implementation effort.
Reference
Amazon FSx for Windows File Server: overview and benefits of using FSx for Windows File Server.
Windows Server Failover Clustering on AWS: guide on setting up a Windows Server cluster on AWS.
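A sketch of the Multi-AZ file system creation, with placeholder subnet, security group, and directory IDs; MULTI_AZ_1 provisions a standby file server in a second Availability Zone with automatic failover, which is what gives the cluster its shared, highly available storage.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                      # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],   # one per AZ
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",        # active/standby across AZs
        "PreferredSubnetId": "subnet-aaa111",
        "ThroughputCapacity": 64,              # MB/s
        "ActiveDirectoryId": "d-1234567890",   # AWS Managed Microsoft AD
    },
)
```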