Practice Free SAA-C03 Exam Online Questions
A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each user a personalized view by using a mixture of static and dynamic content. Content is served over HTTPS through an API server running on an Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?
- A . Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB as an origin.
- B . Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest Region.
- C . Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content directly from the ALB.
- D . Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in the closest Region.
A
Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/deliver-your-apps-dynamic-content-using-amazon-cloudfront-getting-started-template/
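As a rough illustration of option A, the boto3 sketch below creates a CloudFront distribution with the ALB as its only origin. The ALB DNS name and the managed CachingDisabled cache policy ID are assumptions for illustration; a real deployment would typically also attach an origin request policy so cookies and query strings reach the ALB, and add a separately cached behavior for static paths.

```python
import time
import boto3  # assumes AWS credentials are already configured

cloudfront = boto3.client("cloudfront")

# Hypothetical ALB DNS name; replace with the real load balancer endpoint.
ALB_DOMAIN = "portal-alb-123456789.us-east-1.elb.amazonaws.com"

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Global portal - ALB origin for static and dynamic content",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "portal-alb-origin",
                    "DomainName": ALB_DOMAIN,
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "portal-alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Believed to be the AWS managed "CachingDisabled" policy: dynamic responses
            # are not cached, but requests still ride CloudFront's global edge network.
            "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad",
        },
    }
)
print(response["Distribution"]["DomainName"])  # the *.cloudfront.net endpoint users hit
```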
A company has a large dataset for its online advertising business stored in an Amazon RDS for MySQL DB instance in a single Availability Zone. The company wants business reporting queries to run without impacting the write operations to the production DB instance.
Which solution meets these requirements?
- A . Deploy RDS read replicas to process the business reporting queries.
- B . Scale out the DB instance horizontally by placing it behind an Elastic Load Balancer
- C . Scale up the DB instance to a larger instance type to handle write operations and queries
- D . Deploy the DB instance in multiple Availability Zones to process the business reporting queries
A
Explanation:
Read replica use case: you have a production database that is handling its normal load, and you want to run a reporting application that performs analytics.
• You create a read replica and run the new reporting workload there, as in the sketch below.
• The production application is unaffected.
• Read replicas serve read-only (SELECT) statements, not INSERT, UPDATE, or DELETE.
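A minimal boto3 sketch of creating the reporting replica; the instance identifiers and instance class are hypothetical.

```python
import boto3  # assumes credentials with rds:CreateDBInstanceReadReplica permission

rds = boto3.client("rds")

# The source instance is the single-AZ production MySQL database.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ads-reporting-replica",
    SourceDBInstanceIdentifier="ads-production-mysql",
    DBInstanceClass="db.r6g.large",
)
print(replica["DBInstance"]["DBInstanceStatus"])

# Reporting tools then point their SELECT-only queries at the replica endpoint,
# leaving the production writer unaffected.
```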
A company wants to use high performance computing (HPC) infrastructure on AWS for financial risk modeling. The company’s HPC workloads run on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output files that are ultimately stored in persistent storage for analytics and long-term future use.
The company seeks a cloud storage solution that permits the copying of on-premises data to long-term persistent storage to make data available for processing by all EC2 instances. The solution should also be a high performance file system that is integrated with persistent storage to read and write datasets and output files.
Which combination of AWS services meets these requirements?
- A . Amazon FSx for Lustre integrated with Amazon S3
- B . Amazon FSx for Windows File Server integrated with Amazon S3
- C . Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)
- D . Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume
A
Explanation:
https://aws.amazon.com/fsx/lustre/
Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Many workloads such as machine learning, high performance computing (HPC), video rendering, and financial simulations depend on compute instances accessing the same set of data through high-performance shared storage.
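As a hedged sketch of option A, the boto3 call below provisions an FSx for Lustre file system linked to a hypothetical S3 bucket, so EC2 instances lazy-load input datasets from S3 through the file system and export output files back to it. The subnet ID, bucket paths, capacity, and deployment type are illustrative assumptions.

```python
import boto3

fsx = boto3.client("fsx")

fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                    # GiB; smallest Lustre increment
    SubnetIds=["subnet-0abc123example"],     # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",                      # suits short-lived, throughput-heavy jobs
        "ImportPath": "s3://hpc-risk-datasets",             # lazy-load input data from S3
        "ExportPath": "s3://hpc-risk-datasets/results",     # write results back to S3 for long-term storage
    },
)
print(fs["FileSystem"]["DNSName"])  # mount target for the EC2 Spot fleet
```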
An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead.
Which solution will meet these requirements?
- A . Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.
- B . Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.
- C . Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.
- D . Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.
C
Explanation:
Creating a data lake with AWS Lake Formation centralizes all of the data in one place: the purchase data already in Amazon S3 is registered with Lake Formation, and the customer data in Amazon RDS is brought in through an AWS Glue JDBC connection. Lake Formation access controls then provide fine-grained (database-, table-, and column-level) permissions for the various teams, and because Lake Formation and AWS Glue are managed services, operational overhead is minimized.
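A minimal boto3 sketch of the two Lake Formation steps named above, with hypothetical bucket, account, role, database, and table names: register the S3 location, then grant one team column-level SELECT access.

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Register the existing purchase-data bucket as a data lake location.
lakeformation.register_resource(
    ResourceArn="arn:aws:s3:::retail-purchase-data",
    UseServiceLinkedRole=True,
)

# Grant an analytics team's IAM role fine-grained, column-level access to one
# Glue Data Catalog table (all names are illustrative).
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/analytics-team"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "retail",
            "Name": "purchases",
            "ColumnNames": ["order_id", "order_total", "order_date"],
        }
    },
    Permissions=["SELECT"],
)
```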
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The company wants the data in the database to be highly available. The company also needs increased capacity for read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
- A . Create an Amazon DynamoDB database table configured with global tables.
- B . Create an Amazon RDS database with Multi-AZ deployments
- C . Create an Amazon RDS database with Multi-AZ DB cluster deployment.
- D . Create an Amazon RDS database configured with cross-Region read replicas.
C
Explanation:
Amazon RDS Multi-AZ DB cluster deployment ensures high availability by automatically replicating data across multiple Availability Zones (AZs), and it supports failover if one AZ fails. It also increases capacity for read workloads, because the two reader instances in the other AZs serve read traffic through the cluster's reader endpoint. This solution offers the most operational efficiency with minimal manual intervention.
Option A (DynamoDB): DynamoDB is not a relational database, and the application requires a PostgreSQL engine.
Option B (RDS Multi-AZ instance deployment): While this provides high availability, the standby cannot serve reads, so it does not add read capacity.
Option D (cross-Region read replicas): This adds complexity and is not necessary when the requirement is high availability within a single Region.
Reference: AWS Documentation - Amazon RDS Multi-AZ DB Cluster Deployments
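As an illustrative sketch (identifiers, instance class, and storage settings are assumptions), a Multi-AZ DB cluster for PostgreSQL can be provisioned with a single boto3 call; the cluster exposes a reader endpoint that spreads read traffic across its two reader instances.

```python
import boto3

rds = boto3.client("rds")

# A Multi-AZ DB cluster creates one writer and two readers in separate AZs.
cluster = rds.create_db_cluster(
    DBClusterIdentifier="eks-app-postgres",
    Engine="postgres",
    DBClusterInstanceClass="db.m6gd.large",    # instance class is set on the cluster itself
    AllocatedStorage=200,
    StorageType="io1",
    Iops=3000,
    MasterUsername="appadmin",
    MasterUserPassword="REPLACE_WITH_SECRET",  # use Secrets Manager in practice
)
print(cluster["DBCluster"]["ReaderEndpoint"])  # point read-heavy workloads here
```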
A company runs applications on AWS that connect to the company’s Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
- B . Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
- C . Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
- D . Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. It also reduces failover times for Aurora and RDS databases by up to 66% and enables IAM authentication and Secrets Manager integration for database access. RDS Proxy can be enabled for most applications with no code changes.
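A hedged boto3 sketch of the RDS Proxy setup; the secret and role ARNs, subnets, and identifiers are hypothetical, and the applications would then be repointed at the proxy endpoint instead of the database endpoint.

```python
import boto3

rds = boto3.client("rds")

# The proxy pools and shares connections on behalf of the scaling applications.
proxy = rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",  # or POSTGRESQL, matching the RDS engine
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:app-db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0abc123example", "subnet-0def456example"],
)

# Point the proxy's default target group at the existing RDS instance.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-production-db"],
)
print(proxy["DBProxy"]["Endpoint"])  # applications connect here instead of the DB
```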
A company is planning to store data on Amazon RDS DB instances. The company must encrypt the data at rest.
What should a solutions architect do to meet this requirement?
- A . Create an encryption key and store the key in AWS Secrets Manager. Use the key to encrypt the DB instances.
- B . Generate a certificate in AWS Certificate Manager (ACM). Enable SSL/TLS on the DB instances by using the certificate.
- C . Create a customer master key (CMK) in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.
- D . Generate a certificate in AWS Identity and Access Management (IAM). Enable SSL/TLS on the DB instances by using the certificate.
C
Explanation:
To encrypt data at rest in Amazon RDS, you can use the encryption feature of Amazon RDS, which uses AWS Key Management Service (AWS KMS). With this feature, Amazon RDS encrypts each database instance with a unique key. This key is stored securely by AWS KMS. You can manage your own keys or use the default AWS-managed keys. When you enable encryption for a DB instance, Amazon RDS encrypts the underlying storage, including the automated backups, read replicas, and snapshots.
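A minimal boto3 sketch with illustrative identifiers: create a customer managed KMS key, then reference it when creating the DB instance, since encryption at rest can only be enabled at creation time (existing unencrypted instances are encrypted by restoring an encrypted snapshot copy).

```python
import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

# Customer managed key used to encrypt the database storage, backups, and snapshots.
key = kms.create_key(Description="Key for RDS encryption at rest")
key_id = key["KeyMetadata"]["KeyId"]

rds.create_db_instance(
    DBInstanceIdentifier="app-encrypted-db",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",  # use Secrets Manager in practice
    StorageEncrypted=True,                     # encryption at rest
    KmsKeyId=key_id,
)
```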
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin.
Which solution meets these requirements MOST cost-effectively?
- A . Deploy an AWS Global Accelerator accelerator in front of the web servers.
- B . Deploy an Amazon CloudFront web distribution in front of the S3 bucket.
- C . Deploy an Amazon ElastiCache (Redis OSS) instance in front of the web servers.
- D . Deploy an Amazon ElastiCache (Memcached) instance in front of the web servers.
B
Explanation:
Amazon CloudFront is a highly cost-effective CDN that caches content like images and videos at edge locations globally. This reduces latency and the load on the origin S3 bucket. It is ideal for static content that is accessed by many users.
Reference: AWS Documentation - Amazon CloudFront with S3 Integration
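For illustration, the sketch below creates a CloudFront distribution with the S3 bucket as the origin, plus an origin access control so only CloudFront can fetch the objects. The bucket name is an assumption, and the cache policy ID is believed to be the AWS managed CachingOptimized policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Origin access control: the bucket policy can then restrict reads to CloudFront.
oac = cloudfront.create_origin_access_control(
    OriginAccessControlConfig={
        "Name": "media-bucket-oac",
        "SigningProtocol": "sigv4",
        "SigningBehavior": "always",
        "OriginAccessControlOriginType": "s3",
    }
)

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Game media - S3 origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "media-s3-origin",
                    "DomainName": "game-media-assets.s3.amazonaws.com",  # hypothetical bucket
                    "OriginAccessControlId": oac["OriginAccessControl"]["Id"],
                    "S3OriginConfig": {"OriginAccessIdentity": ""},      # empty when using OAC
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "media-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized
        },
    }
)
```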
A company is creating an application. The company stores data from tests of the application in multiple on-premises locations.
The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The number of accounts and VPCs will increase during the next year. The network architecture must simplify the administration of new connections and must provide the ability to scale.
Which solution will meet these requirements with the LEAST administrative overhead?
- A . Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-premises locations.
- B . Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.
- C . Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the on-premises connections.
- D . Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect the central VPC to other VPCs by using peering connections.
C
Explanation:
AWS Transit Gateway simplifies network connectivity by acting as a hub that can connect VPCs and on-premises networks through VPN or Direct Connect. It provides scalability and reduces administrative overhead by eliminating the need to manage complex peering relationships as the number of accounts and VPCs grows.
Reference: AWS Documentation - Transit Gateway
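A rough boto3 sketch of the hub-and-spoke setup, with illustrative VPC, subnet, and on-premises router values: create the transit gateway, attach one VPC, and attach one on-premises site over VPN. New VPCs and sites are added the same way, without new peering meshes.

```python
import boto3

ec2 = boto3.client("ec2")

# One transit gateway acts as the hub for every VPC and on-premises connection.
tgw = ec2.create_transit_gateway(Description="Hub for test-data VPCs and on-premises sites")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# VPC attachment; repeat per VPC as the number of accounts and VPCs grows.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0abc123example",
    SubnetIds=["subnet-0abc123example"],
)

# VPN attachment for an on-premises location: the customer gateway represents the
# on-premises router, and the VPN connection terminates on the transit gateway.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",  # example on-premises public IP
    BgpAsn=65000,
)
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    TransitGatewayId=tgw_id,
)
```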