Practice Free SAA-C03 Exam Online Questions
A company wants to add its existing AWS usage cost to its operations cost dashboard. A solutions architect needs to recommend a solution that will give the company programmatic access to its usage cost data. The company must be able to access cost data for the current year and forecast costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Access usage cost-related data by using the AWS Cost Explorer API with pagination.
- B . Access usage cost-related data by using downloadable AWS Cost Explorer report CSV files.
- C . Configure AWS Budgets actions to send usage cost data to the company through FTP.
- D . Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
A
Explanation:
Understanding the Requirement: The company needs programmatic access to its AWS usage costs for the current year and cost forecasts for the next 12 months, with minimal operational overhead.
Analysis of Options:
AWS Cost Explorer API: Provides programmatic access to detailed usage and cost data, including forecast costs. It supports pagination for handling large datasets, making it an efficient solution.
Downloadable AWS Cost Explorer report CSV files: While useful, this method requires manual handling of files and does not provide real-time access.
AWS Budgets actions via FTP: This is less suitable as it involves setting up FTP transfers and does not provide the same level of detail and real-time access as the API.
AWS Budgets reports via SMTP: Similar to FTP, this method involves additional setup and lacks the real-time access and detail provided by the API.
Best Option for Minimal Operational Overhead:
AWS Cost Explorer API provides direct, programmatic access to cost data, including detailed usage and forecasting, with minimal setup and operational effort. It is the most efficient solution for integrating cost data into an operational cost dashboard.
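As an illustration, a dashboard backend might call the Cost Explorer API through boto3 roughly as follows. This is a minimal sketch: the date ranges, metric choice, and printed output are assumptions for the example, not part of the question.

```python
import boto3

# Cost Explorer is a global service; its API endpoint lives in us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

# Actual costs for the current year (dates are illustrative; the End date
# is exclusive). NextPageToken in the response handles pagination.
actuals = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2025-01-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

# Forecast costs for the next 12 months.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2025-01-01", "End": "2026-01-01"},
    Granularity="MONTHLY",
    Metric="UNBLENDED_COST",
)

for month in actuals["ResultsByTime"]:
    print(month["TimePeriod"]["Start"], month["Total"]["UnblendedCost"]["Amount"])
print("12-month forecast:", forecast["Total"]["Amount"])
```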
Reference: AWS Cost Explorer API
AWS Cost and Usage Reports
A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule. These tasks were written by different teams and have no common programming language. The company is concerned about performance and scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
- B . Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
- C . Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
- D . Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
A
Explanation:
AWS Batch is a fully managed service that enables users to run batch jobs on AWS. It can handle different types of tasks written in different languages and run them on EC2 instances. It also integrates with Amazon EventBridge (Amazon CloudWatch Events) to schedule jobs based on time or event triggers. This solution will meet the requirements of performance, scalability, and low operational overhead.
B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs. This solution will not meet the requirement of low operational overhead, as it involves converting the EC2 instance to a container and using AWS App Runner, which is a service that automatically builds and deploys web applications and load balances traffic. This is not necessary for running batch jobs.
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events). This solution will not meet the requirement of performance, as AWS Lambda has a limit of 15 minutes of execution time and 10 GB of memory allocation. These limits are not sufficient for running 1-hour tasks.
D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance. This solution will not meet the requirement of low operational overhead, as it involves creating and maintaining AMIs and Auto Scaling groups, which are additional resources that need to be configured and managed.
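As a hedged sketch of the scheduling half of option A, an EventBridge rule can invoke an existing AWS Batch job queue on a cron schedule. The rule name, ARNs, and job definition below are hypothetical placeholders.

```python
import boto3

events = boto3.client("events")

# Run the jobs hourly (rule name and schedule are illustrative).
events.put_rule(
    Name="hourly-task-schedule",
    ScheduleExpression="cron(0 * * * ? *)",
    State="ENABLED",
)

# Point the rule at a pre-existing Batch job queue and job definition.
# The RoleArn must allow EventBridge to submit Batch jobs.
events.put_targets(
    Rule="hourly-task-schedule",
    Targets=[
        {
            "Id": "batch-task",
            "Arn": "arn:aws:batch:us-east-1:123456789012:job-queue/task-queue",
            "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-batch-role",
            "BatchParameters": {
                "JobDefinition": "task-job-def",
                "JobName": "scheduled-task",
            },
        }
    ],
)
```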
Reference URL: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html
A solutions architect is designing a web application that will run on Amazon EC2 instances behind an Application Load Balancer (ALB). The company strictly requires that the application be resilient against malicious internet activity and attacks, and protect against new common vulnerabilities and exposures.
What should the solutions architect recommend?
- A . Leverage Amazon CloudFront with the ALB endpoint as the origin.
- B . Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.
- C . Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.
- D . Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.
B
Explanation:
AWS WAF allows web applications to protect themselves from common web exploits and vulnerabilities. Using AWS managed rule groups ensures protection against known attack patterns, such as SQL injection and cross-site scripting. Associating AWS WAF with the ALB provides application-layer security and real-time threat mitigation.
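A minimal boto3 sketch of this setup, assuming the AWSManagedRulesCommonRuleSet managed rule group; the web ACL name and ALB ARN are hypothetical.

```python
import boto3

# WAF for regional resources such as an ALB uses scope REGIONAL.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Create a web ACL that applies an AWS managed rule group.
acl = wafv2.create_web_acl(
    Name="web-app-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "AWSManagedRulesCommonRuleSet",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            # Managed rule groups take OverrideAction instead of Action.
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "common-rule-set",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "web-app-acl",
    },
)

# Associate the web ACL with the ALB (ARN is a placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```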
Reference: AWS Documentation - AWS WAF and Shield Developer Guide
A company is expecting rapid growth in the near future. A solutions architect needs to configure existing users and grant permissions to new users on AWS. The solutions architect has decided to create IAM groups. The solutions architect will add the new users to IAM groups based on department.
Which additional action is the MOST secure way to grant permissions to the new users?
- A . Apply service control policies (SCPs) to manage access permissions.
- B . Create IAM roles that have least privilege permission. Attach the roles to the IAM groups.
- C . Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups.
- D . Create IAM roles. Associate the roles with a permissions boundary that defines the maximum permissions.
C
Explanation:
An IAM policy is a document that defines the permissions for an IAM identity (such as a user, group, or role). You can use IAM policies to grant permissions to existing users and groups based on department. You can create an IAM policy that grants least privilege permission, which means that you only grant the minimum permissions required for the users to perform their tasks. You can then attach the policy to the IAM groups, which will apply the policy to all the users in those groups. This solution will reduce operational costs and simplify configuration and management of permissions.
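As a sketch of this pattern, the following creates a least-privilege policy and attaches it to a department group. The group name, bucket, and actions are hypothetical examples of a department-scoped permission set.

```python
import json
import boto3

iam = boto3.client("iam")

# A least-privilege policy scoped to one department's S3 prefix.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-bucket/finance/*",
        }
    ],
}

policy = iam.create_policy(
    PolicyName="finance-least-privilege",
    PolicyDocument=json.dumps(policy_doc),
)

# Attaching the policy to the group grants it to every user in the group,
# including new users added later.
iam.attach_group_policy(
    GroupName="finance",
    PolicyArn=policy["Policy"]["Arn"],
)
```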
Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
A company is using Amazon CloudFront with its website. The company has enabled logging on the CloudFront distribution, and logs are saved in one of the company’s Amazon S3 buckets. The company needs to perform advanced analyses on the logs and build visualizations.
What should a solutions architect do to meet these requirements?
- A . Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
- B . Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
- C . Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
- D . Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
B
Explanation:
https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
Using Athena to query the CloudFront logs in the S3 bucket and QuickSight to visualize the results is the best solution because it is cost-effective, scalable, and requires no infrastructure setup. It also provides a robust solution that enables the company to perform advanced analysis and build interactive visualizations without the need for a dedicated team of developers.
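As an illustrative sketch, an application can run the Athena query programmatically with boto3. This assumes a table over the CloudFront log bucket was already defined (for example, with a CREATE EXTERNAL TABLE statement); the database, table, column, and output-location names are hypothetical.

```python
import boto3

athena = boto3.client("athena")

# Standard SQL against a table previously defined over the CloudFront
# log bucket; QuickSight can then visualize the Athena results.
query = """
SELECT status, COUNT(*) AS requests
FROM cloudfront_logs
WHERE date = DATE '2024-06-01'
GROUP BY status
ORDER BY requests DESC
"""

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "web_logs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query execution ID:", run["QueryExecutionId"])
```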
A company hosts a public web application on AWS. The website has a three-tier architecture. The frontend web tier consists of Amazon EC2 instances in an Auto Scaling group. The application tier is a second Auto Scaling group. The database tier is an Amazon RDS database.
The company has configured the Auto Scaling groups to handle the application’s normal level of demand. During an unexpected spike in demand, the company notices a long delay in the startup time when the frontend and application layers scale out. The company needs to improve the scaling performance of the application without negatively affecting the user experience.
Which solution will meet these requirements MOST cost-effectively?
- A . Decrease the minimum number of EC2 instances for both Auto Scaling groups. Increase the desired number of instances to meet the peak demand requirement.
- B . Configure the maximum number of instances for both Auto Scaling groups to be the number required to meet the peak demand. Create a warm pool.
- C . Increase the maximum number of EC2 instances for both Auto Scaling groups to meet the normal demand requirement. Create a warm pool.
- D . Reconfigure both Auto Scaling groups to use a scheduled scaling policy. Increase the size of the EC2 instance types and the RDS instance types.
B
Explanation:
EC2 Auto Scaling warm pools allow you to pre-initialize instances, reducing the delay in scale-out events. This results in significantly faster response times during demand surges while remaining cost-effective compared to always running at peak capacity.
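A minimal sketch of adding a warm pool to an existing Auto Scaling group with boto3; the group name and pool size are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep pre-initialized instances ready for scale-out events.
# Stopped warm-pool instances incur only EBS storage charges, which is
# what keeps this approach cost-effective versus running at peak capacity.
autoscaling.put_warm_pool(
    AutoScalingGroupName="web-tier-asg",
    MinSize=2,
    PoolState="Stopped",
)
```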
Reference: AWS Documentation - EC2 Auto Scaling Warm Pools for Faster Scaling
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials in the template.
What should the solutions architect do to meet these requirements?
- A . Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile.
- B . Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile, and associate the instance profile with the application instances.
- C . Use the parameter section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM user that has the required permissions to read and write from the DynamoDB tables.
- D . Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through the user data.
B
Explanation:
It allows the application EC2 instances to access the DynamoDB tables without exposing API credentials in the template. By creating an IAM role that has the required permissions to read and write from the DynamoDB tables and adding it to the EC2 instance profile, the application instances can use temporary security credentials that are automatically rotated by AWS. This is a secure, best-practice way to grant access to AWS resources from EC2 instances.
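A boto3 sketch of the same role-plus-instance-profile wiring the CloudFormation template would declare (via AWS::IAM::Role and AWS::IAM::InstanceProfile); the role, profile, and table names are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
iam.create_role(
    RoleName="app-dynamodb-role",
    AssumeRolePolicyDocument=json.dumps(trust),
)

# Grant read and write access to the application's table.
iam.put_role_policy(
    RoleName="app-dynamodb-role",
    PolicyName="dynamodb-read-write",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                       "dynamodb:Query", "dynamodb:UpdateItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        }],
    }),
)

# The instance profile is what actually attaches the role to EC2 instances.
iam.create_instance_profile(InstanceProfileName="app-instance-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-instance-profile",
    RoleName="app-dynamodb-role",
)
```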
Reference: IAM Roles for Amazon EC2
Using Instance Profiles
A company is developing a two-tier web application on AWS. The company’s developers have deployed the application on an Amazon EC2 instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The company must also implement a solution to automatically rotate the database credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and instance metadata at the same time.
- B . Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the same time. Use S3 Versioning to ensure the ability to fall back to previous values.
- C . Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required permission to the EC2 role to grant access to the secret.
- D . Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters.
C
Explanation:
AWS Secrets Manager stores the database credentials as a secret and can rotate them automatically on a schedule by using a managed rotation function, so no credentials are hardcoded in the application. Granting the EC2 instance role permission to read the secret lets the application retrieve the current credentials at runtime.
Reference: https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
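A sketch of the retrieval side, as it might run on the EC2 instance; the secret name is hypothetical, and the username/password keys follow the JSON structure Secrets Manager uses for database secrets.

```python
import json
import boto3

# boto3 picks up credentials from the EC2 instance role automatically,
# so nothing is hardcoded in the application.
secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="prod/app/rds-credentials")
creds = json.loads(response["SecretString"])

# Rotation replaces the secret value in place, so re-reading the secret
# always returns the current username and password.
db_user = creds["username"]
db_password = creds["password"]
```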
A company is designing a new web application that will run on Amazon EC2 instances. The application will use Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database will be moderate to high. The company needs to scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
- A . Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a maximum defined capacity.
- B . Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
- C . Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
- D . Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class.
B
Explanation:
The most cost-effective DynamoDB table configuration for the web application is to configure DynamoDB in on-demand mode by using the DynamoDB Standard table class. This configuration will allow the company to scale in response to application traffic and pay only for the read and write requests that the application performs on the table.
On-demand mode is a flexible billing option that can handle thousands of requests per second without capacity planning. On-demand mode automatically adjusts the table’s capacity based on the incoming traffic, and charges only for the read and write requests that are actually performed. On-demand mode is suitable for applications with unpredictable or variable workloads, or applications that prefer the ease of paying for only what they use.
The DynamoDB Standard table class is the default and recommended table class for most workloads. The DynamoDB Standard table class offers lower throughput costs than the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, and is more cost-effective for tables where throughput is the dominant cost. The DynamoDB Standard table class also offers the same performance, durability, and availability as the DynamoDB Standard-IA table class.
The other options are not correct because they are either not cost-effective or not suitable for the use case.
Configuring DynamoDB with provisioned read and write by using the DynamoDB Standard table class, and setting DynamoDB auto scaling to a maximum defined capacity, is not correct because this configuration requires manual estimation and management of the table’s capacity, which adds complexity and cost to the solution. Provisioned mode is a billing option that requires users to specify the amount of read and write capacity units for their tables, and it charges for the reserved capacity regardless of usage. Provisioned mode is suitable for applications with predictable or stable workloads, or applications that require finer-grained control over their capacity settings.
Configuring DynamoDB with provisioned read and write by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, and setting DynamoDB auto scaling to a maximum defined capacity, is not correct because this configuration is not cost-effective for tables with moderate to high throughput. The DynamoDB Standard-IA table class offers lower storage costs than the DynamoDB Standard table class, but higher throughput costs. The DynamoDB Standard-IA table class is optimized for tables where storage is the dominant cost, such as tables that store infrequently accessed data.
Configuring DynamoDB in on-demand mode by using the DynamoDB Standard-IA table class is not correct because this configuration is not cost-effective for tables with moderate to high throughput. As mentioned above, the DynamoDB Standard-IA table class has higher throughput costs than the DynamoDB Standard table class, which can offset the savings from lower storage costs.
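As a hedged sketch of the recommended configuration, creating a table in on-demand mode with boto3 looks roughly like this; the table, key, and attribute names are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand capacity is selected with PAY_PER_REQUEST, so no read or
# write capacity units are provisioned; STANDARD is the default table class.
dynamodb.create_table(
    TableName="app-data",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    TableClass="STANDARD",
)
```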
Reference: Table classes – Amazon DynamoDB
Read/write capacity mode – Amazon DynamoDB
A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application’s architecture.
What should a solutions architect do to meet these requirements?
- A . Use Amazon ElastiCache in front of the database.
- B . Use RDS Proxy between the application and the database.
- C . Migrate the application from EC2 instances to AWS Lambda.
- D . Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB.
A
Explanation:
ElastiCache can help speed up the read performance of the database by caching frequently accessed data, reducing latency and allowing the application to access the data more quickly. This solution requires minimal modifications to the current architecture, as ElastiCache can be used in conjunction with the existing Amazon RDS for MySQL database.
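As a sketch of one common way to use ElastiCache here, the application can follow a cache-aside read path: check the cache first and fall back to RDS on a miss. The endpoints, table, credentials source, and TTL below are hypothetical.

```python
import json
import os

import pymysql
import redis

# ElastiCache for Redis endpoint (hypothetical).
cache = redis.Redis(host="scores-cache.example.cache.amazonaws.com", port=6379)

def get_score(player_id: str) -> dict:
    key = f"score:{player_id}"

    # Cache hit: serve from memory, no database read.
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # Cache miss: read from the existing RDS for MySQL database.
    conn = pymysql.connect(
        host="scores-db.example.rds.amazonaws.com",
        user="app",
        password=os.environ["DB_PASSWORD"],
        database="game",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT score FROM scores WHERE player_id = %s", (player_id,))
        row = cur.fetchone()
    conn.close()
    result = {"player_id": player_id, "score": row[0] if row else None}

    # Populate the cache with a short TTL so hot reads stay in memory.
    cache.set(key, json.dumps(result), ex=60)
    return result
```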