Practice Free SAA-C03 Exam Online Questions
As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.
Which solution meets these requirements?
- A . Run a query with Amazon Athena to generate the report.
- B . Create a report in Cost Explorer and download the report.
- C . Access the bill details from the billing dashboard and download via bill.
- D . Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
B
Explanation:
This option is the most efficient because it uses Cost Explorer, which is a tool that allows you to visualize, understand, and manage your AWS costs and usage over time. You can create a report in Cost Explorer that lists AWS billed items by user, using the user name tag as a filter. You can then download the report as a CSV file and use it for budget planning.
Option A is less efficient because it uses Amazon Athena, which is a serverless interactive query service that allows you to analyze data in Amazon S3 using standard SQL. You would need to set up an Athena table that points to your AWS Cost and Usage Report data in S3, and then run a query to generate the report. This would incur additional costs and complexity.
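For illustration, the same per-user breakdown could also be pulled programmatically with the Cost Explorer API. The following is a minimal boto3 sketch, assuming a hypothetical "User" cost allocation tag has been activated in the Billing console:

```python
import boto3

# Sketch only: assumes resources carry a "User" cost allocation tag
# that has been activated for cost reporting.
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "User"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "User$alice"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(tag_value, amount)
```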
Option C is less efficient because it uses the billing dashboard, which provides a high-level summary of your AWS costs and usage. You can access the bill details from the billing dashboard and download them via bill, but this would not list the billed items by user. You would need to use tags to group your costs by user name, which would require additional steps.
Option D is less efficient because it uses AWS Budgets, which is a tool that allows you to plan your service usage, service costs, and instance reservations. You can modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES), but this would not generate a report of AWS billed items by user. This would only notify you when your actual or forecasted costs exceed or are expected to exceed your budgeted amount.
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
- A . Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
- B . Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
- C . Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
- D . Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
C
Explanation:
This option is the most operationally efficient way to meet the requirements because it allows the company to monitor and analyze the database login activity across all the accounts in the organization. By publishing the Aurora general logs to a log group in Amazon CloudWatch Logs, the company can enable the logging of the database connections, disconnections, and failed authentication attempts. By exporting the log data to a central Amazon S3 bucket, the company can store the log data in a durable and cost-effective way and use other AWS services or tools to perform further analysis or alerting on the log data. For example, the company can use Amazon Athena to query the log data in Amazon S3, or use Amazon SNS to send notifications based on the log data.
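For illustration, a minimal boto3 sketch of the export step (the log group and bucket names are assumptions; the bucket policy must allow CloudWatch Logs to write to it):

```python
import time
import boto3

# Sketch only: exports the last 24 hours of Aurora PostgreSQL logs from a
# CloudWatch Logs log group to a central S3 bucket for later analysis.
logs = boto3.client("logs")

now_ms = int(time.time() * 1000)
one_day_ms = 24 * 60 * 60 * 1000

logs.create_export_task(
    taskName="aurora-login-audit-export",
    logGroupName="/aws/rds/cluster/inventory-cluster/postgresql",  # assumed name
    fromTime=now_ms - one_day_ms,
    to=now_ms,
    destination="central-db-audit-logs",  # assumed central S3 bucket
    destinationPrefix="aurora/general",
)
```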
A company designed a stateless two-tier application that uses Amazon EC2 in a single Availability Zone and an Amazon RDS Multi-AZ DB instance New company management wants to ensure the application is highly available.
What should a solutions architect do to meet this requirement?
- A . Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load Balancer
- B . Configure the application to take snapshots of the EC2 instances and send them to a different AWS Region.
- C . Configure the application to use Amazon Route 53 latency-based routing to feed requests to the application.
- D . Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application Load Balancer
A
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-zone.html
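For illustration, a minimal boto3 sketch (the launch template name, subnet IDs, and target group ARN are placeholders) of an Auto Scaling group that spans two Availability Zones and registers its instances with the ALB's target group:

```python
import boto3

# Sketch only: all names and ARNs below are assumptions.
autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",
    LaunchTemplate={"LaunchTemplateName": "web-tier-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    # Subnets in two different Availability Zones make the tier Multi-AZ.
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    # Instances register automatically with the ALB's target group.
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"
    ],
)
```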
A solutions architect needs to implement a solution that can handle up to 5,000 messages per second. The solution must publish messages as events to multiple consumers. The messages are up to 500 KB in size. The message consumers need to have the ability to use multiple programming languages to consume the messages with minimal latency. The solution must retain published messages for more than 3 months. The solution must enforce strict ordering of the messages.
- A . Publish messages to an Amazon Kinesis Data Streams data stream. Enable enhanced fan-out. Ensure that consumers ingest the data stream by using dedicated throughput.
- B . Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to subscribe to the topic.
- C . Publish messages to Amazon EventBridge. Allow each consumer to create rules to deliver messages to the consumer’s own target.
- D . Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use Amazon Data Firehose to subscribe to the topic.
A
Explanation:
Amazon Kinesis Data Streams is the best choice for this scenario:
Message throughput: Kinesis Data Streams supports high throughput with enhanced fan-out and dedicated throughput for consumers.
Large message size: Supports message sizes up to 1 MB, meeting the 500 KB requirement.
Message retention: Data streams can retain messages for up to 365 days.
Strict ordering: Guarantees message ordering within shards.
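For illustration, a minimal boto3 sketch of the three capabilities that matter here (the stream name, stream ARN, and consumer name are placeholders): ordered writes per partition key, a registered enhanced fan-out consumer with dedicated throughput, and extended retention.

```python
import boto3

# Sketch only: stream name, ARN, and consumer name are assumptions.
kinesis = boto3.client("kinesis")

# Strict ordering is preserved per partition key: records that share a key
# land on the same shard and are read back in the order they were written.
kinesis.put_record(
    StreamName="events-stream",
    Data=b'{"order_id": "1234", "event": "created"}',
    PartitionKey="order-1234",
)

# Enhanced fan-out: each registered consumer gets its own dedicated 2 MB/s
# read throughput per shard instead of sharing the default shard limit.
kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:111122223333:stream/events-stream",
    ConsumerName="billing-service",
)

# Extend retention beyond the 24-hour default (up to 365 days) to satisfy
# the 3-month retention requirement.
kinesis.increase_stream_retention_period(
    StreamName="events-stream",
    RetentionPeriodHours=2160,  # roughly 90 days
)
```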
Why Other Options Are Not Ideal:
Option B:
While SQS FIFO supports strict ordering, SNS topics do not. SNS also does not natively support message retention or strict ordering across consumers. Does not meet requirements.
Option C:
EventBridge does not provide strict ordering guarantees or message retention beyond 24 hours. Does not meet requirements.
Option D:
SNS topics with Data Firehose are not designed for use cases requiring strict ordering or long message retention. Does not meet requirements.
AWS Reference: Amazon Kinesis Data Streams: AWS Documentation – Kinesis Data Streams
AWS Messaging Services Comparison: AWS Documentation – Messaging Services
A company previously migrated its data warehouse solution to AWS. The company also has an AWS Direct Connect connection. Corporate office users query the data warehouse using a visualization tool. The average size of a query returned by the data warehouse is 50 MB and each webpage sent by the visualization tool is approximately 500 KB. Result sets returned by the data warehouse are not cached.
Which solution provides the LOWEST data transfer egress cost for the company?
- A . Host the visualization tool on premises and query the data warehouse directly over the internet.
- B . Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet.
- C . Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same AWS Region.
- D . Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in the same Region.
D
Explanation:
https://aws.amazon.com/directconnect/pricing/
https://aws.amazon.com/blogs/aws/aws-data-transfer-prices-reduced/
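Hosting the visualization tool in the same Region as the data warehouse means the 50 MB result sets never leave AWS; only the roughly 500 KB rendered webpages cross the Direct Connect connection to the office, and Direct Connect data transfer out is generally billed at a lower rate than internet egress. The sketch below uses a made-up daily query count purely to show the difference in transferred volume between options C and D:

```python
# Sketch only: the query volume is an assumed figure used for illustration.
# Only traffic that leaves AWS is billed as data transfer out.
QUERIES_PER_DAY = 1_000   # assumed workload
QUERY_RESULT_MB = 50      # from the question
WEBPAGE_MB = 0.5          # 500 KB per page, from the question

# Option C: tool on premises -> every 50 MB result set leaves AWS over Direct Connect.
egress_option_c_gb = QUERIES_PER_DAY * QUERY_RESULT_MB / 1024

# Option D: tool in the same Region -> result sets stay inside AWS; only the
# rendered 500 KB webpages cross Direct Connect to the office.
egress_option_d_gb = QUERIES_PER_DAY * WEBPAGE_MB / 1024

print(f"Option C egress: {egress_option_c_gb:.1f} GB/day")  # ~48.8 GB/day
print(f"Option D egress: {egress_option_d_gb:.1f} GB/day")  # ~0.5 GB/day
```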
A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format, and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in flight is lost. The company's data science team wants to query ingested data in near-real time.
Which solution provides near-real-time data querying that is scalable with minimal data loss?
- A . Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data.
- B . Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
- C . Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data.
- D . Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
A
Explanation:
https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html
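For illustration, a minimal boto3 producer sketch (the stream name and record contents are assumptions) that an EC2 ingestion instance could run to forward JSON records to the stream instead of buffering them locally, so a reboot no longer loses in-flight data:

```python
import json
import boto3

# Sketch only: stream name and record contents are assumptions.
kinesis = boto3.client("kinesis")

batch = [
    {"Data": json.dumps({"sensor": "s1", "value": 42}).encode(), "PartitionKey": "s1"},
    {"Data": json.dumps({"sensor": "s2", "value": 17}).encode(), "PartitionKey": "s2"},
]

response = kinesis.put_records(StreamName="ingest-stream", Records=batch)
if response["FailedRecordCount"]:
    # Retry only the records that failed (throttling, transient errors, etc.).
    pass
```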
A company is migrating a daily Microsoft Windows batch job from the company’s on-premises environment to AWS. The current batch job runs for up to 1 hour. The company wants to modernize the batch job process for the cloud environment.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Create a fleet of Amazon EC2 instances in an Auto Scaling group to handle the Windows batch job processing.
- B . Implement an AWS Lambda function to process the Windows batch job. Use an Amazon EventBridge rule to invoke the Lambda function.
- C . Use AWS Fargate to deploy the Windows batch job as a container. Use AWS Batch to manage the batch job processing.
- D . Use Amazon Elastic Kubernetes Service (Amazon EKS) on Amazon EC2 instances to orchestrate Windows containers for the batch job processing.
C
Explanation:
AWS Batch supports Windows-based jobs and automates provisioning and scaling of compute environments. Paired with AWS Fargate, it removes the need to manage infrastructure. This solution requires the least operational overhead and is cloud-native, providing flexibility and scalability.
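For illustration, a minimal boto3 sketch of submitting the daily job (the queue and job definition names are assumptions; the job definition would point at a Windows container image and a Fargate-enabled compute environment). The submission could be triggered on a daily schedule, for example by an Amazon EventBridge rule:

```python
import boto3

# Sketch only: the job queue and job definition names are placeholders.
batch = boto3.client("batch")

batch.submit_job(
    jobName="nightly-windows-batch",
    jobQueue="fargate-batch-queue",
    jobDefinition="windows-batch-job:1",
)
```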
Reference: AWS Documentation – AWS Batch with Fargate for Windows Workloads
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team’s own AWS account.
The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all the teams’ DynamoDB tables.
Which authentication option will meet these requirements MOST securely?
- A . Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
- B . In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
- C . In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
- D . Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB table.
C
Explanation:
This solution meets the requirements most securely because it uses IAM roles and the STS AssumeRole API operation to authenticate and authorize the inventory application to access the DynamoDB tables in different accounts. IAM roles are more secure than IAM users or certificates because they do not require long-term credentials or passwords. Instead, IAM roles provide temporary security credentials that are automatically rotated and can be configured with a limited duration. The STS AssumeRole API operation enables you to request temporary credentials for a role that you are allowed to assume. By using this operation, you can delegate access to resources that are in different AWS accounts that you own or that are owned by third parties. The trust policy of the
role defines which entities can assume the role, and the permissions policy of the role defines which actions can be performed on the resources. By using this solution, you can avoid hard-coding credentials or certificates in the inventory application, and you can also avoid storing them in Secrets Manager or ACM. You can also leverage the built-in security features of IAM and STS, such as MFA, access logging, and policy conditions.
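For illustration, a minimal boto3 sketch of the pattern (the account ID and table name are placeholders; BU_ROLE is the cross-account role named in the option):

```python
import boto3

# Sketch only: account ID, table name, and session name are assumptions.
sts = boto3.client("sts")

# The application (running as APP_ROLE) assumes BU_ROLE in a business account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/BU_ROLE",
    RoleSessionName="inventory-report",
)["Credentials"]

# The temporary credentials are used to read that business unit's DynamoDB table.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

items = dynamodb.scan(TableName="ProductInventory")["Items"]
```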
Reference: IAM Roles
STS AssumeRole
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
A company is building a serverless application to process large video files that users upload. The application performs multiple tasks to process each video file. Processing can take up to 30 minutes for the largest files.
The company needs a scalable architecture to support the processing application.
Which solution will meet these requirements?
- A . Store the uploaded video files in Amazon Elastic File System (Amazon EFS). Configure a schedule in Amazon EventBridge Scheduler to invoke an AWS Lambda function periodically to check for new files. Configure the Lambda function to perform all the processing tasks.
- B . Store the uploaded video files in Amazon Elastic File System (Amazon EFS). Configure an Amazon EFS event notification to start an AWS Step Functions workflow that uses AWS Fargate tasks to perform the processing tasks.
- C . Store the uploaded video files in Amazon S3. Configure an Amazon S3 event notification to send an event to Amazon EventBridge when a user uploads a new video file. Configure an AWS Step Functions workflow as a target for an EventBridge rule. Use the workflow to manage AWS Fargate tasks to perform the processing tasks.
- D . Store the uploaded video files in Amazon S3. Configure an Amazon S3 event notification to invoke an AWS Lambda function when a user uploads a new video file. Configure the Lambda function to perform all the processing tasks.
C
Explanation:
The requirements include:
Scalability: The solution must scale as video files are uploaded.
Long-running tasks: Processing tasks can take up to 30 minutes. AWS Lambda has a maximum execution time of 15 minutes, which rules out options that involve Lambda performing all the processing.
Serverless and event-driven architecture: Ensures cost-effectiveness and high availability.
Analysis of Options:
Option A:
AWS Lambda has a 15-minute timeout, which cannot support tasks that take up to 30 minutes.
EventBridge Scheduler is unnecessary for monitoring files when native event notifications are available. Not a valid choice.
Option B:
AWS Step Functions and AWS Fargate can handle long-running processes, but Amazon EFS is not the ideal storage for uploaded video files in a serverless architecture.
Processing tasks triggered by EFS events are not a common pattern and may introduce complexities. Not the best practice.
Option C:
Amazon S3 is used for storing uploaded files, which integrates natively with event-driven services like EventBridge and Step Functions.
Amazon S3 event notifications trigger a Step Functions workflow, which can orchestrate Fargate tasks to process large video files, meeting the scalability and execution time requirements. Correct choice.
Option D:
Similar to Option A, AWS Lambda cannot handle long-running processes due to its 15-minute timeout.
Invoking Lambda for processing directly is not feasible for tasks that take up to 30 minutes. Not a valid choice.
AWS Reference: Amazon S3 Event Notifications: AWS Documentation – S3 Event Notifications
AWS Step Functions: AWS Documentation – Step Functions
AWS Fargate: AWS Documentation – Fargate
Comparison of Storage Services: AWS Storage Options
By leveraging Amazon S3, Step Functions, and Fargate, this solution provides a scalable, efficient, and serverless approach to handling video processing tasks.
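For illustration, a minimal boto3 sketch of the event wiring (the bucket, rule, state machine, and role names are all placeholders): the upload bucket sends events to EventBridge, and a rule starts the Step Functions workflow for every new object.

```python
import json
import boto3

# Sketch only: bucket, rule, state machine, and role names are assumptions.
s3 = boto3.client("s3")
events = boto3.client("events")

# Send all S3 events for the upload bucket to EventBridge.
s3.put_bucket_notification_configuration(
    Bucket="video-uploads",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# Start the Step Functions workflow whenever a new object is created.
events.put_rule(
    Name="video-uploaded",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["video-uploads"]}},
    }),
)
events.put_targets(
    Rule="video-uploaded",
    Targets=[{
        "Id": "start-video-workflow",
        "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:VideoProcessing",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeInvokeStepFunctions",
    }],
)
```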
A company wants to migrate an application to AWS. The company wants to increase the application’s current availability. The company wants to use AWS WAF in the application’s architecture.
Which solution will meet these requirements?
- A . Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as the target. Connect a WAF to the ALB.
- B . Create a cluster placement group that contains multiple Amazon EC2 instances that host the application. Configure an Application Load Balancer and set the EC2 instances as the targets. Connect a WAF to the placement group.
- C . Create two Amazon EC2 instances that host the application across two Availability Zones. Configure the EC2 instances as the targets of an Application Load Balancer (ALB). Connect a WAF to the ALB.
- D . Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as the target. Connect a WAF to the Auto Scaling group.
A
Explanation:
Understanding the Requirement: The company wants to migrate an application to AWS, increase its availability, and use AWS WAF in the architecture.
Analysis of Options:
Auto Scaling group with ALB and WAF: This option provides high availability by distributing instances across multiple Availability Zones. The ALB ensures even traffic distribution, and AWS WAF provides security at the application layer.
Cluster placement group with ALB and WAF: Cluster placement groups are for low-latency networking within a single AZ, which does not provide high availability across AZs.
Two EC2 instances with ALB and WAF: This setup provides some availability but does not scale automatically, missing the benefits of an Auto Scaling group.
Auto Scaling group with WAF directly: AWS WAF cannot be directly connected to an Auto Scaling group; it needs to be attached to an ALB, CloudFront distribution, or API Gateway.
Best Solution:
Auto Scaling group with ALB and WAF: This configuration ensures high availability, scalability, and security, meeting all the requirements effectively.
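For illustration, a minimal sketch using the WAFV2 API (both ARNs are placeholders) of attaching a regional web ACL to the ALB; the Auto Scaling group behind the ALB never needs, and cannot have, a direct WAF association:

```python
import boto3

# Sketch only: the web ACL ARN and load balancer ARN are placeholders.
wafv2 = boto3.client("wafv2")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/abcd1234",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/1234567890abcdef",
)
```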
Reference: Amazon EC2 Auto Scaling
Application Load Balancer
AWS WAF