Practice Free SAA-C03 Exam Online Questions
A company wants to build a map of its IT infrastructure to identify and enforce policies on resources that pose security risks. The company’s security team must be able to query data in the IT infrastructure map and quickly identify security risks.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon RDS to store the data. Use SQL to query the data to identify security risks.
- B . Use Amazon Neptune to store the data. Use SPARQL to query the data to identify security risks.
- C . Use Amazon Redshift to store the data. Use SQL to query the data to identify security risks.
- D . Use Amazon DynamoDB to store the data. Use PartiQL to query the data to identify security risks.
B
Explanation:
Understanding the Requirement: The company needs to map its IT infrastructure to identify and enforce security policies, with the ability to quickly query and identify security risks.
Analysis of Options:
Amazon RDS: While suitable for relational data, it is not optimized for storing and querying the highly connected relationship data that an IT infrastructure map requires.
Amazon Neptune: A graph database service designed for handling highly connected data. It uses SPARQL to query graph data efficiently, making it ideal for mapping IT infrastructure and identifying relationships that pose security risks.
Amazon Redshift: A data warehouse solution optimized for complex queries on large datasets but not specifically for graph data.
Amazon DynamoDB: A NoSQL database that uses PartiQL for querying, but it is not optimized for complex relationships in graph data.
Best Option for Mapping and Querying IT Infrastructure:
Amazon Neptune provides the most suitable solution with the least operational overhead. It is purpose-built for graph data and enables efficient querying of complex relationships to identify security risks.
Reference: Amazon Neptune
Querying with SPARQL
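As an illustration, here is a minimal Python sketch that runs a SPARQL query against a Neptune cluster's SPARQL HTTP endpoint. The endpoint URL, the `infra:` prefix, and the predicate and class names (`infra:dependsOn`, `infra:SecurityRisk`) are hypothetical and depend on how the infrastructure map is modeled in RDF; the sketch also assumes the caller runs inside the cluster's VPC and that IAM database authentication is disabled.

```python
import requests

# Hypothetical Neptune cluster endpoint; replace with your cluster's address.
NEPTUNE_SPARQL_URL = (
    "https://my-neptune-cluster.cluster-abc123.us-east-1"
    ".neptune.amazonaws.com:8182/sparql"
)

# Example SPARQL query: find resources that depend on anything modeled
# as a known security risk. The vocabulary is illustrative.
query = """
PREFIX infra: <http://example.com/infra#>
SELECT ?resource ?risk
WHERE {
    ?resource infra:dependsOn ?risk .
    ?risk a infra:SecurityRisk .
}
"""

# Neptune accepts SPARQL queries as an HTTP POST with a form-encoded body.
response = requests.post(NEPTUNE_SPARQL_URL, data={"query": query})
response.raise_for_status()

# SPARQL JSON results: one binding per matched (resource, risk) pair.
for binding in response.json()["results"]["bindings"]:
    print(binding["resource"]["value"], "->", binding["risk"]["value"])
```

Because the relationships are first-class graph edges, a query like this traverses dependencies directly instead of joining tables, which is the operational advantage Neptune has here over the relational options.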
A developer is creating an ecommerce workflow in an AWS Step Functions state machine that includes an HTTP Task state. The task passes shipping information and order details to an endpoint. The developer needs to test the workflow to confirm that the HTTP headers and body are correct and that the responses meet expectations.
Which solution will meet these requirements?
- A . Use the TestState API to invoke only the HTTP Task. Set the inspection level to TRACE.
- B . Use the TestState API to invoke the state machine. Set the inspection level to DEBUG.
- C . Use the data flow simulator to invoke only the HTTP Task. View the request and response data.
- D . Change the log level of the state machine to ALL. Run the state machine.
A
Explanation:
TestState API with TRACE Inspection:
The TestState API tests a single state in isolation, without creating or running the full state machine. For HTTP Task states, setting the inspection level to TRACE exposes the raw HTTP request and response, including headers and body, which is exactly what the developer needs to verify.
Incorrect Options Analysis:
Option B: The TestState API tests one state at a time, not an entire state machine, and the DEBUG inspection level does not reveal the raw HTTP request and response.
Option C: The data flow simulator only models input and output processing (paths, filters, and templates); it does not send real HTTP requests or show live responses.
Option D: Setting the log level to ALL requires running the whole state machine and produces execution history logs, which is more overhead than testing the single HTTP Task directly.
Reference: Testing a state in Step Functions with the TestState API
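For reference, a minimal boto3 sketch of calling TestState on a single HTTP Task with TRACE inspection. The API endpoint, EventBridge connection ARN, role ARN, and input payload are all placeholders, and the task definition is a trimmed-down example rather than a production state.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Definition of the single HTTP Task state to test, taken from the state
# machine's ASL. Endpoint and connection ARN are placeholders.
http_task_definition = json.dumps({
    "Type": "Task",
    "Resource": "arn:aws:states:::http:invoke",
    "Parameters": {
        "ApiEndpoint": "https://api.example.com/shipping",
        "Method": "POST",
        "RequestBody.$": "$.order",
        "Authentication": {
            "ConnectionArn": "arn:aws:events:us-east-1:111122223333:connection/example/abc"
        },
    },
    "End": True,
})

# TRACE inspection reveals the raw HTTP request and response (headers and
# body) for HTTP Task states.
result = sfn.test_state(
    definition=http_task_definition,
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsTestRole",
    input=json.dumps({"order": {"orderId": "123", "address": "123 Main St"}}),
    inspectionLevel="TRACE",
)

print(result["status"])
print(result.get("inspectionData"))  # contains the request/response details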
A global company runs its workloads on AWS. The company’s application uses Amazon S3 buckets across AWS Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets daily. The company wants to identify all S3 buckets that are not versioning-enabled.
Which solution will meet these requirements?
- A . Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-enabled across Regions.
- B . Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions.
- C . Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across Regions.
- D . Create an S3 Multi-Region Access Point to identify all S3 buckets that are not versioning-enabled across Regions.
B
Explanation:
Amazon S3 Storage Lens:
S3 Storage Lens provides organization-wide visibility into object storage usage and activity trends. It can generate metrics and insights about your S3 buckets, including versioning status.
Configuration:
Enable S3 Storage Lens at the organization level.
Configure the dashboard to include the versioning status metric.
Identify Non-Versioned Buckets:
Use the S3 Storage Lens dashboard to filter and identify buckets that do not have versioning enabled. Storage Lens provides detailed insights and reports which can be used to enforce compliance and manage storage effectively.
Operational Efficiency: Using S3 Storage Lens provides a centralized, easy-to-use interface for monitoring bucket configurations across multiple Regions and accounts, reducing the need for custom scripts or manual checks.
Reference: Amazon S3 Storage Lens
S3 Storage Lens Metrics
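A hedged boto3 sketch of enabling such a Storage Lens configuration follows; the account ID and configuration ID are placeholders, and the exact metrics selections your dashboard needs may differ. The advanced data-protection metrics group is the one that covers bucket versioning status.

```python
import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"  # placeholder account ID

# Enable a Storage Lens configuration with advanced data-protection
# metrics, which include the versioning status of buckets.
s3control.put_storage_lens_configuration(
    ConfigId="versioning-audit",
    AccountId=account_id,
    StorageLensConfiguration={
        "Id": "versioning-audit",
        "IsEnabled": True,
        "AccountLevel": {
            # Data-protection metrics at the account level...
            "AdvancedDataProtectionMetrics": {"IsEnabled": True},
            # ...and per bucket, so non-versioned buckets can be filtered
            # out in the dashboard.
            "BucketLevel": {
                "AdvancedDataProtectionMetrics": {"IsEnabled": True},
            },
        },
    },
)
```

Once the configuration is active, the Storage Lens dashboard (or its exported metrics) can be filtered on versioning status across all Regions and accounts with no custom scripting.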
A company wants to create a payment processing application. The application must run when a payment record arrives in an existing Amazon S3 bucket. The application must process each payment record exactly once. The company wants to use an AWS Lambda function to process the payments.
Which solution will meet these requirements?
- A . Configure the existing S3 bucket to send object creation events to Amazon EventBridge. Configure EventBridge to route events to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
- B . Configure the existing S3 bucket to send object creation events to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the Lambda function to run when a new event arrives in the SNS topic.
- C . Configure the existing S3 bucket to send object creation events to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
- D . Configure the existing S3 bucket to send object creation events directly to the Lambda function. Configure the Lambda function to handle object creation events and to process the payments.
A
Explanation:
Exactly-once processing points to an Amazon SQS FIFO queue, which provides exactly-once processing and deduplication. S3 event notifications cannot target a FIFO queue directly, so the bucket must send object creation events to Amazon EventBridge, which can route them to the FIFO queue that triggers the Lambda function. Options B, C, and D rely on SNS, a standard SQS queue, or direct invocation, all of which provide at-least-once delivery and could process a payment record more than once.
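A minimal boto3 sketch of the EventBridge wiring, assuming the bucket already has EventBridge notifications turned on; the rule name, bucket name, queue ARN, and message group ID are illustrative.

```python
import json
import boto3

events = boto3.client("events")

# Hypothetical names/ARNs; replace with real resources.
BUCKET = "payments-bucket"
FIFO_QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:payments.fifo"

# Rule that matches object-creation events from the payments bucket.
events.put_rule(
    Name="payment-object-created",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": [BUCKET]}},
    }),
)

# Route matched events to the FIFO queue. FIFO targets require a
# message group ID, supplied via SqsParameters.
events.put_targets(
    Rule="payment-object-created",
    Targets=[{
        "Id": "payments-fifo",
        "Arn": FIFO_QUEUE_ARN,
        "SqsParameters": {"MessageGroupId": "payments"},
    }],
)
```

The Lambda function is then configured with the FIFO queue as its event source, and the queue's deduplication gives the exactly-once processing the requirement asks for.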
A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a building. The sensors will feed a database through HTTP requests that will add up usage for each tenant. The solutions architect must use managed services when possible. The workload will receive more features in the future as the solutions architect adds independent components.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an Amazon DynamoDB table.
- B . Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon S3 bucket to store the processed data.
- C . Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance.
- D . Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to store the processed data.
A
Explanation:
To use an event-driven programming model with AWS Lambda and reduce operational overhead, Amazon API Gateway and Amazon DynamoDB are suitable solutions. Amazon API Gateway can receive the data from the sensors and invoke AWS Lambda functions to process the data. AWS Lambda can run code without provisioning or managing servers, and scale automatically with the incoming requests. Amazon DynamoDB can store the data in a fast and flexible NoSQL database that can handle any amount of data with consistent performance.
Reference:
What Is Amazon API Gateway?
What Is AWS Lambda?
What Is Amazon DynamoDB?
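To make the flow concrete, here is a minimal Lambda handler sketch for option A. The table name (EnergyConsumption), its composite key (tenantId, hour), and the sensor payload shape are assumptions for illustration only.

```python
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("EnergyConsumption")  # hypothetical table name

def lambda_handler(event, context):
    """Handle a sensor reading posted through API Gateway.

    Assumes the sensor sends JSON such as
    {"tenantId": "tenant-42", "hour": "2024-01-01T13", "kwh": 1.5}.
    """
    body = json.loads(event["body"])

    # ADD atomically accumulates usage for the tenant/hour, so concurrent
    # sensor requests do not overwrite each other.
    table.update_item(
        Key={"tenantId": body["tenantId"], "hour": body["hour"]},
        UpdateExpression="ADD kwh :reading",
        ExpressionAttributeValues={":reading": Decimal(str(body["kwh"]))},
    )

    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

Because API Gateway, Lambda, and DynamoDB are all fully managed and scale automatically, new features can later be added as independent components without touching this ingestion path.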
A finance company collects streaming data for a real-time search and visualization system. They want to migrate to AWS using a native solution for ingest, search, and visualization.
- A . Use EC2 to ingest/process data to S3 → Athena + Managed Grafana
- B . Use EMR to ingest/process to Redshift → Redshift Spectrum + QuickSight
- C . Use EKS to ingest/process to DynamoDB → CloudWatch Dashboards
- D . Use Kinesis Data Streams → Amazon OpenSearch Service → Amazon QuickSight
D
Explanation:
Amazon Kinesis Data Streams is the native managed service for real-time ingestion, Amazon OpenSearch Service provides the search and analytics engine, and Amazon QuickSight provides managed visualization. The other options are either not real time (Athena over S3, Redshift Spectrum) or lack search capability (DynamoDB with CloudWatch dashboards).
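A short producer-side sketch of the ingestion step; the stream name and record shape are hypothetical, and a downstream consumer (for example, an ingestion pipeline) would read from the stream and index records into the OpenSearch domain.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Illustrative market-data record to ingest in real time.
record = {"symbol": "EXMP", "price": 101.25, "ts": "2024-01-01T13:00:00Z"}

kinesis.put_record(
    StreamName="market-data",           # hypothetical stream name
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["symbol"],      # spreads load across shards by symbol
)
```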
A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website, and the data must be sent to multiple target systems.
Which design should a solutions architect recommend?
- A . Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.
- B . Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume.
- C . Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
- D . Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.
D
Explanation:
https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html
https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html
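A minimal boto3 sketch of the fan-out wiring in option D: one SNS topic with one SQS queue subscribed per target system. The topic, queue, and system names are illustrative, and the sketch omits the SQS queue policy that must allow the topic to send messages to each queue.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic for "listing sold" events (hypothetical name).
topic_arn = sns.create_topic(Name="listing-sold")["TopicArn"]

for system in ["inventory", "billing", "analytics"]:
    queue_url = sqs.create_queue(QueueName=f"listing-sold-{system}")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Each queue subscribes to the topic, so every sold-listing event is
    # delivered to all target systems independently.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```

Each target system then processes its own queue (for example, via a Lambda function), so a slow or failing target never blocks the others.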
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage.
What should a solutions architect do to meet these requirements?
- A . Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
- B . Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.
- C . Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances.
- D . Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.
B
Explanation:
Amazon EFS allows the EC2 instances to read and write rapidly and concurrently to shared storage across two Availability Zones. Amazon EFS provides a scalable, elastic, and highly available file system that can be mounted from multiple EC2 instances at the same time. It supports high levels of throughput and IOPS with consistent low latencies, and it supports NFSv4 lock upgrading and downgrading, which enables high levels of concurrency.
Reference: Amazon EFS Features
Using Amazon EFS with Amazon EC2
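A small Python sketch of concurrent writes to the shared file system, assuming the EFS file system is already mounted at /mnt/efs on each instance (for example, via the amazon-efs-utils mount helper); the path and file name are hypothetical.

```python
import fcntl

# Hypothetical shared file on the EFS mount.
SHARED_FILE = "/mnt/efs/app/results.log"

def append_entry(line: str) -> None:
    """Append a line to a shared file safely from many instances at once."""
    with open(SHARED_FILE, "a") as f:
        # Advisory locking over NFSv4, which EFS supports, serializes
        # concurrent writers across all instances that mounted the
        # file system.
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```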
A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results does not matter. The application uses a monolithic architecture. The only way that the company can scale the application to meet increased demand is to increase the size of the instances.
The company’s developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service (Amazon ECS).
What should a solutions architect recommend for communication between the microservices?
- A . Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue.
- B . Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic. Add code to the data consumers to subscribe to the topic.
- C . Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add code to the data consumers to receive a data object that is passed from the Lambda function.
- D . Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.
A
Explanation:
Amazon SQS decouples the producers from the consumers: producer microservices send data to the queue, and consumer microservices poll and process it independently, so each side can scale on its own. Because the order of results does not matter, a standard queue is the right choice; it provides nearly unlimited throughput. A FIFO queue, by contrast, is limited to 300 messages per second (3,000 with batching of up to 10 messages per operation), enforces exactly-once processing and strict ordering, and requires a queue name ending in .fifo, none of which this workload needs.
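A minimal sketch of both sides of option A in boto3; the queue URL is a placeholder and the process function stands in for the application-specific work.

```python
import json
import boto3

sqs = boto3.client("sqs")
# Placeholder queue URL; replace with the real standard queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/data-processing"

def process(item: dict) -> None:
    """Application-specific processing (placeholder)."""
    print("processing", item)

# Producer side: a microservice sends a unit of work to the queue.
def send_work(item: dict) -> None:
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(item))

# Consumer side: another microservice polls, processes, then deletes.
def poll_and_process() -> None:
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty receives
    )
    for message in response.get("Messages", []):
        process(json.loads(message["Body"]))
        # Deleting only after successful processing gives at-least-once
        # semantics; a failed message becomes visible again and is retried.
        sqs.delete_message(
            QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
        )
```

Scaling then becomes a matter of running more consumer tasks on ECS rather than resizing a monolith.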