Practice Free SAA-C03 Exam Online Questions
A company’s reporting system delivers hundreds of .csv files to an Amazon S3 bucket each day. The company must convert these files to Apache Parquet format and must store the files in a transformed data bucket.
Which solution will meet these requirements with the LEAST development effort?
- A . Create an Amazon EMR cluster with Apache Spark installed. Write a Spark application to transform the data. Use EMR File System (EMRFS) to write files to the transformed data bucket.
- B . Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL) job to transform the data. Specify the transformed data bucket in the output step.
- C . Use AWS Batch to create a job definition with Bash syntax to transform the data and output the data to the transformed data bucket. Use the job definition to submit a job. Specify an array job as the job type.
- D . Create an AWS Lambda function to transform the data and output the data to the transformed data bucket. Configure an event notification for the S3 bucket. Specify the Lambda function as the destination for the event notification.
B
Explanation:
AWS Glue provides a serverless ETL solution requiring minimal development. Glue supports conversion to Parquet with managed jobs and integrates with S3 for output.
AWS Documentation
Reference: AWS Glue Overview
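For illustration, a minimal sketch of the Glue ETL job described above could look like the following PySpark script. The database, table, and bucket names are hypothetical placeholders for whatever the Glue crawler catalogs.

```python
# Minimal AWS Glue ETL sketch: read crawled CSV data and write it out as Parquet.
# The database, table, and bucket names are hypothetical placeholders.
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init("csv-to-parquet-job")

# Read the CSV data that the Glue crawler cataloged.
source = glue_context.create_dynamic_frame.from_catalog(
    database="reporting_db",
    table_name="daily_csv_files",
)

# Write the data to the transformed data bucket in Apache Parquet format.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://transformed-data-bucket/reports/"},
    format="parquet",
)

job.commit()
```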
A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.
Which storage solution meets these requirements?
- A . Amazon S3 Standard
- B . Amazon S3 Intelligent-Tiering
- C . Amazon S3 Glacier Deep Archive
- D . Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
B
Explanation:
Amazon S3 Intelligent-Tiering is the most cost-effective solution for this scenario, providing both high availability and durability while adjusting automatically to changing access patterns. It moves data across two access tiers: one optimized for frequent access and another for infrequent access, based on usage patterns. This tiering ensures that the company avoids paying for unused storage while also keeping frequently accessed data in a more accessible tier. Key AWS references and benefits of S3 Intelligent-Tiering:
High Durability and Availability: Amazon S3 offers 99.999999999% durability and 99.9% availability for objects stored, ensuring data is always protected.
Automatic Tiering: Data is automatically moved between tiers based on access patterns, making it ideal for workloads with unpredictable or variable access patterns.
No Retrieval Fees: Unlike S3 One Zone-IA or Glacier, there are no retrieval fees, making this more cost-effective in scenarios where access patterns vary over time.
AWS Documentation: According to the AWS Well-Architected Framework under the Cost Optimization Pillar, S3 Intelligent-Tiering is recommended for storage when access patterns change over time, as it minimizes costs while maintaining availability.
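As a small, hedged example, objects can be written directly to the Intelligent-Tiering storage class so S3 handles the tier transitions automatically; the bucket and key below are hypothetical.

```python
# Sketch: store user data in S3 Intelligent-Tiering so S3 moves it between
# access tiers automatically. Bucket name and key are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-user-data-bucket",
    Key="users/12345/profile.json",
    Body=b'{"name": "example"}',
    StorageClass="INTELLIGENT_TIERING",
)
```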
A company is planning to migrate an on-premises online transaction processing (OLTP) database that uses MySQL to an AWS managed database management system. Several reporting and analytics applications use the on-premises database heavily on weekends and at the end of each month. The cloud-based solution must be able to handle read-heavy surges during weekends and at the end of each month.
Which solution will meet these requirements?
- A . Migrate the database to an Amazon Aurora MySQL cluster. Configure Aurora Auto Scaling to use replicas to handle surges.
- B . Migrate the database to an Amazon EC2 instance that runs MySQL. Use an EC2 instance type that has ephemeral storage. Attach Amazon EBS Provisioned IOPS SSD (io2) volumes to the instance.
- C . Migrate the database to an Amazon RDS for MySQL database. Configure the RDS for MySQL database for a Multi-AZ deployment, and set up auto scaling.
- D . Migrate from the database to Amazon Redshift. Use Amazon Redshift as the database for both OLTP and analytics applications.
A
Explanation:
Amazon Aurora MySQL is a fully managed, MySQL-compatible database engine, so the migration requires minimal changes. Aurora Auto Scaling automatically adds Aurora Replicas when read load rises and removes them when it drops, which handles the read-heavy surges on weekends and at the end of each month. Option B is self-managed, Option C (RDS Multi-AZ) provides failover rather than read scaling, and Option D (Redshift) is a data warehouse that is not suited to OLTP workloads.
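As a hedged sketch of how the read-replica scaling in option A is configured, Aurora Auto Scaling is set up through Application Auto Scaling; the cluster identifier, capacity limits, and CPU target below are hypothetical.

```python
# Sketch: register an Aurora cluster's replica count with Application Auto Scaling
# and scale on average reader CPU. Cluster name, limits, and target are hypothetical.
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```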
A company hosts an application on AWS. The application gives users the ability to upload photos and store the photos in an Amazon S3 bucket. The company wants to use Amazon CloudFront and a custom domain name to upload the photo files to the S3 bucket in the eu-west-1 Region.
Which solution will meet these requirements? (Select TWO.)
- A . Use AWS Certificate Manager (ACM) to create a public certificate in the us-east-1 Region. Use the certificate in CloudFront.
- B . Use AWS Certificate Manager (ACM) to create a public certificate in eu-west-1. Use the certificate in CloudFront.
- C . Configure Amazon S3 to allow uploads from CloudFront. Configure S3 Transfer Acceleration.
- D . Configure Amazon S3 to allow uploads from CloudFront origin access control (OAC).
- E . Configure Amazon S3 to allow uploads from CloudFront. Configure an Amazon S3 website endpoint.
A, D
Explanation:
To upload photos to an S3 bucket using Amazon CloudFront with a custom domain name, the following components are required:
ACM certificate in us-east-1 (Option A): When a CloudFront distribution uses a custom domain name with HTTPS, the SSL/TLS certificate must be issued in the us-east-1 Region, regardless of the Region that hosts the S3 bucket. AWS Certificate Manager (ACM) handles the provisioning, management, and renewal of public certificates, making this a cost-effective and low-maintenance choice.
Origin access control (Option D): Configuring the S3 bucket to accept uploads only from the CloudFront origin access control (OAC) lets the distribution sign PUT requests to the bucket, so users upload through CloudFront while the bucket itself stays closed to direct public access.
Option B (ACM in eu-west-1): CloudFront does not accept certificates created outside us-east-1.
Option C (S3 Transfer Acceleration): Transfer Acceleration uses a separate accelerated S3 endpoint and is not required when uploads already travel over CloudFront's edge network.
Option E (S3 website endpoint): Website endpoints serve read-only content over HTTP and do not support uploads or OAC.
AWS Reference: Using ACM with CloudFront
Restricting access to an Amazon S3 origin (CloudFront origin access control)
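To make the two pieces concrete, the sketch below requests the certificate in us-east-1 and attaches a bucket policy that allows uploads only from the CloudFront distribution's OAC. The domain name, bucket name, account ID, and distribution ID are hypothetical placeholders.

```python
# Sketch: request the CloudFront certificate in us-east-1 and grant the
# distribution (via OAC) permission to put objects into the bucket.
# Domain, bucket, account ID, and distribution ID are hypothetical.
import json
import boto3

# CloudFront only accepts ACM certificates issued in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")
acm.request_certificate(
    DomainName="photos.example.com",
    ValidationMethod="DNS",
)

# Bucket policy that allows only the CloudFront distribution to upload and read.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::example-photo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
                }
            },
        }
    ],
}

s3 = boto3.client("s3", region_name="eu-west-1")
s3.put_bucket_policy(Bucket="example-photo-bucket", Policy=json.dumps(bucket_policy))
```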
A company is building a gaming application that needs to send unique events to multiple leaderboards, player matchmaking systems, and authentication services concurrently. The company requires an AWS-based event-driven system that delivers events in order and supports a publish-subscribe model. The gaming application must be the publisher, and the leaderboards, matchmaking systems, and authentication services must be the subscribers.
Which solution will meet these requirements?
- A . Amazon EventBridge event buses
- B . Amazon Simple Notification Service (Amazon SNS) FIFO topics
- C . Amazon Simple Notification Service (Amazon SNS) standard topics
- D . Amazon Simple Queue Service (Amazon SQS) FIFO queues
B
Explanation:
The requirement is an event-driven pub/sub system that guarantees ordered delivery of events.
Amazon SNS FIFO topics provide the publish-subscribe model along with FIFO (First-In-First-Out)
delivery and exactly-once message processing, ensuring ordered delivery to multiple subscribers.
Option A, EventBridge, provides event buses but does not guarantee event ordering across multiple subscribers.
Option C (SNS standard topics) provides pub/sub but without ordering guarantees.
Option D (SQS FIFO queues) guarantees ordering but is a point-to-point queue service, not a pub/sub model. Thus, Amazon SNS FIFO topics meet the requirements for ordered pub/sub messaging.
Reference: Amazon SNS FIFO Topics (https://docs.aws.amazon.com/sns/latest/dg/fifo-topics.html)
Amazon EventBridge (https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html)
AWS Well-Architected Framework ― Performance Efficiency Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
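A brief sketch of the FIFO pub/sub setup follows; the topic name, message group, and payload are hypothetical, and the subscribers would typically be SQS FIFO queues owned by the leaderboard, matchmaking, and authentication services.

```python
# Sketch: create an SNS FIFO topic and publish an ordered game event.
# Topic name, group ID, and payload are hypothetical.
import json
import boto3

sns = boto3.client("sns")

topic = sns.create_topic(
    Name="game-events.fifo",  # FIFO topic names must end in .fifo
    Attributes={
        "FifoTopic": "true",
        "ContentBasedDeduplication": "true",
    },
)

sns.publish(
    TopicArn=topic["TopicArn"],
    Message=json.dumps({"event": "match_won", "player": "player-1"}),
    MessageGroupId="player-1",  # ordering is preserved within a message group
)
```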
A company needs to provide a team of contractors with temporary access to the company’s AWS resources for a short-term project. The contractors need different levels of access to AWS services. The company needs to revoke permissions for all the contractors when the project is finished.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use AWS IAM to create a user account for each contractor. Attach policies that define access levels for the contractors to the user accounts. Manually deactivate the accounts when the project is finished.
- B . Use AWS Security Token Service (AWS STS) to generate temporary credentials for the contractors. Provide the contractors access based on predefined roles. Set the access to automatically expire when the project is finished.
- C . Configure AWS Config rules to monitor the contractors’ access patterns. Use AWS Config rules to automatically revoke permissions that are not in use or that are too permissive.
- D . Use AWS CloudTrail and custom Amazon EventBridge triggers to audit the contractors’ actions. Adjust the permissions for each contractor based on activity logs.
B
Explanation:
AWS STS issues temporary credentials with automatically expiring permissions based on IAM roles.
This eliminates the need to manually manage or deactivate IAM users.
“You can use AWS STS to grant temporary security credentials that automatically expire after a specified duration.”
― Temporary Security Credentials
This is the least operational overhead and follows AWS best practices for short-term access.
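As a hedged illustration, a contractor session might be issued by assuming a predefined role with a short duration; the role ARN and session name are hypothetical.

```python
# Sketch: issue short-lived credentials for a contractor by assuming a
# predefined role. Role ARN, session name, and duration are hypothetical.
import boto3

sts = boto3.client("sts")

response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ContractorReadOnly",
    RoleSessionName="contractor-jane",
    DurationSeconds=3600,  # credentials expire automatically after one hour
)

credentials = response["Credentials"]
# The contractor works with these temporary keys; nothing has to be
# deactivated manually when the project ends.
session = boto3.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
```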
A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead.
Which solution will meet these requirements?
- A . Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
- B . Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.
- C . Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.
- D . Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.
D
Explanation:
EKS lets teams “run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane,” and with AWS Fargate, you “run Kubernetes pods without managing servers,” reducing operational overhead for worker nodes. For the data layer, Amazon DocumentDB (with MongoDB compatibility) “supports the MongoDB API and drivers,” allowing existing MongoDB applications to work without application changes, while the service “automatically scales storage,” provides “high availability across multiple Availability Zones,” automatic backups, and patching. Because the company cannot change code or deployment methods, keeping Kubernetes (EKS) and the MongoDB API (DocumentDB compatibility) is essential. Options A, B, and C change the container orchestrator (to Amazon ECS), the database API (to DynamoDB), or both, which would require code or deployment changes.
Option A also leaves you managing EC2 nodes and a self-managed MongoDB on EC2, increasing ops burden. Therefore, EKS on Fargate + Amazon DocumentDB minimizes operations and preserves compatibility.
Reference: Amazon EKS User Guide ― “What is Amazon EKS,” “Fargate for EKS (serverless pods)”; Amazon DocumentDB ― “MongoDB compatibility,” “High availability and automatic scaling”; AWS Well-Architected ― Operational Excellence (use managed services).
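For illustration only, adding a Fargate profile to an existing EKS cluster might look like the sketch below; the cluster name, role ARN, subnets, and namespace are hypothetical placeholders.

```python
# Sketch: add a Fargate profile so pods in the application namespace run
# without managed worker nodes. All names, ARNs, and subnet IDs are hypothetical.
import boto3

eks = boto3.client("eks")

eks.create_fargate_profile(
    fargateProfileName="app-profile",
    clusterName="migrated-cluster",
    podExecutionRoleArn="arn:aws:iam::111122223333:role/EKSFargatePodExecutionRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],
    selectors=[{"namespace": "app"}],  # pods in this namespace run on Fargate
)
```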
A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.
The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an S3 Lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation.
- B . Create an S3 Lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation.
- C . Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation.
- D . Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation.
C
Explanation:
Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still providing high availability and durability.
Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and reduces storage costs significantly.
Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures automatic management of the data lifecycle, moving files to a lower-cost storage class without manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is no longer needed.
Reference: Amazon S3 Storage Classes
S3 Lifecycle Configuration
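The lifecycle rule described above could be expressed as the following sketch; the bucket name is hypothetical, and 4 years is approximated as 1,460 days.

```python
# Sketch: transition objects to S3 Standard-IA after 30 days and delete them
# after roughly 4 years (1460 days). Bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-generated-files",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-ia-then-delete",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 1460},
            }
        ]
    },
)
```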
A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest.
Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?
- A . Deploy AWS Certificate Manager (ACM) to generate certificates. Use the certificates to encrypt the database volume.
- B . Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt database volumes.
- C . Configure SSL encryption using AWS Key Management Service (AWS KMS) keys to encrypt database volumes.
- D . Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
D
Explanation:
EBS Encryption:
Default EBS Encryption: Can be enabled for new EBS volumes.
Use of AWS KMS: Specify AWS KMS keys to handle encryption and decryption of data transparently.
Amazon RDS Encryption:
RDS Encryption: Encrypts the underlying storage for RDS instances using AWS KMS.
Configuration: Enable encryption when creating the RDS instance. An existing unencrypted instance cannot be modified in place; instead, copy a snapshot with encryption enabled and restore from the encrypted copy.
Least Amount of Changes:
Both EBS and RDS support seamless encryption with AWS KMS, requiring minimal changes to the existing infrastructure.
Enables compliance with regulatory requirements without modifying the application.
Operational Efficiency: Using AWS KMS for both EBS and RDS ensures a consistent, managed approach to encryption, simplifying key management and enhancing security.
Reference: Amazon EBS Encryption
Amazon RDS Encryption
AWS Key Management Service
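A minimal sketch of the two encryption settings follows; the KMS key alias and database settings are hypothetical, and the password placeholder must be replaced.

```python
# Sketch: enable default EBS encryption with a KMS key and create an RDS
# instance with storage encryption. Key alias and DB settings are hypothetical.
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# New EBS volumes in this Region will be encrypted with the chosen KMS key.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId="alias/app-data-key")

# Create the RDS instance with storage encryption enabled at creation time.
rds.create_db_instance(
    DBInstanceIdentifier="sensitive-app-db",
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    StorageEncrypted=True,
    KmsKeyId="alias/app-data-key",
)
```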
A law firm needs to make hundreds of files readable for the general public. The law firm must prevent members of the public from modifying or deleting the files before a specified future date.
Which solution will meet these requirements MOST securely?
- A . Upload the files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the specified date.
- B . Create a new Amazon S3 bucket. Enable S3 Versioning. Use S3 Object Lock and set a retention period based on the specified date. Create an Amazon CloudFront distribution to serve content from the bucket. Use an S3 bucket policy to restrict access to the CloudFront origin access control (OAC).
- C . Create a new Amazon S3 bucket. Enable S3 Versioning. Configure an event trigger to run an AWS Lambda function if a user modifies or deletes an object. Configure the Lambda function to replace the modified or deleted objects with the original versions of the objects from a private S3 bucket.
- D . Upload the files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period based on the specified date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
B
Explanation:
S3 Object Lock: Enables Write Once Read Many (WORM) protection for data, preventing objects from being deleted or modified for a set retention period.
S3 Versioning: Helps maintain object versions and ensures a recovery path for accidental overwrites.
CloudFront Distribution: Ensures secure and efficient public access by serving content through an edge-optimized delivery network while protecting S3 data with OAC.
Bucket Policy for OAC: Restricts public access to only the CloudFront origin, ensuring maximum security.
Reference: Amazon S3 Object Lock Documentation
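As a hedged sketch, the bucket could be created with Object Lock enabled and given a default retention period that covers the specified date; the bucket name and retention length are hypothetical.

```python
# Sketch: create a bucket with S3 Object Lock and a default retention period.
# Bucket name and retention length are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.create_bucket(
    Bucket="example-public-case-files",
    ObjectLockEnabledForBucket=True,  # also turns on S3 Versioning
)

s3.put_object_lock_configuration(
    Bucket="example-public-case-files",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",  # blocks modification and deletion until expiry
                "Days": 365,
            }
        },
    },
)
```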
