Practice Free SAA-C03 Exam Online Questions
A company stores petabytes of historical medical information on premises. The company has a process to manage encryption of the data to comply with regulations. The company needs a cloud-based solution for data backup, recovery, and archiving. The company must retain control over the encryption key material.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Create an AWS Key Management Service (AWS KMS) key without key material. Import the company’s key material into the KMS key.
- B . Create an AWS Key Management Service (AWS KMS) encryption key that contains key material generated by AWS KMS.
- C . Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage. Use S3 Bucket Keys with AWS Key Management Service (AWS KMS) keys.
- D . Store the data in an Amazon S3 Glacier storage class. Use server-side encryption with customer-provided keys (SSE-C).
- E . Store the data in AWS Snowball devices. Use server-side encryption with AWS KMS keys (SSE-KMS).
A, D
Explanation:
Option A: Creating a KMS key without key material and importing the company's own key material (bring your own key) keeps the encryption key material under the company's control; the company can delete or re-import it at any time.
Option D: The S3 Glacier storage classes provide low-cost archival storage for backups, and server-side encryption with customer-provided keys (SSE-C) means the company supplies the encryption key with each request and AWS never stores it.
Reference: AWS Key Management Service Importing Keys Documentation, S3 Encryption Documentation
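A minimal boto3 sketch of the two mechanisms, with hypothetical bucket and key names: creating a KMS key without key material for import (option A) and uploading an archive object to a Glacier storage class with SSE-C (option D).

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Option A: create a KMS key with no key material, then fetch the parameters
# needed to wrap and import the company's own key material (BYOK).
key = kms.create_key(Origin="EXTERNAL", Description="Imported key material")
params = kms.get_parameters_for_import(
    KeyId=key["KeyMetadata"]["KeyId"],
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)
# kms.import_key_material(...) is then called with the wrapped key material
# and the ImportToken returned above.

# Option D: upload an archive object with SSE-C, so only the company holds
# the key (S3 stores just a hash of it for validation).
with open("archive.dat", "rb") as data, open("customer.key", "rb") as keyfile:
    s3.put_object(
        Bucket="example-medical-archive",   # hypothetical bucket
        Key="backups/archive.dat",
        Body=data,
        StorageClass="GLACIER",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=keyfile.read(),      # 256-bit customer-provided key
    )
```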
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.
What is the MOST cost-effective solution to connect these VPCs?
- A . Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication.
- B . Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication.
- C . Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication.
- D . Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
C
Explanation:
To connect two VPCs in the same Region and the same AWS account, VPC peering is the most cost-effective solution. VPC peering routes traffic directly between the VPCs without a gateway, VPN connection, or AWS Transit Gateway, and there is no hourly charge for the peering connection itself. Data transferred over a peering connection within the same Availability Zone is free, and cross-AZ traffic is billed at standard intra-Region rates, which is still cheaper than the per-GB processing charge of Transit Gateway or the hourly cost of a Site-to-Site VPN.
Reference:
What Is VPC Peering?
VPC Peering Pricing
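A minimal boto3 sketch of the peering setup, with hypothetical VPC, route table, and CIDR values: create and accept the peering connection, then add a route in each VPC's route table.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create a peering connection between two VPCs in the same account and Region.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111",       # hypothetical requester VPC
    PeerVpcId="vpc-0bbb2222",   # hypothetical accepter VPC
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Same-account peering still has to be accepted explicitly.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-0aaa1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0bbb2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```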
A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no machine learning (ML) experience and wants to use a managed service for the training and predictions.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
- B . Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
- C . Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs.
- D . Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
- E . Train an Amazon Forecast predictor by using the historical data in the S3 bucket.
D, E
Explanation:
Amazon Forecast is a fully managed time-series forecasting service that requires no ML experience. The company can import the historical values directly from the S3 bucket into a Forecast dataset and train a predictor (option E); Forecast handles algorithm selection, training, and tuning automatically. An AWS Lambda function exposed through a function URL can then call the Forecast query API against the trained predictor to return predictions on demand (option D). Amazon SageMaker (options A, B, and C) could also produce forecasts, but it requires the company to choose, train, and deploy a model itself, which conflicts with the requirement that the team has no ML experience.
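A minimal boto3 sketch of the Forecast flow under the assumptions above, with hypothetical names, ARNs, and S3 path: import the historical data from S3, train an auto predictor, and query the forecast from a Lambda handler behind a function URL.

```python
import boto3

forecast = boto3.client("forecast")
forecast_query = boto3.client("forecastquery")

# Option E: import the historical values from S3 and train a predictor.
forecast.create_dataset_import_job(
    DatasetImportJobName="monthly-usage-import",
    DatasetArn="arn:aws:forecast:us-east-1:111122223333:dataset/manufacturing",  # hypothetical
    DataSource={"S3Config": {
        "Path": "s3://example-historical-data/usage.csv",            # hypothetical
        "RoleArn": "arn:aws:iam::111122223333:role/ForecastS3Role",  # hypothetical
    }},
)
forecast.create_auto_predictor(
    PredictorName="manufacturing-predictor",
    ForecastHorizon=1,
    ForecastFrequency="M",   # monthly predictions
    DataConfig={"DatasetGroupArn":
                "arn:aws:forecast:us-east-1:111122223333:dataset-group/manufacturing"},
)

# Option D: a Lambda function (exposed via a function URL) queries the forecast.
def lambda_handler(event, context):
    result = forecast_query.query_forecast(
        ForecastArn="arn:aws:forecast:us-east-1:111122223333:forecast/manufacturing",  # hypothetical
        Filters={"item_id": event["queryStringParameters"]["resource"]},
    )
    return {"statusCode": 200, "body": str(result["Forecast"]["Predictions"])}
```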
A company has a serverless website with millions of objects in an Amazon S3 bucket. The company uses the S3 bucket as the origin for an Amazon CloudFront distribution. The company did not set encryption on the S3 bucket before the objects were loaded. A solutions architect needs to enable encryption for all existing objects and for all objects that are added to the S3 bucket in the future.
Which solution will meet these requirements with the LEAST amount of effort?
- A . Create a new S3 bucket. Turn on the default encryption settings for the new S3 bucket. Download all existing objects to temporary local storage. Upload the objects to the new S3 bucket.
- B . Turn on the default encryption settings for the S3 bucket. Use the S3 Inventory feature to create a .csv file that lists the unencrypted objects. Run an S3 Batch Operations job that uses the copy command to encrypt those objects.
- C . Create a new encryption key by using AWS Key Management Service (AWS KMS). Change the settings on the S3 bucket to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS). Turn on versioning for the S3 bucket.
- D . Navigate to Amazon S3 in the AWS Management Console. Browse the S3 bucket’s objects. Sort by the encryption field. Select each unencrypted object. Use the Modify button to apply default encryption settings to every unencrypted object in the S3 bucket.
B
Explanation:
Turning on default encryption ensures that every object added to the bucket in the future is encrypted automatically. For the millions of existing objects, an S3 Inventory report identifies the unencrypted objects, and an S3 Batch Operations copy job rewrites them in place so the default encryption applies, with no custom code, no new bucket, and no manual per-object work.
Reference: https://spin.atomicobject.com/2020/09/15/aws-s3-encrypt-existing-objects/
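A minimal boto3 sketch of the first half of option B, with a hypothetical bucket name: enabling bucket default encryption so all future uploads are encrypted. The existing objects are then re-encrypted by an S3 Batch Operations copy job driven by the S3 Inventory manifest.

```python
import boto3

s3 = boto3.client("s3")

# Turn on default encryption: every object written from now on is encrypted
# with SSE-S3 (AES-256) unless the request specifies another method.
s3.put_bucket_encryption(
    Bucket="example-website-assets",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Existing unencrypted objects are handled separately: an S3 Inventory report
# lists them, and an S3 Batch Operations "Copy" job (created with
# s3control.create_job or in the console) copies each object onto itself so
# the new default encryption is applied.
```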
A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the logs from all accounts in a centralized account. The company has created an Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90 days after creation.
Which solution will meet these requirements MOST cost-effectively?
- A . Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
- B . Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
- C . Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
- D . Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C
Explanation:
Understanding the Requirement: The logs must be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup only, and deleted 90 days after creation.
Analysis of Options:
Transition to S3 Standard after 30 days (A): The objects are already in S3 Standard, so this rule adds no savings.
Transition to S3 Standard-IA at 30 days, then Glacier Flexible Retrieval at 90 days (B): The Glacier transition is scheduled for the same day the objects expire, so it adds cost and complexity without benefit, and Standard-IA costs more than Glacier Flexible Retrieval for data that is kept only as a backup.
Transition to S3 Glacier Flexible Retrieval after 30 days (C): The logs stay in S3 Standard for the first 30 days of frequent analysis, then move to the lowest-cost archival class for the 60-day backup period before deletion. Even with Glacier Flexible Retrieval's 90-day minimum storage charge, this is cheaper than keeping the objects in S3 One Zone-IA for 60 days.
Transition to S3 One Zone-IA at 30 days, then Glacier Flexible Retrieval at 90 days (D): One Zone-IA costs more per GB than Glacier Flexible Retrieval, stores data in a single Availability Zone, and the Glacier transition at 90 days coincides with deletion, so it never takes effect.
Best Solution:
Transition to S3 Glacier Flexible Retrieval after 30 days and expire objects at 90 days (C): This meets the availability requirement for the first 30 days, provides the lowest-cost retention for the backup period, and deletes the logs on schedule.
Reference: Amazon S3 Storage Classes
Managing your storage lifecycle
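A minimal boto3 sketch of the lifecycle rule for option C, with a hypothetical bucket name: transition to Glacier Flexible Retrieval at 30 days and expire the objects at 90 days.

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: Glacier Flexible Retrieval (API storage class "GLACIER")
# after 30 days, then delete 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-central-logs",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-30d-glacier-90d-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```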
A company wants to implement a data lake in the AWS Cloud. The company must ensure that only specific teams have access to sensitive data in the data lake. The company must have row-level access control for the data lake.
Which solution will meet these requirements?
- A . Use Amazon RDS to store the data. Use IAM roles and permissions for data governance and access control.
- B . Use Amazon Redshift to store the data. Use IAM roles and permissions for data governance and access control.
- C . Use Amazon S3 to store the data. Use AWS Lake Formation for data governance and access control.
- D . Use AWS Glue Catalog to store the data. Use AWS Glue DataBrew for data governance and access control.
C
Explanation:
AWS Lake Formation is purpose-built for governing data lakes on Amazon S3. It centralizes permissions management and supports fine-grained access control, including row-level and cell-level security through data filters, so specific teams can be granted access to only the rows they are allowed to see. Amazon RDS and Amazon Redshift are databases rather than data lake storage, and AWS Glue DataBrew is a data preparation tool, not a governance and access control service.
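A minimal boto3 sketch of row-level control in Lake Formation, with hypothetical account ID, database, table, filter, and role names: define a data cells filter with a row filter expression and grant SELECT through it to one team's role.

```python
import boto3

lf = boto3.client("lakeformation")

ACCOUNT_ID = "111122223333"  # hypothetical

# Row-level filter: this team may only see rows where department = 'oncology'.
lf.create_data_cells_filter(
    TableData={
        "TableCatalogId": ACCOUNT_ID,
        "DatabaseName": "medical_lake",        # hypothetical Glue database
        "TableName": "patient_records",        # hypothetical table
        "Name": "oncology_rows_only",
        "RowFilter": {"FilterExpression": "department = 'oncology'"},
        "ColumnWildcard": {},                  # all columns; rows restricted
    }
)

# Grant the team's IAM role SELECT only through that filter.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier":
               f"arn:aws:iam::{ACCOUNT_ID}:role/OncologyTeam"},   # hypothetical role
    Resource={"DataCellsFilter": {
        "TableCatalogId": ACCOUNT_ID,
        "DatabaseName": "medical_lake",
        "TableName": "patient_records",
        "Name": "oncology_rows_only",
    }},
    Permissions=["SELECT"],
)
```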
A company wants to securely exchange data between its software as a service (SaaS) application Salesforce account and Amazon S3. The company must encrypt the data at rest by using AWS Key Management Service (AWS KMS) customer managed keys (CMKs). The company must also encrypt the data in transit. The company has enabled API access for the Salesforce account.
Which solution will meet these requirements with the LEAST development effort?
- A . Create AWS Lambda functions to transfer the data securely from Salesforce to Amazon S3.
- B . Create an AWS Step Functions workflow. Define the task to transfer the data securely from Salesforce to Amazon S3.
- C . Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.
- D . Create a custom connector for Salesforce to transfer the data securely from Salesforce to Amazon S3.
C
Explanation:
Amazon AppFlow is a fully managed integration service that transfers data securely between SaaS applications and AWS services. It supports Salesforce as a source and Amazon S3 as a destination, encrypts data at rest with AWS KMS customer managed keys, and encrypts data in transit with SSL/TLS. Because AppFlow provides a native Salesforce connector, the solution requires no custom code, which makes it the least development effort.
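A minimal boto3 sketch of such a flow under these assumptions, with a hypothetical connector profile, bucket, and KMS key ARN: Salesforce source, S3 destination, a single map-all task, and encryption with the customer managed key.

```python
import boto3

appflow = boto3.client("appflow")

appflow.create_flow(
    flowName="salesforce-accounts-to-s3",
    kmsArn="arn:aws:kms:us-east-1:111122223333:key/example-cmk-id",  # hypothetical CMK
    triggerConfig={"triggerType": "OnDemand"},
    sourceFlowConfig={
        "connectorType": "Salesforce",
        "connectorProfileName": "salesforce-prod",   # hypothetical connection profile
        "sourceConnectorProperties": {"Salesforce": {"object": "Account"}},
    },
    destinationFlowConfigList=[{
        "connectorType": "S3",
        "destinationConnectorProperties": {"S3": {
            "bucketName": "example-salesforce-exports",   # hypothetical bucket
            "s3OutputFormatConfig": {"fileType": "PARQUET"},
        }},
    }],
    tasks=[{
        "sourceFields": [],
        "taskType": "Map_all",                       # copy all fields as-is
        "connectorOperator": {"Salesforce": "NO_OP"},
        "taskProperties": {},
    }],
)
```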
A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). The application will also use an Amazon Aurora database. All data for the application must be encrypted at rest and in transit.
Which solution will meet these requirements?
- A . Use AWS Key Management Service (AWS KMS) certificates on the ALB to encrypt data in transit. Use AWS Certificate Manager (ACM) to encrypt the EBS volumes and Aurora database storage at rest.
- B . Use the AWS root account to log in to the AWS Management Console. Upload the company’s encryption certificates. While in the root account, select the option to turn on encryption for all data at rest and in transit for the account.
- C . Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate Manager (ACM) certificate to the ALB to encrypt data in transit.
- D . Use BitLocker to encrypt all data at rest. Import the company’s TLS certificate keys to AWS Key Management Service (AWS KMS). Attach the KMS keys to the ALB to encrypt data in transit.
C
Explanation:
This option is correct because it uses AWS Key Management Service (AWS KMS), which makes it easy to create and manage cryptographic keys and control their use across AWS services, to encrypt the EBS volumes and the Aurora database storage at rest with keys that you manage. It also uses AWS Certificate Manager (ACM), which provisions, manages, and deploys public and private SSL/TLS certificates for use with AWS services, and attaches an ACM certificate to the ALB so that connections between clients and the load balancer are encrypted in transit. Together these meet the requirement to encrypt all application data at rest and in transit.
Option A is incorrect because AWS KMS manages keys, not certificates, so "KMS certificates" cannot be attached to the ALB, and ACM issues certificates rather than encrypting EBS volumes or Aurora storage.
Option B is incorrect because the root account should not be used for routine administration, and there is no single account-wide setting that turns on encryption for all data at rest and in transit; encryption is configured per service and per resource.
Option D is incorrect because BitLocker is a Windows volume encryption feature and cannot encrypt Aurora database storage, and an ALB terminates HTTPS with an SSL/TLS certificate (for example from ACM), not with KMS keys.
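A minimal boto3 sketch of the pieces in option C, with hypothetical identifiers and ARNs: an encrypted EBS volume, an encrypted Aurora cluster, and an HTTPS listener on the ALB that uses an ACM certificate.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")
elbv2 = boto3.client("elbv2")

KMS_KEY = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"  # hypothetical CMK

# EBS volume encrypted at rest with the KMS key.
ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
                  Encrypted=True, KmsKeyId=KMS_KEY)

# Aurora cluster storage encrypted at rest with the same KMS key.
rds.create_db_cluster(
    DBClusterIdentifier="crm-db",          # hypothetical cluster
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,         # master password kept in Secrets Manager
    StorageEncrypted=True,
    KmsKeyId=KMS_KEY,
)

# HTTPS listener on the ALB terminates TLS with an ACM certificate (in transit).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/crm-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn":
                   "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn":
                     "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/crm-tg/def456"}],
)
```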
A company hosts a three-tier web application that includes a PostgreSQL database. The database stores the metadata from documents. The company searches the metadata for key terms to retrieve documents that the company reviews in a report each month. The documents are stored in Amazon S3. The documents are usually written only once, but they are updated frequently. The reporting process takes a few hours with the use of relational queries. The reporting process must not affect any document modifications or the addition of new documents.
What are the MOST operationally efficient solutions that meet these requirements? (Select TWO.)
- A . Set up a new Amazon DocumentDB (with MongoDB compatibility) cluster that includes a read replica. Scale the read replica to generate the reports.
- B . Set up a new Amazon RDS for PostgreSQL Reserved Instance and an On-Demand read replica. Scale the read replica to generate the reports.
- C . Set up a new Amazon Aurora PostgreSQL DB cluster that includes a Reserved Instance and an Aurora Replica. Issue queries to the Aurora Replica to generate the reports.
- D . Set up a new Amazon RDS for PostgreSQL Multi-AZ Reserved Instance. Configure the reporting module to query the secondary RDS node so that the reporting module does not affect the primary node.
- E . Set up a new Amazon DynamoDB table to store the documents. Use a fixed write capacity to support new document entries. Automatically scale the read capacity to support the reports.
B, C
Explanation:
These options are operationally efficient because they run the reporting queries against a read replica (an RDS for PostgreSQL read replica in option B, an Aurora Replica in option C), which offloads the reporting workload from the primary instance so document modifications and new document inserts are not affected. Reserved Instances reduce the cost of the always-on primary, while the replica can be scaled as needed for the monthly reporting run.
Option A is less suitable because Amazon DocumentDB is a MongoDB-compatible document database; the metadata and the reporting process rely on PostgreSQL relational queries, so migrating would require rewriting the application and the reports.
Option D is not valid because the standby instance in an RDS Multi-AZ DB instance deployment is not readable, so the reporting module cannot query the secondary node.
Option E is not valid because Amazon DynamoDB is a NoSQL database that does not support the relational queries the reporting process uses, and a fixed write capacity could throttle document updates.
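A minimal boto3 sketch for option B, with hypothetical instance identifiers: create an On-Demand read replica of the existing RDS for PostgreSQL instance and point the reporting job at the replica's endpoint.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary; the replica serves the reporting
# queries so the primary keeps handling document writes undisturbed.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="docs-metadata-reporting-replica",
    SourceDBInstanceIdentifier="docs-metadata-primary",   # hypothetical primary
    DBInstanceClass="db.r6g.xlarge",                      # sized for the report run
)

# Wait until the replica is available, then hand its endpoint to the reporting module.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="docs-metadata-reporting-replica"
)
replica = rds.describe_db_instances(
    DBInstanceIdentifier="docs-metadata-reporting-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```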
A company runs an application on Amazon EC2 instances. The instances need to access an Amazon RDS database by using specific credentials. The company uses AWS Secrets Manager to store the credentials that the EC2 instances must use.
Which solution will meet this requirement?
- A . Create an IAM role, and attach the role to each EC2 instance profile. Use an identity-based policy to grant the new IAM role access to the secret that contains the database credentials.
- B . Create an IAM user, and attach the user to each EC2 instance profile. Use a resource-based policy to grant the new IAM user access to the secret that contains the database credentials.
- C . Create a resource-based policy for the secret that contains the database credentials. Use EC2 Instance Connect to access the secret.
- D . Create an identity-based policy for the secret that contains the database credentials. Grant direct access to the EC2 instances.
A
Explanation:
IAM Role: Attaching an IAM role to an EC2 instance profile is a secure way to manage permissions without embedding credentials.
AWS Secrets Manager: Grants controlled access to database credentials and automatically rotates secrets if configured.
Identity-Based Policy: Ensures the IAM role has access only to the specific secret, enhancing security.
Reference: AWS Secrets Manager Documentation
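A minimal sketch under these assumptions, with a hypothetical role name, secret name, and secret ARN: an identity-based policy that lets the instance role read only the database secret, and application code on the EC2 instance retrieving the secret through the instance profile credentials.

```python
import json
import boto3

iam = boto3.client("iam")

# Identity-based policy attached to the EC2 instance role: read one secret only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": ("arn:aws:secretsmanager:us-east-1:111122223333:"
                     "secret:prod/app/db-credentials-AbCdEf"),   # hypothetical secret ARN
    }],
}
iam.put_role_policy(
    RoleName="AppInstanceRole",        # role attached via the instance profile (hypothetical)
    PolicyName="ReadDatabaseSecret",
    PolicyDocument=json.dumps(policy),
)

# On the EC2 instance, boto3 picks up the role's temporary credentials
# automatically from the instance profile; no keys are stored on the instance.
secrets = boto3.client("secretsmanager")
creds = json.loads(
    secrets.get_secret_value(SecretId="prod/app/db-credentials")["SecretString"]
)
# creds["username"] and creds["password"] are then used to connect to the RDS database.
```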