Practice Free SAA-C03 Exam Online Questions
A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?
- A . Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
- B . Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
- C . Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
- D . Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.
A
Explanation:
"In some cases, this connection alone is not enough. It is always better to guarantee a fallback connection as the backup of DX. There are several options, but implementing it with an AWS Site-To-Site VPN is a real cost-effective solution that can be exploited to reduce costs or, in the meantime, wait for the setup of a second DX."
https://www.proud2becloud.com/hybrid-cloud-networking-backup-aws-direct-connect-network-connection-with-aws-site-to-site-vpn/
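For illustration only (not part of the source answer), a minimal boto3 sketch of provisioning the backup Site-to-Site VPN described in option A. The customer gateway IP and the virtual private gateway ID are hypothetical placeholders, and route preferences must still favor the Direct Connect path so the VPN carries traffic only on failover.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Register the on-premises router as a customer gateway (the public IP is a placeholder).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,
    PublicIp="203.0.113.10",
    Type="ipsec.1",
)["CustomerGateway"]

# Create the Site-to-Site VPN connection against an existing virtual private gateway.
# "vgw-0123456789abcdef0" is a placeholder for the gateway already attached to the VPC.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId="vgw-0123456789abcdef0",
)["VpnConnection"]

print("Backup VPN connection:", vpn["VpnConnectionId"])
```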
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?
- A . Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
- B . Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
- C . Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
- D . Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
D
Explanation:
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
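As a rough sketch of the ingestion side of option D (the stream name and event fields are assumptions, not from the source), a producer can push each clickstream event into Kinesis Data Streams; Kinesis Data Firehose then delivers the stream to the S3 data lake, and Amazon Redshift loads the data for analysis.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# A single clickstream event; "website-clickstream" is a hypothetical stream name.
event = {
    "site_id": "site-042",
    "user_id": "u-12345",
    "page": "/checkout",
    "ts": "2024-01-01T12:00:00Z",
}

kinesis.put_record(
    StreamName="website-clickstream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],  # spreads load across shards by user
)
```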
A company needs to store confidential files on AWS. The company accesses the files every week. The company must encrypt the files by using envelope encryption, and the encryption keys must be rotated automatically. The company must have an audit trail to monitor encryption key usage.
Which combination of solutions will meet these requirements? (Select TWO.)
- A . Store the confidential files in Amazon S3.
- B . Store the confidential files in Amazon S3 Glacier Deep Archive.
- C . Use server-side encryption with customer-provided keys (SSE-C).
- D . Use server-side encryption with Amazon S3 managed keys (SSE-S3).
- E . Use server-side encryption with AWS KMS managed keys (SSE-KMS).
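The source does not include an answer or explanation for this question. Purely as an illustrative sketch of the SSE-KMS mechanics referenced in option E (the bucket name, object key, and key description are hypothetical), the following creates a KMS key with automatic rotation enabled and uploads an object with SSE-KMS; S3 then requests a per-object data key from KMS (envelope encryption), and each key use is recorded in AWS CloudTrail.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed KMS key and turn on automatic key rotation.
key = kms.create_key(Description="Key for confidential files")["KeyMetadata"]
kms.enable_key_rotation(KeyId=key["KeyId"])

# Upload with SSE-KMS; S3 asks KMS for a data key (envelope encryption),
# and every use of the KMS key is logged to AWS CloudTrail.
s3.put_object(
    Bucket="confidential-files-example",   # hypothetical bucket name
    Key="reports/2024-01-report.pdf",
    Body=b"...file bytes...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key["KeyId"],
)
```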
A company has an application that collects data from IoT sensors on automobiles. The data is streamed and stored in Amazon S3 through Amazon Kinesis Data Firehose. The data produces trillions of S3 objects each year. Each morning, the company uses the data from the previous 30 days to retrain a suite of machine learning (ML) models.
Four times each year, the company uses the data from the previous 12 months to perform analysis and train other ML models. The data must be available with minimal delay for up to 1 year. After 1 year, the data must be retained for archival purposes.
Which storage solution meets these requirements MOST cost-effectively?
- A . Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year
- B . Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after 1 year.
- C . Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
- D . Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive after 1 year.
D
Explanation:
- First 30 days: the data is accessed every morning (predictable and frequent) → S3 Standard.
- After 30 days: the data is accessed only four times a year → S3 Standard-Infrequent Access (S3 Standard-IA).
- After 1 year: the data is retained for archival only → S3 Glacier Deep Archive.
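A minimal sketch of the lifecycle rule behind option D, assuming a hypothetical bucket name: objects stay in S3 Standard for 30 days, transition to S3 Standard-IA, and move to S3 Glacier Deep Archive after one year.

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule matching option D: Standard for the first 30 days,
# Standard-IA from day 30, Glacier Deep Archive after 1 year.
s3.put_bucket_lifecycle_configuration(
    Bucket="iot-sensor-data-example",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```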
A company has 5 TB of datasets. The datasets consist of 1 million user profiles and 10 million connections. The user profiles have connections as many-to-many relationships. The company needs a performance-efficient way to find mutual connections up to five levels.
Which solution will meet these requirements?
- A . Use an Amazon S3 bucket to store the datasets. Use Amazon Athena to perform SQL JOIN queries to find connections.
- B . Use Amazon Neptune to store the datasets with edges and vertices. Query the data to find connections.
- C . Use an Amazon S3 bucket to store the datasets. Use Amazon QuickSight to visualize connections.
- D . Use Amazon RDS to store the datasets with multiple tables. Perform SQL JOIN queries to find connections.
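The source provides no explanation for this question. As a hedged sketch only of how a graph traversal might look against Neptune with the Gremlin Python client (the endpoint, the vertex label "User", the property "userId", and the edge label "CONNECTED_TO" are all assumed names), one could collect the user IDs reachable within five hops of each of two users and intersect the sets to approximate mutual connections.

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Placeholder Neptune endpoint.
conn = DriverRemoteConnection("wss://<neptune-endpoint>:8182/gremlin", "g")
g = traversal().withRemote(conn)

def reachable(user_id, depth=5):
    """User IDs reachable from user_id within `depth` connection hops."""
    return set(
        g.V().has("User", "userId", user_id)
         .repeat(__.both("CONNECTED_TO").dedup())
         .emit()
         .times(depth)
         .values("userId")
         .toList()
    )

# Intersect the two reachability sets to approximate mutual connections.
mutual = reachable("user-a") & reachable("user-b")
conn.close()
```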
A company stores data in an on-premises Oracle relational database. The company needs to make the data available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to connect its on-premises network to AWS.
The company must capture the changes that occur to the source database during the migration to Aurora PostgreSQL.
Which solution will meet these requirements?
- A . Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate the data.
- B . Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
- C . Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the ongoing changes.
- D . Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
C
Explanation:
For the migration of data from an on-premises Oracle database to Amazon Aurora PostgreSQL, this solution effectively handles schema conversion, data migration, and ongoing data replication. AWS Schema Conversion Tool (SCT): SCT is used to convert the Oracle database schema to a format compatible with Aurora PostgreSQL. This tool automatically converts the database schema and code objects, like stored procedures, to the target database engine.
AWS Database Migration Service (DMS): DMS is employed to perform the data migration. It supports both full-load migrations (for initial data transfer) and continuous replication of ongoing changes (Change Data Capture, or CDC). This ensures that any updates to the Oracle database during the migration are captured and applied to the Aurora PostgreSQL database, minimizing downtime.
Why Not Other Options?
Option A (SCT + DMS full-load only): This option does not capture ongoing changes, which is crucial for a live database migration to ensure data consistency.
Option B (DataSync + S3): AWS DataSync is more suited for file transfers rather than database migrations, and it doesn’t support ongoing change replication.
Option D (Snowball + S3): Snowball is typically used for large-scale data transfers that don’t require continuous synchronization, making it less suitable for this scenario where ongoing changes must be captured.
Reference: AWS Schema Conversion Tool - guidance on using SCT for database schema conversions.
AWS Database Migration Service - detailed documentation on using DMS for data migrations and ongoing replication.
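As a hedged illustration of the ongoing-replication piece (all ARNs and the task identifier below are hypothetical placeholders), a DMS task created with the "full-load-and-cdc" migration type performs the initial copy and then keeps applying changes captured on the Oracle source:

```python
import json
import boto3

dms = boto3.client("dms")

# "full-load-and-cdc" performs the initial data copy and then replicates
# ongoing changes (CDC) from the Oracle source to Aurora PostgreSQL.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:oracle-source",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:aurora-pg-target",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:replication-instance",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```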
A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?
- A . Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect to the database by using native Java Database Connectivity (JDBC) drivers.
- B . Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
- C . Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
- D . Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
D
Explanation:
An Amazon SQS queue decouples the function that receives data through API Gateway from the function that loads data into Aurora PostgreSQL. Bursts of incoming traffic are buffered in the queue, so the loading function can write to the database at a controlled rate and the bottleneck that forced large Lambda quota increases is avoided, with minimal configuration effort.
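A minimal sketch of the two functions in option D, assuming a QUEUE_URL environment variable and an API Gateway proxy integration (both assumptions, not stated in the source):

```python
import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # hypothetical environment variable

def receive_handler(event, context):
    """Lambda behind API Gateway: accept the payload and enqueue it."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}

def load_handler(event, context):
    """Lambda triggered by the SQS queue: write each record to Aurora PostgreSQL."""
    for record in event["Records"]:
        item = json.loads(record["body"])
        # insert_into_aurora(item)  # placeholder for the actual database write
```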
A company that uses AWS Organizations runs 150 applications across 30 different AWS accounts. The company used AWS Cost and Usage Report to create a new report in the management account. The report is delivered to an Amazon S3 bucket that is replicated to a bucket in the data collection account.
The company’s senior leadership wants to view a custom dashboard that provides NAT gateway costs each day starting at the beginning of the current month.
Which solution will meet these requirements?
- A . Share an Amazon QuickSight dashboard that includes the requested table visual. Configure QuickSight to use AWS DataSync to query the new report.
- B . Share an Amazon QuickSight dashboard that includes the requested table visual. Configure QuickSight to use Amazon Athena to query the new report.
- C . Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure CloudWatch to use AWS DataSync to query the new report.
- D . Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure CloudWatch to use Amazon Athena to query the new report.
B
Explanation:
Understanding the Requirement: Senior leadership wants a custom dashboard displaying NAT gateway costs daily, starting from the beginning of the current month.
Analysis of Options:
QuickSight with DataSync: While QuickSight is suitable for dashboards, DataSync is not designed for querying and analyzing data reports.
QuickSight with Athena: QuickSight can visualize data queried by Athena, which is designed to analyze data directly from S3.
CloudWatch with DataSync: CloudWatch is primarily for monitoring metrics, not for creating detailed cost analysis dashboards.
CloudWatch with Athena: CloudWatch dashboards are built around metrics and cannot use Athena query results as a data source for this kind of cost table, so this combination does not meet the requirement for a custom visual dashboard.
Best Solution for Visualization and Querying:
Amazon QuickSight with Athena: This combination allows for powerful data visualization and querying capabilities. QuickSight can create dynamic dashboards, while Athena efficiently queries the cost and usage report data stored in S3.
Reference: Amazon QuickSight
Amazon Athena
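As a hedged sketch of the Athena side (the database, table, column names, and results location are assumptions that depend on how the Cost and Usage Report table was defined), a query such as the following could back the QuickSight table visual with daily NAT gateway costs for the current month:

```python
import boto3

athena = boto3.client("athena")

# Daily NAT gateway cost for the current month from the Cost and Usage Report.
# Database, table, column names, and the output location are assumptions that
# depend on how the CUR table was defined for Athena.
query = """
SELECT DATE(line_item_usage_start_date) AS usage_day,
       SUM(line_item_unblended_cost)     AS nat_gateway_cost
FROM cur_database.cur_table
WHERE line_item_usage_type LIKE '%NatGateway%'
  AND DATE(line_item_usage_start_date) >= DATE_TRUNC('month', CURRENT_DATE)
GROUP BY 1
ORDER BY 1
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cur_database"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-example/"},
)
```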
A company is designing a new Amazon Elastic Kubernetes Service (Amazon EKS) deployment to host multi-tenant applications that use a single cluster. The company wants to ensure that each pod has its own hosted environment. The environments must not share CPU, memory, storage, or elastic network interfaces.
Which solution will meet these requirements?
- A . Use Amazon EC2 instances to host self-managed Kubernetes clusters. Use taints and tolerations to enforce isolation boundaries.
- B . Use Amazon EKS with AWS Fargate. Use Fargate to manage resources and to enforce isolation boundaries.
- C . Use Amazon EKS and self-managed node groups. Use taints and tolerations to enforce isolation boundaries.
- D . Use Amazon EKS and managed node groups. Use taints and tolerations to enforce isolation boundaries.
B
Explanation:
AWS Fargate provides per-pod isolation for CPU, memory, storage, and networking, making it ideal for multi-tenant use cases. Each pod that runs on Fargate gets its own compute environment and does not share CPU, memory, storage, or elastic network interfaces with other pods.
Reference: AWS Documentation - Amazon EKS with AWS Fargate
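For illustration, a minimal boto3 sketch of a Fargate profile (the cluster name, role ARN, subnets, and namespace are hypothetical placeholders); pods matching the selector are scheduled onto Fargate, where each pod runs with its own CPU, memory, storage, and elastic network interface:

```python
import boto3

eks = boto3.client("eks")

# Pods in the selected namespace are scheduled onto Fargate, which gives each
# pod its own compute, memory, storage, and ENI (no sharing between pods).
eks.create_fargate_profile(
    fargateProfileName="tenant-isolation",
    clusterName="multi-tenant-cluster",
    podExecutionRoleArn="arn:aws:iam::111122223333:role/eks-fargate-pod-execution",
    subnets=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
    selectors=[{"namespace": "tenant-a"}],
)
```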
A company runs a stateful production application on Amazon EC2 instances. The application requires at least two EC2 instances to always be running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The solutions architect creates an Auto Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
- A . Set the Auto Scaling group’s minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand Instance in a second Availability Zone.
- B . Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two On-Demand Instances in a second Availability Zone.
- C . Set the Auto Scaling group’s minimum capacity to two. Deploy four Spot Instances in one Availability Zone.
- D . Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two Spot Instances in a second Availability Zone.
A
Explanation:
Understanding the Requirement: The application is stateful and requires at least two EC2 instances to be running at all times, with a highly available and fault-tolerant architecture.
Analysis of Options:
Minimum capacity of two with instances in separate AZs: Ensures high availability by distributing instances across multiple AZs, fulfilling the requirement of always having two instances running.
Minimum capacity of four: Provides redundancy but is more than what is required and increases cost without additional benefit.
Spot Instances: Not suitable for a stateful application requiring guaranteed availability, as Spot Instances can be terminated at any time.
Combination of On-Demand and Spot Instances: Mixing instance types might provide cost savings but does not ensure the required availability for a stateful application.
Best Solution:
Minimum capacity of two with instances in separate AZs: This setup ensures high availability and meets the requirement with the least cost and complexity.
Reference: Amazon EC2 Auto Scaling
High Availability for Amazon EC2
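A minimal sketch of the Auto Scaling group in option A, assuming a hypothetical launch template and one subnet in each of two Availability Zones:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Minimum of two On-Demand instances spread across two Availability Zones
# (one subnet per AZ). Launch template ID and subnet IDs are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="stateful-app-asg",
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111bbb22222c,subnet-0ddd3333eee44444f",
)
```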