Practice Free AIF-C01 Exam Online Questions
A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.
Which solution will meet these requirements?
- A . Deploy optimized small language models (SLMs) on edge devices.
- B . Deploy optimized large language models (LLMs) on edge devices.
- C . Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
- D . Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
A
Explanation:
To achieve the lowest latency possible for inference on edge devices, deploying optimized small language models (SLMs) is the most effective solution. SLMs require fewer resources and have faster inference times, making them ideal for deployment on edge devices where processing power and memory are limited.
Option A (Correct): "Deploy optimized small language models (SLMs) on edge devices": This is the correct answer because SLMs provide fast inference with low latency, which is crucial for edge deployments.
Option B: "Deploy optimized large language models (LLMs) on edge devices" is incorrect because LLMs are resource-intensive and may not perform well on edge devices due to their size and computational demands.
Option C: "Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices" is incorrect because it introduces network latency due to the need for communication with a centralized server.
Option D: "Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices" is incorrect for the same reason, with even greater latency due to the larger model size.
AWS AI Practitioner
Reference: Optimizing AI Models for Edge Devices on AWS: AWS recommends using small, optimized models for edge deployments to ensure minimal latency and efficient performance.
A company wants to develop an AI assistant for employees to query internal data.
Which AWS service will meet this requirement?
- A . Amazon Rekognition
- B . Amazon Textract
- C . Amazon Lex
- D . Amazon Q Business
D
Explanation:
Comprehensive and detailed explanation from AWS AI documentation:
Amazon Q Business is a managed AI assistant designed to:
Allow employees to query internal enterprise data
Provide conversational answers
Respect enterprise security and access controls
AWS guidance positions Amazon Q Business as the solution for internal knowledge discovery and enterprise AI assistance.
Why the other options are incorrect:
Rekognition (A) analyzes images.
Textract (B) extracts text from documents.
Lex (C) builds conversational interfaces but does not provide enterprise data integration out of the box.
AWS AI document references:
Amazon Q Business Overview
Enterprise AI Assistants on AWS
Secure Access to Internal Data with AI
An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms.
What should the firm do when developing and deploying the LLM? (Select TWO.)
- A . Include fairness metrics for model evaluation.
- B . Adjust the temperature parameter of the model.
- C . Modify the training data to mitigate bias.
- D . Avoid overfitting on the training data.
- E . Apply prompt engineering techniques.
A,C
Explanation:
To implement a large language model (LLM) responsibly, the firm should focus on fairness and on mitigating bias, which are core practices for ethical AI deployment. Including fairness metrics in model evaluation (A) makes potential harms measurable, and modifying the training data to mitigate bias (C) addresses harms at their source. Adjusting the temperature parameter (B) and applying prompt engineering (E) tune output behavior and quality, and avoiding overfitting (D) is general ML practice; none of these directly addresses responsible AI concerns.
A company wants to build a lead prioritization application for its employees to contact potential customers. The application must give employees the ability to view and adjust the weights assigned to different variables in the model based on domain knowledge and expertise.
Which ML model type meets these requirements?
- A . Logistic regression model
- B . Deep learning model built on principal components
- C . K-nearest neighbors (k-NN) model
- D . Neural network
A
Explanation:
A logistic regression model computes a weighted sum of the input variables, so each variable's weight is explicit and interpretable. Employees can view these weights and adjust them based on domain knowledge and expertise. Deep learning models (B, D) learn distributed representations whose internal weights are not individually interpretable, and a k-NN model (C) is non-parametric and has no per-variable weights to adjust.
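A logistic regression model scores a lead as a weighted sum of its variables passed through a sigmoid, which is what makes its weights easy to inspect and adjust. A minimal sketch (all variable names and weight values below are hypothetical, for illustration only):

```python
import math

def lead_score(features, weights, bias=0.0):
    """Logistic regression score: each weight is directly interpretable,
    so a domain expert can inspect and adjust it by hand."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical lead variables and expert-assigned weights.
weights = {"past_purchases": 0.8, "website_visits": 0.3, "days_since_contact": -0.2}
lead = {"past_purchases": 2.0, "website_visits": 5.0, "days_since_contact": 10.0}

base = lead_score(lead, weights)
weights["website_visits"] = 0.5          # an expert raises a weight directly
print(lead_score(lead, weights) > base)  # the score moves accordingly
```

Because the model is a single linear layer, changing one weight has a predictable, monotonic effect on the score, which is exactly the transparency the scenario asks for.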
A company is monitoring a predictive model by using Amazon SageMaker Model Monitor. The company notices data drift beyond a defined threshold. The company wants to mitigate a potentially adverse impact on the predictive model.
Which action will meet these requirements?
- A . Restart the SageMaker AI endpoint.
- B . Adjust the monitoring sensitivity.
- C . Re-train the model with fresh data.
- D . Set up experiments tracking.
C
Explanation:
The correct answer is C: Re-train the model with fresh data. Amazon SageMaker Model Monitor is designed to detect data drift, feature drift, and model quality degradation in real time. When drift exceeds a set threshold, AWS recommends initiating a retraining workflow with updated data to restore model accuracy. According to AWS documentation, data drift indicates that the distribution of incoming data has changed significantly from the original training dataset, often due to new user behaviors, market changes, or seasonal patterns. Restarting the endpoint (A) does not address degraded model performance. Adjusting the monitoring sensitivity (B) suppresses the alert but does not fix the underlying issue. Experiment tracking (D) is helpful for managing model versions but is not corrective. Retraining ensures the model adapts to new data patterns and continues to perform reliably, which is the AWS-endorsed response to drift detection alerts.
Referenced AWS Documentation:
Amazon SageMaker Model Monitor - Detecting Drift
AWS MLOps Best Practices - Continuous Retraining
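The kind of distribution shift that Model Monitor flags can be illustrated with a simple statistical check. The sketch below computes a population stability index (PSI) between a training-time baseline and recent inference data; this is an illustrative drift metric, not SageMaker's internal algorithm, and the 0.25 threshold is a common rule of thumb rather than an AWS default:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples (illustrative drift metric)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # a small floor avoids log(0) for empty bins
        return [max(c / n, 1e-6) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # training-time distribution
shifted = [0.5 + i / 200 for i in range(100)]     # drifted live traffic
print(psi(baseline, baseline) < 0.1)   # same distribution: negligible PSI
print(psi(baseline, shifted) > 0.25)   # shifted traffic: drift beyond threshold
```

When a check like this crosses the threshold, the corrective action is the one in option C: retrain on fresh data so the model reflects the new distribution.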
A company uses Amazon Bedrock to implement a generative AI assistant on a website. The AI assistant helps customers with product recommendations and purchasing decisions. The company wants to measure the direct impact of the AI assistant on sales performance.
Which metric will meet this requirement?
- A . The conversion rate of customers who purchase products after AI assistant interactions
- B . The number of customer interactions with the AI assistant
- C . Sentiment analysis scores from customer feedback after AI assistant interactions
- D . Natural language understanding accuracy rates
A
Explanation:
The most direct business KPI for sales performance is conversion rate (percentage of users who purchase after AI assistant interaction).
Number of interactions (B) shows engagement, not sales impact. Sentiment analysis (C) shows customer satisfaction but not revenue impact. NLU accuracy (D) is a technical metric, not a business outcome.
Reference: AWS Generative AI Use Cases - Measuring Business Value
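Computing the conversion rate is straightforward once interactions and purchases are joined per session. A minimal sketch over a hypothetical session log (the field names are illustrative, not a Bedrock or analytics API):

```python
def conversion_rate(sessions):
    """Share of AI-assistant sessions that ended in a purchase."""
    assisted = [s for s in sessions if s["used_assistant"]]
    if not assisted:
        return 0.0
    return sum(s["purchased"] for s in assisted) / len(assisted)

# Hypothetical session log.
sessions = [
    {"used_assistant": True,  "purchased": True},
    {"used_assistant": True,  "purchased": False},
    {"used_assistant": True,  "purchased": True},
    {"used_assistant": False, "purchased": True},
]
print(conversion_rate(sessions))  # 2 of 3 assisted sessions converted
```

Comparing this rate against a non-assisted control group is what ties the assistant directly to sales performance, which the other options cannot do.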
A company manually reviews all submitted resumes in PDF format. As the company grows, the company expects the volume of resumes to exceed the company’s review capacity. The company needs an automated system to convert the PDF resumes into plain text format for additional processing.
Which AWS service meets this requirement?
- A . Amazon Textract
- B . Amazon Personalize
- C . Amazon Lex
- D . Amazon Transcribe
A
Explanation:
Amazon Textract is a service that automatically extracts text and data from scanned documents, including PDFs. It is the best choice for converting resumes from PDF format to plain text for further processing.
Amazon Textract:
Extracts text, forms, and tables from scanned documents accurately.
Ideal for automating the process of converting PDF resumes into plain text format.
Why Option A is Correct:
Automation of Text Extraction: Textract is designed to handle large volumes of documents and convert them into machine-readable text, perfect for the company’s need.
Scalability and Efficiency: Supports scalability to handle a growing volume of resumes as the company expands.
Why Other Options are Incorrect:
B. Amazon Personalize: Used for creating personalized recommendations, not for text extraction.
C. Amazon Lex: A service for building conversational interfaces, not for processing documents.
D. Amazon Transcribe: Used for converting speech to text, not for extracting text from documents.
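In practice the extraction can be a short script around the Textract API. A minimal sketch using boto3 (assumes AWS credentials are configured and the resumes are in an S3 bucket; bucket and key names are placeholders, and multi-page PDFs would need the asynchronous StartDocumentTextDetection API instead):

```python
def lines_to_text(blocks):
    """Join the LINE blocks of a Textract response into plain text."""
    return "\n".join(b["Text"] for b in blocks if b["BlockType"] == "LINE")

def pdf_to_text(bucket, key):
    """Extract plain text from a single-page PDF resume stored in S3."""
    import boto3  # assumption: boto3 is installed and credentials are set up
    textract = boto3.client("textract")
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return lines_to_text(response["Blocks"])

# Usage (hypothetical bucket and key):
# text = pdf_to_text("resume-intake-bucket", "resumes/jane-doe.pdf")
```

The response-parsing step is the same regardless of whether the synchronous or asynchronous API is used, since both return the same block structure.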
A company needs to log all requests made to its Amazon Bedrock API. The company must retain the logs securely for 5 years at the lowest possible cost.
Which combination of AWS service and storage class meets these requirements? (Select TWO.)
- A . AWS CloudTrail
- B . Amazon CloudWatch
- C . AWS Audit Manager
- D . Amazon S3 Intelligent-Tiering
- E . Amazon S3 Standard
A,D
Explanation:
AWS CloudTrail: Logs all API calls to Amazon Bedrock.
Amazon S3 Intelligent-Tiering: Optimizes storage costs for long-term retention with automatic tiering.
According to Amazon Bedrock Logging Documentation:
“CloudTrail records API activity and events, and logs can be stored in S3. For cost optimization, use S3 Intelligent-Tiering to retain logs long-term.”
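The retention side of this setup can be expressed as an S3 lifecycle configuration on the CloudTrail log bucket. A sketch of such a rule (the rule ID and prefix are illustrative; 5 years is approximated as 1825 days):

```python
def five_year_log_lifecycle(prefix="AWSLogs/"):
    """Lifecycle rule for a CloudTrail log bucket: move objects into
    Intelligent-Tiering immediately and expire them after 5 years."""
    return {
        "Rules": [
            {
                "ID": "bedrock-api-logs-5y",   # hypothetical rule name
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
                "Expiration": {"Days": 5 * 365},
            }
        ]
    }

# Applied with boto3 (bucket name hypothetical):
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="bedrock-trail-logs",
#     LifecycleConfiguration=five_year_log_lifecycle(),
# )
```

Intelligent-Tiering then moves infrequently accessed log objects to cheaper tiers automatically, which is what makes it the lowest-cost choice for a 5-year retention window with unpredictable access.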
A company uses foundation models (FMs) to automate daily tasks. An AI practitioner is creating system instructions that include context relevant to the tasks. The AI practitioner wants to save and reuse the instructions in daily interactions with FMs in Amazon Bedrock.
Which Amazon Bedrock solution will meet these requirements?
- A . Knowledge Bases
- B . Guardrails
- C . Playgrounds
- D . Prompt management
D
Explanation:
Comprehensive and detailed explanation from AWS AI documentation:
AWS Amazon Bedrock Prompt Management is designed to allow practitioners to create, version, store, and reuse prompts and system instructions when working with foundation models. It enables consistent reuse of contextual instructions across repeated interactions and applications.
Why the other options are incorrect:
Knowledge Bases store and retrieve external data, not reusable system instructions.
Guardrails enforce safety and policy controls, not prompt reuse.
Playgrounds are for experimentation and testing, not long-term prompt storage.
AWS AI Study Guide
Reference: Amazon Bedrock prompt management capabilities
AWS best practices for prompt engineering and reuse
A company is developing an ML model to support the company’s retail application. The company wants to use information that the model has produced from previous tasks to increase the learning speed of the model.
Which model training solution will meet these requirements?
- A . Supervised learning
- B . Hyperparameter tuning
- C . Regularization techniques
- D . Transfer learning
D
Explanation:
Transfer learning is a machine learning technique that reuses knowledge learned from previous tasks to improve training efficiency and performance on new tasks. AWS documentation explains that transfer learning allows models to start from pretrained weights or representations, reducing training time and the amount of data required.
In this retail application scenario, the company wants to leverage information from prior tasks to increase learning speed, which is a defining characteristic of transfer learning. AWS emphasizes that transfer learning is especially effective when tasks are related, such as customer behavior analysis, product recommendations, or demand forecasting.
By initializing a model with learned features from an existing task, transfer learning enables faster convergence and improved accuracy compared to training from scratch. AWS frequently recommends this approach when computational efficiency and rapid iteration are important.
The other options do not satisfy the requirement. Supervised learning defines how labels are used but does not reuse prior knowledge. Hyperparameter tuning optimizes model configuration but does not leverage previous task outputs. Regularization techniques reduce overfitting but do not accelerate learning through knowledge reuse.
AWS documentation positions transfer learning as a foundational concept in modern ML workflows, particularly for retail, personalization, and natural language processing use cases. Therefore, transfer learning is the correct solution.
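The faster-convergence effect can be shown with a toy example: train a one-parameter linear model on an old task, then reuse its learned weight as the starting point for a closely related new task. This is a deliberately simplified illustration of warm-starting (real transfer learning reuses pretrained network layers, not a single coefficient):

```python
def train(w0, data, lr=0.1, tol=1e-4, max_steps=10_000):
    """Gradient descent for y = w * x; returns (weight, steps to converge)."""
    w = w0
    for step in range(max_steps):
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        if abs(grad) < tol:
            return w, step
        w -= lr * grad
    return w, max_steps

# "Previous task": y = 2.00 * x.  "New, related task": y = 2.01 * x.
old_task = [(x, 2.00 * x) for x in (1.0, 2.0, 3.0)]
new_task = [(x, 2.01 * x) for x in (1.0, 2.0, 3.0)]

w_pretrained, _ = train(0.0, old_task)         # learn the old task first
_, steps_scratch = train(0.0, new_task)        # new task from scratch
_, steps_warm = train(w_pretrained, new_task)  # warm-start from old weights
print(steps_warm < steps_scratch)              # transfer converges faster
```

The warm-started run converges in fewer steps because the pretrained weight already sits close to the new task's optimum, which is the "increase the learning speed" benefit the question describes.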
