Practice Free AIF-C01 Exam Online Questions
HOTSPOT
A company is using Amazon SageMaker to develop AI models.
Select the correct SageMaker feature or resource from the following list for each step in the AI model lifecycle workflow. Each SageMaker feature or resource should be selected one time or not at all. (Select TWO.)
Step 1: Managing different versions of the model
Step 2: Using the current model to make predictions
SageMaker Clarify
SageMaker Model Registry
SageMaker Serverless Inference

Explanation:
Step 1: SageMaker Model Registry. Step 2: SageMaker Serverless Inference.
This question requires selecting the appropriate Amazon SageMaker feature for two distinct steps in the AI model lifecycle.
Let’s break down each step and evaluate the options:
Step 1: Managing different versions of the model
The goal here is to identify a SageMaker feature that supports version control and management of machine learning models. Let’s analyze the options:
SageMaker Clarify: This feature is used to detect bias in models and explain model predictions, helping with fairness and interpretability. It does not provide functionality for managing model versions.
SageMaker Model Registry: This is a centralized repository in Amazon SageMaker that allows users to catalog, manage, and track different versions of machine learning models. It supports model versioning, approval workflows, and deployment tracking, making it ideal for managing different versions of a model.
SageMaker Serverless Inference: This feature enables users to deploy models for inference without managing servers, automatically scaling based on demand. It is focused on inference (predictions), not on managing model versions.
Conclusion for Step 1: The SageMaker Model Registry is the correct choice for managing different versions of the model.
Exact Extract
Reference: According to the AWS SageMaker documentation, “The SageMaker Model Registry allows you to catalog models for production, manage model versions, associate metadata, and manage approval status for deployment.” (Source: AWS SageMaker Documentation – Model Registry, https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html).
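For illustration, a minimal boto3 sketch of how model versions could be cataloged in the Model Registry; the group name, container image URI, and S3 path are placeholders, not values from the question.

```python
import boto3

sm = boto3.client("sagemaker")

# A model package group is the container that holds every version of one model.
sm.create_model_package_group(
    ModelPackageGroupName="churn-model",  # placeholder name
    ModelPackageGroupDescription="All registered versions of the churn model",
)

# Register a trained model artifact as a new version inside the group.
sm.create_model_package(
    ModelPackageGroupName="churn-model",
    ModelPackageDescription="Version trained on the latest dataset",
    ModelApprovalStatus="PendingManualApproval",  # supports approval workflows
    InferenceSpecification={
        "Containers": [
            {
                "Image": "<inference-container-image-uri>",     # placeholder
                "ModelDataUrl": "s3://my-bucket/model.tar.gz",  # placeholder
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
```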
Step 2: Using the current model to make predictions
The goal here is to identify a SageMaker feature that facilitates making predictions (inference) with a deployed model. Let’s evaluate the options:
SageMaker Clarify: As mentioned, this feature focuses on bias detection and explainability, not on performing inference or making predictions.
SageMaker Model Registry: While the Model Registry helps manage and catalog models, it is not used directly for making predictions. It can store models, but the actual inference process requires a deployment mechanism.
SageMaker Serverless Inference: This feature allows users to deploy models for inference without managing infrastructure. It automatically scales based on traffic and is specifically designed for making predictions in a cost-efficient, serverless manner.
Conclusion for Step 2: SageMaker Serverless Inference is the correct choice for using the current model to make predictions.
Exact Extract
Reference: The AWS documentation states, “SageMaker Serverless Inference is a deployment option that allows you to deploy machine learning models for inference without configuring or managing servers. It automatically scales to handle inference requests, making it ideal for workloads with intermittent or unpredictable traffic.” (Source: AWS SageMaker Documentation – Serverless Inference, https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-inference.html).
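A hedged sketch, again with boto3, of deploying the current model as a serverless endpoint and invoking it; the endpoint, config, and model names, as well as the CSV payload, are illustrative placeholders.

```python
import boto3

sm = boto3.client("sagemaker")
runtime = boto3.client("sagemaker-runtime")

# A ServerlessConfig replaces instance counts and types: SageMaker provisions and
# scales the compute automatically based on traffic.
sm.create_endpoint_config(
    EndpointConfigName="churn-serverless-config",  # placeholder name
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "churn-model-v3",  # a SageMaker Model created from the registered package
            "ServerlessConfig": {"MemorySizeInMB": 2048, "MaxConcurrency": 5},
        }
    ],
)

sm.create_endpoint(
    EndpointName="churn-serverless",  # placeholder name
    EndpointConfigName="churn-serverless-config",
)

# Once the endpoint is InService, send it requests to get predictions.
response = runtime.invoke_endpoint(
    EndpointName="churn-serverless",
    ContentType="text/csv",
    Body="34,120.5,1,0",  # illustrative feature row
)
print(response["Body"].read().decode())
```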
Why Not Use the Same Feature Twice?
The question specifies that each SageMaker feature or resource should be selected one time or not at all. Since SageMaker Model Registry is used for version management and SageMaker Serverless Inference is used for predictions, each feature is selected exactly once. SageMaker Clarify is not applicable to either step, so it is not selected at all, fulfilling the question’s requirements.
Reference: AWS SageMaker Documentation: Model Registry
(https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html)
AWS SageMaker Documentation: Serverless Inference
(https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-inference.html)
AWS AI Practitioner Study Guide (conceptual alignment with SageMaker features for model lifecycle management and inference)
Which option is a benefit of using Amazon SageMaker Model Cards to document AI models?
- A . Providing a visually appealing summary of a model’s capabilities.
- B . Standardizing information about a model’s purpose, performance, and limitations.
- C . Reducing the overall computational requirements of a model.
- D . Physically storing models for archival purposes.
B
Explanation:
Amazon SageMaker Model Cards provide a standardized way to document important details about an AI model, such as its purpose, performance, intended usage, and known limitations. This enables transparency and compliance while fostering better communication between stakeholders. It does not store models physically or optimize computational requirements.
Reference: AWS SageMaker Model Cards Documentation.
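As an illustration only, a minimal boto3 sketch of creating a model card; the card name is a placeholder and only a small subset of the model card JSON schema is shown.

```python
import boto3
import json

sm = boto3.client("sagemaker")

# Minimal card content; field names follow the SageMaker model card JSON schema.
card_content = {
    "model_overview": {
        "model_description": "Gradient-boosted model that predicts customer churn.",
    },
    "intended_uses": {
        "purpose_of_model": "Prioritize retention outreach; not for credit decisions.",
    },
}

sm.create_model_card(
    ModelCardName="churn-model-card",  # placeholder name
    Content=json.dumps(card_content),
    ModelCardStatus="Draft",
)
```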
A company deployed a model to production. After 4 months, the model inference quality degraded. The company wants to receive a notification if the model inference quality degrades. The company also wants to ensure that the problem does not happen again.
Which solution will meet these requirements?
- A . Retrain the model. Monitor model drift by using Amazon SageMaker Clarify.
- B . Retrain the model. Monitor model drift by using Amazon SageMaker Model Monitor.
- C . Build a new model. Monitor model drift by using Amazon SageMaker Feature Store.
- D . Build a new model. Monitor model drift by using Amazon SageMaker JumpStart.
B
Explanation:
The company needs to address the degradation in model inference quality after 4 months in production and prevent future occurrences by receiving notifications. Retraining the model can address the current degradation, likely caused by data drift (changes in the data distribution over time). Amazon SageMaker Model Monitor is designed to detect and monitor model drift, alerting the company when inference quality degrades, thus meeting both requirements.
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"Amazon SageMaker Model Monitor enables you to monitor machine learning models in production for data drift, model performance degradation, and other quality issues. It can detect drift in feature distributions and inference quality, sending notifications when deviations are detected, allowing you to take corrective actions such as retraining the model."
(Source: Amazon SageMaker Developer Guide, Monitoring Models with SageMaker Model Monitor)
Detailed explanation of each option:
Option A: Retrain the model. Monitor model drift by using Amazon SageMaker Clarify. SageMaker Clarify is used for bias detection and explainability, not for monitoring model drift or inference quality in production. This option does not fully meet the requirements.
Option B: Retrain the model. Monitor model drift by using Amazon SageMaker Model Monitor. This is the correct answer. Retraining addresses the current degradation, and SageMaker Model Monitor can detect future drift in inference quality, sending notifications to prevent recurrence, as required.
Option C: Build a new model. Monitor model drift by using Amazon SageMaker Feature Store. SageMaker Feature Store is for managing and sharing features, not for monitoring model drift or inference quality. Building a new model may not be necessary if retraining can address the issue.
Option D: Build a new model. Monitor model drift by using Amazon SageMaker JumpStart. SageMaker JumpStart provides pre-trained models and solutions for quick deployment, but it does not offer specific tools for monitoring model drift or inference quality in production.
Reference: Amazon SageMaker Developer Guide: Monitoring Models with SageMaker Model Monitor (https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html)
AWS AI Practitioner Learning Path: Module on Model Monitoring and Maintenance
AWS Documentation: Addressing Model Drift in Production (https://aws.amazon.com/sagemaker/)
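A rough sketch, using the SageMaker Python SDK, of how such monitoring could be scheduled; the role ARN, S3 paths, and endpoint name are placeholders, and the notification itself would come from a CloudWatch alarm (for example, routed to an SNS topic) on the metrics the monitor emits.

```python
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

# Baseline the training data, then check captured endpoint traffic against it hourly.
monitor = DefaultModelMonitor(
    role="<execution-role-arn>",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",  # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitor/baseline",
)

monitor.create_monitoring_schedule(
    monitor_schedule_name="drift-schedule",  # placeholder
    endpoint_input="production-endpoint",    # endpoint with data capture enabled
    output_s3_uri="s3://my-bucket/monitor/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
# Violations appear as CloudWatch metrics, so an alarm plus SNS delivers the notification.
```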
Which technique breaks a complex task into smaller subtasks that are sent sequentially to a large language model (LLM)?
- A . One-shot prompting
- B . Prompt chaining
- C . Tree of thoughts
- D . Retrieval Augmented Generation (RAG)
B
Explanation:
Prompt chaining decomposes a complex task into smaller subtasks and sends them to the LLM one after another, feeding each response into the next prompt. One-shot prompting supplies a single example, tree of thoughts explores multiple reasoning paths in parallel, and RAG augments prompts with retrieved external knowledge; none of these sequentially chains subtasks.
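A minimal sketch of prompt chaining using the Amazon Bedrock Converse API; the model ID and the three subtasks are illustrative assumptions, not part of the question.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # placeholder model ID

def ask(prompt: str) -> str:
    """Send one prompt to the model and return the text of its reply."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Prompt chaining: each subtask's output becomes the next subtask's input.
outline = ask("Create a three-point outline for a blog post about data drift.")
draft = ask(f"Write a short blog post that follows this outline:\n{outline}")
summary = ask(f"Summarize this post in one sentence:\n{draft}")
print(summary)
```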
A company is using a pre-trained large language model (LLM). The LLM must perform multiple tasks that require specific domain knowledge. The LLM does not have information about several technical topics in the domain. The company has unlabeled data that the company can use to fine-tune the model.
Which fine-tuning method will meet these requirements?
- A . Full training
- B . Supervised fine-tuning
- C . Continued pre-training
- D . Retrieval Augmented Generation (RAG)
C
Explanation:
The correct answer is C because Continued Pre-training (also known as domain-adaptive pre-training) involves training a pre-trained model further on unlabeled domain-specific data. This method helps adapt the LLM to a specific domain without needing labeled datasets, making it ideal for cases where the goal is to enhance the model’s understanding of technical language or terminology.
From AWS documentation:
"Continued pre-training allows an LLM to ingest large volumes of domain-specific text without labels to improve contextual understanding in a particular area. This is effective when adapting a foundation model to new knowledge without altering the model architecture."
Explanation of other options:
A. Full training means building a model from scratch, which is unnecessary and far more costly than adapting an existing pre-trained LLM.
B. Supervised fine-tuning requires labeled prompt-response pairs, but the company only has unlabeled data.
D. Retrieval Augmented Generation (RAG) is not a fine-tuning method; it retrieves external context at inference time and does not change the model's weights.
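For illustration, a hedged boto3 sketch of starting a continued pre-training job on Amazon Bedrock; the job and model names, role ARN, base model, S3 URIs, and hyperparameters are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

# Continued pre-training consumes unlabeled domain text: each record in the
# JSON Lines training file carries raw text only, with no target labels.
bedrock.create_model_customization_job(
    jobName="domain-adaptation-job",                     # placeholder
    customModelName="llm-domain-adapted",                # placeholder
    roleArn="<service-role-arn>",                        # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",  # placeholder base model
    customizationType="CONTINUED_PRE_TRAINING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/domain-corpus/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/custom-model-output/"},
    hyperParameters={"epochCount": "1", "batchSize": "1"},
)
```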
A company is introducing a new feature for its application. The feature will refine the style of output messages. The company will fine-tune a large language model (LLM) on Amazon Bedrock to implement the feature.
Which type of data does the company need to meet these requirements?
- A . Samples of only input messages
- B . Samples of only output messages
- C . Samples of pairs of input and output messages
- D . Separate samples of input and output messages
C
Explanation:
Fine-tuning requires paired input-output examples to teach the model how to respond to inputs with outputs in the desired style.
Single inputs (A) or outputs (B) are insufficient.
Separate, unpaired samples (D) don’t establish the input-output mapping.
Reference: AWS Documentation - Preparing data for fine-tuning FMs
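A small sketch of what the paired training data could look like, written as JSON Lines prompt/completion records of the kind Amazon Bedrock fine-tuning accepts; the example messages are invented for illustration.

```python
import json

# Each record pairs an input message with the styled output the model should learn
# to produce; neither half is useful for fine-tuning on its own.
examples = [
    {
        "prompt": "Your order has shipped.",
        "completion": "Great news! Your order is on its way and will arrive soon.",
    },
    {
        "prompt": "Payment failed.",
        "completion": "We could not process your payment. Please try another card.",
    },
]

with open("style-tuning.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```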
A publishing company built a Retrieval Augmented Generation (RAG) based solution to give its users the ability to interact with published content. New content is published daily. The company wants to provide a near real-time experience to users.
Which steps in the RAG pipeline should the company implement by using offline batch processing to meet these requirements? (Select TWO.)
- A . Generation of content embeddings
- B . Generation of embeddings for user queries
- C . Creation of the search index
- D . Retrieval of relevant content
- E . Response generation for the user
A, C
Explanation:
In a Retrieval Augmented Generation (RAG) architecture, steps that do not depend on the individual user request can run as offline batch processing. Generating embeddings for newly published content (A) and creating or updating the search index (C) can be handled by a scheduled batch job, for example once per day as new content is published. By contrast, embedding the user query (B), retrieving relevant content (D), and generating the response (E) must run in real time for every request, so they cannot be moved offline.
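A minimal sketch of the offline batch side (steps A and C), assuming an Amazon Titan embedding model on Bedrock; the model ID, article text, and index record shape are illustrative assumptions.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")
EMBED_MODEL_ID = "amazon.titan-embed-text-v1"  # placeholder embedding model

def embed(text: str) -> list:
    """Return the embedding vector for one piece of content."""
    response = bedrock.invoke_model(
        modelId=EMBED_MODEL_ID,
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

# Offline batch job, run once per day after new content is published:
# embed every new document and collect the vectors for the index rebuild.
new_articles = ["Article text 1 ...", "Article text 2 ..."]  # illustrative content
index_records = [{"id": i, "vector": embed(text)} for i, text in enumerate(new_articles)]
# index_records would then be bulk-loaded into the vector search index (step C).
```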
Which scenario describes a potential risk and limitation of prompt engineering in the context of a generative AI model?
- A . Prompt engineering does not ensure that the model always produces consistent and deterministic outputs, eliminating the need for validation.
- B . Prompt engineering could expose the model to vulnerabilities such as prompt injection attacks.
- C . Properly designed prompts reduce but do not eliminate the risk of data poisoning or model hijacking.
- D . Prompt engineering does not ensure that the model will consistently generate highly reliable outputs when working with real-world data.
B
Explanation:
Because prompt engineering relies on instructions that are combined with user-supplied input, it exposes the application to prompt injection attacks, in which crafted input overrides or manipulates the intended instructions. This is a recognized risk and limitation of relying on prompt engineering alone.
A bank is building a chatbot to answer customer questions about opening a bank account. The chatbot will use public bank documents to generate responses. The company will use Amazon Bedrock and prompt engineering to improve the chatbot’s responses.
Which prompt engineering technique meets these requirements?
- A . Complexity-based prompting
- B . Zero-shot prompting
- C . Few-shot prompting
- D . Directional stimulus prompting
D
Explanation:
Directional stimulus prompting guides the foundation model to produce outputs aligned with business context. It’s particularly effective for aligning responses with public documents and improving coherence. From Bedrock Prompt Engineering Techniques documentation:
“Directional stimulus prompting provides structured prompts to steer the model output towards desired formats or behaviors using specific linguistic cues.”
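As a hedged illustration of the idea, a prompt template that adds directional hints steering the model toward grounded, well-structured answers; the document excerpt, question, and hint wording are invented for the example.

```python
# Hypothetical prompt template: the bracketed hints act as the directional stimulus.
document = "Excerpt from the bank's public account-opening guide ..."  # placeholder
question = "What documents do I need to open a checking account?"

prompt = f"""Use only the bank document below to answer the customer.
Document: {document}

Question: {question}

Hints: [answer in 3 short bullet points] [quote the part of the document you relied on]
[if the document does not contain the answer, say so]"""
```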
Which technique can a company use to lower bias and toxicity in generative AI applications during the post-processing ML lifecycle?
- A . Human-in-the-loop
- B . Data augmentation
- C . Feature engineering
- D . Adversarial training
A
Explanation:
The correct answer is A because Human-in-the-loop (HITL) is a post-processing strategy used to monitor, review, and filter outputs from generative AI models for toxicity, bias, or inappropriate content. It allows human reviewers to approve or reject model responses before they are delivered to end-users, ensuring alignment with ethical guidelines and company policies.
From the AWS documentation:
"Human-in-the-loop (HITL) workflows in generative AI are used to validate and approve outputs of models, especially in applications where content quality, compliance, or harm reduction is critical.
HITL is a key step in responsible AI implementations to mitigate hallucinations, bias, and unsafe content."
Explanation of other options:
B. Data augmentation is a pre-processing technique to increase data diversity, not typically used in post-processing stages.
C. Feature engineering is relevant in traditional ML, especially structured data tasks, not typically used in generative AI post-processing.
D. Adversarial training is a model training strategy, not a post-processing mitigation approach.
Referenced AWS AI/ML Documents and Study Guides:
AWS Responsible AI Practices Whitepaper
AWS Generative AI Developer Guide - Human-in-the-loop and Post-processing
Amazon A2I Documentation - Integrating Human Review in ML Workflows
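For illustration, a minimal boto3 sketch of routing a generated output to human review with Amazon Augmented AI (A2I) before release; the flow definition ARN and task payload shape are placeholders.

```python
import json
import uuid

import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

model_output = "Generated marketing copy ..."  # illustrative LLM output to review

# Send the generated output to a human review workflow (an A2I flow definition)
# so reviewers can approve or reject it before it reaches end users.
a2i.start_human_loop(
    HumanLoopName=f"genai-review-{uuid.uuid4()}",
    FlowDefinitionArn="<flow-definition-arn>",  # placeholder
    HumanLoopInput={"InputContent": json.dumps({"taskObject": model_output})},
)
```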
