Practice Free AIF-C01 Exam Online Questions
A company needs to choose a model from Amazon Bedrock to use internally. The company must identify a model that generates responses in a style that the company’s employees prefer.
What should the company do to meet these requirements?
- A . Evaluate the models by using built-in prompt datasets.
- B . Evaluate the models by using a human workforce and custom prompt datasets.
- C . Use public model leaderboards to identify the model.
- D . Use the model Invocation Latency runtime metrics in Amazon CloudWatch when trying models.
B
Explanation:
Response style is a subjective quality that automated metrics and built-in prompt datasets cannot capture. Evaluating the candidate models with a human workforce and custom prompt datasets lets the company's own reviewers judge which model produces responses in the style its employees prefer.
HOTSPOT
A company is using a generative AI model to develop a digital assistant. The model’s responses occasionally include undesirable and potentially harmful content. Select the correct Amazon Bedrock filter policy from the following list for each mitigation action.
Each filter policy should be selected one time. (Select FOUR.)
• Content filters
• Contextual grounding check
• Denied topics
• Word filters

Explanation:
Block input prompts or model responses that contain harmful content such as hate, insults, violence, or misconduct: Content filters
Avoid subjects related to illegal investment advice or legal advice: Denied topics
Detect and block specific offensive terms: Word filters
Detect and filter out information in the model’s responses that is not grounded in the provided source information: Contextual grounding check
The company is using a generative AI model on Amazon Bedrock and needs to mitigate undesirable and potentially harmful content in the model’s responses. Amazon Bedrock provides several guardrail mechanisms, including content filters, denied topics, word filters, and contextual grounding checks, to ensure safe and accurate outputs. Each mitigation action in the hotspot aligns with a specific Bedrock filter policy, and each policy must be used exactly once.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
*"Amazon Bedrock guardrails provide mechanisms to control model outputs, including:
Content filters: Block harmful content such as hate speech, violence, or misconduct.
Denied topics: Prevent the model from generating responses on specific subjects, such as illegal activities or advice.
Word filters: Detect and block specific offensive or inappropriate terms.
Contextual grounding check: Ensure responses are grounded in the provided source information, filtering out ungrounded or hallucinated content."* (Source: AWS Bedrock User Guide, Guardrails for Responsible AI)
Detailed Explanation:
Block input prompts or model responses that contain harmful content such as hate, insults, violence, or misconduct: Content filters. Content filters in Amazon Bedrock are designed to detect and block harmful content, such as hate speech, insults, violence, or misconduct, ensuring the model's outputs are safe and appropriate. This matches the first mitigation action.
Avoid subjects related to illegal investment advice or legal advice: Denied topics. Denied topics allow users to specify subjects the model should avoid, such as illegal investment advice or legal advice, which could have regulatory implications. This policy aligns with the second mitigation action.
Detect and block specific offensive terms: Word filters. Word filters enable the detection and blocking of specific offensive or inappropriate terms defined by the user, making them ideal for this mitigation action focused on specific terms.
Detect and filter out information in the model's responses that is not grounded in the provided source information: Contextual grounding check. The contextual grounding check ensures that the model's responses are based on the provided source information, filtering out ungrounded or hallucinated content. This matches the fourth mitigation action.
Hotspot Selection Analysis:
The hotspot lists four mitigation actions, each with the same dropdown options: "Select…," "Content filters," "Contextual grounding check," "Denied topics," and "Word filters."
The correct selections are:
First action: Content filters
Second action: Denied topics
Third action: Word filters
Fourth action: Contextual grounding check
Each filter policy is used exactly once, as required, and aligns with Amazon Bedrock’s guardrail capabilities.
Reference: AWS Bedrock User Guide: Guardrails for Responsible AI (https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html)
AWS AI Practitioner Learning Path: Module on Responsible AI and Model Safety
Amazon Bedrock Developer Guide: Configuring Guardrails (https://aws.amazon.com/bedrock/)
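To make the mapping concrete, here is a minimal Python (boto3) sketch that creates a guardrail using each of the four policies exactly once. The guardrail name, topic definition, word list, threshold, and blocked-response messages are placeholder assumptions; the policy keys follow the boto3 `create_guardrail` operation.

```python
import boto3

bedrock = boto3.client("bedrock")  # Bedrock control-plane client

response = bedrock.create_guardrail(
    name="assistant-guardrail",  # placeholder name
    # Content filters: block harmful content in prompts and responses.
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "MISCONDUCT", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    # Denied topics: steer the model away from regulated advice.
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "investment-advice",  # hypothetical topic
                "definition": "Recommendations about specific investments or returns.",
                "type": "DENY",
            }
        ]
    },
    # Word filters: block specific offensive terms.
    wordPolicyConfig={"wordsConfig": [{"text": "offensive-term-1"}]},  # placeholder term
    # Contextual grounding check: filter responses not grounded in the source.
    contextualGroundingPolicyConfig={
        "filtersConfig": [{"type": "GROUNDING", "threshold": 0.8}]  # assumed threshold
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(response["guardrailId"])
```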
A company wants to create a chatbot to answer employee questions about company policies.
Company policies are updated frequently. The chatbot must reflect the changes in near real time.
The company wants to choose a large language model (LLM).
Which solution will meet these requirements?
- A . Fine-tune an LLM on the company policy text by using Amazon SageMaker.
- B . Select a foundation model (FM) from Amazon Bedrock to build an application.
- C . Create a Retrieval Augmented Generation (RAG) workflow by using Amazon Bedrock Knowledge Bases.
- D . Use Amazon Q Business to build a custom Q App.
C
Explanation:
The correct answer is C because Retrieval-Augmented Generation (RAG) allows a large language model to provide responses based on up-to-date content from external data sources without the need to fine-tune the model.
According to the AWS Bedrock Developer Guide:
"Amazon Bedrock Knowledge Bases enables developers to augment foundation models (FMs) with company-specific data that is updated in real time or near real time. By separating retrieval from the model itself, RAG-based approaches avoid the need for frequent retraining or fine-tuning."
This means a company can use a knowledge base with Amazon Bedrock to dynamically fetch the latest company policy information and feed it to the LLM in the prompt. This approach is ideal for use cases where the content (like policies) changes frequently, and latency for updates must be minimal.
Explanation of other options:
A is incorrect because fine-tuning bakes the policy text into the model's weights, so every policy update would require another training run and redeployment, which cannot keep pace with frequent changes.
B is incorrect because a foundation model on its own has no access to the company's current policy documents and would answer from stale training data.
D is incorrect because Amazon Q Business is a managed assistant that abstracts away the model, whereas the company specifically wants to choose an LLM and ground it in frequently updated content.
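As an illustration, here is a minimal boto3 sketch of a RAG query against an Amazon Bedrock knowledge base; the knowledge base ID and model ARN below are placeholders.

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.retrieve_and_generate(
    input={"text": "How many vacation days do new employees receive?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345678",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)
print(response["output"]["text"])  # answer grounded in retrieved policy documents
```

Because the answer is retrieved from the knowledge base at request time, re-syncing the data source after a policy change is enough to update the chatbot's answers; no retraining is involved.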
A company uses a third-party model on Amazon Bedrock to analyze confidential documents. The company is concerned about data privacy.
Which statement describes how Amazon Bedrock protects data privacy?
- A . User inputs and model outputs are anonymized and shared with third-party model providers.
- B . User inputs and model outputs are not shared with any third-party model providers.
- C . User inputs are kept confidential, but model outputs are shared with third-party model providers.
- D . User inputs and model outputs are redacted before the inputs and outputs are shared with third-party model providers.
B
Explanation:
Comprehensive and Detailed Explanation from AWS AI Documents:
Amazon Bedrock ensures data privacy and security by not sharing customer inputs or outputs with third-party model providers.
The models are accessed via Bedrock’s API isolation layer, meaning that model providers do not see your data.
Customer data is not used for training or improving foundation models unless customers explicitly opt in.
From AWS Docs:
“Amazon Bedrock does not share your inputs and outputs with third-party model providers. Your data remains private, and is not used to improve the foundation models.”
This ensures full data privacy, especially for sensitive use cases like confidential documents.
Reference: AWS Documentation: Data privacy in Amazon Bedrock
A company is building a new generative AI chatbot. The chatbot uses an Amazon Bedrock foundation model (FM) to generate responses. During testing, the company notices that the chatbot is prone to prompt injection attacks.
What can the company do to secure the chatbot with the LEAST implementation effort?
- A . Fine-tune the FM to avoid harmful responses.
- B . Use Amazon Bedrock Guardrails content filters and denied topics.
- C . Change the FM to a more secure FM.
- D . Use chain-of-thought prompting to produce secure responses.
B
Explanation:
Amazon Bedrock Guardrails allow developers to create safeguards that filter harmful content and prevent sensitive topics from being discussed. This functionality helps mitigate prompt injection attacks with minimal implementation effort.
According to the official Amazon Bedrock documentation:
“You can configure Guardrails for Amazon Bedrock to define denied topics, use content filters, and apply sensitive information filters, offering protection against prompt injection attacks with minimal development effort.”
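For illustration, here is a minimal sketch of attaching an existing guardrail at inference time through the Converse API; the model ID, guardrail identifier, and version are placeholders. Note that Bedrock Guardrails content filters also include a dedicated prompt-attack category aimed specifically at prompt injection.

```python
import boto3

runtime = boto3.client("bedrock-runtime")

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
    messages=[
        {"role": "user", "content": [{"text": "Ignore all previous instructions and ..."}]}
    ],
    # Guardrail created beforehand in the console or via create_guardrail.
    guardrailConfig={
        "guardrailIdentifier": "abc123example",  # placeholder guardrail ID
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```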
An AI company periodically evaluates its systems and processes with the help of independent software vendors (ISVs). The company needs to receive email message notifications when an ISV’s compliance reports become available.
Which AWS service can the company use to meet this requirement?
- A . AWS Audit Manager
- B . AWS Artifact
- C . AWS Trusted Advisor
- D . AWS Data Exchange
D
Explanation:
AWS Data Exchange is a service that allows companies to securely exchange data with third parties, such as independent software vendors (ISVs). AWS Data Exchange can be configured to provide notifications, including email notifications, when new datasets or compliance reports become available.
Option D (Correct): "AWS Data Exchange": This is the correct answer because it enables the company to receive notifications, including email messages, when ISVs’ compliance reports are available.
Option A: "AWS Audit Manager" is incorrect because it focuses on assessing an organization’s own compliance, not receiving third-party compliance reports.
Option B: "AWS Artifact" is incorrect as it provides access to AWS’s compliance reports, not ISVs’.
Option C: "AWS Trusted Advisor" is incorrect as it offers optimization and best practices guidance, not compliance report notifications.
AWS AI Practitioner Reference: AWS Data Exchange Documentation: AWS explains how Data Exchange allows organizations to subscribe to third-party data and receive notifications when updates are available.
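One possible wiring, sketched in Python with boto3: AWS Data Exchange emits events to Amazon EventBridge when subscribed data sets are updated, and an EventBridge rule can forward those events to an SNS topic with an email subscription. Matching on the event source alone is an assumption; narrow the pattern to the detail types your subscription actually emits.

```python
import json
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# Hypothetical topic; each email recipient must confirm the subscription.
topic_arn = sns.create_topic(Name="isv-compliance-report-updates")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="compliance-team@example.com",  # placeholder address
)

# Route AWS Data Exchange events to the SNS topic.
events.put_rule(
    Name="data-exchange-updates",
    EventPattern=json.dumps({"source": ["aws.dataexchange"]}),
)
events.put_targets(
    Rule="data-exchange-updates",
    Targets=[{"Id": "email-topic", "Arn": topic_arn}],
)
```

The SNS topic also needs a resource policy that allows events.amazonaws.com to publish to it.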
An AI practitioner is developing a prompt for an Amazon Titan model. The model is hosted on Amazon Bedrock. The AI practitioner is using the model to solve numerical reasoning challenges. The AI practitioner adds the following phrase to the end of the prompt: "Ask the model to show its work by explaining its reasoning step by step."
Which prompt engineering technique is the AI practitioner using?
- A . Chain-of-thought prompting
- B . Prompt injection
- C . Few-shot prompting
- D . Prompt templating
A
Explanation:
Chain-of-thought prompting is a prompt engineering technique where you instruct the model to explain its reasoning step by step, which is particularly useful for tasks involving logic, math, or reasoning.
A is correct: Asking the model to "explain its reasoning step by step" directly invokes chain-of-thought prompting, as documented in AWS and generative AI literature.
B is unrelated (prompt injection is a security concern).
C (few-shot) provides examples, but doesn’t specifically require step-by-step reasoning.
D (templating) is about structuring the prompt format.
"Chain-of-thought prompting elicits step-by-step explanations from LLMs, which improves performance on complex reasoning tasks."
(Reference: Amazon Bedrock Prompt Engineering Guide, AWS Certified AI Practitioner Study Guide)
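A minimal sketch of chain-of-thought prompting against Bedrock's Converse API follows; the model ID is one example, and the arithmetic question is invented for illustration.

```python
import boto3

runtime = boto3.client("bedrock-runtime")

prompt = (
    "A warehouse ships 48 boxes per pallet and has 13 pallets. "
    "How many boxes is that in total? "
    "Show your work by explaining your reasoning step by step."  # chain-of-thought cue
)

response = runtime.converse(
    modelId="amazon.titan-text-express-v1",  # example Titan model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```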
A company wants to develop a large language model (LLM) application by using Amazon Bedrock and customer data that is uploaded to Amazon S3. The company’s security policy states that each team can access data for only the team’s own customers.
Which solution will meet these requirements?
- A . Create an Amazon Bedrock custom service role for each team that has access to only the team’s customer data.
- B . Create a custom service role that has Amazon S3 access. Ask teams to specify the customer name on each Amazon Bedrock request.
- C . Redact personal data in Amazon S3. Update the S3 bucket policy to allow team access to customer data.
- D . Create one Amazon Bedrock role that has full Amazon S3 access. Create IAM roles for each team that have access to only each team’s customer folders.
A
Explanation:
To comply with the company’s security policy, which restricts each team to access data for only their own customers, creating an Amazon Bedrock custom service role for each team is the correct solution.
Custom Service Role Per Team:
A custom service role for each team ensures that the access control is granular, allowing only specific teams to access their own customer data in Amazon S3.
This setup aligns with the principle of least privilege, ensuring teams can only interact with data they are authorized to access.
Why Option A is Correct:
Access Control: Allows precise access permissions for each team’s data.
Security Compliance: Directly meets the company’s security policy requirements by ensuring data segregation.
Why Other Options are Incorrect:
B. Custom service role with customer name specification: This approach is impractical as it relies on manual input, which is prone to errors and does not inherently enforce data access controls.
C. Redacting personal data and updating S3 bucket policy: Redaction does not solve the requirement for team-specific access, and updating bucket policies is less granular than creating roles.
D. One Bedrock role with full S3 access and IAM roles for teams: This setup does not meet the least privilege principle, as having a single role with full access is contrary to the company’s security policy.
Thus, A is the correct answer to meet the company’s security requirements.
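To make the per-team role concrete, here is a hedged sketch of the kind of IAM permissions policy such a custom service role could carry, assuming a hypothetical bucket laid out with one prefix per team. The role would additionally need a trust policy allowing bedrock.amazonaws.com to assume it.

```python
import json

# Permissions policy for "team A"; bucket name and prefix layout are invented.
team_a_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOnlyTeamAPrefix",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::customer-data-bucket",
            # Restrict listing to this team's prefix only.
            "Condition": {"StringLike": {"s3:prefix": ["team-a/*"]}},
        },
        {
            "Sid": "ReadOnlyTeamAObjects",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::customer-data-bucket/team-a/*",
        },
    ],
}
print(json.dumps(team_a_policy, indent=2))
```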
A company is developing an ML model to predict heart disease risk. The model uses patient data, such as age, cholesterol, blood pressure, smoking status, and exercise habits. The dataset includes a target value that indicates whether a patient has heart disease.
Which ML technique will meet these requirements?
- A . Unsupervised learning
- B . Supervised learning
- C . Reinforcement learning
- D . Semi-supervised learning
B
Explanation:
The dataset contains labeled examples: each patient record includes a target value that indicates whether the patient has heart disease. Training a model on labeled data to predict a known target is supervised learning, so option B is correct.
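For illustration, here is a minimal supervised-learning sketch in Python with scikit-learn; the features below are synthetic stand-ins, not a real clinical dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for age, cholesterol, blood pressure, smoking, exercise.
X = rng.normal(size=(500, 5))
# Labeled target: 1 = heart disease, 0 = no heart disease (invented rule).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # learn from labeled data
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```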
A company is using domain-specific models. The company wants to avoid creating new models from the beginning. The company instead wants to adapt pre-trained models to create models for new, related tasks.
Which ML strategy meets these requirements?
- A . Increase the number of epochs.
- B . Use transfer learning.
- C . Decrease the number of epochs.
- D . Use unsupervised learning.
B
Explanation:
Transfer learning is the correct strategy for adapting pre-trained models for new, related tasks without creating models from scratch.
Transfer Learning:
Involves taking a pre-trained model and fine-tuning it on a new dataset for a related task.
This approach is efficient because it leverages existing knowledge from a model trained on a large dataset, requiring less data and computational resources than training a new model from scratch.
Why Option B is Correct:
Adaptation of Pre-trained Models: Allows for adapting existing models to new tasks, which aligns with the company’s goal of not starting from scratch.
Efficiency and Speed: Speeds up the model development process by building on the knowledge of pre-trained models.
Why Other Options are Incorrect:
A and C: Changing the number of epochs only adjusts how long a model trains; it does not reuse or adapt a pre-trained model for a new task.
D: Unsupervised learning discovers patterns in unlabeled data and does not involve adapting pre-trained models to new, related tasks.
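For illustration, here is a minimal transfer-learning sketch in Python with PyTorch/torchvision (an assumed stack; the exam question is framework-agnostic): freeze a pre-trained backbone and train only a new task-specific head.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on a large dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # reuse the learned features as-is

num_classes = 3  # hypothetical new, related task
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

# Only the new head's parameters are optimized on the new task's labeled data.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```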
