Practice Free AIF-C01 Exam Online Questions
A company is developing a new image classification model by using a dataset of photos. The dataset must follow the AWS principles of responsible AI.
Which characteristics should the dataset have to meet this requirement?
- A . The dataset should be diverse, sourced from reputable sources, and have balanced categories.
- B . The dataset should contain over 5 million photos, and 1% of photos should be labeled.
- C . The dataset should include photos from a limited source.
- D . The dataset should be curated entirely by the company’s own engineers and researchers.
A
Explanation:
Comprehensive and Detailed Explanation (AWS AI documents):
AWS Responsible AI principles stress the importance of fairness, robustness, and accountability, which begin with the quality of the data used to train models. A responsible dataset should:
Be diverse, representing different populations, environments, and conditions to reduce bias
Be sourced from reputable and appropriate sources, ensuring data quality and ethical use
Have balanced categories, preventing the model from favoring one class over others and reducing the risk of discriminatory outcomes
These characteristics help ensure that the resulting image classification model behaves fairly, performs reliably across groups, and aligns with AWS Responsible AI best practices.
Why the other options are incorrect:
B focuses on dataset size rather than quality, balance, or representativeness.
C increases the risk of bias and poor generalization.
D limits diversity and does not inherently ensure fairness or accountability.
AWS AI Study Guide
Reference: AWS Responsible AI principles: fairness and data quality
AWS guidance on dataset selection and preparation for responsible ML
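The "balanced categories" characteristic can be checked programmatically before training. The sketch below is illustrative only; the label list and the ratio threshold are invented assumptions, not AWS guidance:

```python
from collections import Counter

def category_balance(labels, max_ratio=3.0):
    """Report per-class counts and flag imbalance when the largest
    class outnumbers the smallest by more than max_ratio."""
    counts = Counter(labels)
    largest = max(counts.values())
    smallest = min(counts.values())
    return counts, (largest / smallest) <= max_ratio

# Hypothetical label list for an image-classification dataset
labels = ["cat"] * 500 + ["dog"] * 480 + ["bird"] * 40
counts, balanced = category_balance(labels)
print(counts, "balanced:", balanced)  # "bird" is underrepresented
```

A check like this would be run before training so underrepresented classes can be augmented or resampled.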
A company that streams media is selecting an Amazon Nova foundation model (FM) to process documents and images. The company is comparing Nova Micro and Nova Lite. The company wants to minimize costs.
Which statement describes a difference between the two models?
- A . Nova Micro uses transformer-based architectures. Nova Lite does not use transformer-based architectures.
- B . Nova Micro supports only text data. Nova Lite is optimized for numerical data.
- C . Nova Micro supports only text. Nova Lite supports images, videos, and text.
- D . Nova Micro runs only on CPUs. Nova Lite runs only on GPUs.
C
Explanation:
The correct answer is C because Amazon Nova Micro is a smaller, lower-cost foundation model that is text-only, while Nova Lite is a more capable multimodal model that supports images, videos, and text. According to AWS Bedrock documentation, the Nova model family includes variants that differ in capability and cost. Nova Micro is optimized for lightweight text-based tasks such as summarization, question answering, and basic reasoning, which makes it cheaper to operate and well suited to cost-sensitive workloads. Nova Lite, on the other hand, is a multimodal FM that can analyze documents, screenshots, photographs, charts, and videos, making it a fit for media companies that need cross-format understanding. AWS clarifies that both Micro and Lite use transformer-based architectures and run on managed infrastructure that abstracts hardware considerations. The main differentiator is therefore capability: Nova Micro, being text-only, is the more cost-effective option, and Nova Lite is appropriate only when image or video analysis is required.
Referenced AWS Documentation:
Amazon Bedrock - Nova Model Family Overview
AWS Generative AI Model Selection Guide
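The cost trade-off in this question reduces to a modality check: pick the cheapest model that covers the required inputs. The helper below is a hypothetical sketch; the model IDs follow Bedrock's public naming but should be verified against the current model catalog:

```python
def choose_nova_model(needs_images=False, needs_video=False):
    """Pick the cheapest Nova model that covers the required modalities.
    Nova Micro is text-only; Nova Lite adds image and video understanding.
    Model IDs are illustrative; confirm them in the Bedrock console."""
    if needs_images or needs_video:
        return "amazon.nova-lite-v1:0"   # multimodal, higher cost
    return "amazon.nova-micro-v1:0"      # text-only, lowest cost

print(choose_nova_model())                   # text-only workload
print(choose_nova_model(needs_images=True))  # document/image workload
```

For the media company in this scenario, the image and document requirement forces Nova Lite despite its higher cost.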
Which option describes embeddings in the context of AI?
- A . A method for compressing large datasets
- B . An encryption method for securing sensitive data
- C . A method for visualizing high-dimensional data
- D . A numerical method for data representation in a reduced dimensionality space
D
Explanation:
Embeddings in AI refer to numerical representations of data (e.g., text, images) in a lower-dimensional space, capturing semantic or contextual relationships. They are widely used in NLP and other AI tasks to represent complex data in a format that models can process efficiently.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"Embeddings are numerical representations of data in a reduced dimensionality space. In natural language processing, for example, word or sentence embeddings capture semantic relationships, enabling models to process text efficiently for tasks like classification or similarity search."
(Source: AWS AI Practitioner Learning Path, Module on AI Concepts)
Detailed analysis of each option:
Option A: A method for compressing large datasets. While embeddings reduce dimensionality, their primary purpose is not data compression but rather to represent data in a way that preserves meaningful relationships. This option is incorrect.
Option B: An encryption method for securing sensitive data. Embeddings are not related to encryption or data security. They are used for data representation, making this option incorrect.
Option C: A method for visualizing high-dimensional data. While embeddings can sometimes be used in visualization (e.g., with t-SNE), their primary role is data representation for model processing, not visualization. This option is misleading.
Option D: A numerical method for data representation in a reduced dimensionality space. This is the correct answer. Embeddings transform complex data into lower-dimensional numerical vectors, preserving semantic or contextual information for use in AI models.
Reference: AWS AI Practitioner Learning Path: Module on AI Concepts
Amazon Comprehend Developer Guide: Embeddings for Text Analysis (https://docs.aws.amazon.com/comprehend/latest/dg/embeddings.html)
AWS Documentation: What are Embeddings? (https://aws.amazon.com/what-is/embeddings/)
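The reduced-dimensionality idea can be made concrete with a toy example: embedding vectors are compared with cosine similarity, and semantically similar items score close to 1. The vectors below are invented for illustration; real models emit hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy 4-dimensional embeddings (purely illustrative values)
king  = [0.9, 0.8, 0.1, 0.2]
queen = [0.88, 0.82, 0.12, 0.21]
apple = [0.1, 0.2, 0.9, 0.7]

print(cosine_similarity(king, queen))  # close to 1.0: similar meaning
print(cosine_similarity(king, apple))  # much lower: unrelated meaning
```

This is exactly the comparison that powers similarity search over embeddings, as mentioned in the extract above.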
A financial company is developing a generative AI application for loan approval decisions. The company needs the application output to be responsible and fair.
Which solution meets these requirements?
- A . Review the training data to check for biases. Include data from all demographics in the training data.
- B . Use a deep learning model with many hidden layers.
- C . Keep the model’s decision-making process a secret to protect proprietary algorithms.
- D . Continuously monitor the model’s performance on a static test dataset.
A
Explanation:
The correct answer is A, as the most effective way to ensure fairness in AI systems is to audit and diversify training data. AWS Responsible AI documentation stresses that bias originates primarily from unbalanced or incomplete datasets. By including samples from all demographic groups, the model learns patterns that generalize fairly across populations. AWS SageMaker Clarify supports data bias detection and offers fairness metrics like demographic parity and equal opportunity difference. Deeper models (option B) or secrecy (option C) do not mitigate bias and may worsen transparency. Static test datasets (option D) fail to capture evolving data distributions. Therefore, ongoing bias reviews and diverse datasets form the foundation of responsible generative AI for financial use cases.
Referenced AWS AI/ML Documents and Study Guides:
AWS Responsible AI Whitepaper - Fairness and Bias Mitigation
Amazon SageMaker Clarify Documentation - Bias Detection
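One of SageMaker Clarify's pre-training bias metrics, Class Imbalance (CI), quantifies the kind of demographic skew a data review should catch: for a facet with n_a advantaged and n_d disadvantaged samples, CI = (n_a - n_d) / (n_a + n_d). A minimal sketch with hypothetical counts:

```python
def class_imbalance(n_advantaged, n_disadvantaged):
    """Class Imbalance (CI) as defined among SageMaker Clarify's
    pre-training bias metrics: ranges from -1 to 1, with values
    near 0 indicating a balanced facet."""
    return (n_advantaged - n_disadvantaged) / (n_advantaged + n_disadvantaged)

# Hypothetical demographic counts in a loan-approval training set
print(class_imbalance(9000, 1000))  # 0.8: heavily skewed
print(class_imbalance(5200, 4800))  # 0.04: roughly balanced
```

A high CI for any demographic facet signals that the training data should be augmented before the model is trusted for loan decisions.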
A company is using large language models (LLMs) to develop online tutoring applications. The company needs to apply configurable safeguards to the LLMs. These safeguards must ensure that the LLMs follow standard safety rules when creating applications.
Which solution will meet these requirements with the LEAST effort?
- A . Amazon Bedrock playgrounds
- B . Amazon SageMaker Clarify
- C . Amazon Bedrock Guardrails
- D . Amazon SageMaker JumpStart
C
Explanation:
The correct answer is C because Amazon Bedrock Guardrails provides out-of-the-box configurable safety mechanisms to control the behavior of LLMs in generative AI applications. Guardrails can be configured with denylists, content filters, sensitive topics, and tone enforcement, all without retraining the model.
From AWS documentation:
"Amazon Bedrock Guardrails allows developers to define safety and responsible AI policies directly in the model inference layer, making it easy to prevent harmful, biased, or unsafe outputs with minimal configuration."
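As a concrete, hypothetical illustration, a guardrail of this kind is defined declaratively and attached at inference time. The field names below follow the Bedrock CreateGuardrail API as I understand it; verify them against the current documentation before use:

```python
# Hypothetical guardrail configuration for a tutoring assistant:
# content filters plus one denied topic. Field names should be
# verified against the Bedrock CreateGuardrail API reference.
guardrail_config = {
    "name": "tutoring-safety",
    "description": "Safety rules for an online tutoring assistant",
    "topicPolicyConfig": {
        "topicsConfig": [{
            "name": "Medical advice",
            "definition": "Requests for diagnosis or treatment guidance",
            "type": "DENY",
        }]
    },
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
}

# In a real application this would be created once and then referenced
# at inference time, e.g.:
#   import boto3
#   client = boto3.client("bedrock")
#   client.create_guardrail(**guardrail_config)
```

No model retraining is involved; the policy is enforced at the inference layer, which is why this is the least-effort option.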
Explanation of other options:
A. Amazon Bedrock playgrounds provide an interactive console for experimenting with models; they do not enforce safeguards in an application.
B. Amazon SageMaker Clarify detects bias and explains predictions; it does not apply runtime safety controls to LLM outputs.
D. Amazon SageMaker JumpStart is a hub of pretrained models and solution templates, not a safeguard mechanism.
A company is building a generative AI application to help customers make travel reservations. The application will process customer requests and invoke the appropriate API calls to complete reservation transactions.
Which Amazon Bedrock resource will meet these requirements?
- A . Agents
- B . Intelligent prompt routing
- C . Knowledge Bases
- D . Guardrails
A
Explanation:
Comprehensive and Detailed Explanation (AWS AI documents):
Amazon Bedrock Agents are designed to:
Interpret user intent
Plan multi-step actions
Invoke APIs and backend services
Complete transactional workflows
This makes Agents ideal for task-oriented applications such as travel reservations, where the system must:
Understand customer requests
Call reservation APIs
Orchestrate end-to-end transactions
Why the other options are incorrect:
Prompt routing (B) selects prompts or models but does not invoke APIs.
Knowledge Bases (C) retrieve information but do not execute actions.
Guardrails (D) enforce safety and compliance, not task execution.
AWS AI document references:
Amazon Bedrock Agents Documentation
Building Task-Oriented Generative AI Applications
Foundation Model Orchestration on AWS
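To make the orchestration concrete, an agent's action group exposes callable functions with typed parameters that the agent fills from the conversation. The schema below is a hypothetical sketch loosely following the functionSchema format used when creating an action group; verify field names against the current Bedrock Agents documentation:

```python
# Hypothetical action-group definition for a reservations agent.
# The shape loosely follows Bedrock Agents' functionSchema; confirm
# field names against the CreateAgentActionGroup API reference.
reservation_actions = {
    "functions": [
        {
            "name": "search_flights",
            "description": "Find available flights for the requested route and date",
            "parameters": {
                "origin": {"type": "string", "description": "Departure city", "required": True},
                "destination": {"type": "string", "description": "Arrival city", "required": True},
                "date": {"type": "string", "description": "Travel date (YYYY-MM-DD)", "required": True},
            },
        },
        {
            "name": "book_reservation",
            "description": "Book a selected flight for the customer",
            "parameters": {
                "flight_id": {"type": "string", "description": "Flight identifier", "required": True},
            },
        },
    ]
}

# The agent interprets a request such as "Book me a flight to Boston on
# Friday", picks the matching function, fills its parameters, and invokes
# the backing API (typically through an AWS Lambda executor).
```

This function-invocation capability is exactly what distinguishes Agents from Knowledge Bases, which only retrieve information.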
A company wants to make a chatbot to help customers. The chatbot will help solve technical problems without human intervention. The company chose a foundation model (FM) for the chatbot. The chatbot needs to produce responses that adhere to company tone.
Which solution meets these requirements?
- A . Set a low limit on the number of tokens the FM can produce.
- B . Use batch inferencing to process detailed responses.
- C . Experiment and refine the prompt until the FM produces the desired responses.
- D . Define a higher number for the temperature parameter.
C
Explanation:
Experimenting and refining the prompt is the best approach to ensure that the chatbot using a foundation model (FM) produces responses that adhere to the company’s tone.
Prompt Engineering:
Adjusting and refining the prompt allows for better control over the FM’s outputs, ensuring they align with the desired tone and style.
This iterative process involves testing different prompts and modifying them based on the model’s responses to achieve the desired outcome.
Why Option C is Correct:
Directly Influences Output Quality: Allows for fine-tuning of the model’s responses to match the company’s tone.
Cost-Effective: Does not require modifying the model itself, only the inputs to it.
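As a sketch of the iterative process, tone constraints are typically carried in the prompt itself and refined until the outputs match. The template below is illustrative, not an AWS-prescribed format:

```python
def build_prompt(customer_message, tone_notes):
    """Assemble a prompt that pins the model to the company tone.
    In practice the instructions are iterated on until the FM's
    responses consistently match the desired style."""
    return (
        "You are a technical support assistant.\n"
        f"Tone guidelines: {tone_notes}\n"
        "Answer the customer's question concisely and do not speculate.\n\n"
        f"Customer: {customer_message}\nAssistant:"
    )

prompt = build_prompt(
    "My router keeps dropping the connection.",
    "friendly, professional, no jargon",
)
print(prompt)
```

Each refinement cycle changes only this input text, which is why prompt engineering is cheaper than fine-tuning the model.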
Why Other Options are Incorrect:
A. A low token limit only truncates responses; it does not shape their tone.
B. Batch inferencing improves throughput for offline workloads; it does not influence style or tone.
D. A higher temperature increases randomness in the output, making responses less predictable rather than more consistent with a company tone.
A company wants to identify harmful language in the comments section of social media posts by using an ML model. The company will not use labeled data to train the model.
Which strategy should the company use to identify harmful language?
- A . Use Amazon Rekognition moderation.
- B . Use Amazon Comprehend toxicity detection.
- C . Use Amazon SageMaker AI built-in algorithms to train the model.
- D . Use Amazon Polly to monitor comments.
B
Explanation:
Amazon Comprehend toxicity detection is a managed NLP capability that can analyze text for harmful or toxic language using pre-trained models, with no need for labeled data or custom training.
B is correct: Comprehend’s toxicity detection API is designed for this use case, works out-of-the-box, and requires no data labeling or model training.
A (Rekognition) is for image and video content moderation.
C would require labeled data for training.
D (Polly) is for text-to-speech, not content moderation.
“Amazon Comprehend can detect toxicity in text with pre-trained models, requiring no labeled training data.”
(Reference: Amazon Comprehend Toxicity Detection, AWS AI Practitioner Official Guide)
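A minimal sketch of how the toxicity results could be post-processed. The response shape shown is an assumption based on the DetectToxicContent API documentation; verify it before relying on it:

```python
# Hypothetical shape of a Comprehend DetectToxicContent response.
# The real call would look roughly like:
#   comprehend.detect_toxic_content(
#       TextSegments=[{"Text": c} for c in comments], LanguageCode="en")
sample_response = {
    "ResultList": [
        {"Toxicity": 0.92, "Labels": [{"Name": "INSULT", "Score": 0.90}]},
        {"Toxicity": 0.05, "Labels": []},
    ]
}

def flag_toxic(result_list, comments, threshold=0.5):
    """Return the comments whose overall toxicity score exceeds threshold."""
    return [c for c, r in zip(comments, result_list) if r["Toxicity"] > threshold]

comments = ["You are an idiot.", "Thanks, this fixed my issue!"]
print(flag_toxic(sample_response["ResultList"], comments))
```

Because the scores come from pre-trained models, no labeled data is needed, which is the crux of this question.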
A company has thousands of customer support interactions per day and wants to analyze these interactions to identify frequently asked questions and develop insights.
Which AWS service can the company use to meet this requirement?
- A . Amazon Lex
- B . Amazon Comprehend
- C . Amazon Transcribe
- D . Amazon Translate
B
Explanation:
Amazon Comprehend is the correct service to analyze customer support interactions and identify frequently asked questions and insights.
Amazon Comprehend:
A natural language processing (NLP) service that uses machine learning to uncover insights and relationships in text.
Capable of extracting key phrases, detecting entities, analyzing sentiment, and identifying topics from text data, making it ideal for analyzing customer support interactions.
Why Option B is Correct:
Text Analysis Capabilities: Can process large volumes of text to identify common topics, phrases, and sentiment, providing valuable insights.
Suitable for Customer Support Analysis: Specifically designed to understand the content and meaning of text, which is key for identifying frequently asked questions.
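A sketch of the downstream analysis: once key phrases are extracted per interaction (for example with Comprehend's DetectKeyPhrases API), counting them across the corpus surfaces the most frequent support topics. The phrase lists below are invented for illustration:

```python
from collections import Counter

# Hypothetical key phrases extracted from four support interactions,
# e.g. via comprehend.detect_key_phrases(Text=..., LanguageCode="en")
phrases_per_interaction = [
    ["password reset", "login page"],
    ["password reset", "account locked"],
    ["billing cycle", "invoice"],
    ["password reset"],
]

def top_questions(phrase_lists, n=2):
    """Count key phrases across interactions to surface the most
    frequent support topics."""
    counts = Counter(p for phrases in phrase_lists for p in phrases)
    return counts.most_common(n)

print(top_questions(phrases_per_interaction))
# [('password reset', 3), ...]
```

Aggregations like this turn thousands of daily interactions into a ranked list of candidate FAQs.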
Why Other Options are Incorrect:
A. Amazon Lex builds conversational interfaces; it does not analyze existing interaction text for insights.
C. Amazon Transcribe converts speech to text; it could preprocess call recordings but does not extract insights itself.
D. Amazon Translate performs language translation, not text analytics.
A financial company is developing a generative AI application for loan approval decisions. The company needs the application output to be responsible and fair.
Which solution meets these requirements?
- A . Review the training data to check for biases. Include data from all demographics in the training data.
- B . Use a deep learning model with many hidden layers.
- C . Keep the model’s decision-making process a secret to protect proprietary algorithms.
- D . Continuously monitor the model’s performance on a static test dataset.
A
Explanation:
The correct answer is A because ensuring responsibility and fairness in ML begins with bias detection in the training data. Including a balanced representation of all demographics ensures the model learns fairly across different groups, which is critical in regulated industries like finance.
From AWS documentation:
"A key principle of responsible AI is building models that do not propagate or amplify bias. Fairness begins with training data. Reviewing and augmenting data for representation is essential."
Explanation of other options:
B. The number of hidden layers doesn’t inherently improve fairness or responsibility.
C. Keeping decisions opaque violates explainability principles in responsible AI.
D. A static dataset can become outdated and may not reflect real-world shifts, which limits fairness assessment over time.
Referenced AWS AI/ML Documents and Study Guides:
Amazon SageMaker Clarify Documentation - Bias Detection and Explainability
AWS Responsible AI Guidelines
AWS ML Specialty Study Guide - Fairness and Governance
