Practice Free AIF-C01 Exam Online Questions
A company needs to apply numerical transformations to a set of images to transpose and rotate the images.
Which solution will meet these requirements?
- A . Create a deep neural network by using the images as input.
- B . Create an AWS Lambda function to perform the transformations.
- C . Use an Amazon Bedrock large language model (LLM) with a high temperature.
- D . Use AWS Glue Data Quality to make corrections to each image.
B
Explanation:
The correct answer is B. AWS Lambda can efficiently perform image processing and transformations, such as rotating, resizing, or transposing, in a serverless and event-driven manner. According to AWS documentation, Lambda functions can trigger automatically when new images are uploaded to Amazon S3, perform the necessary transformations using libraries such as Pillow or OpenCV, and store the processed outputs. This approach minimizes infrastructure management and scales automatically. Deep neural networks (option A) are excessive for simple transformations, while LLMs (option C) and Glue Data Quality (option D) are unrelated: LLMs handle text, and Glue Data Quality validates tabular data. Lambda is the AWS-recommended service for lightweight, automated image preprocessing tasks.
Referenced AWS AI/ML Documents and Study Guides:
AWS Lambda Developer Guide – Image Processing Use Cases
AWS ML Specialty Guide – Preprocessing and Automation
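As a rough illustration (not taken from the referenced guides), a minimal Lambda handler for this pattern might look like the sketch below. It assumes an S3 event trigger and a Pillow layer attached to the function; the bucket layout and the processed/ prefix are hypothetical.

```python
import io

import boto3
from PIL import Image  # assumes a Pillow layer (Pillow >= 9.1) is attached

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 PutObject event; read the uploaded image
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(body))

    # Numerical transformations: transpose, then rotate 90 degrees
    transformed = image.transpose(Image.Transpose.TRANSPOSE).rotate(90, expand=True)

    # Write the result back under a hypothetical processed/ prefix
    buffer = io.BytesIO()
    transformed.save(buffer, format="PNG")
    s3.put_object(Bucket=bucket, Key=f"processed/{key}", Body=buffer.getvalue())
```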
A company is deploying AI/ML models by using AWS services. The company wants to offer transparency into the models’ decision-making processes and provide explanations for the model outputs.
Which AWS service will meet these requirements?
- A . Amazon SageMaker Model Cards
- B . Amazon Rekognition
- C . Amazon Comprehend
- D . Amazon Lex
A
Explanation:
Amazon SageMaker Model Cards document model details, performance, intended use cases, and risk considerations. They support responsible AI by improving transparency and governance.
Rekognition is computer vision.
Comprehend is NLP for entity/sentiment.
Lex is conversational AI.
Reference: AWS Documentation – SageMaker Model Cards
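As a hedged sketch of the governance workflow, a model card can also be created programmatically through the SageMaker API. The card name and content fields below are illustrative placeholders; the full model card JSON schema supports many more fields.

```python
import json

import boto3

sagemaker = boto3.client("sagemaker")

# Minimal, illustrative model card content; field names are abbreviated
# from the SageMaker model card JSON schema.
content = {
    "model_overview": {"model_description": "Demand-forecasting model"},
    "intended_uses": {"intended_uses": "Internal decision support only"},
}

sagemaker.create_model_card(
    ModelCardName="demand-forecasting-card",  # hypothetical name
    ModelCardStatus="Draft",
    Content=json.dumps(content),
)
```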
An ecommerce company is using a chatbot to automate the customer order submission process. The chatbot is powered by AI and is available to customers directly from the company’s website 24 hours a day, 7 days a week.
Which option is an AI system input vulnerability that the company needs to resolve before the chatbot is made available?
- A . Data leakage
- B . Prompt injection
- C . Large language model (LLM) hallucinations
- D . Concept drift
B
Explanation:
The ecommerce company’s chatbot, powered by AI, automates customer order submissions and is accessible 24/7 via the website. Prompt injection is an AI system input vulnerability where malicious users craft inputs to manipulate the chatbot’s behavior, such as bypassing safeguards or accessing unauthorized information. This vulnerability must be resolved before the chatbot is made available to ensure security.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Prompt injection is a vulnerability in AI systems, particularly chatbots, where malicious inputs can manipulate the model’s behavior, potentially leading to unauthorized actions or harmful outputs. Implementing guardrails and input validation can mitigate this risk."
(Source: AWS Bedrock User Guide, Security Best Practices)
Detailed Explanation:
Option A (Data leakage): Data leakage refers to the unintended exposure of sensitive data during model training or inference, not an input vulnerability affecting a chatbot’s operation.
Option B (Prompt injection): This is the correct answer. Prompt injection is a critical input vulnerability for chatbots, where malicious prompts can exploit the AI to produce harmful or unauthorized responses, a risk that must be addressed before launch.
Option C (LLM hallucinations): Hallucinations refer to the model generating incorrect or ungrounded responses, which is an output issue, not an input vulnerability.
Option D (Concept drift): Concept drift occurs when the data distribution changes over time, degrading model performance. It is a long-term performance issue, not an input vulnerability.
Reference: AWS Bedrock User Guide: Security Best Practices (https://docs.aws.amazon.com/bedrock/latest/userguide/security.html)
AWS AI Practitioner Learning Path: Module on AI Security and Vulnerabilities
AWS Documentation: Securing AI Systems (https://aws.amazon.com/security/)
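One concrete mitigation, sketched below under the assumption that the chatbot calls models through Amazon Bedrock, is to attach a guardrail with the prompt-attack content filter. The guardrail name and blocked-message text are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

# Guardrail with the prompt-attack filter enabled on user inputs.
# For the PROMPT_ATTACK filter type, outputStrength must be NONE.
bedrock.create_guardrail(
    name="chatbot-input-guardrail",  # placeholder name
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"}
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
```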
A company wants to build an ML model to detect abnormal patterns in sensor data. The company does not have labeled data for training.
Which ML method will meet these requirements?
- A . Linear regression
- B . Classification
- C . Decision tree
- D . Autoencoders
D
Explanation:
The correct answer is D because autoencoders are an unsupervised machine learning method commonly used for anomaly detection when labeled data is not available.
From AWS documentation:
"Autoencoders learn to compress and reconstruct input data. During anomaly detection, they learn normal patterns in data. Data points that the model cannot accurately reconstruct are flagged as anomalies."
This approach is ideal when there is no labeled data and when patterns must be learned from normal behavior alone, a common situation in IoT sensor data environments.
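To make the reconstruction-error idea concrete, here is a minimal, illustrative PyTorch sketch. The feature count, training data, and threshold are stand-ins; in practice the threshold would be calibrated on held-out normal data.

```python
import torch
import torch.nn as nn

# Tiny autoencoder for unlabeled sensor readings (8 features assumed).
model = nn.Sequential(
    nn.Linear(8, 3), nn.ReLU(),   # encoder compresses 8 features to 3
    nn.Linear(3, 8),              # decoder reconstructs the input
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal_data = torch.randn(1000, 8)  # stand-in for historical sensor data
for _ in range(200):                # learn to reconstruct "normal" patterns
    optimizer.zero_grad()
    loss = loss_fn(model(normal_data), normal_data)
    loss.backward()
    optimizer.step()

# Score new readings: high reconstruction error suggests an anomaly
with torch.no_grad():
    new_readings = torch.randn(5, 8)
    errors = ((model(new_readings) - new_readings) ** 2).mean(dim=1)
    threshold = 0.5  # would be calibrated on held-out normal data
    print(errors > threshold)
```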
Explanation of other options:
Linear regression (A) and classification (B) are supervised methods that require labeled training data, which the company does not have.
Decision trees (C) are likewise typically trained on labeled data, so they cannot learn abnormal patterns from unlabeled sensor readings.
A bank has fine-tuned a large language model (LLM) to expedite the loan approval process. During an external audit of the model, the company discovered that the model was approving loans at a faster pace for a specific demographic than for other demographics.
How should the bank fix this issue MOST cost-effectively?
- A . Include more diverse training data. Fine-tune the model again by using the new data.
- B . Use Retrieval Augmented Generation (RAG) with the fine-tuned model.
- C . Use AWS Trusted Advisor checks to eliminate bias.
- D . Pre-train a new LLM with more diverse training data.
A
Explanation:
The best practice for mitigating bias in AI/ML models, according to AWS and responsible AI frameworks, is to ensure that the training data is representative and diverse. If a model demonstrates bias (such as favoring a particular demographic), the recommended, cost-effective approach is to collect additional data from underrepresented groups and retrain (fine-tune) the model with the improved dataset.
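Assuming the model was customized on Amazon Bedrock, the retraining step could be a new fine-tuning job pointed at the augmented dataset, roughly as sketched below. All names, ARNs, and S3 URIs are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

# Fine-tune again with the more diverse dataset; identifiers are placeholders.
bedrock.create_model_customization_job(
    jobName="loan-model-bias-mitigation-v2",
    customModelName="loan-approval-model-v2",
    roleArn="arn:aws:iam::123456789012:role/BedrockFineTuneRole",
    baseModelIdentifier="amazon.titan-text-express-v1",  # example base model
    trainingDataConfig={"s3Uri": "s3://example-bucket/diverse-loan-data.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/customization-output/"},
    hyperParameters={"epochCount": "2"},  # values are passed as strings
)
```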
An AI practitioner is using an LLM-as-a-judge in Amazon Bedrock to evaluate the quality of agent responses in a production environment. The AI practitioner wants to apply a built-in metric that assesses how thoroughly the agent responses address all parts of each prompt or question.
Which metric will meet these requirements?
- A . Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
- B . Completeness
- C . Following instructions
- D . Refusal
B
Explanation:
In Amazon Bedrock evaluations, Completeness measures how thoroughly a model or agent response addresses all aspects of the user prompt.
AWS evaluation guidance for LLM-as-a-judge explains that:
- Completeness focuses on coverage of prompt requirements.
- It is especially useful for evaluating multi-part questions.
- It is a built-in qualitative metric in agent evaluation workflows.
Why the other options are incorrect:
ROUGE (A) measures text overlap, mainly for summarization.
Following instructions (C) evaluates adherence, not coverage.
Refusal (D) measures appropriate refusal behavior.
AWS AI document references:
Amazon Bedrock Model Evaluation
LLM-as-a-Judge Metrics
Evaluating Agent Responses on AWS
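The LLM-as-a-judge pattern itself can be illustrated with a hand-rolled sketch. This is not the managed Bedrock evaluation API; the model ID and rubric are examples only.

```python
import boto3

runtime = boto3.client("bedrock-runtime")

# Ask a judge model to score completeness: did the response cover every
# part of the question? Rubric wording and model ID are illustrative.
judge_prompt = (
    "Rate 1-5 how completely the RESPONSE addresses every part of the QUESTION.\n"
    "QUESTION: List S3 storage classes and explain how retrieval is priced.\n"
    "RESPONSE: S3 offers Standard and Glacier storage classes."
)
result = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": judge_prompt}]}],
)
print(result["output"]["message"]["content"][0]["text"])
```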
A company is introducing a new feature for its application. The feature will refine the style of output messages. The company will fine-tune a large language model (LLM) on Amazon Bedrock to implement the feature.
Which type of data does the company need to meet these requirements?
- A . Samples of only input messages
- B . Samples of only output messages
- C . Samples of pairs of input and output messages
- D . Separate samples of input and output messages
C
Explanation:
Fine-tuning requires paired input-output examples to teach the model how to respond to inputs with the desired styled outputs.
Single inputs (A) or outputs (B) are insufficient.
Separate, unpaired samples (D) don’t establish the input-output mapping.
Reference: AWS Documentation – Preparing data for fine-tuning FMs
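For instance, Bedrock text-model fine-tuning data is commonly prepared as JSON Lines of prompt/completion pairs. The sketch below writes two hypothetical style-tuning pairs; the messages and file name are made up for illustration.

```python
import json

# Each record pairs an input message with the same message rewritten in
# the target output style; both examples are hypothetical.
pairs = [
    {"prompt": "Your order has shipped.",
     "completion": "Great news! Your order is on its way to you."},
    {"prompt": "Payment failed.",
     "completion": "We couldn't process your payment. Let's try that again together."},
]
with open("style_tuning.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```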
A company runs a website for users to make travel reservations. The company wants an AI solution to help create consistent branding for hotels on the website. The AI solution needs to generate hotel descriptions for the website in a consistent writing style.
Which AWS service will meet these requirements?
- A . Amazon Comprehend
- B . Amazon Personalize
- C . Amazon Rekognition
- D . Amazon Bedrock
D
Explanation:
The correct answer is D because Amazon Bedrock provides access to foundation models (FMs) from various providers for generative AI use cases, including text generation. It supports generating content in a consistent tone, voice, or writing style using prompts or few-shot examples.
From AWS documentation:
"Amazon Bedrock allows you to build and scale generative AI applications using foundation models from AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability AI, and Amazon. These models can generate text with controlled tone and style for applications like branding, content creation, and copywriting."
Explanation of other options:
Amazon Comprehend (A) analyzes existing text for entities, sentiment, and key phrases; it does not generate new content.
Amazon Personalize (B) delivers recommendations, not text generation.
Amazon Rekognition (C) is a computer vision service for images and video.
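To sketch how the Bedrock approach works in practice (the model ID and brand-voice instructions below are examples, not from AWS documentation), a system prompt passed to the Converse API can hold the writing style constant across every generated description:

```python
import boto3

runtime = boto3.client("bedrock-runtime")

# The system prompt pins the brand voice so every hotel description
# comes back in the same style; all wording here is illustrative.
response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    system=[{"text": "You write hotel descriptions in a warm, upscale brand "
                     "voice: exactly two sentences, ending with an invitation "
                     "to book."}],
    messages=[{"role": "user", "content": [{"text":
        "Describe the Harborview Inn, a 40-room hotel beside the marina."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```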
Which strategy will prevent model hallucinations?
- A . Fact-check the output of the large language model (LLM).
- B . Compare the output of the large language model (LLM) to the results of an internet search.
- C . Use contextual grounding.
- D . Use relevance grounding.
C
Explanation:
Contextual grounding constrains a model to produce responses that are supported by supplied source content. In Amazon Bedrock, Guardrails contextual grounding checks score each response for grounding and relevance against the reference material and block ungrounded answers, preventing hallucinations at generation time. Fact-checking output (A) and comparing it to internet search results (B) only detect hallucinations after they occur, and "relevance grounding" (D) is not a recognized mitigation technique.
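In Amazon Bedrock, contextual grounding is available as a guardrail check; here is a hedged sketch (the guardrail name, thresholds, and messages are placeholders):

```python
import boto3

bedrock = boto3.client("bedrock")

# Guardrail with contextual grounding checks: responses scoring below the
# grounding/relevance thresholds against the supplied source are blocked.
bedrock.create_guardrail(
    name="grounded-responses-guardrail",  # placeholder name
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.8},
            {"type": "RELEVANCE", "threshold": 0.8},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="I don't have grounded information to answer that.",
)
```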
A software company has deployed an AI model to translate paragraphs of text into a user’s chosen language. The model can produce a confidence score for the translations. The company wants to incorporate its employees into a review process to validate and improve the model’s translations.
Which AWS solution will meet these requirements?
- A . Amazon SageMaker Clarify
- B . Amazon Augmented AI (Amazon A2I)
- C . Amazon SageMaker Model Monitor
- D . Amazon Bedrock Agents
B
Explanation:
Amazon Augmented AI (Amazon A2I) is designed to enable human review workflows for machine learning predictions where human judgment is required. AWS documentation states that A2I allows organizations to integrate human reviewers into AI workflows to review low-confidence predictions or samples selected by business rules. This directly aligns with the requirement to involve employees in validating and improving translation outputs.
In this scenario, the translation model produces a confidence score, which is a common trigger used by Amazon A2I to route predictions to human reviewers. AWS explicitly describes that A2I supports use cases such as natural language processing, translation, content moderation, and document processing, where automated models may need human oversight to ensure accuracy and quality.
Amazon A2I provides managed workflows, reviewer task interfaces, and audit trails that allow employees to review, correct, and validate model outputs. The feedback collected through A2I can then be used to improve future model training, increasing translation quality over time.
The other options do not meet the requirement. Amazon SageMaker Clarify focuses on bias detection and explainability, not human review workflows. Amazon SageMaker Model Monitor is used to detect data drift and model performance degradation in production, not to involve humans in validating predictions. Amazon Bedrock Agents are designed to orchestrate tasks and interactions with foundation models, not to manage human-in-the-loop review processes.
AWS positions Amazon A2I as a core service for implementing human-in-the-loop machine learning, making it the correct solution for incorporating employees into a structured translation review process.
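A hedged sketch of the confidence-score trigger: once a flow definition exists, low-confidence translations can be routed to employees through the A2I runtime API. The ARN, threshold, and payload fields below are placeholders.

```python
import json

import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

def review_if_low_confidence(source, translated, confidence):
    # Route only low-confidence translations to the human review workflow
    if confidence < 0.80:  # placeholder threshold
        a2i.start_human_loop(
            HumanLoopName=f"translation-review-{hash(source) & 0xFFFF}",
            FlowDefinitionArn=(
                "arn:aws:sagemaker:us-east-1:123456789012:"
                "flow-definition/translation-review"  # placeholder ARN
            ),
            HumanLoopInput={"InputContent": json.dumps({
                "sourceText": source,
                "translatedText": translated,
                "confidence": confidence,
            })},
        )
```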
