Practice Free AIF-C01 Exam Online Questions
What does an F1 score measure in the context of foundation model (FM) performance?
- A . Model precision and recall.
- B . Model speed in generating responses.
- C . Financial cost of operating the model.
- D . Energy efficiency of the model’s computations.
A
Explanation:
The F1 score is the harmonic mean of precision and recall, making it a balanced metric for evaluating model performance, especially on imbalanced datasets where accuracy alone can be misleading. Speed, cost, and energy efficiency are unrelated to the F1 score.
Reference: AWS Foundation Models Guide.
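The harmonic-mean relationship described above can be sketched in a few lines of Python:

```python
# Minimal sketch: the F1 score as the harmonic mean of precision and recall.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a model with 0.8 precision and 0.6 recall.
print(f1_score(0.8, 0.6))  # ≈ 0.686
```

Because the harmonic mean is dominated by the smaller of the two values, a model cannot achieve a high F1 score by excelling at precision while neglecting recall, or vice versa.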
A company has created a custom model by fine-tuning an existing large language model (LLM) from Amazon Bedrock. The company wants to deploy the model to production and use the model to handle a steady rate of requests each minute.
Which solution meets these requirements MOST cost-effectively?
- A . Deploy the model by using an Amazon EC2 compute optimized instance.
- B . Use the model with on-demand throughput on Amazon Bedrock.
- C . Store the model in Amazon S3 and host the model by using AWS Lambda.
- D . Purchase Provisioned Throughput for the model on Amazon Bedrock.
D
Explanation:
The correct answer is D: Purchase Provisioned Throughput on Amazon Bedrock, which provides guaranteed and predictable model capacity for workloads with consistent or steady request volume. According to the Amazon Bedrock documentation, Provisioned Throughput is specifically designed for production applications that require reliable, consistent inference performance at a controlled cost. It allows customers to reserve a fixed number of Model Units, ensuring low latency and cost savings compared to on-demand pricing when traffic is steady. On-demand throughput (option B) is ideal for unpredictable or sporadic usage, but it becomes more expensive for stable traffic patterns because it charges per token with no discount for steady volume. Hosting the model on EC2 (option A) or Lambda (option C) adds operational overhead, requires containerization and scaling management, and may not efficiently deliver the GPU performance that LLM inference needs. Bedrock Provisioned Throughput eliminates infrastructure management and is the most cost-effective solution for stable, predictable workloads.
Referenced AWS Documentation:
Amazon Bedrock Developer Guide: Provisioned Throughput
AWS ML Specialty Study Guide: Cost Optimization for Generative AI
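Once Provisioned Throughput is purchased, the provisioned model's ARN is passed as the `modelId` when invoking the model. The sketch below illustrates this; the ARN is a placeholder, and the simple `{"inputText": ...}` payload is an assumption (the request body shape varies by model family):

```python
# Hedged sketch: invoking a fine-tuned model through Bedrock Provisioned
# Throughput. The provisioned model ARN below is a placeholder.
import json

def build_request(prompt: str) -> dict:
    # Assumed payload shape; real body format depends on the model family.
    return {"inputText": prompt}

if __name__ == "__main__":
    import boto3

    bedrock = boto3.client("bedrock-runtime")
    provisioned_model_arn = (
        "arn:aws:bedrock:us-east-1:123456789012:provisioned-model/EXAMPLE"
    )
    response = bedrock.invoke_model(
        modelId=provisioned_model_arn,
        body=json.dumps(build_request("Summarize our Q3 results.")),
    )
    print(json.loads(response["body"].read()))
```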
A company wants to use a large language model (LLM) to generate product descriptions. The company wants to give the model example descriptions that follow a format.
Which prompt engineering technique will generate descriptions that match the format?
- A . Zero-shot prompting
- B . Chain-of-thought prompting
- C . One-shot prompting
- D . Few-shot prompting
D
Explanation:
The correct answer is D because Few-shot prompting involves providing the LLM with a few examples of the expected input-output format. This helps the model learn and mimic the pattern or structure required in the response, such as generating product descriptions that follow a specific template.
From AWS documentation:
"Few-shot prompting helps guide the model to produce structured and domain-specific outputs by supplying a small number of example inputs and corresponding outputs."
Explanation of other options:
A. Zero-shot prompting provides no examples, so the model has no format to imitate.
B. Chain-of-thought prompting elicits step-by-step reasoning rather than a specific output format.
C. One-shot prompting provides only a single example, which is less reliable than several examples for establishing a consistent format.
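A few-shot prompt for this scenario can be assembled as shown below; the product names and descriptions are invented for illustration:

```python
# Sketch of a few-shot prompt: two example descriptions establish the
# format, and the model is asked to continue the pattern for a new product.
EXAMPLES = [
    ("Trail running shoes",
     "Lightweight trail shoes with a grippy outsole. Ideal for rocky terrain."),
    ("Insulated water bottle",
     "Double-walled stainless steel bottle. Keeps drinks cold for 24 hours."),
]

def build_few_shot_prompt(new_product: str) -> str:
    lines = ["Write a product description in the format shown below.\n"]
    for name, description in EXAMPLES:
        lines.append(f"Product: {name}\nDescription: {description}\n")
    # The prompt ends mid-pattern so the model completes the description.
    lines.append(f"Product: {new_product}\nDescription:")
    return "\n".join(lines)

print(build_few_shot_prompt("Packable rain jacket"))
```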
A company wants to extract key insights from large policy documents to increase employee efficiency.
Which machine learning technique meets these requirements?
- A . Regression
- B . Clustering
- C . Summarization
- D . Classification
C
Explanation:
Summarization is a natural language processing (NLP) task that condenses long documents into concise, meaningful summaries while retaining the key information.
Regression predicts numerical values.
Clustering groups similar items.
Classification assigns data into predefined categories.
Reference: AWS NLP Use Cases: Summarization
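A summarization task like this can be driven through a Bedrock-hosted model via the Converse API, sketched below; the model ID is a placeholder:

```python
# Hedged sketch: asking a Bedrock-hosted model to summarize a policy
# document. The model ID in the __main__ block is a placeholder.
def build_summary_messages(document_text: str) -> list:
    return [{
        "role": "user",
        "content": [{
            "text": "Summarize the key points of this policy document:\n\n"
                    + document_text
        }],
    }]

if __name__ == "__main__":
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId="amazon.titan-text-express-v1",  # placeholder model ID
        messages=build_summary_messages("...full policy text here..."),
    )
    print(response["output"]["message"]["content"][0]["text"])
```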
A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short and written in a specific language.
Which solution will align the LLM response quality with the company’s expectations?
- A . Adjust the prompt.
- B . Choose an LLM of a different size.
- C . Increase the temperature.
- D . Increase the Top K value.
A
Explanation:
Adjusting the prompt is the correct solution to align the LLM outputs with the company’s expectations for short, specific language responses.
Adjust the Prompt:
Modifying the prompt can guide the LLM to produce outputs that are shorter and tailored to the desired language.
A well-crafted prompt can provide specific instructions to the model, such as "Answer in a short sentence in Spanish."
Why Option A is Correct:
Control Over Output: Adjusting the prompt allows for direct control over the style, length, and language of the LLM outputs.
Flexibility: Prompt engineering is a flexible approach to refining the model’s behavior without modifying the model itself.
Why Other Options are Incorrect:
B. Choose an LLM of a different size: The model size does not directly impact the response length or language.
C. Increase the temperature: Increases randomness in responses but does not ensure brevity or specific language.
D. Increase the Top K value: Affects diversity in model output but does not align directly with response length or language specificity.
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis.
The company wants to know how much information can fit into one prompt.
Which consideration will inform the company’s decision?
- A . Temperature
- B . Context window
- C . Batch size
- D . Model size
B
Explanation:
The context window determines how much information can fit into a single prompt when using a large language model (LLM) like those on Amazon Bedrock.
Context Window:
The context window is the maximum amount of text (measured in tokens) that a language model can process in a single pass.
For LLM applications, the size of the context window limits how much input data, such as text for sentiment analysis, can be fed into the model at once.
Why Option B is Correct:
Determines Prompt Size: The context window size directly informs how much information (e.g., words or sentences) can fit in one prompt.
Model Capacity: The larger the context window, the more information the model can consider for generating outputs.
Why Other Options are Incorrect:
A. Temperature: Controls the randomness of generated output, not how much input the model can accept.
C. Batch size: Determines how many records are processed together, not the capacity of a single prompt.
D. Model size: Refers to the number of parameters, which does not directly determine the prompt length limit.
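A rough pre-check against a model's context window can be sketched as below; note that real token counts depend on the model's tokenizer, and the four-characters-per-token heuristic is only an approximation:

```python
# Rough sketch: estimating whether a prompt is likely to fit within a
# model's context window. The 4-characters-per-token heuristic is a
# crude approximation, not a real tokenizer.
def fits_in_context(prompt: str, context_window_tokens: int) -> bool:
    estimated_tokens = len(prompt) / 4  # heuristic, not exact
    return estimated_tokens <= context_window_tokens

print(fits_in_context("Classify the sentiment of: 'Great product!'", 8000))
```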
A company is using an Amazon Nova Canvas model to generate images. The model generates images successfully. The company needs to prevent the model from including specific items in the generated images.
Which solution will meet this requirement?
- A . Use a higher temperature value.
- B . Use a more detailed prompt.
- C . Use a negative prompt.
- D . Use another foundation model (FM).
C
Explanation:
The correct answer is C: Use a negative prompt. Negative prompts instruct a generative image model to avoid certain features, objects, or styles in the output. This technique is fully supported by models like Amazon Nova Canvas on Bedrock, which are based on diffusion or image generation architectures. According to AWS documentation, negative prompts refine output control by telling the model what not to include, thereby improving brand alignment, compliance, or creative direction. A higher temperature increases randomness, not control. A detailed prompt helps, but without exclusion instructions, the model may still include unwanted elements. Changing the model may yield better output but doesn’t directly solve this control requirement. Negative prompts are purpose-built for this scenario.
Referenced AWS AI/ML Documents and Study Guides:
Amazon Bedrock Documentation: Prompt Engineering for Image Models
AWS Generative AI Guide: Controlled Generation with Negative Prompts
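A negative prompt is supplied alongside the main prompt in the image-generation request body. The sketch below follows the Bedrock image-generation request schema (`textToImageParams` with a `negativeText` field); treat the exact field names and model ID as illustrative:

```python
# Hedged sketch: a Nova Canvas request that uses negativeText to keep
# specific items out of generated images. Field names follow the Bedrock
# image-generation request schema; the model ID is a placeholder.
import json

def build_image_request(prompt: str, negative: str) -> dict:
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {
            "text": prompt,            # what to include
            "negativeText": negative,  # what to exclude from the image
        },
        "imageGenerationConfig": {"numberOfImages": 1},
    }

if __name__ == "__main__":
    import boto3

    client = boto3.client("bedrock-runtime")
    body = json.dumps(
        build_image_request("A modern office lobby", "people, text, logos")
    )
    response = client.invoke_model(modelId="amazon.nova-canvas-v1:0", body=body)
```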
An ecommerce company wants to improve search engine recommendations by customizing the results for each user of the company’s ecommerce platform.
Which AWS service meets these requirements?
- A . Amazon Personalize
- B . Amazon Kendra
- C . Amazon Rekognition
- D . Amazon Transcribe
A
Explanation:
The ecommerce company wants to improve search engine recommendations by customizing results for each user. Amazon Personalize is a machine learning service that enables personalized recommendations, tailoring search results or product suggestions based on individual user behavior and preferences, making it the best fit for this requirement.
Exact Extract from AWS AI Documents:
From the Amazon Personalize Developer Guide:
"Amazon Personalize enables developers to build applications with personalized recommendations, such as customized search results or product suggestions, by analyzing user behavior and preferences to deliver tailored experiences."
(Source: Amazon Personalize Developer Guide, Introduction to Amazon Personalize)
Detailed Explanation:
Option A: Amazon Personalize This is the correct answer. Amazon Personalize specializes in creating personalized recommendations, ideal for customizing search results for each user on an ecommerce platform.
Option B: Amazon Kendra Amazon Kendra is an intelligent search service for enterprise data, focusing on retrieving relevant documents or answers, not on personalizing search results for individual users.
Option C: Amazon Rekognition Amazon Rekognition is for image and video analysis, such as object detection or facial recognition, and is unrelated to search engine recommendations.
Option D: Amazon Transcribe Amazon Transcribe converts speech to text, which is not relevant for improving search engine recommendations.
Reference: Amazon Personalize Developer Guide: Introduction to Amazon Personalize (https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html)
AWS AI Practitioner Learning Path: Module on Recommendation Systems
AWS Documentation: Personalization with Amazon Personalize (https://aws.amazon.com/personalize/)
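At runtime, per-user results come from a deployed Personalize campaign via the `GetRecommendations` API, sketched below; the campaign ARN and user ID are placeholders:

```python
# Hedged sketch: requesting personalized items for a specific user from
# an Amazon Personalize campaign. ARN and user ID are placeholders.
def build_request_params(campaign_arn: str, user_id: str, count: int = 10) -> dict:
    return {
        "campaignArn": campaign_arn,
        "userId": user_id,
        "numResults": count,
    }

if __name__ == "__main__":
    import boto3

    runtime = boto3.client("personalize-runtime")
    params = build_request_params(
        "arn:aws:personalize:us-east-1:123456789012:campaign/EXAMPLE",
        "user-42",
    )
    response = runtime.get_recommendations(**params)
    for item in response["itemList"]:
        print(item["itemId"])
```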
A company is building a customer service chatbot. The company wants the chatbot to improve its responses by learning from past interactions and online resources.
Which AI learning strategy provides this self-improvement capability?
- A . Supervised learning with a manually curated dataset of good responses and bad responses
- B . Reinforcement learning with rewards for positive customer feedback
- C . Unsupervised learning to find clusters of similar customer inquiries
- D . Supervised learning with a continuously updated FAQ database
B
Explanation:
Reinforcement learning allows a model to learn and improve over time based on feedback from its environment. In this case, the chatbot can improve its responses by being rewarded for positive customer feedback, which aligns well with the goal of self-improvement based on past interactions and new information.
Option B (Correct): "Reinforcement learning with rewards for positive customer feedback": This is the correct answer as reinforcement learning enables the chatbot to learn from feedback and adapt its behavior accordingly, providing self-improvement capabilities.
Option A: "Supervised learning with a manually curated dataset" is incorrect because it does not support continuous learning from new interactions.
Option C: "Unsupervised learning to find clusters of similar customer inquiries" is incorrect because unsupervised learning does not provide a mechanism for improving responses based on feedback.
Option D: "Supervised learning with a continuously updated FAQ database" is incorrect because it still relies on manually curated data rather than self-improvement from feedback.
AWS AI Practitioner
Reference: Reinforcement Learning on AWS: AWS provides reinforcement learning frameworks that can be used to train models to improve their performance based on feedback.
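The reward-driven improvement idea can be illustrated with a toy bandit-style loop, sketched below. This is purely illustrative: real chatbot self-improvement (e.g., RLHF) is far more involved, and the reward values here are simulated:

```python
# Illustrative sketch only: a tiny bandit-style loop where positive
# feedback (reward) makes a response style more likely to be chosen
# again. Styles and rewards are invented for illustration.
import random

def update_value(value: float, reward: float, learning_rate: float = 0.1) -> float:
    # Move the estimated value toward the observed reward.
    return value + learning_rate * (reward - value)

if __name__ == "__main__":
    random.seed(0)
    values = {"formal": 0.0, "casual": 0.0}
    for _ in range(100):
        # Mostly exploit the best-known style; occasionally explore.
        if random.random() > 0.1:
            style = max(values, key=values.get)
        else:
            style = random.choice(list(values))
        reward = 1.0 if style == "casual" else 0.2  # simulated feedback
        values[style] = update_value(values[style], reward)
    print(values)  # "casual" should end with the higher estimated value
```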
A company has a generative AI application that uses a pre-trained foundation model (FM) on Amazon Bedrock. The company wants the FM to include more context by using company information.
Which solution meets these requirements MOST cost-effectively?
- A . Use Amazon Bedrock Knowledge Bases.
- B . Choose a different FM on Amazon Bedrock.
- C . Use Amazon Bedrock Agents.
- D . Deploy a custom model on Amazon Bedrock.
A
Explanation:
Amazon Bedrock Knowledge Bases enable Retrieval Augmented Generation (RAG) by letting you connect external company data sources to your foundation model in a serverless, cost-effective manner, without retraining or fine-tuning. This allows the model to answer questions and generate content grounded in your own documents or data.
A is correct:
"Knowledge Bases for Amazon Bedrock enable generative AI applications to retrieve and include your company’s information for more contextual responses, without the need for expensive retraining or custom models."
(Reference: Amazon Bedrock Knowledge Bases)
B is not cost-effective or guaranteed to add your company’s context.
C (Agents) are for orchestrating workflows, not specifically RAG/context.
D (Custom model deployment) is costly and unnecessary for just adding context.
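The RAG flow described above maps to the `RetrieveAndGenerate` API, sketched below; the knowledge base ID and model ARN are placeholders:

```python
# Hedged sketch: querying a Bedrock knowledge base so the FM's answer is
# grounded in company documents (RAG). IDs and ARNs are placeholders.
def build_rag_config(knowledge_base_id: str, model_arn: str) -> dict:
    return {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": knowledge_base_id,
            "modelArn": model_arn,
        },
    }

if __name__ == "__main__":
    import boto3

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        input={"text": "What is our parental leave policy?"},
        retrieveAndGenerateConfiguration=build_rag_config(
            "KBEXAMPLE01",
            "arn:aws:bedrock:us-east-1::foundation-model/"
            "anthropic.claude-3-haiku-20240307-v1:0",
        ),
    )
    print(response["output"]["text"])
```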
