Practice Free AIF-C01 Exam Online Questions
A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company’s product manuals. The manuals are stored as PDF files.
Which solution meets these requirements MOST cost-effectively?
- A . Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock.
- B . Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock.
- C . Use all the PDF documents to fine-tune a model with Amazon Bedrock. Use the fine-tuned model to process user prompts.
- D . Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.
D
Explanation:
Using Amazon Bedrock with large language models (LLMs) allows the company to answer questions grounded in its product manuals. To do this cost-effectively, only the content relevant to each question should be sent to the model.
Option A: "Use prompt engineering to add one PDF file as context to the user prompt" is incorrect. A single manual may not contain the answer to every question, and sending an entire PDF with every prompt still processes far more tokens than necessary.
Option B: "Use prompt engineering to add all the PDF files as context to the user prompt" is incorrect. Including every manual in each prompt would greatly increase the number of input tokens processed per request, raising costs and potentially exceeding the model's context window.
Option C: "Use all the PDF documents to fine-tune a model with Amazon Bedrock" is incorrect. Fine-tuning is more expensive than retrieval-based approaches and must be repeated whenever the manuals change.
Option D (Correct): "Upload PDF documents to an Amazon Bedrock knowledge base": This is the most cost-effective solution. Amazon Bedrock knowledge bases implement Retrieval Augmented Generation (RAG): the PDFs are ingested, chunked, and indexed so that only the passages relevant to each user prompt are retrieved and supplied as context, minimizing the tokens processed per request.
AWS AI Practitioner
Reference: Knowledge Bases for Amazon Bedrock: AWS documentation describes how knowledge bases supply relevant context from your documents to foundation models, reducing the amount of data processed per prompt compared with sending whole documents or fine-tuning.
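The cost impact of context size can be illustrated with rough token arithmetic. All numbers below (per-token price, manual sizes, retrieved-chunk size) are made-up illustrative assumptions, not actual Amazon Bedrock pricing or real document sizes:

```python
# Rough sketch: input-token cost per user prompt for each context strategy.
# All numbers are illustrative assumptions, not real Bedrock prices.

PRICE_PER_1K_INPUT_TOKENS = 0.003   # assumed price in USD
TOKENS_PER_MANUAL = 50_000          # assumed size of one PDF manual in tokens
NUM_MANUALS = 20
TOKENS_RETRIEVED_CHUNKS = 1_500     # assumed size of retrieved knowledge base context

def cost(input_tokens):
    """Input-token cost in USD for a single prompt."""
    return input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

one_pdf = cost(TOKENS_PER_MANUAL)                   # one whole manual as context
all_pdfs = cost(TOKENS_PER_MANUAL * NUM_MANUALS)    # every manual as context
kb_rag = cost(TOKENS_RETRIEVED_CHUNKS)              # only retrieved chunks as context

print(f"one PDF as context:        ${one_pdf:.4f} per prompt")
print(f"all PDFs as context:       ${all_pdfs:.4f} per prompt")
print(f"retrieved chunks (RAG):    ${kb_rag:.4f} per prompt")
```

Even with generous assumptions, retrieving only the relevant chunks keeps the per-prompt token count orders of magnitude below sending whole manuals.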
An airline company wants to build a conversational AI assistant to answer customer questions about flight schedules, booking, and payments. The company wants to use large language models (LLMs) and a knowledge base to create a text-based chatbot interface.
Which solution will meet these requirements with the LEAST development effort?
- A . Train models on Amazon SageMaker Autopilot.
- B . Develop a Retrieval Augmented Generation (RAG) agent by using Amazon Bedrock.
- C . Create a Python application by using Amazon Q Developer.
- D . Fine-tune models on Amazon SageMaker JumpStart.
B
Explanation:
The airline company aims to build a conversational AI assistant using large language models (LLMs) and a knowledge base to create a text-based chatbot with minimal development effort. Retrieval Augmented Generation (RAG) on Amazon Bedrock is an ideal solution because it combines LLMs with a knowledge base to provide accurate, contextually relevant responses without requiring extensive model training or custom development. RAG retrieves relevant information from a knowledge base and uses an LLM to generate responses, simplifying the development process.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Retrieval Augmented Generation (RAG) in Amazon Bedrock enables developers to build conversational AI applications by combining foundation models with external knowledge bases. This approach minimizes development effort by leveraging pre-trained models and integrating them with data sources, such as FAQs or databases, to provide accurate and contextually relevant responses." (Source: AWS Bedrock User Guide, Retrieval Augmented Generation)
Detailed Explanation:
Option A: Train models on Amazon SageMaker Autopilot. SageMaker Autopilot is designed for automated machine learning (AutoML) tasks like classification or regression, not for building conversational AI with LLMs and knowledge bases. It requires significant data preparation and is not optimized for chatbot development, making it less suitable.
Option B: Develop a Retrieval Augmented Generation (RAG) agent by using Amazon Bedrock. This is the correct answer. RAG on Amazon Bedrock allows the company to use pre-trained LLMs and integrate them with a knowledge base (e.g., flight schedules or FAQs) to build a chatbot with minimal effort. It avoids the need for extensive training or coding, aligning with the requirement for least development effort.
Option C: Create a Python application by using Amazon Q Developer. While Amazon Q Developer can assist with code generation, building a chatbot from scratch in Python requires significant development effort, including integrating LLMs and a knowledge base manually, which is more complex than using RAG on Bedrock.
Option D: Fine-tune models on Amazon SageMaker JumpStart. Fine-tuning models on SageMaker JumpStart requires preparing training data and customizing LLMs, which involves more effort than using a pre-built RAG solution on Bedrock. This option does not involve the least development effort.
Reference: AWS Bedrock User Guide: Retrieval Augmented Generation (https://docs.aws.amazon.com/bedrock/latest/userguide/rag.html)
AWS AI Practitioner Learning Path: Module on Generative AI and Conversational AI
Amazon Bedrock Developer Guide: Building Conversational AI (https://aws.amazon.com/bedrock/)
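The retrieve-then-generate flow described above can be sketched in plain Python. The keyword-overlap retriever and the hard-coded documents below are stand-ins for the embedding model and vector store that an Amazon Bedrock knowledge base would manage for you:

```python
# Minimal RAG sketch: retrieve the most relevant document, then build an
# augmented prompt for the LLM. A real system would use embeddings and a
# vector store; keyword overlap stands in for semantic search here.

knowledge_base = {
    "baggage": "Each passenger may check two bags up to 23 kg each.",
    "refunds": "Refundable fares can be cancelled online for a full refund.",
    "schedules": "Flight schedules are updated daily at 06:00 UTC.",
}

def retrieve(question):
    """Return the stored document sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(doc):
        return len(q_words & set(doc.lower().split()))
    return max(knowledge_base.values(), key=overlap)

def build_prompt(question):
    """Augment the user question with retrieved context before calling the LLM."""
    context = retrieve(question)
    return f"Use this context to answer.\nContext: {context}\nQuestion: {question}"

prompt = build_prompt("How many bags can each passenger check?")
print(prompt)
```

The augmented prompt, not the bare question, is what gets sent to the foundation model, which is why RAG answers stay grounded in the knowledge base.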
A company wants to collaborate with several research institutes to develop an AI model. The company needs standardized documentation of model version tracking and a record of model development.
Which solution meets these requirements?
- A . Track the model changes by using Git.
- B . Track the model changes by using Amazon Fraud Detector.
- C . Track the model changes by using Amazon SageMaker Model Cards.
- D . Track the model changes by using Amazon Comprehend.
C
Explanation:
Amazon SageMaker Model Cards provide standardized documentation for machine learning models, including intended uses, training details, evaluation results, and a record of model versions, giving collaborating teams a single shared record of model development. Git tracks source code rather than standardized model documentation, Amazon Fraud Detector is a fraud detection service, and Amazon Comprehend is a natural language processing service.
A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees need to take to respond to customer questions.
Which business objective should the company use to evaluate the effect of the LLM chatbot?
- A . Website engagement rate
- B . Average call duration
- C . Corporate social responsibility
- D . Regulatory compliance
B
Explanation:
The business objective to evaluate the effect of an LLM chatbot aimed at reducing the actions required by call center employees should be average call duration.
Average Call Duration:
This metric measures the time taken to handle a customer call or query. A successful LLM chatbot should reduce the call duration by efficiently providing answers, minimizing the need for human intervention.
By decreasing the average call duration, the company can improve call center efficiency, reduce costs, and enhance the user experience.
Why Option B is Correct:
Direct Impact: The objective aligns directly with the goal of reducing the number of actions call center employees must take.
Operational Efficiency: Reducing call duration is a clear indicator of the chatbot's effectiveness in assisting customers without human help.
Why Other Options are Incorrect:
Option A: Website engagement rate measures online interactions, not the effort required to handle customer calls.
Option C: Corporate social responsibility concerns ethical and social commitments, not call center efficiency.
Option D: Regulatory compliance tracks adherence to laws and standards, not how quickly customer questions are resolved.
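The metric itself is simple to compute from call logs. A minimal sketch with made-up call durations, comparing averages before and after a chatbot deployment:

```python
# Average call duration in minutes, before and after deploying the chatbot.
# The durations are made-up sample data for illustration only.
calls_before = [12.0, 9.5, 15.0, 11.5]
calls_after = [6.0, 7.5, 5.5, 9.0]

def average(durations):
    """Mean call duration in minutes."""
    return sum(durations) / len(durations)

reduction = average(calls_before) - average(calls_after)
print(f"average before: {average(calls_before):.1f} min")
print(f"average after:  {average(calls_after):.1f} min")
print(f"reduction:      {reduction:.1f} min")
```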
Which technique involves training AI models on labeled datasets to adapt the models to specific industry terminology and requirements?
- A . Data augmentation
- B . Fine-tuning
- C . Model quantization
- D . Continuous pre-training
B
Explanation:
Fine-tuning involves training a pre-trained AI model on a labeled dataset specific to a particular task or domain, adapting it to industry terminology and requirements. This process adjusts the model’s parameters to better fit the target use case, such as understanding specialized vocabulary or meeting domain-specific needs.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Fine-tuning allows you to adapt a pre-trained foundation model to your specific use case by training it on a labeled dataset. This technique is commonly used to customize models for industry-specific terminology, improving their accuracy for specialized tasks."
(Source: AWS Bedrock User Guide, Model Customization)
Detailed Explanation:
Option A: Data augmentation. Data augmentation involves generating synthetic data to expand a training dataset, typically for tasks like image or text generation. It does not specifically adapt models to industry terminology or requirements.
Option B: Fine-tuning. This is the correct answer. Fine-tuning trains a pre-trained model on a labeled dataset tailored to the target domain, enabling it to learn industry-specific terminology and requirements, as described in the question.
Option C: Model quantization. Model quantization reduces the precision of a model's weights to optimize it for deployment (e.g., on edge devices). It does not involve training on labeled datasets or adapting to industry terminology.
Option D: Continuous pre-training. Continuous pre-training extends the initial training of a model on a large, general dataset. While it can improve general performance, it is not specifically tailored to industry requirements using labeled datasets, unlike fine-tuning.
Reference: AWS Bedrock User Guide: Model Customization (https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html) AWS AI Practitioner Learning Path: Module on Model Training and Customization Amazon SageMaker Developer Guide: Fine-Tuning Models (https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html)
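Fine-tuning starts from a labeled dataset, which for model customization is commonly supplied as a JSONL file of prompt/completion pairs. A minimal sketch of building and validating such a file (the field names follow the widely documented prompt/completion format, but check your model provider's requirements; the airline examples are invented):

```python
import json

# Labeled examples pairing industry terminology (here, airline jargon)
# with the desired model output. Contents are illustrative only.
examples = [
    {"prompt": "What does fare class J mean?",
     "completion": "J is a full-fare business class booking code."},
    {"prompt": "What does MCT stand for?",
     "completion": "MCT is the minimum connecting time between flights."},
]

# Serialize one JSON object per line (JSONL), the usual fine-tuning format.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Validate: every line must parse and contain both required fields.
for line in jsonl.splitlines():
    record = json.loads(line)
    assert {"prompt", "completion"} <= record.keys()

print(f"{len(jsonl.splitlines())} training records ready")
```

Curating pairs like these is the key difference from pre-training: each record explicitly labels the desired behavior for domain-specific input.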
A company wants to use generative AI to increase developer productivity and software development.
The company wants to use Amazon Q Developer.
What can Amazon Q Developer do to help the company meet these requirements?
- A . Create software snippets, reference tracking, and open-source license tracking.
- B . Run an application without provisioning or managing servers.
- C . Enable voice commands for coding and providing natural language search.
- D . Convert audio files to text documents by using ML models.
A
Explanation:
Amazon Q Developer is a tool designed to assist developers in increasing productivity by generating code snippets, managing reference tracking, and handling open-source license tracking. These features help developers by automating parts of the software development process.
Option A (Correct): "Create software snippets, reference tracking, and open-source license tracking": This is the correct answer because these are key features that help developers streamline and automate tasks, thus improving productivity.
Option B: "Run an application without provisioning or managing servers" is incorrect as it refers to AWS Lambda or AWS Fargate, not Amazon Q Developer.
Option C: "Enable voice commands for coding and providing natural language search" is incorrect because this is not a function of Amazon Q Developer.
Option D: "Convert audio files to text documents by using ML models" is incorrect as this refers to Amazon Transcribe, not Amazon Q Developer.
AWS AI Practitioner
Reference: Amazon Q Developer Features: AWS documentation outlines how Amazon Q Developer supports developers by offering features that reduce manual effort and improve efficiency.
A company wants to keep its foundation model (FM) relevant by using the most recent data. The company wants to implement a model training strategy that includes regular updates to the FM.
Which solution meets these requirements?
- A . Batch learning
- B . Continuous pre-training
- C . Static training
- D . Latent training
B
Explanation:
To keep a foundation model (FM) relevant with the most recent data, the company needs a training strategy that supports regular updates. Continuous pre-training involves periodically updating a pre-trained model with new data to improve its performance and relevance over time, making it the best fit for this requirement.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"Continuous pre-training is a strategy where a pre-trained model is periodically updated with new data to keep it relevant and improve its performance. This approach is commonly used for foundation models to ensure they adapt to new trends and information." (Source: AWS AI Practitioner Learning Path, Module on Model Training Strategies)
Detailed Explanation:
Option A: Batch learning. Batch learning involves training a model on a fixed dataset in batches, but it does not inherently support regular updates with new data to keep the model relevant over time.
Option B: Continuous pre-training. This is the correct answer. Continuous pre-training updates the FM with recent data, ensuring it stays relevant by adapting to new trends and information.
Option C: Static training. Static training implies training a model once on a fixed dataset without updates, which does not meet the requirement for regular updates.
Option D: Latent training. Latent training is not a standard term in AWS or ML contexts. It may refer to latent space in models like VAEs, but it is not a strategy for regular model updates.
Reference: AWS AI Practitioner Learning Path: Module on Model Training Strategies Amazon Bedrock User Guide: Model Customization and Updates (https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html)
AWS Documentation: Machine Learning Training Strategies (https://aws.amazon.com/machine-learning/)
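The idea of folding new data into an existing model on a schedule can be illustrated conceptually. The "model" below is just a token-frequency table, a deliberately tiny stand-in for resuming pre-training of a foundation model on fresh, unlabeled text:

```python
from collections import Counter

# Conceptual sketch of continuous pre-training: periodically update an
# existing model with new raw data instead of retraining from scratch.
# A Counter of token frequencies stands in for real model weights.
model_stats = Counter()

def pretrain_update(model, new_corpus):
    """Fold a new batch of raw text into the model's statistics."""
    for document in new_corpus:
        model.update(document.lower().split())

# Initial pre-training on older data, then a scheduled update with recent data.
pretrain_update(model_stats, ["flights were delayed in 2023"])
pretrain_update(model_stats, ["new routes launched in 2025"])

print(model_stats.most_common(3))
```

The point is the update schedule: the model accumulates knowledge of new data without discarding what it learned earlier, which is what keeps an FM relevant.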
A company is training a foundation model (FM). The company wants to increase the accuracy of the model up to a specific acceptance level.
Which solution will meet these requirements?
- A . Decrease the batch size.
- B . Increase the epochs.
- C . Decrease the epochs.
- D . Increase the temperature parameter.
B
Explanation:
Increasing the number of epochs during model training allows the model to learn from the data over more iterations, potentially improving its accuracy up to a certain point. This is a common practice when attempting to reach a specific level of accuracy.
Option B (Correct): "Increase the epochs": This is the correct answer because increasing epochs allows the model to learn more from the data, which can lead to higher accuracy.
Option A: "Decrease the batch size" is incorrect as it mainly affects training speed and may lead to overfitting but does not directly relate to achieving a specific accuracy level.
Option C: "Decrease the epochs" is incorrect as it would reduce the training time, possibly preventing the model from reaching the desired accuracy.
Option D: "Increase the temperature parameter" is incorrect because temperature affects the randomness of predictions, not model accuracy.
AWS AI Practitioner
Reference: Model Training Best Practices on AWS: AWS suggests adjusting training parameters, like the number of epochs, to improve model performance.
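The effect of epochs on accuracy can be seen in a toy training loop. The sketch below fits y = 2x with gradient descent; the learning rate and data are arbitrary illustrative choices, and as in real training, more epochs move the learned parameter closer to the target, up to a point:

```python
# Toy illustration of epochs vs. accuracy: fit y = 2x with gradient descent.
# More epochs let the learned weight approach the true value of 2.0.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

def train(epochs, lr=0.02):
    """Return the learned weight after the given number of epochs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # derivative of squared error w.r.t. w
            w -= lr * grad
    return w

for epochs in (1, 5, 50):
    print(f"epochs={epochs:3d}  learned weight={train(epochs):.4f}")
```

In practice the gains flatten out and too many epochs risk overfitting, which is why training runs until a specific acceptance level rather than indefinitely.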
Which AWS service or feature can help an AI development team quickly deploy and consume a foundation model (FM) within the team’s VPC?
- A . Amazon Personalize
- B . Amazon SageMaker JumpStart
- C . PartyRock, an Amazon Bedrock Playground
- D . Amazon SageMaker endpoints
B
Explanation:
Amazon SageMaker JumpStart is the correct service for quickly deploying and consuming a foundation model (FM) within a team’s VPC.
Amazon SageMaker JumpStart:
Provides access to a wide range of pre-trained models and solutions that can be easily deployed and consumed within a VPC.
Designed to simplify and accelerate the deployment of machine learning models, including foundation models.
Why Option B is Correct:
Rapid Deployment: JumpStart allows for quick deployment of models with minimal configuration, directly within a secure VPC environment.
Ease of Use: Provides a user-friendly interface to select and deploy models, reducing the time to value.
Why Other Options are Incorrect:
Option A: Amazon Personalize is a managed service for building recommendation systems, not for deploying foundation models.
Option C: PartyRock is a public playground for experimenting with Amazon Bedrock and does not deploy models inside a VPC.
Option D: Amazon SageMaker endpoints host models for inference, but they do not by themselves provide the catalog of pre-trained foundation models for quick selection and deployment that JumpStart offers.
A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals.
Which model evaluation strategy meets these requirements?
- A . Bilingual Evaluation Understudy (BLEU)
- B . Root mean squared error (RMSE)
- C . Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
- D . F1 score
A
Explanation:
BLEU (Bilingual Evaluation Understudy) is a metric used to evaluate the accuracy of machine-generated translations by comparing them against reference translations. It is commonly used for translation tasks to measure how close the generated output is to professional human translations.
Option A (Correct): "Bilingual Evaluation Understudy (BLEU)": This is the correct answer because BLEU is specifically designed to evaluate the quality of translations, making it suitable for the company’s use case.
Option B: "Root mean squared error (RMSE)" is incorrect because RMSE is used for regression tasks to measure prediction errors, not translation quality.
Option C: "Recall-Oriented Understudy for Gisting Evaluation (ROUGE)" is incorrect as it is used to evaluate text summarization, not translation.
Option D: "F1 score" is incorrect because it is typically used for classification tasks, not for evaluating translation accuracy.
AWS AI Practitioner
Reference: Model Evaluation Metrics on AWS: AWS supports various metrics like BLEU for specific use cases, such as evaluating machine translation models.
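BLEU's core idea, n-gram overlap between a candidate translation and a reference, is easy to sketch. The version below uses only unigrams and bigrams with a brevity penalty; standard BLEU uses up to 4-grams and usually smoothing, so treat this as a teaching-sized approximation:

```python
import math
from collections import Counter

# Simplified BLEU sketch: modified n-gram precision (n = 1, 2) combined with
# a brevity penalty. Standard BLEU uses up to 4-grams and smoothing.

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if 0 in precisions:
        return 0.0
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))
print(round(bleu("a dog ran away", "the cat sat on the mat"), 3))
```

A perfect match scores 1.0 and a translation with no overlapping n-grams scores 0.0, which is what makes BLEU useful for ranking machine translations against human references.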