Practice Free AIF-C01 Exam Online Questions
A company is building an application that needs to generate synthetic data that is based on existing data.
Which type of model can the company use to meet this requirement?
- A . Generative adversarial network (GAN)
- B . XGBoost
- C . Residual neural network
- D . WaveNet
A
Explanation:
Generative adversarial networks (GANs) are a type of deep learning model used for generating synthetic data based on existing datasets. GANs consist of two neural networks (a generator and a discriminator) that work together to create realistic data.
Option A (Correct): "Generative adversarial network (GAN)": This is the correct answer because GANs are specifically designed for generating synthetic data that closely resembles the real data they are trained on.
Option B: "XGBoost" is a gradient boosting algorithm for classification and regression tasks, not for generating synthetic data.
Option C: "Residual neural network" is primarily used for improving the performance of deep networks, not for generating synthetic data.
Option D: "WaveNet" is a model architecture designed for generating raw audio waveforms, not synthetic data in general.
AWS AI Practitioner
Reference: GANs on AWS for Synthetic Data Generation: AWS supports the use of GANs for creating synthetic datasets, which can be crucial for applications like training machine learning models in environments where real data is scarce or sensitive.
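To make the generator/discriminator relationship concrete, here is a minimal, self-contained NumPy sketch of the two adversarial objectives. The toy 1-D data, linear "networks", and weights are invented for illustration only; a real GAN uses deep networks and iterative training.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Toy linear generator: maps latent noise to a sample
    return z @ w

def discriminator(x, v):
    # Toy logistic "real vs. fake" classifier
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# "Real" data drawn from N(3, 1); latent noise drawn from N(0, 1)
real = rng.normal(3.0, 1.0, size=(64, 1))
z = rng.normal(size=(64, 1))

w = np.ones((1, 1))  # generator weights (untrained)
v = np.ones((1, 1))  # discriminator weights (untrained)

fake = generator(z, w)
d_real = discriminator(real, v)
d_fake = discriminator(fake, v)

# Adversarial objectives: D maximizes log d_real + log(1 - d_fake),
# while G minimizes log(1 - d_fake), i.e., tries to fool D.
d_loss = -np.mean(np.log(d_real + 1e-9) + np.log(1 - d_fake + 1e-9))
g_loss = -np.mean(np.log(d_fake + 1e-9))
print(f"D loss: {d_loss:.3f}, G loss: {g_loss:.3f}")
```

Training alternates gradient updates on these two losses until the generator's samples are statistically indistinguishable from the real data.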
A company is implementing intelligent agents to provide conversational search experiences for its customers. The company needs a database service that will support storage and queries of embeddings from a generative AI model as vectors in the database.
Which AWS service will meet these requirements?
- A . Amazon Athena
- B . Amazon Aurora PostgreSQL
- C . Amazon Redshift
- D . Amazon EMR
B
Explanation:
The requirement is to identify an AWS database service that supports the storage and querying of embeddings (from a generative AI model) as vectors. Embeddings are typically high-dimensional numerical representations of data (e.g., text, images) used in AI applications like conversational search. The database must support vector storage and efficient vector similarity searches.
Evaluation of each option:
Option A: Amazon Athena is a serverless query service for analyzing data in Amazon S3 with SQL; it is not a database and does not provide vector storage or similarity search.
Option B (Correct): Amazon Aurora PostgreSQL-Compatible Edition supports the pgvector extension, which adds a vector data type, vector indexes, and similarity-search operators, making it suitable for storing and querying embeddings from a generative AI model.
Option C: Amazon Redshift is a data warehouse optimized for analytical queries over structured data, not for vector similarity search over embeddings.
Option D: Amazon EMR is a managed big data platform for frameworks such as Apache Spark and Hadoop; it is a processing service, not a database.
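Conceptually, the query that a vector-enabled database runs is a nearest-neighbor search over embeddings. A minimal sketch in plain NumPy (the 3-dimensional embeddings here are made up for illustration; real model embeddings have hundreds or thousands of dimensions):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity: dot product of the unit-normalized vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy document embeddings (invented values)
docs = {
    "return policy":  np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.2]),
    "gift cards":     np.array([0.0, 0.2, 0.9]),
}
query = np.array([0.8, 0.2, 0.1])  # embedding of the user's question

# Nearest neighbor by cosine similarity
best = max(docs, key=lambda k: cosine_sim(query, docs[k]))
print(best)  # → return policy
```

In Aurora PostgreSQL with pgvector, the equivalent query is roughly `SELECT doc FROM docs ORDER BY embedding <=> :query LIMIT 1;`, where `<=>` is pgvector's cosine-distance operator.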
Which technique should be used to fine-tune a pre-trained large language model (LLM) to improve its performance on a specific task?
- A . Zero-shot learning
- B . Few-shot learning
- C . Transfer learning
- D . Unsupervised learning
C
Explanation:
Fine-tuning a pre-trained model on data for a specific task is an application of transfer learning: the knowledge the model acquired during pre-training is transferred and adapted to the new task. Zero-shot and few-shot learning are prompting techniques that do not update the model's weights, and unsupervised learning describes training without labels rather than a technique for task-specific fine-tuning.
A company is using a pre-trained large language model (LLM) to extract information from documents. The company noticed that a newer LLM from a different provider is available on Amazon Bedrock. The company wants to transition to the new LLM on Amazon Bedrock.
What does the company need to do to transition to the new LLM?
- A . Create a new labeled dataset
- B . Perform feature engineering.
- C . Adjust the prompt template.
- D . Fine-tune the LLM.
C
Explanation:
Transitioning to a new large language model (LLM) on Amazon Bedrock typically involves minimal changes when the new model is pre-trained and available as a foundation model. Since the company is moving from one pre-trained LLM to another, the primary task is to ensure compatibility between the new model’s input requirements and the existing application. Adjusting the prompt template is often necessary because different LLMs may have varying prompt formats, tokenization methods, or response behaviors, even for similar tasks like document extraction.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"When switching between foundation models in Amazon Bedrock, you may need to adjust the prompt template to align with the new model’s expected input format and optimize its performance for your use case. Prompt engineering is critical to ensure the model understands the task and generates accurate outputs."
(Source: AWS Bedrock User Guide, Prompt Engineering for Foundation Models)
Detailed evaluation of the options:
Option A: Create a new labeled dataset. Creating a new labeled dataset is unnecessary when transitioning to a new pre-trained LLM, as pre-trained models are already trained on large datasets. This option would only be relevant if the company were training a custom model from scratch, which is not the case here.
Option B: Perform feature engineering. Feature engineering is typically associated with traditional machine learning models, not pre-trained LLMs. LLMs process raw text inputs, and transitioning to a new LLM does not require restructuring input features. This option is incorrect.
Option C: Adjust the prompt template. This is the correct approach. Different LLMs may interpret prompts differently due to variations in training data, tokenization, or model architecture. Adjusting the prompt template ensures the new LLM understands the task (e.g., document extraction) and produces the desired output format. AWS documentation emphasizes prompt engineering as a key step when adopting a new foundation model.
Option D: Fine-tune the LLM. Fine-tuning is not required for transitioning to a new pre-trained LLM unless the company needs to customize the model for a highly specific task. Since the question does not indicate a need for customization beyond document extraction (a common LLM capability), fine-tuning is unnecessary.
Reference: AWS Bedrock User Guide: Prompt Engineering for Foundation Models (https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-engineering.html)
AWS AI Practitioner Learning Path: Module on Working with Foundation Models in Amazon Bedrock
Amazon Bedrock Developer Guide: Transitioning Between Models (https://docs.aws.amazon.com/bedrock/latest/devguide/)
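To illustrate why adjusting the prompt template is usually all that is needed, here is a small sketch of swapping templates when changing models. Both template strings are hypothetical; consult each provider's model documentation on Amazon Bedrock for the actual expected format.

```python
# Hypothetical prompt templates for two different providers.
# Exact formats vary by model; these are illustrative only.
OLD_TEMPLATE = "Extract the invoice number from this document:\n{document}"
NEW_TEMPLATE = (
    "Human: Extract the invoice number from the document below.\n"
    "<document>\n{document}\n</document>\n\n"
    "Assistant:"
)

def build_prompt(template: str, document: str) -> str:
    # The application logic stays the same; only the template changes
    return template.format(document=document)

doc = "Invoice #12345 dated 2024-01-15"
print(build_prompt(NEW_TEMPLATE, doc))
```

The extraction task, the documents, and the surrounding application are untouched; only the string handed to the model is reshaped for the new LLM's input conventions.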
A company wants to use a pre-trained generative AI model to generate content for its marketing campaigns. The company needs to ensure that the generated content aligns with the company’s brand voice and messaging requirements.
Which solution meets these requirements?
- A . Optimize the model’s architecture and hyperparameters to improve the model’s overall performance.
- B . Increase the model’s complexity by adding more layers to the model’s architecture.
- C . Create effective prompts that provide clear instructions and context to guide the model’s generation.
- D . Select a large, diverse dataset to pre-train a new generative model.
C
Explanation:
Creating effective prompts is the best solution to ensure that the content generated by a pre-trained generative AI model aligns with the company’s brand voice and messaging requirements.
Effective Prompt Engineering:
Involves crafting prompts that clearly outline the desired tone, style, and content guidelines for the model.
By providing explicit instructions in the prompts, the company can guide the AI to generate content that matches the brand’s voice and messaging.
Why Option C is Correct:
Guides Model Output: Ensures the generated content adheres to specific brand guidelines by shaping the model’s response through the prompt.
Flexible and Cost-effective: Does not require retraining or modifying the model, which is more resource-efficient.
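As a sketch of what "clear instructions and context" might look like in practice (the brand guidelines, helper function, and product name below are invented for illustration):

```python
# Hypothetical brand guidelines embedded directly in the prompt
BRAND_GUIDELINES = (
    "Write in a friendly, upbeat tone. Use short sentences. "
    "Always call the product 'AcmeWidget', never 'the widget'."
)

def marketing_prompt(guidelines: str, task: str) -> str:
    # Prepend the brand-voice instructions to every generation request
    return f"{guidelines}\n\nTask: {task}"

print(marketing_prompt(BRAND_GUIDELINES, "Write a tagline for AcmeWidget."))
```

Because the guidelines travel with every request, the same pre-trained model produces on-brand output without any retraining.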
Why Other Options are Incorrect:
Option A: Optimizing the model's architecture and hyperparameters improves general performance but provides no mechanism to steer output toward a specific brand voice.
Option B: Adding more layers increases model complexity and cost without aligning the generated content with brand guidelines.
Option D: Pre-training a new generative model on a large, diverse dataset is extremely costly and unnecessary when prompting a pre-trained model can achieve the goal.
An e-commerce company wants to build a solution to determine customer sentiments based on written customer reviews of products.
Which AWS services meet these requirements? (Select TWO.)
- A . Amazon Lex
- B . Amazon Comprehend
- C . Amazon Polly
- D . Amazon Bedrock
- E . Amazon Rekognition
B,D
Explanation:
To determine customer sentiments based on written customer reviews, the company can use Amazon Comprehend and Amazon Bedrock.
Amazon Comprehend:
A natural language processing (NLP) service that uses machine learning to uncover insights and relationships in text.
Can analyze customer reviews to detect sentiments (positive, negative, neutral, or mixed) automatically.
Amazon Bedrock:
Provides access to foundational models (FMs) from multiple AI companies for tasks such as text generation, summarization, and sentiment analysis.
The company can use a pre-trained sentiment analysis model available on Amazon Bedrock for processing customer reviews.
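For illustration, here is a minimal sketch of using Amazon Comprehend for sentiment analysis. The actual boto3 call is shown as a comment because it requires AWS credentials; the sample response below is hand-written to match the documented shape of `detect_sentiment` output.

```python
# The live call (requires AWS credentials) would be:
#   import boto3
#   comprehend = boto3.client("comprehend")
#   resp = comprehend.detect_sentiment(Text=review, LanguageCode="en")
# A hand-written response in Comprehend's documented format:
sample_response = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {
        "Positive": 0.97, "Negative": 0.01, "Neutral": 0.01, "Mixed": 0.01,
    },
}

def summarize(resp: dict) -> str:
    # Pick the overall label and report its confidence score
    label = resp["Sentiment"]
    score = resp["SentimentScore"][label.capitalize()]
    return f"{label} ({score:.0%})"

print(summarize(sample_response))  # → POSITIVE (97%)
```

The `Sentiment` field is one of POSITIVE, NEGATIVE, NEUTRAL, or MIXED, with per-label confidence scores in `SentimentScore`.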
Why Other Options are Incorrect:
Option A: Amazon Lex is for building conversational chatbots and voice interfaces, not for analyzing sentiment in written text.
Option C: Amazon Polly converts text to lifelike speech (text-to-speech); it does not analyze text.
Option E: Amazon Rekognition analyzes images and videos, not written customer reviews.