Practice Free AIF-C01 Exam Online Questions
A company wants to use AWS services to build an AI assistant for internal company use. The AI assistant’s responses must reference internal documentation. The company stores internal documentation as PDF, CSV, and image files.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Use Amazon SageMaker AI to fine-tune a model.
- B . Use Amazon Bedrock Knowledge Bases to create a knowledge base.
- C . Configure a guardrail in Amazon Bedrock Guardrails.
- D . Select a pre-trained model from Amazon SageMaker JumpStart.
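Option B corresponds to Amazon Bedrock Knowledge Bases, which handle document ingestion, chunking, embedding, and retrieval as a managed service. Below is a minimal sketch of querying an existing knowledge base with the RetrieveAndGenerate API; the knowledge base ID and model ARN are placeholders, and the model named is only an example.

```python
import boto3

# Placeholders: replace with your knowledge base ID and a model ARN you have access to.
KB_ID = "EXAMPLEKBID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Ask a question; the service retrieves relevant chunks from the indexed
# internal documents and grounds the model's answer in them.
response = client.retrieve_and_generate(
    input={"text": "What is our remote-work expense policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

print(response["output"]["text"])
```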
A company is creating a model to label credit card transactions. The company has a large volume of sample transaction data to train the model. Most of the transaction data is unlabeled. The data does not contain confidential information. The company needs to obtain labeled sample data to fine-tune the model.
- A . Run batch inference jobs on the unlabeled data
- B . Run an Amazon SageMaker AI training job that uses the PyTorch Distributed library to label data
- C . Use an Amazon SageMaker Ground Truth labeling job with Amazon Mechanical Turk workers
- D . Use an optical character recognition model trained on labeled samples to label unlabeled samples
- E . Run an Amazon SageMaker AI labeling job
C
Explanation:
Amazon SageMaker Ground Truth lets you create data labeling jobs and can integrate with Amazon Mechanical Turk (a distributed human workforce) to label large unlabeled datasets.
A (batch inference) applies an already-trained model to data to generate predictions; it does not produce labeled training data.
B (PyTorch Distributed) is for distributed training, not labeling.
D (OCR) extracts text from images; it does not label transaction data.
E is incorrect; SageMaker does not offer a generic "AI labeling job." Ground Truth is the dedicated labeling service.
Reference: AWS Documentation – SageMaker Ground Truth
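For illustration, here is a hedged sketch of starting a Ground Truth labeling job that routes tasks to the public (Amazon Mechanical Turk) workforce through the CreateLabelingJob API. The S3 URIs, IAM role, public workteam ARN, and pre-/post-annotation Lambda ARNs are placeholders; the built-in Lambda ARNs and the public workteam ARN are Region-specific and should be taken from the Ground Truth documentation before running this.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

response = sm.create_labeling_job(
    LabelingJobName="transaction-labeling",
    LabelAttributeName="transaction-label",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": "s3://my-bucket/manifests/transactions.manifest"}
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/labeled/"},
    RoleArn="arn:aws:iam::123456789012:role/GroundTruthExecutionRole",
    # Points the list of label choices at a JSON file in S3 (placeholder URI).
    LabelCategoryConfigS3Uri="s3://my-bucket/config/label-categories.json",
    HumanTaskConfig={
        # Placeholder for the Region-specific public (Mechanical Turk) workteam ARN.
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:394669845002:workteam/public-crowd/default",
        "UiConfig": {"UiTemplateS3Uri": "s3://my-bucket/templates/label-ui.liquid"},
        # Placeholders for the built-in pre-/post-labeling Lambda ARNs for the task type.
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:111111111111:function:PRE-TextMultiClass",
        "TaskTitle": "Label credit card transactions",
        "TaskDescription": "Choose the category that best describes each transaction.",
        "NumberOfHumanWorkersPerDataObject": 3,
        "TaskTimeLimitInSeconds": 300,
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:111111111111:function:ACS-TextMultiClass"
        },
    },
)
print(response["LabelingJobArn"])
```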
A company is using an Amazon Nova Canvas model to generate images. The model generates images successfully.
The company needs to prevent the model from including specific items in the generated images.
Which solution will meet this requirement?
- A . Use a higher temperature value.
- B . Use a more detailed prompt.
- C . Use a negative prompt.
- D . Use another foundation model (FM).
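Option C refers to negative prompting: Nova Canvas accepts a negativeText field that lists items the model should leave out of generated images. Below is a minimal sketch, assuming the image-generation request schema used by Amazon's image models; the exact field names should be confirmed against the current Nova Canvas model reference.

```python
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body assumed to follow the Nova/Titan image-generation schema;
# "negativeText" tells the model what to exclude from the image.
body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "A modern open-plan office, photorealistic",
        "negativeText": "people, text, logos",  # items to keep out of the image
    },
    "imageGenerationConfig": {"numberOfImages": 1, "height": 1024, "width": 1024},
}

response = bedrock.invoke_model(
    modelId="amazon.nova-canvas-v1:0",
    body=json.dumps(body),
)
payload = json.loads(response["body"].read())

# The response carries base64-encoded images; decode and save the first one.
with open("office.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```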
A company wants to improve the accuracy of the responses from a generative AI application. The application uses a foundation model (FM) on Amazon Bedrock.
Which solution meets these requirements MOST cost-effectively?
- A . Fine-tune the FM.
- B . Retrain the FM.
- C . Train a new FM.
- D . Use prompt engineering.
D
Explanation:
The company wants to improve the accuracy of a generative AI application using a foundation model (FM) on Amazon Bedrock in the most cost-effective way. Prompt engineering involves optimizing the input prompts to guide the FM to produce more accurate responses without modifying the model itself. This approach is cost-effective because it does not require additional computational resources or training, unlike fine-tuning or retraining.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Prompt engineering is a cost-effective technique to improve the performance of foundation models. By crafting precise and context-rich prompts, users can guide the model to generate more accurate and relevant responses without the need for fine-tuning or retraining."
(Source: AWS Bedrock User Guide, Prompt Engineering for Foundation Models)
Detailed Explanation:
Option A: Fine-tune the FM. Fine-tuning involves retraining the FM on a custom dataset, which requires computational resources, time, and cost (e.g., for Amazon Bedrock fine-tuning jobs). It is not the most cost-effective solution.
Option B: Retrain the FM. Retraining an FM from scratch is highly resource-intensive and expensive, as it requires large datasets and significant compute power. This is not cost-effective.
Option C: Train a new FM. Training a new FM is the most expensive option, as it involves building a model from the ground up, requiring extensive data, compute resources, and expertise. This is not cost-effective.
Option D: Use prompt engineering. This is the correct answer. Prompt engineering adjusts the input prompts to improve the FM’s responses without incurring additional compute costs, making it the most cost-effective solution for improving accuracy on Amazon Bedrock.
Reference: AWS Bedrock User Guide: Prompt Engineering for Foundation Models (https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-engineering.html)
AWS AI Practitioner Learning Path: Module on Generative AI Optimization
Amazon Bedrock Developer Guide: Cost Optimization for Generative AI (https://aws.amazon.com/bedrock/)
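As a concrete illustration of prompt engineering, the sketch below sends a vague prompt and an engineered prompt (role, context, and output format added) to the same model through the Bedrock Converse API. The model ID is only an example; any Bedrock text model the account can access will do.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # example model ID

# A vague prompt versus an engineered prompt that adds role, context, and output format.
vague_prompt = "Tell me about our refund policy."
engineered_prompt = (
    "You are a customer-support assistant for an online retailer.\n"
    "Using only the policy text below, answer the customer's question in 3 bullet points.\n\n"
    "Policy: Refunds are issued within 14 days of purchase for unused items.\n\n"
    "Question: Can I return a product I bought 10 days ago?"
)

for prompt in (vague_prompt, engineered_prompt):
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 300, "temperature": 0.2},
    )
    print(response["output"]["message"]["content"][0]["text"], "\n---")
```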
A company wants to assess internet quality in remote areas of the world. The company needs to collect internet speed data and store the data in Amazon RDS. The company will analyze internet speed variation throughout each day. The company wants to create an AI model to predict potential internet disruptions.
Which type of data should the company collect for this task?
- A . Tabular data
- B . Text data
- C . Time series data
- D . Audio data
A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals.
Which model evaluation strategy meets these requirements?
- A . Bilingual Evaluation Understudy (BLEU)
- B . Root mean squared error (RMSE)
- C . Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
- D . F1 score
A
Explanation:
BLEU (Bilingual Evaluation Understudy) is a metric used to evaluate the accuracy of machine-generated translations by comparing them against reference translations. It is commonly used for translation tasks to measure how close the generated output is to professional human translations.
Option A (Correct): "Bilingual Evaluation Understudy (BLEU)": This is the correct answer because BLEU is specifically designed to evaluate the quality of translations, making it suitable for the company’s use case.
Option B: "Root mean squared error (RMSE)" is incorrect because RMSE is used for regression tasks
to measure prediction errors, not translation quality.
Option C: "Recall-Oriented Understudy for Gisting Evaluation (ROUGE)" is incorrect as it is used to evaluate text summarization, not translation.
Option D: "F1 score" is incorrect because it is typically used for classification tasks, not for evaluating translation accuracy.
AWS AI Practitioner Reference: Model Evaluation Metrics on AWS. AWS supports various metrics, such as BLEU, for specific use cases like evaluating machine translation models.
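A quick way to see BLEU in practice is to score a candidate translation against a reference translation with NLTK; the sentences below are toy examples invented for illustration.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Reference: a human translation; candidate: the LLM's output (both pre-tokenized).
reference = ["das", "handbuch", "beschreibt", "den", "sicherheitsprozess"]
candidate = ["das", "handbuch", "erklärt", "den", "sicherheitsprozess"]

# BLEU measures n-gram overlap between the candidate and the reference(s);
# smoothing avoids a zero score when a higher-order n-gram has no match.
score = sentence_bleu([reference], candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```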
A company is deploying AI/ML models by using AWS services. The company wants to offer transparency into the models’ decision-making processes and provide explanations for the model outputs.
- A . Amazon SageMaker Model Cards
- B . Amazon Rekognition
- C . Amazon Comprehend
- D . Amazon Lex
A
Explanation:
Amazon SageMaker Model Cards document model details, performance, intended use cases, and risk considerations. They support responsible AI by improving transparency and governance.
Rekognition is computer vision.
Comprehend is NLP for entity/sentiment.
Lex is conversational AI.
Reference: AWS Documentation – SageMaker Model Cards
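Here is a hedged sketch of creating a model card programmatically with the SageMaker CreateModelCard API. The Content field must be a JSON document that follows the model card schema; the keys shown below are only an illustrative subset, not the full schema.

```python
import json
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Illustrative subset of model card content documenting purpose and risks.
card_content = {
    "model_overview": {
        "model_description": "Gradient-boosted model that scores loan applications.",
    },
    "intended_uses": {
        "purpose_of_model": "Assist analysts in reviewing loan applications.",
        "risk_rating": "Medium",
    },
}

response = sm.create_model_card(
    ModelCardName="loan-approval-model-card",
    Content=json.dumps(card_content),
    ModelCardStatus="Draft",
)
print(response["ModelCardArn"])
```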
A financial institution is building an AI solution to make loan approval decisions by using a foundation model (FM). For security and audit purposes, the company needs the AI solution’s decisions to be explainable.
Which factor relates to the explainability of the AI solution’s decisions?
- A . Model complexity
- B . Training time
- C . Number of hyperparameters
- D . Deployment time
A
Explanation:
The financial institution needs an AI solution for loan approval decisions to be explainable for security and audit purposes. Explainability refers to the ability to understand and interpret how a model makes decisions. Model complexity directly impacts explainability: simpler models (e.g., logistic regression) are more interpretable, while complex models (e.g., deep neural networks) are harder to explain, often behaving like "black boxes."
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"Model complexity affects the explainability of AI solutions. Simpler models, such as linear regression, are inherently more interpretable, while complex models, such as deep neural networks, may require additional tools like SageMaker Clarify to provide insights into their decision-making processes."
(Source: Amazon SageMaker Developer Guide, Explainability with SageMaker Clarify)
Detailed Explanation:
Option A: Model complexity. This is the correct answer. The complexity of the model directly influences how easily its decisions can be explained, a critical factor for audit and security purposes in loan approvals.
Option B: Training time. Training time refers to how long it takes to train the model, which does not directly impact the explainability of its decisions.
Option C: Number of hyperparameters. While hyperparameters affect model performance, they do not directly relate to explainability. A model with many hyperparameters might still be explainable if it is a simple model.
Option D: Deployment time. Deployment time refers to the time taken to deploy the model to production, which is unrelated to the explainability of its decisions.
Reference: Amazon SageMaker Developer Guide: Explainability with SageMaker Clarify (https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-explainability.html)
AWS AI Practitioner Learning Path: Module on Responsible AI and Explainability
AWS Documentation: Explainable AI (https://aws.amazon.com/machine-learning/responsible-ai/)
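To make the complexity/explainability trade-off concrete, the toy sketch below fits a simple logistic regression on made-up loan features; its coefficients can be read directly as the explanation of each decision, whereas a deep network would need a tool such as SageMaker Clarify to produce comparable insight. The data and feature names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy loan data: [income_in_thousands, debt_to_income_ratio]; label 1 = approved.
X = np.array([[85, 0.20], [40, 0.55], [120, 0.15], [30, 0.60], [95, 0.30], [25, 0.70]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# With a simple model the decision is directly explainable: each coefficient
# shows how a feature pushes the approval probability up or down.
for name, coef in zip(["income", "debt_to_income"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
print("intercept:", round(float(model.intercept_[0]), 3))
```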
A company wants to set up private access to Amazon Bedrock APIs from the company’s AWS account.
The company also wants to protect its data from internet exposure.
- A . Use Amazon CloudFront to restrict access to the company’s private content
- B . Use AWS Glue to set up data encryption across the company’s data catalog
- C . Use AWS Lake Formation to manage centralized data governance and cross-account data sharing
- D . Use AWS PrivateLink to configure a private connection between the company’s VPC and Amazon Bedrock
D
Explanation:
AWS PrivateLink enables private connectivity between your VPC and supported AWS services (like Amazon Bedrock) without sending traffic over the public internet.
CloudFront (A) is for CDN and content delivery, not private service connections.
AWS Glue (B) is for ETL/data catalog, not networking.
Lake Formation (C) provides governance for data lakes, not API network isolation.
Reference: AWS Documentation – Access Amazon Bedrock with PrivateLink
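A minimal sketch of creating the interface VPC endpoint (AWS PrivateLink) for the Bedrock runtime API with boto3 follows. The VPC, subnet, and security group IDs are placeholders, and the service name should be confirmed for your Region (for example with describe_vpc_endpoint_services).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface VPC endpoint (PrivateLink) so Bedrock API calls stay on the AWS network.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",  # confirm for your Region
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    PrivateDnsEnabled=True,                     # resolve the Bedrock DNS name to the endpoint
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```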
A hospital wants to use a generative AI solution with speech-to-text functionality to help improve employee skills in dictating clinical notes.
- A . Amazon Q Developer
- B . Amazon Polly
- C . Amazon Rekognition
- D . AWS HealthScribe
D
Explanation:
AWS HealthScribe provides speech-to-text and medical documentation generation, specifically designed for healthcare applications.
Amazon Polly is text-to-speech, not speech-to-text.
Amazon Rekognition is computer vision.
Amazon Q Developer is a generative AI assistant for developers, not healthcare.
Reference: AWS Documentation – AWS HealthScribe
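For illustration, here is a hedged sketch of starting a HealthScribe job through the Amazon Transcribe StartMedicalScribeJob API. The bucket, role ARN, and audio location are placeholders, and parameter details should be checked against the current API reference.

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# HealthScribe transcribes the clinical audio and generates draft note content.
response = transcribe.start_medical_scribe_job(
    MedicalScribeJobName="clinical-note-dictation-demo",
    Media={"MediaFileUri": "s3://my-bucket/audio/visit-001.wav"},  # placeholder audio file
    OutputBucketName="my-bucket",                                  # placeholder output bucket
    DataAccessRoleArn="arn:aws:iam::123456789012:role/HealthScribeDataAccessRole",
    Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},   # clinician and patient
)
print(response["MedicalScribeJob"]["MedicalScribeJobStatus"])
```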
