Practice Free AIF-C01 Exam Online Questions
Question #81
What does an F1 score measure in the context of foundation model (FM) performance?
- A . Model precision and recall
- B . Model speed in generating responses
- C . Financial cost of operating the model
- D . Energy efficiency of the model’s computations
Correct Answer: A
A
Explanation:
The F1 score is a metric used to evaluate the performance of a classification model by considering both precision and recall. Precision measures the accuracy of positive predictions (i.e., the proportion of true positive predictions among all positive predictions made by the model), while recall measures the model’s ability to identify all relevant positive instances (i.e., the proportion of true positive predictions among all actual positive instances). The F1 score is the harmonic mean of precision and recall, providing a single metric that balances both concerns. This is particularly useful when dealing with imbalanced datasets or when the cost of false positives and false negatives is significant.
Options B, C, and D pertain to other aspects of model performance but are not related to the F1 score.
Reference: AWS Certified AI Practitioner Exam Guide
A
Explanation:
The F1 score is a metric used to evaluate the performance of a classification model by considering both precision and recall. Precision measures the accuracy of positive predictions (i.e., the proportion of true positive predictions among all positive predictions made by the model), while recall measures the model’s ability to identify all relevant positive instances (i.e., the proportion of true positive predictions among all actual positive instances). The F1 score is the harmonic mean of precision and recall, providing a single metric that balances both concerns. This is particularly useful when dealing with imbalanced datasets or when the cost of false positives and false negatives is significant.
Options B, C, and D pertain to other aspects of model performance but are not related to the F1 score.
Reference: AWS Certified AI Practitioner Exam Guide
Question #82
A company wants to build and deploy ML models on AWS without writing any code.
Which AWS service or feature meets these requirements?
- A . Amazon SageMaker Canvas
- B . Amazon Rekognition
- C . AWS DeepRacer
- D . Amazon Comprehend
Correct Answer: A
A
Explanation:
Amazon SageMaker Canvas is a visual, no-code tool for building and deploying ML models. According to the official SageMaker Canvas documentation:
“SageMaker Canvas provides a visual point-and-click interface that allows business analysts to generate accurate ML predictions without writing any code.”
A
Explanation:
Amazon SageMaker Canvas is a visual, no-code tool for building and deploying ML models. According to the official SageMaker Canvas documentation:
“SageMaker Canvas provides a visual point-and-click interface that allows business analysts to generate accurate ML predictions without writing any code.”
Question #83
Which option is a characteristic of AI governance frameworks for building trust and deploying human-centered AI technologies?
- A . Expanding initiatives across business units to create long-term business value
- B . Ensuring alignment with business standards, revenue goals, and stakeholder expectations
- C . Overcoming challenges to drive business transformation and growth
- D . Developing policies and guidelines for data, transparency, responsible AI, and compliance
Correct Answer: D
D
Explanation:
AI governance frameworks aim to build trust and deploy human-centered AI technologies by establishing guidelines and policies for data usage, transparency, responsible AI practices, and compliance with regulations. This ensures ethical and accountable AI development and deployment.
Exact Extract from AWS AI Documents:
From the AWS Documentation on Responsible AI:
"AI governance frameworks establish trust in AI technologies by developing policies and guidelines for data management, transparency, responsible AI practices, and compliance with regulatory requirements, ensuring human-centered and ethical AI deployment."
(Source: AWS Documentation, Responsible AI Governance)
Detailed
Option A: Expanding initiatives across business units to create long-term business value While expanding initiatives can drive value, it is not a core characteristic of AI governance frameworks focused on trust and human-centered AI.
Option B: Ensuring alignment with business standards, revenue goals, and stakeholder expectations Alignment with business goals is important but not specific to AI governance frameworks for building trust and ethical AI deployment.
Option C: Overcoming challenges to drive business transformation and growth Overcoming challenges is a general business goal, not a defining characteristic of AI governance frameworks.
Option D: Developing policies and guidelines for data, transparency, responsible AI, and compliance This is the correct answer. This option directly describes the core components of AI governance frameworks that ensure trust and ethical AI deployment.
Reference: AWS Documentation: Responsible AI Governance (https://aws.amazon.com/machine-learning/responsible-ai/)
AWS AI Practitioner Learning Path: Module on AI Governance
AWS Well-Architected Framework: Machine Learning Lens
(https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/)
D
Explanation:
AI governance frameworks aim to build trust and deploy human-centered AI technologies by establishing guidelines and policies for data usage, transparency, responsible AI practices, and compliance with regulations. This ensures ethical and accountable AI development and deployment.
Exact Extract from AWS AI Documents:
From the AWS Documentation on Responsible AI:
"AI governance frameworks establish trust in AI technologies by developing policies and guidelines for data management, transparency, responsible AI practices, and compliance with regulatory requirements, ensuring human-centered and ethical AI deployment."
(Source: AWS Documentation, Responsible AI Governance)
Detailed
Option A: Expanding initiatives across business units to create long-term business value While expanding initiatives can drive value, it is not a core characteristic of AI governance frameworks focused on trust and human-centered AI.
Option B: Ensuring alignment with business standards, revenue goals, and stakeholder expectations Alignment with business goals is important but not specific to AI governance frameworks for building trust and ethical AI deployment.
Option C: Overcoming challenges to drive business transformation and growth Overcoming challenges is a general business goal, not a defining characteristic of AI governance frameworks.
Option D: Developing policies and guidelines for data, transparency, responsible AI, and compliance This is the correct answer. This option directly describes the core components of AI governance frameworks that ensure trust and ethical AI deployment.
Reference: AWS Documentation: Responsible AI Governance (https://aws.amazon.com/machine-learning/responsible-ai/)
AWS AI Practitioner Learning Path: Module on AI Governance
AWS Well-Architected Framework: Machine Learning Lens
(https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/)
Question #84
A company is working on a large language model (LLM) and noticed that the LLM’s outputs are not as
diverse as expected.
Which parameter should the company adjust?
- A . Temperature
- B . Batch size
- C . Learning rate
- D . Optimizer type
Correct Answer: A
A
Explanation:
The correct answer is A because temperature controls the randomness of a language model’s output. A higher temperature increases diversity by making the model more likely to explore less probable tokens, while a lower temperature results in more deterministic and repetitive outputs.
From AWS documentation:
"The temperature parameter in LLMs adjusts the randomness of generated responses. Higher values (e.g., 0.8C1.0) produce more creative and diverse output, while lower values (e.g., 0.1C0.3) make output more focused and repetitive."
Explanation of other options:
B. Batch size is related to training efficiency, not output diversity.
C. Learning rate affects the training convergence rate, not inference-time output variety.
D. Optimizer type is a training configuration that influences how the model learns during training, not diversity during inference.
Referenced AWS AI/ML Documents and Study Guides:
Amazon Bedrock C Parameter Tuning Guide
AWS Machine Learning Specialty Guide C LLM Inference Parameters
A
Explanation:
The correct answer is A because temperature controls the randomness of a language model’s output. A higher temperature increases diversity by making the model more likely to explore less probable tokens, while a lower temperature results in more deterministic and repetitive outputs.
From AWS documentation:
"The temperature parameter in LLMs adjusts the randomness of generated responses. Higher values (e.g., 0.8C1.0) produce more creative and diverse output, while lower values (e.g., 0.1C0.3) make output more focused and repetitive."
Explanation of other options:
B. Batch size is related to training efficiency, not output diversity.
C. Learning rate affects the training convergence rate, not inference-time output variety.
D. Optimizer type is a training configuration that influences how the model learns during training, not diversity during inference.
Referenced AWS AI/ML Documents and Study Guides:
Amazon Bedrock C Parameter Tuning Guide
AWS Machine Learning Specialty Guide C LLM Inference Parameters
Question #85
Which prompting attack directly exposes the configured behavior of a large language model (LLM)?
- A . Prompted persona switches
- B . Exploiting friendliness and trust
- C . Ignoring the prompt template
- D . Extracting the prompt template
Correct Answer: D
D
Explanation:
A prompt template defines how the model is structured and guided (system prompts, roles, guardrails).
An attack that reveals or leaks this prompt template is known as a prompt extraction attack.
The other options (persona switching, exploiting friendliness, ignoring prompts) describe adversarial techniques but do not directly expose the internal configured behavior.
Reference: AWS Responsible AI C Prompt Injection & Extraction Attacks
D
Explanation:
A prompt template defines how the model is structured and guided (system prompts, roles, guardrails).
An attack that reveals or leaks this prompt template is known as a prompt extraction attack.
The other options (persona switching, exploiting friendliness, ignoring prompts) describe adversarial techniques but do not directly expose the internal configured behavior.
Reference: AWS Responsible AI C Prompt Injection & Extraction Attacks
