Practice Free AIF-C01 Exam Online Questions
A security company is using Amazon Bedrock to run foundation models (FMs). The company wants to ensure that only authorized users invoke the models. The company needs to identify any unauthorized access attempts to set appropriate AWS Identity and Access Management (IAM) policies and roles for future iterations of the FMs.
Which AWS service should the company use to identify unauthorized users that are trying to access Amazon Bedrock?
- A . AWS Audit Manager
- B . AWS CloudTrail
- C . Amazon Fraud Detector
- D . AWS Trusted Advisor
B
Explanation:
AWS CloudTrail is a service that enables governance, compliance, and operational and risk auditing
of your AWS account. It tracks API calls and identifies unauthorized access attempts to AWS
resources, including Amazon Bedrock.
AWS CloudTrail:
Provides detailed logs of all API calls made within an AWS account, including those to Amazon Bedrock.
Can identify unauthorized access attempts by logging and monitoring the API calls, which helps in setting appropriate IAM policies and roles.
Why Option B is Correct:
Monitoring and Security: CloudTrail logs all access requests and helps detect unauthorized access attempts.
Auditing and Compliance: The logs can be used to audit user activity and enforce security measures.
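As an illustration of how such auditing works, unauthorized invocation attempts appear in CloudTrail as events with an error code such as AccessDenied. The sketch below filters simplified stand-in records (real CloudTrail events carry many more fields) to find identities whose Amazon Bedrock calls were denied:

```python
# Sketch: spotting denied Amazon Bedrock calls in CloudTrail-style events.
# The records below are simplified stand-ins for real CloudTrail log entries.

sample_events = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/analyst"},
     "errorCode": "AccessDenied"},
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/app-role"}},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/analyst"},
     "errorCode": "AccessDenied"},
]

def denied_bedrock_calls(events):
    """Return the identities whose Bedrock API calls were denied."""
    return [e["userIdentity"]["arn"] for e in events
            if e["eventSource"] == "bedrock.amazonaws.com"
            and e.get("errorCode") == "AccessDenied"]

print(denied_bedrock_calls(sample_events))
```

The resulting list of ARNs is exactly the input needed to tighten IAM policies for future iterations.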
Why Other Options are Incorrect:
A. AWS Audit Manager: Automates evidence collection for compliance audits; it does not log individual API calls or access attempts.
C. Amazon Fraud Detector: Detects online fraud such as fake account creation or payment fraud, not unauthorized API access to AWS resources.
D. AWS Trusted Advisor: Provides best-practice recommendations on cost, performance, and security configuration; it does not track access attempts.
Which option describes embeddings in the context of AI?
- A . A method for compressing large datasets
- B . An encryption method for securing sensitive data
- C . A method for visualizing high-dimensional data
- D . A numerical method for data representation in a reduced dimensionality space
D
Explanation:
Embeddings in AI refer to numerical representations of data (e.g., text, images) in a lower-dimensional space, capturing semantic or contextual relationships. They are widely used in NLP and other AI tasks to represent complex data in a format that models can process efficiently.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"Embeddings are numerical representations of data in a reduced dimensionality space. In natural language processing, for example, word or sentence embeddings capture semantic relationships, enabling models to process text efficiently for tasks like classification or similarity search." (Source: AWS AI Practitioner Learning Path, Module on AI Concepts)
Detailed Explanation:
Option A: A method for compressing large datasets While embeddings reduce dimensionality, their primary purpose is not data compression but rather to represent data in a way that preserves meaningful relationships. This option is incorrect.
Option B: An encryption method for securing sensitive data Embeddings are not related to encryption or data security. They are used for data representation, making this option incorrect.
Option C: A method for visualizing high-dimensional data While embeddings can sometimes be used in visualization (e.g., t-SNE), their primary role is data representation for model processing, not visualization. This option is misleading.
Option D: A numerical method for data representation in a reduced dimensionality space This is the correct answer. Embeddings transform complex data into lower-dimensional numerical vectors, preserving semantic or contextual information for use in AI models.
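The idea behind option D can be shown with toy vectors: semantically related items receive embeddings that lie close together, which cosine similarity captures. The three-dimensional vectors below are made up for illustration; real embeddings typically have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (illustrative values only).
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.15]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, kitten))  # near 1.0: related meaning
print(cosine_similarity(cat, car))     # much lower: unrelated meaning
```

This closeness in the reduced-dimensionality space is what makes embeddings useful for classification and similarity search.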
Reference: AWS AI Practitioner Learning Path: Module on AI Concepts
Amazon Comprehend Developer Guide: Embeddings for Text Analysis (https://docs.aws.amazon.com/comprehend/latest/dg/embeddings.html)
AWS Documentation: What are Embeddings? (https://aws.amazon.com/what-is/embeddings/)
What does an F1 score measure in the context of foundation model (FM) performance?
- A . Model precision and recall.
- B . Model speed in generating responses.
- C . Financial cost of operating the model.
- D . Energy efficiency of the model’s computations.
A
Explanation:
The F1 score is the harmonic mean of precision and recall, making it a balanced metric for evaluating model performance when there is an imbalance between false positives and false negatives. Speed, cost, and energy efficiency are unrelated to the F1 score.
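For concreteness, here is how the F1 score is derived from raw true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, computed from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 80 true positives, 20 false positives, 40 false negatives.
# precision = 0.8, recall ~ 0.667, F1 ~ 0.727
print(round(f1_score(80, 20, 40), 3))
```

Because the harmonic mean penalizes imbalance, a model with high precision but poor recall (or vice versa) still receives a low F1 score.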
Reference: AWS Foundation Models Guide.
A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks.
Which solution will meet this requirement?
- A . Use Amazon Inspector to monitor SageMaker Studio.
- B . Use Amazon Macie to monitor SageMaker Studio.
- C . Configure SageMaker to use a VPC with an S3 endpoint.
- D . Configure SageMaker to use S3 Glacier Deep Archive.
C
Explanation:
To manage the flow of data from Amazon S3 to SageMaker Studio notebooks securely, using a VPC
with an S3 endpoint is the best solution.
Amazon SageMaker and S3 Integration:
Configuring SageMaker to use a Virtual Private Cloud (VPC) with an S3 endpoint allows the data flow between Amazon S3 and SageMaker Studio notebooks to occur over a private network.
This setup ensures that traffic between SageMaker and S3 does not traverse the public internet, enhancing security and performance.
Why Option C is Correct:
Secure Data Transfer: Ensures secure, private connectivity between SageMaker and S3, reducing exposure to potential security risks.
Direct Access to S3: Using an S3 endpoint in a VPC allows direct access to data in S3 without leaving the AWS network.
Why Other Options are Incorrect:
A. Amazon Inspector: Scans workloads for software vulnerabilities and unintended network exposure; it does not manage data flow between S3 and SageMaker.
B. Amazon Macie: Discovers and classifies sensitive data stored in S3; it monitors data content, not the network path to SageMaker Studio.
D. S3 Glacier Deep Archive: A low-cost archival storage class with long retrieval times, unsuitable for data that ML training jobs need to access actively.
How can companies use large language models (LLMs) securely on Amazon Bedrock?
- A . Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access.
- B . Enable AWS Audit Manager for automatic model evaluation jobs.
- C . Enable Amazon Bedrock automatic model evaluation jobs.
- D . Use Amazon CloudWatch Logs to make models explainable and to monitor for bias.
A
Explanation:
To securely use large language models (LLMs) on Amazon Bedrock, companies should design clear and specific prompts to avoid unintended outputs and ensure proper configuration of AWS Identity and Access Management (IAM) roles and policies with the principle of least privilege. This approach limits access to sensitive resources and minimizes the potential impact of security incidents.
Option A (Correct): "Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access": This is the correct answer as it directly addresses both security practices in prompt design and access management.
Option B: "Enable AWS Audit Manager for automatic model evaluation jobs" is incorrect because Audit Manager is for compliance and auditing, not directly related to secure LLM usage.
Option C: "Enable Amazon Bedrock automatic model evaluation jobs" is incorrect because Bedrock does not provide automatic model evaluation jobs specifically for security purposes.
Option D: "Use Amazon CloudWatch Logs to make models explainable and to monitor for bias" is incorrect because CloudWatch Logs are used for monitoring and not directly for making models explainable or secure.
Reference: Secure AI Practices on AWS (AWS AI Practitioner): AWS recommends configuring IAM roles and using least privilege access to ensure secure usage of AI models.
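A least-privilege policy like the one described above might allow invocation of a single model and nothing else. The sketch below expresses such a policy as a Python dictionary; the region and model ID are placeholder values for illustration, while `bedrock:InvokeModel` is the action that grants model invocation.

```python
import json

# Hypothetical least-privilege identity policy: permits invoking one
# specific Bedrock foundation model only. Region and model ID are
# placeholders chosen for illustration.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```

Scoping `Resource` to a single model ARN, rather than `*`, is what makes the policy least-privilege: a compromised credential cannot invoke any other model.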
Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?
- A . Helps decrease the model’s complexity
- B . Improves model performance over time
- C . Decreases the training time requirement
- D . Optimizes model inference time
B
Explanation:
Ongoing pre-training when fine-tuning a foundation model (FM) improves model performance over time by continuously learning from new data.
Ongoing Pre-Training:
Involves continuously training a model with new data to adapt to changing patterns, enhance generalization, and improve performance on specific tasks.
Helps the model stay updated with the latest data trends and minimize drift over time.
Why Option B is Correct:
Performance Enhancement: Continuously updating the model with new data improves its accuracy and relevance.
Adaptability: Ensures the model adapts to new data distributions or domain-specific nuances.
Why Other Options are Incorrect:
A. Ongoing pre-training does not decrease model complexity; the architecture remains unchanged.
C. It adds training time rather than reducing it, because the model keeps learning from new data.
D. It does not optimize inference time, which depends on the model architecture and serving infrastructure.
Which AWS service makes foundation models (FMs) available to help users build and scale generative AI applications?
- A . Amazon Q Developer
- B . Amazon Bedrock
- C . Amazon Kendra
- D . Amazon Comprehend
B
Explanation:
Amazon Bedrock is a fully managed service that provides access to foundation models (FMs) from various providers, enabling users to build and scale generative AI applications. It simplifies the process of integrating FMs into applications for tasks like text generation, chatbots, and more.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI providers available through a single API, enabling developers to build and scale generative AI applications with ease."
(Source: AWS Bedrock User Guide, Introduction to Amazon Bedrock)
Detailed Explanation:
Option A: Amazon Q Developer. An AI-powered assistant for coding and AWS service guidance, not a service for hosting or providing foundation models.
Option B: Amazon Bedrock. This is the correct answer. Amazon Bedrock provides access to foundation models, making it the primary service for building and scaling generative AI applications.
Option C: Amazon Kendra. An intelligent search service powered by machine learning, not a service for providing foundation models or building generative AI applications.
Option D: Amazon Comprehend. An NLP service for text analysis tasks like sentiment analysis, not for providing foundation models or supporting generative AI.
Reference: AWS Bedrock User Guide: Introduction to Amazon Bedrock (https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
AWS AI Practitioner Learning Path: Module on Generative AI Services
AWS Documentation: Generative AI on AWS (https://aws.amazon.com/generative-ai/)
Which feature of Amazon OpenSearch Service gives companies the ability to build vector database applications?
- A . Integration with Amazon S3 for object storage
- B . Support for geospatial indexing and queries
- C . Scalable index management and nearest neighbor search capability
- D . Ability to perform real-time analysis on streaming data
C
Explanation:
Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) has introduced capabilities to support vector search, which allows companies to build vector database applications. This is particularly useful in machine learning, where vector representations (embeddings) of data are often used to capture semantic meaning.
Scalable index management and nearest neighbor search capability are the core features enabling vector database functionalities in OpenSearch. The service allows users to index high-dimensional vectors and perform efficient nearest neighbor searches, which are crucial for tasks such as recommendation systems, anomaly detection, and semantic search.
Here is why option C is the correct answer:
Scalable Index Management: OpenSearch Service supports scalable indexing of vector data. This means you can index a large volume of high-dimensional vectors and manage these indexes in a cost-effective and performance-optimized way. The service leverages underlying AWS infrastructure to ensure that indexing scales seamlessly with data size.
Nearest Neighbor Search Capability: OpenSearch Service’s nearest neighbor search capability allows for fast and efficient searches over vector data. This is essential for applications like product recommendation engines, where the system needs to quickly find the most similar items based on a user’s query or behavior.
Reference: AWS AI Practitioner documentation: OpenSearch Service’s support for nearest neighbor search using vector embeddings is a key feature for companies building machine learning applications that require similarity search.
The service uses Approximate Nearest Neighbors (ANN) algorithms to speed up searches over large datasets, ensuring high performance even with large-scale vector data.
The other options do not directly relate to building vector database applications: S3 integration (A) provides object storage, geospatial indexing (B) serves location-based queries, and real-time streaming analysis (D) targets log and event analytics; none of these enables vector similarity search.
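The core operation that makes this a vector database is nearest neighbor search over embedding vectors. The brute-force sketch below illustrates the idea with made-up, low-dimensional item embeddings; OpenSearch performs the same kind of search at scale using ANN indexes instead of exhaustive comparison.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbors(query, index, k=2):
    """Return the k stored items closest to the query vector."""
    ranked = sorted(index.items(), key=lambda item: euclidean(query, item[1]))
    return [name for name, _ in ranked[:k]]

# Toy index of item embeddings (real embeddings are much higher-dimensional).
index = {
    "running shoes": [0.9, 0.1, 0.2],
    "trail sneakers": [0.85, 0.15, 0.25],
    "coffee maker": [0.1, 0.9, 0.8],
}

print(nearest_neighbors([0.88, 0.12, 0.22], index))
```

A recommendation engine does exactly this: embed the user's query or behavior, then retrieve the closest item vectors; ANN algorithms trade a small amount of accuracy for far faster retrieval over millions of vectors.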
A company wants to build an ML model by using Amazon SageMaker. The company needs to share and manage variables for model development across multiple teams.
Which SageMaker feature meets these requirements?
- A . Amazon SageMaker Feature Store
- B . Amazon SageMaker Data Wrangler
- C . Amazon SageMaker Clarify
- D . Amazon SageMaker Model Cards
A
Explanation:
Amazon SageMaker Feature Store is the correct solution for sharing and managing variables (features) across multiple teams during model development.
Amazon SageMaker Feature Store:
A fully managed repository for storing, sharing, and managing machine learning features across different teams and models.
It enables collaboration and reuse of features, ensuring consistent data usage and reducing redundancy.
Why Option A is Correct:
Centralized Feature Management: Provides a central repository for managing features, making it easier to share them across teams.
Collaboration and Reusability: Improves efficiency by allowing teams to reuse existing features
instead of creating them from scratch.
Why Other Options are Incorrect:
B. SageMaker Data Wrangler: Helps with data preparation and analysis but does not provide a centralized feature store.
C. SageMaker Clarify: Used for bias detection and explainability, not for managing variables across teams.
D. SageMaker Model Cards: Provide model documentation, not feature management.
A bank is fine-tuning a large language model (LLM) on Amazon Bedrock to assist customers with questions about their loans. The bank wants to ensure that the model does not reveal any private customer data.
Which solution meets these requirements?
- A . Use Amazon Bedrock Guardrails.
- B . Remove personally identifiable information (PII) from the customer data before fine-tuning the LLM.
- C . Increase the Top-K parameter of the LLM.
- D . Store customer data in Amazon S3. Encrypt the data before fine-tuning the LLM.
B
Explanation:
The goal is to prevent a fine-tuned large language model (LLM) on Amazon Bedrock from revealing private customer data.
Let’s analyze the options: