Practice Free AIP-C01 Exam Online Questions
A financial services company is developing a customer service AI assistant by using Amazon Bedrock. The AI assistant must not discuss investment advice with users. The AI assistant must block harmful content, mask personally identifiable information (PII), and maintain audit trails for compliance reporting. The AI assistant must apply content filtering to both user inputs and model responses based on content sensitivity.
The company requires an Amazon Bedrock guardrail configuration that will effectively enforce policies with minimal false positives. The solution must provide multiple handling strategies for multiple types of sensitive content.
Which solution will meet these requirements?
- A . Configure a single guardrail and set content filters to high for all categories. Set up denied topics for investment advice and include sample phrases to block. Set up sensitive information filters that apply the block action for all PII entities. Apply the guardrail to all model inference calls.
- B . Configure multiple guardrails by using tiered policies. Create one guardrail and set content filters to high. Configure the guardrail to block PII for public interactions. Configure a second guardrail and set content filters to medium. Configure the second guardrail to mask PII for internal use. Configure multiple topic-specific guardrails to block investment advice and set up contextual grounding checks.
- C . Configure a guardrail and set content filters to medium for harmful content. Set up denied topics for investment advice and include clear definitions and sample phrases to block. Configure sensitive information filters to mask PII in responses and to block financial information in inputs. Enable both input and output evaluations that use custom blocked messages for audits.
- D . Create a separate guardrail for each use case. Create one guardrail that applies a harmful content filter. Create a guardrail to apply topic filters for investment advice. Create a guardrail to apply sensitive information filters to block PII. Use AWS Step Functions to chain the guardrails sequentially.
C
Explanation:
Option C is the correct solution because it uses a single, well-tuned Amazon Bedrock guardrail that applies different actions to different content types, which is the recommended approach for minimizing false positives while enforcing strong policy controls.
Setting content filters to medium rather than high reduces over-blocking of benign customer conversations while still preventing harmful content. Amazon Bedrock guardrails are designed to balance precision and recall, and medium sensitivity is commonly recommended for customer-facing financial services use cases.
Denied topics explicitly prevent the assistant from discussing investment advice, which is a regulatory requirement. Including definitions and sample phrases improves detection accuracy and reduces ambiguity.
Sensitive information filters support different actions per context. Masking PII in responses preserves conversational usefulness for legitimate customer support while preventing exposure of sensitive data. Blocking sensitive financial information in inputs prevents downstream processing of disallowed content before it reaches the foundation model.
Critically, enabling both input and output evaluation ensures that guardrails are applied consistently at every stage of interaction. Custom blocked messages and audit logging provide clear compliance evidence for regulators and internal audits.
Option A causes excessive false positives by blocking all PII outright.
Option B introduces unnecessary complexity and is not how Bedrock guardrails are intended to be applied.
Option D uses orchestration logic that Bedrock guardrails already handle natively.
Therefore, Option C best satisfies enforcement, flexibility, auditability, and accuracy requirements.
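The single-guardrail configuration that Option C describes can be sketched with the Bedrock `CreateGuardrail` operation. This is a minimal sketch, not a production policy: the guardrail name, the specific filter categories, the PII entity types, and the blocked messages are all illustrative assumptions.

```python
def build_guardrail_request():
    """Build a CreateGuardrail request matching Option C: medium content
    filters, a denied topic for investment advice with a definition and
    sample phrase, PII masking, and custom blocked messages for audits."""
    return {
        "name": "customer-service-guardrail",  # hypothetical name
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": t, "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"}
                for t in ["HATE", "INSULTS", "SEXUAL", "VIOLENCE"]
            ]
        },
        "topicPolicyConfig": {
            "topicsConfig": [{
                "name": "InvestmentAdvice",
                "definition": "Recommendations about buying or selling "
                              "specific securities or investment products.",
                "examples": ["Which stocks should I buy?"],
                "type": "DENY",
            }]
        },
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                {"type": "EMAIL", "action": "ANONYMIZE"},  # mask, don't block
                {"type": "PHONE", "action": "ANONYMIZE"},
            ]
        },
        "blockedInputMessaging": "This request cannot be processed.",
        "blockedOutputsMessaging": "This response was blocked by policy.",
    }

# import boto3
# boto3.client("bedrock").create_guardrail(**build_guardrail_request())
```

Using `ANONYMIZE` for PII while reserving `DENY` for the investment-advice topic is what gives the "multiple handling strategies for multiple types of sensitive content" the question asks for.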
A healthcare company is developing a document management system that stores medical research papers in an Amazon S3 bucket. The company needs a comprehensive metadata framework to improve search precision for a GenAI application. The metadata must include document timestamps, author information, and research domain classifications.
The solution must maintain a consistent metadata structure across all uploaded documents and allow foundation models (FMs) to understand document context without accessing full content.
Which solution will meet these requirements?
- A . Store document timestamps in Amazon S3 system metadata. Use S3 object tags for domain classification. Implement custom user-defined metadata to store author information.
- B . Set up S3 Object Lock with legal holds to track document timestamps. Use S3 object tags for author information. Implement S3 access points for domain classification.
- C . Use S3 Inventory reports to track timestamps. Create S3 access points for domain classification. Store author information in S3 Storage Lens dashboards.
- D . Use custom user-defined metadata to store author information. Use S3 Object Lock retention periods for timestamps. Use S3 Event Notifications for domain classification.
A
Explanation:
Option A is the correct solution because it uses native Amazon S3 metadata mechanisms to create a consistent, queryable, and model-friendly metadata framework with minimal complexity. S3 system metadata automatically records object creation and modification timestamps, providing reliable and consistent temporal context without additional processing.
Custom user-defined metadata is the appropriate mechanism for storing structured attributes such as author information. These key-value pairs are stored directly with the object, remain consistent across uploads, and can be accessed programmatically by downstream indexing or retrieval systems used by GenAI applications.
S3 object tags are ideal for domain classification because they are designed for lightweight categorization, filtering, and access control. Tags can be standardized across the organization to ensure consistent research domain labeling and can be consumed by search indexes or knowledge base ingestion pipelines without requiring access to the full document body.
Together, system metadata, user-defined metadata, and object tags provide a clean separation of concerns: timestamps for temporal context, metadata for authorship, and tags for classification. This structure allows foundation models to reason about document context (such as recency, domain relevance, and authorship) based on metadata alone, improving retrieval precision and reducing unnecessary token usage.
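The separation of concerns described above can be sketched as a `PutObject` request that pairs user-defined metadata with object tags. The bucket, key, and tag names are illustrative assumptions; a real deployment would standardize them organization-wide.

```python
def build_upload_request(bucket, key, author, domain):
    """Build a PutObject request that stores author information as
    user-defined metadata and the research domain as an object tag.
    The timestamp is NOT set here: S3 system metadata records the
    object's Last-Modified time automatically on upload."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Metadata": {"author": author},          # stored as x-amz-meta-author
        "Tagging": f"research-domain={domain}",  # URL-encoded key=value pairs
    }

params = build_upload_request("research-papers", "oncology/paper-001.pdf",
                              "J. Rivera", "oncology")
# import boto3
# boto3.client("s3").put_object(Body=open("paper-001.pdf", "rb"), **params)
```

Downstream indexing pipelines can then read the metadata back with `HeadObject` and the tags with `GetObjectTagging`, without ever fetching the document body.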
Options B, C, and D misuse features like Object Lock, access points, Storage Lens, or event notifications for purposes they were not designed for, adding complexity without improving metadata quality or model understanding.
Therefore, Option A best satisfies the metadata consistency, context enrichment, and low-overhead requirements for GenAI-driven document analysis.
A company uses Amazon Bedrock to build a Retrieval Augmented Generation (RAG) system. The RAG system uses an Amazon Bedrock knowledge base with an Amazon S3 bucket as the data source for emergency news video content. The system retrieves transcripts, archived reports, and related documents from the S3 bucket.
The RAG system uses state-of-the-art embedding models and a high-performing retrieval setup. However, users report slow responses and irrelevant results, which cause decreased user satisfaction. The company notices that vector searches are evaluating too many documents across too many content types and over long periods of time.
The company determines that the underlying models will not benefit from additional fine-tuning. The company must improve retrieval accuracy by applying smarter constraints and wants a solution that requires minimal changes to the existing architecture.
Which solution will meet these requirements?
- A . Enhance embeddings by using a domain-adapted model that is specifically trained on emergency news content for improved vector similarity.
- B . Migrate to Amazon OpenSearch Service. Use vector fields and metadata filters to define the scope of results retrieval.
- C . Enable metadata-aware filtering within the Amazon Bedrock knowledge base by indexing S3 object metadata.
- D . Migrate to an Amazon Q Business index to perform structured metadata filtering and document categorization during retrieval.
C
Explanation:
Option C is the correct solution because it directly addresses the root cause of the problem (overly broad retrieval) while requiring minimal architectural change. Amazon Bedrock Knowledge Bases support metadata-aware filtering, which allows the system to constrain retrieval queries based on indexed metadata such as content type, publication date, source, or category.
By indexing Amazon S3 object metadata, the company can restrict vector searches to relevant subsets of the corpus, such as recent emergency reports, specific content formats, or trusted sources. This significantly reduces the number of documents evaluated during retrieval, which improves both latency and result relevance without changing embedding models or retrieval infrastructure.
This approach aligns with AWS best practices for optimizing RAG systems: when embeddings are already strong, retrieval quality is often improved by narrowing the candidate set rather than increasing model complexity. Metadata filtering reduces noise and ensures that retrieved documents are more contextually aligned with user queries.
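A metadata-filtered retrieval of the kind described above can be sketched as a `vectorSearchConfiguration` for the knowledge base Retrieve call. The field names (`contentType`, `publishedAt`) are assumptions; they must match the metadata keys actually indexed alongside each S3 document.

```python
def build_retrieval_config(content_type, min_date):
    """Build a vectorSearchConfiguration with a metadata filter that
    narrows the candidate set (by content type and recency) before
    vector similarity scoring runs."""
    return {
        "vectorSearchConfiguration": {
            "numberOfResults": 10,
            "filter": {
                "andAll": [
                    {"equals": {"key": "contentType", "value": content_type}},
                    # date stored as a numeric YYYYMMDD value at ingestion
                    {"greaterThanOrEquals": {"key": "publishedAt",
                                             "value": min_date}},
                ]
            },
        }
    }

config = build_retrieval_config("transcript", 20240101)
# import boto3
# boto3.client("bedrock-agent-runtime").retrieve(
#     knowledgeBaseId="KB_ID",
#     retrievalQuery={"text": "latest evacuation orders"},
#     retrievalConfiguration=config)
```

Because the filter is evaluated inside the knowledge base, no application-side post-filtering is needed, which is what keeps the architectural change minimal.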
Option A requires retraining or adapting embedding models, which the company has already determined will not provide additional benefit.
Option B introduces a migration to OpenSearch, which adds operational overhead and deviates from the existing Bedrock knowledge base architecture.
Option D requires moving to a different indexing service, increasing complexity and implementation effort.
Therefore, Option C provides the most effective and low-effort solution to improve retrieval accuracy and performance in the existing Amazon Bedrock RAG system.
A company is designing a solution that uses foundation models (FMs) to support multiple AI workloads. Some FMs must be invoked on demand and in real time. Other FMs require consistent high-throughput access for batch processing.
The solution must support hybrid deployment patterns and run workloads across cloud infrastructure and on-premises infrastructure to comply with data residency and compliance requirements.
Which combination of steps will meet these requirements? (Select TWO.)
- A . Use AWS Lambda to orchestrate low-latency FM inference by invoking FMs hosted on Amazon SageMaker AI asynchronous endpoints.
- B . Configure provisioned throughput in Amazon Bedrock to ensure consistent performance for high-volume workloads.
- C . Deploy FMs to Amazon SageMaker AI endpoints with support for edge deployment by using Amazon SageMaker Neo. Orchestrate the FMs by using AWS Lambda to support hybrid deployment.
- D . Use Amazon Bedrock with auto-scaling to handle unpredictable traffic surges.
- E . Use Amazon SageMaker JumpStart to host and invoke the FMs.
B, C
Explanation:
The correct combination is B and C because together they address both workload diversity and hybrid deployment requirements with minimal custom engineering.
Option B provides consistent, high-throughput access by configuring provisioned throughput in Amazon Bedrock. Provisioned throughput guarantees predictable capacity and performance, which is essential for batch processing workloads that require sustained inference rates. This eliminates cold starts and throttling concerns that can occur with purely on-demand usage, making it well suited for high-volume enterprise workloads.
Option C enables hybrid deployment across cloud and on-premises environments by deploying foundation models to Amazon SageMaker AI endpoints and using Amazon SageMaker Neo for edge and on-premises optimization. SageMaker Neo compiles models for target hardware, allowing inference to run efficiently outside the AWS cloud while still using AWS-managed tooling. Orchestrating these deployments with AWS Lambda allows consistent invocation patterns across environments.
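The provisioned-throughput half of this answer can be sketched with the Bedrock `CreateProvisionedModelThroughput` operation. The model ID, name, and unit count below are illustrative assumptions; actual capacity sizing depends on the workload.

```python
def build_provisioned_throughput_request(model_id, units):
    """Build a CreateProvisionedModelThroughput request that reserves
    dedicated model units for sustained, predictable batch inference."""
    return {
        "provisionedModelName": "batch-processing-capacity",  # hypothetical
        "modelId": model_id,
        "modelUnits": units,
    }

req = build_provisioned_throughput_request(
    "anthropic.claude-3-haiku-20240307-v1:0", 2)
# import boto3
# response = boto3.client("bedrock").create_provisioned_model_throughput(**req)
# At inference time, pass response["provisionedModelArn"] as the modelId
# so requests draw on the reserved capacity instead of on-demand quota.
```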
Option A uses asynchronous endpoints, which are not suitable for real-time, low-latency inference.
Option D addresses scaling but does not support on-premises or hybrid deployment.
Option E simplifies model onboarding but does not address hybrid execution or guaranteed throughput.
Therefore, Options B and C together provide real-time and batch support, predictable performance, and true hybrid deployment while minimizing operational overhead.
A company is developing a generative AI (GenAI) application that analyzes customer service calls in real time and generates suggested responses for human customer service agents. The application must process 500,000 concurrent calls during peak hours with less than 200 ms end-to-end latency for each suggestion. The company uses existing architecture to transcribe customer call audio streams. The application must not exceed a predefined monthly compute budget and must maintain auto scaling capabilities.
Which solution will meet these requirements?
- A . Deploy a large, complex reasoning model on Amazon Bedrock. Purchase provisioned throughput and optimize for batch processing.
- B . Deploy a low-latency, real-time optimized model on Amazon Bedrock. Purchase provisioned throughput and set up automatic scaling policies.
- C . Deploy a large language model (LLM) on an Amazon SageMaker real-time endpoint that uses dedicated GPU instances.
- D . Deploy a mid-sized language model on an Amazon SageMaker serverless endpoint that is optimized for batch processing.
B
Explanation:
Option B is the correct solution because it aligns with AWS guidance for building high-throughput, ultra-low-latency GenAI applications while maintaining predictable costs and automatic scaling. Amazon Bedrock provides access to foundation models that are specifically optimized for real-time inference use cases, including conversational and recommendation-style workloads that require responses within milliseconds.
Low-latency models in Amazon Bedrock are designed to handle very high request rates with minimal per-request overhead. Purchasing provisioned throughput ensures that sufficient model capacity is reserved to handle peak loads, eliminating cold starts and reducing request queuing during traffic surges. This is critical when supporting up to 500,000 concurrent calls with strict latency requirements.
Automatic scaling policies allow the application to dynamically adjust capacity based on demand, ensuring cost efficiency during off-peak hours while maintaining performance during peak usage. This directly supports the requirement to stay within a predefined monthly compute budget.
Option A fails because batch processing and complex reasoning models introduce higher latency and are not suitable for real-time suggestions.
Option C introduces significantly higher operational and cost overhead due to dedicated GPU instances and manual scaling responsibilities.
Option D is optimized for batch workloads and cannot meet the sub-200 ms latency requirement.
Therefore, Option B provides the best balance of performance, scalability, cost control, and operational simplicity using AWS-native GenAI services.
A bank is developing a generative AI (GenAI)-powered AI assistant that uses Amazon Bedrock to assist the bank’s website users with account inquiries and financial guidance. The bank must ensure that the AI assistant does not reveal any personally identifiable information (PII) in customer interactions.
The AI assistant must not send PII in prompts to the GenAI model. The AI assistant must not respond to customer requests to provide investment advice. The bank must collect audit logs of all customer interactions, including any images or documents that are transmitted during customer interactions.
Which solution will meet these requirements with the LEAST operational effort?
- A . Use Amazon Macie to detect and redact PII in user inputs and in the model responses. Apply prompt engineering techniques to force the model to avoid investment advice topics. Use AWS CloudTrail to capture conversation logs.
- B . Use an AWS Lambda function and Amazon Comprehend to detect and redact PII. Use Amazon Comprehend topic modeling to prevent the AI assistant from discussing investment advice topics. Set up custom metrics in Amazon CloudWatch to capture customer conversations.
- C . Configure Amazon Bedrock guardrails to apply a sensitive information policy to detect and filter PII. Set up a topic policy to ensure that the AI assistant avoids investment advice topics. Use the Converse API to log model invocations. Enable delivery and image logging to Amazon S3.
- D . Use regex controls to match patterns for PII. Apply prompt engineering techniques to avoid returning PII or investment advice topics to customers. Enable model invocation logging, delivery logging, and image logging to Amazon S3.
C
Explanation:
Option C is the correct solution because Amazon Bedrock guardrails are purpose-built to enforce defense-in-depth safety controls for GenAI applications with minimal operational overhead. Guardrails provide managed, policy-based enforcement that operates before prompts are sent to the foundation model and after responses are generated, which directly satisfies the requirement that PII must not be sent to the model and must not appear in outputs.
By configuring a sensitive information policy, the application can automatically detect and redact PII in user inputs and model responses without building custom preprocessing pipelines. This approach is more reliable and scalable than regex or prompt engineering techniques, which are brittle and error-prone for sensitive data handling.
The topic policy capability in Amazon Bedrock guardrails allows the bank to explicitly block investment advice topics, ensuring regulatory compliance. This policy-based approach is safer and more auditable than attempting to steer the model only through prompt instructions.
Using the Converse API enables structured, standardized interactions with the model and supports consistent logging of requests and responses. Enabling delivery logging and image logging to Amazon S3 ensures that all customer interactions, including documents and images, are captured in a durable, auditable storage layer. This directly supports compliance, regulatory audits, and forensic analysis.
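A Converse API call with the guardrail attached can be sketched as follows. The model ID, guardrail identifier, and version are placeholders; in practice they come from the bank's Bedrock configuration.

```python
def build_converse_request(guardrail_id, guardrail_version, user_text):
    """Build a Converse request with guardrailConfig attached, so the
    sensitive information policy and topic policy are evaluated on
    both the user prompt and the generated response."""
    return {
        "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
        },
    }

req = build_converse_request("gr-abc123", "1", "What is my account balance?")
# import boto3
# boto3.client("bedrock-runtime").converse(**req)
```

With model invocation logging enabled at the account level, each of these calls (including any image content blocks) is delivered to the configured S3 bucket for audit.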
Option A incorrectly relies on Amazon Macie, which is designed for data-at-rest discovery rather than real-time conversational filtering.
Option B introduces custom Lambda pipelines and topic modeling, increasing operational complexity.
Option D relies on regex and prompt engineering, which do not meet financial-grade compliance standards.
Therefore, Option C delivers the strongest security, governance, and auditability with the least operational effort.
A company is creating a generative AI (GenAI) application that uses Amazon Bedrock foundation models (FMs). The application must use Microsoft Entra ID to authenticate. All FM API calls must stay on private network paths. Access to the application must be limited by department to specific model families. The company also needs a comprehensive audit trail of model interactions.
Which solution will meet these requirements?
- A . Configure SAML federation between Microsoft Entra ID and AWS Identity and Access Management. Create department-specific IAM roles that allow only the required ModelId values. Create AWS PrivateLink interface VPC endpoints for Amazon Bedrock runtime services. Enable AWS CloudTrail to capture Amazon Bedrock API calls. Configure Amazon Bedrock model invocation logging to record detailed model interactions.
- B . Create an identity provider (IdP) connection in IAM to authenticate by using Microsoft Entra ID. Assign department permission sets to control access to specific model families. Deploy AWS Lambda functions in private subnets with a NAT gateway for egress to Amazon Bedrock public endpoints. Enable CloudWatch Logs to capture model interactions for auditing purposes.
- C . Create a SAML identity provider (IdP) in IAM to authenticate by using Microsoft Entra ID. Use IAM permissions boundaries to limit department roles’ access to specific model families. Configure public Amazon Bedrock API endpoints with VPC routing to maintain private network connectivity. Set up CloudTrail with Amazon S3 Lifecycle rules to manage audit logs of model interactions.
- D . Configure OpenID Connect (OIDC) federation between Microsoft Entra ID and IAM. Use attribute-based access control to map department attributes to specific model access permissions. Apply SCP policies to restrict access to Amazon Bedrock FM families based on department. Use Microsoft Entra ID’s built-in logging capabilities to maintain an audit trail of model interactions.
A
Explanation:
Option A is the correct solution because it satisfies authentication, private connectivity, fine-grained authorization, and auditing using AWS-recommended patterns.
SAML federation between Microsoft Entra ID and IAM is a mature, well-supported integration that enables centralized enterprise authentication. Department-specific IAM roles allow precise control over which Bedrock ModelId values each department can invoke, enforcing access by model family.
Using AWS PrivateLink interface VPC endpoints for Amazon Bedrock runtime services ensures that all inference traffic stays on private AWS network paths, with no public internet exposure. NAT gateways and public endpoints, as used in other options, violate this requirement.
AWS CloudTrail provides authoritative audit logs of all Bedrock API calls, which is required for compliance. Amazon Bedrock model invocation logging complements CloudTrail by capturing detailed prompt and response metadata for deeper auditing and investigation.
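A department-scoped IAM policy of the kind Option A describes might look like the following sketch. The region and the wildcarded model family are illustrative assumptions; a real policy would enumerate the exact model ARNs the department is approved to use.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOneModelFamilyOnly",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.*"
    }
  ]
}
```

Attaching a policy like this to each department's federated role enforces the model-family restriction at the IAM layer, independently of the application code.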
Option B uses public endpoints via NAT.
Option C incorrectly claims public endpoints can be private.
Option D relies on IdP-side logs, which do not capture Bedrock API activity.
Therefore, Option A is the only solution that fully meets security, compliance, and observability requirements.
A retail company is using Amazon Bedrock to develop a customer service AI assistant. Analysis shows that 70% of customer inquiries are simple product questions that a smaller model can effectively handle. However, 30% of inquiries are complex return policy questions that require advanced reasoning.
The company wants to implement a cost-effective model selection framework to automatically route customer inquiries to appropriate models based on inquiry complexity. The framework must maintain high customer satisfaction and minimize response latency.
Which solution will meet these requirements with the LEAST implementation effort?
- A . Create a multi-stage architecture that uses a small foundation model (FM) to classify the complexity of each inquiry. Route simple inquiries to a smaller, more cost-effective model. Route complex inquiries to a larger, more capable model. Use AWS Lambda functions to handle routing logic.
- B . Use Amazon Bedrock intelligent prompt routing to automatically analyze inquiries. Route simple product inquiries to smaller models and route complex return policy inquiries to more capable larger models.
- C . Implement a single-model solution that uses an Amazon Bedrock mid-sized foundation model (FM) with on-demand pricing. Include special instructions in model prompts to handle both simple and complex inquiries by using the same model.
- D . Create separate Amazon Bedrock endpoints for simple and complex inquiries. Implement a rule-based routing system based on keyword detection. Use on-demand pricing for the smaller model and provisioned throughput for the larger model.
B
Explanation:
Option B is the correct solution because it leverages native Amazon Bedrock intelligent prompt routing, which is specifically designed to reduce cost and complexity in multi-model GenAI architectures. Intelligent prompt routing automatically analyzes incoming prompts and selects the most appropriate foundation model based on prompt characteristics and complexity, without requiring custom classification logic or orchestration code.
This approach directly meets the requirement for least implementation effort. The company does not need to deploy additional Lambda functions, maintain routing rules, or manage separate classification stages. Routing decisions are handled by Bedrock, which simplifies architecture and reduces operational risk.
By routing the majority (70%) of simple product inquiries to smaller, lower-cost models, the company minimizes inference cost and latency. More complex return policy inquiries are automatically routed to larger models that provide better reasoning capabilities, preserving response quality and customer satisfaction.
Because routing is handled inline by Bedrock, response latency remains low compared to multi-stage architectures that require an additional classification model call before inference. This is critical for customer service scenarios where responsiveness directly impacts satisfaction.
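From the application's point of view, intelligent prompt routing is largely invisible: the Converse call targets a prompt router ARN instead of a single model ID. The ARN below is a placeholder; actual router ARNs are listed in the Bedrock console for the account and region.

```python
# Placeholder: substitute the real prompt router ARN from your account.
ROUTER_ARN = "arn:aws:bedrock:us-east-1:123456789012:default-prompt-router/EXAMPLE"

def build_routed_request(user_text):
    """Build a Converse request addressed to a prompt router rather
    than a specific model; Bedrock selects the target model for
    each request based on the prompt's complexity."""
    return {
        "modelId": ROUTER_ARN,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
    }

req = build_routed_request("Does this jacket come in medium?")
# import boto3
# boto3.client("bedrock-runtime").converse(**req)
```

Because only the `modelId` changes, no Lambda routing layer or keyword rules are needed, which is what makes this the least-effort option.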
Option A introduces additional inference steps and custom logic.
Option C increases cost by overusing a mid-sized model for all queries.
Option D relies on brittle keyword rules and increases operational overhead through endpoint management.
Therefore, Option B delivers the optimal balance of cost efficiency, performance, and simplicity for dynamic model selection in Amazon Bedrock.
A company runs a Retrieval Augmented Generation (RAG) application that uses Amazon Bedrock Knowledge Bases to perform regulatory compliance queries. The application uses the RetrieveAndGenerateStream API. The application retrieves relevant documents from a knowledge base that contains more than 50,000 regulatory documents, legal precedents, and policy updates.
The RAG application is producing suboptimal responses because the initial retrieval often returns semantically similar but contextually irrelevant documents. The poor responses are causing model hallucinations and incorrect regulatory guidance. The company needs to improve the performance of the RAG application so it returns more relevant documents.
Which solution will meet this requirement with the LEAST operational overhead?
- A . Deploy an Amazon SageMaker endpoint to run a fine-tuned ranking model. Use an Amazon API Gateway REST API to route requests. Configure the application to make requests through the REST API to rerank the results.
- B . Use Amazon Comprehend to classify documents and apply relevance scores. Integrate the RAG application’s reranking process with Amazon Textract to run document analysis. Use Amazon Neptune to perform graph-based relevance calculations.
- C . Implement a retrieval pipeline that uses the Amazon Bedrock Knowledge Bases Retrieve API to perform initial document retrieval. Call the Amazon Bedrock Rerank API to rerank the results. Invoke the InvokeModelWithResponseStream operation to generate responses.
- D . Use the latest Amazon reranker model through the reranking configuration within Amazon Bedrock Knowledge Bases. Use the model to improve document relevance scoring and to reorder results based on contextual assessments.
D
Explanation:
Option D is the correct solution because Amazon Bedrock Knowledge Bases natively support reranking by using Amazon-managed reranker models, which are specifically designed to improve contextual relevance after the initial vector retrieval step. This approach directly addresses the root cause of the issue: semantically similar but contextually irrelevant documents being passed to the foundation model.
By enabling the reranking configuration within Amazon Bedrock Knowledge Bases, the application can automatically reorder retrieved documents based on deeper contextual understanding, such as regulatory scope, legal applicability, and semantic intent. This significantly improves retrieval precision, which reduces hallucinations and improves the factual accuracy of generated regulatory guidance.
Option D requires no additional infrastructure, no custom orchestration logic, and no separate model hosting. The reranking is fully managed by Amazon Bedrock and integrates seamlessly with the existing RetrieveAndGenerateStream workflow. This makes it the lowest operational overhead solution.
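The reranking configuration can be sketched as part of the knowledge base's `vectorSearchConfiguration`: retrieve a wide candidate set, then let the managed reranker reorder it and keep only the top results. The reranker model ARN and the candidate/result counts are illustrative assumptions.

```python
def build_reranked_retrieval_config(reranker_arn):
    """Build a vectorSearchConfiguration that fetches a broad candidate
    set (25) and applies a Bedrock reranking model to keep the 5 most
    contextually relevant documents."""
    return {
        "vectorSearchConfiguration": {
            "numberOfResults": 25,  # broad initial vector retrieval
            "rerankingConfiguration": {
                "type": "BEDROCK_RERANKING_MODEL",
                "bedrockRerankingConfiguration": {
                    "modelConfiguration": {"modelArn": reranker_arn},
                    "numberOfRerankedResults": 5,
                },
            },
        }
    }

config = build_reranked_retrieval_config(
    "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0")
# Pass config as the retrievalConfiguration on the existing
# RetrieveAndGenerateStream request; no other pipeline change is needed.
```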
Option A introduces operational complexity by requiring a custom SageMaker endpoint, API Gateway routing, and model lifecycle management.
Option B combines multiple unrelated services and introduces significant complexity without being purpose-built for RAG relevance ranking.
Option C improves relevance but requires explicitly calling the Rerank API and modifying the application pipeline, which increases operational and integration effort compared to built-in reranking.
Therefore, Option D provides the most efficient, scalable, and AWS-recommended method to improve RAG retrieval quality while minimizing operational burden.
A financial services company is building a customer support application that retrieves relevant financial regulation documents from a database based on semantic similarity to user queries. The application must integrate with Amazon Bedrock to generate responses. The application must search documents in English, Spanish, and Portuguese. The application must filter documents by metadata such as publication date, regulatory agency, and document type.
The database stores approximately 10 million document embeddings. To minimize operational overhead, the company wants a solution that minimizes management and maintenance effort while providing low-latency responses for real-time customer interactions.
Which solution will meet these requirements?
- A . Use Amazon OpenSearch Serverless to provide vector search capabilities and metadata filtering. Integrate with Amazon Bedrock Knowledge Bases to enable Retrieval Augmented Generation (RAG) using an Anthropic Claude foundation model.
- B . Deploy an Amazon Aurora PostgreSQL database with the pgvector extension. Store embeddings and metadata in tables. Use SQL queries for similarity search and send results to Amazon Bedrock for response generation.
- C . Use Amazon S3 Vectors to configure a vector index and non-filterable metadata fields. Integrate S3 Vectors with Amazon Bedrock for RAG.
- D . Set up an Amazon Neptune Analytics database with a vector index. Use graph-based retrieval and Amazon Bedrock for response generation.
A
Explanation:
Option A is the optimal solution because it provides scalable semantic search, rich metadata filtering, and tight integration with Amazon Bedrock while minimizing operational overhead. Amazon OpenSearch Serverless is designed for high-volume, low-latency search workloads and removes the need to manage clusters, capacity planning, or scaling policies.
With support for vector search and structured metadata filtering, OpenSearch Serverless enables efficient similarity search across 10 million embeddings while applying constraints such as language, publication date, regulatory agency, and document type. This is critical for financial services use cases where relevance and compliance depend on precise filtering.
Integrating OpenSearch Serverless with Amazon Bedrock Knowledge Bases enables a fully managed RAG workflow. The knowledge base handles embedding generation, retrieval, and context assembly, while Amazon Bedrock generates responses using a foundation model. This significantly reduces custom glue code and operational complexity.
Multilingual support is handled at the embedding and retrieval layer, allowing documents in English, Spanish, and Portuguese to be searched semantically without language-specific query logic. OpenSearch’s distributed architecture ensures consistent low-latency responses for real-time customer interactions.
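A language-filtered RAG query against this setup can be sketched with a `RetrieveAndGenerate` request. The `language` metadata key is an assumption about how documents were tagged at ingestion; the knowledge base ID and model ARN are placeholders.

```python
def build_rag_request(kb_id, model_arn, query, language):
    """Build a RetrieveAndGenerate request that combines knowledge-base
    retrieval, a metadata filter on document language, and response
    generation by a foundation model in one managed call."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {
                        "filter": {"equals": {"key": "language",
                                              "value": language}}
                    }
                },
            },
        },
    }

req = build_rag_request(
    "KB_ID",
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0",
    "¿Qué regulaciones aplican a préstamos hipotecarios?", "es")
# import boto3
# boto3.client("bedrock-agent-runtime").retrieve_and_generate(**req)
```

The same filter structure can also constrain publication date, regulatory agency, or document type, matching the metadata requirements in the question.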
Option B increases operational overhead by requiring database tuning and scaling for vector workloads.
Option C does not support advanced metadata filtering, which is a key requirement.
Option D introduces unnecessary complexity and is not optimized for large-scale semantic document retrieval.
Therefore, Option A best meets the requirements for performance, scalability, multilingual support, and minimal management effort in an Amazon Bedrock-based RAG application.
