Practice Free AD0-E605 Exam Online Questions
After going live with an Adobe Real-Time CDP implementation, a data architect notices that a field defined in field group A should actually be part of field group B.
How can this issue be resolved?
- A. Remove the field from field group A and create it inside of field group B after creating all datasets for the target XDM schema
- B. Use the move schema endpoint in the schema registry API after deleting all datasets linked to the schema
- C. Remove the field from field group A and create it inside of field group B after deleting all datasets linked to the XDM schema
- D. Use the schema editor in the AEP UI to directly move the field to the new field group
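Field-group changes like the one described are made through the Schema Registry API with JSON Patch operations. The sketch below shows what the remove/add payloads could look like; the field-group ID, tenant namespace, and field path are hypothetical, and the remove only succeeds once every dataset linked to the schema has been deleted:

```python
import json

# Hypothetical tenant field-group IDs; real IDs come from the Schema Registry.
FIELD_GROUP_A = "_tenant.mixins.aaa111"
FIELD_GROUP_B = "_tenant.mixins.bbb222"

# JSON Patch body for PATCH .../schemaregistry/tenant/fieldgroups/{FIELD_GROUP_A}:
# drop the misplaced field from field group A.
remove_patch = [
    {"op": "remove",
     "path": "/definitions/customFields/properties/_tenant/properties/loyaltyTier"}
]

# JSON Patch body for PATCH .../schemaregistry/tenant/fieldgroups/{FIELD_GROUP_B}:
# recreate the same field definition in field group B.
add_patch = [
    {"op": "add",
     "path": "/definitions/customFields/properties/_tenant/properties/loyaltyTier",
     "value": {"type": "string", "title": "Loyalty Tier"}}
]

print(json.dumps(remove_patch, indent=2))
print(json.dumps(add_patch, indent=2))
```

There is no "move" operation across field groups, which is why the workflow is remove-then-add, and why option C's precondition (deleting linked datasets first) matters.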
A financial institution needs to ingest sensitive customer data, including credit card transactions, while ensuring real-time updates to customer profiles.
What considerations should they prioritize?
- A. Enable Edge ingestion with encryption for sensitive data.
- B. Rely solely on batch ingestion workflows.
- C. Disable real-time updates to ensure data consistency.
- D. Avoid using streaming connectors for sensitive data.
A data engineer has configured a new dataset in Adobe Experience Platform that is enabled for Profile. The dataset uses an XDM Individual Profile class-based schema with Email as its primary identity. The initial file loaded to create the profiles contains 100 profile fragments (records), but after ingestion only 75 profile fragments are created. No errors were observed during ingestion.
What could be the reason for this?
- A. 75 profile fragments ingested successfully and 25 profile fragments failed to ingest
- B. The identity graph collapsed 25 of the profile fragments into the existing 75 profile fragments
- C. 25 of the profile fragments had repeated Primary Identity values
- D. The dataset schema used was not compatible with the ingestion workflow
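The collapse from 100 records to 75 fragments can be illustrated with a toy upsert keyed on the primary identity; all data below is synthetic, and this only mimics (rather than reproduces) Profile's merge behavior:

```python
# 100 synthetic records, but only 75 distinct Email values (i % 75).
records = [{"Email": f"user{i % 75}@example.com", "points": i} for i in range(100)]

# Profile records sharing a primary identity upsert into one fragment
# instead of creating a new one, so repeats do not raise errors.
fragments = {}
for rec in records:
    fragments.setdefault(rec["Email"], {}).update(rec)

print(len(fragments))  # 75 unique identities -> 75 profile fragments
```

Since repeated primary-identity values merge silently, the record count shrinking without ingestion errors points to duplicates in the source file.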
A data engineer has loaded a single order event with a status of "order_placed" into the Real-Time Customer Profile. The event utilizes an Experience Event class-based schema/dataset where the email address is marked as the primary identity.
The event is as follows:
_id: 1234 (unique id of the event)
timestamp: 2023-10-06T12:00:00Z (timestamp the event occurred)
Email: [email protected] (primary identity — who the event belongs to)
status: order_placed (status of the event)
A few hours later, the data engineer sends an updated order event into the Real-Time Customer Profile indicating that the order status is now "order_shipped".
The event is as follows:
_id: 1234
timestamp: 2023-10-06T14:00:00Z
Email: [email protected]
status: order_shipped
When the data engineer looks up the profile, the new event with the "order_shipped" status does not appear on the profile, although the record is present in the Data Lake.
Why did this happen?
- A. The Real-Time Customer Profile had a processing error during ingestion.
- B. The Real-Time Customer Profile skipped the record as the _id was already existing.
- C. The Real-Time Customer Profile skipped the record as the timestamp was different.
- D. The Real-Time Customer Profile only accepts one event per timestamp per primary identity.
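The skip behavior can be sketched as a toy deduplication keyed on `_id`; this mimics, but is not, Profile's actual ingestion logic, and the email value is a placeholder:

```python
# Profile treats _id as the unique key of an ExperienceEvent: a second
# event reusing the same _id is skipped, regardless of its timestamp.
seen = {}

def ingest(event):
    if event["_id"] in seen:
        return "skipped"          # duplicate _id: record ignored by Profile
    seen[event["_id"]] = event
    return "ingested"

placed  = {"_id": "1234", "timestamp": "2023-10-06T12:00:00Z",
           "Email": "user@example.com", "status": "order_placed"}
shipped = {"_id": "1234", "timestamp": "2023-10-06T14:00:00Z",
           "Email": "user@example.com", "status": "order_shipped"}

print(ingest(placed))   # ingested
print(ingest(shipped))  # skipped -> profile still shows order_placed
```

An update to an event therefore needs a new `_id`; reusing the old one leaves the original event in place on the profile, even though the Data Lake still stores the raw record.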
What is the main goal of role-based access control (RBAC) in Adobe RT-CDP?
- A. To automate profile updates.
- B. To assign specific data access permissions based on user roles.
- C. To enforce deterministic identity resolution.
- D. To create static audience segments.
A financial services firm uses Adobe RT-CDP to unify customer data from multiple sources, including credit card activity, email interactions, and website visits.
What combination of features is critical for accurate profile assembly?
- A. Probabilistic identity resolution and edge profiles.
- B. Deterministic identity resolution and the Identity Graph.
- C. Static schemas and batch ingestion workflows.
- D. Real-time activation without identity stitching.
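Deterministic stitching can be sketched as a union-find over exact identifier matches. The identity values below are invented, and the real Identity Graph is a managed service rather than code like this; the sketch only shows why exact-match links let fragments from different sources assemble into one profile:

```python
# Union-find: each exact-match link asserts two identities are the same person.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def link(a, b):
    parent[find(a)] = find(b)

link("email:jane@example.com", "crm:9001")    # credit card activity record
link("crm:9001", "ecid:abc123")               # website visit with ECID cookie
link("email:jane@example.com", "crm:9001")    # email interaction record

# All identities sharing a root belong to one stitched profile.
cluster = {i for i in parent if find(i) == find("crm:9001")}
print(len(cluster))  # 3 identities stitched into one profile
```

Deterministic links are transitive and unambiguous, which is what makes the resulting profile assembly accurate; probabilistic approaches trade that certainty for broader (but fuzzier) matching.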
What is a key best practice for applying data governance in Adobe RT-CDP?
- A. Label all datasets with appropriate usage tags during ingestion.
- B. Avoid mapping data to identity fields.
- C. Disable profile updates in real-time workflows.
- D. Restrict all data to static batch exports.
A financial institution is using Adobe RT-CDP to activate data to an ad platform. The dataset includes sensitive information such as account balances.
What guardrails should they apply during activation?
- A. Use DULE policies to restrict sensitive data from being exported.
- B. Perform manual activation workflows.
- C. Limit the activation to edge-based triggers.
- D. Include all data fields in the activation for completeness.
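The effect of a DULE export restriction can be sketched as a label-based filter. The label names and the denied set below are illustrative, and real enforcement is performed by Adobe's Policy Service rather than application code:

```python
# Data usage labels attached to each field (names here are illustrative).
FIELD_LABELS = {
    "email": {"I1"},               # identity data
    "account_balance": {"S-FIN"},  # hypothetical sensitive-financial label
    "loyalty_tier": set(),         # unlabeled
}

# Policy: fields carrying these labels may not leave for the ad platform.
DENIED_FOR_AD_EXPORT = {"S-FIN"}

def activation_payload(profile):
    """Keep only fields whose labels do not intersect the denied set."""
    return {f: v for f, v in profile.items()
            if not FIELD_LABELS.get(f, set()) & DENIED_FOR_AD_EXPORT}

profile = {"email": "a@b.com", "account_balance": 1234.56, "loyalty_tier": "gold"}
print(activation_payload(profile))  # account_balance is withheld
```

This is why option D is the anti-pattern: activating "all fields for completeness" bypasses exactly the guardrail that usage labels and policies exist to enforce.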
A financial institution needs to store customer transaction data along with unstructured notes from customer service interactions.
Which feature of Adobe RT-CDP should they use?
- A. Schema-on-write with strict normalization.
- B. Hierarchical data modeling in NoSQL schemas.
- C. Deterministic identity resolution.
- D. Batch processing workflows.