Practice Free Plat-Arch-204 Exam Online Questions
Northern Trail Outfitters (NTO) wants to improve the quality of callouts from Salesforce to its REST APIs. For this purpose, NTO will require all API clients/consumers to adhere to RESTful API Modeling Language (RAML) specifications that include the field-level definition of every API request and response payload. The RAML specs serve as interface contracts that Apex REST API clients can rely on.
Which design specification should the integration architect include in the integration architecture to ensure that Apex REST API clients' unit tests confirm adherence to the RAML specs?
- A . Require the Apex REST API Clients to implement the HttpCalloutMock
- B . Call the HttpCalloutMock Implementation from the Apex REST API Clients.
- C . Implement HttpCalloutMock to return responses per RAML specification.
C
Explanation:
In a contract-first integration strategy using RAML (RESTful API Modeling Language), the specification defines the exact structure of requests and responses. Because Salesforce unit tests cannot perform actual network callouts, the platform requires developers to use the HttpCalloutMock interface to simulate responses.
To ensure that the integration code strictly adheres to the established RAML contract, the integration architect must mandate that the HttpCalloutMock implementation returns responses that mirror the RAML specification. This means the mock must include all required fields, correct data types, and the expected HTTP status codes (e.g., 200 OK, 201 Created) as defined in the contract. By doing this, the unit tests verify that the Apex client code can successfully parse and process the specific JSON or XML payloads defined in the RAML spec.
Option A and B are technically imprecise. The Apex client does not "implement" the mock; rather, the test class provides a separate mock implementation to the runtime via Test.setMock(). The value of the integration architecture lies in the content of that mock. If the mock is designed to return contract-compliant data, then any change to the RAML that breaks the Apex code’s ability to process it will be caught immediately during the testing phase. This "Mock-as-a-Contract" approach provides a safety net, ensuring that Salesforce remains compatible with external services even as those services evolve, provided the RAML is kept up to date.
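As a sketch of this "Mock-as-a-Contract" approach: the test below registers a mock whose payload mirrors a hypothetical RAML-defined `GET /orders/{id}` response. The field names, types, status code, and the `OrderApiClient` class are illustrative assumptions, not part of any actual contract.

```apex
@isTest
private class OrderApiClientTest {
    // Mock that mirrors a hypothetical RAML-defined GET /orders/{id} response.
    // Field names, data types, and the 200 status code are contract assumptions.
    private class OrderApiRamlMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setHeader('Content-Type', 'application/json');
            // Include every required field with the type the RAML spec defines.
            res.setBody('{"orderId":"O-1001","status":"CONFIRMED","total":199.95}');
            res.setStatusCode(200);
            return res;
        }
    }

    @isTest
    static void clientParsesContractCompliantPayload() {
        // The test class, not the client, supplies the mock via Test.setMock().
        Test.setMock(HttpCalloutMock.class, new OrderApiRamlMock());
        Test.startTest();
        // OrderApiClient is the hypothetical Apex REST client under test.
        Map<String, Object> result = OrderApiClient.getOrder('O-1001');
        Test.stopTest();
        // Passes only if the client can parse the contract-compliant payload.
        System.assertEquals('CONFIRMED', result.get('status'));
    }
}
```

If the RAML contract changes, updating the mock body to match will immediately surface any Apex parsing code that can no longer handle the new payload.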
A company has an external system that processes and tracks orders. Sales reps manage their leads and opportunity pipeline in Salesforce. The company decided to integrate Salesforce and the Order Management System (OMS) with minimal customization and code. Sales reps need to see order history in real-time. The legacy system is on-premise and connected to an ESB. There are 1,000 reps creating 15 orders each per shift, mostly with 20-30 line items.
How should an integration architect integrate the two systems based on these requirements?
- A . Use Salesforce standard object, REST API, and extract, transform, load (ETL).
- B . Use Salesforce custom object, custom REST API, and extract, transform, load (ETL).
- C . Use Salesforce external object and OData connector.
C
Explanation:
To meet the requirements of minimal customization, low developer resources, and real-time visibility without data replication, the architect should utilize Salesforce Connect with External Objects and an OData connector.
Salesforce External Objects allow the OMS data to be viewed within Salesforce as if it were stored natively, but the data remains in the on-premise system. This fulfills the requirement for sales reps to see "up-to-date information" because every time they view the record, Salesforce Connect fetches the latest data via the ESB’s OData endpoint. This Data Virtualization pattern is the most efficient choice for real-time history where users only need to view the data occasionally.
Options A and B involve Data Replication via ETL, which would store the order data inside Salesforce.
Given the volume (15,000 orders/shift with 25 line items each = 375,000 records daily), this would rapidly consume Salesforce data storage limits and require significant custom development for the ETL logic and REST APIs. Furthermore, ETL is typically batch-oriented and would not provide the true "real-time" view requested. By using an OData connector, the architect leverages a declarative, "no-code" solution that satisfies the timeline constraints and provides immediate access to order details and line items without the cost of data storage.
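Once the ESB's OData endpoint is registered as an external data source and its tables are synced, order data can be queried like any native object; the `__x` suffix marks an external object. The object and field names below are illustrative assumptions.

```apex
// Order__x is an assumed external object created by syncing the ESB's
// OData endpoint via Salesforce Connect; its fields keep the __c suffix.
Id accountId = [SELECT Id FROM Account LIMIT 1].Id;

List<Order__x> recentOrders = [
    SELECT OrderNumber__c, Status__c, OrderTotal__c
    FROM Order__x
    WHERE AccountId__c = :accountId   // assumed indirect lookup to Account
    LIMIT 50
];
// Each query is delegated live to the OMS over OData at view time;
// no order rows are replicated into Salesforce storage.
```

Because the query executes against the OMS on demand, the reps always see current data and none of the 375,000 daily records count against Salesforce data storage.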
Northern Trail Outfitters (NTO) has recently changed its Corporate Security Guidelines. The guidelines require that all cloud applications pass through a secure firewall before accessing on-premise resources. NTO is evaluating middleware solutions to integrate cloud applications with on-premise resources and services.
Which consideration should an integration architect evaluate before choosing a middleware solution?
- A . An API Gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.
- B . The middleware solution is able to interface directly with databases via an Open Database Connectivity (ODBC) connection string.
- C . The middleware solution enforces the OAuth security protocol.
A
Explanation:
In modern enterprise architecture, securing the boundary between cloud environments like Salesforce and on-premise data centers is a critical responsibility of the Integration Architect. When Corporate Security Guidelines mandate that all traffic must pass through a secure firewall, the architecture must support a Demilitarized Zone (DMZ) or "Perimeter Network" strategy.
An API Gateway or a specialized middleware connector acts as the "front door" for these on-premise resources. The architect must evaluate whether the chosen middleware solution supports a distributed deployment model where the gateway component can reside within the DMZ. This setup allows the organization to terminate external (cloud) connections in a hardened environment before the traffic is inspected and proxied to the internal, trusted network.
While supporting OAuth (Option C) is essential for modern authentication, it does not satisfy the specific network-level firewall requirement described. Similarly, ODBC connections (Option B) are low-level database protocols that usually operate deep within the internal network and would typically be considered a security risk if exposed directly to a firewall.
By ensuring the middleware has an architecturally compatible gateway for the DMZ, the architect provides a solution that allows for deep packet inspection, IP whitelisting, and rate limiting at the edge of the corporate network. This approach aligns with the "Defense in Depth" principle, ensuring that Salesforce can securely communicate with legacy systems (like SAP or internal databases) without exposing those systems directly to the public internet, thereby satisfying the new Corporate Security Guidelines.
Service agents at Northern Trail Outfitters use Salesforce to manage cases and B2C Commerce for ordering.
Which integration solution should an architect recommend in order for the service agents
to see order history from a business-to-consumer (B2C) Commerce system?
- A . Salesforce B2C Commerce to Service Cloud Connector
- B . REST API offered by Commerce Platforms
- C . MuleSoft Anypoint Platform
A
Explanation:
For a unified service experience between Salesforce Service Cloud and B2C Commerce (formerly Demandware), Salesforce provides a purpose-built, cross-cloud solution known as the Salesforce B2C Commerce to Service Cloud Connector.
This connector is part of the Salesforce B2C Solution Architecture and is designed to provide "out-of-the-box" synchronization of data between the two platforms. By implementing this connector, service agents gain several high-value capabilities within the Service Console:
Customer Profile Sync: Ensures that customer data (name, address, etc.) is consistent across both systems.
Order History View: Allows agents to see real-time order data from the Commerce system directly within the Case record page.
Order on Behalf Of: Enables agents to place orders for customers without leaving Salesforce.
While you could build a custom integration using the Commerce REST API (Option B) or MuleSoft (Option C), these would require significant development, testing, and maintenance effort. The Salesforce B2C Connector is the recommended "path of least resistance" because it leverages Salesforce’s own pre-built logic for cross-cloud interoperability, reducing technical debt and time-to-value. For an architect, choosing the standard connector ensures better supportability and future-proofing as Salesforce continues to enhance its multi-cloud features.
Universal Containers (UC) support agents would like to open bank accounts on the spot. During the process, agents execute credit checks through external agencies. At any given time, up to 30 concurrent reps will be using the service.
Which error handling mechanisms should be built to display an error to the agent when the credit verification process has failed?
- A . Handle Integration errors in the middleware in case the verification process is down, then the middleware should retry processing the request multiple times.
- B . In case the verification process is down, use fire and forget mechanism instead of Request and Reply to allow the agent to get the response back when the service is back online.
- C . Handle the error in the synchronous callout and display a message to the agent.
A
Explanation:
In a synchronous Request-Reply scenario where a bank agent is waiting "on the spot" for a credit check, the error-handling strategy must balance immediate feedback with system resilience.
Option A is the recommended architectural approach for enterprise resiliency. By placing a Middleware layer (like MuleSoft) between Salesforce and the credit agencies, the architect can implement sophisticated error-handling patterns that are invisible to the user but critical for success. If a credit agency’s API is momentarily unreachable, the middleware can perform automated retries (e.g., three attempts with 500ms intervals). If the retries still fail, the middleware sends a clean, structured error response back to Salesforce.
Option B (Fire and Forget) is fundamentally unsuitable for this use case because the agent needs the result immediately to open the account; they cannot wait for a callback that might arrive hours later.
Option C, handling the failure solely in the synchronous callout, does display a message, but it offers no resilience: without middleware retries, every transient network blip is immediately surfaced to the agent as a failed credit check. By delegating the retry logic to the middleware, the architect protects Salesforce’s concurrent request limits (since the agent only occupies a thread for the duration of the final response) and ensures that transient network issues do not result in a "failed" bank account application for the customer.
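On the Salesforce side, the client still needs to surface the middleware's final verdict to the agent. A minimal sketch, assuming a Named Credential called `Middleware_Credit_API` and a simple JSON request shape (both illustrative):

```apex
public with sharing class CreditCheckService {
    public class CreditCheckException extends Exception {}

    // Calls the middleware, which retries the credit agency internally.
    // Any error reaching this method means the retries were exhausted.
    public static String runCreditCheck(String applicantId) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Middleware_Credit_API/check'); // assumed Named Credential
        req.setMethod('POST');
        req.setBody(JSON.serialize(new Map<String, String>{ 'applicantId' => applicantId }));
        try {
            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() != 200) {
                // Middleware returned a structured failure; show a clean message.
                throw new CreditCheckException('Credit verification failed: ' + res.getStatus());
            }
            return res.getBody();
        } catch (CalloutException e) {
            throw new CreditCheckException('Credit service unreachable. Please retry.');
        }
    }
}
```

The UI layer (Aura/LWC or Visualforce) catches `CreditCheckException` and renders the message, so the agent knows immediately whether to retry or proceed manually.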
Northern Trail Outfitters (NTO) has an affiliate company that would like immediate notifications of changes to opportunities in the NTO Salesforce instance. The affiliate company has a CometD client available.
Which solution is recommended in order to meet the requirement?
- A . Implement a polling mechanism in the client that calls the SOAP API getUpdated method to get the ID values of each updated record.
- B . Create a PushTopic update event on the Opportunity object to allow the subscriber to react to the streaming API.
- C . Create a connected app in the affiliate org and select “Accept CometD API Requests”.
B
Explanation:
To provide "immediate notifications" to a client that already supports CometD, the architect must utilize Salesforce’s Streaming API. PushTopic Events are the classic Streaming API implementation for broadcasting changes to specific records.
A PushTopic is a definition that includes a SOQL query and a set of trigger events (Create, Update, Delete). When an Opportunity in NTO’s instance is updated and matches the query, Salesforce automatically pushes a notification to the topic’s channel. The affiliate’s CometD client, having subscribed to this channel, receives the data payload instantly.
Option A (Polling) is the architectural opposite of "immediate notifications". Polling is inefficient, consumes API limits, and introduces latency between the actual record change and the client’s discovery of that change.
Option C is incorrect as there is no "Accept CometD" setting in a Connected App; while a Connected App is used for OAuth authentication, the PushTopic is the mechanism that actually defines and delivers the stream. By using PushTopic, NTO provides a decoupled, event-driven architecture that fulfills the requirement for real-time visibility while minimizing the technical burden on the affiliate’s infrastructure.
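Defining the PushTopic is a one-time setup, typically run as anonymous Apex. The topic name, field list, and API version below are illustrative; the affiliate's CometD client would then subscribe to `/topic/OpportunityUpdates`.

```apex
// Create a PushTopic so a CometD client can subscribe to Opportunity changes.
PushTopic pt = new PushTopic();
pt.Name = 'OpportunityUpdates';                 // channel: /topic/OpportunityUpdates
pt.Query = 'SELECT Id, Name, StageName, Amount FROM Opportunity';
pt.ApiVersion = 58.0;                           // illustrative API version
pt.NotifyForOperationCreate = true;
pt.NotifyForOperationUpdate = true;
pt.NotifyForFields = 'Referenced';              // fire when queried fields change
insert pt;
```

Records matching the SOQL query that are created or updated now generate events on the channel, with the queried fields as the payload.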
Northern Trail Outfitters needs to use Shield Platform Encryption to encrypt social security numbers in order to meet a business requirement.
Which action should an integration architect take prior to the implementation of Shield Platform Encryption?
- A . Encrypt all the data so that it is secure.
- B . Use Shield Platform Encryption as a user authentication or authorization tool.
- C . Review Shield Platform Encryption configurations.
C
Explanation:
Implementing Shield Platform Encryption is a significant architectural change that requires careful planning before activation. The architect’s first priority must be to Review Shield Platform Encryption configurations and understand the platform’s functional limitations.
Encryption at rest affects how data interacts with other platform features. For example, encrypting a field can impact the ability to use that field in SOQL WHERE clauses, report filters, list views, or as a
unique/external ID. Before encrypting Social Security Numbers, the architect must audit all existing integrations, Apex code, and reports that reference that field to ensure they will still function correctly.
Option A is incorrect because unnecessarily encrypting all data can negatively impact system performance and break standard functionality. Encryption should be applied selectively to sensitive fields based on a clear data classification policy.
Option B is factually wrong; Shield is a data protection tool, not an authentication or authorization mechanism like OAuth or SSO. By reviewing the configurations first, the architect can identify potential "blockers" (such as a field being used in a formula or a criteria-based sharing rule) and address them before the encryption keys are generated and applied.
A company uses Customer Community for course registration. The payment gateway takes more than 30 seconds to process transactions. Students want results in real time to retry if errors occur.
What is the recommended integration approach?
- A . Use Request and Reply to make an API call to the payment gateway.
- B . Use Platform Events to process payment to the payment gateway.
- C . Use Continuation to process payment to the payment gateway
C
Explanation:
Standard synchronous Apex callouts have a timeout limit, and more importantly, Salesforce limits the number of long-running requests (those lasting longer than 5 seconds) that can execute concurrently. If a payment gateway consistently takes 30 seconds, a few simultaneous users could easily exhaust the org’s concurrent long-running request limit, causing subsequent requests to be rejected with errors for everyone in the community.
The Continuation pattern (Option C) is designed specifically for this "long-wait" scenario. It allows the Apex request to be suspended while waiting for the external service to respond, freeing up the Salesforce worker thread to handle other users. Once the gateway responds, the suspended process resumes and returns the result to the student’s browser. This provides the "real-time" experience required for the student to retry immediately without the risk of bringing down the entire community due to thread exhaustion.
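A minimal Continuation sketch for an Aura/LWC controller follows; the `Payment_Gateway` Named Credential and the request shape are assumptions. The documented callback signature `(List<String> labels, Object state)` lets the framework hand the saved request label back when the gateway responds.

```apex
public with sharing class PaymentController {

    // Suspends the server thread while waiting up to 60s for the gateway.
    @AuraEnabled(continuation=true)
    public static Object startPayment(String payload) {
        Continuation con = new Continuation(60);     // timeout in seconds
        con.continuationMethod = 'processResponse';  // callback below
        HttpRequest req = new HttpRequest();
        req.setMethod('POST');
        req.setEndpoint('callout:Payment_Gateway/charge'); // assumed Named Credential
        req.setBody(payload);
        con.state = con.addHttpRequest(req);         // save the request label
        return con;                                  // thread is released here
    }

    // Invoked when the gateway responds; the student's browser gets the result.
    @AuraEnabled
    public static Object processResponse(List<String> labels, Object state) {
        HttpResponse response = Continuation.getResponse((String) state);
        // Return the gateway's verdict so the student can retry on failure.
        return response.getBody();
    }
}
```

Because the worker thread is released between `startPayment` and `processResponse`, thirty-second gateway responses no longer count against the concurrent long-running request limit.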
A Salesforce customer is planning to roll out Salesforce for all of their sales and service staff. Senior management has requested that monitoring be in place for Operations to notify any degradation in Salesforce performance.
How should an Integration consultant implement monitoring?
- A . Use Salesforce API Limits to capture current API usage and configure alerts for monitoring.
- B . Identify critical business processes and establish automation to monitor performance against established benchmarks.
- C . Use APIEVENT to track all user initiated API calls through SOAP, REST, or Bulk APIs.
B
Explanation:
Effective operational monitoring focuses on the end-user experience and business outcomes rather than just raw technical metrics. An Integration consultant should identify critical business processes (e.g., "Lead Conversion" or "Order Processing") and establish benchmarks to detect performance degradation.
Monitoring purely technical limits (Option A) or individual API events (Option C) provides "noise" without context. For example, if API usage is high but the system is responding quickly, there is no degradation. However, if a critical process that normally takes 2 seconds starts taking 10 seconds, that is a clear indicator of a performance issue that impacts the business.
The consultant should use tools like Salesforce Event Monitoring or external APM (Application Performance Management) tools to track the execution time of these key transactions. By setting alerts when performance deviates from established benchmarks, Operations can be proactively notified before users begin to lose productivity or abandon the system. This holistic approach ensures that monitoring is aligned with business value and provides actionable insights for troubleshooting bottlenecks in code, automation, or integrations.
A developer is researching different implementations of the Streaming API (PushTopic, Change Data Capture, Generic Streaming, Platform Events) and asks for guidance.
What should the architect consider when making the recommendation?
- A . PushTopic Events can define a custom payload.
- B . Change Data Capture does not have record access support.
- C . Change Data Capture can be published from Apex.
B
Explanation:
When recommending a streaming solution, the architect must evaluate how each event type handles Record-Level Security (Sharing). Change Data Capture (CDC) is unique because it ignores sharing settings for record change events. This means all records of an enabled object generate change events, regardless of whether a particular user has access to those records in the Salesforce UI.
While CDC disregards record-level sharing, it does respect Field-Level Security (FLS). Delivered events only include the fields that the subscribing user is permitted to access. This is a critical consideration for integrations: if a system requires a "Master" view of all record changes across the enterprise (such as a data warehouse sync), CDC is the appropriate tool because it ensures no data is missed due to user-specific sharing constraints.
In contrast, PushTopic Events (Option A) provide a fixed payload based on a SOQL query and do not allow a "custom" payload in the same sense as Platform Events. Platform Events (Option C) are published from Apex or external APIs, but CDC is a platform-native feature that broadcasts automatically when a database record is modified, rather than being "published from Apex" by a developer.
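For completeness, this is how an integration typically consumes CDC events inside Salesforce: a change event trigger on the platform-generated `OpportunityChangeEvent` object. The trigger body here is a minimal sketch.

```apex
// Subscribe to Change Data Capture events for Opportunity from Apex.
// CDC events are published by the platform on record changes, not by developers.
trigger OpportunityChangeTrigger on OpportunityChangeEvent (after insert) {
    for (OpportunityChangeEvent event : Trigger.new) {
        // The header carries metadata: change type, origin, affected record Ids.
        EventBus.ChangeEventHeader header = event.ChangeEventHeader;
        System.debug('Change: ' + header.changeType
            + ' on records ' + header.recordIds);
    }
}
```

Note that this trigger fires for changes to all Opportunity records, regardless of sharing, which is exactly the "Master view" behavior the explanation above describes.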
