Practice Free Plat-Arch-204 Exam Online Questions
A customer is evaluating the Platform Events solution and would like help comparing and contrasting it with Outbound Messaging for real-time/near-real-time needs. They expect 3,000 customers to view messages in Salesforce.
What should be evaluated and highlighted when deciding between the solutions?
- A . In both Platform Events and Outbound Messaging, the event messages are retried and delivered in sequence, and only once. Salesforce ensures there is no duplicate message delivery.
- B . Message sequence is possible in Outbound Messaging, but not guaranteed with Platform Events. Both offer very high reliability. Fault handling and recovery are fully handled by Salesforce.
- C . Both Platform Events and Outbound Messaging are highly scalable. However, unlike Outbound Messaging, only Platform Events have Event Delivery and Event Publishing limits to be considered.
C
Explanation:
When comparing Platform Events and Outbound Messaging for a near-real-time architecture, a Salesforce Platform Integration Architect must evaluate fundamental differences in their delivery models and governance. While both provide declarative, asynchronous "Fire-and-Forget" capabilities, their technical constraints differ significantly, particularly regarding scalability and platform limits.
The key architectural highlight in this scenario is that Platform Events operate on a specialized event bus with specific Event Publishing and Event Delivery limits. Unlike Outbound Messaging, which is governed by more general daily outbound call limits (often tied to user licenses), Platform Events have a dedicated allocation for the number of events that can be published per hour and delivered in a 24-hour period to external clients via the Pub/Sub API or CometD. For example, the number of concurrent subscribers to a Platform Event channel is typically capped at 2,000 for standard configurations. Since the customer expects 3,000 customers to view these messages, this limit is a critical evaluation point; the architecture would need to account for this gap, perhaps by using middleware to fan out messages to the larger audience.
In contrast, Outbound Messaging does not have an "Event Delivery" limit in the same sense. It is a point-to-point SOAP-based push mechanism where Salesforce manages retries for up to 24 hours if the receiving endpoint is unavailable. However, it is less flexible for multi-consumer scenarios because it requires a separate configuration for every unique destination.
Regarding the other options: Option A is incorrect because neither system strictly guarantees "exactly-once" delivery without the possibility of duplicates; in fact, Outbound Messaging may deliver a message more than once if it doesn’t receive a timely acknowledgment.
Option B is incorrect because Platform Events do not have built-in "fault recovery" handled by Salesforce in the same way as Outbound Messaging’s automatic retry queue; with Platform Events, it is the subscriber’s responsibility to use a Replay ID to retrieve missed events within the 72-hour retention window. Therefore, highlighting the unique delivery and publishing limits is the most vital step for the architect.
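As a concrete reference point for the Event Publishing limit, publishing a platform event from an external client is simply a REST create call on the event's sObject endpoint, and each successful publish draws down the hourly allocation. The sketch below is illustrative only: Order_Update__e, its Status__c field, the My Domain URL, and the SF_ACCESS_TOKEN environment variable are all assumed names, not part of the scenario.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PlatformEventPublisher {
    public static void main(String[] args) throws Exception {
        // Hypothetical values: replace with your My Domain URL and a valid OAuth access token.
        String instanceUrl = "https://yourdomain.my.salesforce.com";
        String accessToken = System.getenv("SF_ACCESS_TOKEN");

        // Publishing a platform event is a create call on the event's sObject endpoint.
        // Order_Update__e and Status__c are assumed names used only for this sketch.
        String body = "{\"Status__c\":\"Shipped\"}";

        HttpRequest publish = HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v58.0/sobjects/Order_Update__e/"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Each successful publish counts against the hourly Event Publishing allocation,
        // and each delivery to a subscriber counts against the Event Delivery allocation.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(publish, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```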
Northern Trail Outfitters uses a custom Java application to display code coverage and test results for all of its enterprise applications and plans to include Salesforce as well.
Which Salesforce API should an integration architect use to meet the requirement?
- A . Analytics REST API
- B . Metadata API
- C . Tooling API
C
Explanation:
For developer-centric tools that need to access fine-grained technical data like code coverage and test results, the Tooling API is the correct architectural choice.
While the Metadata API (Option B) is used to deploy or retrieve code, it does not provide real-time query access to the underlying metrics of a test run. The Tooling API, however, exposes specialized objects such as ApexCodeCoverage, ApexCodeCoverageAggregate, and ApexTestResult. These objects allow the Java application to query exactly which lines of code were executed during a test and the overall percentage of coverage for the organization.
The Analytics REST API (Option A) is designed for querying and interacting with Einstein Analytics (CRM Analytics) datasets and dashboards, which is irrelevant to software development lifecycle (SDLC) metrics. By using the Tooling API, the Java application can perform RESTful queries to gather comprehensive data on test successes, failures, and coverage gaps. This allows NTO to integrate Salesforce into its existing enterprise-wide quality dashboard, ensuring a unified view of code health across all platforms.
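For example, the Java application could pull org-wide coverage with a single Tooling API query. A minimal sketch, assuming a My Domain URL and an OAuth access token obtained separately (SF_ACCESS_TOKEN is a placeholder; the object and field names are standard Tooling API ones):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CoverageFetcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical values: supply your org's My Domain URL and an OAuth access token.
        String instanceUrl = "https://yourdomain.my.salesforce.com";
        String accessToken = System.getenv("SF_ACCESS_TOKEN");

        // Org-wide, per-class coverage rolled up after the latest test runs.
        String soql = "SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered "
                    + "FROM ApexCodeCoverageAggregate";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v58.0/tooling/query/?q="
                        + URLEncoder.encode(soql, StandardCharsets.UTF_8)))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        // The JSON response holds one record per class or trigger; the dashboard can
        // derive percentage coverage from the covered/uncovered line counts.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```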
Northern Trail Outfitters leverages Sales Cloud. When an opportunity's stage changes to “Closed/Won” and there are products attached, the details should be passed to the order management system (OMS) for fulfillment operations. The callout from Salesforce to the OMS should be synchronous.
What should an integration architect do to satisfy these requirements?
- A . Write a trigger that invokes an Apex proxy class to make a REST callout to the OMS.
- B . Develop a batch Apex job that aggregates closed opportunities and makes a REST callout to the OMS hourly.
- C . Build a Lightning component that makes a synchronous Apex REST callout to the OMS when a button is clicked.
C
Explanation:
A synchronous requirement in Salesforce implies a Request-Reply pattern where the user waits for a response. To implement this correctly while adhering to platform constraints, the architect must initiate the callout from the User Interface (UI) layer.
Option C is the correct architectural choice. By using a Lightning component and a button (or a Quick Action), the callout is initiated directly from the user’s session. The component calls an Apex controller, which performs the synchronous REST callout to the OMS. This allows the user to receive immediate feedback, such as a "Success" message or an "Error" notification from the OMS, directly on the Opportunity page.
Option A is incorrect because Apex Triggers are prohibited from making synchronous callouts. If a trigger initiates a callout, it must be asynchronous (using @future or Queueable Apex), which violates the business requirement for synchronicity. Making a synchronous call from a trigger would block the database transaction and could lead to "Uncommitted Work Pending" errors or system timeouts.
Option B (Batch Apex) is inherently asynchronous and delayed; an hourly job would not provide the real-time, synchronous feedback required during the "Closed/Won" transition. Therefore, moving the integration logic to a UI-driven button is the only way to satisfy the synchronous requirement while staying within Salesforce’s execution and transaction limits.
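For context only, the receiving side of this Request-Reply exchange simply has to answer within the callout timeout for the user to see immediate feedback. The stub below is purely illustrative, not part of the scenario: the /fulfillment path and port are assumptions, and it uses the JDK's built-in HTTP server to stand in for the OMS endpoint the Apex controller's synchronous callout would block on.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class OmsFulfillmentStub {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Hypothetical /fulfillment resource that the Apex controller's REST callout targets.
        server.createContext("/fulfillment", exchange -> {
            byte[] payload = exchange.getRequestBody().readAllBytes(); // opportunity + product details
            System.out.println("Received " + payload.length + " bytes from Salesforce");

            // Respond immediately so the blocked Apex callout (and the waiting user) gets a
            // confirmation before the Lightning component renders success or error.
            byte[] reply = "{\"status\":\"ACCEPTED\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(reply);
            }
        });

        server.start();
        System.out.println("OMS stub listening on http://localhost:8080/fulfillment");
    }
}
```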
Universal Containers (UC) works with third-party agencies on initial banner design concepts. The design files (2.5 GB) are stored in an on-premise file store. UC wants to allow the agencies to view these files in the community.
Which solution should an integration architect recommend?
- A . Create a Lightning component with a Request and Reply integration pattern to allow the community users to download the design files.
- B . Use Salesforce Files to link the files to Salesforce records and display the record and the files in the community.
- C . Create a custom object to store the file location URL; when a community user clicks on the file URL, redirect the user to the on-premise system file location.
C
Explanation:
When dealing with extremely large files, such as the 2.5 GB design files mentioned, an architect must consider the platform’s file size limits and storage costs. Salesforce Files have a maximum upload size of 2 GB through most interfaces, making Option B technically unfeasible for a 2.5 GB file.
Furthermore, storing numerous large files natively in Salesforce would lead to excessive storage consumption and costs.
The most efficient and cost-effective approach is Data Virtualization or Redirection. By creating a custom object to store the file location URL (Option C), the actual file remains in the performant on-premise file store. When the community user needs to access the design, they are redirected to the source system, which handles the massive data transfer. This fulfills the requirement to "view" the files without the overhead of moving gigabytes of data through the Salesforce infrastructure.
Option A is less ideal because a 2.5 GB download over a standard Request-Reply pattern would likely lead to timeouts and a poor user experience.
NTO is merging two orgs but needs the retiring org available for lead management (connected to web forms). New leads must be in the new instance within 30 minutes.
Which approach requires the least amount of development effort?
- A . Use the Tooling API with Process Builder to insert leads in real time.
- B . Use the Composite REST API to aggregate multiple leads in a single call.
- C . Configure named credentials in the source org.
A customer imports data from an external system into Salesforce using Bulk API. These jobs have batch sizes of 2,000 and are run in parallel mode. The batches fail frequently with the error “Max CPU time exceeded”. A smaller batch size will fix this error.
What should be considered when using a smaller batch size?
- A . Smaller batch size may increase time required to execute bulk jobs.
- B . Smaller batch size can trigger “Too many concurrent batches” error.
- C . Smaller batch size may exceed the concurrent API request limits.
A
Explanation:
The Bulk API is designed to process massive datasets by breaking them into smaller batches that Salesforce processes asynchronously. When a batch fails with the “Max CPU time exceeded” error, it typically indicates that the work triggered for the records in that batch (such as Apex triggers, Flows, or complex sharing calculations) exceeds the Apex CPU time limit for a single transaction.
Reducing the batch size is the standard architectural remedy because it reduces the number of records processed in a single transaction, thereby lowering the total CPU time consumed by those records. However, the architect must consider the impact on the overall throughput and execution time.
When batch sizes are smaller, the total number of batches required to process the same dataset increases. For instance, moving from a batch size of 2,000 to 200 for a 1-million-record dataset increases the number of batches from 500 to 5,000. Each batch carries its own overhead for initialization and finalization within the Salesforce platform. Consequently, while the individual batches are more likely to succeed, the total time required to complete the entire job will increase.
The architect should also be aware of the daily limit on the total number of batches allowed (typically 15,000 in a 24-hour period). While Option C mentions API request limits, the Bulk API is governed more strictly by its own batch limits.
Option B is less likely because "parallel mode" naturally manages concurrency. Thus, the primary trade-off the architect must present to the business is a gain in reliability (successful processing) at the cost of total duration (increased sync time).
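The batch-count arithmetic above is easy to verify on the client side, since with Bulk API (1.0) the uploading system decides how many records go into each batch it submits. A small illustrative sketch (the numbers mirror the 1-million-record example; nothing here calls Salesforce):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPlanner {
    // Split the rows to be loaded into batches of the configured size; with Bulk API 1.0,
    // the client controls batch size by how many records it puts into each uploaded batch.
    static <T> List<List<T>> chunk(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        int totalRecords = 1_000_000;

        // Same dataset, two batch sizes: fewer CPU-heavy records per transaction, but many
        // more batches, each with its own queuing overhead and each counting toward the
        // rolling 24-hour batch allocation.
        for (int batchSize : new int[] {2_000, 200}) {
            int batches = (int) Math.ceil((double) totalRecords / batchSize);
            System.out.printf("batch size %,d -> %,d batches%n", batchSize, batches);
        }
    }
}
```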
A company needs to send data from Salesforce to a homegrown system behind a corporate firewall.
The data is pushed one way, doesn’t need to be real-time, and averages 2 million records per day.
What should an integration architect consider?
- A . Due to high volume of records, number of concurrent requests can hit the limit for the REST API.
- B . Due to high volume of records, the external system will need to use a BULK API Rest endpoint to connect to Salesforce.
- C . Due to high volume of records, a third-party integration tool is required to stage records off platform.
C
Explanation:
With a volume of 2 million records per day, this integration exceeds the practical limits of standard near-real-time patterns like Outbound Messaging or synchronous Apex Callouts. Sending 2 million individual REST requests would likely exhaust the daily API limit and could cause significant performance degradation in Salesforce due to transaction overhead.
An Integration Architect must recommend an Asynchronous Batch Data Synchronization pattern, typically facilitated by a third-party ETL/Middleware tool (e.g., MuleSoft, Informatica, or Boomi). Staging the records off-platform is essential for several reasons:
Throttling: The homegrown system behind a firewall may not be able to handle a massive, sudden burst of 2 million records. A middleware tool can ingest the data from Salesforce and "drip-feed" it into the target system at an acceptable rate.
Error Handling and Retries: Middleware provides sophisticated persistence and "Dead Letter Queues" to ensure that if the homegrown system goes offline, no data is lost.
API Efficiency: The middleware can use the Salesforce Bulk API 2.0 to extract the data in large chunks, which is significantly more efficient than individual REST calls and consumes far fewer API limits.
Option A is a valid concern but is a symptom of the wrong choice of tool (REST).
Option B describes an inbound integration to Salesforce, whereas the requirement is outbound. By utilizing a third-party tool to stage and manage the 2 million record flow, the architect ensures that the integration is scalable, respects the corporate firewall constraints (via a secure agent or VPN), and maintains the performance of the Salesforce production environment.
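To make the API-efficiency point concrete, a daily extract can be expressed as a single Bulk API 2.0 query job that the middleware creates, polls, and downloads before drip-feeding the on-premise system. A minimal sketch under stated assumptions: the Account query, the My Domain URL, and the SF_ACCESS_TOKEN variable are illustrative placeholders, and the polling/download steps are only described in comments.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BulkExtractJob {
    public static void main(String[] args) throws Exception {
        // Hypothetical values: the middleware would hold these in its own config/secrets store.
        String instanceUrl = "https://yourdomain.my.salesforce.com";
        String accessToken = System.getenv("SF_ACCESS_TOKEN");

        // One Bulk API 2.0 query job replaces millions of individual REST calls.
        String body = "{\"operation\":\"query\","
                + "\"query\":\"SELECT Id, Name, LastModifiedDate FROM Account "
                + "WHERE LastModifiedDate = YESTERDAY\"}";

        HttpRequest createJob = HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v58.0/jobs/query"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(createJob, HttpResponse.BodyHandlers.ofString());

        // The response contains the job id and state; the middleware then polls
        // GET /jobs/query/{id} until the state is JobComplete, downloads the CSV from
        // /jobs/query/{id}/results, and pushes it to the on-premise system at a rate
        // the target can absorb.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```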
A customer is migrating from an old legacy system to Salesforce. As part of the modernization effort, the customer would like to integrate all existing systems that currently work with its legacy application with Salesforce.
Which constraint/pain-point should an integration architect consider when choosing the integration pattern/mechanism?
- A . System types: APIs, File systems, Email
- B . Reporting and usability requirements
- C . Multi-language and multi-currency requirement
A
Explanation:
When migrating from a legacy landscape to a modern platform like Salesforce, the most immediate technical hurdle is the diversity of system types and communication protocols used by the existing systems.
In a legacy environment, integrations are often not standardized. An architect may encounter systems that communicate via modern REST/SOAP APIs, but they will also likely find older systems that rely on Flat File exchanges (FTP/SFTP), Email-based triggers, or direct Database connections. These "System Types" are a fundamental constraint because they dictate the choice of integration middleware. For example, Salesforce cannot natively poll a file system or read an on-premise database; therefore, an architect must identify these constraints to justify the need for an ETL or ESB tool that can bridge these legacy protocols with Salesforce’s API-centric architecture.
While reporting (Option B) and multi-currency (Option C) are important functional requirements for the Salesforce implementation, they do not dictate the integration pattern (e.g., Request-Reply vs. Batch) as much as the technical interface of the source/target systems does. By evaluating the APIs, file systems, and email capabilities of the legacy landscape first, the architect ensures that the chosen integration mechanism (whether the Streaming API, Bulk API, or middleware orchestration) is technically capable of actually communicating with these legacy systems.
Northern Trail Outfitters requires an integration to be set up between one of its Salesforce orgs and an External Data Source using Salesforce Connect. The External Data Source supports Open Data Protocol (OData).
Which configuration should an integration architect recommend be implemented in order to secure requests coming from Salesforce?
- A . Configure a certificate for OData connection.
- B . Configure Special Compatibility for OData connection.
- C . Configure Identity Type for OData connection.
C
Explanation:
Salesforce Connect is a powerful tool for data virtualization, allowing users to view and manage data in external systems (via OData) as if it were stored natively in Salesforce. However, a critical security decision during the setup of an External Data Source is determining the Identity Type.
The Identity Type determines how the external system authenticates the Salesforce user. There are two primary options:
Named Principal: Every Salesforce user accesses the external system using the same set of credentials. This is easier to maintain but provides less granular security tracking in the target system.
Per User: Each individual Salesforce user must provide their own credentials for the external system. This ensures that the data visible in Salesforce respects the user’s specific permissions in the external source.
Configuring the Identity Type is the fundamental way an architect secures OData requests because it defines the authentication boundary between the platforms. While certificates (Option A) can be used for transport layer security, the "Identity Type" configuration is the specific Salesforce Connect setting that governs how a session is authorized.
Option B (Special Compatibility) is a technical setting used to handle non-standard OData implementations and does not directly relate to securing the request. By recommending the correct Identity Type, the architect ensures that the integration adheres to the "Principle of Least Privilege," ensuring that users only see the external data they are authorized to access.
