Practice Free PT-AM-CPE Exam Online Questions
For Proof of Possession OAuth2 tokens, in addition to the access token, what must be presented to the authorization server?
- A . Nonce
- B . Client JSON Web Key (JWK)
- C . State
- D . Client private certificate
D
Explanation:
Proof of Possession (PoP) tokens, specifically Certificate-Bound Access Tokens as defined in RFC 8705 and supported by PingAM 8.0.2, are designed to prevent token misuse by binding the access token to a specific client’s cryptographic material.
According to the PingAM documentation on "Certificate-Bound Proof-of-Possession," when an OAuth2 client requests a token, PingAM retrieves the client’s public key (either from a provided certificate or a JWK) and embeds a thumbprint (the cnf claim) of that material into the issued token. When the client subsequently presents this token to the Resource Server (or the Authorization Server’s introspection endpoint), it must also provide "Proof" that it possesses the private key corresponding to that thumbprint.
In the Mutual TLS (mTLS) approach, this proof is provided by the Client private certificate presented during the TLS handshake. The server verifies that the certificate used to establish the secure connection matches the one bound to the token. Without presenting the certificate (Option D), the token is considered "unbound" or invalid, even if the token itself is otherwise well-formed. This mechanism effectively "pins" the token to the client, ensuring that a stolen token cannot be used by any entity that does not possess the matching private key. Nonce and State (Options A and C) are used during the initial authorization request for different security purposes (replay protection and CSRF protection, respectively), and while a JWK (Option B) can be used to define the public key, the proof actually presented during an mTLS transaction is the certificate itself.
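As a concrete illustration of the binding described above, the cnf claim's x5t#S256 value defined in RFC 8705 is the base64url-encoded SHA-256 digest of the client certificate's DER encoding. The sketch below uses illustrative stand-in bytes rather than a real certificate, and shows both the thumbprint computation and the matching check a resource server would perform:

```python
import base64
import hashlib

def x5t_s256(cert_der: bytes) -> str:
    """RFC 8705 certificate thumbprint for the token's cnf claim:
    base64url(SHA-256(DER)), with base64 padding stripped."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def token_is_bound_to(cert_der: bytes, cnf_claim: dict) -> bool:
    """Resource-server check: the certificate presented during the
    mTLS handshake must match the thumbprint bound into the token."""
    return cnf_claim.get("x5t#S256") == x5t_s256(cert_der)

# Illustrative only: stand-in bytes, not a real DER-encoded certificate.
cert = b"illustrative-der-bytes"
cnf = {"x5t#S256": x5t_s256(cert)}
print(token_is_bound_to(cert, cnf))              # True: matching key material
print(token_is_bound_to(b"other-client", cnf))   # False: stolen token rejected
```

The point of the check is that the thumbprint travels inside the (signed) token, so an attacker who steals the token but not the private key cannot complete an mTLS handshake that produces a matching certificate.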
What should be configured in PingAM if you are using an LDAP directory service that does not support persistent search?
- A . Enable user data caching, which will have a negative impact on performance
- B . Enable user data caching, which will have a positive impact on performance
- C . Disable user data caching, which will have a positive impact on performance
- D . Disable user data caching, which will have a negative impact on performance
D
Explanation:
Persistent Search is an LDAP control that allows a client (like PingAM) to receive real-time notifications from the Directory Server (like PingDS) whenever a user record is modified. PingAM 8.0.2 uses this to maintain its User Data Cache.
According to the "Identity Store Configuration" and "Tuning AM" documentation:
When persistent search is supported, PingAM caches user profile data in memory to speed up authentication and authorization decisions. When a change happens in the LDAP store, the directory server "pushes" the update to AM via the persistent search connection, and AM updates its cache immediately.
If the LDAP directory does not support persistent search (common in some legacy or highly restricted environments):
Cache Inconsistency: If caching were enabled, PingAM would not know when a user’s attribute (like a group membership) had changed in the back-end. The cache would become "stale," leading to incorrect authorization decisions.
Required Configuration: The administrator must Disable user data caching to ensure that every request results in a direct query to the LDAP server, guaranteeing "read-through" accuracy.
Performance Impact: Disabling the cache has a negative impact on performance (Option D) because every policy evaluation or session check now requires a synchronous network round-trip to the LDAP server, increasing latency and putting higher CPU/IO load on the directory.
Therefore, for directories lacking persistent search, disabling the cache is necessary for data integrity but comes at a significant performance cost.
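The stale-cache problem can be sketched with a toy model (all class and method names below are illustrative, not PingAM APIs): with caching enabled and no persistent-search notifications, an out-of-band directory change is invisible to the cached layer, while the cache-disabled layer always reads through to the directory:

```python
class DirectoryClient:
    """Toy stand-in for an LDAP directory without persistent search:
    it cannot push change notifications to its callers."""
    def __init__(self):
        self._entries = {"alice": {"groups": ["staff"]}}
    def read(self, uid):
        return dict(self._entries[uid])
    def modify(self, uid, attrs):
        self._entries[uid].update(attrs)

class UserDataLayer:
    """With caching on, the first read is remembered; with no change
    notifications there is nothing to invalidate it, so it goes stale.
    With caching off, every lookup is a direct (slower) directory read."""
    def __init__(self, directory, caching_enabled):
        self.directory = directory
        self.caching_enabled = caching_enabled
        self._cache = {}
    def get(self, uid):
        if self.caching_enabled and uid in self._cache:
            return self._cache[uid]
        entry = self.directory.read(uid)
        if self.caching_enabled:
            self._cache[uid] = entry
        return entry

ds = DirectoryClient()
cached = UserDataLayer(ds, caching_enabled=True)
direct = UserDataLayer(ds, caching_enabled=False)
cached.get("alice"); direct.get("alice")           # warm both layers
ds.modify("alice", {"groups": ["staff", "admins"]})  # out-of-band change
print(cached.get("alice")["groups"])  # stale: ['staff']
print(direct.get("alice")["groups"])  # fresh: ['staff', 'admins']
```

The trade-off in the question is exactly this: the read-through layer is always correct but pays a directory round-trip per lookup, which is why Option D pairs "disable caching" with "negative performance impact".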
What are the possible outcomes of the Push Result Verifier node?
- A . Success, Failure, Waiting, Retry
- B . Success, Failure, Expired, Retry
- C . Success, Failure, Expired, Waiting
- D . Success, Failure, Expired, Waiting, Retry
C
Explanation:
The Push Result Verifier node is a core component of the "MFA: Push Authentication" journey in PingAM 8.0.2. Its primary function is to check the status of a push notification that was previously dispatched to a user’s mobile device (usually via the Push Sender node).
According to the "Authentication Node Reference" for version 8.0.2, the node evaluates the state of the push request and yields exactly four distinct outcomes:
Success: This path is followed if the user has actively approved the push notification on their registered device using the ForgeRock/Ping Authenticator app.
Failure: This path is taken if the user explicitly denies or rejects the push notification on their device, indicating a potential unauthorized login attempt.
Expired: This outcome occurs if the notification reaches its "Message Timeout" limit (defined in the Push Sender node) without any response from the user. In standard trees, this path often loops back to allow the user to try a different MFA method or resend the push.
Waiting: This outcome is triggered if a response has not yet been received but the timeout has not yet been reached. This is used in conjunction with a Push Wait or Polling mechanism to create a "check-and-loop" logic until a final result (Success, Failure, or Expired) is determined.
The Retry outcome (mentioned in other options) is notably absent from this specific node’s metadata. While a "Retry" might be implemented in the overall tree logic (for example, by using a Retry Limit Decision node after an Expired outcome), the Push Result Verifier node itself only reports the state of the specific push transaction it is tracking. Understanding these four discrete states is vital for designing resilient authentication journeys that handle user delays or network issues gracefully.
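The check-and-loop logic described above can be sketched as a small state machine (illustrative names, not the actual node implementation): approval and denial are terminal, reaching the timeout yields Expired, and anything else is Waiting, which triggers another poll:

```python
import itertools
from enum import Enum

class PushResult(Enum):
    SUCCESS = "Success"
    FAILURE = "Failure"
    EXPIRED = "Expired"
    WAITING = "Waiting"

def verify_push(approved, denied, elapsed_s, timeout_s):
    """Mirror of the four outcomes: approval and denial are terminal,
    hitting the timeout yields Expired, otherwise keep Waiting."""
    if approved:
        return PushResult.SUCCESS
    if denied:
        return PushResult.FAILURE
    if elapsed_s >= timeout_s:
        return PushResult.EXPIRED
    return PushResult.WAITING

def poll_until_final(responses, timeout_s=120, interval_s=5):
    """Check-and-loop: keep polling while Waiting, stop on any
    terminal outcome (Success, Failure, or Expired)."""
    for tick in itertools.count():
        approved, denied = responses(tick)
        result = verify_push(approved, denied, tick * interval_s, timeout_s)
        if result is not PushResult.WAITING:
            return result

# User approves on the 3rd poll:
print(poll_until_final(lambda t: (t == 3, False)))  # PushResult.SUCCESS
# No response at all: the request expires once the timeout elapses.
print(poll_until_final(lambda t: (False, False)))   # PushResult.EXPIRED
```

Note that no Retry state appears in the machine itself; as the explanation says, retry behavior belongs to the surrounding tree (e.g. a Retry Limit Decision node wired to the Expired path), not to the verifier.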
During the PingAM startup process, what is the location and name of the file that the PingAM bootstrap process uses to connect to the configuration Directory Services repository?
- A . <user-home-dir>/.openam/config/boot.json
- B . /path/to/tomcat/<tomcat-instance-dir>/webapps/<am-instance-dir>/boot.json
- C . <user-home>/<am-instance-dir>/boot.json
- D . <user-home-dir>/<am-instance-dir>/config/boot.json
C
Explanation:
In PingAM 8.0.2, especially when utilizing File-Based Configuration (FBC), the startup sequence relies on a "bootstrap" phase to locate the system’s configuration. According to the "Installation Guide" and "Configuration Directory Structure," the primary file involved in this process is named boot.json. The boot.json file contains the essential connection details required for the AM binaries to find and unlock the configuration store (usually PingDS). This includes the LDAP host, port, bind DN, and references to the secret stores needed to decrypt the configuration.
The location of this file is determined by the Configuration Directory path specified during the initial setup. By default, PingAM creates its configuration directory in the home directory of the user running the web container. The standard path structure is <user-home>/<am-instance-dir>/. Therefore, the boot.json file is located at the root of this instance directory: <user-home>/<am-instance-dir>/boot.json.
Options A and D are incorrect because they place the file inside a /config subdirectory; while AM has many config files in subdirectories, the boot.json sits at the root to be accessible as the first point of entry.
Option B is incorrect because it suggests the file is stored within the Tomcat webapps folder. PingAM specifically avoids storing configuration data within the web application binaries to ensure that configuration persists even if the .war file is deleted or redeployed.
Understanding the location of boot.json is vital for DevOps engineers who need to automate the deployment of PingAM using tools like Amster or when troubleshooting a "Failed to connect to the configuration store" error during server startup.
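For orientation, a boot.json typically looks something like the fragment below. The exact schema varies between versions, so treat the field names and values here as illustrative assumptions rather than an authoritative template; the essential content is the connection details for the configuration store:

```json
{
  "instance": "https://am.example.com:8443/am",
  "configStoreList": [
    {
      "ldapHost": "ds.example.com",
      "ldapPort": 1636,
      "ldapProtocol": "ldaps",
      "baseDN": "ou=am-config",
      "dirManagerDN": "uid=am-config,ou=admins,ou=am-config"
    }
  ]
}
```

Because these details live at `<user-home>/<am-instance-dir>/boot.json` rather than inside the deployed .war, redeploying the web application leaves the bootstrap information intact.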
The Core Token Service (CTS) can be used for storing which of the following?
- A . Configuration
- B . Users
- C . Kerberos tokens
- D . OAuth2 tokens
D
Explanation:
The Core Token Service (CTS) is a high-performance persistence layer in PingAM 8.0.2 designed to store short-lived, stateful data. Unlike the Configuration Store (which holds static system settings) or the Identity Store (which holds user profiles), the CTS is optimized for "token-like" data that is frequently created, updated, and deleted.
According to the "Core Token Service (CTS) Overview" in the PingAM 8.0.2 documentation, the primary purpose of the CTS is to provide a centralized repository for:
Session Tokens: For server-side sessions, the session state is stored in the CTS.
OAuth 2.0 Tokens: This includes Access Tokens, Refresh Tokens, and Authorization Codes. When an OAuth2 client requests a token, AM generates it and, if configured for server-side storage, persists it in the CTS so that any node in an AM cluster can validate it.
SAML 2.0 Tokens: Used for tracking assertions and managing Single Logout (SLO) states.
UMA (User-Managed Access) Labels and Resources: Various state information for the UMA protocol.
The documentation explicitly clarifies that the CTS is not a general-purpose database. Configuration (Option A) is strictly stored in the Configuration Data Store (usually a dedicated PingDS instance).
Users (Option B) are stored in an Identity Store such as Active Directory or PingDS. Kerberos tokens (Option C) are part of a challenge-response handshake that is typically handled at the protocol layer and not stored as persistent records in the CTS. Therefore, OAuth2 tokens are the definitive type of data managed by the CTS among the choices provided. Utilizing the CTS for OAuth2 tokens is a prerequisite for supporting features like token revocation and refresh token persistence across multiple AM instances in a high-availability deployment.
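The CTS usage pattern for OAuth2 tokens can be sketched as a shared, TTL-based token store (a toy model, not the CTS API): any node in the cluster can validate a token against the shared store, and revocation is immediately visible everywhere:

```python
class TokenStore:
    """Toy model of a CTS-style store: short-lived records with an
    expiry, visible to every server node that shares the store, and
    revocable by deleting the record."""
    def __init__(self):
        self._tokens = {}
    def put(self, token_id, claims, expires_at):
        self._tokens[token_id] = {"claims": claims, "expires_at": expires_at}
    def validate(self, token_id, now):
        entry = self._tokens.get(token_id)
        return entry is not None and now < entry["expires_at"]
    def revoke(self, token_id):
        self._tokens.pop(token_id, None)

store = TokenStore()
# Node A issues an access token and persists it:
store.put("at-123", {"sub": "alice", "scope": "openid"}, expires_at=1_000)
# Node B can validate it without ever having issued it:
print(store.validate("at-123", now=500))   # True: live token
store.revoke("at-123")
print(store.validate("at-123", now=500))   # False: revoked everywhere
print(store.validate("at-123", now=2_000)) # False: would also have expired
```

This shared-store property is why the explanation calls CTS persistence a prerequisite for token revocation and refresh-token continuity across a high-availability AM cluster.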
Which area of PingAM does affinity mode relate to?
- A . Authentication
- B . Load balancing
- C . Self-service
- D . Authorization
B
Explanation:
In PingAM 8.0.2, the term Affinity Mode (or session affinity) is strictly related to Load Balancing (Option B). It describes a configuration where a load balancer ensures that all requests belonging to a specific user session are consistently routed to the same PingAM server instance in a cluster. According to the "Load Balancing" and "Deployment Planning" documentation:
Affinity is critical for performance in stateful deployments. While PingAM can operate in a "stateless" manner by retrieving sessions from the Core Token Service (CTS) on every request, this creates unnecessary overhead. Affinity Mode allows the AM server to satisfy requests using its local "In-memory" session cache.
There are two primary levels of affinity discussed in PingAM documentation:
Client-to-AM Affinity: Usually handled by the load balancer using a cookie (like the AMLB cookie) to keep the user on the same AM node.
AM-to-DS Affinity: Used when AM connects to the CTS (PingDS). This ensures that an AM server always talks to the same directory server node to avoid "replication lag" where a session might be written to one DS node but not yet visible on another.
Without affinity, the system remains functional due to the CTS, but performance decreases as every request requires a cross-network database lookup. Therefore, affinity is a core concept of the Load Balancing and high-availability architecture.
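The routing idea behind affinity can be sketched as deterministic node selection from a session identifier (the hostnames and session ID below are illustrative): as long as the same session always maps to the same node, that node's in-memory session cache stays useful:

```python
import hashlib

NODES = ["am1.example.com", "am2.example.com", "am3.example.com"]

def affinity_node(session_id, nodes=NODES):
    """Deterministic routing: hashing the session identifier always
    selects the same node, so its in-memory cache stays hot and no
    cross-network CTS lookup is needed on the common path."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

# Every request carrying the same session lands on the same server:
first = affinity_node("session-abc123")
print(all(affinity_node("session-abc123") == first for _ in range(100)))  # True
```

Real load balancers usually achieve the same effect with a sticky cookie (the AMLB cookie mentioned above) rather than hashing, but the invariant is identical: one session, one preferred node, with the CTS as the fallback if that node is lost.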
Which type of logs are written by PingAM?
- A . Debug logs and Java logs
- B . Audit logs and Java logs
- C . Debug logs and audit logs
- D . Java logs, debug logs, and audit logs
C
Explanation:
According to the PingAM 8.0.2 "Maintenance and Troubleshooting" documentation, the system generates two primary, distinct categories of logs for monitoring and problem-solving: Audit Logs and Debug Logs.
Audit Logs: These are high-level logs intended for security auditing, compliance, and reporting. They record specific "business events" or "state changes" within the system. Examples include successful logins, failed authentication attempts, administrative configuration changes (logged in config.audit.json), and policy evaluation decisions (logged in access.audit.json). These logs are structured (often in JSON) to be easily consumed by SIEM (Security Information and Event Management) tools.
Debug Logs: These are low-level, highly verbose logs intended for developers and support engineers. They record the internal "thought process" of the PingAM engine: the execution of specific Java classes, the results of LDAP queries, and the movement of data between authentication nodes. These logs are stored in the /debug directory and can be adjusted to different levels of verbosity (Error, Warning, Message, Info).
While PingAM runs within a Java Virtual Machine (JVM), and you may see container logs (like catalina.out in Tomcat) or "Java logs" from the underlying web server, these are technically external to the PingAM application itself. The PingAM application’s internal logging framework is strictly split between Audit (what happened at a functional level) and Debug (why it happened at a code level). Therefore, Option C is the most accurate technical description of the logs natively managed and written by the PingAM service.
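The contrast between the two categories can be sketched in a few lines (the event shape and logger name below are illustrative, not PingAM's actual schema): audit output is a structured, machine-readable event suitable for a SIEM, while debug output is free-form and filtered by a verbosity threshold:

```python
import json
import logging

# Audit-style event: a structured "business event", easy for a SIEM
# to parse. Field names here are illustrative assumptions.
audit_event = {
    "eventName": "LOGIN-COMPLETED",
    "principal": "alice",
    "result": "SUCCESSFUL",
}
audit_line = json.dumps(audit_event)
print(audit_line)

# Debug-style logging: code-level detail, controlled by verbosity.
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("amAuth")
log.debug("LDAP bind for %s returned in %d ms", "alice", 12)  # suppressed at WARNING
log.warning("retrying LDAP connection")                        # emitted
```

The split mirrors the answer: the audit record says *what* happened at a functional level, while the debug line explains *why* at a code level, and each stream has its own consumers and retention needs.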
