Practice Free NCP-AII Exam Online Questions
A company runs an application on a fleet of Amazon EC2 instances. The application is accessible to users around the world. The company associates an AWS WAF web ACL with an Application Load Balancer (ALB) that routes traffic to the EC2 instances.
A security engineer is investigating a sudden increase in traffic to the application. The security engineer discovers a significant amount of potentially malicious requests coming from hundreds of IP addresses in two countries. The security engineer wants to quickly limit the potentially malicious requests but does not want to prevent legitimate users from accessing the application.
Which solution will meet these requirements?
- A . Use AWS WAF to implement a rate-based rule for all incoming requests.
- B . Use AWS WAF to implement a geographical match rule to block all incoming traffic from the two countries.
- C . Edit the ALB security group to include a geographical match rule to block all incoming traffic from the two countries.
- D . Add deny rules to the ALB security group that prohibit incoming requests from the IP addresses.
A
Explanation:
AWS WAF rate-based rules are specifically designed to protect applications from traffic floods and distributed attacks that originate from large numbers of IP addresses. According to the AWS Certified Security – Specialty Official Study Guide, rate-based rules automatically track the number of requests coming from individual IP addresses and temporarily block IPs that exceed a defined threshold.
In this scenario, the malicious traffic originates from hundreds of IP addresses across two countries, mixed with legitimate user traffic. A rate-based rule allows the security engineer to limit excessive request rates without fully blocking access from entire geographic regions, ensuring that legitimate users can still access the application.
Option B is incorrect because geographic match rules block all traffic from selected countries, which would deny access to legitimate users and violate the stated requirement. Option C is invalid because security groups do not support geographic filtering. Option D is not scalable, as manually blocking hundreds of IP addresses is operationally inefficient and ineffective against rapidly changing attacker IPs.
AWS documentation emphasizes that rate-based rules are the recommended first-line mitigation for sudden traffic spikes and potential application-layer DDoS attacks when business continuity must be preserved.
AWS Certified Security – Specialty Official Study Guide
AWS WAF Developer Guide – Rate-Based Rules
AWS DDoS Resiliency Best Practices
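As a rough illustration of the mitigation in option A, the sketch below writes a WAFv2 rate-based rule statement to a JSON file. The rule name, priority, and the 2,000-requests-per-5-minutes threshold are illustrative, not taken from the question.

```shell
# Hypothetical sketch: a WAFv2 rate-based rule (name/threshold illustrative).
# The JSON follows the wafv2 Rule schema and could be merged into an existing
# web ACL's rule list, e.g. via `aws wafv2 update-web-acl`.
cat > rate-rule.json <<'EOF'
{
  "Name": "limit-request-floods",
  "Priority": 0,
  "Statement": {
    "RateBasedStatement": { "Limit": 2000, "AggregateKeyType": "IP" }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "limit-request-floods"
  }
}
EOF
```

With `"AggregateKeyType": "IP"`, the request count is tracked per source IP, so only addresses exceeding the limit are blocked, and only temporarily.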
A system engineer needs to set the vGPU scheduling behavior for all GPUs to share the scheduling equally with the default time slice length.
What command should be used?
- A . esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=RmPVMRL=0x01"
- B . esxcli graphics module parameters set -m nvidia -p "NVreg_RegistryDwords=RmPVMRL=0x01"
- C . esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=FRL=0x01"
- D . esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=RmPVMRL=0x00"
A
Explanation:
When deploying NVIDIA vGPU on VMware ESXi, the NVIDIA driver provides several scheduling policies to determine how GPU physical resources are shared among multiple virtual machines. The default behavior is the "Best Effort" scheduler, but for environments requiring predictable performance across all users, the "Equal Share" scheduler is preferred. This scheduler gives each vGPU an equal "time slice" of the physical GPU’s engines. The configuration is managed via module parameters passed to the nvidia kernel driver during host boot. The specific registry key for this behavior is RmPVMRL. Setting RmPVMRL=0x01 enables the Equal Share scheduler with the default time slice length (Option A). Conversely, 0x00 selects the Best Effort scheduler, the driver default (Option D). It is critical to use ‘esxcli system module parameters set’ to ensure the setting persists across reboots and is applied globally to the NVIDIA driver stack. This ensures that no single "noisy neighbor" VM can monopolize the GPU cycles, which is a common requirement in shared AI research labs or virtual desktop infrastructures where consistency is more important than raw peak throughput of a single task.
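On an ESXi host shell, the command in option A would be applied and verified roughly as follows. This is a sketch: it assumes the NVIDIA vGPU manager VIB is installed, and a host reboot is needed for the parameter to take effect.

```shell
# Select the equal-share vGPU scheduler with the default time slice (0x01).
esxcli system module parameters set -m nvidia \
    -p "NVreg_RegistryDwords=RmPVMRL=0x01"
# Confirm the parameter the nvidia module will load with on next boot:
esxcli system module parameters list -m nvidia | grep NVreg_RegistryDwords
# Reboot the host (or reload the module) for the change to apply.
```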
What command sequence is used to identify the exact name of the server that runs as the master SM in a multi-node fabric?
- A . sminfo, then smpquery ND
- B . ibstat, then sminfo
- C . ibnetdiscover, then ibsim
- D . sminfo, then smpquery NI
A
Explanation:
In an InfiniBand fabric, the Subnet Manager (SM) is the "brain" of the network, responsible for discovering the topology, assigning Local Identifiers (LIDs), and calculating routing tables. In a multi-node fabric, there is typically one Master SM and several Standby SMs for high availability. To identify the master, the sminfo command is first used; it queries the fabric and returns the LID of the current Master SM. Once the LID is obtained, the engineer must map that numerical LID to a physical server name or Node Description. The smpquery ND (Node Description) command is then executed, targeting that specific LID. This sequence is vital for troubleshooting fabric-wide issues, as logs on the Master SM server provide the definitive record of sweeps, traps, and topology changes. Using smpquery NI (Node Info) would provide hardware-level details like the GUID and device ID, but it does not return the human-readable string (server name) defined in the Node Description, which is necessary for rapid identification in a crowded data center.
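The two-step sequence might look like this on a fabric node. This is a sketch: it requires the infiniband-diags package and an active fabric, and the LID value 3 below is illustrative.

```shell
# Step 1: find the LID of the current master Subnet Manager.
sminfo
#   output resembles: "sminfo: sm lid 3 sm guid 0x..., ... state 3 SMINFO_MASTER"
# Step 2: resolve that LID to its human-readable Node Description (server name).
smpquery ND 3
#   output resembles: "Node Description:...............node-gpu-07 HCA-1"
```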
You have installed NVIDIA drivers using the ‘.run’ installer on a system running Ubuntu. However, after each kernel update, the NVIDIA drivers stop working.
What is the most effective way to address this issue permanently?
- A . Re-run the ‘.run’ installer after each kernel update.
- B . Use DKMS (Dynamic Kernel Module Support) to automatically rebuild the NVIDIA kernel modules after each kernel update.
- C . Disable automatic kernel updates.
- D . Manually compile the NVIDIA kernel modules after each kernel update.
- E . Create a script that symlinks the old kernel modules to the new kernel directory.
B
Explanation:
DKMS (Dynamic Kernel Module Support) is the most effective solution. It automatically rebuilds kernel modules (like NVIDIA’s) whenever a new kernel is installed. Re-running the ‘.run’ installer is a manual and error-prone process. Disabling kernel updates is not recommended for security reasons. Manually compiling is complex and time-consuming. Symlinking old modules is unlikely to work due to kernel API changes.
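A minimal sketch of the DKMS approach, assuming driver version 550.54.14 (illustrative) and an installer that supports the --dkms flag:

```shell
# Easiest path: let the .run installer register its module with DKMS.
sh ./NVIDIA-Linux-x86_64-550.54.14.run --dkms
# Equivalent manual registration, if the driver source tree already sits
# under /usr/src/nvidia-550.54.14:
dkms add -m nvidia -v 550.54.14
dkms build -m nvidia -v 550.54.14
dkms install -m nvidia -v 550.54.14
# Verify: the module should be listed as installed for the running kernel,
# and DKMS will rebuild it automatically on every kernel update.
dkms status
```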
You’re deploying a distributed training workload across multiple NVIDIA A100 GPUs connected with NVLink and InfiniBand.
What steps are necessary to validate the end-to-end network performance between the GPUs before running the actual training job? (Select all that apply)
- A . Run NCCL tests to measure NVLink bandwidth and latency between GPUs on the same node.
- B . Use ‘ibstat’ to verify the status and link speed of the InfiniBand interfaces on each node.
- C . Employ ‘iperf3’ or ‘nc’ to measure TCP/UDP bandwidth between nodes over the InfiniBand network.
- D . Ping all nodes to confirm basic network connectivity.
- E . Manually inspect the physical cabling of NVLink bridges and InfiniBand connections.
A,B,C
Explanation:
Validating end-to-end network performance requires checking NVLink performance within nodes (NCCL tests), verifying InfiniBand interface status (‘ibstat’), and measuring inter-node bandwidth with tools such as ‘iperf3’ or ‘nc’. Pinging is a basic connectivity check and does not assess bandwidth. Physical cabling is worth inspecting visually, but ‘ibstat’ also reports the active link state and cabling errors.
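The three validated checks (A, B, C) might be scripted roughly as follows. Paths, hostnames, and flags are illustrative and assume nccl-tests and iperf3 are installed:

```shell
# A. Intra-node NCCL all-reduce benchmark across 8 local GPUs
#    (message sizes from 8 B to 1 GiB, doubling each step):
./nccl-tests/build/all_reduce_perf -b 8 -e 1G -f 2 -g 8
# B. InfiniBand port state and link rate on each node:
ibstat | grep -E 'State|Rate'
# C. Inter-node bandwidth: start a server on node2, then drive it from node1:
#      node2$ iperf3 -s
#      node1$ iperf3 -c node2 -P 4 -t 30
```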
You are monitoring a server with 8 GPUs used for deep learning training. You observe that one of the GPUs reports a significantly lower utilization rate compared to the others, even though the workload is designed to distribute evenly. ‘nvidia-smi’ reports a persistent "XID 13" error for that GPU.
What is the most likely cause?
- A . A driver bug causing incorrect workload distribution.
- B . Insufficient system memory preventing data transfer to that GPU.
- C . A hardware fault within the GPU, such as a memory error or core failure.
- D . An incorrect CUDA version installed.
- E . The GPU’s compute mode is set to ‘Exclusive Process’.
C
Explanation:
XID 13 errors reported by ‘nvidia-smi’ on a single GPU typically indicate a hardware fault within that GPU, such as a memory error or core failure. A driver bug or a system-memory shortage would more likely produce different error codes or instability across multiple GPUs. A CUDA version mismatch would usually prevent the application from running at all rather than raising a persistent XID on one GPU. ‘Exclusive Process’ compute mode merely restricts which processes may use the GPU; it would not generate this XID error.
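To confirm which physical GPU is raising the XID, the kernel log can be cross-checked against nvidia-smi. A sketch, assuming the NVIDIA driver is loaded:

```shell
# XID events are logged by the kernel driver, tagged with a PCI address:
dmesg | grep -i 'NVRM: Xid'
# Map that PCI address back to a GPU index:
nvidia-smi --query-gpu=index,pci.bus_id,name --format=csv
# On ECC-capable GPUs, retired memory pages corroborate a memory fault:
nvidia-smi -q -d PAGE_RETIREMENT
```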
When deploying BlueField OS using PXE boot, which of the following files on the PXE server is responsible for specifying the kernel, initrd, and device tree files to be loaded by the client?
- A . dhcpd.conf
- B . pxelinux.cfg/default
- C . tftpboot/pxelinux.0
- D . /boot/grub/grub.cfg
- E . tftpboot/lpxelinux.0
B
Explanation:
The ‘pxelinux.cfg/default’ file (or a similar configuration file named after the client’s MAC address or IP address) contains the configuration directives for the PXE bootloader, including the kernel, initrd, and device tree files to load. ‘dhcpd.conf’ is the DHCP server configuration, ‘pxelinux.0’ is the PXE bootloader binary itself, and ‘/boot/grub/grub.cfg’ is a GRUB configuration file, usually on the client’s disk.
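For illustration, a minimal ‘pxelinux.cfg/default’ for an Arm target might look like the sketch below. The file names are assumptions, and the FDT directive is the U-Boot-style PXE syntax for supplying a device tree; whether the target’s PXE client honors it is an assumption here.

```shell
# Write a minimal PXE boot menu (illustrative file names).
mkdir -p pxelinux.cfg
cat > pxelinux.cfg/default <<'EOF'
DEFAULT bluefield
LABEL bluefield
    KERNEL Image
    INITRD initramfs.img
    FDT bf.dtb
    APPEND console=ttyAMA0 earlycon
EOF
```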
You are configuring a network bridge on a Linux host that will connect multiple physical network interfaces to a virtual machine. You need to ensure that the virtual machine receives an IP address via DHCP.
Which of the following is the correct command sequence to create the bridge interface ‘br0’, add physical interfaces ‘eth0’ and ‘eth1’ to it, and bring up the bridge interface? Assume the required packages are installed. Consider using ‘ip’ command.
A ) (command-sequence image not available)
B ) (command-sequence image not available)
C ) (command-sequence image not available)
D ) (command-sequence image not available)
E ) (command-sequence image not available)
- A . Option A
- B . Option B
- C . Option C
- D . Option D
- E . Option E
D
Explanation:
Option D is the correct sequence using the ‘ip’ command. First, create the bridge ‘br0’. Then, add the physical interfaces ‘eth0’ and ‘eth1’ as slaves to the bridge. Next, bring up the physical interfaces. After that, bring up the bridge interface. Finally, run ‘dhclient br0’ to obtain an IP address for the bridge via DHCP.
Option C is the old way, using ‘brctl’ and ‘ifconfig’, which are deprecated. The others lack the crucial step of bringing up the bridge after attaching the physical interfaces and before running ‘dhclient’.
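Since the option listings are images that did not survive extraction, here is a sketch of the sequence the explanation describes. It requires root, and assumes ‘dhclient’ is installed and the interface names from the question:

```shell
# Create the bridge and enslave both physical NICs.
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
# Bring up the physical interfaces, then the bridge itself.
ip link set eth0 up
ip link set eth1 up
ip link set br0 up
# Obtain an IP address for the bridge via DHCP.
dhclient br0
```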
