Practice Free SPLK-1002 Exam Online Questions
What does the fillnull command replace null values with, if the value argument is not specified?
- A . 0
- B . N/A
- C . NaN
- D . NULL
A
Explanation:
If the value argument is not specified, the fillnull command replaces null values with 0 by default. You can use the value argument to substitute a different string for null values instead, such as N/A or NULL.
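A quick sketch of the default behavior (the index and field names here are illustrative):

```
index=web | stats count by host, status | fillnull
```

Adding the value argument, as in | fillnull value="N/A" status, replaces nulls in the status field with N/A instead of 0.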
When can a pipe follow a macro?
- A . A pipe may always follow a macro.
- B . The current user must own the macro.
- C . The macro must be defined in the current app.
- D . Only when sharing is set to global for the macro.
A
Explanation:
A macro is a way to save a segment of a search string as a variable and reuse it in other searches. A macro can be followed by a pipe, the symbol that separates commands in a search pipeline. A pipe may always follow a macro, regardless of who owns the macro, where the macro is defined, or how the macro is shared. For example, if you have a macro called us_sales that returns events from the US region, you can use it in a search like this: `us_sales` | stats sum(price) by product (a macro is invoked by enclosing its name in backticks). This search uses the macro to filter the events and then calculates the total price for each product. Therefore, option A is correct, while options B, C, and D are incorrect because they are not conditions that affect whether a pipe can follow a macro.
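As a concrete sketch (us_sales is a hypothetical macro), a pipe can chain any number of commands after the macro:

```
`us_sales` | stats sum(price) by product | sort - sum(price)
```

At search time Splunk expands the macro in place, so the pipeline behaves exactly as if the saved search segment had been typed out before the first pipe.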
Which of the following search modes automatically returns all extracted fields in the fields sidebar?
- A . Fast
- B . Smart
- C . Verbose
C
Explanation:
The search mode determines how Splunk processes your search and displays your results. There are three search modes: Fast, Smart, and Verbose. The search mode that automatically returns all extracted fields in the fields sidebar is Verbose. Verbose mode shows all the fields that are extracted from your events, including default fields, indexed fields, and search-time extracted fields. The fields sidebar is a panel that shows the fields present in your search results. Therefore, option C is correct, while options A and B are incorrect because they do not automatically return all extracted fields in the fields sidebar.
What information must be included when using the datamodel command?
- A . status field
- B . Multiple indexes
- C . Data model field name.
- D . Data model dataset name.
D
Explanation:
The datamodel command searches or inspects data model datasets. When you use it to run a search, you specify the data model name followed by the name of a dataset within that model; the dataset name identifies which portion of the data model to search. Therefore, option D is correct.
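A typical invocation names the data model and then one of its datasets (the model and dataset names here are illustrative):

```
| datamodel Web Web_Traffic search
```

The trailing search keyword tells Splunk to retrieve the events in that dataset rather than just display the dataset's JSON description.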
Which search retrieves events with the event type web_errors?
- A . tag=web_errors
- B . eventtype=web_errors
- C . eventtype "web errors"
- D . eventtype (web_errors)
B
Explanation:
The correct answer is B. eventtype=web_errors.
An event type is a way to categorize events based on a search. An event type assigns a label to events that match specific search criteria. Event types can be used to filter and group events, create alerts, or generate reports.
To search for events that have a specific event type, you need to use the eventtype field with the name of the event type as the value. The syntax for this is: eventtype=<event_type_name>
For example, if you want to search for events that have the event type web_errors, you can use the following syntax:
eventtype=web_errors
This will return only the events that match the search criteria defined by the web_errors event type. The other options are not correct because they use different syntax or fields that are not related to event types.
These options are:
A) tag=web_errors: This option uses the tag field, which is a way to add descriptive keywords to events based on field values. Tags are different from event types, although they can be used together. Tags can be used to filter and group events by common characteristics.
C) eventtype “web errors”: This option omits the equals sign between the field name and the value, so it is not a valid field-value pair. Quotation marks are used to enclose phrases or exact matches in a search, and the quoted name contains a space, so it would not match the event type web_errors in any case.
D) eventtype (web_errors): This option also omits the equals sign and wraps the value in parentheses, which is not valid syntax for matching a field value. Parentheses are used to group expressions or terms in a search.
Reference: About event types; About tags; Search command cheatsheet
How is a Search Workflow Action configured to run at the same time range as the original search?
- A . Set the earliest time to match the original search.
- B . Select the same time range from the time-range picker.
- C . Select the "Use the same time range as the search that created the field listing" checkbox.
- D . Select the "Overwrite time range with the original search" checkbox.
C
Explanation:
To configure a Search Workflow Action to run at the same time range as the original search, you need to select the “Use the same time range as the search that created the field listing” checkbox. This will ensure that the workflow action search uses the same earliest and latest time parameters as the original search.
For the following search, which field populates the x-axis?
index=security sourcetype=linux_secure | timechart count by action
- A . action
- B . sourcetype
- C . _time
- D . time
C
Explanation:
The correct answer is C. _time.
The timechart command creates a time series chart with a corresponding table of statistics, with time used as the X-axis. You can specify a split-by field, where each distinct value of the split-by field becomes a series in the chart. In this case, the split-by field is action, which means that the chart will have different lines for different actions, such as accept, reject, or fail. The count function calculates the number of events for each action in each time bin.
In such a timechart, the x-axis is populated by the _time field, which represents the time range of the search. The y-axis is populated by the count function, which represents the number of events for each action. The legend shows the distinct values of the action field, which are used to split the chart into separate series.
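For instance, a variant of the search in this question with an explicit bucket size (span is optional; timechart chooses a span automatically when it is omitted):

```
index=security sourcetype=linux_secure | timechart span=1h count by action
```

Each one-hour bucket of _time becomes a point on the x-axis, with one series per value of action.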
Reference: timechart – Splunk Documentation; timechart command examples – Splunk Documentation; Timechart Command In Splunk With Example – Mindmajix
What approach is recommended when using the Splunk Common Information Model (CIM) add-on to normalize data?
- A . Consult the CIM data model reference tables.
- B . Run a search using the authentication command.
- C . Consult the CIM event type reference tables.
- D . Run a search using the correlation command.
A
Explanation:
The recommended approach when using the Splunk Common Information Model (CIM) add-on to normalize data is A, consult the CIM data model reference tables. The reference tables provide detailed information about the fields and tags that are expected for each dataset in a data model. By consulting them, you can determine which data models are relevant for your data source and how to map your data fields to the CIM fields. You can also use the reference tables to validate your data and troubleshoot any issues with normalization. You can find the CIM data model reference tables in the Splunk documentation or in the Data Model Editor page in Splunk Web.
The other options are incorrect because they are not related to the CIM add-on or data normalization. The authentication command is a custom command that validates events against the Authentication data model, but it does not help you normalize other types of data. The correlation command performs statistical analysis on event fields, but it does not help you map your data fields to the CIM fields. CIM event type reference tables do not exist, as event types are not part of the CIM add-on.
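For example, once data is mapped to the CIM Authentication data model, a search like the following (using field names from the Authentication reference table) works regardless of the underlying sourcetype:

```
| tstats count from datamodel=Authentication where Authentication.action="failure" by Authentication.src
```

Because the reference tables define the expected field names (action, src, and so on), any data source normalized against them becomes searchable with the same CIM-based queries.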
By default, how is acceleration configured in the Splunk Common Information Model (CIM) add-on?
- A . Turned off
- B . Turned on
- C . Determined automatically based on the sourcetype.
- D . Determined automatically based on the data source.
A
Explanation:
By default, acceleration is turned off for the data models in the Splunk Common Information Model (CIM) add-on. The CIM add-on is an app that provides common data models for various domains, such as network traffic, web activity, and authentication. It allows you to normalize and enrich your data using predefined fields and tags, and to accelerate your data models for faster searches and reports. Acceleration is a feature that pre-computes summary data for your data models and stores it in tsidx files, which can improve the performance and efficiency of searches and reports that use data models.
The CIM data models ship with acceleration disabled so that you can first constrain each model, for example to specific indexes, before turning acceleration on. You can enable acceleration for individual data models through the CIM Setup page, through the data model management pages in Settings, or by editing the datamodels.conf file.
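As a sketch, acceleration for a single CIM data model can be controlled explicitly in datamodels.conf (the summary time range shown is illustrative):

```
[Authentication]
acceleration = 1
acceleration.earliest_time = -1mon
```

Here acceleration = 1 enables acceleration for the Authentication data model, and acceleration.earliest_time limits the summary to the most recent month of data.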
When using the Field Extractor (FX), which of the following delimiters will work? (select all that apply)
- A . Tabs
- B . Pipes
- C . Colons
- D . Spaces
A, B, D
Explanation:
Reference: https://docs.splunk.com/Documentation/Splunk/8.0.3/Knowledge/FXSelectMethodstep https://community.splunk.com/t5/Splunk-Search/Field-Extraction-Separate-on-Colon/m-p/29751
The Field Extractor (FX) is a tool that helps you extract fields from your data using delimiters or regular expressions. Delimiters are characters or strings that separate fields in your data.
Some of the delimiters that work with the FX are:
Tabs: horizontal whitespace characters that align text in columns.
Pipes: vertical bar characters (|) commonly used to separate fields in log data.
Spaces: blank characters that separate words or symbols.
Colons are not among the delimiter choices the FX offers; as the community post referenced above notes, colon-separated fields are typically extracted with the regular expression method instead. Therefore, options A, B, and D are correct.
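For comparison, the delimiter-based extraction that the FX generates ends up as a transforms.conf stanza along these lines (the stanza name and field names are hypothetical):

```
[pipe_delimited_fields]
DELIMS = "|"
FIELDS = "user","action","status"
```

DELIMS names the delimiter character(s) and FIELDS assigns names, in order, to the values split out of each event.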