Practice Free DP-600 Exam Online Questions
HOTSPOT
You have the following KQL query.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Explanation:
where Status != "Cancelled": Excludes records where the Status is "Cancelled".
where OrderDate >= ago(30d): Filters for records where the OrderDate is within the last 30 days.
summarize TotalSales = sum(SalesAmount) by ProductCategory: Calculates the total sales (SalesAmount) for each product category.
where TotalSales > 0: Filters out product categories where the total sales are zero or less.
The query excludes sales that have a Status of Cancelled – Yes
The where Status != "Cancelled" condition ensures that rows with a "Cancelled" status are excluded.
The query calculates the total sales of each product category for the last 30 days – Yes
The combination of where OrderDate >= ago(30d) and summarize TotalSales = sum(SalesAmount) by ProductCategory calculates the total sales for each product category over the last 30 days.
The query includes product categories that have had zero sales during the last 30 days – No The where TotalSales > 0 condition filters out product categories with zero sales.
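Since the query itself appears only as an image, the filter-and-summarize logic described above can be sketched in plain Python. The row values below are invented for illustration; the field names mirror those in the explanation.

```python
from datetime import datetime, timedelta, timezone
from collections import defaultdict

now = datetime.now(timezone.utc)
rows = [
    {"Status": "Shipped",   "OrderDate": now - timedelta(days=5),  "SalesAmount": 100.0, "ProductCategory": "Bikes"},
    {"Status": "Cancelled", "OrderDate": now - timedelta(days=5),  "SalesAmount": 999.0, "ProductCategory": "Bikes"},
    {"Status": "Shipped",   "OrderDate": now - timedelta(days=45), "SalesAmount": 50.0,  "ProductCategory": "Helmets"},
    {"Status": "Shipped",   "OrderDate": now - timedelta(days=2),  "SalesAmount": 25.0,  "ProductCategory": "Gloves"},
]

# where Status != "Cancelled" | where OrderDate >= ago(30d)
recent = [r for r in rows
          if r["Status"] != "Cancelled" and r["OrderDate"] >= now - timedelta(days=30)]

# summarize TotalSales = sum(SalesAmount) by ProductCategory
totals = defaultdict(float)
for r in recent:
    totals[r["ProductCategory"]] += r["SalesAmount"]

# where TotalSales > 0
result = {cat: total for cat, total in totals.items() if total > 0}
print(result)  # {'Bikes': 100.0, 'Gloves': 25.0}
```

The cancelled Bikes row and the 45-day-old Helmets row both drop out, matching the Yes/Yes/No answers above.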
You plan to use Fabric to store data.
You need to create a data store that supports the following:
– Writing data by using T-SQL
– Multi-table transactions
– Dynamic data masking
Which type of data store should you create?
- A . KQL database
- B . lakehouse
- C . warehouse
- D . semantic model
C
Explanation:
You can use dynamic data masking in Fabric data warehousing.
Transactions in Warehouse tables in Microsoft Fabric
Warehouse in Microsoft Fabric supports transactions that span databases within the same workspace, including reading from the SQL analytics endpoint of the Lakehouse.
Reference:
https://learn.microsoft.com/en-us/fabric/data-warehouse/dynamic-data-masking
https://learn.microsoft.com/en-us/fabric/data-warehouse/transactions
DRAG DROP
You are creating a data flow in Fabric to ingest data from an Azure SQL database by using a T-SQL statement.
You need to ensure that any foldable Power Query transformation steps are processed by the Microsoft SQL Server engine.
How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Explanation:
Box 1: Value
Query folding on native queries
Use Value.NativeQuery function
The goal of this process is to execute the following SQL code, and to apply more transformations with
Power Query that can be folded back to the source.
SELECT DepartmentID, Name FROM HumanResources.Department WHERE GroupName = 'Research and Development'
The first step was to define the correct target, which in this case is the database where the SQL code will be run. Once a step has the correct target, you can select that step (in this case, Source under Applied Steps) and then select the fx button in the formula bar to add a custom step. In this example, replace the Source formula with the following formula:
Value.NativeQuery(Source, "SELECT DepartmentID, Name FROM HumanResources.Department WHERE GroupName = 'Research and Development'", null, [EnableFolding = true])
Box 2: NativeQuery
Box 3: EnableFolding
The most important component of this formula is the optional record in the fourth parameter of the function, which has the EnableFolding record field set to true.
Reference: https://learn.microsoft.com/en-us/power-query/native-query-folding
Note: This section contains one or more sets of questions with the same scenario and problem. Each question presents a unique solution to the problem. You must determine whether the solution meets the stated goals. More than one solution in the set might solve the problem. It is also possible that none of the solutions in the set solve the problem.
After you answer a question in this section, you will NOT be able to return. As a result, these questions do not appear on the Review Screen.
Your network contains an on-premises Active Directory Domain Services (AD DS) domain named contoso.com that syncs with a Microsoft Entra tenant by using Microsoft Entra Connect.
You have a Fabric tenant that contains a semantic model.
You enable dynamic row-level security (RLS) for the model and deploy the model to the Fabric service.
You query a measure that includes the USERNAME() function, and the query returns a blank result.
You need to ensure that the measure returns the user principal name (UPN) of a user.
Solution: You add user objects to the list of synced objects in Microsoft Entra Connect.
Does this meet the goal?
- A . Yes
- B . No
B
Explanation:
Adding user objects to the list of synced objects in Microsoft Entra Connect does not change what the measure returns. To return the UPN, the measure should use the USERPRINCIPALNAME() function instead of USERNAME().
HOTSPOT
You have a Fabric workspace named Workspace1 and an Azure Data Lake Storage Gen2 account named storage1. Workspace1 contains a lakehouse named Lakehouse1.
You need to create a shortcut to storage1 in Lakehouse1.
Which protocol and endpoint should you specify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Explanation:
Box 1: abfss
Access Azure storage
Once you have properly configured credentials to access your Azure storage container, you can interact with resources in the storage account using URIs. Databricks recommends using the abfss driver for greater security.
spark.read.load("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>")
dbutils.fs.ls("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>")
CREATE TABLE <database-name>.<table-name>;
COPY INTO <database-name>.<table-name>
FROM 'abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/path/to/folder'
FILEFORMAT = CSV
COPY_OPTIONS ('mergeSchema' = 'true');
Box 2: dfs
dfs is used for the endpoint:
dbutils.fs.ls("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>")
Reference: https://docs.databricks.com/en/connect/storage/azure-storage.html
DRAG DROP
You have a Fabric tenant that contains a data warehouse named DW1. DW1 contains a table named DimCustomer. DimCustomer contains the fields shown in the following table.
You need to identify duplicate email addresses in DimCustomer. The solution must return a maximum of 1,000 records.
Which four T-SQL statements should you run in sequence? To answer, move the appropriate statements from the list of statements to the answer area and arrange them in the correct order.

Explanation:
Step 1: SELECT TOP(1000) CustomerAltKey, COUNT(*)
Use TOP(1000) to return at most 1,000 records.
Step 2: FROM DimCustomer
SQL HAVING example:
The following SQL statement lists the number of customers in each country.
Only include countries with more than 5 customers:
SELECT COUNT(CustomerID), Country
FROM Customers
GROUP BY Country
HAVING COUNT(CustomerID) > 5;
Step 3: GROUP BY CustomerAltKey
Step 4: HAVING COUNT(*) > 1
Assembled in sequence, the four statements form:
SELECT TOP(1000) CustomerAltKey, COUNT(*)
FROM DimCustomer
GROUP BY CustomerAltKey
HAVING COUNT(*) > 1;
The SQL HAVING Clause
The HAVING clause was added to SQL because the WHERE keyword cannot be used with aggregate functions.
Reference: https://www.w3schools.com/SQL/sql_having.asp
You have a Fabric tenant that contains 30 CSV files in OneLake. The files are updated daily.
You create a Microsoft Power BI semantic model named Model1 that uses the CSV files as a data source. You configure incremental refresh for Model1 and publish the model to a Premium capacity in the Fabric tenant.
When you initiate a refresh of Model1, the refresh fails after running out of resources.
What is a possible cause of the failure?
- A . Query folding is occurring.
- B . Only refresh complete days is selected.
- C . XMLA Endpoint is set to Read Only.
- D . Query folding is NOT occurring.
- E . The data type of the column used to partition the data has changed.
D
Explanation:
Incremental refresh and real-time data for semantic models, Troubleshoot incremental refresh and real-time data
D (not A): Most problems that occur when configuring incremental refresh and real-time data have to do with query folding. Your data source must support query folding.
If the incremental refresh policy includes getting real-time data with DirectQuery, non-folding transformations can’t be used.
Because query folding support differs across data source types, verify that the filter logic is included in the queries being run against the data source.
Note: Cause: Data type mismatch
This issue can be caused by a data type mismatch: Date/Time is the required data type for the RangeStart and RangeEnd parameters, but the table date column on which the filters are applied isn't of the Date/Time data type, or vice versa. Both the parameters' data type and the filtered data column must be Date/Time, and the format must be the same. If not, the query can't be folded.
Incorrect:
Not B: The Only refresh complete days setting ensures all rows for the entire day are included in the refresh operation.
Reference:
https://learn.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-troubleshoot
https://learn.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-overview
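The data type mismatch cause noted above can be illustrated with a loose Python analogy: ordering comparisons between mismatched date types fail outright, much as a RangeStart/RangeEnd filter can't be folded when the partition column's type doesn't match. The variable names below are illustrative only.

```python
from datetime import date, datetime

# RangeStart is a Date/Time parameter; the partition column value here is
# (incorrectly) a plain Date, standing in for the type mismatch described above.
RangeStart = datetime(2024, 1, 1)
order_date = date(2024, 3, 15)

try:
    in_range = order_date >= RangeStart
except TypeError as exc:
    in_range = None
    print(f"type mismatch: {exc}")

# With matching Date/Time types on both sides, the comparison (filter) works:
assert datetime(2024, 3, 15) >= RangeStart
```

In the real incremental-refresh scenario the mismatch doesn't raise an error in Power Query; it silently prevents folding, so the full source is scanned and the refresh can exhaust resources.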
Note: This section contains one or more sets of questions with the same scenario and problem. Each question presents a unique solution to the problem. You must determine whether the solution meets the stated goals. More than one solution in the set might solve the problem. It is also possible that none of the solutions in the set solve the problem.
After you answer a question in this section, you will NOT be able to return. As a result, these questions do not appear on the Review Screen.
Your network contains an on-premises Active Directory Domain Services (AD DS) domain named contoso.com that syncs with a Microsoft Entra tenant by using Microsoft Entra Connect.
You have a Fabric tenant that contains a semantic model.
You enable dynamic row-level security (RLS) for the model and deploy the model to the Fabric service.
You query a measure that includes the USERNAME() function, and the query returns a blank result.
You need to ensure that the measure returns the user principal name (UPN) of a user.
Solution: You update the measure to use the USERPRINCIPALNAME() function.
Does this meet the goal?
- A . Yes
- B . No
A
Explanation:
The USERPRINCIPALNAME() function directly retrieves the UPN of the user querying the measure. This is the most appropriate function to use when your goal is to obtain the UPN, which is the format typically used in environments that integrate with Microsoft Entra.
Note: This section contains one or more sets of questions with the same scenario and problem. Each question presents a unique solution to the problem. You must determine whether the solution meets the stated goals. More than one solution in the set might solve the problem. It is also possible that none of the solutions in the set solve the problem.
After you answer a question in this section, you will NOT be able to return. As a result, these questions do not appear on the Review Screen.
Your network contains an on-premises Active Directory Domain Services (AD DS) domain named contoso.com that syncs with a Microsoft Entra tenant by using Microsoft Entra Connect.
You have a Fabric tenant that contains a semantic model.
You enable dynamic row-level security (RLS) for the model and deploy the model to the Fabric service.
You query a measure that includes the USERNAME() function, and the query returns a blank result.
You need to ensure that the measure returns the user principal name (UPN) of a user.
Solution: You create a role in the model.
Does this meet the goal?
- A . Yes
- B . No
B
Explanation:
No, creating a role in the model alone does not ensure that the measure returns the user's user principal name (UPN). To achieve this, you should use the USERPRINCIPALNAME() function in your measure, as it consistently returns the user's UPN across different environments.