Practice Free DP-600 Exam Online Questions
HOTSPOT
You have a Fabric tenant that contains three users named User1, User2, and User3. The tenant contains a security group named Group1. User1 and User3 are members of Group1.
The tenant contains the workspaces shown in the following table.
The tenant contains the domains shown in the following table.
User1 creates a new workspace named Workspace3.
You assign Domain1 as the default domain of Group1.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Explanation:
User2 is assigned the Contributor role for Workspace3 – No
User2 is not a member of Group1 and was never granted a role on Workspace3. Because Domain1 is the default domain of Group1, Workspace3 (created by Group1 member User1) is assigned to Domain1 automatically, but a domain assignment does not grant workspace roles; only User1, as the creator, holds a role (Admin) in Workspace3.
User3 is assigned the Viewer role for Workspace3 – No
User3 is a member of Group1, so Domain1 is User3's default domain, but domains group workspaces for governance and do not assign workspace roles. Nothing in the scenario grants User3 the Viewer role on Workspace3, so the statement is false.
User3 is assigned the Contributor role for Workspace1 – No
Workspace1 is explicitly assigned to User1 as the admin. There is no indication that User3 has any permissions for Workspace1. Being a member of Group1 does not grant automatic Contributor access to a workspace unless explicitly configured.
You have a Fabric workspace named Workspace1.
You need to create a semantic model named Model1 and publish Model1 to Workspace1.
The solution must meet the following requirements:
– Can revert to previous versions of Model1 as required.
– Identifies differences between saved versions of Model1.
– Uses Microsoft Power BI Desktop to publish to Workspace1.
– Can edit item definition files by using Microsoft Visual Studio Code.
Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
- A . Enable Git integration for Workspace1.
- B . Save Model1 in Power BI Desktop as a PBIT file.
- C . Enable users to edit data models in the Power BI service.
- D . Save Model1 in Power BI Desktop as a PBIP file.
AD
Explanation:
Enable Git integration for Workspace1
Git integration allows version control, enabling users to revert to previous versions and identify differences between saved versions of the semantic model.
Save Model1 in Power BI Desktop as a PBIP file
PBIP (Power BI Project) is a folder-based save format whose item definition files can be edited directly in Microsoft Visual Studio Code.
PBIP files are better suited for collaborative development and version control, making them the preferred choice for source control systems like Git.
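For context, saving as a PBIP produces a folder layout along the following lines (a sketch only; exact file names vary by Power BI Desktop version and whether the TMDL format is enabled):
Model1.pbip
Model1.SemanticModel\definition.pbism
Model1.SemanticModel\model.bim (or a definition\ folder of TMDL files)
Model1.Report\definition.pbir
Model1.Report\report.json
Because the item definitions are plain text files, they can be edited in Visual Studio Code and diffed meaningfully by Git.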
Which type of data store should you recommend in the AnalyticsPOC workspace?
- A . a data lake
- B . a warehouse
- C . a lakehouse
- D . an external Hive metastore
C
Explanation:
Within the Data Lakehouse:
You can store unstructured, semi-structured, or structured data.
The data is organized by folders and files, lake databases, and delta tables.
Scenario:
All the semantic models and reports in the Analytics POC workspace will use the data store as the sole data source.
Technical Requirements
The data store must support the following:
Read access by using T-SQL or Python
*-> Semi-structured and unstructured data
Row-level security (RLS) for users executing T-SQL queries
Incorrect:
Not A: a data lake
A data lake would be the data source of the lakehouse.
Not B: a warehouse
Within the Data Warehouse:
*-> You can store structured data.
The data is organized by databases, schemas, and tables (delta tables behind the scenes)
Reference: https://blog.fabric.microsoft.com/en-US/blog/lakehouse-vs-data-warehouse-deep-dive-into-use-cases-differences-and-architecture-designs/
You have a Fabric tenant that contains a semantic model.
You need to prevent report creators from populating visuals by using implicit measures.
What are two tools that you can use to achieve the goal? Each correct answer presents a complete solution. NOTE: Each correct answer is worth one point.
- A . Microsoft Power BI Desktop
- B . Tabular Editor
- C . Microsoft SQL Server Management Studio (SSMS)
- D . DAX Studio
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a semantic model named Model1.
You discover that the following query performs slowly against Model1.
You need to reduce the execution time of the query.
Solution: You replace line 4 by using the following code:
NOT (CALCULATE(COUNTROWS('Order Item')) < 0)
Does this meet the goal?
- A . Yes
- B . No
B
Explanation:
Correct: You replace line 4 by using the following code:
NOT ISEMPTY(CALCULATETABLE('Order Item'))
Just check if it is empty or not.
Note: ISEMPTY
Checks if a table is empty.
Syntax
ISEMPTY(<table_expression>)
Parameters
table_expression – A table reference or a DAX expression that returns a table.
Return value – True if the table is empty (has no rows); otherwise, False.
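As a sketch of the optimized pattern (the full query is not reproduced here; an outer Customer table is assumed for illustration):
EVALUATE
FILTER (
    Customer,
    NOT ISEMPTY ( CALCULATETABLE ( 'Order Item' ) )
)
ISEMPTY can return as soon as a single row is found, whereas a COUNTROWS comparison forces the engine to count every row before evaluating the condition.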
Incorrect:
* CALCULATE(COUNTROWS('Order Item')) >= 0
* ISEMPTY(RELATEDTABLE('Order Item'))
* NOT (CALCULATE(COUNTROWS('Order Item')) < 0)
Reference: https://learn.microsoft.com/en-us/dax/isempty-function-dax
Implement and manage semantic models
Testlet 2
Case study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Overview
Contoso, Ltd. is a US-based health supplements company. Contoso has two divisions named Sales and Research. The Sales division contains two departments named Online Sales and Retail Sales. The Research division assigns internally developed product lines to individual teams of researchers and analysts.
Existing Environment
Identity Environment
Contoso has a Microsoft Entra tenant named contoso.com. The tenant contains two groups named ResearchReviewersGroup1 and ResearchReviewersGroup2.
Data Environment
Contoso has the following data environment:
– The Sales division uses a Microsoft Power BI Premium capacity.
– The semantic model of the Online Sales department includes a fact table named Orders that uses Import mode. In the system of origin, the OrderID value represents the sequence in which orders are created.
– The Research department uses an on-premises, third-party data warehousing product.
– Fabric is enabled for contoso.com.
– An Azure Data Lake Storage Gen2 storage account named storage1 contains Research division data for a product line named Productline1. The data is in the delta format.
– A Data Lake Storage Gen2 storage account named storage2 contains Research division data for a product line named Productline2. The data is in the CSV format.
Requirements
Planned Changes
Contoso plans to make the following changes:
– Enable support for Fabric in the Power BI Premium capacity used by the Sales division.
– Make all the data for the Sales division and the Research division available in Fabric.
– For the Research division, create two Fabric workspaces named Productline1ws and Productline2ws.
– In Productline1ws, create a lakehouse named Lakehouse1.
– In Lakehouse1, create a shortcut to storage1 named ResearchProduct.
Data Analytics Requirements
Contoso identifies the following data analytics requirements:
– All the workspaces for the Sales division and the Research division must support all Fabric experiences.
– The Research division workspaces must use a dedicated, on-demand capacity that has per-minute billing.
– The Research division workspaces must be grouped together logically to support OneLake data hub filtering based on the department name.
– For the Research division workspaces, the members of ResearchReviewersGroup1 must be able to read lakehouse and warehouse data and shortcuts by using SQL endpoints.
– For the Research division workspaces, the members of ResearchReviewersGroup2 must be able to read lakehouse data by using Lakehouse explorer.
– All the semantic models and reports for the Research division must use version control that supports branching.
Data Preparation Requirements
Contoso identifies the following data preparation requirements:
– The Research division data for Productline1 must be retrieved from Lakehouse1 by using Fabric notebooks.
– All the Research division data in the lakehouses must be presented as managed tables in Lakehouse explorer.
Semantic Model Requirements
Contoso identifies the following requirements for implementing and managing semantic models:
– The number of rows added to the Orders table during refreshes must be minimized.
– The semantic models in the Research division workspaces must use Direct Lake mode.
General Requirements
Contoso identifies the following high-level requirements that must be considered for all solutions:
– Follow the principle of least privilege when applicable.
– Minimize implementation and maintenance effort when possible.
What should you use to implement calculation groups for the Research division semantic models?
- A . Microsoft Power BI Desktop
- B . the Power BI service
- C . DAX Studio
- D . Tabular Editor
You have a Microsoft Power BI semantic model.
You need to identify any surrogate key columns in the model that have the Summarize By property set to a value other than to None. The solution must minimize effort.
What should you use?
- A . DAX Formatter in DAX Studio
- B . Model explorer in Microsoft Power BI Desktop
- C . Model view in Microsoft Power BI Desktop
- D . Best Practice Analyzer in Tabular Editor
D
Explanation:
The Best Practice Analyzer (BPA) lets you define rules on the metadata of your model, to encourage certain conventions and best practices while developing your Power BI or Analysis Services model. The BPA overview lists every rule that is currently being broken, and clicking a rule shows all objects that satisfy the conditions of that rule.
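As an illustration, a custom rule for this scenario could be added to a BPARules.json file along these lines (a sketch; the Expression uses Tabular Editor's dynamic LINQ syntax, and matching surrogate keys by a "Key" name suffix is an assumed heuristic):
{
  "ID": "SURROGATE_KEYS_NOT_SUMMARIZED",
  "Name": "Surrogate key columns should have Summarize By set to None",
  "Scope": "DataColumn",
  "Severity": 2,
  "Expression": "SummarizeBy <> \"None\" and Name.EndsWith(\"Key\")"
}
Running BPA then lists every offending column in one pass, which is what minimizes effort here.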
Incorrect:
* DAX Formatter in DAX Studio
DAX Formatter formats DAX expressions (for example, aligning parentheses with their associated functions); it does not inspect model metadata such as the Summarize By property.
* Model explorer in Microsoft Power BI Desktop
With Model explorer in the Model view in Power BI, you can view and work with complex semantic models that have many tables, relationships, measures, roles, calculation groups, translations, and perspectives.
* Model view in Microsoft Power BI Desktop
Model view shows all of the tables, columns, and relationships in your model. Either option would require checking each column's Summarize By property manually, which does not minimize effort.
Reference:
https://docs.tabulareditor.com/te2/Best-Practice-Analyzer.html
https://docs.tabulareditor.com/common/using-bpa.html?tabs=TE3Rules
https://learn.microsoft.com/en-us/power-bi/transform-model/model-explorer
DRAG DROP
You have a Fabric warehouse named Warehouse1 that contains a table named dbo.Product.
dbo.Product contains the following columns.
You need to use a T-SQL query to add a column named PriceRange to dbo.Product. The column must categorize each product based on UnitPrice.
The solution must meet the following requirements:
– If UnitPrice is 0, PriceRange is "Not for resale".
– If UnitPrice is less than 50, PriceRange is "Under $50".
– If UnitPrice is between 50 and 250, PriceRange is "Under $250".
– In all other instances, PriceRange is "$250+".
How should you complete the query? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Explanation:
Using CASE for Conditional Logic:
– The CASE statement is used to categorize values based on conditions.
– It allows us to define multiple conditions for PriceRange based on UnitPrice.
Defining Conditions in Order of Priority:
– UnitPrice = 0 → 'Not for resale' (ensuring products not for sale are labeled correctly)
– UnitPrice < 50 → 'Under $50' (categorizing low-priced products)
– UnitPrice between 50 and 250 → 'Under $250' (using >= 50 AND < 250 ensures the correct range)
– All other prices → '$250+' (handled by the ELSE clause)
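A plausible shape of the completed statement, shown as a sketch (the actual drag-and-drop scaffold is not reproduced here):
ALTER TABLE dbo.Product ADD PriceRange VARCHAR(20);

UPDATE dbo.Product
SET PriceRange =
    CASE
        WHEN UnitPrice = 0 THEN 'Not for resale'
        WHEN UnitPrice < 50 THEN 'Under $50'
        WHEN UnitPrice >= 50 AND UnitPrice < 250 THEN 'Under $250'
        ELSE '$250+'
    END;
Because CASE evaluates its WHEN branches in order, the UnitPrice = 0 test must come first so that free items are not caught by the UnitPrice < 50 branch.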
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table named Customer.
When you query Customer, you discover that the query is slow to execute. You suspect that maintenance was NOT performed on the table.
You need to identify whether maintenance tasks were performed on Customer.
Solution: You run the following Spark SQL statement:
REFRESH TABLE customer
Does this meet the goal?
- A . Yes
- B . No
B
Explanation:
Correct Solution: You run the following Spark SQL statement:
DESCRIBE HISTORY customer
DESCRIBE HISTORY
Applies to: Databricks SQL, Databricks Runtime
Returns provenance information, including the operation, user, and so on, for each write to a table. Table history is retained for 30 days.
Syntax
DESCRIBE HISTORY table_name
Note: Work with Delta Lake table history
Each operation that modifies a Delta Lake table creates a new table version. You can use history information to audit operations, roll back a table, or query a table at a specific point in time using time travel.
Retrieve Delta table history
You can retrieve information including the operations, user, and timestamp for each write to a Delta table by running the history command. The operations are returned in reverse chronological order.
DESCRIBE HISTORY '/data/events/' -- get the full history of the table
DESCRIBE HISTORY delta.`/data/events/`
DESCRIBE HISTORY '/data/events/' LIMIT 1 -- get the last operation only
DESCRIBE HISTORY eventsTable
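In a Fabric notebook, you could then check the history output for maintenance operations (a PySpark sketch, assuming OPTIMIZE and VACUUM runs are the maintenance tasks of interest; they appear in the operation column):
# Filter the Delta table history for maintenance operations.
history = spark.sql("DESCRIBE HISTORY customer")
history.filter(history.operation.isin("OPTIMIZE", "VACUUM START", "VACUUM END")) \
    .select("version", "timestamp", "operation") \
    .show()
An empty result supports the suspicion that no maintenance was performed on Customer.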
Incorrect:
* DESCRIBE DETAIL customer
DESCRIBE DETAIL returns table-level metadata for a Delta table, such as its format, location, number of files, and size in bytes; it does not return the history of operations performed on the table.
* EXPLAIN TABLE customer
* REFRESH TABLE
The REFRESH TABLE statement invalidates the cached entries, which include the data and metadata of the given table or view. The invalidated cache is repopulated lazily when the cached table, or a query associated with it, is executed again.
Syntax
REFRESH [TABLE] tableIdentifier
Reference: https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/delta-describe-history
https://docs.gcp.databricks.com/en/delta/history.html
https://spark.apache.org/docs/3.0.0-preview/sql-ref-syntax-aux-refresh-table.html