Practice Free SOL-C01 Exam Online Questions
Question #51
What is the highest level object in the Snowflake object hierarchy?
- A . Database
- B . Virtual Warehouse
- C . Schema
- D . Account
Correct Answer: D
D
Explanation:
The Account is the top-level Snowflake container encompassing:
• All databases
• All schemas
• All compute resources (warehouses)
• All roles, users, and governance structures
All other objects exist within the Account context.
Question #52
Which SQL command is commonly used to load structured data from a stage into a Snowflake table?
- A . INSERT INTO
- B . COPY INTO
- C . LOAD DATA
- D . IMPORT DATA
Correct Answer: B
B
Explanation:
The COPY INTO <table> command is Snowflake’s primary and optimized mechanism for loading large volumes of structured data from either internal or external stages. Unlike INSERT INTO, which is used for row-level inserts or small datasets, COPY INTO is designed for high-throughput operations, parallel loading, and performant ingestion from files stored in S3, Azure Blob Storage, GCS, or Snowflake-managed stages. The COPY INTO command supports detailed file formatting options (such as TYPE=CSV, FIELD_DELIMITER, SKIP_HEADER), error-handling rules (ON_ERROR), and transformations using SQL expressions inside the COPY statement. Snowflake does not support LOAD DATA or IMPORT DATA as SQL commands; these exist in other database systems but not within Snowflake. COPY INTO ensures efficient, resilient, fault-tolerant data ingestion and is heavily used for ETL/ELT, data lake ingestion, and large batch ingestion pipelines.
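A minimal COPY INTO sketch illustrating the options mentioned above (the table, stage, and path names are hypothetical; adjust them to your environment):

```sql
-- Load CSV files from a stage path into a table,
-- skipping the header row and continuing past bad rows.
COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1)
ON_ERROR = 'CONTINUE';
```

ON_ERROR = 'CONTINUE' loads the valid rows and reports the rejected ones; stricter pipelines often use 'ABORT_STATEMENT' instead.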
Question #53
What is the SHOW FILE FORMATS SQL command used for in Snowflake?
- A . Create a list of all of the file formats currently supported by Snowflake.
- B . Get information about how different file formats should be used.
- C . List the file formats in the currently-selected schema.
- D . List all of the files in a database that have data in a given file format.
Correct Answer: C
C
Explanation:
SHOW FILE FORMATS lists file format objects that exist in the currently selected schema (or optionally, a specific database or schema). These objects define how Snowflake interprets staged files during data loading.
It does not list supported built-in formats (CSV, Parquet, etc.).
It does not show example usage information, and it does not list files; LIST @stage handles that.
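For reference, a short sketch contrasting the two commands (database, schema, and stage names are hypothetical):

```sql
-- List file format objects in the current schema.
SHOW FILE FORMATS;

-- Or scope the listing to a specific schema.
SHOW FILE FORMATS IN SCHEMA my_db.my_schema;

-- By contrast, listing staged files is done with LIST, not SHOW.
LIST @my_stage;
```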
Question #55
Which of the following system-defined roles exist in Snowflake? (Choose any 3 options)
- A . SECURITYADMIN
- B . DATABASEADMIN
- C . ACCOUNTADMIN
- D . SYSADMIN
Correct Answer: A, C, D
A, C, D
Explanation:
Snowflake provides a predefined set of system-defined roles that enforce Role-Based Access Control (RBAC). These roles ensure structured governance and centralized privilege management across accounts. The primary system-defined roles include:
ACCOUNTADMIN, the highest-privileged role, responsible for global account-level activities such as billing, governance, replication, and cross-region/cloud configuration. It has implicit ownership of all objects.
SECURITYADMIN manages users, roles, MFA, and all privilege grants. This role ensures operational control over user lifecycle management while supporting separation of duties from ACCOUNTADMIN.
SYSADMIN manages objects such as databases, schemas, tables, warehouses, and other compute objects. It is the default role for data engineering and data platform teams needing full control of object creation and maintenance.
DATABASEADMIN does not exist as a system-defined role; it is typically user-created for customization. System roles form a foundational security model for controlled privilege escalation and governance.
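A sketch of the typical separation of duties described above (my_role, analyst_user, and my_db are hypothetical names):

```sql
-- SECURITYADMIN handles users, roles, and grants.
USE ROLE SECURITYADMIN;
CREATE ROLE my_role;
GRANT ROLE my_role TO USER analyst_user;

-- SYSADMIN handles object creation and ownership.
USE ROLE SYSADMIN;
CREATE DATABASE my_db;
GRANT USAGE ON DATABASE my_db TO ROLE my_role;
```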
Question #56
What task can be performed on the Snowsight Schema Details page?
- A . Change the schema name.
- B . Share a schema with a different account.
- C . Truncate the schema data.
- D . Edit the schema metrics.
Correct Answer: A
A
Explanation:
On the Snowsight Schema Details page, one of the supported operations is renaming the schema. The UI exposes controls that allow users with appropriate privileges to change the schema name, which can help maintain consistent naming conventions or reflect project reorganizations.
Sharing data with other accounts is typically done using secure shares at the database or object level, not from a simple “Share this schema” function on the Schema Details page. Truncation is a table-level operation (e.g., TRUNCATE TABLE), not something that applies at the schema level. Metrics visible on the Schema Details page (such as object counts or storage usage) are informational and not directly editable; they are derived from system metadata.
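The Snowsight rename action corresponds to a single SQL statement, which users with the appropriate privilege can also run directly (schema names here are hypothetical):

```sql
-- Rename a schema; requires OWNERSHIP (or equivalent) on the schema.
ALTER SCHEMA my_db.old_schema RENAME TO my_db.new_schema;
```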
Question #57
What information can users view in the Query History option in Snowsight?
- A . Only successful queries
- B . Data storage configuration
- C . Network settings
- D . Details of executed queries, including status, duration, and resource usage
Correct Answer: D
D
Explanation:
Snowsight’s Query History interface provides comprehensive visibility into executed queries. Users can view:
Query text
Execution status (running, succeeded, failed)
Duration and start/end timestamps
Warehouse used and credits consumed
Partition pruning details
Error messages
Query profile for performance diagnostics
This allows developers, analysts, and administrators to troubleshoot slow queries, identify bottlenecks, and monitor compute usage.
Incorrect options:
Query History shows failed and running queries, not just successful ones.
Storage configuration is unrelated to query operations.
Network settings are managed by the Cloud Services Layer, not Query History.
Thus, Query History is a key operational tool in Snowsight.
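The same details are queryable programmatically via the ACCOUNT_USAGE share, which is useful for automated monitoring (this assumes access to the SNOWFLAKE.ACCOUNT_USAGE schema; note its data is latent, not real-time):

```sql
-- Recent queries with status, duration, and warehouse used.
SELECT query_id,
       query_text,
       execution_status,
       total_elapsed_time,
       warehouse_name
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY start_time DESC
LIMIT 20;
```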
Question #58
What is created in the Cloud Services layer of the Snowflake architecture?
- A . Dashboards
- B . Metadata
- C . Virtual warehouses
- D . Micro-partitions
Correct Answer: B
B
Explanation:
The Cloud Services Layer is responsible for generating and managing metadata, including object definitions, table schemas, micro-partition statistics, column-level profiles, access control information, and query optimization metadata. Metadata plays a central role in Snowflake’s performance and functionality because it informs pruning, query planning, and efficient execution.
Dashboards are created in Snowsight or external BI tools. Virtual warehouses belong to the Compute Layer, providing processing resources. Micro-partitions are created in the Storage Layer, where Snowflake automatically organizes compressed columnar data for efficient access.
Consequently, the Cloud Services Layer is where metadata (not data and not compute resources) is created and managed.
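This metadata is exposed to users through views such as INFORMATION_SCHEMA, so queries like the following are answered from Cloud Services metadata rather than by scanning table data (my_db is a hypothetical database name):

```sql
-- Object metadata served by the Cloud Services layer.
SELECT table_name, row_count, bytes
FROM my_db.information_schema.tables
WHERE table_schema = 'PUBLIC';
```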
Question #59
What information can be accessed using the Snowsight Monitoring tab?
- A . Virtual warehouse usage metrics
- B . Query execution history
- C . Database Time Travel snapshots
- D . Database schema changes history
Correct Answer: A
A
Explanation:
The Snowsight Monitoring tab provides a centralized view of virtual warehouse usage metrics, enabling administrators and developers to evaluate how compute resources are being consumed. This includes critical insights such as credit usage, query load, concurrency levels, average queue times, execution durations, and auto-scaling activity (for multi-cluster warehouses). These metrics help determine whether a warehouse is correctly sized, whether concurrency issues are occurring, or whether workloads require scaling up or adding clusters.
Query history is available in a different section (Activity → Query History), not under Monitoring. Time Travel snapshots are not visualized within Monitoring; Time Travel is controlled via retention parameters and accessed with SQL (AT/BEFORE clauses). Schema change history is also not part of Monitoring and instead is discoverable through ACCOUNT_USAGE or specific metadata views.
The Monitoring tab exists specifically to help evaluate warehouse performance and resource consumption, enabling optimization of compute spending and better workload management.
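The warehouse metrics shown in Monitoring can also be pulled with SQL, for example via the metering history view (this assumes access to the SNOWFLAKE.ACCOUNT_USAGE schema):

```sql
-- Credits consumed per warehouse over the last 7 days.
SELECT warehouse_name,
       SUM(credits_used) AS total_credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;
```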
Question #60
Which command is used to view the details of a file format object in Snowflake?
- A . LIST FILE FORMAT
- B . ALTER FILE FORMAT
- C . DESCRIBE FILE FORMAT
- D . SHOW FILE FORMATS
Correct Answer: C
C
Explanation:
DESCRIBE FILE FORMAT <name> returns configuration settings such as FIELD_DELIMITER, TYPE, SKIP_HEADER, and COMPRESSION for the specified format.
SHOW FILE FORMATS only lists existing formats, not their internals.
LIST FILE FORMAT is invalid. ALTER modifies but does not display details.
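A short sketch of the contrast (my_csv_format and the schema names are hypothetical):

```sql
-- DESCRIBE returns the format's configuration settings
-- (TYPE, FIELD_DELIMITER, SKIP_HEADER, COMPRESSION, ...).
DESCRIBE FILE FORMAT my_csv_format;

-- SHOW only lists the format objects, not their full configuration.
SHOW FILE FORMATS IN SCHEMA my_db.my_schema;
```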
