Practice Free SOL-C01 Exam Online Questions
Question #31
What is the purpose of the ACCOUNTADMIN role in Snowflake? (Choose any 2 options)
- A . To grant and revoke privileges across the account
- B . To monitor query performance
- C . To create and manage databases
- D . To manage all aspects of the Snowflake account
Correct Answer: A, D
Explanation:
The ACCOUNTADMIN role is Snowflake’s highest-privileged system-defined role. It provides complete administrative authority across the entire Snowflake account. Its functions include:
Managing global account parameters, replication settings, business continuity, failover groups, and region configurations
Administering billing, resource monitoring, and governance
Granting and revoking privileges across all objects and roles
Overseeing the role hierarchy, including SECURITYADMIN and SYSADMIN
It is typically reserved for platform owners and security/governance teams, following least-privilege principles.
“Create and manage databases” is primarily a SYSADMIN responsibility.
“Monitor query performance” can be accomplished by roles with MONITOR privileges; it is not exclusive to ACCOUNTADMIN.
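As an illustrative sketch (the resource monitor name and credit quota below are made up), these are the kinds of account-level statements typically executed under ACCOUNTADMIN:
USE ROLE ACCOUNTADMIN;
-- Grant an account-level privilege to another role
GRANT CREATE DATABASE ON ACCOUNT TO ROLE SYSADMIN;
-- Create a resource monitor to govern credit consumption (hypothetical name and quota)
CREATE RESOURCE MONITOR IF NOT EXISTS acct_monitor WITH CREDIT_QUOTA = 100;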
Question #32
Which functions in Snowflake Cortex LLM are task-specific functions? (Select three)
- A . TRANSLATE
- B . CLASSIFY_TEXT / AI_CLASSIFY
- C . COUNT_TOKENS
- D . PARSE_DOCUMENT
Correct Answer: A, B, D
Explanation:
Snowflake’s Cortex LLM includes task-specific functions, meaning each performs a well-defined AI operation with predictable outputs. Examples include:
TRANSLATE: Converts text between languages; deterministic and domain-independent.
CLASSIFY_TEXT / AI_CLASSIFY: Assigns text to predefined categories, ideal for sentiment, topics, or routing tasks.
PARSE_DOCUMENT: Extracts structured information from documents (PDFs, invoices, receipts, contracts), including layout-aware content.
These functions are optimized for reliability, reproducibility, and governance, making them suitable for production pipelines.
COUNT_TOKENS is not task-specific; it is a utility function used to estimate LLM token usage rather than perform a primary AI task.
Thus, TRANSLATE, CLASSIFY_TEXT, and PARSE_DOCUMENT are the correct task-specific functions.
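For illustration, a minimal sketch of calling two of these task-specific functions from SQL (the input text and category labels are made up):
-- Translate English text to French
SELECT SNOWFLAKE.CORTEX.TRANSLATE('How is the weather today?', 'en', 'fr');
-- Classify a support message into one of two example categories
SELECT SNOWFLAKE.CORTEX.CLASSIFY_TEXT('My invoice total looks wrong', ['Billing', 'Technical issue']);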
Question #33
What is a benefit of using an external stage to load data into Snowflake?
- A . External stages reduce data storage costs because data is stored outside Snowflake.
- B . External stages provide automatic data purging after successful loads.
- C . External stages are more secure than internal stages for sensitive data.
- D . External stages reduce the number of objects in a database.
Correct Answer: A
Explanation:
External stages point to files in external cloud storage (S3, Azure Blob, GCS). Because the data is not stored inside Snowflake, the user avoids Snowflake storage charges, which can significantly reduce cost for large staging datasets.
External stages do not automatically delete files, are not inherently more secure than internal stages, and still count as database objects even though they reference external storage.
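As a hedged sketch (the bucket URL, storage integration, stage, and table names are placeholders), a typical external-stage load looks like this:
-- Stage pointing at files that remain in the customer's own S3 bucket
CREATE STAGE IF NOT EXISTS my_ext_stage
  URL = 's3://my-bucket/exports/'
  STORAGE_INTEGRATION = my_s3_integration;
-- COPY reads the files directly from S3; the staged files themselves never consume Snowflake storage
COPY INTO my_table
  FROM @my_ext_stage
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);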
Question #34
How does Snowflake process queries?
- A . With shared-disk architecture
- B . Using MPP compute clusters
- C . By optimizing data in cloud storage
- D . Through third-party connectors
Correct Answer: B
Explanation:
Snowflake processes queries using Massively Parallel Processing (MPP) compute clusters, deployed as virtual warehouses. Each warehouse consists of multiple compute nodes working in parallel to execute queries efficiently. When a query is submitted, Snowflake distributes tasks across nodes, processes data subsets concurrently, and aggregates results. This architecture enables high performance, scalability, and the ability to handle complex analytical workloads. While Snowflake does incorporate elements of shared-disk storage, query execution itself depends on MPP compute clusters. Options such as third-party connectors or storage optimization do not represent the core query processing mechanism.
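As a rough sketch (the warehouse and table names are hypothetical), resizing a warehouse changes how many MPP compute nodes work on a query in parallel:
-- A larger warehouse provides more nodes, so large scans and joins are split across more parallel workers
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';
SELECT region, SUM(amount) FROM sales GROUP BY region;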
Question #35
How can Python variable substitution be performed in Snowflake notebooks?
- A . By manually editing data files
- B . By configuring network settings
- C . By using placeholders in SQL queries
- D . By writing HTML code
Correct Answer: C
Explanation:
In Snowflake Notebooks, Python and SQL operate in an integrated environment. Python variable substitution is performed using string placeholders or f-string interpolation within SQL statements. This allows dynamic query construction where Python values are embedded directly into SQL code executed through the Snowpark session.
Example:
table_name = "CUSTOMERS"
session.sql(f"SELECT * FROM {table_name}").show()
This method enables parameterized, context-aware SQL generation, ideal for iterative development, automation, and pipeline construction. It avoids manual text editing and provides consistency across Python and SQL execution layers.
Incorrect responses:
Network settings and HTML are unrelated to variable handling.
Editing data files does not apply to dynamic query parameterization.
Variable substitution is essential for combining logic, iteration, and conditional flows within notebooks.
Question #36
Which of the following settings can be configured for a Snowflake Virtual Warehouse? (Choose any 3 options)
- A . Cloud provider region
- B . Auto-suspend time
- C . Auto-resume
- D . Warehouse size
Correct Answer: B, C, D
Explanation:
Snowflake Virtual Warehouses support several configuration parameters that directly influence compute behavior, performance, and cost control. Auto-suspend time determines how long the warehouse should remain idle before Snowflake automatically suspends it to save credits. Auto-resume enables automatic warehouse reactivation whenever a new query is submitted, ensuring a seamless user experience without manual intervention. Warehouse size determines the compute resources available (e.g., X-SMALL, SMALL, MEDIUM, LARGE); larger warehouses provide more CPU, memory, and parallel processing ability. Conversely, the cloud provider region cannot be configured at the warehouse level; it is determined when the Snowflake account is created and applies globally across the account. These warehouse settings enable efficient workload management, dynamic compute scaling, and cost optimization, allowing Snowflake users to tailor compute behavior to their analytics and data processing needs.
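For example, all three configurable settings can be declared when the warehouse is created (the name and values below are illustrative):
CREATE WAREHOUSE IF NOT EXISTS reporting_wh
  WAREHOUSE_SIZE = 'MEDIUM'  -- compute capacity per cluster
  AUTO_SUSPEND = 300         -- suspend after 300 seconds of inactivity
  AUTO_RESUME = TRUE;        -- restart automatically when a query arrives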
Question #37
Which SQL command is used to create a new SCHEMA in Snowflake?
- A . CREATE SCHEMA Schema_name;
- B . USE SCH Schema_name;
- C . USE SCHEMA Schema_name;
- D . CREATE SCH Schema_name;
Correct Answer: A
Explanation:
The only valid Snowflake SQL command among the options for creating a schema is:
CREATE SCHEMA schema_name;
This command creates a schema within the currently active database. Optional modifiers like IF NOT EXISTS or OR REPLACE can be used to control behavior.
Other options:
USE SCHEMA switches to an existing schema; it does not create one.
USE SCH is invalid because SCH is not a Snowflake keyword.
CREATE SCH is invalid; SCH is not a valid abbreviation.
Thus, the only valid schema creation command is CREATE SCHEMA.
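A short sketch combining the creation command with the optional modifier mentioned above (the schema name is a placeholder):
-- Create the schema only if it does not already exist
CREATE SCHEMA IF NOT EXISTS sales_schema;
-- USE SCHEMA only switches the session context; it does not create anything
USE SCHEMA sales_schema;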
Question #38
In the Query Profile, what does the Pruning section provide?
- A . Information on how Snowflake removed rows from the query results.
- B . Information on how Snowflake removed columns from the query results.
- C . Information on how Snowflake removed objects from the query plan.
- D . Information on how Snowflake removed micro-partitions from the query scan.
Correct Answer: D
Explanation:
The Pruning section of the Snowsight Query Profile shows how Snowflake eliminated unnecessary micro-partitions from the scan phase of the query. Snowflake stores data in micro-partitions and maintains metadata such as min/max values for each column within each partition. When a query includes filters (e.g., WHERE clauses), Snowflake evaluates this metadata to determine which micro-partitions cannot possibly satisfy the predicate. These partitions are skipped, meaning they are never scanned or read from storage.
This process drastically improves performance because Snowflake minimizes I/O, reduces compute usage, and shortens execution time. Partition pruning is especially impactful on large tables because only a fraction of the stored micro-partitions typically need to be accessed.
The Pruning section does not show removed rows; that happens during the filter step. It does not show removed columns; column pruning is handled separately by the optimizer. It also does not show removed objects from the plan. Its sole purpose is to document micro-partition elimination and scan reduction.
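As an illustration (the table and column names are hypothetical), a selective filter is what allows micro-partitions to be skipped:
-- Per-partition min/max metadata lets Snowflake skip micro-partitions whose date range cannot match the predicate
SELECT order_id, amount
FROM orders
WHERE order_date = '2024-06-01';
-- In the Query Profile, compare "Partitions scanned" with "Partitions total" to see the pruning effect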
Question #39
Which of the following are date and time data types in Snowflake? (Choose any 3 options)
- A . TIMEDATE
- B . TIMESTAMP
- C . DATE
- D . DATETIME
Correct Answer: B, C, D
Explanation:
Snowflake provides multiple date and time data types to handle various temporal workloads. The DATE type stores only the calendar date (year, month, day), suitable for dimensional modeling, slowly changing dimensions, and calendar-related analytics.
TIMESTAMP represents a point in time and includes sub-second precision. Snowflake supports multiple variants:
TIMESTAMP_NTZ (no time zone)
TIMESTAMP_LTZ (local time zone)
TIMESTAMP_TZ (explicit time zone)
These options allow flexibility for global operations, event logs, and time-series analytics.
DATETIME is an alias for TIMESTAMP_NTZ, meaning it stores both date and time but without time-zone awareness. It is commonly used in ETL, application logs, and system-generated events.
TIMEDATE is not a valid Snowflake data type and does not exist in Snowflake’s type system.
These temporal types support extensive built-in datetime functions, automatic casting, and integration with semi-structured data through VARIANT.
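A small sketch of these types in a table definition (the table and column names are made up):
CREATE TABLE IF NOT EXISTS events (
    event_date  DATE,           -- calendar date only
    created_at  TIMESTAMP_NTZ,  -- date and time, no time zone
    received_at DATETIME,       -- alias for TIMESTAMP_NTZ
    logged_at   TIMESTAMP_TZ    -- date and time with explicit time zone
);
INSERT INTO events VALUES
  ('2024-06-01', '2024-06-01 08:30:00', '2024-06-01 08:30:05', '2024-06-01 08:30:05 +02:00');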
Question #40
What is the purpose of a role hierarchy in Snowflake?
- A . To define the sequence of SQL queries
- B . To organize roles and grant inherited privileges
- C . To manage network settings
- D . To store raw data
Correct Answer: B
Explanation:
Role hierarchy in Snowflake allows one role to inherit the privileges of another. By granting roles to other roles, Snowflake enables scalable, maintainable access control. Higher-level roles grant privileges downward, allowing administrators to create layered access structures. This hierarchy simplifies permission management across teams and environments. It has no relation to SQL sequencing, network settings, or data storage.
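A minimal sketch of building a hierarchy (the custom role names are hypothetical); granting one role to another passes its privileges to the grantee role:
CREATE ROLE IF NOT EXISTS analyst;
CREATE ROLE IF NOT EXISTS analytics_manager;
-- analytics_manager now inherits every privilege granted to analyst
GRANT ROLE analyst TO ROLE analytics_manager;
-- Roll the custom hierarchy up to SYSADMIN, per Snowflake's recommended practice
GRANT ROLE analytics_manager TO ROLE SYSADMIN;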
