Practice Free SOL-C01 Exam Online Questions
What tasks can be performed using Snowflake Cortex AI? (Select TWO).
- A . Simplify unstructured data workflows.
- B . Share data through the Snowflake Marketplace.
- C . Load semi-structured data.
- D . Extract and classify text.
- E . Enhanced data security.
A, D
Explanation:
Snowflake Cortex AI provides built-in AI functions and tools designed to work natively with unstructured and structured data. Two key capabilities are:
• Extract and classify text using functions such as PARSE_DOCUMENT, EXTRACT_ANSWER, and CLASSIFY_TEXT. Cortex can process documents, identify relevant fields, and convert unstructured content into usable structured formats.
• Simplify unstructured data workflows by combining document extraction, vector search, summarization, and AI reasoning tools (e.g., Cortex Analyst, Cortex Search) directly inside Snowflake without external services.
Cortex does not provide Marketplace data sharing, which belongs to Snowflake's data sharing platform. Loading semi-structured data is a core Snowflake capability built on VARIANT and COPY INTO, not a Cortex feature. Enhanced data security is a platform-wide concern, not a Cortex function.
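As a rough sketch of both capabilities (the sample text, category labels, stage name, and file path below are hypothetical), these functions can be called directly from SQL:
SELECT SNOWFLAKE.CORTEX.CLASSIFY_TEXT('The invoice total is overdue by 30 days.', ['billing', 'shipping', 'support']);
SELECT SNOWFLAKE.CORTEX.PARSE_DOCUMENT(@doc_stage, 'contracts/agreement.pdf', {'mode': 'LAYOUT'});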
Snowflake stores semi-structured data in a column of which data type?
- A . TEXT
- B . VARCHAR
- C . STRING
- D . VARIANT
D
Explanation:
The VARIANT data type stores semi-structured data (JSON, Parquet, XML, Avro, ORC).
VARIANT allows schema-on-read and supports nested structures.
TEXT, STRING, and VARCHAR are synonyms for the same character type in Snowflake; they store plain text only and cannot natively represent or efficiently query nested structures.
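For illustration (the table and JSON field names here are invented), a VARIANT column can hold parsed JSON and be queried with path notation plus a cast:
CREATE TABLE raw_events (payload VARIANT);
INSERT INTO raw_events SELECT PARSE_JSON('{"user": {"id": 42, "name": "Ada"}}');
SELECT payload:user.name::STRING AS user_name FROM raw_events;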
Which of the following Snowflake functions is used to generate pre-signed URLs for accessing files in a stage?
- A . GET_STAGE_LOCATION()
- B . BUILD_STAGE_FILE_URL()
- C . GET_PRESIGNED_URL()
- D . GET_RELATIVE_PATH()
C
Explanation:
GET_PRESIGNED_URL() is the Snowflake function used to create a temporary, secure, time-limited pre-signed URL that provides controlled access to a file stored in a Snowflake internal or external stage. Pre-signed URLs enable secure file sharing without exposing credentials or granting direct stage access to external users. The function accepts the stage name and file path, and optionally an expiration period.
GET_STAGE_LOCATION() retrieves metadata about where a stage points, not access URLs. BUILD_STAGE_FILE_URL() constructs a file URL but does not sign it, so it cannot authorize secure download access on its own. GET_RELATIVE_PATH() returns path information only, not a secure download link. GET_PRESIGNED_URL therefore underpins secure distribution workflows, data export pipelines, partner integrations, and temporary file sharing scenarios.
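A minimal sketch (the stage name and file path are placeholders); the third argument is the expiration time in seconds:
SELECT GET_PRESIGNED_URL(@my_stage, 'reports/summary.pdf', 3600);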
How do you enable a Directory Table for an external stage in Snowflake?
- A . By default, Directory Tables are enabled
- B . Using the command CREATE DIRECTORY TABLE …
- C . By setting the ENABLE_DIRECTORY_TABLE property to TRUE on the external stage using ALTER STAGE … SET ENABLE_DIRECTORY_TABLE = TRUE
- D . By creating a view on the external stage
C
Explanation:
Directory tables allow Snowflake to expose metadata about files stored in an external stage, enabling users to query file listings using SQL. To enable a directory table, the correct approach is to update an existing external stage using:
ALTER STAGE my_stage SET DIRECTORY = (ENABLE = TRUE);
This enables automatic synchronization of file metadata from cloud providers such as AWS S3, Azure Blob, or Google Cloud Storage.
Incorrect options:
• Directory tables are not enabled by default.
• There is no CREATE DIRECTORY TABLE command in Snowflake.
• Creating a view cannot enable directory table functionality.
Once enabled, users can query the directory table:
SELECT * FROM DIRECTORY(@my_stage);
This returns file names, sizes, MD5 hashes, and last-modified timestamps.
Which of the following types of data can be found on Snowflake Marketplace? (Choose any 3 options)
- A . Public data sets
- B . Financial market data
- C . Proprietary data sets
- D . Third-party data sets
A, B, D
Explanation:
Snowflake Marketplace serves as a centralized catalog where providers publish ready-to-use datasets, models, and applications across industries. It includes:
• Public datasets, such as demographics, weather, healthcare statistics, and government data.
• Financial market datasets, including equities, commodities, macroeconomic indicators, and pricing feeds from major financial data vendors.
• Third-party datasets supplied by external organizations specializing in geospatial intelligence, retail analytics, marketing insights, healthcare, census data, and more.
“Proprietary datasets” are internal to organizations and typically not published on the public Marketplace unless the owner chooses to list them. Marketplace listings are curated and governed.
What syntax will enable the use of a Python string variable named myvar in a SQL cell within a Snowflake Notebook?
- A . $myvar
- B . ‘myvar’
- C . myvar
- D . {{myvar}}
D
Explanation:
Snowflake Notebooks support cross-cell interaction between Python and SQL by using Jinja-style templating syntax. To reference a Python variable inside a SQL cell, you wrap the variable name in double curly braces, like {{myvar}}. During execution, the Notebook engine substitutes the Python variable’s value into the SQL statement before sending it to Snowflake.
This mechanism allows dynamic SQL generation, parameterization of queries, incorporating Python logic into SQL workflows, and building interactive analytics pipelines.
Other provided options are invalid in Snowflake Notebooks: $myvar is Snowflake's syntax for SQL session variables, not a reference to a Python variable; ‘myvar’ inserts a literal string rather than the variable’s value; using myvar alone would cause SQL to interpret it as a column or object name.
Therefore, only {{myvar}} correctly represents Snowflake Notebook variable substitution syntax.
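As a sketch (the variable value and table name are hypothetical), a Python cell might assign myvar = 'CUSTOMERS', and a later SQL cell can then reference it:
SELECT * FROM {{myvar}} LIMIT 10;
-- substituted at run time as: SELECT * FROM CUSTOMERS LIMIT 10;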
What does "warehouse scaling up/down" refer to in Snowflake?
- A . Moving data between different storage locations
- B . Changing the region of the warehouse
- C . Changing the size of the warehouse (e.g., from Small to Medium or Vice Versa).
- D . Adjusting the number of clusters in a multi-cluster warehouse.
C
Explanation:
Scaling up or down refers to vertical scaling, meaning the warehouse’s compute size is increased or decreased. For example, moving from Small → Medium → Large increases CPU, memory, and I/O capacity, enabling faster processing for compute-intensive workloads.
Vertical scaling improves performance for single large queries, heavy ETL jobs, complex joins, and transformations. It does not improve concurrency unless multi-cluster mode is also used.
Horizontal scaling (scaling out/in), by contrast, adjusts the number of clusters and is used for concurrency.
Region selection is fixed at account creation and cannot be changed by resizing a warehouse. Storage movement is unrelated to compute rescaling.
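For example (the warehouse name is a placeholder), resizing is a single ALTER statement; the new size applies to queries that start after the change:
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'MEDIUM';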
What is the Snowsight Query Profile used for?
- A . To execute SQL queries
- B . To create new database objects
- C . To manage data loading processes
- D . To visualize and analyze query performance
D
Explanation:
The Snowsight Query Profile is a powerful diagnostic tool that provides a visual breakdown of how Snowflake executed a query. Its primary purpose is to help users visualize and analyze query performance. It displays execution steps, including scan operations, join strategies, pruning results, aggregation methods, and data movement between processing nodes.
The profile shows metrics such as execution time per step, partition pruning effectiveness, bytes scanned, and operator relationships. This allows developers, analysts, and DBAs to identify bottlenecks (such as unnecessary full-table scans, non-selective filters, or inefficient joins) and tune SQL accordingly.
Query Profile does not execute queries; execution happens in worksheets or programmatic interfaces. It does not create objects or manage data loading; those tasks involve separate SQL commands and UI interfaces.
Overall, Query Profile is essential for performance tuning, helping teams reduce compute costs, optimize warehouse sizing, and improve query efficiency.
How are privileges inherited in the Snowflake role hierarchy?
- A . Privileges are only inherited by direct child roles in the hierarchy.
- B . Privileges are inherited by any roles at the same level in the hierarchy.
- C . Privileges are inherited by any roles above a given role in the hierarchy.
- D . Privileges are only inherited by the direct parent role in the hierarchy.
C
Explanation:
Snowflake uses a hierarchical Role-Based Access Control (RBAC) model. When Role A is granted to Role B, the higher-level role (Role B) inherits all privileges of the lower-level role (Role A). This upward inheritance allows broad administrative roles (such as SYSADMIN) to automatically inherit permissions granted to subordinate roles.
Key principle: privilege inheritance flows upward, not downward.
• Parent roles inherit from child roles.
• Child roles do not inherit from parent roles.
• Sibling roles do not share privileges unless explicitly granted.
Thus, roles above a given role in the hierarchy inherit privileges, making option C correct.
Options A and D incorrectly describe downward inheritance, which Snowflake does not perform.
Option B (same-level inheritance) also does not occur unless privileges are explicitly granted.
This hierarchical model simplifies governance by allowing higher-level administrative roles to centrally manage access across multiple subordinate roles.
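A small sketch of upward inheritance (the role, database, and table names are illustrative):
GRANT SELECT ON TABLE sales_db.public.orders TO ROLE analyst;
GRANT ROLE analyst TO ROLE sysadmin;
-- SYSADMIN can now read the table through ANALYST; ANALYST gains nothing from SYSADMIN.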
What is the purpose of the COMPLETE function in Snowflake Cortex?
- A . To parse documents
- B . To translate languages
- C . To classify text
- D . To generate text completions based on a given prompt
D
Explanation:
The COMPLETE function enables natural language generation within Snowflake Cortex. It accepts a prompt and produces a text-based completion using an underlying large language model (LLM). COMPLETE can generate single-turn or multi-turn responses, depending on how the prompt is structured. It is used for tasks such as drafting summaries, generating recommendations, writing structured output such as JSON, or extending existing text. It is not a classification tool (that role is served by CLASSIFY_TEXT), and it does not perform linguistic translation (TRANSLATE) or document extraction (PARSE_DOCUMENT). COMPLETE allows developers and analysts to embed AI-driven generative capabilities directly within SQL pipelines or Snowpark processes, eliminating the need for external LLM services.
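A minimal sketch (the model name is one of several Cortex offers, and availability varies by region):
SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-large', 'Write a one-sentence summary of what Snowflake Cortex provides.');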
