Practice Free ARA-C01 Exam Online Questions
A user has the appropriate privilege to see unmasked data in a column.
If the user loads this column data into another column that does not have a masking policy, what will occur?
- A . Unmasked data will be loaded in the new column.
- B . Masked data will be loaded into the new column.
- C . Unmasked data will be loaded into the new column but only users with the appropriate privileges will be able to see the unmasked data.
- D . Unmasked data will be loaded into the new column and no users will be able to see the unmasked data.
A
Explanation:
According to the SnowPro Advanced: Architect documents and learning resources, column masking policies are applied at query time based on the privileges of the user who runs the query. Therefore, if a user has the privilege to see unmasked data in a column, they will see the original data when they query that column. If they load this column data into another column that does not have a masking policy, the unmasked data will be loaded in the new column, and any user who can query the new column will see the unmasked data as well. The masking policy does not affect the underlying data in the column, only the query results.
Reference: Snowflake Documentation: Column Masking
Snowflake Learning: Column Masking
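A minimal sketch of this behavior (the policy, table, column, and role names are hypothetical):

```sql
-- Hypothetical masking policy: the PAYROLL role sees full values, others see a mask
CREATE MASKING POLICY ssn_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'PAYROLL' THEN val
    ELSE '***-**-****'
  END;

ALTER TABLE employees MODIFY COLUMN ssn SET MASKING POLICY ssn_mask;

-- Run by a user with the PAYROLL role: the policy resolves to the unmasked
-- values at query time, so the unmasked values are written to the new table
CREATE TABLE employees_copy AS SELECT ssn FROM employees;

-- employees_copy.ssn carries no masking policy, so any user who can query
-- the table now sees the unmasked values
SELECT ssn FROM employees_copy;
```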
Which Snowflake data modeling approach is designed for BI queries?
- A . 3NF
- B . Star schema
- C . Data Vault
- D . Snowflake schema
B
Explanation:
A star schema is a Snowflake data modeling approach that is designed for BI queries. A star schema is a type of dimensional modeling that organizes data into fact tables and dimension tables. A fact table contains the measures or metrics of the business process, such as sales amount, order quantity, or profit margin. A dimension table contains the attributes or descriptors of the business process, such as product name, customer name, or order date. A star schema is so called because it resembles a star, with one fact table in the center and multiple dimension tables radiating from it. A star schema can improve the performance and simplicity of BI queries by reducing the number of joins, providing fast access to aggregated data, and enabling intuitive query syntax. A star schema can also support various types of analysis, such as trend analysis, slice and dice, drill down, and roll up.
Reference: Snowflake Documentation: Dimensional Modeling
Snowflake Documentation: Star Schema
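A minimal star schema sketch with a typical BI query (all table and column names are hypothetical):

```sql
-- Dimension tables: descriptive attributes of the business process
CREATE TABLE dim_product (product_id INT, product_name STRING, category STRING);
CREATE TABLE dim_date    (date_id INT, order_date DATE, year INT, month INT);

-- Fact table at the center, keyed to the dimensions and holding the measures
CREATE TABLE fact_sales (
  product_id   INT,
  date_id      INT,
  sales_amount NUMBER(12,2),
  order_qty    INT
);

-- Typical BI aggregation: a single level of joins radiating from the fact table
SELECT d.year, d.month, p.category, SUM(f.sales_amount) AS total_sales
FROM fact_sales f
JOIN dim_date    d ON f.date_id    = d.date_id
JOIN dim_product p ON f.product_id = p.product_id
GROUP BY d.year, d.month, p.category;
```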
What is a characteristic of event notifications in Snowpipe?
- A . The load history is stored in the metadata of the target table.
- B . Notifications identify the cloud storage event and the actual data in the files.
- C . Snowflake can process all older notifications when a paused pipe is resumed.
- D . When a pipe is paused, event messages received for the pipe enter a limited retention period.
D
Explanation:
Event notifications in Snowpipe are messages sent by cloud storage providers to notify Snowflake of new or modified files in a stage. Snowpipe uses these notifications to trigger data loading from the stage to the target table. When a pipe is paused, event messages received for the pipe enter a limited retention period, which varies depending on the cloud storage provider. If the pipe is not resumed within the retention period, the event messages will be discarded and the data will not be loaded automatically. To load the data, the pipe must be resumed and the COPY command must be executed manually. This is a characteristic of event notifications in Snowpipe that distinguishes them from other options.
Reference: Snowflake Documentation: Using Snowpipe, Snowflake Documentation: Pausing and Resuming a Pipe
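A sketch of the pause/resume lifecycle (the pipe, stage, and table names are hypothetical); if the pipe stays paused past the notification retention period, the missed files have to be loaded manually:

```sql
-- Hypothetical auto-ingest pipe driven by cloud storage event notifications
CREATE PIPE sales_pipe AUTO_INGEST = TRUE AS
  COPY INTO sales_raw FROM @sales_stage FILE_FORMAT = (TYPE = CSV);

-- While the pipe is paused, incoming event messages have limited retention
ALTER PIPE sales_pipe SET PIPE_EXECUTION_PAUSED = TRUE;

-- Resuming within the retention period lets Snowpipe process queued messages
ALTER PIPE sales_pipe SET PIPE_EXECUTION_PAUSED = FALSE;

-- For notifications that expired while the pipe was paused, load the files manually
COPY INTO sales_raw FROM @sales_stage FILE_FORMAT = (TYPE = CSV);
```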
A company is trying to ingest 10 TB of CSV data into a Snowflake table using Snowpipe as part of its migration from a legacy database platform. The records need to be ingested in the MOST performant and cost-effective way.
How can these requirements be met?
- A . Use ON_ERROR = CONTINUE in the COPY INTO command.
- B . Use PURGE = TRUE in the COPY INTO command.
- C . Use PURGE = FALSE in the COPY INTO command.
- D . Use ON_ERROR = SKIP_FILE in the COPY INTO command.
D
Explanation:
For ingesting a large volume of CSV data into Snowflake using Snowpipe, especially a volume as large as 10 TB, the ON_ERROR = SKIP_FILE option in the COPY INTO command is highly effective. This approach allows Snowpipe to skip over files that cause errors during the ingestion process, rather than halting or significantly slowing down the overall data load. It helps maintain performance and cost-effectiveness by avoiding the reprocessing of problematic files and continuing with the ingestion of the remaining data.
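A sketch of the pipe definition this implies (the pipe, table, and stage names are hypothetical):

```sql
-- Snowpipe skips any file containing bad records instead of stalling the load;
-- ON_ERROR = SKIP_FILE is also Snowpipe's default error behavior
CREATE PIPE legacy_migration_pipe AUTO_INGEST = TRUE AS
  COPY INTO migrated_table
  FROM @legacy_csv_stage
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
  ON_ERROR = SKIP_FILE;
```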
An Architect has been asked to clone schema STAGING as it looked one week ago, Tuesday June 1st at 8:00 AM, to recover some objects.
The STAGING schema has 50 days of retention.
The Architect runs the following statement:
CREATE SCHEMA STAGING_CLONE CLONE STAGING at (timestamp => '2021-06-01 08:00:00');
The Architect receives the following error:

Time travel data is not available for schema STAGING. The requested time is either beyond the allowed time travel period or before the object creation time.

The Architect then checks the schema history and sees the following:

| CREATED_ON | NAME | DROPPED_ON |
| --- | --- | --- |
| 2021-06-02 23:00:00 | STAGING | NULL |
| 2021-05-01 10:00:00 | STAGING | 2021-06-02 23:00:00 |
How can cloning the STAGING schema be achieved?
- A . Undrop the STAGING schema and then rerun the CLONE statement.
- B . Modify the statement: CREATE SCHEMA STAGING_CLONE CLONE STAGING at (timestamp => '2021-05-01 10:00:00');
- C . Rename the STAGING schema and perform an UNDROP to retrieve the previous STAGING schema version, then run the CLONE statement.
- D . Cloning cannot be accomplished because the STAGING schema version was not active during the proposed Time Travel time period.
C
Explanation:
The error message indicates that the schema STAGING does not have time travel data available for the requested timestamp, because the current version of the schema was created on 2021-06-02 23:00:00, which is after the timestamp of 2021-06-01 08:00:00. Therefore, the CLONE statement cannot access the historical data of the schema at that point in time.
Option A is incorrect, because the UNDROP command will fail while a schema named STAGING already exists; the current STAGING schema must be renamed out of the way before the dropped version can be restored.
Option B is incorrect, because changing the timestamp to 2021-05-01 10:00:00 does not clone the schema as it looked one week ago, but as it looked at the moment it was first created, which may not reflect the desired state of the schema and its objects.
Option C is correct, because renaming the STAGING schema and performing an UNDROP will restore the schema version that was dropped on 2021-06-02 23:00:00. That version has Time Travel data available for the requested timestamp of 2021-06-01 08:00:00, and it can then be cloned using the CLONE statement.
Option D is incorrect, because cloning can be accomplished by using the UNDROP command to access the previous version of the schema that was active during the proposed time travel period.
Reference: Snowflake Documentation: Cloning Considerations, Understanding & Using Time Travel, CREATE <object> … CLONE
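The recovery sequence from option C, sketched in SQL (the temporary schema name is arbitrary):

```sql
-- Move the current STAGING schema (created 2021-06-02 23:00:00) out of the way
ALTER SCHEMA STAGING RENAME TO STAGING_CURRENT;

-- Restore the dropped version (created 2021-05-01, dropped 2021-06-02), whose
-- Time Travel history covers 2021-06-01 08:00:00
UNDROP SCHEMA STAGING;

-- The clone now succeeds against the restored version
CREATE SCHEMA STAGING_CLONE CLONE STAGING AT (TIMESTAMP => '2021-06-01 08:00:00'::TIMESTAMP);
```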
An Architect needs to design a Snowflake account and database strategy to store and analyze large amounts of structured and semi-structured data. There are many business units and departments within the company. The requirements are scalability, security, and cost efficiency.
What design should be used?
- A . Create a single Snowflake account and database for all data storage and analysis needs, regardless of data volume or complexity.
- B . Set up separate Snowflake accounts and databases for each department or business unit, to ensure data isolation and security.
- C . Use Snowflake’s data lake functionality to store and analyze all data in a central location, without the need for structured schemas or indexes
- D . Use a centralized Snowflake database for core business data, and use separate databases for departmental or project-specific data.
D
Explanation:
The best design to store and analyze large amounts of structured and semi-structured data for different business units and departments is to use a centralized Snowflake database for core business data, and use separate databases for departmental or project-specific data. This design allows for scalability, security, and cost efficiency by leveraging Snowflake’s features such as:
- Database cloning: Cloning a database creates a zero-copy clone that shares the same data files as the original database, but can be modified independently. This reduces storage costs and enables fast and consistent data replication for different purposes.
- Database sharing: Sharing a database allows granting secure and governed access to a subset of data in a database to other Snowflake accounts or consumers. This enables data collaboration and monetization across different business units or external partners.
- Warehouse scaling: Scaling a warehouse allows adjusting the size and concurrency of a warehouse to match the performance and cost requirements of different workloads. This enables optimal resource utilization and flexibility for different data analysis needs. A sketch of the overall layout follows the references below.
Reference: Snowflake Documentation: Database Cloning, Snowflake Documentation: Database Sharing, Snowflake Documentation: Warehouse Scaling
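A sketch of what this layout might look like (all database and warehouse names are hypothetical):

```sql
-- Centralized database for core, company-wide business data
CREATE DATABASE core_business;

-- Separate databases for departmental or project-specific data
CREATE DATABASE dept_marketing;
CREATE DATABASE dept_finance;

-- Zero-copy clone of the core data for a project sandbox; no additional
-- storage is consumed until the clone diverges from the original
CREATE DATABASE project_sandbox CLONE core_business;

-- Independently sized warehouses let each workload scale and suspend on its own
CREATE WAREHOUSE bi_wh  WITH WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 300;
CREATE WAREHOUSE etl_wh WITH WAREHOUSE_SIZE = 'XLARGE' AUTO_SUSPEND = 60;
```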
